Redis out-of-memory problem

After the development machine's memory was increased to 24 GB, Redis failed to start with the following error:

L480@luo-zip MINGW64 /d/develop/tools/redis64-2.8.12
$ ./redis-server.exe
[13308] 12 Dec 15:13:41.994 #
The Windows version of Redis allocates a memory mapped heap for sharing with
the forked process used for persistence operations. In order to share this
memory, Windows allocates from the system paging file a portion equal to the
size of the Redis heap. At this time there is insufficient contiguous free
space available in the system paging file for this operation (Windows error
0x5AF). To work around this you may either increase the size of the system
paging file, or decrease the size of the Redis heap with the --maxheap flag.
Sometimes a reboot will defragment the system paging file sufficiently for
this operation to complete successfully.

Please see the documentation included with the binary distributions for more
details on the --maxheap flag.

Redis can not continue. Exiting.

The fix is to add the --maxheap flag when starting Redis, for example:

./redis-server.exe --maxheap 10240000
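If you prefer a setting that survives restarts, a hedged alternative (based on the redis.windows.conf bundled with the Windows port; the value here simply reuses the one above) is to put the limit in the config file and start Redis with it:

maxheap 10240000

./redis-server.exe redis.windows.conf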

 

 

Spring Cloud Ribbon cannot find the corresponding service

Problem analysis:

Ribbon was configured as the load balancer, and two microservices were created: api and admin. When calling the microservices through RestTemplate, a problem appeared: a request intended for the api service would end up being routed to the admin service.

 

After tracing through the code for a whole morning, the cause turned out to be the load-balancing rule; I was almost ready to just switch to nginx.

The fix is to comment out the custom IRule bean:

    /**
     * RoundRobinRule: round-robin
     * RandomRule: random
     * AvailabilityFilteringRule: first filters out servers that are in a circuit-breaker tripped state
     * because of repeated failures, as well as servers whose concurrent connection count exceeds the
     * threshold, then round-robins over the remaining servers;
     * WeightedResponseTimeRule: weights each server by its average response time; the faster the response,
     * the higher the weight and the more likely the server is chosen. Right after startup, while statistics
     * are insufficient, it falls back to RoundRobinRule and switches over once enough data is available;
     * RetryRule: picks a server using the RoundRobinRule strategy first, and retries within a configured
     * time window if that fails, until an available server is found;
     * BestAvailableRule: first filters out servers in a circuit-breaker tripped state due to repeated
     * failures, then picks the server with the fewest concurrent requests;
     * ZoneAvoidanceRule: the default rule; picks a server by combining the performance of the server's
     * zone with the availability of the server itself;
     *
     * @return
     */
//    @Bean
//    public IRule ribbonRule() {
//        return new RoundRobinRule();
//    }
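If a custom rule really is needed, a minimal sketch (assuming Spring Cloud Netflix Ribbon; the client name "api" and the class names are illustrative) is to scope the rule to a single client with @RibbonClient, and to keep the rule configuration class outside the main component-scan path so it does not become the default for every client:

import com.netflix.loadbalancer.IRule;
import com.netflix.loadbalancer.RoundRobinRule;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

// Rule configuration scoped to one Ribbon client.
// Keep this class outside the package scanned by @SpringBootApplication,
// otherwise it is picked up as the default for every client.
@Configuration
public class ApiRibbonConfig {

    @Bean
    public IRule ribbonRule() {
        return new RoundRobinRule();
    }
}

// On the main application class (or another configuration class):
// @RibbonClient(name = "api", configuration = ApiRibbonConfig.class)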

 

 

 

MyBatis Generator issues

MySQL usage notes

Unsigned Fields

MySQL supports both signed and unsigned fields. These are not JDBC types, so MyBatis Generator cannot account for them automatically; Java's numeric types are always signed, which can lead to inaccurate data types when unsigned columns are used. The solution is to supply a <columnOverride> for any unsigned field in MySQL. Here is an example of how to handle an unsigned bigint field:

 <table tableName="ALLTYPES" >
    <columnOverride column="UNSIGNED_BIGINT_FIELD" javaType="java.lang.Object" jdbcType="LONG" />
 </table>

* You must cast the returned value to the appropriate type yourself (java.math.BigInteger in this example).
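For illustration only, a hedged sketch of what that cast looks like on a generated model object (the variable and getter names are hypothetical and depend on the table and column names):

// The generated getter is typed as Object because of the columnOverride above;
// at runtime the MySQL driver actually supplies a java.math.BigInteger.
java.math.BigInteger value = (java.math.BigInteger) allTypes.getUnsignedBigintField();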

Catalogs and Schema

MySQL does not support SQL catalogs and schemas in the standard way. If you run a create schema command in MySQL, it actually creates a database, which the JDBC driver reports back as a catalog. However, MySQL syntax does not support the standard catalog..table SQL syntax.

For this reason, it is best not to specify a catalog or schema in the generator configuration. Just specify the table name, and specify the database in the JDBC connection URL.

If you are using version 8.x of Connector/J, you will notice that the generator also generates code for MySQL's built-in schemas (sys, information_schema, performance_schema, etc.). This is probably not what you want; to disable this behavior, set the property "nullCatalogMeansCurrent=true" on the JDBC connection.

For example:

 <jdbcConnection driverClass="com.mysql.jdbc.Driver" connectionURL="jdbc:mysql://localhost/my_schema"
            userId="my_user" password="my_password">
      <property name="nullCatalogMeansCurrent" value="true" />
  </jdbcConnection>

Below is the XML configuration I used to generate code for the test database on MySQL 8.0:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE generatorConfiguration
   PUBLIC "-//mybatis.org//DTD MyBatis Generator Configuration 1.0//EN"
   "http://mybatis.org/dtd/mybatis-generator-config_1_0.dtd">
 <generatorConfiguration>
     <!-- Database driver JAR -->
     <classPathEntry location="mysql-connector-java-8.0.18.jar"/>
     <context id="shangdaowuenlu" targetRuntime="MyBatis3">
        <commentGenerator>
            <property name="suppressDate" value="true"/>
            <property name="suppressAllComments" value="true"/>
         </commentGenerator>
         <!-- Database connection URL, user and password -->
         <jdbcConnection 
         driverClass="com.mysql.cj.jdbc.Driver" connectionURL="jdbc:mysql://127.0.0.1:3306/test?serverTimezone=UTC"
         userId="test" 
         password="test">
           <property name="nullCatalogMeansCurrent" value="true"/>
         </jdbcConnection>
   
         <javaTypeResolver>
            <property name="forceBigDecimals" value="false"/>
         </javaTypeResolver>
       
         <!-- Where the generated model classes go -->
         <javaModelGenerator targetPackage="com.ikonke.mapper.repository" targetProject="d:\develop\tools\mybatisGenerator">
             <property name="enableSubPackages" value="true"/>
             <property name="trimStrings" value="true"/>
         </javaModelGenerator>
       
         <!-- Where the generated mapper XML files go -->
         <sqlMapGenerator targetPackage="com.ikonke.mapper.repository.mapper" targetProject="d:\develop\tools\mybatisGenerator">
            <property name="enableSubPackages" value="true"/>
        </sqlMapGenerator>
      
         <!-- Where the generated DAO interfaces go -->
         <javaClientGenerator type="XMLMAPPER" targetPackage="com.ikonke.mapper.repository.mapper" targetProject="d:\develop\tools\mybatisGenerator">
             <property name="enableSubPackages" value="true"/>
         </javaClientGenerator>
       
         <!-- Tables to generate and naming options -->
         <table tableName="%"  enableCountByExample="false" enableUpdateByExample="false" enableDeleteByExample="false" enableSelectByExample="false" selectByExampleQueryId="false">
         <property name="useActualColumnNames" value="true"/>
       </table>
    
   </context>
</generatorConfiguration>
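For reference, a config file like this is typically run from the command line roughly as follows (the JAR file name and version are assumptions; adjust them to the artifacts you actually have):

java -jar mybatis-generator-core-1.4.0.jar -configfile generatorConfig.xml -overwrite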

Official documentation:

http://mybatis.org/generator/usage/mysql.html

Common Kafka problems

1 advertised.listeners configuration error on startup:

java.lang.IllegalArgumentException: requirement failed: advertised.listeners cannot use the nonroutable meta-address 0.0.0.0. Use a routable IP address.
    at scala.Predef$.require(Predef.scala:277)
    at kafka.server.KafkaConfig.validateValues(KafkaConfig.scala:1203)
    at kafka.server.KafkaConfig.<init>(KafkaConfig.scala:1170)
    at kafka.server.KafkaConfig$.fromProps(KafkaConfig.scala:881)
    at kafka.server.KafkaConfig$.fromProps(KafkaConfig.scala:878)
    at kafka.server.KafkaServerStartable$.fromProps(KafkaServerStartable.scala:28)
    at kafka.Kafka$.main(Kafka.scala:82)
    at kafka.Kafka.main(Kafka.scala)

1.1 Solution: edit server.properties

advertised.listeners=PLAINTEXT://{ip}:9092  # the ip can be an internal IP, a public IP, 127.0.0.1, or a domain name

1.2 Explanation:

server.properties contains two listener settings. listeners: the IP and port the Kafka broker listens on; it can bind to an internal IP or to 0.0.0.0 (but not to a public IP), and defaults to the address returned by java.net.InetAddress.getCanonicalHostName(). advertised.listeners: the address producers and consumers connect to; Kafka registers this address in ZooKeeper, so it must be a valid IP or domain name other than 0.0.0.0, and it defaults to the value of listeners.
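As a minimal sketch (the address below is a placeholder for your broker's routable IP), the two settings typically look like this in server.properties:

listeners=PLAINTEXT://0.0.0.0:9092
advertised.listeners=PLAINTEXT://192.168.1.100:9092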

2 PrintGCDateStamps error on startup

[0.004s][warning][gc] -Xloggc is deprecated. Will use -Xlog:gc:/data/service/kafka_2.11-0.11.0.2/bin/../logs/kafkaServer-gc.log instead.
Unrecognized VM option 'PrintGCDateStamps'
Error: Could not create the Java Virtual Machine.
Error: A fatal exception has occurred. Program will exit.

2.1 Solution: switch to a JDK 1.8.x release, or use Kafka 1.0.x or later.

2.2 Explanation:

This only occurs when running on JDK 9 with a Kafka version older than 1.0.x (the -XX:+PrintGCDateStamps flag was removed from the JVM in JDK 9).

3 Producer fails to send messages, or consumer cannot consume (Kafka 1.0.1)

# (java) org.apache.kafka warning
Connection to node 0 could not be established. Broker may not be available.


# (nodejs) kafka-node error (thrown after producer.send)
{ TimeoutError: Request timed out after 30000ms
    at new TimeoutError (D:\project\node\kafka-test\src\node_modules\kafka-node\lib\errors\TimeoutError.js:6:9)
    at Timeout.setTimeout [as _onTimeout] (D:\project\node\kafka-test\src\node_modules\kafka-node\lib\kafkaClient.js:737:14)
    at ontimeout (timers.js:466:11)
    at tryOnTimeout (timers.js:304:5)
    at Timer.listOnTimeout (timers.js:264:5) message: 'Request timed out after 30000ms' }

3.1 Solution: check the advertised.listeners configuration (if there are multiple brokers, check the broker whose node id appears in the Java warning), and verify that the address is reachable from the current network (with telnet, etc.).

4 Errors caused by a partitions value that is too small (Kafka 1.0.1)

# (java) org.apache.kafka (thrown by producer.send)
Exception in thread "main" org.apache.kafka.common.KafkaException: Invalid partition given with record: 1 is not in the range [0...1).
    at org.apache.kafka.clients.producer.KafkaProducer.waitOnMetadata(KafkaProducer.java:908)
    at org.apache.kafka.clients.producer.KafkaProducer.doSend(KafkaProducer.java:778)
    at org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:768)
    at com.wenshao.dal.TestProducer.main(TestProducer.java:36)


# (nodejs) kafka-node error (thrown after producer.send)
{ BrokerNotAvailableError: Could not find the leader
    at new BrokerNotAvailableError (D:\project\node\kafka-test\src\node_modules\kafka-node\lib\errors\BrokerNotAvailableError.js:11:9)
    at refreshMetadata.error (D:\project\node\kafka-test\src\node_modules\kafka-node\lib\kafkaClient.js:831:16)
    at D:\project\node\kafka-test\src\node_modules\kafka-node\lib\client.js:514:9
    at KafkaClient.wrappedFn (D:\project\node\kafka-test\src\node_modules\kafka-node\lib\kafkaClient.js:379:14)
    at KafkaClient.Client.handleReceivedData (D:\project\node\kafka-test\src\node_modules\kafka-node\lib\client.js:770:60)
    at Socket.<anonymous> (D:\project\node\kafka-test\src\node_modules\kafka-node\lib\kafkaClient.js:618:10)
    at Socket.emit (events.js:159:13)
    at addChunk (_stream_readable.js:265:12)
    at readableAddChunk (_stream_readable.js:252:11)
    at Socket.Readable.push (_stream_readable.js:209:10) message: 'Could not find the leader' }

4.1 Solution: adjust num.partitions. This value is the number of partitions created by default when a topic is created, and it only affects newly created topics, so try to settle on a sensible value during project planning. Partitions can also be added dynamically from the command line:

./bin/kafka-topics.sh --zookeeper  localhost:2181 --alter --partitions 2 --topic  foo
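To make the Java error above concrete, here is a hedged sketch (topic name, broker address, and serializers are assumptions) of the kind of call that triggers it, namely sending to an explicit partition index the topic does not have:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class TestProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "127.0.0.1:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // If topic "foo" has a single partition (index 0), asking for partition 1
            // fails with "Invalid partition given with record: 1 is not in the range [0...1)".
            producer.send(new ProducerRecord<>("foo", 1, "key", "value"));
        }
    }
}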

 

Spring Boot modular project: how to fix MyBatis not finding the Mapper

Recently, while designing the system architecture, I needed to split the project into functional sub-modules using Spring Cloud and Spring Boot. During the split I ran into the problem of separating out the DAO layer, which was fairly hard to solve; the other issue was Maven packaging, which was comparatively easier.

First I created a DAO sub-module and configured it so it could start on its own, then created an API sub-module that depends on the DAO module in Maven. After configuring application.yml, the following error appeared:

 Invalid bound statement (not found): xxxx.xxMapper.selectByxxx()

There are two causes: the configuration itself, and where the mapper XML files are placed.

  • Configuration
* Add the DAO module to the POM file:
    <dependency>
        <groupId>com.test</groupId>
        <artifactId>dao</artifactId>
        <version>1.0</version>
        <scope>compile</scope>
    </dependency>

* Add to the build section:
    <resource>
        <directory>src/main/resources</directory>
        <includes>
            <include>**/*.*</include>
        </includes>
        <filtering>false</filtering>
    </resource>
    
* Scan the mapper package with the annotation (a full application-class sketch follows after this list):
@MapperScan(value = "com.test.mapper.repository.mapper")

* Configuration items (application.yml):
    spring:
      application:
        name: api-server
      profiles:
        active: test
      thymeleaf:
        cache: false
        prefix: classpath:/templates/
      datasource:
        driver-class-name: com.mysql.cj.jdbc.Driver
        type: org.apache.commons.dbcp2.BasicDataSource
        url: jdbc:mysql://127.0.0.1:3306/test?serverTimezone=UTC&characterEncoding=utf-8
        username: root
        password: root
        dbcp2:
          initial-size: 5
          max-idle: 100
          min-idle: 5
          max-wait-millis: 3000
          test-on-borrow: true
          test-on-return: false
          test-while-idle: true
          validation-query: SELECT 1
          time-between-eviction-runs-millis: 30000
          soft-min-evictable-idle-time-millis: 1800000
          num-tests-per-eviction-run: 3
          remove-abandoned-timeout: 180
          pool-prepared-statements: true
          max-open-prepared-statements: 15
          
    mybatis:
      type-aliases-package: com.test.mapper.repository
      type-handlers-package: com.test.mapper.repository.mapper
      configuration:
        map-underscore-to-camel-case: true
        default-fetch-size: 100
        default-statement-timeout: 30
      mapper-locations: classpath*:mapper/*.xml

* Finally, test it.
  • XML location

The location issue is that the mapper XML files must be placed under src/main/resources; I had been putting them under src/main/java/com/test/repository/src/main/resources, which is why it kept failing.

One more thing to note in the configuration: the classpath*: prefix below makes Spring search every classpath root, including the DAO module's JAR, for the mapper XML files:

mapper-locations: classpath*:mapper/*.xml
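Putting the pieces together, here is a minimal sketch of the API module's startup class (the package and class names are assumptions, not the project's real ones):

package com.test.api;

import org.mybatis.spring.annotation.MapperScan;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
// Points MyBatis at the mapper interfaces that live in the DAO module
@MapperScan("com.test.mapper.repository.mapper")
public class ApiApplication {

    public static void main(String[] args) {
        SpringApplication.run(ApiApplication.class, args);
    }
}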

 
