Aug 13 01:34:29.993754 kernel: Linux version 6.6.100-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Tue Aug 12 21:47:31 -00 2025
Aug 13 01:34:29.993784 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=ca71ea747c3f0d1de8a5ffcd0cfb9d0a1a4c4755719a09093b0248fa3902b433
Aug 13 01:34:29.993794 kernel: BIOS-provided physical RAM map:
Aug 13 01:34:29.993802 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009f7ff] usable
Aug 13 01:34:29.993808 kernel: BIOS-e820: [mem 0x000000000009f800-0x000000000009ffff] reserved
Aug 13 01:34:29.993818 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Aug 13 01:34:29.993825 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable
Aug 13 01:34:29.993831 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved
Aug 13 01:34:29.993840 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Aug 13 01:34:29.993851 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Aug 13 01:34:29.993861 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Aug 13 01:34:29.993871 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Aug 13 01:34:29.993889 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000017fffffff] usable
Aug 13 01:34:29.993897 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Aug 13 01:34:29.993915 kernel: NX (Execute Disable) protection: active
Aug 13 01:34:29.993923 kernel: APIC: Static calls initialized
Aug 13 01:34:29.993929 kernel: SMBIOS 2.8 present.
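Note: the e820 map above is a list of inclusive [start-end] physical ranges tagged "usable" or "reserved". A minimal Python sketch (illustrative only, not part of the boot chain) that totals the usable ranges:

import re

E820 = re.compile(r"BIOS-e820: \[mem 0x([0-9a-f]+)-0x([0-9a-f]+)\] (\w+)")

def usable_bytes(lines):
    """Sum the 'usable' e820 ranges from dmesg-style lines."""
    total = 0
    for line in lines:
        m = E820.search(line)
        if m and m.group(3) == "usable":
            start, end = (int(g, 16) for g in m.group(1, 2))
            total += end - start + 1  # e820 ranges are inclusive
    return total

The three usable ranges above sum to 4,294,297,600 bytes, just under 4 GiB, consistent with the totals the kernel reports later ("Memory: 3964164K/4193772K available").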
Aug 13 01:34:29.993936 kernel: DMI: Linode Compute Instance/Standard PC (Q35 + ICH9, 2009), BIOS Not Specified
Aug 13 01:34:29.993943 kernel: Hypervisor detected: KVM
Aug 13 01:34:29.993953 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Aug 13 01:34:29.993960 kernel: kvm-clock: using sched offset of 9532784330 cycles
Aug 13 01:34:29.993968 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Aug 13 01:34:29.993976 kernel: tsc: Detected 2000.000 MHz processor
Aug 13 01:34:29.993983 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Aug 13 01:34:29.993991 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Aug 13 01:34:29.993997 kernel: last_pfn = 0x180000 max_arch_pfn = 0x400000000
Aug 13 01:34:29.994005 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Aug 13 01:34:29.994012 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Aug 13 01:34:29.994022 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
Aug 13 01:34:29.994029 kernel: Using GB pages for direct mapping
Aug 13 01:34:29.994036 kernel: ACPI: Early table checksum verification disabled
Aug 13 01:34:29.994045 kernel: ACPI: RSDP 0x00000000000F5160 000014 (v00 BOCHS )
Aug 13 01:34:29.994055 kernel: ACPI: RSDT 0x000000007FFE2307 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 01:34:29.994062 kernel: ACPI: FACP 0x000000007FFE20F7 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 01:34:29.994069 kernel: ACPI: DSDT 0x000000007FFE0040 0020B7 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 01:34:29.994076 kernel: ACPI: FACS 0x000000007FFE0000 000040
Aug 13 01:34:29.994083 kernel: ACPI: APIC 0x000000007FFE21EB 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 01:34:29.994095 kernel: ACPI: HPET 0x000000007FFE226B 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 01:34:29.994105 kernel: ACPI: MCFG 0x000000007FFE22A3 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 01:34:29.994117 kernel: ACPI: WAET 0x000000007FFE22DF 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 01:34:29.994132 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe20f7-0x7ffe21ea]
Aug 13 01:34:29.994140 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe20f6]
Aug 13 01:34:29.994147 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Aug 13 01:34:29.994157 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe21eb-0x7ffe226a]
Aug 13 01:34:29.994164 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe226b-0x7ffe22a2]
Aug 13 01:34:29.997201 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe22a3-0x7ffe22de]
Aug 13 01:34:29.997220 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe22df-0x7ffe2306]
Aug 13 01:34:29.997229 kernel: No NUMA configuration found
Aug 13 01:34:29.997237 kernel: Faking a node at [mem 0x0000000000000000-0x000000017fffffff]
Aug 13 01:34:29.997244 kernel: NODE_DATA(0) allocated [mem 0x17fffa000-0x17fffffff]
Aug 13 01:34:29.997252 kernel: Zone ranges:
Aug 13 01:34:29.997265 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Aug 13 01:34:29.997272 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Aug 13 01:34:29.997280 kernel: Normal [mem 0x0000000100000000-0x000000017fffffff]
Aug 13 01:34:29.997287 kernel: Movable zone start for each node
Aug 13 01:34:29.997295 kernel: Early memory node ranges
Aug 13 01:34:29.997302 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Aug 13 01:34:29.997309 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff]
Aug 13 01:34:29.997325 kernel: node 0: [mem 0x0000000100000000-0x000000017fffffff]
Aug 13 01:34:29.997333 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000017fffffff]
Aug 13 01:34:29.997340 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Aug 13 01:34:29.997351 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Aug 13 01:34:29.997359 kernel: On node 0, zone Normal: 35 pages in unavailable ranges
Aug 13 01:34:29.997367 kernel: ACPI: PM-Timer IO Port: 0x608
Aug 13 01:34:29.997374 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Aug 13 01:34:29.997382 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Aug 13 01:34:29.997389 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Aug 13 01:34:29.997396 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Aug 13 01:34:29.997404 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Aug 13 01:34:29.997411 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Aug 13 01:34:29.997422 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Aug 13 01:34:29.997429 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Aug 13 01:34:29.997437 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Aug 13 01:34:29.997444 kernel: TSC deadline timer available
Aug 13 01:34:29.997452 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Aug 13 01:34:29.997459 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Aug 13 01:34:29.997466 kernel: kvm-guest: KVM setup pv remote TLB flush
Aug 13 01:34:29.997474 kernel: kvm-guest: setup PV sched yield
Aug 13 01:34:29.997481 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Aug 13 01:34:29.997491 kernel: Booting paravirtualized kernel on KVM
Aug 13 01:34:29.997499 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Aug 13 01:34:29.997506 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Aug 13 01:34:29.997514 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u1048576
Aug 13 01:34:29.997521 kernel: pcpu-alloc: s197096 r8192 d32280 u1048576 alloc=1*2097152
Aug 13 01:34:29.997528 kernel: pcpu-alloc: [0] 0 1
Aug 13 01:34:29.997536 kernel: kvm-guest: PV spinlocks enabled
Aug 13 01:34:29.997543 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Aug 13 01:34:29.997552 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=ca71ea747c3f0d1de8a5ffcd0cfb9d0a1a4c4755719a09093b0248fa3902b433
Aug 13 01:34:29.997563 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Aug 13 01:34:29.997570 kernel: random: crng init done
Aug 13 01:34:29.997577 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Aug 13 01:34:29.997585 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Aug 13 01:34:29.997592 kernel: Fallback order for Node 0: 0
Aug 13 01:34:29.997600 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1031901
Aug 13 01:34:29.997607 kernel: Policy zone: Normal
Aug 13 01:34:29.997614 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Aug 13 01:34:29.997625 kernel: software IO TLB: area num 2.
Aug 13 01:34:29.997632 kernel: Memory: 3964164K/4193772K available (14336K kernel code, 2295K rwdata, 22872K rodata, 43504K init, 1572K bss, 229348K reserved, 0K cma-reserved)
Aug 13 01:34:29.997640 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Aug 13 01:34:29.997647 kernel: ftrace: allocating 37942 entries in 149 pages
Aug 13 01:34:29.997655 kernel: ftrace: allocated 149 pages with 4 groups
Aug 13 01:34:29.997662 kernel: Dynamic Preempt: voluntary
Aug 13 01:34:29.997670 kernel: rcu: Preemptible hierarchical RCU implementation.
Aug 13 01:34:29.997678 kernel: rcu: RCU event tracing is enabled.
Aug 13 01:34:29.997686 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Aug 13 01:34:29.997696 kernel: Trampoline variant of Tasks RCU enabled.
Aug 13 01:34:29.997704 kernel: Rude variant of Tasks RCU enabled.
Aug 13 01:34:29.997711 kernel: Tracing variant of Tasks RCU enabled.
Aug 13 01:34:29.997719 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Aug 13 01:34:29.997726 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Aug 13 01:34:29.997734 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Aug 13 01:34:29.997741 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Aug 13 01:34:29.997748 kernel: Console: colour VGA+ 80x25
Aug 13 01:34:29.997755 kernel: printk: console [tty0] enabled
Aug 13 01:34:29.997766 kernel: printk: console [ttyS0] enabled
Aug 13 01:34:29.997773 kernel: ACPI: Core revision 20230628
Aug 13 01:34:29.997780 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Aug 13 01:34:29.997788 kernel: APIC: Switch to symmetric I/O mode setup
Aug 13 01:34:29.997804 kernel: x2apic enabled
Aug 13 01:34:29.997815 kernel: APIC: Switched APIC routing to: physical x2apic
Aug 13 01:34:29.997823 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Aug 13 01:34:29.997831 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Aug 13 01:34:29.997838 kernel: kvm-guest: setup PV IPIs
Aug 13 01:34:29.997846 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Aug 13 01:34:29.997854 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Aug 13 01:34:29.997861 kernel: Calibrating delay loop (skipped) preset value.. 4000.00 BogoMIPS (lpj=2000000)
Aug 13 01:34:29.997872 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Aug 13 01:34:29.997880 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Aug 13 01:34:29.997888 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Aug 13 01:34:29.997896 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Aug 13 01:34:29.997906 kernel: Spectre V2 : Mitigation: Retpolines
Aug 13 01:34:29.997914 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Aug 13 01:34:29.997922 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Aug 13 01:34:29.997930 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Aug 13 01:34:29.997937 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Aug 13 01:34:29.997949 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Aug 13 01:34:29.997962 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Aug 13 01:34:29.997974 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Aug 13 01:34:29.997982 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Aug 13 01:34:29.997994 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Aug 13 01:34:29.998001 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Aug 13 01:34:29.998009 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Aug 13 01:34:29.998017 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Aug 13 01:34:29.998025 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Aug 13 01:34:29.998033 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Aug 13 01:34:29.998041 kernel: x86/fpu: xstate_offset[9]: 832, xstate_sizes[9]: 8
Aug 13 01:34:29.998049 kernel: x86/fpu: Enabled xstate features 0x207, context size is 840 bytes, using 'compacted' format.
Aug 13 01:34:29.998059 kernel: Freeing SMP alternatives memory: 32K
Aug 13 01:34:29.998067 kernel: pid_max: default: 32768 minimum: 301
Aug 13 01:34:29.998075 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Aug 13 01:34:29.998083 kernel: landlock: Up and running.
Aug 13 01:34:29.998090 kernel: SELinux: Initializing.
Aug 13 01:34:29.998098 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Aug 13 01:34:29.998106 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Aug 13 01:34:29.998114 kernel: smpboot: CPU0: AMD EPYC 7713 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Aug 13 01:34:29.998122 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Aug 13 01:34:29.998133 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Aug 13 01:34:29.998141 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Aug 13 01:34:29.998148 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Aug 13 01:34:29.998156 kernel: ... version: 0
Aug 13 01:34:29.998164 kernel: ... bit width: 48
Aug 13 01:34:29.998192 kernel: ... generic registers: 6
Aug 13 01:34:29.998211 kernel: ... value mask: 0000ffffffffffff
Aug 13 01:34:29.998236 kernel: ... max period: 00007fffffffffff
Aug 13 01:34:29.998256 kernel: ... fixed-purpose events: 0
Aug 13 01:34:29.998268 kernel: ... event mask: 000000000000003f
Aug 13 01:34:29.998276 kernel: signal: max sigframe size: 3376
Aug 13 01:34:29.998283 kernel: rcu: Hierarchical SRCU implementation.
Aug 13 01:34:29.998291 kernel: rcu: Max phase no-delay instances is 400.
Aug 13 01:34:29.998299 kernel: smp: Bringing up secondary CPUs ...
Aug 13 01:34:29.998307 kernel: smpboot: x86: Booting SMP configuration:
Aug 13 01:34:29.998315 kernel: .... node #0, CPUs: #1
Aug 13 01:34:29.998322 kernel: smp: Brought up 1 node, 2 CPUs
Aug 13 01:34:29.998330 kernel: smpboot: Max logical packages: 1
Aug 13 01:34:29.998338 kernel: smpboot: Total of 2 processors activated (8000.00 BogoMIPS)
Aug 13 01:34:29.998348 kernel: devtmpfs: initialized
Aug 13 01:34:29.998356 kernel: x86/mm: Memory block size: 128MB
Aug 13 01:34:29.998364 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Aug 13 01:34:29.998371 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Aug 13 01:34:29.998378 kernel: pinctrl core: initialized pinctrl subsystem
Aug 13 01:34:29.998386 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Aug 13 01:34:29.998393 kernel: audit: initializing netlink subsys (disabled)
Aug 13 01:34:29.998400 kernel: audit: type=2000 audit(1755048868.362:1): state=initialized audit_enabled=0 res=1
Aug 13 01:34:29.998408 kernel: thermal_sys: Registered thermal governor 'step_wise'
Aug 13 01:34:29.998418 kernel: thermal_sys: Registered thermal governor 'user_space'
Aug 13 01:34:29.998425 kernel: cpuidle: using governor menu
Aug 13 01:34:29.998433 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Aug 13 01:34:29.998440 kernel: dca service started, version 1.12.1
Aug 13 01:34:29.998448 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Aug 13 01:34:29.998455 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Aug 13 01:34:29.998462 kernel: PCI: Using configuration type 1 for base access
Aug 13 01:34:29.998470 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
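Note: the CPU calibration figures above are internally consistent and can be checked by hand. A worked check in Python; HZ=1000 is an assumption about this kernel build, the other values come straight from the log:

HZ = 1000                      # assumed CONFIG_HZ; not stated in the log
tsc_hz = 2_000_000_000         # "tsc: Detected 2000.000 MHz processor"
lpj = tsc_hz // HZ             # loops per jiffy -> 2000000, matches "lpj=2000000"
bogomips = lpj * HZ / 500_000  # -> 4000.0, matches "4000.00 BogoMIPS"
total = 2 * bogomips           # -> 8000.0, matches "Total of 2 processors activated (8000.00 BogoMIPS)"

The "(skipped) preset value" wording is the kernel saying it derived lpj from the known TSC frequency instead of timing a real delay loop.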
Aug 13 01:34:29.998480 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Aug 13 01:34:29.998487 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Aug 13 01:34:29.998495 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Aug 13 01:34:29.998502 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Aug 13 01:34:29.998509 kernel: ACPI: Added _OSI(Module Device)
Aug 13 01:34:29.998516 kernel: ACPI: Added _OSI(Processor Device)
Aug 13 01:34:29.998524 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Aug 13 01:34:29.998531 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Aug 13 01:34:29.998538 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Aug 13 01:34:29.998548 kernel: ACPI: Interpreter enabled
Aug 13 01:34:29.998556 kernel: ACPI: PM: (supports S0 S3 S5)
Aug 13 01:34:29.998563 kernel: ACPI: Using IOAPIC for interrupt routing
Aug 13 01:34:29.998570 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Aug 13 01:34:29.998578 kernel: PCI: Using E820 reservations for host bridge windows
Aug 13 01:34:29.998585 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Aug 13 01:34:29.998592 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Aug 13 01:34:29.998996 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Aug 13 01:34:29.999156 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Aug 13 01:34:30.000405 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Aug 13 01:34:30.000420 kernel: PCI host bridge to bus 0000:00
Aug 13 01:34:30.000760 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Aug 13 01:34:30.000891 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Aug 13 01:34:30.001020 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Aug 13 01:34:30.001146 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Aug 13 01:34:30.001347 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Aug 13 01:34:30.001478 kernel: pci_bus 0000:00: root bus resource [mem 0x180000000-0x97fffffff window]
Aug 13 01:34:30.001607 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Aug 13 01:34:30.001781 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Aug 13 01:34:30.001941 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Aug 13 01:34:30.002083 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Aug 13 01:34:30.004325 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Aug 13 01:34:30.004475 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Aug 13 01:34:30.004614 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Aug 13 01:34:30.004782 kernel: pci 0000:00:02.0: [1af4:1004] type 00 class 0x010000
Aug 13 01:34:30.004923 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc000-0xc03f]
Aug 13 01:34:30.005063 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Aug 13 01:34:30.006246 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Aug 13 01:34:30.006434 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Aug 13 01:34:30.006585 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f]
Aug 13 01:34:30.006724 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Aug 13 01:34:30.006861 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Aug 13 01:34:30.006998 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Aug 13 01:34:30.007151 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Aug 13 01:34:30.008353 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Aug 13 01:34:30.008556 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Aug 13 01:34:30.008699 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0c0-0xc0df]
Aug 13 01:34:30.008838 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd3000-0xfebd3fff]
Aug 13 01:34:30.009005 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Aug 13 01:34:30.009150 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Aug 13 01:34:30.009161 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Aug 13 01:34:30.009170 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Aug 13 01:34:30.010218 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Aug 13 01:34:30.010227 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Aug 13 01:34:30.010235 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Aug 13 01:34:30.010243 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Aug 13 01:34:30.010251 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Aug 13 01:34:30.010259 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Aug 13 01:34:30.010268 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Aug 13 01:34:30.010276 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Aug 13 01:34:30.010284 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Aug 13 01:34:30.010295 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Aug 13 01:34:30.010303 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Aug 13 01:34:30.010311 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Aug 13 01:34:30.010320 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Aug 13 01:34:30.010328 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Aug 13 01:34:30.010336 kernel: iommu: Default domain type: Translated
Aug 13 01:34:30.010344 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Aug 13 01:34:30.010352 kernel: PCI: Using ACPI for IRQ routing
Aug 13 01:34:30.010360 kernel: PCI: pci_cache_line_size set to 64 bytes
Aug 13 01:34:30.010371 kernel: e820: reserve RAM buffer [mem 0x0009f800-0x0009ffff]
Aug 13 01:34:30.010379 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff]
Aug 13 01:34:30.010531 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Aug 13 01:34:30.010670 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Aug 13 01:34:30.010819 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Aug 13 01:34:30.010831 kernel: vgaarb: loaded
Aug 13 01:34:30.010840 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Aug 13 01:34:30.010848 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Aug 13 01:34:30.010856 kernel: clocksource: Switched to clocksource kvm-clock
Aug 13 01:34:30.010912 kernel: VFS: Disk quotas dquot_6.6.0
Aug 13 01:34:30.010926 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Aug 13 01:34:30.010936 kernel: pnp: PnP ACPI init
Aug 13 01:34:30.011110 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Aug 13 01:34:30.011124 kernel: pnp: PnP ACPI: found 5 devices
Aug 13 01:34:30.011132 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Aug 13 01:34:30.011140 kernel: NET: Registered PF_INET protocol family
Aug 13 01:34:30.011149 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Aug 13 01:34:30.011162 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Aug 13 01:34:30.012205 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Aug 13 01:34:30.012222 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Aug 13 01:34:30.012230 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Aug 13 01:34:30.012238 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Aug 13 01:34:30.012246 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Aug 13 01:34:30.012254 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Aug 13 01:34:30.012262 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Aug 13 01:34:30.012270 kernel: NET: Registered PF_XDP protocol family
Aug 13 01:34:30.012484 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Aug 13 01:34:30.012667 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Aug 13 01:34:30.012807 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Aug 13 01:34:30.012976 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Aug 13 01:34:30.013141 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Aug 13 01:34:30.014348 kernel: pci_bus 0000:00: resource 9 [mem 0x180000000-0x97fffffff window]
Aug 13 01:34:30.014364 kernel: PCI: CLS 0 bytes, default 64
Aug 13 01:34:30.014373 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Aug 13 01:34:30.014387 kernel: software IO TLB: mapped [mem 0x000000007bfdd000-0x000000007ffdd000] (64MB)
Aug 13 01:34:30.014396 kernel: Initialise system trusted keyrings
Aug 13 01:34:30.014405 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Aug 13 01:34:30.014413 kernel: Key type asymmetric registered
Aug 13 01:34:30.014421 kernel: Asymmetric key parser 'x509' registered
Aug 13 01:34:30.014429 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Aug 13 01:34:30.014437 kernel: io scheduler mq-deadline registered
Aug 13 01:34:30.014445 kernel: io scheduler kyber registered
Aug 13 01:34:30.014453 kernel: io scheduler bfq registered
Aug 13 01:34:30.014465 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Aug 13 01:34:30.014474 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Aug 13 01:34:30.014482 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Aug 13 01:34:30.014490 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Aug 13 01:34:30.014499 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Aug 13 01:34:30.014507 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Aug 13 01:34:30.014515 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Aug 13 01:34:30.014523 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Aug 13 01:34:30.014531 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Aug 13 01:34:30.014737 kernel: rtc_cmos 00:03: RTC can wake from S4
Aug 13 01:34:30.014879 kernel: rtc_cmos 00:03: registered as rtc0
Aug 13 01:34:30.015011 kernel: rtc_cmos 00:03: setting system clock to 2025-08-13T01:34:29 UTC (1755048869)
Aug 13 01:34:30.016217 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Aug 13 01:34:30.016232 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Aug 13 01:34:30.016241 kernel: NET: Registered PF_INET6 protocol family
Aug 13 01:34:30.016249 kernel: Segment Routing with IPv6
Aug 13 01:34:30.016257 kernel: In-situ OAM (IOAM) with IPv6
Aug 13 01:34:30.016271 kernel: NET: Registered PF_PACKET protocol family
Aug 13 01:34:30.016279 kernel: Key type dns_resolver registered
Aug 13 01:34:30.016287 kernel: IPI shorthand broadcast: enabled
Aug 13 01:34:30.016295 kernel: sched_clock: Marking stable (2814004720, 225214900)->(3161383100, -122163480)
Aug 13 01:34:30.016303 kernel: registered taskstats version 1
Aug 13 01:34:30.016311 kernel: Loading compiled-in X.509 certificates
Aug 13 01:34:30.016320 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.100-flatcar: dfd2b306eb54324ea79eea0261f8d493924aeeeb'
Aug 13 01:34:30.016328 kernel: Key type .fscrypt registered
Aug 13 01:34:30.016336 kernel: Key type fscrypt-provisioning registered
Aug 13 01:34:30.016347 kernel: ima: No TPM chip found, activating TPM-bypass!
Aug 13 01:34:30.016356 kernel: ima: Allocated hash algorithm: sha1
Aug 13 01:34:30.016364 kernel: ima: No architecture policies found
Aug 13 01:34:30.016372 kernel: clk: Disabling unused clocks
Aug 13 01:34:30.016380 kernel: Freeing unused kernel image (initmem) memory: 43504K
Aug 13 01:34:30.016388 kernel: Write protecting the kernel read-only data: 38912k
Aug 13 01:34:30.016396 kernel: Freeing unused kernel image (rodata/data gap) memory: 1704K
Aug 13 01:34:30.016404 kernel: Run /init as init process
Aug 13 01:34:30.016412 kernel: with arguments:
Aug 13 01:34:30.016423 kernel: /init
Aug 13 01:34:30.016432 kernel: with environment:
Aug 13 01:34:30.016439 kernel: HOME=/
Aug 13 01:34:30.016447 kernel: TERM=linux
Aug 13 01:34:30.016455 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Aug 13 01:34:30.016465 systemd[1]: Successfully made /usr/ read-only.
Aug 13 01:34:30.016476 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Aug 13 01:34:30.016488 systemd[1]: Detected virtualization kvm.
Aug 13 01:34:30.016497 systemd[1]: Detected architecture x86-64.
Aug 13 01:34:30.016505 systemd[1]: Running in initrd.
Aug 13 01:34:30.016514 systemd[1]: No hostname configured, using default hostname.
Aug 13 01:34:30.016523 systemd[1]: Hostname set to <localhost>.
Aug 13 01:34:30.016532 systemd[1]: Initializing machine ID from random generator.
Aug 13 01:34:30.016554 systemd[1]: Queued start job for default target initrd.target.
Aug 13 01:34:30.016566 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 13 01:34:30.016575 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 13 01:34:30.016585 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Aug 13 01:34:30.016594 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Aug 13 01:34:30.016603 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Aug 13 01:34:30.016613 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Aug 13 01:34:30.016626 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Aug 13 01:34:30.016635 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Aug 13 01:34:30.016644 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 13 01:34:30.016654 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Aug 13 01:34:30.016663 systemd[1]: Reached target paths.target - Path Units.
Aug 13 01:34:30.016672 systemd[1]: Reached target slices.target - Slice Units.
Aug 13 01:34:30.016680 systemd[1]: Reached target swap.target - Swaps.
Aug 13 01:34:30.016689 systemd[1]: Reached target timers.target - Timer Units.
Aug 13 01:34:30.016698 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Aug 13 01:34:30.016710 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Aug 13 01:34:30.016719 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Aug 13 01:34:30.016728 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Aug 13 01:34:30.016737 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Aug 13 01:34:30.016746 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Aug 13 01:34:30.016755 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 13 01:34:30.016764 systemd[1]: Reached target sockets.target - Socket Units.
Aug 13 01:34:30.016773 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Aug 13 01:34:30.016785 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Aug 13 01:34:30.016794 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Aug 13 01:34:30.016803 systemd[1]: Starting systemd-fsck-usr.service...
Aug 13 01:34:30.016812 systemd[1]: Starting systemd-journald.service - Journal Service...
Aug 13 01:34:30.016821 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Aug 13 01:34:30.016830 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 01:34:30.016839 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Aug 13 01:34:30.016848 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 13 01:34:30.016889 systemd-journald[178]: Collecting audit messages is disabled.
Aug 13 01:34:30.016914 systemd[1]: Finished systemd-fsck-usr.service.
Aug 13 01:34:30.016924 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Aug 13 01:34:30.016934 systemd-journald[178]: Journal started
Aug 13 01:34:30.016956 systemd-journald[178]: Runtime Journal (/run/log/journal/cfae56379c1b4a21864ac154b36625e0) is 8M, max 78.3M, 70.3M free.
Aug 13 01:34:29.990087 systemd-modules-load[179]: Inserted module 'overlay'
Aug 13 01:34:30.065460 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Aug 13 01:34:30.065484 kernel: Bridge firewalling registered
Aug 13 01:34:30.024944 systemd-modules-load[179]: Inserted module 'br_netfilter'
Aug 13 01:34:30.075199 systemd[1]: Started systemd-journald.service - Journal Service.
Aug 13 01:34:30.076172 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Aug 13 01:34:30.077855 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 01:34:30.080424 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug 13 01:34:30.090415 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 13 01:34:30.093326 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Aug 13 01:34:30.118349 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Aug 13 01:34:30.120420 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Aug 13 01:34:30.123914 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 01:34:30.133243 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Aug 13 01:34:30.134009 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Aug 13 01:34:30.146832 dracut-cmdline[208]: dracut-dracut-053
Aug 13 01:34:30.149767 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 13 01:34:30.150786 dracut-cmdline[208]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=ca71ea747c3f0d1de8a5ffcd0cfb9d0a1a4c4755719a09093b0248fa3902b433
Aug 13 01:34:30.152603 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Aug 13 01:34:30.160351 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Aug 13 01:34:30.197264 systemd-resolved[228]: Positive Trust Anchors:
Aug 13 01:34:30.198029 systemd-resolved[228]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug 13 01:34:30.198078 systemd-resolved[228]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Aug 13 01:34:30.203607 systemd-resolved[228]: Defaulting to hostname 'linux'.
Aug 13 01:34:30.204931 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Aug 13 01:34:30.205828 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Aug 13 01:34:30.238252 kernel: SCSI subsystem initialized
Aug 13 01:34:30.247199 kernel: Loading iSCSI transport class v2.0-870.
Aug 13 01:34:30.258205 kernel: iscsi: registered transport (tcp)
Aug 13 01:34:30.280722 kernel: iscsi: registered transport (qla4xxx)
Aug 13 01:34:30.280799 kernel: QLogic iSCSI HBA Driver
Aug 13 01:34:30.327817 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Aug 13 01:34:30.334339 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Aug 13 01:34:30.361772 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Aug 13 01:34:30.361850 kernel: device-mapper: uevent: version 1.0.3
Aug 13 01:34:30.361893 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Aug 13 01:34:30.407231 kernel: raid6: avx2x4 gen() 30221 MB/s
Aug 13 01:34:30.425220 kernel: raid6: avx2x2 gen() 28489 MB/s
Aug 13 01:34:30.443648 kernel: raid6: avx2x1 gen() 18221 MB/s
Aug 13 01:34:30.443729 kernel: raid6: using algorithm avx2x4 gen() 30221 MB/s
Aug 13 01:34:30.462733 kernel: raid6: .... xor() 4423 MB/s, rmw enabled
Aug 13 01:34:30.462800 kernel: raid6: using avx2x2 recovery algorithm
Aug 13 01:34:30.483217 kernel: xor: automatically using best checksumming function avx
Aug 13 01:34:30.639237 kernel: Btrfs loaded, zoned=no, fsverity=no
Aug 13 01:34:30.652322 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Aug 13 01:34:30.657367 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 13 01:34:30.684984 systemd-udevd[397]: Using default interface naming scheme 'v255'.
Aug 13 01:34:30.691471 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 13 01:34:30.702776 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Aug 13 01:34:30.715811 dracut-pre-trigger[403]: rd.md=0: removing MD RAID activation
Aug 13 01:34:30.752168 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Aug 13 01:34:30.757350 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Aug 13 01:34:30.870990 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 13 01:34:30.880454 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Aug 13 01:34:30.898786 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Aug 13 01:34:30.901814 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Aug 13 01:34:30.903310 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 13 01:34:30.905656 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Aug 13 01:34:30.915325 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Aug 13 01:34:30.929243 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Aug 13 01:34:30.958309 kernel: cryptd: max_cpu_qlen set to 1000
Aug 13 01:34:30.962204 kernel: scsi host0: Virtio SCSI HBA
Aug 13 01:34:30.979211 kernel: AVX2 version of gcm_enc/dec engaged.
Aug 13 01:34:30.985221 kernel: AES CTR mode by8 optimization enabled
Aug 13 01:34:30.989987 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Aug 13 01:34:30.994139 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Aug 13 01:34:30.994284 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 01:34:30.995012 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 13 01:34:30.998340 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 13 01:34:30.998880 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 01:34:31.001083 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 01:34:31.111580 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 01:34:31.118161 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Aug 13 01:34:31.140643 kernel: libata version 3.00 loaded.
Aug 13 01:34:31.231955 kernel: ahci 0000:00:1f.2: version 3.0
Aug 13 01:34:31.232346 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Aug 13 01:34:31.236204 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Aug 13 01:34:31.236439 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Aug 13 01:34:31.241211 kernel: scsi host1: ahci
Aug 13 01:34:31.241467 kernel: scsi host2: ahci
Aug 13 01:34:31.241655 kernel: scsi host3: ahci
Aug 13 01:34:31.242232 kernel: scsi host4: ahci
Aug 13 01:34:31.253855 kernel: scsi host5: ahci
Aug 13 01:34:31.254090 kernel: scsi host6: ahci
Aug 13 01:34:31.256218 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3100 irq 46
Aug 13 01:34:31.256264 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3180 irq 46
Aug 13 01:34:31.256277 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3200 irq 46
Aug 13 01:34:31.256288 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3280 irq 46
Aug 13 01:34:31.256298 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3300 irq 46
Aug 13 01:34:31.256309 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3380 irq 46
Aug 13 01:34:31.316572 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 01:34:31.320331 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 13 01:34:31.339367 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 01:34:31.573195 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Aug 13 01:34:31.573276 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Aug 13 01:34:31.573291 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Aug 13 01:34:31.573302 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Aug 13 01:34:31.573313 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Aug 13 01:34:31.574200 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Aug 13 01:34:31.615710 kernel: sd 0:0:0:0: Power-on or device reset occurred
Aug 13 01:34:31.615986 kernel: sd 0:0:0:0: [sda] 9297920 512-byte logical blocks: (4.76 GB/4.43 GiB)
Aug 13 01:34:31.618230 kernel: sd 0:0:0:0: [sda] Write Protect is off
Aug 13 01:34:31.619707 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08
Aug 13 01:34:31.619905 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Aug 13 01:34:31.626426 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Aug 13 01:34:31.626472 kernel: GPT:9289727 != 9297919
Aug 13 01:34:31.626485 kernel: GPT:Alternate GPT header not at the end of the disk.
Aug 13 01:34:31.627880 kernel: GPT:9289727 != 9297919
Aug 13 01:34:31.630111 kernel: GPT: Use GNU Parted to correct GPT errors.
Aug 13 01:34:31.630139 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Aug 13 01:34:31.633014 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Aug 13 01:34:31.671215 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by (udev-worker) (466)
Aug 13 01:34:31.682200 kernel: BTRFS: device fsid 88a9bed3-d26b-40c9-82ba-dbb7d44acae7 devid 1 transid 45 /dev/sda3 scanned by (udev-worker) (459)
Aug 13 01:34:31.691771 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
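Note: the GPT complaint above is plain arithmetic: the backup GPT header belongs in the disk's last LBA, and here it is not, which usually means the virtual disk was grown after the image was written (an inference, not stated in the log). Checking the logged numbers in Python:

disk_blocks = 9_297_920               # "[sda] 9297920 512-byte logical blocks"
expected_alt = disk_blocks - 1        # 9297919: the last LBA, where the kernel looks
found_alt = 9_289_727                 # where the backup header actually is ("9289727 != 9297919")
gap_sectors = expected_alt - found_alt            # 8192 sectors
print(gap_sectors * 512 // (1024 * 1024), "MiB")  # -> 4 MiB of unclaimed space

disk-uuid.service resolves this shortly below: "Secondary Header is updated" moves the backup header to the true end of the disk, and the subsequent partition rescans are clean.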
Aug 13 01:34:31.701338 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Aug 13 01:34:31.716489 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Aug 13 01:34:31.723858 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
Aug 13 01:34:31.724554 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
Aug 13 01:34:31.731320 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Aug 13 01:34:31.737557 disk-uuid[570]: Primary Header is updated.
Aug 13 01:34:31.737557 disk-uuid[570]: Secondary Entries is updated.
Aug 13 01:34:31.737557 disk-uuid[570]: Secondary Header is updated.
Aug 13 01:34:31.743197 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Aug 13 01:34:31.748198 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Aug 13 01:34:32.751317 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Aug 13 01:34:32.752029 disk-uuid[571]: The operation has completed successfully.
Aug 13 01:34:32.811256 systemd[1]: disk-uuid.service: Deactivated successfully.
Aug 13 01:34:32.811387 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Aug 13 01:34:32.848330 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Aug 13 01:34:32.851818 sh[585]: Success
Aug 13 01:34:32.866733 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Aug 13 01:34:32.918438 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Aug 13 01:34:32.927282 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Aug 13 01:34:32.928762 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Aug 13 01:34:32.958729 kernel: BTRFS info (device dm-0): first mount of filesystem 88a9bed3-d26b-40c9-82ba-dbb7d44acae7
Aug 13 01:34:32.958804 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Aug 13 01:34:32.960206 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Aug 13 01:34:32.964072 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Aug 13 01:34:32.964111 kernel: BTRFS info (device dm-0): using free space tree
Aug 13 01:34:32.973218 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Aug 13 01:34:32.974979 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Aug 13 01:34:32.976131 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Aug 13 01:34:32.981330 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Aug 13 01:34:32.984350 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Aug 13 01:34:33.004840 kernel: BTRFS info (device sda6): first mount of filesystem fdf7217d-4a76-4a93-98b1-684d9c141517
Aug 13 01:34:33.004900 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Aug 13 01:34:33.006613 kernel: BTRFS info (device sda6): using free space tree
Aug 13 01:34:33.014332 kernel: BTRFS info (device sda6): enabling ssd optimizations
Aug 13 01:34:33.014375 kernel: BTRFS info (device sda6): auto enabling async discard
Aug 13 01:34:33.021293 kernel: BTRFS info (device sda6): last unmount of filesystem fdf7217d-4a76-4a93-98b1-684d9c141517
Aug 13 01:34:33.022852 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Aug 13 01:34:33.028339 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Aug 13 01:34:33.141653 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Aug 13 01:34:33.150368 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Aug 13 01:34:33.282368 systemd-networkd[763]: lo: Link UP
Aug 13 01:34:33.282385 systemd-networkd[763]: lo: Gained carrier
Aug 13 01:34:33.301995 systemd-networkd[763]: Enumeration completed
Aug 13 01:34:33.302571 systemd-networkd[763]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 13 01:34:33.302576 systemd-networkd[763]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Aug 13 01:34:33.304624 systemd-networkd[763]: eth0: Link UP
Aug 13 01:34:33.304631 systemd-networkd[763]: eth0: Gained carrier
Aug 13 01:34:33.304642 systemd-networkd[763]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 13 01:34:33.305477 systemd[1]: Started systemd-networkd.service - Network Configuration.
Aug 13 01:34:33.307629 systemd[1]: Reached target network.target - Network.
Aug 13 01:34:33.390606 ignition[680]: Ignition 2.20.0
Aug 13 01:34:33.391256 ignition[680]: Stage: fetch-offline
Aug 13 01:34:33.391340 ignition[680]: no configs at "/usr/lib/ignition/base.d"
Aug 13 01:34:33.391356 ignition[680]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Aug 13 01:34:33.391544 ignition[680]: parsed url from cmdline: ""
Aug 13 01:34:33.393872 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Aug 13 01:34:33.391552 ignition[680]: no config URL provided
Aug 13 01:34:33.391563 ignition[680]: reading system config file "/usr/lib/ignition/user.ign"
Aug 13 01:34:33.391581 ignition[680]: no config at "/usr/lib/ignition/user.ign"
Aug 13 01:34:33.391588 ignition[680]: failed to fetch config: resource requires networking
Aug 13 01:34:33.392344 ignition[680]: Ignition finished successfully
Aug 13 01:34:33.401489 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Aug 13 01:34:33.469772 ignition[771]: Ignition 2.20.0
Aug 13 01:34:33.469788 ignition[771]: Stage: fetch
Aug 13 01:34:33.469995 ignition[771]: no configs at "/usr/lib/ignition/base.d"
Aug 13 01:34:33.470010 ignition[771]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Aug 13 01:34:33.470118 ignition[771]: parsed url from cmdline: ""
Aug 13 01:34:33.470123 ignition[771]: no config URL provided
Aug 13 01:34:33.470129 ignition[771]: reading system config file "/usr/lib/ignition/user.ign"
Aug 13 01:34:33.470141 ignition[771]: no config at "/usr/lib/ignition/user.ign"
Aug 13 01:34:33.470167 ignition[771]: PUT http://169.254.169.254/v1/token: attempt #1
Aug 13 01:34:33.470506 ignition[771]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Aug 13 01:34:33.670712 ignition[771]: PUT http://169.254.169.254/v1/token: attempt #2
Aug 13 01:34:33.670942 ignition[771]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Aug 13 01:34:33.972272 systemd-networkd[763]: eth0: DHCPv4 address 172.233.223.240/24, gateway 172.233.223.1 acquired from 23.215.118.19
Aug 13 01:34:34.071591 ignition[771]: PUT http://169.254.169.254/v1/token: attempt #3
Aug 13 01:34:34.182272 ignition[771]: PUT result: OK
Aug 13 01:34:34.182364 ignition[771]: GET http://169.254.169.254/v1/user-data: attempt #1
Aug 13 01:34:34.313729 ignition[771]: GET result: OK
Aug 13 01:34:34.313918 ignition[771]: parsing config with SHA512: 198d7779b670cc163455b055df501c74756861c1bdade11e7b879d7831712699ef209afd42a291576f33099e01bc753509be13c5eda65a843af5649cc3f72323
Aug 13 01:34:34.320383 unknown[771]: fetched base config from "system"
Aug 13 01:34:34.320416 unknown[771]: fetched base config from "system"
Aug 13 01:34:34.320985 ignition[771]: fetch: fetch complete
Aug 13 01:34:34.320422 unknown[771]: fetched user config from "akamai"
Aug 13 01:34:34.320992 ignition[771]: fetch: fetch passed
Aug 13 01:34:34.321053 ignition[771]: Ignition finished successfully
Aug 13 01:34:34.324511 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Aug 13 01:34:34.331424 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Aug 13 01:34:34.439147 ignition[778]: Ignition 2.20.0
Aug 13 01:34:34.439167 ignition[778]: Stage: kargs
Aug 13 01:34:34.439393 ignition[778]: no configs at "/usr/lib/ignition/base.d"
Aug 13 01:34:34.439407 ignition[778]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Aug 13 01:34:34.440460 ignition[778]: kargs: kargs passed
Aug 13 01:34:34.442161 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Aug 13 01:34:34.440512 ignition[778]: Ignition finished successfully
Aug 13 01:34:34.449353 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Aug 13 01:34:34.466866 ignition[785]: Ignition 2.20.0
Aug 13 01:34:34.466881 ignition[785]: Stage: disks
Aug 13 01:34:34.467093 ignition[785]: no configs at "/usr/lib/ignition/base.d"
Aug 13 01:34:34.467107 ignition[785]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Aug 13 01:34:34.470462 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Aug 13 01:34:34.468034 ignition[785]: disks: disks passed
Aug 13 01:34:34.498322 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Aug 13 01:34:34.468083 ignition[785]: Ignition finished successfully
Aug 13 01:34:34.499195 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
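Note: the fetch stage above is a token-then-user-data exchange with the link-local metadata service, retried until DHCP configures eth0 (attempts #1 and #2 fail with "network is unreachable"; attempt #3, after the DHCPv4 lease, succeeds, and the payload is then verified against a SHA512). A minimal Python sketch of that exchange; the header names are an assumption about the Linode/Akamai metadata API, while the URLs, verbs, and retry shape come from the log:

import time
import urllib.request

def fetch_user_data(retries=3, delay=2.0):
    for attempt in range(1, retries + 1):
        try:
            # PUT /v1/token first, as in "PUT http://169.254.169.254/v1/token: attempt #N"
            req = urllib.request.Request(
                "http://169.254.169.254/v1/token", method="PUT",
                headers={"Metadata-Token-Expiry-Seconds": "300"},  # assumed header name
            )
            token = urllib.request.urlopen(req, timeout=5).read().decode()
            # then GET /v1/user-data with the token, as in the log's GET
            req = urllib.request.Request(
                "http://169.254.169.254/v1/user-data",
                headers={"Metadata-Token": token},  # assumed header name
            )
            return urllib.request.urlopen(req, timeout=5).read()
        except OSError:  # e.g. "connect: network is unreachable" before DHCP finishes
            if attempt == retries:
                raise
            time.sleep(delay)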
Aug 13 01:34:34.500506 systemd[1]: Reached target local-fs.target - Local File Systems.
Aug 13 01:34:34.501886 systemd[1]: Reached target sysinit.target - System Initialization.
Aug 13 01:34:34.503231 systemd[1]: Reached target basic.target - Basic System.
Aug 13 01:34:34.511389 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Aug 13 01:34:34.548704 systemd-fsck[794]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Aug 13 01:34:34.552597 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Aug 13 01:34:34.558343 systemd[1]: Mounting sysroot.mount - /sysroot...
Aug 13 01:34:34.720242 kernel: EXT4-fs (sda9): mounted filesystem 27db109b-2440-48a3-909e-fd8973275523 r/w with ordered data mode. Quota mode: none.
Aug 13 01:34:34.721094 systemd[1]: Mounted sysroot.mount - /sysroot.
Aug 13 01:34:34.723288 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Aug 13 01:34:34.730281 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Aug 13 01:34:34.733336 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Aug 13 01:34:34.735588 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Aug 13 01:34:34.736323 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Aug 13 01:34:34.736355 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Aug 13 01:34:34.749213 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (802)
Aug 13 01:34:34.749683 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Aug 13 01:34:34.751214 kernel: BTRFS info (device sda6): first mount of filesystem fdf7217d-4a76-4a93-98b1-684d9c141517
Aug 13 01:34:34.751261 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Aug 13 01:34:34.751279 kernel: BTRFS info (device sda6): using free space tree
Aug 13 01:34:34.760206 kernel: BTRFS info (device sda6): enabling ssd optimizations
Aug 13 01:34:34.760256 kernel: BTRFS info (device sda6): auto enabling async discard
Aug 13 01:34:34.763565 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Aug 13 01:34:34.776414 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Aug 13 01:34:34.823584 systemd-networkd[763]: eth0: Gained IPv6LL
Aug 13 01:34:34.834203 initrd-setup-root[826]: cut: /sysroot/etc/passwd: No such file or directory
Aug 13 01:34:34.840850 initrd-setup-root[833]: cut: /sysroot/etc/group: No such file or directory
Aug 13 01:34:34.846475 initrd-setup-root[840]: cut: /sysroot/etc/shadow: No such file or directory
Aug 13 01:34:34.851158 initrd-setup-root[847]: cut: /sysroot/etc/gshadow: No such file or directory
Aug 13 01:34:34.981558 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Aug 13 01:34:34.988271 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Aug 13 01:34:34.993939 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Aug 13 01:34:35.024115 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Aug 13 01:34:35.027822 kernel: BTRFS info (device sda6): last unmount of filesystem fdf7217d-4a76-4a93-98b1-684d9c141517
Aug 13 01:34:35.055606 ignition[915]: INFO : Ignition 2.20.0
Aug 13 01:34:35.056624 ignition[915]: INFO : Stage: mount
Aug 13 01:34:35.057620 ignition[915]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 13 01:34:35.057620 ignition[915]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Aug 13 01:34:35.061322 ignition[915]: INFO : mount: mount passed
Aug 13 01:34:35.061322 ignition[915]: INFO : Ignition finished successfully
Aug 13 01:34:35.062204 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Aug 13 01:34:35.063837 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Aug 13 01:34:35.071380 systemd[1]: Starting ignition-files.service - Ignition (files)...
Aug 13 01:34:35.733351 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Aug 13 01:34:35.747313 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (927)
Aug 13 01:34:35.747360 kernel: BTRFS info (device sda6): first mount of filesystem fdf7217d-4a76-4a93-98b1-684d9c141517
Aug 13 01:34:35.750751 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Aug 13 01:34:35.754123 kernel: BTRFS info (device sda6): using free space tree
Aug 13 01:34:35.759263 kernel: BTRFS info (device sda6): enabling ssd optimizations
Aug 13 01:34:35.759298 kernel: BTRFS info (device sda6): auto enabling async discard
Aug 13 01:34:35.763056 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Aug 13 01:34:35.784408 ignition[944]: INFO : Ignition 2.20.0
Aug 13 01:34:35.784408 ignition[944]: INFO : Stage: files
Aug 13 01:34:35.786738 ignition[944]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 13 01:34:35.786738 ignition[944]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Aug 13 01:34:35.786738 ignition[944]: DEBUG : files: compiled without relabeling support, skipping
Aug 13 01:34:35.786738 ignition[944]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Aug 13 01:34:35.786738 ignition[944]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Aug 13 01:34:35.791013 ignition[944]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Aug 13 01:34:35.791013 ignition[944]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Aug 13 01:34:35.791013 ignition[944]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Aug 13 01:34:35.790480 unknown[944]: wrote ssh authorized keys file for user: core
Aug 13 01:34:35.794312 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Aug 13 01:34:35.794312 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Aug 13 01:34:36.086922 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Aug 13 01:34:36.561566 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Aug 13 01:34:36.561566 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Aug 13 01:34:36.563809 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Aug 13 01:34:36.786410 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Aug 13 01:34:37.063918 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Aug 13 01:34:37.065744 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Aug 13 01:34:37.065744 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Aug 13 01:34:37.065744 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Aug 13 01:34:37.065744 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Aug 13 01:34:37.065744 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Aug 13 01:34:37.065744 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Aug 13 01:34:37.065744 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Aug 13 01:34:37.065744 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Aug 13 01:34:37.065744 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Aug 13 01:34:37.065744 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Aug 13 01:34:37.065744 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Aug 13 01:34:37.065744 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Aug 13 01:34:37.065744 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Aug 13 01:34:37.065744 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Aug 13 01:34:37.474348 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Aug 13 01:34:39.001613 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Aug 13 01:34:39.001613 ignition[944]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Aug 13 01:34:39.003968 ignition[944]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Aug 13 01:34:39.003968 ignition[944]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Aug 13 01:34:39.003968 ignition[944]: INFO : files: op(c): [finished] processing unit
"prepare-helm.service" Aug 13 01:34:39.003968 ignition[944]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Aug 13 01:34:39.003968 ignition[944]: INFO : files: op(e): op(f): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Aug 13 01:34:39.003968 ignition[944]: INFO : files: op(e): op(f): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Aug 13 01:34:39.003968 ignition[944]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Aug 13 01:34:39.003968 ignition[944]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Aug 13 01:34:39.003968 ignition[944]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Aug 13 01:34:39.003968 ignition[944]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Aug 13 01:34:39.003968 ignition[944]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Aug 13 01:34:39.003968 ignition[944]: INFO : files: files passed Aug 13 01:34:39.003968 ignition[944]: INFO : Ignition finished successfully Aug 13 01:34:39.005277 systemd[1]: Finished ignition-files.service - Ignition (files). Aug 13 01:34:39.014682 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Aug 13 01:34:39.018909 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Aug 13 01:34:39.021712 systemd[1]: ignition-quench.service: Deactivated successfully. Aug 13 01:34:39.021840 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Aug 13 01:34:39.033041 initrd-setup-root-after-ignition[973]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 13 01:34:39.033041 initrd-setup-root-after-ignition[973]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Aug 13 01:34:39.036647 initrd-setup-root-after-ignition[977]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 13 01:34:39.039098 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 13 01:34:39.040032 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Aug 13 01:34:39.047353 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Aug 13 01:34:39.071702 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Aug 13 01:34:39.071884 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Aug 13 01:34:39.073396 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Aug 13 01:34:39.074470 systemd[1]: Reached target initrd.target - Initrd Default Target. Aug 13 01:34:39.075733 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Aug 13 01:34:39.082335 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Aug 13 01:34:39.095570 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 13 01:34:39.101340 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Aug 13 01:34:39.117416 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. 
Aug 13 01:34:39.118158 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 13 01:34:39.119727 systemd[1]: Stopped target timers.target - Timer Units. Aug 13 01:34:39.121057 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Aug 13 01:34:39.121210 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 13 01:34:39.122575 systemd[1]: Stopped target initrd.target - Initrd Default Target. Aug 13 01:34:39.123439 systemd[1]: Stopped target basic.target - Basic System. Aug 13 01:34:39.124654 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Aug 13 01:34:39.125822 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Aug 13 01:34:39.126891 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Aug 13 01:34:39.128154 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Aug 13 01:34:39.129455 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Aug 13 01:34:39.130716 systemd[1]: Stopped target sysinit.target - System Initialization. Aug 13 01:34:39.131968 systemd[1]: Stopped target local-fs.target - Local File Systems. Aug 13 01:34:39.133148 systemd[1]: Stopped target swap.target - Swaps. Aug 13 01:34:39.134256 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Aug 13 01:34:39.134394 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Aug 13 01:34:39.135678 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Aug 13 01:34:39.136497 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 13 01:34:39.137603 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Aug 13 01:34:39.137708 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 13 01:34:39.138910 systemd[1]: dracut-initqueue.service: Deactivated successfully. Aug 13 01:34:39.139089 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Aug 13 01:34:39.140623 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Aug 13 01:34:39.140748 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 13 01:34:39.141542 systemd[1]: ignition-files.service: Deactivated successfully. Aug 13 01:34:39.141707 systemd[1]: Stopped ignition-files.service - Ignition (files). Aug 13 01:34:39.148425 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Aug 13 01:34:39.149861 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Aug 13 01:34:39.150495 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Aug 13 01:34:39.152841 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Aug 13 01:34:39.153600 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Aug 13 01:34:39.153724 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Aug 13 01:34:39.179939 systemd[1]: initrd-cleanup.service: Deactivated successfully. Aug 13 01:34:39.180088 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
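
The teardown above is the initrd transaction run backwards: a target or service stops only after everything that depended on it has stopped. An illustrative toy, not systemd's actual job engine: topologically sort a few After=-style edges taken from this log, then stop in reverse order:

```python
# Toy model of dependency-ordered start and reverse-ordered stop.
from graphlib import TopologicalSorter

deps = {  # unit -> units it must start after (small subset of this boot)
    "initrd.target": {"ignition-files.service"},
    "ignition-files.service": {"sysroot.mount"},
    "sysroot.mount": {"systemd-fsck-root.service"},
}
start_order = list(TopologicalSorter(deps).static_order())
print("start:", start_order)                 # fsck, mount, files, target
print("stop: ", list(reversed(start_order)))  # the order seen in the teardown
```
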
Aug 13 01:34:39.190189 ignition[997]: INFO : Ignition 2.20.0 Aug 13 01:34:39.190189 ignition[997]: INFO : Stage: umount Aug 13 01:34:39.190189 ignition[997]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 13 01:34:39.190189 ignition[997]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Aug 13 01:34:39.190189 ignition[997]: INFO : umount: umount passed Aug 13 01:34:39.190189 ignition[997]: INFO : Ignition finished successfully Aug 13 01:34:39.190820 systemd[1]: ignition-mount.service: Deactivated successfully. Aug 13 01:34:39.191244 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Aug 13 01:34:39.193219 systemd[1]: ignition-disks.service: Deactivated successfully. Aug 13 01:34:39.193286 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Aug 13 01:34:39.193981 systemd[1]: ignition-kargs.service: Deactivated successfully. Aug 13 01:34:39.194036 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Aug 13 01:34:39.196296 systemd[1]: ignition-fetch.service: Deactivated successfully. Aug 13 01:34:39.196373 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Aug 13 01:34:39.199456 systemd[1]: Stopped target network.target - Network. Aug 13 01:34:39.200403 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Aug 13 01:34:39.200489 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Aug 13 01:34:39.225516 systemd[1]: Stopped target paths.target - Path Units. Aug 13 01:34:39.226480 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Aug 13 01:34:39.230266 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 13 01:34:39.231019 systemd[1]: Stopped target slices.target - Slice Units. Aug 13 01:34:39.232270 systemd[1]: Stopped target sockets.target - Socket Units. Aug 13 01:34:39.233781 systemd[1]: iscsid.socket: Deactivated successfully. Aug 13 01:34:39.233845 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Aug 13 01:34:39.235219 systemd[1]: iscsiuio.socket: Deactivated successfully. Aug 13 01:34:39.235268 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Aug 13 01:34:39.236593 systemd[1]: ignition-setup.service: Deactivated successfully. Aug 13 01:34:39.236664 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Aug 13 01:34:39.237983 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Aug 13 01:34:39.238063 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Aug 13 01:34:39.239298 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Aug 13 01:34:39.240558 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Aug 13 01:34:39.242873 systemd[1]: sysroot-boot.mount: Deactivated successfully. Aug 13 01:34:39.243484 systemd[1]: sysroot-boot.service: Deactivated successfully. Aug 13 01:34:39.243610 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Aug 13 01:34:39.247386 systemd[1]: initrd-setup-root.service: Deactivated successfully. Aug 13 01:34:39.247504 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Aug 13 01:34:39.253031 systemd[1]: systemd-resolved.service: Deactivated successfully. Aug 13 01:34:39.253206 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Aug 13 01:34:39.256869 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. 
Aug 13 01:34:39.257128 systemd[1]: systemd-networkd.service: Deactivated successfully. Aug 13 01:34:39.257517 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Aug 13 01:34:39.259857 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Aug 13 01:34:39.260931 systemd[1]: systemd-networkd.socket: Deactivated successfully. Aug 13 01:34:39.261006 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Aug 13 01:34:39.267297 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Aug 13 01:34:39.268606 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Aug 13 01:34:39.268695 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Aug 13 01:34:39.269462 systemd[1]: systemd-sysctl.service: Deactivated successfully. Aug 13 01:34:39.269528 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Aug 13 01:34:39.271229 systemd[1]: systemd-modules-load.service: Deactivated successfully. Aug 13 01:34:39.271309 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Aug 13 01:34:39.272050 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Aug 13 01:34:39.272113 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 13 01:34:39.274033 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 13 01:34:39.276944 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Aug 13 01:34:39.277027 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Aug 13 01:34:39.290196 systemd[1]: network-cleanup.service: Deactivated successfully. Aug 13 01:34:39.290334 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Aug 13 01:34:39.293921 systemd[1]: systemd-udevd.service: Deactivated successfully. Aug 13 01:34:39.294126 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 13 01:34:39.295465 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Aug 13 01:34:39.295531 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Aug 13 01:34:39.296510 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Aug 13 01:34:39.296553 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Aug 13 01:34:39.297628 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Aug 13 01:34:39.297684 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Aug 13 01:34:39.299359 systemd[1]: dracut-cmdline.service: Deactivated successfully. Aug 13 01:34:39.299415 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Aug 13 01:34:39.300514 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Aug 13 01:34:39.300568 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 13 01:34:39.309400 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Aug 13 01:34:39.309976 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Aug 13 01:34:39.310044 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 13 01:34:39.313105 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Aug 13 01:34:39.313163 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. 
Aug 13 01:34:39.314516 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Aug 13 01:34:39.314570 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Aug 13 01:34:39.316030 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 13 01:34:39.316085 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 01:34:39.317789 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Aug 13 01:34:39.317860 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Aug 13 01:34:39.318287 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Aug 13 01:34:39.318413 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Aug 13 01:34:39.319926 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Aug 13 01:34:39.327532 systemd[1]: Starting initrd-switch-root.service - Switch Root... Aug 13 01:34:39.335510 systemd[1]: Switching root. Aug 13 01:34:39.370552 systemd-journald[178]: Journal stopped Aug 13 01:34:40.863796 systemd-journald[178]: Received SIGTERM from PID 1 (systemd). Aug 13 01:34:40.863829 kernel: SELinux: policy capability network_peer_controls=1 Aug 13 01:34:40.863843 kernel: SELinux: policy capability open_perms=1 Aug 13 01:34:40.863854 kernel: SELinux: policy capability extended_socket_class=1 Aug 13 01:34:40.863865 kernel: SELinux: policy capability always_check_network=0 Aug 13 01:34:40.863886 kernel: SELinux: policy capability cgroup_seclabel=1 Aug 13 01:34:40.863898 kernel: SELinux: policy capability nnp_nosuid_transition=1 Aug 13 01:34:40.863909 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Aug 13 01:34:40.863919 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Aug 13 01:34:40.863930 kernel: audit: type=1403 audit(1755048879.519:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Aug 13 01:34:40.863942 systemd[1]: Successfully loaded SELinux policy in 52.477ms. Aug 13 01:34:40.863962 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 15.650ms. Aug 13 01:34:40.863974 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Aug 13 01:34:40.863986 systemd[1]: Detected virtualization kvm. Aug 13 01:34:40.863998 systemd[1]: Detected architecture x86-64. Aug 13 01:34:40.864010 systemd[1]: Detected first boot. Aug 13 01:34:40.864029 systemd[1]: Initializing machine ID from random generator. Aug 13 01:34:40.864041 zram_generator::config[1041]: No configuration found. Aug 13 01:34:40.864053 kernel: Guest personality initialized and is inactive Aug 13 01:34:40.864064 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Aug 13 01:34:40.864083 kernel: Initialized host personality Aug 13 01:34:40.864094 kernel: NET: Registered PF_VSOCK protocol family Aug 13 01:34:40.864105 systemd[1]: Populated /etc with preset unit settings. Aug 13 01:34:40.864125 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Aug 13 01:34:40.864137 systemd[1]: initrd-switch-root.service: Deactivated successfully. Aug 13 01:34:40.864148 systemd[1]: Stopped initrd-switch-root.service - Switch Root. 
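
After the switch root, the interesting numbers are the timings journald records: the SELinux policy load in 52.477 ms and the /dev relabel in 15.650 ms above, the reload durations further down. When auditing boots like this one, a small extractor helps; a sketch assuming journal lines shaped exactly like the ones quoted here:

```python
# Pull "... in NN ms" timings out of a captured journal, slowest first.
import re
import sys

TIMING = re.compile(r"^(?P<ts>\w+ \d+ [\d:.]+) .*?(?P<what>[A-Z][^.]*?) in "
                    r"(?P<ms>\d+(?:\.\d+)?)\s*ms\.?", re.M)

def timings(text: str):
    for m in TIMING.finditer(text):
        yield float(m.group("ms")), m.group("what").strip()

if __name__ == "__main__":
    for ms, what in sorted(timings(sys.stdin.read()), reverse=True):
        print(f"{ms:10.3f} ms  {what}")
```
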
Aug 13 01:34:40.864160 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Aug 13 01:34:40.864810 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Aug 13 01:34:40.864834 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Aug 13 01:34:40.864848 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Aug 13 01:34:40.864873 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Aug 13 01:34:40.864886 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Aug 13 01:34:40.864897 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Aug 13 01:34:40.864909 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Aug 13 01:34:40.864920 systemd[1]: Created slice user.slice - User and Session Slice. Aug 13 01:34:40.864931 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 13 01:34:40.864943 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 13 01:34:40.864954 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Aug 13 01:34:40.864965 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Aug 13 01:34:40.864985 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Aug 13 01:34:40.865010 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Aug 13 01:34:40.865022 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Aug 13 01:34:40.865034 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 13 01:34:40.865053 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Aug 13 01:34:40.865065 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Aug 13 01:34:40.865077 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Aug 13 01:34:40.865103 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Aug 13 01:34:40.865122 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 13 01:34:40.865135 systemd[1]: Reached target remote-fs.target - Remote File Systems. Aug 13 01:34:40.865147 systemd[1]: Reached target slices.target - Slice Units. Aug 13 01:34:40.865159 systemd[1]: Reached target swap.target - Swaps. Aug 13 01:34:40.865185 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Aug 13 01:34:40.865197 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Aug 13 01:34:40.865209 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Aug 13 01:34:40.865221 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Aug 13 01:34:40.865244 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Aug 13 01:34:40.865256 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Aug 13 01:34:40.865268 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Aug 13 01:34:40.865279 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Aug 13 01:34:40.865299 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... 
Aug 13 01:34:40.865311 systemd[1]: Mounting media.mount - External Media Directory... Aug 13 01:34:40.865323 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 01:34:40.865334 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Aug 13 01:34:40.865346 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Aug 13 01:34:40.865365 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Aug 13 01:34:40.865377 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Aug 13 01:34:40.865416 systemd[1]: Reached target machines.target - Containers. Aug 13 01:34:40.865439 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Aug 13 01:34:40.865452 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 01:34:40.865464 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Aug 13 01:34:40.865476 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Aug 13 01:34:40.865487 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 13 01:34:40.865499 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Aug 13 01:34:40.865511 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 13 01:34:40.865522 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Aug 13 01:34:40.865534 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 13 01:34:40.865558 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Aug 13 01:34:40.865577 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Aug 13 01:34:40.865590 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Aug 13 01:34:40.865602 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Aug 13 01:34:40.865614 systemd[1]: Stopped systemd-fsck-usr.service. Aug 13 01:34:40.865626 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Aug 13 01:34:40.865637 kernel: fuse: init (API version 7.39) Aug 13 01:34:40.865659 systemd[1]: Starting systemd-journald.service - Journal Service... Aug 13 01:34:40.865671 kernel: ACPI: bus type drm_connector registered Aug 13 01:34:40.865689 kernel: loop: module loaded Aug 13 01:34:40.865884 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Aug 13 01:34:40.865895 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Aug 13 01:34:40.865907 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Aug 13 01:34:40.865945 systemd-journald[1121]: Collecting audit messages is disabled. Aug 13 01:34:40.865979 systemd-journald[1121]: Journal started Aug 13 01:34:40.866003 systemd-journald[1121]: Runtime Journal (/run/log/journal/6268e98314f8496f937f85af362ae84f) is 8M, max 78.3M, 70.3M free. Aug 13 01:34:40.327319 systemd[1]: Queued start job for default target multi-user.target. 
Aug 13 01:34:40.337947 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Aug 13 01:34:40.338685 systemd[1]: systemd-journald.service: Deactivated successfully. Aug 13 01:34:40.871311 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Aug 13 01:34:40.880108 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Aug 13 01:34:40.880159 systemd[1]: verity-setup.service: Deactivated successfully. Aug 13 01:34:40.881400 systemd[1]: Stopped verity-setup.service. Aug 13 01:34:40.889298 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 01:34:40.889340 systemd[1]: Started systemd-journald.service - Journal Service. Aug 13 01:34:40.892018 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Aug 13 01:34:40.892815 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Aug 13 01:34:40.893507 systemd[1]: Mounted media.mount - External Media Directory. Aug 13 01:34:40.894422 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Aug 13 01:34:40.895206 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Aug 13 01:34:40.897100 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Aug 13 01:34:40.900683 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Aug 13 01:34:40.901583 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Aug 13 01:34:40.902605 systemd[1]: modprobe@configfs.service: Deactivated successfully. Aug 13 01:34:40.902849 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Aug 13 01:34:40.905561 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 01:34:40.905793 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 13 01:34:40.906766 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 13 01:34:40.907041 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Aug 13 01:34:40.908197 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 01:34:40.908413 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 13 01:34:40.909624 systemd[1]: modprobe@fuse.service: Deactivated successfully. Aug 13 01:34:40.909835 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Aug 13 01:34:40.911099 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 01:34:40.911400 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 13 01:34:40.912398 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Aug 13 01:34:40.913460 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Aug 13 01:34:40.914714 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Aug 13 01:34:40.915793 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Aug 13 01:34:40.928965 systemd[1]: Reached target network-pre.target - Preparation for Network. Aug 13 01:34:40.949245 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Aug 13 01:34:40.970250 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... 
Aug 13 01:34:40.971021 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Aug 13 01:34:40.971132 systemd[1]: Reached target local-fs.target - Local File Systems. Aug 13 01:34:40.973829 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Aug 13 01:34:40.978738 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Aug 13 01:34:40.988534 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Aug 13 01:34:40.989281 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 01:34:40.991524 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Aug 13 01:34:40.996402 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Aug 13 01:34:40.997014 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 01:34:41.000345 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Aug 13 01:34:41.001299 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 13 01:34:41.004338 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 13 01:34:41.025543 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Aug 13 01:34:41.058417 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Aug 13 01:34:41.067240 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Aug 13 01:34:41.080821 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Aug 13 01:34:41.081775 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Aug 13 01:34:41.093799 kernel: loop0: detected capacity change from 0 to 224512 Aug 13 01:34:41.095893 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Aug 13 01:34:41.097872 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Aug 13 01:34:41.115761 systemd-journald[1121]: Time spent on flushing to /var/log/journal/6268e98314f8496f937f85af362ae84f is 51.776ms for 998 entries. Aug 13 01:34:41.115761 systemd-journald[1121]: System Journal (/var/log/journal/6268e98314f8496f937f85af362ae84f) is 8M, max 195.6M, 187.6M free. Aug 13 01:34:41.269541 systemd-journald[1121]: Received client request to flush runtime journal. Aug 13 01:34:41.163661 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Aug 13 01:34:41.196707 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Aug 13 01:34:41.212562 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Aug 13 01:34:41.241837 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 13 01:34:41.259247 udevadm[1176]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Aug 13 01:34:41.271841 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. 
Aug 13 01:34:41.284463 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Aug 13 01:34:41.294845 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Aug 13 01:34:41.298327 systemd-tmpfiles[1168]: ACLs are not supported, ignoring. Aug 13 01:34:41.298348 systemd-tmpfiles[1168]: ACLs are not supported, ignoring. Aug 13 01:34:41.307366 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Aug 13 01:34:41.316603 kernel: loop1: detected capacity change from 0 to 138176 Aug 13 01:34:41.315236 systemd[1]: Starting systemd-sysusers.service - Create System Users... Aug 13 01:34:41.340996 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Aug 13 01:34:41.374230 kernel: loop2: detected capacity change from 0 to 147912 Aug 13 01:34:41.484600 systemd[1]: Finished systemd-sysusers.service - Create System Users. Aug 13 01:34:41.496329 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Aug 13 01:34:41.608209 kernel: loop3: detected capacity change from 0 to 8 Aug 13 01:34:41.610547 systemd-tmpfiles[1192]: ACLs are not supported, ignoring. Aug 13 01:34:41.610930 systemd-tmpfiles[1192]: ACLs are not supported, ignoring. Aug 13 01:34:41.619992 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 13 01:34:41.673659 kernel: loop4: detected capacity change from 0 to 224512 Aug 13 01:34:41.715722 kernel: loop5: detected capacity change from 0 to 138176 Aug 13 01:34:41.846209 kernel: loop6: detected capacity change from 0 to 147912 Aug 13 01:34:41.914562 kernel: loop7: detected capacity change from 0 to 8 Aug 13 01:34:41.921091 (sd-merge)[1197]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-akamai'. Aug 13 01:34:41.932881 (sd-merge)[1197]: Merged extensions into '/usr'. Aug 13 01:34:41.940288 systemd[1]: Reload requested from client PID 1167 ('systemd-sysext') (unit systemd-sysext.service)... Aug 13 01:34:41.940304 systemd[1]: Reloading... Aug 13 01:34:42.260249 zram_generator::config[1224]: No configuration found. Aug 13 01:34:42.436062 ldconfig[1162]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Aug 13 01:34:42.435620 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 01:34:42.503018 systemd[1]: Reloading finished in 562 ms. Aug 13 01:34:42.519964 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Aug 13 01:34:42.521421 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Aug 13 01:34:42.541438 systemd[1]: Starting ensure-sysext.service... Aug 13 01:34:42.550705 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Aug 13 01:34:42.631995 systemd[1]: Reload requested from client PID 1268 ('systemctl') (unit ensure-sysext.service)... Aug 13 01:34:42.632016 systemd[1]: Reloading... Aug 13 01:34:42.735003 systemd-tmpfiles[1269]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Aug 13 01:34:42.735334 systemd-tmpfiles[1269]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. 
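
The (sd-merge) lines are systemd-sysext at work: the kubernetes.raw symlink written during the files stage, together with the Flatcar containerd, docker, and OEM images, is overlaid onto /usr, after which systemd reloads itself (the "Reload requested from client PID ... (systemd-sysext)" entry). A sketch of the discovery half, using search directories documented for systemd-sysext; whether this image populates all three is an assumption:

```python
# Enumerate sysext candidates the way the merge step scans for them.
from pathlib import Path

SEARCH = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

def discover() -> list[str]:
    names = set()
    for d in map(Path, SEARCH):
        if not d.is_dir():
            continue
        for entry in sorted(d.iterdir()):
            if entry.suffix == ".raw" or entry.is_dir():  # images or plain trees
                names.add(entry.name.removesuffix(".raw"))
    return sorted(names)

print("Using extensions", discover())
```
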
Aug 13 01:34:42.737908 systemd-tmpfiles[1269]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Aug 13 01:34:42.738382 systemd-tmpfiles[1269]: ACLs are not supported, ignoring. Aug 13 01:34:42.738752 systemd-tmpfiles[1269]: ACLs are not supported, ignoring. Aug 13 01:34:42.746141 systemd-tmpfiles[1269]: Detected autofs mount point /boot during canonicalization of boot. Aug 13 01:34:42.746365 systemd-tmpfiles[1269]: Skipping /boot Aug 13 01:34:42.770711 systemd-tmpfiles[1269]: Detected autofs mount point /boot during canonicalization of boot. Aug 13 01:34:42.771041 systemd-tmpfiles[1269]: Skipping /boot Aug 13 01:34:42.801210 zram_generator::config[1295]: No configuration found. Aug 13 01:34:42.933843 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 01:34:42.994752 systemd[1]: Reloading finished in 362 ms. Aug 13 01:34:43.010743 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Aug 13 01:34:43.023320 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 13 01:34:43.034444 systemd[1]: Starting audit-rules.service - Load Audit Rules... Aug 13 01:34:43.038099 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Aug 13 01:34:43.043438 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Aug 13 01:34:43.048598 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Aug 13 01:34:43.056867 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 13 01:34:43.067339 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Aug 13 01:34:43.070942 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 01:34:43.071159 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 01:34:43.082519 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 13 01:34:43.089490 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 13 01:34:43.093436 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 13 01:34:43.094191 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 01:34:43.094483 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Aug 13 01:34:43.094574 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 01:34:43.110325 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Aug 13 01:34:43.111581 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 01:34:43.112267 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 13 01:34:43.120291 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. 
Aug 13 01:34:43.123683 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Aug 13 01:34:43.132109 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 01:34:43.132936 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 13 01:34:43.135779 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 01:34:43.136007 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 13 01:34:43.140035 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 01:34:43.140291 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 01:34:43.149798 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 13 01:34:43.161106 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 13 01:34:43.171267 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 13 01:34:43.172039 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 01:34:43.172218 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Aug 13 01:34:43.177483 systemd[1]: Starting systemd-update-done.service - Update is Completed... Aug 13 01:34:43.179360 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 01:34:43.181171 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 01:34:43.182483 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 13 01:34:43.194906 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 01:34:43.195198 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 13 01:34:43.197951 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 01:34:43.198747 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 13 01:34:43.201930 augenrules[1383]: No rules Aug 13 01:34:43.204965 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Aug 13 01:34:43.208220 systemd[1]: audit-rules.service: Deactivated successfully. Aug 13 01:34:43.208528 systemd[1]: Finished audit-rules.service - Load Audit Rules. Aug 13 01:34:43.214441 systemd-udevd[1348]: Using default interface naming scheme 'v255'. Aug 13 01:34:43.220587 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 01:34:43.220830 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 01:34:43.226467 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 13 01:34:43.238837 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Aug 13 01:34:43.239622 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Aug 13 01:34:43.239674 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Aug 13 01:34:43.239732 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 01:34:43.239787 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Aug 13 01:34:43.239818 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 01:34:43.241283 systemd[1]: Started systemd-userdbd.service - User Database Manager. Aug 13 01:34:43.242536 systemd[1]: Finished ensure-sysext.service. Aug 13 01:34:43.244659 systemd[1]: Finished systemd-update-done.service - Update is Completed. Aug 13 01:34:43.245792 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 01:34:43.246055 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 13 01:34:43.251558 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 13 01:34:43.251785 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Aug 13 01:34:43.256959 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 13 01:34:43.266172 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Aug 13 01:34:43.274668 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 13 01:34:43.282381 systemd[1]: Starting systemd-networkd.service - Network Configuration... Aug 13 01:34:43.379729 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Aug 13 01:34:43.570593 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Aug 13 01:34:43.571383 systemd[1]: Reached target time-set.target - System Time Set. Aug 13 01:34:43.656757 systemd-resolved[1347]: Positive Trust Anchors: Aug 13 01:34:43.657132 systemd-resolved[1347]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 13 01:34:43.657259 systemd-resolved[1347]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Aug 13 01:34:43.663150 systemd-resolved[1347]: Defaulting to hostname 'linux'. Aug 13 01:34:43.684032 systemd-networkd[1404]: lo: Link UP Aug 13 01:34:43.684045 systemd-networkd[1404]: lo: Gained carrier Aug 13 01:34:43.685449 systemd-networkd[1404]: Enumeration completed Aug 13 01:34:43.685574 systemd[1]: Started systemd-networkd.service - Network Configuration. Aug 13 01:34:43.706463 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... 
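
The positive trust anchor systemd-resolved prints is the root zone's KSK-2017 DS record: key tag 20326, algorithm 8 (RSASHA256), digest type 2 (SHA-256). Per RFC 4034 §5.1.4 the digest is computed over the owner name in canonical wire form, a single 0x00 byte for the root, followed by the DNSKEY RDATA; a sketch, noting that the DNSKEY bytes would have to come from a live query and are not in the log:

```python
# Recompute a root DS digest (digest type 2 = SHA-256) per RFC 4034 §5.1.4.
import hashlib

def ds_digest_root(dnskey_rdata: bytes) -> str:
    # Root owner name in wire form is the single zero byte.
    return hashlib.sha256(b"\x00" + dnskey_rdata).hexdigest()

EXPECTED = "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d"
# assert ds_digest_root(rdata_from_a_dig_DNSKEY_query) == EXPECTED
```
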
Aug 13 01:34:43.732434 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Aug 13 01:34:43.734630 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Aug 13 01:34:43.735334 systemd[1]: Reached target network.target - Network. Aug 13 01:34:43.736415 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Aug 13 01:34:43.818215 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Aug 13 01:34:43.829381 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Aug 13 01:34:43.829754 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Aug 13 01:34:43.829996 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Aug 13 01:34:43.831556 systemd-networkd[1404]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 01:34:43.831570 systemd-networkd[1404]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 13 01:34:43.833273 systemd-networkd[1404]: eth0: Link UP Aug 13 01:34:43.833285 systemd-networkd[1404]: eth0: Gained carrier Aug 13 01:34:43.833305 systemd-networkd[1404]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 01:34:43.840288 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Aug 13 01:34:43.844608 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Aug 13 01:34:43.853666 kernel: ACPI: button: Power Button [PWRF] Aug 13 01:34:43.853742 kernel: EDAC MC: Ver: 3.0.0 Aug 13 01:34:43.883212 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 01:34:43.949212 kernel: mousedev: PS/2 mouse device common for all mice Aug 13 01:34:43.968218 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 45 scanned by (udev-worker) (1410) Aug 13 01:34:44.026987 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Aug 13 01:34:44.083025 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Aug 13 01:34:44.084388 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 01:34:44.091470 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Aug 13 01:34:44.095369 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Aug 13 01:34:44.122679 lvm[1449]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Aug 13 01:34:44.136602 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Aug 13 01:34:44.210734 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Aug 13 01:34:44.211758 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Aug 13 01:34:44.212402 systemd[1]: Reached target sysinit.target - System Initialization. Aug 13 01:34:44.213090 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Aug 13 01:34:44.213742 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Aug 13 01:34:44.214841 systemd[1]: Started logrotate.timer - Daily rotation of log files. 
Aug 13 01:34:44.215784 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Aug 13 01:34:44.216444 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Aug 13 01:34:44.217170 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Aug 13 01:34:44.217227 systemd[1]: Reached target paths.target - Path Units. Aug 13 01:34:44.217749 systemd[1]: Reached target timers.target - Timer Units. Aug 13 01:34:44.219851 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Aug 13 01:34:44.222796 systemd[1]: Starting docker.socket - Docker Socket for the API... Aug 13 01:34:44.226746 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Aug 13 01:34:44.227567 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Aug 13 01:34:44.228212 systemd[1]: Reached target ssh-access.target - SSH Access Available. Aug 13 01:34:44.232124 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Aug 13 01:34:44.233276 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Aug 13 01:34:44.235433 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Aug 13 01:34:44.236859 systemd[1]: Listening on docker.socket - Docker Socket for the API. Aug 13 01:34:44.237584 systemd[1]: Reached target sockets.target - Socket Units. Aug 13 01:34:44.238392 systemd[1]: Reached target basic.target - Basic System. Aug 13 01:34:44.238976 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Aug 13 01:34:44.239014 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Aug 13 01:34:44.246346 systemd[1]: Starting containerd.service - containerd container runtime... Aug 13 01:34:44.252351 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Aug 13 01:34:44.253857 lvm[1456]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Aug 13 01:34:44.262385 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Aug 13 01:34:44.266254 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Aug 13 01:34:44.270389 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Aug 13 01:34:44.271043 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Aug 13 01:34:44.295353 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Aug 13 01:34:44.304614 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Aug 13 01:34:44.321442 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Aug 13 01:34:44.324866 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Aug 13 01:34:44.339625 systemd[1]: Starting systemd-logind.service - User Login Management... Aug 13 01:34:44.341332 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Aug 13 01:34:44.341962 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. 
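
The socket units listed here (docker.socket, sshd.socket, the systemd-ssh-generator sockets) all rely on systemd's socket-activation protocol: the manager binds the sockets itself and hands them to the activated service as file descriptors starting at 3, advertised through the LISTEN_FDS and LISTEN_PID environment variables. A minimal consumer, following sd_listen_fds(3):

```python
# Receive socket-activated fds the way an activated service would.
import os
import socket

SD_LISTEN_FDS_START = 3  # first fd systemd passes, per sd_listen_fds(3)

def listen_fds() -> list[socket.socket]:
    if os.environ.get("LISTEN_PID") != str(os.getpid()):
        return []  # the fds were not meant for this process
    n = int(os.environ.get("LISTEN_FDS", "0"))
    # Family/type are detected automatically from the inherited fd.
    return [socket.socket(fileno=fd)
            for fd in range(SD_LISTEN_FDS_START, SD_LISTEN_FDS_START + n)]
```
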
Aug 13 01:34:44.345239 systemd[1]: Starting update-engine.service - Update Engine... Aug 13 01:34:44.351339 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Aug 13 01:34:44.393998 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Aug 13 01:34:44.397484 jq[1460]: false Aug 13 01:34:44.394433 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Aug 13 01:34:44.398453 dbus-daemon[1459]: [system] SELinux support is enabled Aug 13 01:34:44.398669 systemd[1]: Started dbus.service - D-Bus System Message Bus. Aug 13 01:34:44.431887 extend-filesystems[1461]: Found loop4 Aug 13 01:34:44.431887 extend-filesystems[1461]: Found loop5 Aug 13 01:34:44.431887 extend-filesystems[1461]: Found loop6 Aug 13 01:34:44.431887 extend-filesystems[1461]: Found loop7 Aug 13 01:34:44.431887 extend-filesystems[1461]: Found sda Aug 13 01:34:44.431887 extend-filesystems[1461]: Found sda1 Aug 13 01:34:44.431887 extend-filesystems[1461]: Found sda2 Aug 13 01:34:44.431887 extend-filesystems[1461]: Found sda3 Aug 13 01:34:44.431887 extend-filesystems[1461]: Found usr Aug 13 01:34:44.431887 extend-filesystems[1461]: Found sda4 Aug 13 01:34:44.431887 extend-filesystems[1461]: Found sda6 Aug 13 01:34:44.431887 extend-filesystems[1461]: Found sda7 Aug 13 01:34:44.431887 extend-filesystems[1461]: Found sda9 Aug 13 01:34:44.431887 extend-filesystems[1461]: Checking size of /dev/sda9 Aug 13 01:34:44.579377 kernel: EXT4-fs (sda9): resizing filesystem from 553472 to 555003 blocks Aug 13 01:34:44.579410 kernel: EXT4-fs (sda9): resized filesystem to 555003 Aug 13 01:34:44.406250 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Aug 13 01:34:44.579505 update_engine[1469]: I20250813 01:34:44.411005 1469 main.cc:92] Flatcar Update Engine starting Aug 13 01:34:44.579505 update_engine[1469]: I20250813 01:34:44.452095 1469 update_check_scheduler.cc:74] Next update check in 9m41s Aug 13 01:34:44.581136 extend-filesystems[1461]: Resized partition /dev/sda9 Aug 13 01:34:44.473414 dbus-daemon[1459]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1404 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Aug 13 01:34:44.441991 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Aug 13 01:34:44.581843 coreos-metadata[1458]: Aug 13 01:34:44.580 INFO Putting http://169.254.169.254/v1/token: Attempt #1 Aug 13 01:34:44.582085 jq[1470]: true Aug 13 01:34:44.582436 extend-filesystems[1493]: resize2fs 1.47.1 (20-May-2024) Aug 13 01:34:44.582436 extend-filesystems[1493]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Aug 13 01:34:44.582436 extend-filesystems[1493]: old_desc_blocks = 1, new_desc_blocks = 1 Aug 13 01:34:44.582436 extend-filesystems[1493]: The filesystem on /dev/sda9 is now 555003 (4k) blocks long. Aug 13 01:34:44.443042 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Aug 13 01:34:44.639460 extend-filesystems[1461]: Resized filesystem in /dev/sda9 Aug 13 01:34:44.449701 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). 
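The extend-filesystems run above grew the root filesystem online: the kernel messages show /dev/sda9 going from 553472 to 555003 4k blocks while mounted on /. ext4 supports this without unmounting; a sketch of the equivalent manual steps, with the device name taken from the log and assuming the underlying partition was already enlarged:

    # Grow the mounted ext4 filesystem to fill its (already enlarged) partition.
    resize2fs /dev/sda9
    # Confirm the new size in filesystem blocks.
    dumpe2fs -h /dev/sda9 | grep 'Block count'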
Aug 13 01:34:44.449745 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Aug 13 01:34:44.451093 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Aug 13 01:34:44.451114 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Aug 13 01:34:44.453679 systemd[1]: Started update-engine.service - Update Engine. Aug 13 01:34:44.641292 jq[1491]: true Aug 13 01:34:44.465335 systemd[1]: Started locksmithd.service - Cluster reboot manager. Aug 13 01:34:44.469346 systemd-networkd[1404]: eth0: DHCPv4 address 172.233.223.240/24, gateway 172.233.223.1 acquired from 23.215.118.19 Aug 13 01:34:44.471883 systemd-timesyncd[1401]: Network configuration changed, trying to establish connection. Aug 13 01:34:44.497370 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Aug 13 01:34:44.561773 systemd[1]: motdgen.service: Deactivated successfully. Aug 13 01:34:44.562106 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Aug 13 01:34:44.581434 (ntainerd)[1492]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Aug 13 01:34:44.609788 systemd[1]: extend-filesystems.service: Deactivated successfully. Aug 13 01:34:44.610131 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Aug 13 01:34:44.711227 tar[1497]: linux-amd64/LICENSE Aug 13 01:34:44.711227 tar[1497]: linux-amd64/helm Aug 13 01:34:44.890783 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Aug 13 01:34:44.906308 bash[1521]: Updated "/home/core/.ssh/authorized_keys" Aug 13 01:34:44.906417 coreos-metadata[1458]: Aug 13 01:34:44.743 INFO Fetching http://169.254.169.254/v1/instance: Attempt #1 Aug 13 01:34:44.932212 sshd_keygen[1498]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Aug 13 01:34:44.933869 systemd[1]: Starting sshkeys.service... Aug 13 01:34:44.967013 coreos-metadata[1458]: Aug 13 01:34:44.966 INFO Fetch successful Aug 13 01:34:44.967013 coreos-metadata[1458]: Aug 13 01:34:44.966 INFO Fetching http://169.254.169.254/v1/network: Attempt #1 Aug 13 01:34:44.989535 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Aug 13 01:34:44.999258 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Aug 13 01:34:45.058273 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Aug 13 01:34:45.098658 systemd[1]: Starting issuegen.service - Generate /run/issue... Aug 13 01:34:45.110806 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Aug 13 01:34:45.171256 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 45 scanned by (udev-worker) (1421) Aug 13 01:34:45.145385 dbus-daemon[1459]: [system] Successfully activated service 'org.freedesktop.hostname1' Aug 13 01:34:45.147608 dbus-daemon[1459]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1494 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Aug 13 01:34:45.171505 systemd[1]: Starting polkit.service - Authorization Manager... 
Aug 13 01:34:45.195860 locksmithd[1487]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Aug 13 01:34:45.219565 systemd[1]: issuegen.service: Deactivated successfully. Aug 13 01:34:45.257956 coreos-metadata[1530]: Aug 13 01:34:45.257 INFO Putting http://169.254.169.254/v1/token: Attempt #1 Aug 13 01:34:45.220036 systemd[1]: Finished issuegen.service - Generate /run/issue. Aug 13 01:34:45.266088 coreos-metadata[1458]: Aug 13 01:34:45.263 INFO Fetch successful Aug 13 01:34:45.369076 systemd-logind[1468]: Watching system buttons on /dev/input/event2 (Power Button) Aug 13 01:34:45.369128 systemd-logind[1468]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Aug 13 01:34:45.376038 systemd-logind[1468]: New seat seat0. Aug 13 01:34:45.412885 systemd[1]: Started systemd-logind.service - User Login Management. Aug 13 01:34:45.455677 polkitd[1537]: Started polkitd version 121 Aug 13 01:34:45.473642 coreos-metadata[1530]: Aug 13 01:34:45.467 INFO Fetching http://169.254.169.254/v1/ssh-keys: Attempt #1 Aug 13 01:34:45.550221 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Aug 13 01:34:45.578539 polkitd[1537]: Loading rules from directory /etc/polkit-1/rules.d Aug 13 01:34:45.578785 polkitd[1537]: Loading rules from directory /usr/share/polkit-1/rules.d Aug 13 01:34:45.579523 systemd[1]: Started sshd@0-172.233.223.240:22-139.178.89.65:52624.service - OpenSSH per-connection server daemon (139.178.89.65:52624). Aug 13 01:34:45.598510 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Aug 13 01:34:45.600394 polkitd[1537]: Finished loading, compiling and executing 2 rules Aug 13 01:34:45.619645 dbus-daemon[1459]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Aug 13 01:34:45.621398 polkitd[1537]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Aug 13 01:34:45.633776 coreos-metadata[1530]: Aug 13 01:34:45.633 INFO Fetch successful Aug 13 01:34:45.643142 systemd[1]: Started polkit.service - Authorization Manager. Aug 13 01:34:45.644451 systemd-networkd[1404]: eth0: Gained IPv6LL Aug 13 01:34:45.650358 systemd-timesyncd[1401]: Network configuration changed, trying to establish connection. Aug 13 01:34:45.721024 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Aug 13 01:34:45.744342 systemd[1]: Reached target network-online.target - Network is Online. Aug 13 01:34:45.751417 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 01:34:45.829076 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Aug 13 01:34:45.831058 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Aug 13 01:34:45.837098 update-ssh-keys[1572]: Updated "/home/core/.ssh/authorized_keys" Aug 13 01:34:45.850987 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Aug 13 01:34:45.859860 systemd[1]: Finished sshkeys.service. Aug 13 01:34:45.889362 systemd[1]: Started getty@tty1.service - Getty on tty1. Aug 13 01:34:45.893048 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Aug 13 01:34:45.893970 systemd[1]: Reached target getty.target - Login Prompts. Aug 13 01:34:45.959650 systemd-hostnamed[1494]: Hostname set to <172-233-223-240> (transient) Aug 13 01:34:45.962889 systemd-resolved[1347]: System hostname changed to '172-233-223-240'. Aug 13 01:34:46.047856 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
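polkitd above compiles every *.rules file found under /etc/polkit-1/rules.d and /usr/share/polkit-1/rules.d; rules are small JavaScript callbacks evaluated per authorization request. A hypothetical example of the format (not one of the two rules actually loaded here):

    # Hypothetical: let members of group "wheel" manage systemd units without a prompt.
    cat <<'EOF' > /etc/polkit-1/rules.d/49-wheel-systemd.rules
    polkit.addRule(function(action, subject) {
        if (action.id == "org.freedesktop.systemd1.manage-units" &&
            subject.isInGroup("wheel")) {
            return polkit.Result.YES;
        }
    });
    EOF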
Aug 13 01:34:46.176532 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Aug 13 01:34:46.189472 systemd-timesyncd[1401]: Network configuration changed, trying to establish connection. Aug 13 01:34:46.194647 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Aug 13 01:34:46.226859 containerd[1492]: time="2025-08-13T01:34:46.226730590Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Aug 13 01:34:46.312026 containerd[1492]: time="2025-08-13T01:34:46.311926060Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Aug 13 01:34:46.315365 containerd[1492]: time="2025-08-13T01:34:46.315268880Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.100-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Aug 13 01:34:46.315365 containerd[1492]: time="2025-08-13T01:34:46.315313990Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Aug 13 01:34:46.315365 containerd[1492]: time="2025-08-13T01:34:46.315354520Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Aug 13 01:34:46.315630 containerd[1492]: time="2025-08-13T01:34:46.315600860Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Aug 13 01:34:46.315662 containerd[1492]: time="2025-08-13T01:34:46.315630520Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Aug 13 01:34:46.315744 containerd[1492]: time="2025-08-13T01:34:46.315716110Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 01:34:46.315744 containerd[1492]: time="2025-08-13T01:34:46.315739500Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Aug 13 01:34:46.316041 containerd[1492]: time="2025-08-13T01:34:46.316009340Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 01:34:46.316041 containerd[1492]: time="2025-08-13T01:34:46.316033960Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Aug 13 01:34:46.316119 containerd[1492]: time="2025-08-13T01:34:46.316048530Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 01:34:46.316119 containerd[1492]: time="2025-08-13T01:34:46.316072890Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Aug 13 01:34:46.316279 containerd[1492]: time="2025-08-13T01:34:46.316248870Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Aug 13 01:34:46.316566 containerd[1492]: time="2025-08-13T01:34:46.316540500Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
type=io.containerd.snapshotter.v1 Aug 13 01:34:46.316759 containerd[1492]: time="2025-08-13T01:34:46.316733180Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 01:34:46.316759 containerd[1492]: time="2025-08-13T01:34:46.316755760Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Aug 13 01:34:46.316922 containerd[1492]: time="2025-08-13T01:34:46.316895410Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Aug 13 01:34:46.317034 containerd[1492]: time="2025-08-13T01:34:46.317007550Z" level=info msg="metadata content store policy set" policy=shared Aug 13 01:34:46.335096 containerd[1492]: time="2025-08-13T01:34:46.335037410Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Aug 13 01:34:46.335208 containerd[1492]: time="2025-08-13T01:34:46.335130590Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Aug 13 01:34:46.335208 containerd[1492]: time="2025-08-13T01:34:46.335150790Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Aug 13 01:34:46.335647 containerd[1492]: time="2025-08-13T01:34:46.335614990Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Aug 13 01:34:46.335691 containerd[1492]: time="2025-08-13T01:34:46.335669770Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Aug 13 01:34:46.370220 containerd[1492]: time="2025-08-13T01:34:46.335922820Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Aug 13 01:34:46.376319 containerd[1492]: time="2025-08-13T01:34:46.376270120Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Aug 13 01:34:46.376559 containerd[1492]: time="2025-08-13T01:34:46.376515780Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Aug 13 01:34:46.376559 containerd[1492]: time="2025-08-13T01:34:46.376549240Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Aug 13 01:34:46.376656 containerd[1492]: time="2025-08-13T01:34:46.376570010Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Aug 13 01:34:46.376656 containerd[1492]: time="2025-08-13T01:34:46.376588670Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Aug 13 01:34:46.376656 containerd[1492]: time="2025-08-13T01:34:46.376606290Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Aug 13 01:34:46.376656 containerd[1492]: time="2025-08-13T01:34:46.376622590Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Aug 13 01:34:46.376656 containerd[1492]: time="2025-08-13T01:34:46.376640310Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." 
type=io.containerd.service.v1 Aug 13 01:34:46.376656 containerd[1492]: time="2025-08-13T01:34:46.376657960Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Aug 13 01:34:46.376800 containerd[1492]: time="2025-08-13T01:34:46.376674110Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Aug 13 01:34:46.376800 containerd[1492]: time="2025-08-13T01:34:46.376691030Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Aug 13 01:34:46.376800 containerd[1492]: time="2025-08-13T01:34:46.376706350Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Aug 13 01:34:46.376800 containerd[1492]: time="2025-08-13T01:34:46.376764200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Aug 13 01:34:46.376800 containerd[1492]: time="2025-08-13T01:34:46.376786160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Aug 13 01:34:46.376937 containerd[1492]: time="2025-08-13T01:34:46.376803060Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Aug 13 01:34:46.376937 containerd[1492]: time="2025-08-13T01:34:46.376818470Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Aug 13 01:34:46.376937 containerd[1492]: time="2025-08-13T01:34:46.376841620Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Aug 13 01:34:46.376937 containerd[1492]: time="2025-08-13T01:34:46.376858650Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Aug 13 01:34:46.376937 containerd[1492]: time="2025-08-13T01:34:46.376874680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Aug 13 01:34:46.376937 containerd[1492]: time="2025-08-13T01:34:46.376891100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Aug 13 01:34:46.376937 containerd[1492]: time="2025-08-13T01:34:46.376907580Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Aug 13 01:34:46.376937 containerd[1492]: time="2025-08-13T01:34:46.376926890Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Aug 13 01:34:46.377157 containerd[1492]: time="2025-08-13T01:34:46.376957700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Aug 13 01:34:46.377157 containerd[1492]: time="2025-08-13T01:34:46.376984810Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Aug 13 01:34:46.377157 containerd[1492]: time="2025-08-13T01:34:46.377001730Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Aug 13 01:34:46.377157 containerd[1492]: time="2025-08-13T01:34:46.377019240Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Aug 13 01:34:46.377157 containerd[1492]: time="2025-08-13T01:34:46.377044370Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." 
type=io.containerd.grpc.v1 Aug 13 01:34:46.377157 containerd[1492]: time="2025-08-13T01:34:46.377062470Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Aug 13 01:34:46.377157 containerd[1492]: time="2025-08-13T01:34:46.377076750Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Aug 13 01:34:46.377157 containerd[1492]: time="2025-08-13T01:34:46.377128600Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Aug 13 01:34:46.377157 containerd[1492]: time="2025-08-13T01:34:46.377156880Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Aug 13 01:34:46.377435 containerd[1492]: time="2025-08-13T01:34:46.377194930Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Aug 13 01:34:46.377435 containerd[1492]: time="2025-08-13T01:34:46.377212580Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Aug 13 01:34:46.377435 containerd[1492]: time="2025-08-13T01:34:46.377226150Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Aug 13 01:34:46.377435 containerd[1492]: time="2025-08-13T01:34:46.377243820Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Aug 13 01:34:46.377435 containerd[1492]: time="2025-08-13T01:34:46.377255580Z" level=info msg="NRI interface is disabled by configuration." Aug 13 01:34:46.377435 containerd[1492]: time="2025-08-13T01:34:46.377270230Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Aug 13 01:34:46.377644 containerd[1492]: time="2025-08-13T01:34:46.377585310Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Aug 13 01:34:46.378227 containerd[1492]: time="2025-08-13T01:34:46.377659270Z" level=info msg="Connect containerd service" Aug 13 01:34:46.378227 containerd[1492]: time="2025-08-13T01:34:46.377701590Z" level=info msg="using legacy CRI server" Aug 13 01:34:46.378227 containerd[1492]: time="2025-08-13T01:34:46.377709400Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Aug 13 01:34:46.378227 containerd[1492]: time="2025-08-13T01:34:46.377984620Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Aug 13 01:34:46.381039 sshd[1563]: Accepted publickey for core from 139.178.89.65 port 52624 ssh2: RSA SHA256:i/TZLewXLzdHEK5VnGeUosSnCiLBbfMeziqy3I8laao Aug 13 01:34:46.381519 containerd[1492]: time="2025-08-13T01:34:46.381458070Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for 
pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 13 01:34:46.392882 sshd-session[1563]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:34:46.412019 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Aug 13 01:34:46.424542 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Aug 13 01:34:46.451280 systemd-logind[1468]: New session 1 of user core. Aug 13 01:34:46.454627 containerd[1492]: time="2025-08-13T01:34:46.454154330Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Aug 13 01:34:46.454627 containerd[1492]: time="2025-08-13T01:34:46.454487040Z" level=info msg=serving... address=/run/containerd/containerd.sock Aug 13 01:34:46.485346 containerd[1492]: time="2025-08-13T01:34:46.480126110Z" level=info msg="Start subscribing containerd event" Aug 13 01:34:46.485346 containerd[1492]: time="2025-08-13T01:34:46.480235970Z" level=info msg="Start recovering state" Aug 13 01:34:46.485346 containerd[1492]: time="2025-08-13T01:34:46.480327480Z" level=info msg="Start event monitor" Aug 13 01:34:46.485346 containerd[1492]: time="2025-08-13T01:34:46.480357920Z" level=info msg="Start snapshots syncer" Aug 13 01:34:46.485346 containerd[1492]: time="2025-08-13T01:34:46.480369480Z" level=info msg="Start cni network conf syncer for default" Aug 13 01:34:46.485346 containerd[1492]: time="2025-08-13T01:34:46.480377620Z" level=info msg="Start streaming server" Aug 13 01:34:46.485346 containerd[1492]: time="2025-08-13T01:34:46.480471430Z" level=info msg="containerd successfully booted in 0.265692s" Aug 13 01:34:46.485686 systemd[1]: Started containerd.service - containerd container runtime. Aug 13 01:34:46.493090 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Aug 13 01:34:46.507656 systemd[1]: Starting user@500.service - User Manager for UID 500... Aug 13 01:34:46.514598 (systemd)[1607]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Aug 13 01:34:46.520596 systemd-logind[1468]: New session c1 of user core. Aug 13 01:34:46.831598 systemd[1607]: Queued start job for default target default.target. Aug 13 01:34:46.838665 systemd[1607]: Created slice app.slice - User Application Slice. Aug 13 01:34:46.839220 systemd[1607]: Reached target paths.target - Paths. Aug 13 01:34:46.839411 systemd[1607]: Reached target timers.target - Timers. Aug 13 01:34:46.853152 systemd[1607]: Starting dbus.socket - D-Bus User Message Bus Socket... Aug 13 01:34:46.884843 systemd[1607]: Listening on dbus.socket - D-Bus User Message Bus Socket. Aug 13 01:34:46.885672 systemd[1607]: Reached target sockets.target - Sockets. Aug 13 01:34:46.885726 systemd[1607]: Reached target basic.target - Basic System. Aug 13 01:34:46.885776 systemd[1607]: Reached target default.target - Main User Target. Aug 13 01:34:46.885813 systemd[1607]: Startup finished in 337ms. Aug 13 01:34:46.885961 systemd[1]: Started user@500.service - User Manager for UID 500. Aug 13 01:34:46.895395 systemd[1]: Started session-1.scope - Session 1 of User core. Aug 13 01:34:47.319543 tar[1497]: linux-amd64/README.md Aug 13 01:34:47.324256 systemd[1]: Started sshd@1-172.233.223.240:22-139.178.89.65:52636.service - OpenSSH per-connection server daemon (139.178.89.65:52636). Aug 13 01:34:47.364826 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Aug 13 01:34:47.379139 systemd-timesyncd[1401]: Network configuration changed, trying to establish connection. Aug 13 01:34:47.665505 sshd[1618]: Accepted publickey for core from 139.178.89.65 port 52636 ssh2: RSA SHA256:i/TZLewXLzdHEK5VnGeUosSnCiLBbfMeziqy3I8laao Aug 13 01:34:47.667703 sshd-session[1618]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:34:47.675576 systemd-logind[1468]: New session 2 of user core. Aug 13 01:34:47.685463 systemd[1]: Started session-2.scope - Session 2 of User core. Aug 13 01:34:48.123713 sshd[1623]: Connection closed by 139.178.89.65 port 52636 Aug 13 01:34:48.138677 sshd-session[1618]: pam_unix(sshd:session): session closed for user core Aug 13 01:34:48.157046 systemd[1]: sshd@1-172.233.223.240:22-139.178.89.65:52636.service: Deactivated successfully. Aug 13 01:34:48.158704 systemd[1]: session-2.scope: Deactivated successfully. Aug 13 01:34:48.159776 systemd-logind[1468]: Session 2 logged out. Waiting for processes to exit. Aug 13 01:34:48.181099 systemd[1]: Started sshd@2-172.233.223.240:22-139.178.89.65:36078.service - OpenSSH per-connection server daemon (139.178.89.65:36078). Aug 13 01:34:48.183611 systemd-logind[1468]: Removed session 2. Aug 13 01:34:48.546024 sshd[1628]: Accepted publickey for core from 139.178.89.65 port 36078 ssh2: RSA SHA256:i/TZLewXLzdHEK5VnGeUosSnCiLBbfMeziqy3I8laao Aug 13 01:34:48.548106 sshd-session[1628]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:34:48.558381 systemd-logind[1468]: New session 3 of user core. Aug 13 01:34:48.562348 systemd[1]: Started session-3.scope - Session 3 of User core. Aug 13 01:34:48.885424 sshd[1631]: Connection closed by 139.178.89.65 port 36078 Aug 13 01:34:48.886746 sshd-session[1628]: pam_unix(sshd:session): session closed for user core Aug 13 01:34:48.891946 systemd-logind[1468]: Session 3 logged out. Waiting for processes to exit. Aug 13 01:34:48.894541 systemd[1]: sshd@2-172.233.223.240:22-139.178.89.65:36078.service: Deactivated successfully. Aug 13 01:34:48.897515 systemd[1]: session-3.scope: Deactivated successfully. Aug 13 01:34:48.899581 systemd-logind[1468]: Removed session 3. Aug 13 01:34:49.389285 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 01:34:49.390569 systemd[1]: Reached target multi-user.target - Multi-User System. Aug 13 01:34:49.392376 systemd[1]: Startup finished in 2.979s (kernel) + 9.756s (initrd) + 9.924s (userspace) = 22.660s. Aug 13 01:34:49.395799 (kubelet)[1641]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 01:34:50.208977 kubelet[1641]: E0813 01:34:50.208890 1641 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 01:34:50.213726 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 01:34:50.214236 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 01:34:50.214976 systemd[1]: kubelet.service: Consumed 2.910s CPU time, 265.8M memory peak. Aug 13 01:34:58.952514 systemd[1]: Started sshd@3-172.233.223.240:22-139.178.89.65:39592.service - OpenSSH per-connection server daemon (139.178.89.65:39592). 
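The kubelet crash loop that begins above (and repeats at each scheduled restart below) is the stock failure mode of an uninitialized node: /var/lib/kubelet/config.yaml is written by kubeadm init or kubeadm join, so until one of those runs, every start exits with status 1. A skeleton of the file kubelet expects, hypothetical apart from the path and the systemd cgroup driver that the containerd CRI config dumped earlier already assumes:

    # Hypothetical skeleton; kubeadm generates the real file on init/join.
    cat <<'EOF' > /var/lib/kubelet/config.yaml
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    EOF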
Aug 13 01:34:59.282717 sshd[1653]: Accepted publickey for core from 139.178.89.65 port 39592 ssh2: RSA SHA256:i/TZLewXLzdHEK5VnGeUosSnCiLBbfMeziqy3I8laao Aug 13 01:34:59.284406 sshd-session[1653]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:34:59.289532 systemd-logind[1468]: New session 4 of user core. Aug 13 01:34:59.297374 systemd[1]: Started session-4.scope - Session 4 of User core. Aug 13 01:34:59.529828 sshd[1655]: Connection closed by 139.178.89.65 port 39592 Aug 13 01:34:59.530567 sshd-session[1653]: pam_unix(sshd:session): session closed for user core Aug 13 01:34:59.533606 systemd[1]: sshd@3-172.233.223.240:22-139.178.89.65:39592.service: Deactivated successfully. Aug 13 01:34:59.535901 systemd[1]: session-4.scope: Deactivated successfully. Aug 13 01:34:59.537642 systemd-logind[1468]: Session 4 logged out. Waiting for processes to exit. Aug 13 01:34:59.538794 systemd-logind[1468]: Removed session 4. Aug 13 01:34:59.598446 systemd[1]: Started sshd@4-172.233.223.240:22-139.178.89.65:39596.service - OpenSSH per-connection server daemon (139.178.89.65:39596). Aug 13 01:34:59.945987 sshd[1661]: Accepted publickey for core from 139.178.89.65 port 39596 ssh2: RSA SHA256:i/TZLewXLzdHEK5VnGeUosSnCiLBbfMeziqy3I8laao Aug 13 01:34:59.947590 sshd-session[1661]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:34:59.952300 systemd-logind[1468]: New session 5 of user core. Aug 13 01:34:59.963366 systemd[1]: Started session-5.scope - Session 5 of User core. Aug 13 01:35:00.195246 sshd[1663]: Connection closed by 139.178.89.65 port 39596 Aug 13 01:35:00.196394 sshd-session[1661]: pam_unix(sshd:session): session closed for user core Aug 13 01:35:00.202615 systemd-logind[1468]: Session 5 logged out. Waiting for processes to exit. Aug 13 01:35:00.203875 systemd[1]: sshd@4-172.233.223.240:22-139.178.89.65:39596.service: Deactivated successfully. Aug 13 01:35:00.206748 systemd[1]: session-5.scope: Deactivated successfully. Aug 13 01:35:00.207852 systemd-logind[1468]: Removed session 5. Aug 13 01:35:00.255614 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Aug 13 01:35:00.273482 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 01:35:00.275914 systemd[1]: Started sshd@5-172.233.223.240:22-139.178.89.65:39608.service - OpenSSH per-connection server daemon (139.178.89.65:39608). Aug 13 01:35:00.353324 systemd[1]: Started sshd@6-172.233.223.240:22-95.217.206.161:42254.service - OpenSSH per-connection server daemon (95.217.206.161:42254). Aug 13 01:35:00.608150 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 01:35:00.610256 sshd[1670]: Accepted publickey for core from 139.178.89.65 port 39608 ssh2: RSA SHA256:i/TZLewXLzdHEK5VnGeUosSnCiLBbfMeziqy3I8laao Aug 13 01:35:00.612039 sshd-session[1670]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:35:00.614345 (kubelet)[1682]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 01:35:00.618225 systemd-logind[1468]: New session 6 of user core. Aug 13 01:35:00.666497 systemd[1]: Started session-6.scope - Session 6 of User core. 
Aug 13 01:35:00.759809 kubelet[1682]: E0813 01:35:00.759644 1682 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 01:35:00.765194 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 01:35:00.765451 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 01:35:00.765886 systemd[1]: kubelet.service: Consumed 459ms CPU time, 111.2M memory peak. Aug 13 01:35:00.875473 sshd[1687]: Connection closed by 139.178.89.65 port 39608 Aug 13 01:35:00.876813 sshd-session[1670]: pam_unix(sshd:session): session closed for user core Aug 13 01:35:00.880796 systemd[1]: sshd@5-172.233.223.240:22-139.178.89.65:39608.service: Deactivated successfully. Aug 13 01:35:00.883288 systemd[1]: session-6.scope: Deactivated successfully. Aug 13 01:35:00.885131 systemd-logind[1468]: Session 6 logged out. Waiting for processes to exit. Aug 13 01:35:00.887026 systemd-logind[1468]: Removed session 6. Aug 13 01:35:00.949443 systemd[1]: Started sshd@7-172.233.223.240:22-139.178.89.65:39624.service - OpenSSH per-connection server daemon (139.178.89.65:39624). Aug 13 01:35:01.291775 sshd[1696]: Accepted publickey for core from 139.178.89.65 port 39624 ssh2: RSA SHA256:i/TZLewXLzdHEK5VnGeUosSnCiLBbfMeziqy3I8laao Aug 13 01:35:01.293612 sshd-session[1696]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:35:01.299805 systemd-logind[1468]: New session 7 of user core. Aug 13 01:35:01.308402 systemd[1]: Started session-7.scope - Session 7 of User core. Aug 13 01:35:01.502251 sudo[1699]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Aug 13 01:35:01.502685 sudo[1699]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 01:35:01.526995 sudo[1699]: pam_unix(sudo:session): session closed for user root Aug 13 01:35:01.579141 sshd[1698]: Connection closed by 139.178.89.65 port 39624 Aug 13 01:35:01.580002 sshd-session[1696]: pam_unix(sshd:session): session closed for user core Aug 13 01:35:01.584422 systemd-logind[1468]: Session 7 logged out. Waiting for processes to exit. Aug 13 01:35:01.585625 systemd[1]: sshd@7-172.233.223.240:22-139.178.89.65:39624.service: Deactivated successfully. Aug 13 01:35:01.587998 systemd[1]: session-7.scope: Deactivated successfully. Aug 13 01:35:01.589044 systemd-logind[1468]: Removed session 7. Aug 13 01:35:01.653706 systemd[1]: Started sshd@8-172.233.223.240:22-139.178.89.65:39628.service - OpenSSH per-connection server daemon (139.178.89.65:39628). Aug 13 01:35:01.987428 sshd[1705]: Accepted publickey for core from 139.178.89.65 port 39628 ssh2: RSA SHA256:i/TZLewXLzdHEK5VnGeUosSnCiLBbfMeziqy3I8laao Aug 13 01:35:01.989405 sshd-session[1705]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:35:01.996013 systemd-logind[1468]: New session 8 of user core. Aug 13 01:35:02.005503 systemd[1]: Started session-8.scope - Session 8 of User core. 
Aug 13 01:35:02.187771 sudo[1709]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Aug 13 01:35:02.188169 sudo[1709]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 01:35:02.192618 sudo[1709]: pam_unix(sudo:session): session closed for user root Aug 13 01:35:02.200516 sudo[1708]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Aug 13 01:35:02.200907 sudo[1708]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 01:35:02.217810 systemd[1]: Starting audit-rules.service - Load Audit Rules... Aug 13 01:35:02.269983 augenrules[1731]: No rules Aug 13 01:35:02.271880 systemd[1]: audit-rules.service: Deactivated successfully. Aug 13 01:35:02.272250 systemd[1]: Finished audit-rules.service - Load Audit Rules. Aug 13 01:35:02.273538 sudo[1708]: pam_unix(sudo:session): session closed for user root Aug 13 01:35:02.324485 sshd[1707]: Connection closed by 139.178.89.65 port 39628 Aug 13 01:35:02.325283 sshd-session[1705]: pam_unix(sshd:session): session closed for user core Aug 13 01:35:02.328957 systemd[1]: sshd@8-172.233.223.240:22-139.178.89.65:39628.service: Deactivated successfully. Aug 13 01:35:02.331439 systemd[1]: session-8.scope: Deactivated successfully. Aug 13 01:35:02.333273 systemd-logind[1468]: Session 8 logged out. Waiting for processes to exit. Aug 13 01:35:02.334733 systemd-logind[1468]: Removed session 8. Aug 13 01:35:02.387540 systemd[1]: Started sshd@9-172.233.223.240:22-139.178.89.65:39638.service - OpenSSH per-connection server daemon (139.178.89.65:39638). Aug 13 01:35:02.714011 sshd[1740]: Accepted publickey for core from 139.178.89.65 port 39638 ssh2: RSA SHA256:i/TZLewXLzdHEK5VnGeUosSnCiLBbfMeziqy3I8laao Aug 13 01:35:02.715997 sshd-session[1740]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:35:02.721813 systemd-logind[1468]: New session 9 of user core. Aug 13 01:35:02.730535 systemd[1]: Started session-9.scope - Session 9 of User core. Aug 13 01:35:02.912528 sudo[1743]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Aug 13 01:35:02.913049 sudo[1743]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 01:35:03.667396 sshd[1675]: Connection closed by 95.217.206.161 port 42254 [preauth] Aug 13 01:35:03.668385 systemd[1]: sshd@6-172.233.223.240:22-95.217.206.161:42254.service: Deactivated successfully. Aug 13 01:35:04.372525 systemd[1]: Starting docker.service - Docker Application Container Engine... Aug 13 01:35:04.375123 (dockerd)[1762]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Aug 13 01:35:05.642922 dockerd[1762]: time="2025-08-13T01:35:05.641923650Z" level=info msg="Starting up" Aug 13 01:35:05.925646 dockerd[1762]: time="2025-08-13T01:35:05.924950690Z" level=info msg="Loading containers: start." Aug 13 01:35:06.136239 kernel: Initializing XFRM netlink socket Aug 13 01:35:06.170543 systemd-timesyncd[1401]: Network configuration changed, trying to establish connection. Aug 13 01:35:06.244596 systemd-networkd[1404]: docker0: Link UP Aug 13 01:35:06.282842 dockerd[1762]: time="2025-08-13T01:35:06.282790260Z" level=info msg="Loading containers: done." Aug 13 01:35:07.226584 systemd-resolved[1347]: Clock change detected. Flushing caches. 
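augenrules reports "No rules" above because the preceding sudo commands removed the default rule files before audit-rules.service reloaded. Audit rules live as auditctl-syntax fragments under /etc/audit/rules.d/, which augenrules merges and loads; a hypothetical example:

    # Hypothetical watch rule: log writes and attribute changes to /etc/passwd.
    cat <<'EOF' > /etc/audit/rules.d/90-identity.rules
    -w /etc/passwd -p wa -k identity
    EOF
    augenrules --load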
Aug 13 01:35:07.227309 systemd-timesyncd[1401]: Contacted time server [2600:2600::199]:123 (2.flatcar.pool.ntp.org). Aug 13 01:35:07.227418 systemd-timesyncd[1401]: Initial clock synchronization to Wed 2025-08-13 01:35:07.226302 UTC. Aug 13 01:35:07.250663 dockerd[1762]: time="2025-08-13T01:35:07.250606396Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Aug 13 01:35:07.250851 dockerd[1762]: time="2025-08-13T01:35:07.250721416Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Aug 13 01:35:07.250945 dockerd[1762]: time="2025-08-13T01:35:07.250844436Z" level=info msg="Daemon has completed initialization" Aug 13 01:35:07.283040 dockerd[1762]: time="2025-08-13T01:35:07.282951696Z" level=info msg="API listen on /run/docker.sock" Aug 13 01:35:07.283362 systemd[1]: Started docker.service - Docker Application Container Engine. Aug 13 01:35:08.559138 containerd[1492]: time="2025-08-13T01:35:08.558992356Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.7\"" Aug 13 01:35:09.486342 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1764788495.mount: Deactivated successfully. Aug 13 01:35:11.726504 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Aug 13 01:35:11.734244 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 01:35:11.806782 containerd[1492]: time="2025-08-13T01:35:11.805701366Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:35:11.809620 containerd[1492]: time="2025-08-13T01:35:11.809574576Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.7: active requests=0, bytes read=28799994" Aug 13 01:35:11.812122 containerd[1492]: time="2025-08-13T01:35:11.812051076Z" level=info msg="ImageCreate event name:\"sha256:761ae2258f1825c2079bd41bcc1da2c9bda8b5e902aa147c14896491dfca0f16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:35:11.815604 containerd[1492]: time="2025-08-13T01:35:11.815567106Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e04f6223d52f8041c46ef4545ccaf07894b1ca5851506a9142706d4206911f64\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:35:11.817546 containerd[1492]: time="2025-08-13T01:35:11.816775656Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.7\" with image id \"sha256:761ae2258f1825c2079bd41bcc1da2c9bda8b5e902aa147c14896491dfca0f16\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e04f6223d52f8041c46ef4545ccaf07894b1ca5851506a9142706d4206911f64\", size \"28796794\" in 3.25765253s" Aug 13 01:35:11.817729 containerd[1492]: time="2025-08-13T01:35:11.817699996Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.7\" returns image reference \"sha256:761ae2258f1825c2079bd41bcc1da2c9bda8b5e902aa147c14896491dfca0f16\"" Aug 13 01:35:11.819472 containerd[1492]: time="2025-08-13T01:35:11.819158436Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.7\"" Aug 13 01:35:11.970067 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
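dockerd's warning above about not using native diff for overlay2 is informational: with CONFIG_OVERLAY_FS_REDIRECT_DIR enabled in the kernel, the daemon falls back to the slower naive differ for image builds, while the overlay2 storage driver itself keeps working. One way to confirm which driver the daemon settled on:

    # Print the storage driver of the running daemon.
    docker info --format '{{.Driver}}'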
Aug 13 01:35:11.974508 (kubelet)[2011]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 01:35:12.104108 kubelet[2011]: E0813 01:35:12.103793 2011 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 01:35:12.120978 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 01:35:12.121198 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 01:35:12.121632 systemd[1]: kubelet.service: Consumed 343ms CPU time, 110.8M memory peak. Aug 13 01:35:12.193207 systemd[1]: Started sshd@10-172.233.223.240:22-23.236.220.75:50030.service - OpenSSH per-connection server daemon (23.236.220.75:50030). Aug 13 01:35:14.422169 containerd[1492]: time="2025-08-13T01:35:14.422024456Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:35:14.423134 containerd[1492]: time="2025-08-13T01:35:14.423093716Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.7: active requests=0, bytes read=24783636" Aug 13 01:35:14.424923 containerd[1492]: time="2025-08-13T01:35:14.423910616Z" level=info msg="ImageCreate event name:\"sha256:87f922d0bde0db7ffcb2174ba37bdab8fdd169a41e1882fe5aa308bb57e44fda\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:35:14.426810 containerd[1492]: time="2025-08-13T01:35:14.426783746Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6c7f288ab0181e496606a43dbade954819af2b1e1c0552becf6903436e16ea75\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:35:14.427810 containerd[1492]: time="2025-08-13T01:35:14.427785896Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.7\" with image id \"sha256:87f922d0bde0db7ffcb2174ba37bdab8fdd169a41e1882fe5aa308bb57e44fda\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6c7f288ab0181e496606a43dbade954819af2b1e1c0552becf6903436e16ea75\", size \"26385470\" in 2.60860008s" Aug 13 01:35:14.427931 containerd[1492]: time="2025-08-13T01:35:14.427876556Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.7\" returns image reference \"sha256:87f922d0bde0db7ffcb2174ba37bdab8fdd169a41e1882fe5aa308bb57e44fda\"" Aug 13 01:35:14.428532 containerd[1492]: time="2025-08-13T01:35:14.428510516Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.7\"" Aug 13 01:35:15.413622 sshd[2019]: Unable to negotiate with 23.236.220.75 port 50030: no matching MAC found. Their offer: hmac-sha1-96,hmac-sha1 [preauth] Aug 13 01:35:15.416434 systemd[1]: sshd@10-172.233.223.240:22-23.236.220.75:50030.service: Deactivated successfully. 
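The preauth failure above is sshd rejecting a scanner that offered only the deprecated hmac-sha1/hmac-sha1-96 MACs, which modern OpenSSH defaults exclude, so dropping the connection is the desired outcome. To inspect what each side supports, using standard OpenSSH tooling:

    # MACs the running server will negotiate, per its effective config.
    sshd -T | grep -i '^macs'
    # MAC algorithms this OpenSSH build knows about.
    ssh -Q mac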
Aug 13 01:35:16.421658 containerd[1492]: time="2025-08-13T01:35:16.421540656Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:35:16.422575 containerd[1492]: time="2025-08-13T01:35:16.422544796Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.7: active requests=0, bytes read=19176921" Aug 13 01:35:16.423543 containerd[1492]: time="2025-08-13T01:35:16.423508686Z" level=info msg="ImageCreate event name:\"sha256:36cc9c80994ebf29b8e1a366d7e736b273a6c6a60bacb5446944cc0953416245\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:35:16.425782 containerd[1492]: time="2025-08-13T01:35:16.425740396Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:1c35a970b4450b4285531495be82cda1f6549952f70d6e3de8db57c20a3da4ce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:35:16.427566 containerd[1492]: time="2025-08-13T01:35:16.426675516Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.7\" with image id \"sha256:36cc9c80994ebf29b8e1a366d7e736b273a6c6a60bacb5446944cc0953416245\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:1c35a970b4450b4285531495be82cda1f6549952f70d6e3de8db57c20a3da4ce\", size \"20778773\" in 1.99806636s" Aug 13 01:35:16.427566 containerd[1492]: time="2025-08-13T01:35:16.426705166Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.7\" returns image reference \"sha256:36cc9c80994ebf29b8e1a366d7e736b273a6c6a60bacb5446944cc0953416245\"" Aug 13 01:35:16.427817 containerd[1492]: time="2025-08-13T01:35:16.427775316Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.7\"" Aug 13 01:35:16.877136 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Aug 13 01:35:18.219031 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount433990603.mount: Deactivated successfully. Aug 13 01:35:19.233856 systemd[1]: Started sshd@11-172.233.223.240:22-5.234.5.68:58682.service - OpenSSH per-connection server daemon (5.234.5.68:58682). 
Aug 13 01:35:19.343004 containerd[1492]: time="2025-08-13T01:35:19.342940336Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:35:19.344269 containerd[1492]: time="2025-08-13T01:35:19.344089606Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.7: active requests=0, bytes read=30895380" Aug 13 01:35:19.345921 containerd[1492]: time="2025-08-13T01:35:19.344866066Z" level=info msg="ImageCreate event name:\"sha256:d5bc66d8682fdab0735e869a3f77730df378af7fd2505c1f4d6374ad3dbd181c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:35:19.347093 containerd[1492]: time="2025-08-13T01:35:19.347046246Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8d589a18b5424f77a784ef2f00feffac0ef210414100822f1c120f0d7221def3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:35:19.347882 containerd[1492]: time="2025-08-13T01:35:19.347830486Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.7\" with image id \"sha256:d5bc66d8682fdab0735e869a3f77730df378af7fd2505c1f4d6374ad3dbd181c\", repo tag \"registry.k8s.io/kube-proxy:v1.32.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:8d589a18b5424f77a784ef2f00feffac0ef210414100822f1c120f0d7221def3\", size \"30894399\" in 2.92001648s" Aug 13 01:35:19.347882 containerd[1492]: time="2025-08-13T01:35:19.347881896Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.7\" returns image reference \"sha256:d5bc66d8682fdab0735e869a3f77730df378af7fd2505c1f4d6374ad3dbd181c\"" Aug 13 01:35:19.349639 containerd[1492]: time="2025-08-13T01:35:19.349345206Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Aug 13 01:35:19.981959 sshd[2043]: Connection closed by 5.234.5.68 port 58682 [preauth] Aug 13 01:35:19.984529 systemd[1]: sshd@11-172.233.223.240:22-5.234.5.68:58682.service: Deactivated successfully. Aug 13 01:35:20.061593 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1755864395.mount: Deactivated successfully. 
Aug 13 01:35:21.332770 containerd[1492]: time="2025-08-13T01:35:21.332716216Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:35:21.333884 containerd[1492]: time="2025-08-13T01:35:21.333826596Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Aug 13 01:35:21.334383 containerd[1492]: time="2025-08-13T01:35:21.334338626Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:35:21.337880 containerd[1492]: time="2025-08-13T01:35:21.337258236Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:35:21.338520 containerd[1492]: time="2025-08-13T01:35:21.338485596Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.98910808s" Aug 13 01:35:21.338574 containerd[1492]: time="2025-08-13T01:35:21.338523196Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Aug 13 01:35:21.339017 containerd[1492]: time="2025-08-13T01:35:21.338969586Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Aug 13 01:35:22.024294 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount246009570.mount: Deactivated successfully. 
Aug 13 01:35:22.028436 containerd[1492]: time="2025-08-13T01:35:22.028379626Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:35:22.029518 containerd[1492]: time="2025-08-13T01:35:22.029451066Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Aug 13 01:35:22.030213 containerd[1492]: time="2025-08-13T01:35:22.030158336Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:35:22.035661 containerd[1492]: time="2025-08-13T01:35:22.035564846Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:35:22.037301 containerd[1492]: time="2025-08-13T01:35:22.037044936Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 698.047ms" Aug 13 01:35:22.037301 containerd[1492]: time="2025-08-13T01:35:22.037089746Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Aug 13 01:35:22.040446 containerd[1492]: time="2025-08-13T01:35:22.040242516Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Aug 13 01:35:22.217733 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Aug 13 01:35:22.246512 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 01:35:22.452149 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 01:35:22.464407 (kubelet)[2111]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 01:35:22.662200 kubelet[2111]: E0813 01:35:22.662090 2111 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 01:35:22.666833 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 01:35:22.667108 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 01:35:22.668074 systemd[1]: kubelet.service: Consumed 383ms CPU time, 112.6M memory peak. Aug 13 01:35:22.858717 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3389998316.mount: Deactivated successfully. 
Aug 13 01:35:25.414864 containerd[1492]: time="2025-08-13T01:35:25.414764026Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 01:35:25.416824 containerd[1492]: time="2025-08-13T01:35:25.416738096Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551360"
Aug 13 01:35:25.417949 containerd[1492]: time="2025-08-13T01:35:25.417118466Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 01:35:25.420917 containerd[1492]: time="2025-08-13T01:35:25.420554796Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 01:35:25.422413 containerd[1492]: time="2025-08-13T01:35:25.422171296Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 3.38188535s"
Aug 13 01:35:25.422413 containerd[1492]: time="2025-08-13T01:35:25.422225916Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\""
Aug 13 01:35:27.997622 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 13 01:35:27.997837 systemd[1]: kubelet.service: Consumed 383ms CPU time, 112.6M memory peak.
Aug 13 01:35:28.005321 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 13 01:35:28.037559 systemd[1]: Reload requested from client PID 2199 ('systemctl') (unit session-9.scope)...
Aug 13 01:35:28.037804 systemd[1]: Reloading...
Aug 13 01:35:28.234934 zram_generator::config[2240]: No configuration found.
Aug 13 01:35:28.410459 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 13 01:35:28.513759 systemd[1]: Reloading finished in 475 ms.
Aug 13 01:35:28.574060 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 13 01:35:28.575372 (kubelet)[2288]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Aug 13 01:35:28.579214 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 13 01:35:28.580458 systemd[1]: kubelet.service: Deactivated successfully.
Aug 13 01:35:28.580747 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 13 01:35:28.580792 systemd[1]: kubelet.service: Consumed 331ms CPU time, 97.3M memory peak.
Aug 13 01:35:28.587331 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 13 01:35:28.708005 systemd[1]: Started sshd@12-172.233.223.240:22-5.102.103.183:62810.service - OpenSSH per-connection server daemon (5.102.103.183:62810).
Aug 13 01:35:28.799484 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 13 01:35:28.814326 (kubelet)[2303]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Aug 13 01:35:29.020848 kubelet[2303]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 13 01:35:29.020848 kubelet[2303]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Aug 13 01:35:29.020848 kubelet[2303]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 13 01:35:29.020848 kubelet[2303]: I0813 01:35:29.018430 2303 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Aug 13 01:35:29.247279 kubelet[2303]: I0813 01:35:29.246041 2303 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Aug 13 01:35:29.247279 kubelet[2303]: I0813 01:35:29.246078 2303 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Aug 13 01:35:29.247279 kubelet[2303]: I0813 01:35:29.246353 2303 server.go:954] "Client rotation is on, will bootstrap in background"
Aug 13 01:35:29.284202 kubelet[2303]: E0813 01:35:29.284086 2303 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.233.223.240:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.233.223.240:6443: connect: connection refused" logger="UnhandledError"
Aug 13 01:35:29.284949 kubelet[2303]: I0813 01:35:29.284632 2303 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Aug 13 01:35:29.293523 kubelet[2303]: E0813 01:35:29.293490 2303 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Aug 13 01:35:29.293680 kubelet[2303]: I0813 01:35:29.293665 2303 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Aug 13 01:35:29.299481 kubelet[2303]: I0813 01:35:29.299425 2303 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Aug 13 01:35:29.301769 kubelet[2303]: I0813 01:35:29.301001 2303 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Aug 13 01:35:29.301769 kubelet[2303]: I0813 01:35:29.301033 2303 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-233-223-240","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Aug 13 01:35:29.301769 kubelet[2303]: I0813 01:35:29.301391 2303 topology_manager.go:138] "Creating topology manager with none policy"
Aug 13 01:35:29.301769 kubelet[2303]: I0813 01:35:29.301409 2303 container_manager_linux.go:304] "Creating device plugin manager"
Aug 13 01:35:29.302571 kubelet[2303]: I0813 01:35:29.301656 2303 state_mem.go:36] "Initialized new in-memory state store"
Aug 13 01:35:29.308285 kubelet[2303]: I0813 01:35:29.308258 2303 kubelet.go:446] "Attempting to sync node with API server"
Aug 13 01:35:29.308368 kubelet[2303]: I0813 01:35:29.308303 2303 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Aug 13 01:35:29.308368 kubelet[2303]: I0813 01:35:29.308345 2303 kubelet.go:352] "Adding apiserver pod source"
Aug 13 01:35:29.308368 kubelet[2303]: I0813 01:35:29.308368 2303 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Aug 13 01:35:29.314910 kubelet[2303]: I0813 01:35:29.313989 2303 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Aug 13 01:35:29.314910 kubelet[2303]: I0813 01:35:29.314385 2303 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Aug 13 01:35:29.314910 kubelet[2303]: W0813 01:35:29.314521 2303 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Aug 13 01:35:29.317274 kubelet[2303]: W0813 01:35:29.316569 2303 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.233.223.240:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-233-223-240&limit=500&resourceVersion=0": dial tcp 172.233.223.240:6443: connect: connection refused
Aug 13 01:35:29.317274 kubelet[2303]: E0813 01:35:29.316644 2303 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.233.223.240:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-233-223-240&limit=500&resourceVersion=0\": dial tcp 172.233.223.240:6443: connect: connection refused" logger="UnhandledError"
Aug 13 01:35:29.317274 kubelet[2303]: W0813 01:35:29.316730 2303 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.233.223.240:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.233.223.240:6443: connect: connection refused
Aug 13 01:35:29.317274 kubelet[2303]: E0813 01:35:29.316758 2303 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.233.223.240:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.233.223.240:6443: connect: connection refused" logger="UnhandledError"
Aug 13 01:35:29.317521 kubelet[2303]: I0813 01:35:29.317503 2303 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Aug 13 01:35:29.317613 kubelet[2303]: I0813 01:35:29.317601 2303 server.go:1287] "Started kubelet"
Aug 13 01:35:29.335556 kubelet[2303]: I0813 01:35:29.335529 2303 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Aug 13 01:35:29.376047 kubelet[2303]: I0813 01:35:29.336110 2303 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Aug 13 01:35:29.376615 kubelet[2303]: I0813 01:35:29.376594 2303 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Aug 13 01:35:29.376755 kubelet[2303]: I0813 01:35:29.341722 2303 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Aug 13 01:35:29.379763 kubelet[2303]: E0813 01:35:29.377976 2303 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.233.223.240:6443/api/v1/namespaces/default/events\": dial tcp 172.233.223.240:6443: connect: connection refused" event="&Event{ObjectMeta:{172-233-223-240.185b2fb3db675b34 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172-233-223-240,UID:172-233-223-240,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172-233-223-240,},FirstTimestamp:2025-08-13 01:35:29.317575476 +0000 UTC m=+0.498871051,LastTimestamp:2025-08-13 01:35:29.317575476 +0000 UTC m=+0.498871051,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172-233-223-240,}"
Aug 13 01:35:29.383411 kubelet[2303]: I0813 01:35:29.335969 2303 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Aug 13 01:35:29.383570 kubelet[2303]: E0813 01:35:29.383550 2303 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Aug 13 01:35:29.384423 kubelet[2303]: E0813 01:35:29.384381 2303 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-233-223-240\" not found"
Aug 13 01:35:29.384558 kubelet[2303]: I0813 01:35:29.384544 2303 volume_manager.go:297] "Starting Kubelet Volume Manager"
Aug 13 01:35:29.384707 kubelet[2303]: I0813 01:35:29.384679 2303 server.go:479] "Adding debug handlers to kubelet server"
Aug 13 01:35:29.384982 kubelet[2303]: I0813 01:35:29.384955 2303 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Aug 13 01:35:29.385152 kubelet[2303]: I0813 01:35:29.385138 2303 reconciler.go:26] "Reconciler: start to sync state"
Aug 13 01:35:29.385799 kubelet[2303]: W0813 01:35:29.385734 2303 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.233.223.240:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.233.223.240:6443: connect: connection refused
Aug 13 01:35:29.386262 kubelet[2303]: E0813 01:35:29.386173 2303 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.233.223.240:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.233.223.240:6443: connect: connection refused" logger="UnhandledError"
Aug 13 01:35:29.386443 kubelet[2303]: E0813 01:35:29.386419 2303 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.233.223.240:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-233-223-240?timeout=10s\": dial tcp 172.233.223.240:6443: connect: connection refused" interval="200ms"
Aug 13 01:35:29.386717 kubelet[2303]: I0813 01:35:29.386696 2303 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Aug 13 01:35:29.388255 kubelet[2303]: I0813 01:35:29.388236 2303 factory.go:221] Registration of the containerd container factory successfully
Aug 13 01:35:29.388354 kubelet[2303]: I0813 01:35:29.388342 2303 factory.go:221] Registration of the systemd container factory successfully
Aug 13 01:35:29.400125 kubelet[2303]: I0813 01:35:29.400067 2303 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Aug 13 01:35:29.402918 kubelet[2303]: I0813 01:35:29.401660 2303 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Aug 13 01:35:29.402918 kubelet[2303]: I0813 01:35:29.401717 2303 status_manager.go:227] "Starting to sync pod status with apiserver"
Aug 13 01:35:29.402918 kubelet[2303]: I0813 01:35:29.401754 2303 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Aug 13 01:35:29.402918 kubelet[2303]: I0813 01:35:29.401766 2303 kubelet.go:2382] "Starting kubelet main sync loop"
Aug 13 01:35:29.402918 kubelet[2303]: E0813 01:35:29.401830 2303 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Aug 13 01:35:29.424921 kubelet[2303]: W0813 01:35:29.424847 2303 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.233.223.240:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.233.223.240:6443: connect: connection refused
Aug 13 01:35:29.425122 kubelet[2303]: E0813 01:35:29.425101 2303 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.233.223.240:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.233.223.240:6443: connect: connection refused" logger="UnhandledError"
Aug 13 01:35:29.430302 kubelet[2303]: I0813 01:35:29.430284 2303 cpu_manager.go:221] "Starting CPU manager" policy="none"
Aug 13 01:35:29.430473 kubelet[2303]: I0813 01:35:29.430460 2303 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Aug 13 01:35:29.430547 kubelet[2303]: I0813 01:35:29.430535 2303 state_mem.go:36] "Initialized new in-memory state store"
Aug 13 01:35:29.432639 kubelet[2303]: I0813 01:35:29.432621 2303 policy_none.go:49] "None policy: Start"
Aug 13 01:35:29.432745 kubelet[2303]: I0813 01:35:29.432733 2303 memory_manager.go:186] "Starting memorymanager" policy="None"
Aug 13 01:35:29.433008 kubelet[2303]: I0813 01:35:29.432997 2303 state_mem.go:35] "Initializing new in-memory state store"
Aug 13 01:35:29.439590 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Aug 13 01:35:29.456576 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Aug 13 01:35:29.461285 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Aug 13 01:35:29.476403 kubelet[2303]: W0813 01:35:29.476352 2303 helpers.go:245] readString: Failed to read "/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/cpuset.cpus.effective": read /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/cpuset.cpus.effective: no such device
Aug 13 01:35:29.477919 kubelet[2303]: I0813 01:35:29.477875 2303 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Aug 13 01:35:29.478171 kubelet[2303]: I0813 01:35:29.478147 2303 eviction_manager.go:189] "Eviction manager: starting control loop"
Aug 13 01:35:29.478221 kubelet[2303]: I0813 01:35:29.478176 2303 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Aug 13 01:35:29.479230 kubelet[2303]: I0813 01:35:29.478630 2303 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Aug 13 01:35:29.481110 kubelet[2303]: E0813 01:35:29.480985 2303 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Aug 13 01:35:29.481110 kubelet[2303]: E0813 01:35:29.481089 2303 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172-233-223-240\" not found"
Aug 13 01:35:29.516643 systemd[1]: Created slice kubepods-burstable-podaa7326c4f0410458e95a5ebeec8e64c1.slice - libcontainer container kubepods-burstable-podaa7326c4f0410458e95a5ebeec8e64c1.slice.
Aug 13 01:35:29.535477 kubelet[2303]: E0813 01:35:29.535295 2303 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-233-223-240\" not found" node="172-233-223-240"
Aug 13 01:35:29.538202 systemd[1]: Created slice kubepods-burstable-podd8ad3cde934ff9fd3b892bd7afa65e57.slice - libcontainer container kubepods-burstable-podd8ad3cde934ff9fd3b892bd7afa65e57.slice.
Aug 13 01:35:29.548217 kubelet[2303]: E0813 01:35:29.547776 2303 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-233-223-240\" not found" node="172-233-223-240"
Aug 13 01:35:29.552451 systemd[1]: Created slice kubepods-burstable-pod9d8d1e6eb79e6ce2a4a89912e076b9c4.slice - libcontainer container kubepods-burstable-pod9d8d1e6eb79e6ce2a4a89912e076b9c4.slice.
Aug 13 01:35:29.567840 kubelet[2303]: E0813 01:35:29.567799 2303 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-233-223-240\" not found" node="172-233-223-240"
Aug 13 01:35:29.580467 kubelet[2303]: I0813 01:35:29.580446 2303 kubelet_node_status.go:75] "Attempting to register node" node="172-233-223-240"
Aug 13 01:35:29.581078 kubelet[2303]: E0813 01:35:29.581022 2303 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.233.223.240:6443/api/v1/nodes\": dial tcp 172.233.223.240:6443: connect: connection refused" node="172-233-223-240"
Aug 13 01:35:29.586497 kubelet[2303]: I0813 01:35:29.586463 2303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/aa7326c4f0410458e95a5ebeec8e64c1-usr-share-ca-certificates\") pod \"kube-apiserver-172-233-223-240\" (UID: \"aa7326c4f0410458e95a5ebeec8e64c1\") " pod="kube-system/kube-apiserver-172-233-223-240"
Aug 13 01:35:29.586497 kubelet[2303]: I0813 01:35:29.586500 2303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d8ad3cde934ff9fd3b892bd7afa65e57-ca-certs\") pod \"kube-controller-manager-172-233-223-240\" (UID: \"d8ad3cde934ff9fd3b892bd7afa65e57\") " pod="kube-system/kube-controller-manager-172-233-223-240"
Aug 13 01:35:29.586626 kubelet[2303]: I0813 01:35:29.586521 2303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d8ad3cde934ff9fd3b892bd7afa65e57-flexvolume-dir\") pod \"kube-controller-manager-172-233-223-240\" (UID: \"d8ad3cde934ff9fd3b892bd7afa65e57\") " pod="kube-system/kube-controller-manager-172-233-223-240"
Aug 13 01:35:29.586626 kubelet[2303]: I0813 01:35:29.586545 2303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d8ad3cde934ff9fd3b892bd7afa65e57-usr-share-ca-certificates\") pod \"kube-controller-manager-172-233-223-240\" (UID: \"d8ad3cde934ff9fd3b892bd7afa65e57\") " pod="kube-system/kube-controller-manager-172-233-223-240"
Aug 13 01:35:29.586626 kubelet[2303]: I0813 01:35:29.586565 2303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9d8d1e6eb79e6ce2a4a89912e076b9c4-kubeconfig\") pod \"kube-scheduler-172-233-223-240\" (UID: \"9d8d1e6eb79e6ce2a4a89912e076b9c4\") " pod="kube-system/kube-scheduler-172-233-223-240"
Aug 13 01:35:29.586626 kubelet[2303]: I0813 01:35:29.586582 2303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/aa7326c4f0410458e95a5ebeec8e64c1-ca-certs\") pod \"kube-apiserver-172-233-223-240\" (UID: \"aa7326c4f0410458e95a5ebeec8e64c1\") " pod="kube-system/kube-apiserver-172-233-223-240"
Aug 13 01:35:29.586626 kubelet[2303]: I0813 01:35:29.586597 2303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/aa7326c4f0410458e95a5ebeec8e64c1-k8s-certs\") pod \"kube-apiserver-172-233-223-240\" (UID: \"aa7326c4f0410458e95a5ebeec8e64c1\") " pod="kube-system/kube-apiserver-172-233-223-240"
Aug 13 01:35:29.586752 kubelet[2303]: I0813 01:35:29.586613 2303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d8ad3cde934ff9fd3b892bd7afa65e57-k8s-certs\") pod \"kube-controller-manager-172-233-223-240\" (UID: \"d8ad3cde934ff9fd3b892bd7afa65e57\") " pod="kube-system/kube-controller-manager-172-233-223-240"
Aug 13 01:35:29.586752 kubelet[2303]: I0813 01:35:29.586631 2303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d8ad3cde934ff9fd3b892bd7afa65e57-kubeconfig\") pod \"kube-controller-manager-172-233-223-240\" (UID: \"d8ad3cde934ff9fd3b892bd7afa65e57\") " pod="kube-system/kube-controller-manager-172-233-223-240"
Aug 13 01:35:29.587712 kubelet[2303]: E0813 01:35:29.587672 2303 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.233.223.240:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-233-223-240?timeout=10s\": dial tcp 172.233.223.240:6443: connect: connection refused" interval="400ms"
Aug 13 01:35:29.598227 sshd[2296]: Connection closed by 5.102.103.183 port 62810 [preauth]
Aug 13 01:35:29.600867 systemd[1]: sshd@12-172.233.223.240:22-5.102.103.183:62810.service: Deactivated successfully.
Aug 13 01:35:29.783216 kubelet[2303]: I0813 01:35:29.783185 2303 kubelet_node_status.go:75] "Attempting to register node" node="172-233-223-240"
Aug 13 01:35:29.783534 kubelet[2303]: E0813 01:35:29.783502 2303 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.233.223.240:6443/api/v1/nodes\": dial tcp 172.233.223.240:6443: connect: connection refused" node="172-233-223-240"
Aug 13 01:35:29.836618 kubelet[2303]: E0813 01:35:29.836461 2303 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Aug 13 01:35:29.837677 containerd[1492]: time="2025-08-13T01:35:29.837621806Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-233-223-240,Uid:aa7326c4f0410458e95a5ebeec8e64c1,Namespace:kube-system,Attempt:0,}"
Aug 13 01:35:29.849440 kubelet[2303]: E0813 01:35:29.849408 2303 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Aug 13 01:35:29.850040 containerd[1492]: time="2025-08-13T01:35:29.849883936Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-233-223-240,Uid:d8ad3cde934ff9fd3b892bd7afa65e57,Namespace:kube-system,Attempt:0,}"
Aug 13 01:35:29.869323 kubelet[2303]: E0813 01:35:29.869249 2303 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Aug 13 01:35:29.870032 containerd[1492]: time="2025-08-13T01:35:29.869804236Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-233-223-240,Uid:9d8d1e6eb79e6ce2a4a89912e076b9c4,Namespace:kube-system,Attempt:0,}"
Aug 13 01:35:29.988229 kubelet[2303]: E0813 01:35:29.988177 2303 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.233.223.240:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-233-223-240?timeout=10s\": dial tcp 172.233.223.240:6443: connect: connection refused" interval="800ms"
Aug 13 01:35:30.186316 kubelet[2303]: I0813 01:35:30.186141 2303 kubelet_node_status.go:75] "Attempting to register node" node="172-233-223-240"
Aug 13 01:35:30.186944 kubelet[2303]: E0813 01:35:30.186790 2303 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.233.223.240:6443/api/v1/nodes\": dial tcp 172.233.223.240:6443: connect: connection refused" node="172-233-223-240"
Aug 13 01:35:30.283773 kubelet[2303]: W0813 01:35:30.283695 2303 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.233.223.240:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.233.223.240:6443: connect: connection refused
Aug 13 01:35:30.283967 kubelet[2303]: E0813 01:35:30.283812 2303 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.233.223.240:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.233.223.240:6443: connect: connection refused" logger="UnhandledError"
Aug 13 01:35:30.523399 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2301343305.mount: Deactivated successfully.
Aug 13 01:35:30.527782 containerd[1492]: time="2025-08-13T01:35:30.527743496Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Aug 13 01:35:30.529172 containerd[1492]: time="2025-08-13T01:35:30.529133026Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Aug 13 01:35:30.530545 containerd[1492]: time="2025-08-13T01:35:30.530493546Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Aug 13 01:35:30.531148 containerd[1492]: time="2025-08-13T01:35:30.531078286Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Aug 13 01:35:30.532493 containerd[1492]: time="2025-08-13T01:35:30.532463646Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Aug 13 01:35:30.533677 containerd[1492]: time="2025-08-13T01:35:30.533609016Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Aug 13 01:35:30.535464 containerd[1492]: time="2025-08-13T01:35:30.535433146Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Aug 13 01:35:30.538358 containerd[1492]: time="2025-08-13T01:35:30.537580376Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 687.58966ms"
Aug 13 01:35:30.538582 containerd[1492]: time="2025-08-13T01:35:30.538541996Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Aug 13 01:35:30.540420 containerd[1492]: time="2025-08-13T01:35:30.540375086Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 702.54074ms"
Aug 13 01:35:30.543196 containerd[1492]: time="2025-08-13T01:35:30.543161216Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 673.23554ms"
Aug 13 01:35:30.652789 update_engine[1469]: I20250813 01:35:30.648104 1469 update_attempter.cc:509] Updating boot flags...
Aug 13 01:35:30.718806 kubelet[2303]: W0813 01:35:30.718733 2303 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.233.223.240:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-233-223-240&limit=500&resourceVersion=0": dial tcp 172.233.223.240:6443: connect: connection refused
Aug 13 01:35:30.718806 kubelet[2303]: E0813 01:35:30.718810 2303 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.233.223.240:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-233-223-240&limit=500&resourceVersion=0\": dial tcp 172.233.223.240:6443: connect: connection refused" logger="UnhandledError"
Aug 13 01:35:30.719326 kubelet[2303]: W0813 01:35:30.719262 2303 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.233.223.240:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.233.223.240:6443: connect: connection refused
Aug 13 01:35:30.719588 kubelet[2303]: E0813 01:35:30.719331 2303 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.233.223.240:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.233.223.240:6443: connect: connection refused" logger="UnhandledError"
Aug 13 01:35:30.809405 kubelet[2303]: E0813 01:35:30.796295 2303 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.233.223.240:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-233-223-240?timeout=10s\": dial tcp 172.233.223.240:6443: connect: connection refused" interval="1.6s"
Aug 13 01:35:30.844029 kubelet[2303]: W0813 01:35:30.843979 2303 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.233.223.240:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.233.223.240:6443: connect: connection refused
Aug 13 01:35:30.844155 kubelet[2303]: E0813 01:35:30.844058 2303 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.233.223.240:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.233.223.240:6443: connect: connection refused" logger="UnhandledError"
Aug 13 01:35:30.850925 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 45 scanned by (udev-worker) (2361)
Aug 13 01:35:30.976003 containerd[1492]: time="2025-08-13T01:35:30.975913766Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 13 01:35:30.976003 containerd[1492]: time="2025-08-13T01:35:30.975966506Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 13 01:35:30.976003 containerd[1492]: time="2025-08-13T01:35:30.975980496Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 01:35:30.993292 containerd[1492]: time="2025-08-13T01:35:30.976504136Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 01:35:30.993395 kubelet[2303]: I0813 01:35:30.990435 2303 kubelet_node_status.go:75] "Attempting to register node" node="172-233-223-240"
Aug 13 01:35:30.993395 kubelet[2303]: E0813 01:35:30.990760 2303 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.233.223.240:6443/api/v1/nodes\": dial tcp 172.233.223.240:6443: connect: connection refused" node="172-233-223-240"
Aug 13 01:35:31.008478 containerd[1492]: time="2025-08-13T01:35:30.998254526Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 13 01:35:31.008478 containerd[1492]: time="2025-08-13T01:35:30.998304896Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 13 01:35:31.008478 containerd[1492]: time="2025-08-13T01:35:30.998315936Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 01:35:31.008478 containerd[1492]: time="2025-08-13T01:35:30.998392716Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 01:35:31.058274 containerd[1492]: time="2025-08-13T01:35:31.049506376Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 13 01:35:31.058274 containerd[1492]: time="2025-08-13T01:35:31.049580986Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 13 01:35:31.058274 containerd[1492]: time="2025-08-13T01:35:31.049595766Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 01:35:31.058274 containerd[1492]: time="2025-08-13T01:35:31.049702126Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 01:35:31.086139 systemd[1]: Started cri-containerd-4687fd2e989ef5bffea704aeadf8cae6c5b361520f3b0231a947bd8e3291a7d5.scope - libcontainer container 4687fd2e989ef5bffea704aeadf8cae6c5b361520f3b0231a947bd8e3291a7d5.
Aug 13 01:35:31.191073 systemd[1]: Started cri-containerd-df4f78d5d9f4ce3655263907ab4cb9cdc9e5dcc16cbd850778b0ac751774cebc.scope - libcontainer container df4f78d5d9f4ce3655263907ab4cb9cdc9e5dcc16cbd850778b0ac751774cebc.
Aug 13 01:35:31.213035 systemd[1]: Started cri-containerd-3b2f18332974223df0c79944aba859f69d35111a146a17d18f77424ede28dc2f.scope - libcontainer container 3b2f18332974223df0c79944aba859f69d35111a146a17d18f77424ede28dc2f.
Aug 13 01:35:31.296277 containerd[1492]: time="2025-08-13T01:35:31.296197836Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-233-223-240,Uid:9d8d1e6eb79e6ce2a4a89912e076b9c4,Namespace:kube-system,Attempt:0,} returns sandbox id \"4687fd2e989ef5bffea704aeadf8cae6c5b361520f3b0231a947bd8e3291a7d5\""
Aug 13 01:35:31.298626 kubelet[2303]: E0813 01:35:31.298582 2303 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Aug 13 01:35:31.302915 containerd[1492]: time="2025-08-13T01:35:31.301644746Z" level=info msg="CreateContainer within sandbox \"4687fd2e989ef5bffea704aeadf8cae6c5b361520f3b0231a947bd8e3291a7d5\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Aug 13 01:35:31.318701 containerd[1492]: time="2025-08-13T01:35:31.318640596Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-233-223-240,Uid:d8ad3cde934ff9fd3b892bd7afa65e57,Namespace:kube-system,Attempt:0,} returns sandbox id \"df4f78d5d9f4ce3655263907ab4cb9cdc9e5dcc16cbd850778b0ac751774cebc\""
Aug 13 01:35:31.319744 kubelet[2303]: E0813 01:35:31.319720 2303 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Aug 13 01:35:31.324518 containerd[1492]: time="2025-08-13T01:35:31.324437526Z" level=info msg="CreateContainer within sandbox \"df4f78d5d9f4ce3655263907ab4cb9cdc9e5dcc16cbd850778b0ac751774cebc\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Aug 13 01:35:31.336395 containerd[1492]: time="2025-08-13T01:35:31.336195366Z" level=info msg="CreateContainer within sandbox \"df4f78d5d9f4ce3655263907ab4cb9cdc9e5dcc16cbd850778b0ac751774cebc\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"3dfec50390546eae432b0a297b3d1e8e82d3e2fa0e20f05fb072a734348bd3c9\""
Aug 13 01:35:31.336395 containerd[1492]: time="2025-08-13T01:35:31.336331036Z" level=info msg="CreateContainer within sandbox \"4687fd2e989ef5bffea704aeadf8cae6c5b361520f3b0231a947bd8e3291a7d5\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"0d3653beb91f2238920abb068ae1b34be09c0a4193ec941f783464f6ceae7da6\""
Aug 13 01:35:31.337242 containerd[1492]: time="2025-08-13T01:35:31.336860726Z" level=info msg="StartContainer for \"3dfec50390546eae432b0a297b3d1e8e82d3e2fa0e20f05fb072a734348bd3c9\""
Aug 13 01:35:31.340291 containerd[1492]: time="2025-08-13T01:35:31.340263836Z" level=info msg="StartContainer for \"0d3653beb91f2238920abb068ae1b34be09c0a4193ec941f783464f6ceae7da6\""
Aug 13 01:35:31.340754 containerd[1492]: time="2025-08-13T01:35:31.340643356Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-233-223-240,Uid:aa7326c4f0410458e95a5ebeec8e64c1,Namespace:kube-system,Attempt:0,} returns sandbox id \"3b2f18332974223df0c79944aba859f69d35111a146a17d18f77424ede28dc2f\""
Aug 13 01:35:31.343923 kubelet[2303]: E0813 01:35:31.342134 2303 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Aug 13 01:35:31.345282 containerd[1492]: time="2025-08-13T01:35:31.345258266Z" level=info msg="CreateContainer within sandbox \"3b2f18332974223df0c79944aba859f69d35111a146a17d18f77424ede28dc2f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Aug 13 01:35:31.358474 kubelet[2303]: E0813 01:35:31.358446 2303 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.233.223.240:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.233.223.240:6443: connect: connection refused" logger="UnhandledError"
Aug 13 01:35:31.359609 containerd[1492]: time="2025-08-13T01:35:31.359579006Z" level=info msg="CreateContainer within sandbox \"3b2f18332974223df0c79944aba859f69d35111a146a17d18f77424ede28dc2f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"dc512dc9989aedc24cd3d28a396a3a039d08035d403b68eeb19a6a38e91f7bc2\""
Aug 13 01:35:31.360259 containerd[1492]: time="2025-08-13T01:35:31.360238116Z" level=info msg="StartContainer for \"dc512dc9989aedc24cd3d28a396a3a039d08035d403b68eeb19a6a38e91f7bc2\""
Aug 13 01:35:31.394091 systemd[1]: Started cri-containerd-3dfec50390546eae432b0a297b3d1e8e82d3e2fa0e20f05fb072a734348bd3c9.scope - libcontainer container 3dfec50390546eae432b0a297b3d1e8e82d3e2fa0e20f05fb072a734348bd3c9.
Aug 13 01:35:31.398102 systemd[1]: Started cri-containerd-0d3653beb91f2238920abb068ae1b34be09c0a4193ec941f783464f6ceae7da6.scope - libcontainer container 0d3653beb91f2238920abb068ae1b34be09c0a4193ec941f783464f6ceae7da6.
Aug 13 01:35:31.430232 systemd[1]: Started cri-containerd-dc512dc9989aedc24cd3d28a396a3a039d08035d403b68eeb19a6a38e91f7bc2.scope - libcontainer container dc512dc9989aedc24cd3d28a396a3a039d08035d403b68eeb19a6a38e91f7bc2.
Aug 13 01:35:31.527486 containerd[1492]: time="2025-08-13T01:35:31.527440796Z" level=info msg="StartContainer for \"dc512dc9989aedc24cd3d28a396a3a039d08035d403b68eeb19a6a38e91f7bc2\" returns successfully"
Aug 13 01:35:31.533693 containerd[1492]: time="2025-08-13T01:35:31.532584276Z" level=info msg="StartContainer for \"0d3653beb91f2238920abb068ae1b34be09c0a4193ec941f783464f6ceae7da6\" returns successfully"
Aug 13 01:35:31.538978 containerd[1492]: time="2025-08-13T01:35:31.538929126Z" level=info msg="StartContainer for \"3dfec50390546eae432b0a297b3d1e8e82d3e2fa0e20f05fb072a734348bd3c9\" returns successfully"
Aug 13 01:35:32.456349 kubelet[2303]: E0813 01:35:32.456298 2303 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-233-223-240\" not found" node="172-233-223-240"
Aug 13 01:35:32.456829 kubelet[2303]: E0813 01:35:32.456449 2303 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Aug 13 01:35:32.457841 kubelet[2303]: E0813 01:35:32.457806 2303 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-233-223-240\" not found" node="172-233-223-240"
Aug 13 01:35:32.458026 kubelet[2303]: E0813 01:35:32.457995 2303 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Aug 13 01:35:32.461972 kubelet[2303]: E0813 01:35:32.461934 2303 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-233-223-240\" not found" node="172-233-223-240"
Aug 13 01:35:32.462107 kubelet[2303]: E0813 01:35:32.462079 2303 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Aug 13 01:35:32.594601 kubelet[2303]: I0813 01:35:32.594550 2303 kubelet_node_status.go:75] "Attempting to register node" node="172-233-223-240"
Aug 13 01:35:33.464392 kubelet[2303]: E0813 01:35:33.464337 2303 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-233-223-240\" not found" node="172-233-223-240"
Aug 13 01:35:33.464862 kubelet[2303]: E0813 01:35:33.464513 2303 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Aug 13 01:35:33.466944 kubelet[2303]: E0813 01:35:33.464964 2303 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-233-223-240\" not found" node="172-233-223-240"
Aug 13 01:35:33.466944 kubelet[2303]: E0813 01:35:33.465139 2303 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Aug 13 01:35:33.466944 kubelet[2303]: E0813 01:35:33.465350 2303 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-233-223-240\" not found" node="172-233-223-240"
Aug 13 01:35:33.466944 kubelet[2303]: E0813 01:35:33.465434 2303 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Aug 13 01:35:34.069479 kubelet[2303]: E0813 01:35:34.069427 2303 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172-233-223-240\" not found" node="172-233-223-240"
Aug 13 01:35:34.087444 kubelet[2303]: I0813 01:35:34.087252 2303 kubelet_node_status.go:78] "Successfully registered node" node="172-233-223-240"
Aug 13 01:35:34.087444 kubelet[2303]: E0813 01:35:34.087285 2303 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"172-233-223-240\": node \"172-233-223-240\" not found"
Aug 13 01:35:34.186397 kubelet[2303]: I0813 01:35:34.186321 2303 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-233-223-240"
Aug 13 01:35:34.192377 kubelet[2303]: E0813 01:35:34.192321 2303 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-172-233-223-240\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-172-233-223-240"
Aug 13 01:35:34.192377 kubelet[2303]: I0813 01:35:34.192358 2303 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-233-223-240"
Aug 13 01:35:34.194176 kubelet[2303]: E0813 01:35:34.194147 2303 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-233-223-240\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-172-233-223-240"
Aug 13 01:35:34.194176 kubelet[2303]: I0813 01:35:34.194176 2303 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-233-223-240"
Aug 13 01:35:34.195874 kubelet[2303]: E0813 01:35:34.195815 2303 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-233-223-240\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-172-233-223-240"
Aug 13 01:35:34.364444 kubelet[2303]: I0813 01:35:34.363955 2303 apiserver.go:52] "Watching apiserver"
Aug 13 01:35:34.385660 kubelet[2303]: I0813 01:35:34.385563 2303 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Aug 13 01:35:34.479926 kubelet[2303]: I0813 01:35:34.479498 2303 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-233-223-240"
Aug 13 01:35:34.479926 kubelet[2303]: I0813 01:35:34.479688 2303 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-233-223-240"
Aug 13 01:35:34.483932 kubelet[2303]: E0813 01:35:34.483684 2303 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-233-223-240\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-172-233-223-240"
Aug 13 01:35:34.483932 kubelet[2303]: E0813 01:35:34.483844 2303 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Aug 13 01:35:34.532252 kubelet[2303]: E0813 01:35:34.532183 2303 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-233-223-240\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-172-233-223-240"
Aug 13 01:35:34.532476 kubelet[2303]: E0813 01:35:34.532451 2303 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Aug 13 01:35:35.481129 kubelet[2303]: I0813 01:35:35.481081 2303 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-233-223-240"
Aug 13 01:35:35.487323 kubelet[2303]: E0813 01:35:35.487087 2303 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Aug 13 01:35:36.261411 systemd[1]: Reload requested from client PID 2592 ('systemctl') (unit session-9.scope)...
Aug 13 01:35:36.261458 systemd[1]: Reloading...
Aug 13 01:35:36.447993 zram_generator::config[2654]: No configuration found.
Aug 13 01:35:36.483467 kubelet[2303]: E0813 01:35:36.483434 2303 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Aug 13 01:35:36.586191 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 13 01:35:36.707563 systemd[1]: Reloading finished in 445 ms.
Aug 13 01:35:36.744499 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 13 01:35:36.768629 systemd[1]: kubelet.service: Deactivated successfully.
Aug 13 01:35:36.769514 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 13 01:35:36.769592 systemd[1]: kubelet.service: Consumed 1.214s CPU time, 132.5M memory peak.
Aug 13 01:35:36.780195 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 13 01:35:37.010924 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 01:35:37.023452 (kubelet)[2688]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 13 01:35:37.107161 kubelet[2688]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 01:35:37.107161 kubelet[2688]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Aug 13 01:35:37.107161 kubelet[2688]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 01:35:37.107744 kubelet[2688]: I0813 01:35:37.107245 2688 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 01:35:37.116955 kubelet[2688]: I0813 01:35:37.116864 2688 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Aug 13 01:35:37.117175 kubelet[2688]: I0813 01:35:37.117044 2688 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 01:35:37.117783 kubelet[2688]: I0813 01:35:37.117731 2688 server.go:954] "Client rotation is on, will bootstrap in background" Aug 13 01:35:37.119441 kubelet[2688]: I0813 01:35:37.119355 2688 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Aug 13 01:35:37.121740 kubelet[2688]: I0813 01:35:37.121717 2688 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 01:35:37.126729 kubelet[2688]: E0813 01:35:37.126688 2688 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Aug 13 01:35:37.126729 kubelet[2688]: I0813 01:35:37.126722 2688 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Aug 13 01:35:37.132556 kubelet[2688]: I0813 01:35:37.132258 2688 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 13 01:35:37.133055 kubelet[2688]: I0813 01:35:37.132999 2688 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 01:35:37.133835 kubelet[2688]: I0813 01:35:37.133172 2688 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-233-223-240","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Aug 13 01:35:37.134996 kubelet[2688]: I0813 01:35:37.134969 2688 topology_manager.go:138] "Creating topology manager with none policy" Aug 13 01:35:37.136140 kubelet[2688]: I0813 01:35:37.135148 2688 container_manager_linux.go:304] "Creating device plugin manager" Aug 13 01:35:37.136140 kubelet[2688]: I0813 01:35:37.135260 2688 state_mem.go:36] "Initialized new in-memory state store" Aug 13 01:35:37.136140 kubelet[2688]: I0813 01:35:37.135487 2688 kubelet.go:446] "Attempting to sync node with API server" Aug 13 01:35:37.136140 kubelet[2688]: I0813 01:35:37.135512 2688 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 01:35:37.136140 kubelet[2688]: I0813 01:35:37.135542 2688 kubelet.go:352] "Adding apiserver pod source" Aug 13 01:35:37.136140 kubelet[2688]: I0813 01:35:37.135556 2688 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 01:35:37.140069 kubelet[2688]: I0813 01:35:37.140033 2688 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Aug 13 01:35:37.140462 kubelet[2688]: I0813 01:35:37.140435 2688 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 13 01:35:37.141099 kubelet[2688]: I0813 01:35:37.141075 2688 watchdog_linux.go:99] "Systemd watchdog is not enabled" Aug 13 01:35:37.141178 kubelet[2688]: I0813 01:35:37.141121 2688 server.go:1287] "Started kubelet" Aug 13 01:35:37.142709 kubelet[2688]: I0813 01:35:37.142638 2688 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 01:35:37.143643 kubelet[2688]: I0813 01:35:37.143614 2688 server.go:479] 
"Adding debug handlers to kubelet server" Aug 13 01:35:37.145588 kubelet[2688]: I0813 01:35:37.145529 2688 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 01:35:37.145820 kubelet[2688]: I0813 01:35:37.145794 2688 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 01:35:37.147421 kubelet[2688]: I0813 01:35:37.147208 2688 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 01:35:37.160698 kubelet[2688]: I0813 01:35:37.160655 2688 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 01:35:37.163783 kubelet[2688]: I0813 01:35:37.163746 2688 volume_manager.go:297] "Starting Kubelet Volume Manager" Aug 13 01:35:37.164471 kubelet[2688]: E0813 01:35:37.164235 2688 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-233-223-240\" not found" Aug 13 01:35:37.167637 kubelet[2688]: I0813 01:35:37.167593 2688 factory.go:221] Registration of the systemd container factory successfully Aug 13 01:35:37.168397 kubelet[2688]: I0813 01:35:37.168341 2688 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 01:35:37.169076 kubelet[2688]: E0813 01:35:37.168656 2688 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 13 01:35:37.169553 kubelet[2688]: I0813 01:35:37.169525 2688 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Aug 13 01:35:37.169703 kubelet[2688]: I0813 01:35:37.169679 2688 reconciler.go:26] "Reconciler: start to sync state" Aug 13 01:35:37.173617 kubelet[2688]: I0813 01:35:37.173558 2688 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 13 01:35:37.177366 kubelet[2688]: I0813 01:35:37.177339 2688 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Aug 13 01:35:37.177682 kubelet[2688]: I0813 01:35:37.177549 2688 status_manager.go:227] "Starting to sync pod status with apiserver" Aug 13 01:35:37.178013 kubelet[2688]: I0813 01:35:37.177801 2688 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Aug 13 01:35:37.178152 kubelet[2688]: I0813 01:35:37.178122 2688 kubelet.go:2382] "Starting kubelet main sync loop" Aug 13 01:35:37.178683 kubelet[2688]: E0813 01:35:37.178314 2688 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 01:35:37.186350 kubelet[2688]: I0813 01:35:37.186305 2688 factory.go:221] Registration of the containerd container factory successfully Aug 13 01:35:37.243568 kubelet[2688]: I0813 01:35:37.243538 2688 cpu_manager.go:221] "Starting CPU manager" policy="none" Aug 13 01:35:37.243746 kubelet[2688]: I0813 01:35:37.243731 2688 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Aug 13 01:35:37.244728 kubelet[2688]: I0813 01:35:37.243805 2688 state_mem.go:36] "Initialized new in-memory state store" Aug 13 01:35:37.244728 kubelet[2688]: I0813 01:35:37.244023 2688 state_mem.go:88] "Updated default CPUSet" cpuSet="" Aug 13 01:35:37.244728 kubelet[2688]: I0813 01:35:37.244036 2688 state_mem.go:96] "Updated CPUSet assignments" assignments={} Aug 13 01:35:37.244728 kubelet[2688]: I0813 01:35:37.244055 2688 policy_none.go:49] "None policy: Start" Aug 13 01:35:37.244728 kubelet[2688]: I0813 01:35:37.244065 2688 memory_manager.go:186] "Starting memorymanager" policy="None" Aug 13 01:35:37.244728 kubelet[2688]: I0813 01:35:37.244076 2688 state_mem.go:35] "Initializing new in-memory state store" Aug 13 01:35:37.244728 kubelet[2688]: I0813 01:35:37.244208 2688 state_mem.go:75] "Updated machine memory state" Aug 13 01:35:37.253167 kubelet[2688]: I0813 01:35:37.252919 2688 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 13 01:35:37.253484 kubelet[2688]: I0813 01:35:37.253466 2688 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 01:35:37.253693 kubelet[2688]: I0813 01:35:37.253665 2688 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 01:35:37.266832 kubelet[2688]: I0813 01:35:37.259986 2688 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 01:35:37.268712 kubelet[2688]: E0813 01:35:37.268275 2688 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Aug 13 01:35:37.280076 kubelet[2688]: I0813 01:35:37.280035 2688 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-233-223-240" Aug 13 01:35:37.282791 kubelet[2688]: I0813 01:35:37.282772 2688 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-233-223-240" Aug 13 01:35:37.285162 kubelet[2688]: I0813 01:35:37.281523 2688 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-233-223-240" Aug 13 01:35:37.288036 sudo[2718]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Aug 13 01:35:37.289198 sudo[2718]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Aug 13 01:35:37.301173 kubelet[2688]: E0813 01:35:37.300378 2688 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-233-223-240\" already exists" pod="kube-system/kube-apiserver-172-233-223-240" Aug 13 01:35:37.358759 kubelet[2688]: I0813 01:35:37.358712 2688 kubelet_node_status.go:75] "Attempting to register node" node="172-233-223-240" Aug 13 01:35:37.370520 kubelet[2688]: I0813 01:35:37.370470 2688 kubelet_node_status.go:124] "Node was previously registered" node="172-233-223-240" Aug 13 01:35:37.370670 kubelet[2688]: I0813 01:35:37.370598 2688 kubelet_node_status.go:78] "Successfully registered node" node="172-233-223-240" Aug 13 01:35:37.372403 kubelet[2688]: I0813 01:35:37.371976 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d8ad3cde934ff9fd3b892bd7afa65e57-kubeconfig\") pod \"kube-controller-manager-172-233-223-240\" (UID: \"d8ad3cde934ff9fd3b892bd7afa65e57\") " pod="kube-system/kube-controller-manager-172-233-223-240" Aug 13 01:35:37.372403 kubelet[2688]: I0813 01:35:37.372009 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/aa7326c4f0410458e95a5ebeec8e64c1-k8s-certs\") pod \"kube-apiserver-172-233-223-240\" (UID: \"aa7326c4f0410458e95a5ebeec8e64c1\") " pod="kube-system/kube-apiserver-172-233-223-240" Aug 13 01:35:37.372403 kubelet[2688]: I0813 01:35:37.372026 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/aa7326c4f0410458e95a5ebeec8e64c1-usr-share-ca-certificates\") pod \"kube-apiserver-172-233-223-240\" (UID: \"aa7326c4f0410458e95a5ebeec8e64c1\") " pod="kube-system/kube-apiserver-172-233-223-240" Aug 13 01:35:37.372403 kubelet[2688]: I0813 01:35:37.372045 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d8ad3cde934ff9fd3b892bd7afa65e57-flexvolume-dir\") pod \"kube-controller-manager-172-233-223-240\" (UID: \"d8ad3cde934ff9fd3b892bd7afa65e57\") " pod="kube-system/kube-controller-manager-172-233-223-240" Aug 13 01:35:37.372403 kubelet[2688]: I0813 01:35:37.372059 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d8ad3cde934ff9fd3b892bd7afa65e57-k8s-certs\") pod \"kube-controller-manager-172-233-223-240\" (UID: \"d8ad3cde934ff9fd3b892bd7afa65e57\") " pod="kube-system/kube-controller-manager-172-233-223-240" Aug 13 
01:35:37.372593 kubelet[2688]: I0813 01:35:37.372075 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d8ad3cde934ff9fd3b892bd7afa65e57-usr-share-ca-certificates\") pod \"kube-controller-manager-172-233-223-240\" (UID: \"d8ad3cde934ff9fd3b892bd7afa65e57\") " pod="kube-system/kube-controller-manager-172-233-223-240" Aug 13 01:35:37.372593 kubelet[2688]: I0813 01:35:37.372090 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9d8d1e6eb79e6ce2a4a89912e076b9c4-kubeconfig\") pod \"kube-scheduler-172-233-223-240\" (UID: \"9d8d1e6eb79e6ce2a4a89912e076b9c4\") " pod="kube-system/kube-scheduler-172-233-223-240" Aug 13 01:35:37.372593 kubelet[2688]: I0813 01:35:37.372104 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/aa7326c4f0410458e95a5ebeec8e64c1-ca-certs\") pod \"kube-apiserver-172-233-223-240\" (UID: \"aa7326c4f0410458e95a5ebeec8e64c1\") " pod="kube-system/kube-apiserver-172-233-223-240" Aug 13 01:35:37.372593 kubelet[2688]: I0813 01:35:37.372155 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d8ad3cde934ff9fd3b892bd7afa65e57-ca-certs\") pod \"kube-controller-manager-172-233-223-240\" (UID: \"d8ad3cde934ff9fd3b892bd7afa65e57\") " pod="kube-system/kube-controller-manager-172-233-223-240" Aug 13 01:35:37.668029 kubelet[2688]: E0813 01:35:37.665551 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Aug 13 01:35:37.668029 kubelet[2688]: E0813 01:35:37.665633 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Aug 13 01:35:37.668029 kubelet[2688]: E0813 01:35:37.665791 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Aug 13 01:35:38.138855 kubelet[2688]: I0813 01:35:38.138797 2688 apiserver.go:52] "Watching apiserver" Aug 13 01:35:38.181296 kubelet[2688]: I0813 01:35:38.181198 2688 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Aug 13 01:35:38.213597 kubelet[2688]: E0813 01:35:38.213561 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Aug 13 01:35:38.214211 kubelet[2688]: I0813 01:35:38.214188 2688 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-233-223-240" Aug 13 01:35:38.214509 kubelet[2688]: I0813 01:35:38.214470 2688 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-233-223-240" Aug 13 01:35:38.230582 kubelet[2688]: E0813 01:35:38.227531 2688 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-233-223-240\" already exists" pod="kube-system/kube-scheduler-172-233-223-240" Aug 13 01:35:38.230582 kubelet[2688]: E0813 01:35:38.228466 2688 dns.go:153] "Nameserver 
limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Aug 13 01:35:38.231292 kubelet[2688]: E0813 01:35:38.231155 2688 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-233-223-240\" already exists" pod="kube-system/kube-apiserver-172-233-223-240" Aug 13 01:35:38.232036 kubelet[2688]: E0813 01:35:38.232011 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Aug 13 01:35:38.265195 kubelet[2688]: I0813 01:35:38.263915 2688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-172-233-223-240" podStartSLOduration=1.2635775759999999 podStartE2EDuration="1.263577576s" podCreationTimestamp="2025-08-13 01:35:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 01:35:38.254832006 +0000 UTC m=+1.208703481" watchObservedRunningTime="2025-08-13 01:35:38.263577576 +0000 UTC m=+1.217449051" Aug 13 01:35:38.274210 kubelet[2688]: I0813 01:35:38.273713 2688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-172-233-223-240" podStartSLOduration=1.273633386 podStartE2EDuration="1.273633386s" podCreationTimestamp="2025-08-13 01:35:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 01:35:38.264044686 +0000 UTC m=+1.217916161" watchObservedRunningTime="2025-08-13 01:35:38.273633386 +0000 UTC m=+1.227504861" Aug 13 01:35:38.621694 sudo[2718]: pam_unix(sudo:session): session closed for user root Aug 13 01:35:39.216100 kubelet[2688]: E0813 01:35:39.216047 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Aug 13 01:35:39.219093 kubelet[2688]: E0813 01:35:39.217681 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Aug 13 01:35:39.463440 kubelet[2688]: E0813 01:35:39.463398 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Aug 13 01:35:41.046313 kubelet[2688]: E0813 01:35:41.045543 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Aug 13 01:35:41.139279 kubelet[2688]: I0813 01:35:41.138151 2688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-172-233-223-240" podStartSLOduration=6.13811932 podStartE2EDuration="6.13811932s" podCreationTimestamp="2025-08-13 01:35:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 01:35:38.275298156 +0000 UTC m=+1.229169631" watchObservedRunningTime="2025-08-13 01:35:41.13811932 +0000 UTC m=+4.091990835" Aug 13 01:35:41.243464 kubelet[2688]: E0813 01:35:41.224076 2688 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Aug 13 01:35:43.674702 kubelet[2688]: I0813 01:35:43.674642 2688 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Aug 13 01:35:43.675347 containerd[1492]: time="2025-08-13T01:35:43.675008864Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Aug 13 01:35:43.675648 kubelet[2688]: I0813 01:35:43.675386 2688 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Aug 13 01:35:44.251631 systemd[1]: Created slice kubepods-besteffort-pod92cde492_cffa_4a67_a382_573375083721.slice - libcontainer container kubepods-besteffort-pod92cde492_cffa_4a67_a382_573375083721.slice. Aug 13 01:35:44.272478 kubelet[2688]: W0813 01:35:44.272083 2688 reflector.go:569] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:172-233-223-240" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node '172-233-223-240' and this object Aug 13 01:35:44.272478 kubelet[2688]: E0813 01:35:44.272136 2688 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cilium-config\" is forbidden: User \"system:node:172-233-223-240\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node '172-233-223-240' and this object" logger="UnhandledError" Aug 13 01:35:44.272478 kubelet[2688]: I0813 01:35:44.272197 2688 status_manager.go:890] "Failed to get status for pod" podUID="c67b6e0c-16a5-47ac-92fd-af9bf0169651" pod="kube-system/cilium-operator-6c4d7847fc-fzpq4" err="pods \"cilium-operator-6c4d7847fc-fzpq4\" is forbidden: User \"system:node:172-233-223-240\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node '172-233-223-240' and this object" Aug 13 01:35:44.278411 systemd[1]: Created slice kubepods-besteffort-podc67b6e0c_16a5_47ac_92fd_af9bf0169651.slice - libcontainer container kubepods-besteffort-podc67b6e0c_16a5_47ac_92fd_af9bf0169651.slice. Aug 13 01:35:44.332369 systemd[1]: Created slice kubepods-burstable-podb570fe5d_eb8b_4763_9890_9e7f066c4c2e.slice - libcontainer container kubepods-burstable-podb570fe5d_eb8b_4763_9890_9e7f066c4c2e.slice. 
Aug 13 01:35:44.415007 kubelet[2688]: I0813 01:35:44.342069 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b570fe5d-eb8b-4763-9890-9e7f066c4c2e-bpf-maps\") pod \"cilium-h64hf\" (UID: \"b570fe5d-eb8b-4763-9890-9e7f066c4c2e\") " pod="kube-system/cilium-h64hf" Aug 13 01:35:44.415007 kubelet[2688]: I0813 01:35:44.342118 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b570fe5d-eb8b-4763-9890-9e7f066c4c2e-clustermesh-secrets\") pod \"cilium-h64hf\" (UID: \"b570fe5d-eb8b-4763-9890-9e7f066c4c2e\") " pod="kube-system/cilium-h64hf" Aug 13 01:35:44.415007 kubelet[2688]: I0813 01:35:44.342144 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smjzn\" (UniqueName: \"kubernetes.io/projected/b570fe5d-eb8b-4763-9890-9e7f066c4c2e-kube-api-access-smjzn\") pod \"cilium-h64hf\" (UID: \"b570fe5d-eb8b-4763-9890-9e7f066c4c2e\") " pod="kube-system/cilium-h64hf" Aug 13 01:35:44.415007 kubelet[2688]: I0813 01:35:44.342162 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b570fe5d-eb8b-4763-9890-9e7f066c4c2e-hubble-tls\") pod \"cilium-h64hf\" (UID: \"b570fe5d-eb8b-4763-9890-9e7f066c4c2e\") " pod="kube-system/cilium-h64hf" Aug 13 01:35:44.415007 kubelet[2688]: I0813 01:35:44.342178 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b570fe5d-eb8b-4763-9890-9e7f066c4c2e-cilium-cgroup\") pod \"cilium-h64hf\" (UID: \"b570fe5d-eb8b-4763-9890-9e7f066c4c2e\") " pod="kube-system/cilium-h64hf" Aug 13 01:35:44.415007 kubelet[2688]: I0813 01:35:44.342200 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b570fe5d-eb8b-4763-9890-9e7f066c4c2e-host-proc-sys-kernel\") pod \"cilium-h64hf\" (UID: \"b570fe5d-eb8b-4763-9890-9e7f066c4c2e\") " pod="kube-system/cilium-h64hf" Aug 13 01:35:44.415521 kubelet[2688]: I0813 01:35:44.342217 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcjsj\" (UniqueName: \"kubernetes.io/projected/92cde492-cffa-4a67-a382-573375083721-kube-api-access-xcjsj\") pod \"kube-proxy-jl9t6\" (UID: \"92cde492-cffa-4a67-a382-573375083721\") " pod="kube-system/kube-proxy-jl9t6" Aug 13 01:35:44.415521 kubelet[2688]: I0813 01:35:44.342233 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b570fe5d-eb8b-4763-9890-9e7f066c4c2e-etc-cni-netd\") pod \"cilium-h64hf\" (UID: \"b570fe5d-eb8b-4763-9890-9e7f066c4c2e\") " pod="kube-system/cilium-h64hf" Aug 13 01:35:44.415521 kubelet[2688]: I0813 01:35:44.342249 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b570fe5d-eb8b-4763-9890-9e7f066c4c2e-lib-modules\") pod \"cilium-h64hf\" (UID: \"b570fe5d-eb8b-4763-9890-9e7f066c4c2e\") " pod="kube-system/cilium-h64hf" Aug 13 01:35:44.415521 kubelet[2688]: I0813 01:35:44.342459 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/92cde492-cffa-4a67-a382-573375083721-kube-proxy\") pod \"kube-proxy-jl9t6\" (UID: \"92cde492-cffa-4a67-a382-573375083721\") " pod="kube-system/kube-proxy-jl9t6" Aug 13 01:35:44.415521 kubelet[2688]: I0813 01:35:44.342477 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lgkw\" (UniqueName: \"kubernetes.io/projected/c67b6e0c-16a5-47ac-92fd-af9bf0169651-kube-api-access-6lgkw\") pod \"cilium-operator-6c4d7847fc-fzpq4\" (UID: \"c67b6e0c-16a5-47ac-92fd-af9bf0169651\") " pod="kube-system/cilium-operator-6c4d7847fc-fzpq4" Aug 13 01:35:44.415658 kubelet[2688]: I0813 01:35:44.342492 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b570fe5d-eb8b-4763-9890-9e7f066c4c2e-cilium-config-path\") pod \"cilium-h64hf\" (UID: \"b570fe5d-eb8b-4763-9890-9e7f066c4c2e\") " pod="kube-system/cilium-h64hf" Aug 13 01:35:44.415658 kubelet[2688]: I0813 01:35:44.342513 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/92cde492-cffa-4a67-a382-573375083721-xtables-lock\") pod \"kube-proxy-jl9t6\" (UID: \"92cde492-cffa-4a67-a382-573375083721\") " pod="kube-system/kube-proxy-jl9t6" Aug 13 01:35:44.415658 kubelet[2688]: I0813 01:35:44.342529 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/92cde492-cffa-4a67-a382-573375083721-lib-modules\") pod \"kube-proxy-jl9t6\" (UID: \"92cde492-cffa-4a67-a382-573375083721\") " pod="kube-system/kube-proxy-jl9t6" Aug 13 01:35:44.415658 kubelet[2688]: I0813 01:35:44.342544 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b570fe5d-eb8b-4763-9890-9e7f066c4c2e-cilium-run\") pod \"cilium-h64hf\" (UID: \"b570fe5d-eb8b-4763-9890-9e7f066c4c2e\") " pod="kube-system/cilium-h64hf" Aug 13 01:35:44.415658 kubelet[2688]: I0813 01:35:44.342559 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b570fe5d-eb8b-4763-9890-9e7f066c4c2e-cni-path\") pod \"cilium-h64hf\" (UID: \"b570fe5d-eb8b-4763-9890-9e7f066c4c2e\") " pod="kube-system/cilium-h64hf" Aug 13 01:35:44.415658 kubelet[2688]: I0813 01:35:44.342577 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b570fe5d-eb8b-4763-9890-9e7f066c4c2e-xtables-lock\") pod \"cilium-h64hf\" (UID: \"b570fe5d-eb8b-4763-9890-9e7f066c4c2e\") " pod="kube-system/cilium-h64hf" Aug 13 01:35:44.415842 kubelet[2688]: I0813 01:35:44.342601 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c67b6e0c-16a5-47ac-92fd-af9bf0169651-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-fzpq4\" (UID: \"c67b6e0c-16a5-47ac-92fd-af9bf0169651\") " pod="kube-system/cilium-operator-6c4d7847fc-fzpq4" Aug 13 01:35:44.415842 kubelet[2688]: I0813 01:35:44.342621 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: 
\"kubernetes.io/host-path/b570fe5d-eb8b-4763-9890-9e7f066c4c2e-hostproc\") pod \"cilium-h64hf\" (UID: \"b570fe5d-eb8b-4763-9890-9e7f066c4c2e\") " pod="kube-system/cilium-h64hf" Aug 13 01:35:44.415842 kubelet[2688]: I0813 01:35:44.342636 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b570fe5d-eb8b-4763-9890-9e7f066c4c2e-host-proc-sys-net\") pod \"cilium-h64hf\" (UID: \"b570fe5d-eb8b-4763-9890-9e7f066c4c2e\") " pod="kube-system/cilium-h64hf" Aug 13 01:35:44.563427 kubelet[2688]: E0813 01:35:44.563284 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Aug 13 01:35:44.566486 containerd[1492]: time="2025-08-13T01:35:44.566415168Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jl9t6,Uid:92cde492-cffa-4a67-a382-573375083721,Namespace:kube-system,Attempt:0,}" Aug 13 01:35:44.742528 containerd[1492]: time="2025-08-13T01:35:44.742289069Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 01:35:44.742528 containerd[1492]: time="2025-08-13T01:35:44.742465398Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 01:35:44.743326 containerd[1492]: time="2025-08-13T01:35:44.742501478Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 01:35:44.744760 containerd[1492]: time="2025-08-13T01:35:44.744298937Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 01:35:44.913367 systemd[1]: Started cri-containerd-c77d0711f127fdce43c9dc068df21f6edddbd7e66bc9b464199e1752e10c5027.scope - libcontainer container c77d0711f127fdce43c9dc068df21f6edddbd7e66bc9b464199e1752e10c5027. 
Aug 13 01:35:44.983446 containerd[1492]: time="2025-08-13T01:35:44.965176854Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jl9t6,Uid:92cde492-cffa-4a67-a382-573375083721,Namespace:kube-system,Attempt:0,} returns sandbox id \"c77d0711f127fdce43c9dc068df21f6edddbd7e66bc9b464199e1752e10c5027\"" Aug 13 01:35:44.983446 containerd[1492]: time="2025-08-13T01:35:44.974317818Z" level=info msg="CreateContainer within sandbox \"c77d0711f127fdce43c9dc068df21f6edddbd7e66bc9b464199e1752e10c5027\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Aug 13 01:35:44.983620 kubelet[2688]: E0813 01:35:44.966171 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Aug 13 01:35:45.072954 containerd[1492]: time="2025-08-13T01:35:45.070917002Z" level=info msg="CreateContainer within sandbox \"c77d0711f127fdce43c9dc068df21f6edddbd7e66bc9b464199e1752e10c5027\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"81a683ad0c39e7e4802ca30aa24d4f93c481daea8b058894f4bed3fa6bd4c051\"" Aug 13 01:35:45.076676 containerd[1492]: time="2025-08-13T01:35:45.074028235Z" level=info msg="StartContainer for \"81a683ad0c39e7e4802ca30aa24d4f93c481daea8b058894f4bed3fa6bd4c051\"" Aug 13 01:35:45.086378 sudo[1743]: pam_unix(sudo:session): session closed for user root Aug 13 01:35:45.137313 sshd[1742]: Connection closed by 139.178.89.65 port 39638 Aug 13 01:35:45.140435 sshd-session[1740]: pam_unix(sshd:session): session closed for user core Aug 13 01:35:45.146718 systemd[1]: sshd@9-172.233.223.240:22-139.178.89.65:39638.service: Deactivated successfully. Aug 13 01:35:45.150694 systemd[1]: session-9.scope: Deactivated successfully. Aug 13 01:35:45.151286 systemd[1]: session-9.scope: Consumed 8.792s CPU time, 263.8M memory peak. Aug 13 01:35:45.154681 systemd-logind[1468]: Session 9 logged out. Waiting for processes to exit. Aug 13 01:35:45.156767 systemd-logind[1468]: Removed session 9. Aug 13 01:35:45.171293 kubelet[2688]: E0813 01:35:45.171180 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Aug 13 01:35:45.222102 systemd[1]: Started cri-containerd-81a683ad0c39e7e4802ca30aa24d4f93c481daea8b058894f4bed3fa6bd4c051.scope - libcontainer container 81a683ad0c39e7e4802ca30aa24d4f93c481daea8b058894f4bed3fa6bd4c051. 
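[editor's note] When the SSH session above closes, systemd reports per-scope resource accounting ("Consumed 8.792s CPU time, 263.8M memory peak"). The process-level analogue is the rusage data the kernel returns on wait, which Go exposes through os/exec's ProcessState. A small sketch, assuming a POSIX sh is available:

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Run a short CPU-bound child, then report its CPU usage --
	// the per-process analogue of systemd's per-scope accounting.
	cmd := exec.Command("sh", "-c",
		"i=0; while [ $i -lt 100000 ]; do i=$((i+1)); done")
	if err := cmd.Run(); err != nil {
		panic(err)
	}
	st := cmd.ProcessState
	fmt.Printf("user CPU: %v, system CPU: %v\n", st.UserTime(), st.SystemTime())
}
```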
Aug 13 01:35:45.259113 containerd[1492]: time="2025-08-13T01:35:45.258870339Z" level=info msg="StartContainer for \"81a683ad0c39e7e4802ca30aa24d4f93c481daea8b058894f4bed3fa6bd4c051\" returns successfully" Aug 13 01:35:45.314060 kubelet[2688]: E0813 01:35:45.314004 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Aug 13 01:35:45.314590 kubelet[2688]: E0813 01:35:45.314561 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Aug 13 01:35:45.344833 kubelet[2688]: I0813 01:35:45.344061 2688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-jl9t6" podStartSLOduration=2.344036532 podStartE2EDuration="2.344036532s" podCreationTimestamp="2025-08-13 01:35:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 01:35:45.33200304 +0000 UTC m=+8.285874515" watchObservedRunningTime="2025-08-13 01:35:45.344036532 +0000 UTC m=+8.297908007" Aug 13 01:35:45.485329 kubelet[2688]: E0813 01:35:45.485290 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Aug 13 01:35:45.486103 containerd[1492]: time="2025-08-13T01:35:45.486041246Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-fzpq4,Uid:c67b6e0c-16a5-47ac-92fd-af9bf0169651,Namespace:kube-system,Attempt:0,}" Aug 13 01:35:45.533466 containerd[1492]: time="2025-08-13T01:35:45.533002913Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 01:35:45.533466 containerd[1492]: time="2025-08-13T01:35:45.533062123Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 01:35:45.533466 containerd[1492]: time="2025-08-13T01:35:45.533077613Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 01:35:45.533466 containerd[1492]: time="2025-08-13T01:35:45.533173032Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 01:35:45.573070 kubelet[2688]: E0813 01:35:45.572994 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Aug 13 01:35:45.578374 containerd[1492]: time="2025-08-13T01:35:45.578282310Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-h64hf,Uid:b570fe5d-eb8b-4763-9890-9e7f066c4c2e,Namespace:kube-system,Attempt:0,}" Aug 13 01:35:45.583348 systemd[1]: Started cri-containerd-346cba9cd705d276e0400d13b8aeb4b2c300296152e4d0ea436c432483a4f04f.scope - libcontainer container 346cba9cd705d276e0400d13b8aeb4b2c300296152e4d0ea436c432483a4f04f. Aug 13 01:35:45.585479 systemd[1]: run-containerd-runc-k8s.io-346cba9cd705d276e0400d13b8aeb4b2c300296152e4d0ea436c432483a4f04f-runc.7RJrKY.mount: Deactivated successfully. 
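[editor's note] The pod_startup_latency_tracker entries above print durations such as podStartSLOduration=2.344036532s. That value is just the watch-observed running time minus podCreationTimestamp, both printed in Go's default time.Time string format. A sketch that parses the exact timestamps from the kube-proxy entry above and reproduces the field:

```go
package main

import (
	"fmt"
	"time"
)

// mustParse handles the timestamp form these kubelet fields use,
// which is Go's default time.Time string layout.
func mustParse(s string) time.Time {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	t, err := time.Parse(layout, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2025-08-13 01:35:43 +0000 UTC")
	observed := mustParse("2025-08-13 01:35:45.344036532 +0000 UTC")
	fmt.Println("podStartSLOduration:", observed.Sub(created)) // 2.344036532s
}
```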
Aug 13 01:35:45.688123 containerd[1492]: time="2025-08-13T01:35:45.688067755Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-fzpq4,Uid:c67b6e0c-16a5-47ac-92fd-af9bf0169651,Namespace:kube-system,Attempt:0,} returns sandbox id \"346cba9cd705d276e0400d13b8aeb4b2c300296152e4d0ea436c432483a4f04f\"" Aug 13 01:35:45.689447 kubelet[2688]: E0813 01:35:45.689241 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Aug 13 01:35:45.695999 containerd[1492]: time="2025-08-13T01:35:45.695544213Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 01:35:45.695999 containerd[1492]: time="2025-08-13T01:35:45.695602352Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 01:35:45.695999 containerd[1492]: time="2025-08-13T01:35:45.695613172Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 01:35:45.698663 containerd[1492]: time="2025-08-13T01:35:45.696782396Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 01:35:45.702303 containerd[1492]: time="2025-08-13T01:35:45.702235605Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Aug 13 01:35:45.731065 systemd[1]: Started cri-containerd-05e02bfba29da12e6471491c64dc5037307656f6452683abff5da808be77af01.scope - libcontainer container 05e02bfba29da12e6471491c64dc5037307656f6452683abff5da808be77af01. Aug 13 01:35:45.785962 containerd[1492]: time="2025-08-13T01:35:45.785263979Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-h64hf,Uid:b570fe5d-eb8b-4763-9890-9e7f066c4c2e,Namespace:kube-system,Attempt:0,} returns sandbox id \"05e02bfba29da12e6471491c64dc5037307656f6452683abff5da808be77af01\"" Aug 13 01:35:45.788138 kubelet[2688]: E0813 01:35:45.787742 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Aug 13 01:35:46.318006 kubelet[2688]: E0813 01:35:46.317627 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Aug 13 01:35:46.592315 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount373262307.mount: Deactivated successfully. Aug 13 01:35:48.424673 systemd[1]: Started sshd@13-172.233.223.240:22-188.213.192.7:49513.service - OpenSSH per-connection server daemon (188.213.192.7:49513). 
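[editor's note] The recurring "Nameserver limits exceeded" errors mean the node's resolv.conf lists more nameservers than the resolver supports: glibc reads at most three (MAXNS), and kubelet applies the same cap when it builds pod DNS config, logging the trimmed "applied nameserver line". A sketch of that trimming over a resolv.conf-style input (this mirrors the observed behavior; it is not kubelet's actual parser):

```go
package main

import (
	"fmt"
	"strings"
)

const maxNameservers = 3 // glibc MAXNS; kubelet enforces the same cap

func main() {
	resolvConf := `nameserver 172.232.0.17
nameserver 172.232.0.16
nameserver 172.232.0.21
nameserver 10.0.0.53
search example.internal`

	var servers []string
	for _, line := range strings.Split(resolvConf, "\n") {
		fields := strings.Fields(line)
		if len(fields) == 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		fmt.Printf("nameserver limits exceeded, omitting %d entries\n",
			len(servers)-maxNameservers)
		servers = servers[:maxNameservers]
	}
	fmt.Println("applied nameserver line:", strings.Join(servers, " "))
}
```

With four configured servers this prints the same three-entry applied line the kubelet logs above.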
Aug 13 01:35:49.432272 containerd[1492]: time="2025-08-13T01:35:49.432020800Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:35:49.433119 containerd[1492]: time="2025-08-13T01:35:49.433060695Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Aug 13 01:35:49.433569 containerd[1492]: time="2025-08-13T01:35:49.433532623Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:35:49.435625 containerd[1492]: time="2025-08-13T01:35:49.435580924Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 3.73314775s" Aug 13 01:35:49.435686 containerd[1492]: time="2025-08-13T01:35:49.435629624Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Aug 13 01:35:49.437858 containerd[1492]: time="2025-08-13T01:35:49.437816904Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Aug 13 01:35:49.440244 containerd[1492]: time="2025-08-13T01:35:49.439929635Z" level=info msg="CreateContainer within sandbox \"346cba9cd705d276e0400d13b8aeb4b2c300296152e4d0ea436c432483a4f04f\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Aug 13 01:35:49.463420 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount221450548.mount: Deactivated successfully. Aug 13 01:35:49.465584 containerd[1492]: time="2025-08-13T01:35:49.465465602Z" level=info msg="CreateContainer within sandbox \"346cba9cd705d276e0400d13b8aeb4b2c300296152e4d0ea436c432483a4f04f\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"4c5a52393190dcf53312fbcf1fb880eac6f17b53c286f74695b0d6a95474d197\"" Aug 13 01:35:49.468808 containerd[1492]: time="2025-08-13T01:35:49.468754868Z" level=info msg="StartContainer for \"4c5a52393190dcf53312fbcf1fb880eac6f17b53c286f74695b0d6a95474d197\"" Aug 13 01:35:49.485490 kubelet[2688]: E0813 01:35:49.485337 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Aug 13 01:35:49.518973 sshd[3063]: Connection closed by 188.213.192.7 port 49513 [preauth] Aug 13 01:35:49.522848 systemd[1]: sshd@13-172.233.223.240:22-188.213.192.7:49513.service: Deactivated successfully. Aug 13 01:35:49.605175 systemd[1]: Started cri-containerd-4c5a52393190dcf53312fbcf1fb880eac6f17b53c286f74695b0d6a95474d197.scope - libcontainer container 4c5a52393190dcf53312fbcf1fb880eac6f17b53c286f74695b0d6a95474d197. 
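[editor's note] The cilium-operator image above is pulled by digest (name@sha256:...), which pins exact content; note the resulting image has an empty repo tag and only the repo digest. A simplified sketch of splitting and validating such a reference; real parsers (for example the reference package containerd uses) handle many more cases:

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// sha256 image digests are exactly 64 lowercase hex characters.
var digestRE = regexp.MustCompile(`^sha256:[a-f0-9]{64}$`)

func splitRef(ref string) (name, digest string, err error) {
	name, digest, ok := strings.Cut(ref, "@")
	if !ok {
		return "", "", fmt.Errorf("no digest in %q", ref)
	}
	if !digestRE.MatchString(digest) {
		return "", "", fmt.Errorf("malformed digest %q", digest)
	}
	return name, digest, nil
}

func main() {
	ref := "quay.io/cilium/operator-generic:v1.12.5" +
		"@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e"
	name, digest, err := splitRef(ref)
	fmt.Println(name, digest, err)
}
```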
Aug 13 01:35:49.652933 containerd[1492]: time="2025-08-13T01:35:49.652795266Z" level=info msg="StartContainer for \"4c5a52393190dcf53312fbcf1fb880eac6f17b53c286f74695b0d6a95474d197\" returns successfully" Aug 13 01:35:50.362440 kubelet[2688]: E0813 01:35:50.360651 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Aug 13 01:35:51.368328 kubelet[2688]: E0813 01:35:51.364840 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Aug 13 01:35:58.658825 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2097410737.mount: Deactivated successfully. Aug 13 01:36:02.757881 containerd[1492]: time="2025-08-13T01:36:02.757806752Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:36:02.758980 containerd[1492]: time="2025-08-13T01:36:02.758940719Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Aug 13 01:36:02.759941 containerd[1492]: time="2025-08-13T01:36:02.759454248Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:36:02.761247 containerd[1492]: time="2025-08-13T01:36:02.761219825Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 13.323363821s" Aug 13 01:36:02.761383 containerd[1492]: time="2025-08-13T01:36:02.761364915Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Aug 13 01:36:02.764601 containerd[1492]: time="2025-08-13T01:36:02.764541849Z" level=info msg="CreateContainer within sandbox \"05e02bfba29da12e6471491c64dc5037307656f6452683abff5da808be77af01\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Aug 13 01:36:02.786052 containerd[1492]: time="2025-08-13T01:36:02.785998918Z" level=info msg="CreateContainer within sandbox \"05e02bfba29da12e6471491c64dc5037307656f6452683abff5da808be77af01\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"bd089fad863ce2ee900c48797cf47891aacd80fb8c1bc18c76aa87a09e74b2b2\"" Aug 13 01:36:02.786525 containerd[1492]: time="2025-08-13T01:36:02.786501937Z" level=info msg="StartContainer for \"bd089fad863ce2ee900c48797cf47891aacd80fb8c1bc18c76aa87a09e74b2b2\"" Aug 13 01:36:02.864625 systemd[1]: run-containerd-runc-k8s.io-bd089fad863ce2ee900c48797cf47891aacd80fb8c1bc18c76aa87a09e74b2b2-runc.wQ5v5f.mount: Deactivated successfully. 
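[editor's note] The cilium image pull above reports bytes read=166730503 completing "in 13.323363821s", which works out to roughly 12.5 MB/s of effective transfer. The arithmetic, using the two figures from the log:

```go
package main

import "fmt"

func main() {
	const bytesRead = 166730503  // "bytes read" reported when the pull stopped
	const seconds = 13.323363821 // wall time from the "Pulled image ... in" line
	fmt.Printf("effective pull throughput: %.1f MB/s\n", bytesRead/1e6/seconds)
}
```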
Aug 13 01:36:02.876185 systemd[1]: Started cri-containerd-bd089fad863ce2ee900c48797cf47891aacd80fb8c1bc18c76aa87a09e74b2b2.scope - libcontainer container bd089fad863ce2ee900c48797cf47891aacd80fb8c1bc18c76aa87a09e74b2b2. Aug 13 01:36:02.990392 containerd[1492]: time="2025-08-13T01:36:02.990305589Z" level=info msg="StartContainer for \"bd089fad863ce2ee900c48797cf47891aacd80fb8c1bc18c76aa87a09e74b2b2\" returns successfully" Aug 13 01:36:03.008238 systemd[1]: cri-containerd-bd089fad863ce2ee900c48797cf47891aacd80fb8c1bc18c76aa87a09e74b2b2.scope: Deactivated successfully. Aug 13 01:36:03.046728 containerd[1492]: time="2025-08-13T01:36:03.046367257Z" level=info msg="shim disconnected" id=bd089fad863ce2ee900c48797cf47891aacd80fb8c1bc18c76aa87a09e74b2b2 namespace=k8s.io Aug 13 01:36:03.046728 containerd[1492]: time="2025-08-13T01:36:03.046486397Z" level=warning msg="cleaning up after shim disconnected" id=bd089fad863ce2ee900c48797cf47891aacd80fb8c1bc18c76aa87a09e74b2b2 namespace=k8s.io Aug 13 01:36:03.046728 containerd[1492]: time="2025-08-13T01:36:03.046506987Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 01:36:03.567461 kubelet[2688]: E0813 01:36:03.567422 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Aug 13 01:36:03.571012 containerd[1492]: time="2025-08-13T01:36:03.570952310Z" level=info msg="CreateContainer within sandbox \"05e02bfba29da12e6471491c64dc5037307656f6452683abff5da808be77af01\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Aug 13 01:36:03.586636 containerd[1492]: time="2025-08-13T01:36:03.586569562Z" level=info msg="CreateContainer within sandbox \"05e02bfba29da12e6471491c64dc5037307656f6452683abff5da808be77af01\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"7bc5e488d8adff79a8be4a34e8a66f69f1290d113ebc1fe0bfd9bd0022691ce5\"" Aug 13 01:36:03.589342 containerd[1492]: time="2025-08-13T01:36:03.589001887Z" level=info msg="StartContainer for \"7bc5e488d8adff79a8be4a34e8a66f69f1290d113ebc1fe0bfd9bd0022691ce5\"" Aug 13 01:36:03.590121 kubelet[2688]: I0813 01:36:03.590049 2688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-fzpq4" podStartSLOduration=15.84521093 podStartE2EDuration="19.589882746s" podCreationTimestamp="2025-08-13 01:35:44 +0000 UTC" firstStartedPulling="2025-08-13 01:35:45.691968923 +0000 UTC m=+8.645840398" lastFinishedPulling="2025-08-13 01:35:49.436640739 +0000 UTC m=+12.390512214" observedRunningTime="2025-08-13 01:35:50.834653624 +0000 UTC m=+13.788525099" watchObservedRunningTime="2025-08-13 01:36:03.589882746 +0000 UTC m=+26.543754221" Aug 13 01:36:03.626286 systemd[1]: Started cri-containerd-7bc5e488d8adff79a8be4a34e8a66f69f1290d113ebc1fe0bfd9bd0022691ce5.scope - libcontainer container 7bc5e488d8adff79a8be4a34e8a66f69f1290d113ebc1fe0bfd9bd0022691ce5. Aug 13 01:36:03.658673 containerd[1492]: time="2025-08-13T01:36:03.658602243Z" level=info msg="StartContainer for \"7bc5e488d8adff79a8be4a34e8a66f69f1290d113ebc1fe0bfd9bd0022691ce5\" returns successfully" Aug 13 01:36:03.680951 systemd[1]: systemd-sysctl.service: Deactivated successfully. Aug 13 01:36:03.681259 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Aug 13 01:36:03.682458 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... 
Aug 13 01:36:03.692224 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 13 01:36:03.692478 systemd[1]: cri-containerd-7bc5e488d8adff79a8be4a34e8a66f69f1290d113ebc1fe0bfd9bd0022691ce5.scope: Deactivated successfully. Aug 13 01:36:03.727782 containerd[1492]: time="2025-08-13T01:36:03.727553560Z" level=info msg="shim disconnected" id=7bc5e488d8adff79a8be4a34e8a66f69f1290d113ebc1fe0bfd9bd0022691ce5 namespace=k8s.io Aug 13 01:36:03.727782 containerd[1492]: time="2025-08-13T01:36:03.727604220Z" level=warning msg="cleaning up after shim disconnected" id=7bc5e488d8adff79a8be4a34e8a66f69f1290d113ebc1fe0bfd9bd0022691ce5 namespace=k8s.io Aug 13 01:36:03.727782 containerd[1492]: time="2025-08-13T01:36:03.727623710Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 01:36:03.752396 containerd[1492]: time="2025-08-13T01:36:03.751184878Z" level=warning msg="cleanup warnings time=\"2025-08-13T01:36:03Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Aug 13 01:36:03.760961 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 13 01:36:03.774677 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bd089fad863ce2ee900c48797cf47891aacd80fb8c1bc18c76aa87a09e74b2b2-rootfs.mount: Deactivated successfully. Aug 13 01:36:04.570850 kubelet[2688]: E0813 01:36:04.570774 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Aug 13 01:36:04.576359 containerd[1492]: time="2025-08-13T01:36:04.575420508Z" level=info msg="CreateContainer within sandbox \"05e02bfba29da12e6471491c64dc5037307656f6452683abff5da808be77af01\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Aug 13 01:36:04.618007 containerd[1492]: time="2025-08-13T01:36:04.617952637Z" level=info msg="CreateContainer within sandbox \"05e02bfba29da12e6471491c64dc5037307656f6452683abff5da808be77af01\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"13ed1e457eff096b7bb156fe78f1a1ba31a97b3ab2a11944a2cc98d03161e9ed\"" Aug 13 01:36:04.619285 containerd[1492]: time="2025-08-13T01:36:04.619235924Z" level=info msg="StartContainer for \"13ed1e457eff096b7bb156fe78f1a1ba31a97b3ab2a11944a2cc98d03161e9ed\"" Aug 13 01:36:04.664088 systemd[1]: Started cri-containerd-13ed1e457eff096b7bb156fe78f1a1ba31a97b3ab2a11944a2cc98d03161e9ed.scope - libcontainer container 13ed1e457eff096b7bb156fe78f1a1ba31a97b3ab2a11944a2cc98d03161e9ed. Aug 13 01:36:04.721877 containerd[1492]: time="2025-08-13T01:36:04.721829573Z" level=info msg="StartContainer for \"13ed1e457eff096b7bb156fe78f1a1ba31a97b3ab2a11944a2cc98d03161e9ed\" returns successfully" Aug 13 01:36:04.726102 systemd[1]: cri-containerd-13ed1e457eff096b7bb156fe78f1a1ba31a97b3ab2a11944a2cc98d03161e9ed.scope: Deactivated successfully. 
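[editor's note] The mount-bpf-fs init container created above exists to ensure a BPF filesystem is mounted at /sys/fs/bpf before the agent starts. Whether one is already there can be detected with statfs by comparing the filesystem magic against BPF_FS_MAGIC (0xcafe4a11 in <linux/magic.h>). A Linux-only sketch of that check; it only detects, it does not mount:

```go
//go:build linux

package main

import (
	"fmt"
	"syscall"
)

const bpfFSMagic = 0xcafe4a11 // BPF_FS_MAGIC from <linux/magic.h>

func main() {
	var fs syscall.Statfs_t
	if err := syscall.Statfs("/sys/fs/bpf", &fs); err != nil {
		fmt.Println("statfs:", err)
		return
	}
	if uint64(fs.Type) == bpfFSMagic {
		fmt.Println("/sys/fs/bpf is already a bpffs mount; nothing to do")
	} else {
		fmt.Println("no bpffs at /sys/fs/bpf; an init container would mount one")
	}
}
```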
Aug 13 01:36:04.767801 containerd[1492]: time="2025-08-13T01:36:04.767578087Z" level=info msg="shim disconnected" id=13ed1e457eff096b7bb156fe78f1a1ba31a97b3ab2a11944a2cc98d03161e9ed namespace=k8s.io Aug 13 01:36:04.767801 containerd[1492]: time="2025-08-13T01:36:04.767651796Z" level=warning msg="cleaning up after shim disconnected" id=13ed1e457eff096b7bb156fe78f1a1ba31a97b3ab2a11944a2cc98d03161e9ed namespace=k8s.io Aug 13 01:36:04.767801 containerd[1492]: time="2025-08-13T01:36:04.767666126Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 01:36:04.777057 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-13ed1e457eff096b7bb156fe78f1a1ba31a97b3ab2a11944a2cc98d03161e9ed-rootfs.mount: Deactivated successfully. Aug 13 01:36:05.576216 kubelet[2688]: E0813 01:36:05.576094 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Aug 13 01:36:05.579885 containerd[1492]: time="2025-08-13T01:36:05.579537847Z" level=info msg="CreateContainer within sandbox \"05e02bfba29da12e6471491c64dc5037307656f6452683abff5da808be77af01\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Aug 13 01:36:05.593699 containerd[1492]: time="2025-08-13T01:36:05.590115580Z" level=info msg="CreateContainer within sandbox \"05e02bfba29da12e6471491c64dc5037307656f6452683abff5da808be77af01\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"9c594e253e1b3beeb74ab84a1dfbab06bfa93890d1ba3ccdbb40514bef3869e5\"" Aug 13 01:36:05.593699 containerd[1492]: time="2025-08-13T01:36:05.591791987Z" level=info msg="StartContainer for \"9c594e253e1b3beeb74ab84a1dfbab06bfa93890d1ba3ccdbb40514bef3869e5\"" Aug 13 01:36:05.640057 systemd[1]: Started cri-containerd-9c594e253e1b3beeb74ab84a1dfbab06bfa93890d1ba3ccdbb40514bef3869e5.scope - libcontainer container 9c594e253e1b3beeb74ab84a1dfbab06bfa93890d1ba3ccdbb40514bef3869e5. Aug 13 01:36:05.668978 systemd[1]: cri-containerd-9c594e253e1b3beeb74ab84a1dfbab06bfa93890d1ba3ccdbb40514bef3869e5.scope: Deactivated successfully. Aug 13 01:36:05.671099 containerd[1492]: time="2025-08-13T01:36:05.670757183Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb570fe5d_eb8b_4763_9890_9e7f066c4c2e.slice/cri-containerd-9c594e253e1b3beeb74ab84a1dfbab06bfa93890d1ba3ccdbb40514bef3869e5.scope/memory.events\": no such file or directory" Aug 13 01:36:05.673287 containerd[1492]: time="2025-08-13T01:36:05.673155860Z" level=info msg="StartContainer for \"9c594e253e1b3beeb74ab84a1dfbab06bfa93890d1ba3ccdbb40514bef3869e5\" returns successfully" Aug 13 01:36:05.697637 containerd[1492]: time="2025-08-13T01:36:05.697560291Z" level=info msg="shim disconnected" id=9c594e253e1b3beeb74ab84a1dfbab06bfa93890d1ba3ccdbb40514bef3869e5 namespace=k8s.io Aug 13 01:36:05.697637 containerd[1492]: time="2025-08-13T01:36:05.697613221Z" level=warning msg="cleaning up after shim disconnected" id=9c594e253e1b3beeb74ab84a1dfbab06bfa93890d1ba3ccdbb40514bef3869e5 namespace=k8s.io Aug 13 01:36:05.697637 containerd[1492]: time="2025-08-13T01:36:05.697622421Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 01:36:05.774989 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9c594e253e1b3beeb74ab84a1dfbab06bfa93890d1ba3ccdbb40514bef3869e5-rootfs.mount: Deactivated successfully. 
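[editor's note] The "failed to add inotify watch ... memory.events: no such file or directory" warning just below arises because a transient container scope's cgroup directory vanishes as soon as the short-lived init container exits, taking its memory.events file with it. The file itself is a simple key/count listing; a sketch that parses a snapshot of one (on a live system the input would come from /sys/fs/cgroup/<scope>/memory.events):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

func main() {
	// Snapshot of a cgroup v2 memory.events file.
	snapshot := `low 0
high 0
max 0
oom 0
oom_kill 0`

	events := map[string]int64{}
	for _, line := range strings.Split(snapshot, "\n") {
		k, v, _ := strings.Cut(line, " ")
		n, err := strconv.ParseInt(v, 10, 64)
		if err != nil {
			continue
		}
		events[k] = n
	}
	fmt.Println("oom_kill count:", events["oom_kill"])
}
```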
Aug 13 01:36:06.579908 kubelet[2688]: E0813 01:36:06.579845 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Aug 13 01:36:06.584128 containerd[1492]: time="2025-08-13T01:36:06.584095067Z" level=info msg="CreateContainer within sandbox \"05e02bfba29da12e6471491c64dc5037307656f6452683abff5da808be77af01\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Aug 13 01:36:06.605484 containerd[1492]: time="2025-08-13T01:36:06.601701151Z" level=info msg="CreateContainer within sandbox \"05e02bfba29da12e6471491c64dc5037307656f6452683abff5da808be77af01\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"546c7767436a1dc4f94afb1bdfd1077c94517b184a528e5f2af942faa8fc32e9\"" Aug 13 01:36:06.605484 containerd[1492]: time="2025-08-13T01:36:06.604994966Z" level=info msg="StartContainer for \"546c7767436a1dc4f94afb1bdfd1077c94517b184a528e5f2af942faa8fc32e9\"" Aug 13 01:36:06.606455 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount234694024.mount: Deactivated successfully. Aug 13 01:36:06.648054 systemd[1]: Started cri-containerd-546c7767436a1dc4f94afb1bdfd1077c94517b184a528e5f2af942faa8fc32e9.scope - libcontainer container 546c7767436a1dc4f94afb1bdfd1077c94517b184a528e5f2af942faa8fc32e9. Aug 13 01:36:06.737490 containerd[1492]: time="2025-08-13T01:36:06.737374661Z" level=info msg="StartContainer for \"546c7767436a1dc4f94afb1bdfd1077c94517b184a528e5f2af942faa8fc32e9\" returns successfully" Aug 13 01:36:06.950442 kubelet[2688]: I0813 01:36:06.950350 2688 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Aug 13 01:36:06.987954 systemd[1]: Created slice kubepods-burstable-pode73e3223_985a_4c2c_94e5_a5ab1c996093.slice - libcontainer container kubepods-burstable-pode73e3223_985a_4c2c_94e5_a5ab1c996093.slice. Aug 13 01:36:07.001940 systemd[1]: Created slice kubepods-burstable-podff4df3a2_0744_47a3_a777_45ec3adfe077.slice - libcontainer container kubepods-burstable-podff4df3a2_0744_47a3_a777_45ec3adfe077.slice. 
Aug 13 01:36:07.112274 kubelet[2688]: I0813 01:36:07.112226 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8psgx\" (UniqueName: \"kubernetes.io/projected/e73e3223-985a-4c2c-94e5-a5ab1c996093-kube-api-access-8psgx\") pod \"coredns-668d6bf9bc-955pz\" (UID: \"e73e3223-985a-4c2c-94e5-a5ab1c996093\") " pod="kube-system/coredns-668d6bf9bc-955pz" Aug 13 01:36:07.112674 kubelet[2688]: I0813 01:36:07.112547 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ff4df3a2-0744-47a3-a777-45ec3adfe077-config-volume\") pod \"coredns-668d6bf9bc-nx2pw\" (UID: \"ff4df3a2-0744-47a3-a777-45ec3adfe077\") " pod="kube-system/coredns-668d6bf9bc-nx2pw" Aug 13 01:36:07.112674 kubelet[2688]: I0813 01:36:07.112602 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e73e3223-985a-4c2c-94e5-a5ab1c996093-config-volume\") pod \"coredns-668d6bf9bc-955pz\" (UID: \"e73e3223-985a-4c2c-94e5-a5ab1c996093\") " pod="kube-system/coredns-668d6bf9bc-955pz" Aug 13 01:36:07.112674 kubelet[2688]: I0813 01:36:07.112627 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-957jj\" (UniqueName: \"kubernetes.io/projected/ff4df3a2-0744-47a3-a777-45ec3adfe077-kube-api-access-957jj\") pod \"coredns-668d6bf9bc-nx2pw\" (UID: \"ff4df3a2-0744-47a3-a777-45ec3adfe077\") " pod="kube-system/coredns-668d6bf9bc-nx2pw" Aug 13 01:36:07.294757 kubelet[2688]: E0813 01:36:07.294571 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Aug 13 01:36:07.297627 containerd[1492]: time="2025-08-13T01:36:07.296872365Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-955pz,Uid:e73e3223-985a-4c2c-94e5-a5ab1c996093,Namespace:kube-system,Attempt:0,}" Aug 13 01:36:07.309506 kubelet[2688]: E0813 01:36:07.309047 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Aug 13 01:36:07.312431 containerd[1492]: time="2025-08-13T01:36:07.312402473Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-nx2pw,Uid:ff4df3a2-0744-47a3-a777-45ec3adfe077,Namespace:kube-system,Attempt:0,}" Aug 13 01:36:07.424611 kubelet[2688]: I0813 01:36:07.424558 2688 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:36:07.424870 kubelet[2688]: I0813 01:36:07.424852 2688 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:36:07.432156 kubelet[2688]: I0813 01:36:07.432021 2688 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 01:36:07.459387 kubelet[2688]: I0813 01:36:07.459358 2688 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:36:07.459838 kubelet[2688]: I0813 01:36:07.459672 2688 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" 
pods=["kube-system/coredns-668d6bf9bc-955pz","kube-system/coredns-668d6bf9bc-nx2pw","kube-system/cilium-operator-6c4d7847fc-fzpq4","kube-system/cilium-h64hf","kube-system/kube-controller-manager-172-233-223-240","kube-system/kube-proxy-jl9t6","kube-system/kube-apiserver-172-233-223-240","kube-system/kube-scheduler-172-233-223-240"] Aug 13 01:36:07.459838 kubelet[2688]: E0813 01:36:07.459734 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-955pz" Aug 13 01:36:07.459838 kubelet[2688]: E0813 01:36:07.459745 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-nx2pw" Aug 13 01:36:07.459838 kubelet[2688]: E0813 01:36:07.459756 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-fzpq4" Aug 13 01:36:07.459838 kubelet[2688]: E0813 01:36:07.459769 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-h64hf" Aug 13 01:36:07.459838 kubelet[2688]: E0813 01:36:07.459779 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-233-223-240" Aug 13 01:36:07.459838 kubelet[2688]: E0813 01:36:07.459791 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-jl9t6" Aug 13 01:36:07.459838 kubelet[2688]: E0813 01:36:07.459799 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-233-223-240" Aug 13 01:36:07.459838 kubelet[2688]: E0813 01:36:07.459807 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-233-223-240" Aug 13 01:36:07.459838 kubelet[2688]: I0813 01:36:07.459819 2688 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 01:36:07.586342 kubelet[2688]: E0813 01:36:07.586220 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Aug 13 01:36:08.587716 kubelet[2688]: E0813 01:36:08.587681 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Aug 13 01:36:09.312440 systemd-networkd[1404]: cilium_host: Link UP Aug 13 01:36:09.315406 systemd-networkd[1404]: cilium_net: Link UP Aug 13 01:36:09.315796 systemd-networkd[1404]: cilium_net: Gained carrier Aug 13 01:36:09.316777 systemd-networkd[1404]: cilium_host: Gained carrier Aug 13 01:36:09.317219 systemd-networkd[1404]: cilium_net: Gained IPv6LL Aug 13 01:36:09.317606 systemd-networkd[1404]: cilium_host: Gained IPv6LL Aug 13 01:36:09.482438 systemd-networkd[1404]: cilium_vxlan: Link UP Aug 13 01:36:09.482709 systemd-networkd[1404]: cilium_vxlan: Gained carrier Aug 13 01:36:09.591741 kubelet[2688]: E0813 01:36:09.590453 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Aug 13 01:36:09.973192 kernel: NET: Registered PF_ALG protocol family Aug 13 01:36:10.638160 systemd-networkd[1404]: cilium_vxlan: Gained IPv6LL Aug 13 01:36:10.754123 systemd-networkd[1404]: lxc_health: Link UP Aug 13 01:36:10.766174 systemd-networkd[1404]: lxc_health: 
Gained carrier Aug 13 01:36:10.931786 kernel: eth0: renamed from tmpbd027 Aug 13 01:36:10.931466 systemd-networkd[1404]: lxc315802d1345c: Link UP Aug 13 01:36:10.937544 systemd-networkd[1404]: lxc315802d1345c: Gained carrier Aug 13 01:36:11.392990 kernel: eth0: renamed from tmpdef32 Aug 13 01:36:11.396610 systemd-networkd[1404]: lxce3a89a5c2213: Link UP Aug 13 01:36:11.401066 systemd-networkd[1404]: lxce3a89a5c2213: Gained carrier Aug 13 01:36:11.576402 kubelet[2688]: E0813 01:36:11.576235 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Aug 13 01:36:11.596927 kubelet[2688]: E0813 01:36:11.596303 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Aug 13 01:36:11.600088 kubelet[2688]: I0813 01:36:11.600040 2688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-h64hf" podStartSLOduration=10.627323392 podStartE2EDuration="27.600003279s" podCreationTimestamp="2025-08-13 01:35:44 +0000 UTC" firstStartedPulling="2025-08-13 01:35:45.789384966 +0000 UTC m=+8.743256441" lastFinishedPulling="2025-08-13 01:36:02.762064853 +0000 UTC m=+25.715936328" observedRunningTime="2025-08-13 01:36:07.602253633 +0000 UTC m=+30.556125108" watchObservedRunningTime="2025-08-13 01:36:11.600003279 +0000 UTC m=+34.553874754" Aug 13 01:36:12.046433 systemd-networkd[1404]: lxc315802d1345c: Gained IPv6LL Aug 13 01:36:12.599440 kubelet[2688]: E0813 01:36:12.598222 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Aug 13 01:36:12.686696 systemd-networkd[1404]: lxc_health: Gained IPv6LL Aug 13 01:36:12.750137 systemd-networkd[1404]: lxce3a89a5c2213: Gained IPv6LL Aug 13 01:36:15.106027 containerd[1492]: time="2025-08-13T01:36:15.103999083Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 01:36:15.106027 containerd[1492]: time="2025-08-13T01:36:15.104089093Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 01:36:15.106027 containerd[1492]: time="2025-08-13T01:36:15.104105283Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 01:36:15.106027 containerd[1492]: time="2025-08-13T01:36:15.104195622Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 01:36:15.175152 systemd[1]: Started cri-containerd-def32fef0b2684a51463b229c1d7f54381d4373cf738576c30e3abefafcc0559.scope - libcontainer container def32fef0b2684a51463b229c1d7f54381d4373cf738576c30e3abefafcc0559. Aug 13 01:36:15.197613 containerd[1492]: time="2025-08-13T01:36:15.197317775Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 01:36:15.200317 containerd[1492]: time="2025-08-13T01:36:15.200045133Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 01:36:15.200317 containerd[1492]: time="2025-08-13T01:36:15.200081793Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 01:36:15.200317 containerd[1492]: time="2025-08-13T01:36:15.200179102Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 01:36:15.270055 systemd[1]: Started cri-containerd-bd0275a4e8bfcff0c806bfa9903088208f2585e4ffe857be7e32f6bc5e8a6add.scope - libcontainer container bd0275a4e8bfcff0c806bfa9903088208f2585e4ffe857be7e32f6bc5e8a6add. Aug 13 01:36:15.357412 containerd[1492]: time="2025-08-13T01:36:15.357249023Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-955pz,Uid:e73e3223-985a-4c2c-94e5-a5ab1c996093,Namespace:kube-system,Attempt:0,} returns sandbox id \"def32fef0b2684a51463b229c1d7f54381d4373cf738576c30e3abefafcc0559\"" Aug 13 01:36:15.358561 kubelet[2688]: E0813 01:36:15.358505 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Aug 13 01:36:15.364639 containerd[1492]: time="2025-08-13T01:36:15.364544857Z" level=info msg="CreateContainer within sandbox \"def32fef0b2684a51463b229c1d7f54381d4373cf738576c30e3abefafcc0559\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 01:36:15.387040 containerd[1492]: time="2025-08-13T01:36:15.386882989Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-nx2pw,Uid:ff4df3a2-0744-47a3-a777-45ec3adfe077,Namespace:kube-system,Attempt:0,} returns sandbox id \"bd0275a4e8bfcff0c806bfa9903088208f2585e4ffe857be7e32f6bc5e8a6add\"" Aug 13 01:36:15.388191 kubelet[2688]: E0813 01:36:15.388158 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Aug 13 01:36:15.389949 containerd[1492]: time="2025-08-13T01:36:15.389882556Z" level=info msg="CreateContainer within sandbox \"def32fef0b2684a51463b229c1d7f54381d4373cf738576c30e3abefafcc0559\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"278ef578cc2c3b14d3044ada3f3c9bd52a4abb2b4b415ff91e883ddca55afe29\"" Aug 13 01:36:15.391935 containerd[1492]: time="2025-08-13T01:36:15.391062005Z" level=info msg="StartContainer for \"278ef578cc2c3b14d3044ada3f3c9bd52a4abb2b4b415ff91e883ddca55afe29\"" Aug 13 01:36:15.394053 containerd[1492]: time="2025-08-13T01:36:15.393407543Z" level=info msg="CreateContainer within sandbox \"bd0275a4e8bfcff0c806bfa9903088208f2585e4ffe857be7e32f6bc5e8a6add\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 01:36:15.407353 containerd[1492]: time="2025-08-13T01:36:15.406884962Z" level=info msg="CreateContainer within sandbox \"bd0275a4e8bfcff0c806bfa9903088208f2585e4ffe857be7e32f6bc5e8a6add\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"974e93424ace8c6ce2673af5098a1c818f66f463e1db1c3cbaa2199be008aeb1\"" Aug 13 01:36:15.417655 containerd[1492]: time="2025-08-13T01:36:15.417604643Z" level=info msg="StartContainer for \"974e93424ace8c6ce2673af5098a1c818f66f463e1db1c3cbaa2199be008aeb1\"" Aug 13 01:36:15.432121 systemd[1]: Started cri-containerd-278ef578cc2c3b14d3044ada3f3c9bd52a4abb2b4b415ff91e883ddca55afe29.scope - libcontainer container 
278ef578cc2c3b14d3044ada3f3c9bd52a4abb2b4b415ff91e883ddca55afe29. Aug 13 01:36:15.475210 systemd[1]: Started cri-containerd-974e93424ace8c6ce2673af5098a1c818f66f463e1db1c3cbaa2199be008aeb1.scope - libcontainer container 974e93424ace8c6ce2673af5098a1c818f66f463e1db1c3cbaa2199be008aeb1. Aug 13 01:36:15.498957 containerd[1492]: time="2025-08-13T01:36:15.498732907Z" level=info msg="StartContainer for \"278ef578cc2c3b14d3044ada3f3c9bd52a4abb2b4b415ff91e883ddca55afe29\" returns successfully" Aug 13 01:36:15.528446 containerd[1492]: time="2025-08-13T01:36:15.528212312Z" level=info msg="StartContainer for \"974e93424ace8c6ce2673af5098a1c818f66f463e1db1c3cbaa2199be008aeb1\" returns successfully" Aug 13 01:36:15.606760 kubelet[2688]: E0813 01:36:15.606726 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Aug 13 01:36:15.612178 kubelet[2688]: E0813 01:36:15.612000 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Aug 13 01:36:15.625552 kubelet[2688]: I0813 01:36:15.625408 2688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-nx2pw" podStartSLOduration=31.625364552 podStartE2EDuration="31.625364552s" podCreationTimestamp="2025-08-13 01:35:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 01:36:15.624725373 +0000 UTC m=+38.578596848" watchObservedRunningTime="2025-08-13 01:36:15.625364552 +0000 UTC m=+38.579236027" Aug 13 01:36:15.652850 kubelet[2688]: I0813 01:36:15.652241 2688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-955pz" podStartSLOduration=32.65222193 podStartE2EDuration="32.65222193s" podCreationTimestamp="2025-08-13 01:35:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 01:36:15.639315701 +0000 UTC m=+38.593187176" watchObservedRunningTime="2025-08-13 01:36:15.65222193 +0000 UTC m=+38.606093405" Aug 13 01:36:16.114423 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2382121453.mount: Deactivated successfully. 
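The RunPodSandbox / CreateContainer / StartContainer messages above trace the CRI round trips between the kubelet and containerd: the pod sandbox is created first, then each container is created inside it and started, with containerd echoing the returned IDs. A rough sketch of the same sequence against the CRI gRPC API (the socket path and metadata are assumptions, and error handling is elided for brevity):

package main

import (
	"context"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx := context.Background()
	// Assumed containerd CRI socket on a Flatcar node.
	conn, _ := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	rt := runtimeapi.NewRuntimeServiceClient(conn)

	sandboxCfg := &runtimeapi.PodSandboxConfig{
		Metadata: &runtimeapi.PodSandboxMetadata{
			Name: "coredns-668d6bf9bc-955pz", Namespace: "kube-system", Uid: "e73e3223", Attempt: 0,
		},
	}
	// 1. RunPodSandbox returns the sandbox id seen in the log.
	sb, _ := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})

	// 2. CreateContainer within that sandbox returns the container id...
	ctr, _ := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId:  sb.PodSandboxId,
		Config:        &runtimeapi.ContainerConfig{Metadata: &runtimeapi.ContainerMetadata{Name: "coredns"}},
		SandboxConfig: sandboxCfg,
	})

	// 3. ...then StartContainer, which containerd reports as
	// "StartContainer ... returns successfully".
	rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: ctr.ContainerId})
}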
Aug 13 01:36:16.613593 kubelet[2688]: E0813 01:36:16.613430 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Aug 13 01:36:16.613593 kubelet[2688]: E0813 01:36:16.613497 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Aug 13 01:36:17.505546 kubelet[2688]: I0813 01:36:17.505499 2688 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:36:17.505684 kubelet[2688]: I0813 01:36:17.505566 2688 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:36:17.507763 kubelet[2688]: I0813 01:36:17.507701 2688 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 01:36:17.519156 kubelet[2688]: I0813 01:36:17.519109 2688 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:36:17.519287 kubelet[2688]: I0813 01:36:17.519250 2688 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-fzpq4","kube-system/coredns-668d6bf9bc-955pz","kube-system/coredns-668d6bf9bc-nx2pw","kube-system/cilium-h64hf","kube-system/kube-controller-manager-172-233-223-240","kube-system/kube-proxy-jl9t6","kube-system/kube-apiserver-172-233-223-240","kube-system/kube-scheduler-172-233-223-240"] Aug 13 01:36:17.519329 kubelet[2688]: E0813 01:36:17.519306 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-fzpq4" Aug 13 01:36:17.519329 kubelet[2688]: E0813 01:36:17.519319 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-955pz" Aug 13 01:36:17.519329 kubelet[2688]: E0813 01:36:17.519328 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-nx2pw" Aug 13 01:36:17.519419 kubelet[2688]: E0813 01:36:17.519338 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-h64hf" Aug 13 01:36:17.519419 kubelet[2688]: E0813 01:36:17.519348 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-233-223-240" Aug 13 01:36:17.519419 kubelet[2688]: E0813 01:36:17.519357 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-jl9t6" Aug 13 01:36:17.519419 kubelet[2688]: E0813 01:36:17.519365 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-233-223-240" Aug 13 01:36:17.519419 kubelet[2688]: E0813 01:36:17.519372 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-233-223-240" Aug 13 01:36:17.519419 kubelet[2688]: I0813 01:36:17.519383 2688 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 01:36:17.615103 kubelet[2688]: E0813 01:36:17.615043 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Aug 13 01:36:17.615691 kubelet[2688]: E0813 01:36:17.615333 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Aug 13 01:36:27.538830 kubelet[2688]: I0813 01:36:27.538743 2688 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:36:27.538830 kubelet[2688]: I0813 01:36:27.538808 2688 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:36:27.542235 kubelet[2688]: I0813 01:36:27.542202 2688 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 01:36:27.555052 kubelet[2688]: I0813 01:36:27.555016 2688 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:36:27.555212 kubelet[2688]: I0813 01:36:27.555153 2688 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-fzpq4","kube-system/coredns-668d6bf9bc-955pz","kube-system/coredns-668d6bf9bc-nx2pw","kube-system/cilium-h64hf","kube-system/kube-controller-manager-172-233-223-240","kube-system/kube-proxy-jl9t6","kube-system/kube-apiserver-172-233-223-240","kube-system/kube-scheduler-172-233-223-240"] Aug 13 01:36:27.555212 kubelet[2688]: E0813 01:36:27.555197 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-fzpq4" Aug 13 01:36:27.555373 kubelet[2688]: E0813 01:36:27.555217 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-955pz" Aug 13 01:36:27.555373 kubelet[2688]: E0813 01:36:27.555232 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-nx2pw" Aug 13 01:36:27.555373 kubelet[2688]: E0813 01:36:27.555247 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-h64hf" Aug 13 01:36:27.555373 kubelet[2688]: E0813 01:36:27.555263 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-233-223-240" Aug 13 01:36:27.555373 kubelet[2688]: E0813 01:36:27.555272 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-jl9t6" Aug 13 01:36:27.555373 kubelet[2688]: E0813 01:36:27.555282 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-233-223-240" Aug 13 01:36:27.555373 kubelet[2688]: E0813 01:36:27.555290 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-233-223-240" Aug 13 01:36:27.555373 kubelet[2688]: I0813 01:36:27.555301 2688 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 01:36:35.400169 systemd[1]: Started sshd@14-172.233.223.240:22-5.121.189.9:64505.service - OpenSSH per-connection server daemon (5.121.189.9:64505). Aug 13 01:36:35.776729 sshd[4054]: Connection closed by 5.121.189.9 port 64505 [preauth] Aug 13 01:36:35.778016 systemd[1]: sshd@14-172.233.223.240:22-5.121.189.9:64505.service: Deactivated successfully. 
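The repeating "Eviction manager: attempting to reclaim ... unable to evict any pods" cycles show the kubelet under ephemeral-storage pressure: it garbage-collects containers and images, ranks all pods for eviction, and then rejects every candidate because each one is critical (a static control-plane pod, or a pod running at system-critical priority, as everything in kube-system here is). A simplified sketch of that decision, assuming the standard system-critical priority value of 2000000000; this is not the kubelet's actual implementation:

package main

import "fmt"

type pod struct {
	name     string
	priority int32
	static   bool // static/mirror pods are always treated as critical
}

// Priority assigned to system-critical kube-system components.
const systemCriticalPriority = 2_000_000_000

func isCritical(p pod) bool {
	return p.static || p.priority >= systemCriticalPriority
}

func main() {
	ranked := []pod{ // already ranked for eviction, worst offender first
		{name: "kube-system/coredns-668d6bf9bc-955pz", priority: systemCriticalPriority},
		{name: "kube-system/kube-apiserver-172-233-223-240", static: true},
	}
	evicted := false
	for _, p := range ranked {
		if isCritical(p) {
			fmt.Printf("cannot evict a critical pod %q\n", p.name)
			continue
		}
		evicted = true // a non-critical pod would be evicted here
	}
	if !evicted {
		fmt.Println("unable to evict any pods from the node")
	}
}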
Aug 13 01:36:37.580992 kubelet[2688]: I0813 01:36:37.580949 2688 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:36:37.582718 kubelet[2688]: I0813 01:36:37.581014 2688 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:36:37.584488 kubelet[2688]: I0813 01:36:37.584448 2688 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 01:36:37.598528 kubelet[2688]: I0813 01:36:37.598243 2688 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:36:37.598528 kubelet[2688]: I0813 01:36:37.598370 2688 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-fzpq4","kube-system/coredns-668d6bf9bc-955pz","kube-system/coredns-668d6bf9bc-nx2pw","kube-system/cilium-h64hf","kube-system/kube-controller-manager-172-233-223-240","kube-system/kube-proxy-jl9t6","kube-system/kube-apiserver-172-233-223-240","kube-system/kube-scheduler-172-233-223-240"] Aug 13 01:36:37.598528 kubelet[2688]: E0813 01:36:37.598424 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-fzpq4" Aug 13 01:36:37.598528 kubelet[2688]: E0813 01:36:37.598438 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-955pz" Aug 13 01:36:37.598528 kubelet[2688]: E0813 01:36:37.598447 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-nx2pw" Aug 13 01:36:37.598528 kubelet[2688]: E0813 01:36:37.598458 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-h64hf" Aug 13 01:36:37.598528 kubelet[2688]: E0813 01:36:37.598468 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-233-223-240" Aug 13 01:36:37.598528 kubelet[2688]: E0813 01:36:37.598476 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-jl9t6" Aug 13 01:36:37.598528 kubelet[2688]: E0813 01:36:37.598484 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-233-223-240" Aug 13 01:36:37.598528 kubelet[2688]: E0813 01:36:37.598493 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-233-223-240" Aug 13 01:36:37.598528 kubelet[2688]: I0813 01:36:37.598505 2688 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 01:36:47.620325 kubelet[2688]: I0813 01:36:47.620274 2688 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:36:47.621149 kubelet[2688]: I0813 01:36:47.620342 2688 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:36:47.624668 kubelet[2688]: I0813 01:36:47.624517 2688 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 01:36:47.638426 kubelet[2688]: I0813 01:36:47.638396 2688 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:36:47.638759 kubelet[2688]: I0813 01:36:47.638528 2688 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" 
pods=["kube-system/cilium-operator-6c4d7847fc-fzpq4","kube-system/coredns-668d6bf9bc-955pz","kube-system/coredns-668d6bf9bc-nx2pw","kube-system/cilium-h64hf","kube-system/kube-proxy-jl9t6","kube-system/kube-controller-manager-172-233-223-240","kube-system/kube-apiserver-172-233-223-240","kube-system/kube-scheduler-172-233-223-240"] Aug 13 01:36:47.638841 kubelet[2688]: E0813 01:36:47.638809 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-fzpq4" Aug 13 01:36:47.638841 kubelet[2688]: E0813 01:36:47.638824 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-955pz" Aug 13 01:36:47.638841 kubelet[2688]: E0813 01:36:47.638834 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-nx2pw" Aug 13 01:36:47.638841 kubelet[2688]: E0813 01:36:47.638843 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-h64hf" Aug 13 01:36:47.639028 kubelet[2688]: E0813 01:36:47.638851 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-jl9t6" Aug 13 01:36:47.639028 kubelet[2688]: E0813 01:36:47.638859 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-233-223-240" Aug 13 01:36:47.639028 kubelet[2688]: E0813 01:36:47.638867 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-233-223-240" Aug 13 01:36:47.639028 kubelet[2688]: E0813 01:36:47.638875 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-233-223-240" Aug 13 01:36:47.639028 kubelet[2688]: I0813 01:36:47.638886 2688 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 01:36:48.179499 kubelet[2688]: E0813 01:36:48.179457 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Aug 13 01:36:53.179918 kubelet[2688]: E0813 01:36:53.178939 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Aug 13 01:36:57.659390 kubelet[2688]: I0813 01:36:57.658368 2688 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:36:57.659390 kubelet[2688]: I0813 01:36:57.658407 2688 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:36:57.661434 kubelet[2688]: I0813 01:36:57.661154 2688 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 01:36:57.676792 kubelet[2688]: I0813 01:36:57.676754 2688 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:36:57.677040 kubelet[2688]: I0813 01:36:57.676958 2688 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-fzpq4","kube-system/coredns-668d6bf9bc-nx2pw","kube-system/coredns-668d6bf9bc-955pz","kube-system/cilium-h64hf","kube-system/kube-controller-manager-172-233-223-240","kube-system/kube-proxy-jl9t6","kube-system/kube-apiserver-172-233-223-240","kube-system/kube-scheduler-172-233-223-240"] Aug 13 01:36:57.677040 kubelet[2688]: E0813 
01:36:57.677000 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-fzpq4" Aug 13 01:36:57.677040 kubelet[2688]: E0813 01:36:57.677013 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-nx2pw" Aug 13 01:36:57.677040 kubelet[2688]: E0813 01:36:57.677023 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-955pz" Aug 13 01:36:57.677040 kubelet[2688]: E0813 01:36:57.677032 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-h64hf" Aug 13 01:36:57.677040 kubelet[2688]: E0813 01:36:57.677043 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-233-223-240" Aug 13 01:36:57.677329 kubelet[2688]: E0813 01:36:57.677051 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-jl9t6" Aug 13 01:36:57.677329 kubelet[2688]: E0813 01:36:57.677059 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-233-223-240" Aug 13 01:36:57.677329 kubelet[2688]: E0813 01:36:57.677068 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-233-223-240" Aug 13 01:36:57.677329 kubelet[2688]: I0813 01:36:57.677080 2688 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 01:37:03.185843 kubelet[2688]: E0813 01:37:03.183448 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Aug 13 01:37:04.181882 kubelet[2688]: E0813 01:37:04.180029 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Aug 13 01:37:07.709694 kubelet[2688]: I0813 01:37:07.709498 2688 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:37:07.709694 kubelet[2688]: I0813 01:37:07.709607 2688 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:37:07.716084 kubelet[2688]: I0813 01:37:07.715064 2688 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 01:37:07.739044 kubelet[2688]: I0813 01:37:07.739009 2688 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:37:07.739260 kubelet[2688]: I0813 01:37:07.739172 2688 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-fzpq4","kube-system/coredns-668d6bf9bc-955pz","kube-system/coredns-668d6bf9bc-nx2pw","kube-system/cilium-h64hf","kube-system/kube-controller-manager-172-233-223-240","kube-system/kube-proxy-jl9t6","kube-system/kube-apiserver-172-233-223-240","kube-system/kube-scheduler-172-233-223-240"] Aug 13 01:37:07.739260 kubelet[2688]: E0813 01:37:07.739231 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-fzpq4" Aug 13 01:37:07.739260 kubelet[2688]: E0813 01:37:07.739244 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-955pz" Aug 13 01:37:07.739260 kubelet[2688]: E0813 
01:37:07.739253 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-nx2pw" Aug 13 01:37:07.739260 kubelet[2688]: E0813 01:37:07.739262 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-h64hf" Aug 13 01:37:07.739260 kubelet[2688]: E0813 01:37:07.739271 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-233-223-240" Aug 13 01:37:07.739260 kubelet[2688]: E0813 01:37:07.739280 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-jl9t6" Aug 13 01:37:07.739533 kubelet[2688]: E0813 01:37:07.739289 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-233-223-240" Aug 13 01:37:07.739533 kubelet[2688]: E0813 01:37:07.739297 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-233-223-240" Aug 13 01:37:07.739533 kubelet[2688]: I0813 01:37:07.739309 2688 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 01:37:14.179379 kubelet[2688]: E0813 01:37:14.179335 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Aug 13 01:37:17.757555 kubelet[2688]: I0813 01:37:17.757509 2688 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:37:17.757555 kubelet[2688]: I0813 01:37:17.757556 2688 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:37:17.760752 kubelet[2688]: I0813 01:37:17.760723 2688 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 01:37:17.773767 kubelet[2688]: I0813 01:37:17.773703 2688 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:37:17.773979 kubelet[2688]: I0813 01:37:17.773854 2688 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-fzpq4","kube-system/coredns-668d6bf9bc-955pz","kube-system/coredns-668d6bf9bc-nx2pw","kube-system/cilium-h64hf","kube-system/kube-controller-manager-172-233-223-240","kube-system/kube-proxy-jl9t6","kube-system/kube-apiserver-172-233-223-240","kube-system/kube-scheduler-172-233-223-240"] Aug 13 01:37:17.773979 kubelet[2688]: E0813 01:37:17.773934 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-fzpq4" Aug 13 01:37:17.773979 kubelet[2688]: E0813 01:37:17.773949 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-955pz" Aug 13 01:37:17.773979 kubelet[2688]: E0813 01:37:17.773959 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-nx2pw" Aug 13 01:37:17.773979 kubelet[2688]: E0813 01:37:17.773970 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-h64hf" Aug 13 01:37:17.773979 kubelet[2688]: E0813 01:37:17.773979 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-233-223-240" Aug 13 01:37:17.773979 kubelet[2688]: E0813 01:37:17.773988 2688 eviction_manager.go:609] "Eviction manager: cannot evict a 
critical pod" pod="kube-system/kube-proxy-jl9t6" Aug 13 01:37:17.773979 kubelet[2688]: E0813 01:37:17.773999 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-233-223-240" Aug 13 01:37:17.773979 kubelet[2688]: E0813 01:37:17.774008 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-233-223-240" Aug 13 01:37:17.774313 kubelet[2688]: I0813 01:37:17.774019 2688 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 01:37:21.179324 kubelet[2688]: E0813 01:37:21.178807 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Aug 13 01:37:26.862232 systemd[1]: Started sshd@15-172.233.223.240:22-172.80.206.170:24497.service - OpenSSH per-connection server daemon (172.80.206.170:24497). Aug 13 01:37:27.182321 kubelet[2688]: E0813 01:37:27.182194 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Aug 13 01:37:27.601715 sshd[4068]: Connection closed by 172.80.206.170 port 24497 [preauth] Aug 13 01:37:27.604328 systemd[1]: sshd@15-172.233.223.240:22-172.80.206.170:24497.service: Deactivated successfully. Aug 13 01:37:27.796358 kubelet[2688]: I0813 01:37:27.796312 2688 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:37:27.796358 kubelet[2688]: I0813 01:37:27.796360 2688 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:37:27.799192 kubelet[2688]: I0813 01:37:27.799162 2688 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 01:37:27.813282 kubelet[2688]: I0813 01:37:27.813254 2688 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:37:27.813411 kubelet[2688]: I0813 01:37:27.813360 2688 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-fzpq4","kube-system/coredns-668d6bf9bc-955pz","kube-system/coredns-668d6bf9bc-nx2pw","kube-system/cilium-h64hf","kube-system/kube-controller-manager-172-233-223-240","kube-system/kube-proxy-jl9t6","kube-system/kube-apiserver-172-233-223-240","kube-system/kube-scheduler-172-233-223-240"] Aug 13 01:37:27.813411 kubelet[2688]: E0813 01:37:27.813392 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-fzpq4" Aug 13 01:37:27.813411 kubelet[2688]: E0813 01:37:27.813404 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-955pz" Aug 13 01:37:27.813411 kubelet[2688]: E0813 01:37:27.813413 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-nx2pw" Aug 13 01:37:27.813717 kubelet[2688]: E0813 01:37:27.813423 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-h64hf" Aug 13 01:37:27.813717 kubelet[2688]: E0813 01:37:27.813432 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-233-223-240" Aug 13 01:37:27.813717 kubelet[2688]: E0813 01:37:27.813440 2688 eviction_manager.go:609] "Eviction manager: cannot evict a 
critical pod" pod="kube-system/kube-proxy-jl9t6" Aug 13 01:37:27.813717 kubelet[2688]: E0813 01:37:27.813448 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-233-223-240" Aug 13 01:37:27.813717 kubelet[2688]: E0813 01:37:27.813456 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-233-223-240" Aug 13 01:37:27.813717 kubelet[2688]: I0813 01:37:27.813466 2688 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 01:37:34.673183 systemd[1]: Started sshd@16-172.233.223.240:22-139.178.89.65:45964.service - OpenSSH per-connection server daemon (139.178.89.65:45964). Aug 13 01:37:35.015700 sshd[4074]: Accepted publickey for core from 139.178.89.65 port 45964 ssh2: RSA SHA256:i/TZLewXLzdHEK5VnGeUosSnCiLBbfMeziqy3I8laao Aug 13 01:37:35.018077 sshd-session[4074]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:37:35.025506 systemd-logind[1468]: New session 10 of user core. Aug 13 01:37:35.029030 systemd[1]: Started session-10.scope - Session 10 of User core. Aug 13 01:37:35.379185 sshd[4076]: Connection closed by 139.178.89.65 port 45964 Aug 13 01:37:35.380357 sshd-session[4074]: pam_unix(sshd:session): session closed for user core Aug 13 01:37:35.386578 systemd[1]: sshd@16-172.233.223.240:22-139.178.89.65:45964.service: Deactivated successfully. Aug 13 01:37:35.390342 systemd[1]: session-10.scope: Deactivated successfully. Aug 13 01:37:35.391416 systemd-logind[1468]: Session 10 logged out. Waiting for processes to exit. Aug 13 01:37:35.392764 systemd-logind[1468]: Removed session 10. Aug 13 01:37:36.215758 systemd[1]: Started sshd@17-172.233.223.240:22-5.237.195.55:56984.service - OpenSSH per-connection server daemon (5.237.195.55:56984). Aug 13 01:37:36.961024 sshd[4089]: Connection closed by 5.237.195.55 port 56984 [preauth] Aug 13 01:37:36.963068 systemd[1]: sshd@17-172.233.223.240:22-5.237.195.55:56984.service: Deactivated successfully. 
Aug 13 01:37:37.835286 kubelet[2688]: I0813 01:37:37.835239 2688 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:37:37.835286 kubelet[2688]: I0813 01:37:37.835282 2688 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:37:37.838366 kubelet[2688]: I0813 01:37:37.838344 2688 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 01:37:37.839853 kubelet[2688]: I0813 01:37:37.839795 2688 image_gc_manager.go:487] "Removing image to free bytes" imageID="sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136" size=320368 runtimeHandler="" Aug 13 01:37:37.840711 containerd[1492]: time="2025-08-13T01:37:37.840621898Z" level=info msg="RemoveImage \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Aug 13 01:37:37.842127 containerd[1492]: time="2025-08-13T01:37:37.842081496Z" level=info msg="ImageDelete event name:\"registry.k8s.io/pause:3.10\"" Aug 13 01:37:37.842641 containerd[1492]: time="2025-08-13T01:37:37.842589055Z" level=info msg="ImageDelete event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\"" Aug 13 01:37:37.843197 containerd[1492]: time="2025-08-13T01:37:37.843170404Z" level=info msg="ImageDelete event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Aug 13 01:37:37.852225 containerd[1492]: time="2025-08-13T01:37:37.852183437Z" level=info msg="RemoveImage \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" returns successfully" Aug 13 01:37:37.852512 kubelet[2688]: I0813 01:37:37.852467 2688 image_gc_manager.go:487] "Removing image to free bytes" imageID="sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc" size=57680541 runtimeHandler="" Aug 13 01:37:37.852746 containerd[1492]: time="2025-08-13T01:37:37.852719276Z" level=info msg="RemoveImage \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Aug 13 01:37:37.853567 containerd[1492]: time="2025-08-13T01:37:37.853538814Z" level=info msg="ImageDelete event name:\"registry.k8s.io/etcd:3.5.16-0\"" Aug 13 01:37:37.854037 containerd[1492]: time="2025-08-13T01:37:37.854008133Z" level=info msg="ImageDelete event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\"" Aug 13 01:37:37.855728 containerd[1492]: time="2025-08-13T01:37:37.854492062Z" level=info msg="ImageDelete event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Aug 13 01:37:37.936750 containerd[1492]: time="2025-08-13T01:37:37.936686478Z" level=info msg="RemoveImage \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" returns successfully" Aug 13 01:37:37.949441 kubelet[2688]: I0813 01:37:37.949395 2688 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:37:37.949559 kubelet[2688]: I0813 01:37:37.949539 2688 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-fzpq4","kube-system/coredns-668d6bf9bc-955pz","kube-system/coredns-668d6bf9bc-nx2pw","kube-system/cilium-h64hf","kube-system/kube-controller-manager-172-233-223-240","kube-system/kube-proxy-jl9t6","kube-system/kube-apiserver-172-233-223-240","kube-system/kube-scheduler-172-233-223-240"] Aug 13 01:37:37.949617 kubelet[2688]: E0813 01:37:37.949580 2688 eviction_manager.go:609] "Eviction manager: cannot evict a 
critical pod" pod="kube-system/cilium-operator-6c4d7847fc-fzpq4" Aug 13 01:37:37.949617 kubelet[2688]: E0813 01:37:37.949597 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-955pz" Aug 13 01:37:37.949617 kubelet[2688]: E0813 01:37:37.949608 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-nx2pw" Aug 13 01:37:37.949617 kubelet[2688]: E0813 01:37:37.949617 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-h64hf" Aug 13 01:37:37.949717 kubelet[2688]: E0813 01:37:37.949626 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-233-223-240" Aug 13 01:37:37.949717 kubelet[2688]: E0813 01:37:37.949635 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-jl9t6" Aug 13 01:37:37.949717 kubelet[2688]: E0813 01:37:37.949643 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-233-223-240" Aug 13 01:37:37.949717 kubelet[2688]: E0813 01:37:37.949651 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-233-223-240" Aug 13 01:37:37.949717 kubelet[2688]: I0813 01:37:37.949661 2688 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 01:37:38.488460 systemd[1]: Started sshd@18-172.233.223.240:22-23.236.201.80:49512.service - OpenSSH per-connection server daemon (23.236.201.80:49512). Aug 13 01:37:39.460759 sshd[4096]: Connection closed by 23.236.201.80 port 49512 [preauth] Aug 13 01:37:39.462629 systemd[1]: sshd@18-172.233.223.240:22-23.236.201.80:49512.service: Deactivated successfully. Aug 13 01:37:40.179523 kubelet[2688]: E0813 01:37:40.179397 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Aug 13 01:37:40.446335 systemd[1]: Started sshd@19-172.233.223.240:22-139.178.89.65:56748.service - OpenSSH per-connection server daemon (139.178.89.65:56748). Aug 13 01:37:40.784945 sshd[4101]: Accepted publickey for core from 139.178.89.65 port 56748 ssh2: RSA SHA256:i/TZLewXLzdHEK5VnGeUosSnCiLBbfMeziqy3I8laao Aug 13 01:37:40.786909 sshd-session[4101]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:37:40.793480 systemd-logind[1468]: New session 11 of user core. Aug 13 01:37:40.800045 systemd[1]: Started session-11.scope - Session 11 of User core. Aug 13 01:37:41.097347 sshd[4103]: Connection closed by 139.178.89.65 port 56748 Aug 13 01:37:41.098271 sshd-session[4101]: pam_unix(sshd:session): session closed for user core Aug 13 01:37:41.103198 systemd-logind[1468]: Session 11 logged out. Waiting for processes to exit. Aug 13 01:37:41.104084 systemd[1]: sshd@19-172.233.223.240:22-139.178.89.65:56748.service: Deactivated successfully. Aug 13 01:37:41.107532 systemd[1]: session-11.scope: Deactivated successfully. Aug 13 01:37:41.109069 systemd-logind[1468]: Removed session 11. Aug 13 01:37:46.163383 systemd[1]: Started sshd@20-172.233.223.240:22-139.178.89.65:56764.service - OpenSSH per-connection server daemon (139.178.89.65:56764). 
Aug 13 01:37:46.497957 sshd[4117]: Accepted publickey for core from 139.178.89.65 port 56764 ssh2: RSA SHA256:i/TZLewXLzdHEK5VnGeUosSnCiLBbfMeziqy3I8laao Aug 13 01:37:46.500066 sshd-session[4117]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:37:46.507189 systemd-logind[1468]: New session 12 of user core. Aug 13 01:37:46.511169 systemd[1]: Started session-12.scope - Session 12 of User core. Aug 13 01:37:46.810165 sshd[4119]: Connection closed by 139.178.89.65 port 56764 Aug 13 01:37:46.811241 sshd-session[4117]: pam_unix(sshd:session): session closed for user core Aug 13 01:37:46.816628 systemd[1]: sshd@20-172.233.223.240:22-139.178.89.65:56764.service: Deactivated successfully. Aug 13 01:37:46.820023 systemd[1]: session-12.scope: Deactivated successfully. Aug 13 01:37:46.821022 systemd-logind[1468]: Session 12 logged out. Waiting for processes to exit. Aug 13 01:37:46.822777 systemd-logind[1468]: Removed session 12. Aug 13 01:37:47.968838 kubelet[2688]: I0813 01:37:47.968806 2688 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:37:47.968838 kubelet[2688]: I0813 01:37:47.968843 2688 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:37:47.971378 kubelet[2688]: I0813 01:37:47.971281 2688 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 01:37:47.985221 kubelet[2688]: I0813 01:37:47.985196 2688 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:37:47.985418 kubelet[2688]: I0813 01:37:47.985384 2688 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-fzpq4","kube-system/coredns-668d6bf9bc-955pz","kube-system/coredns-668d6bf9bc-nx2pw","kube-system/cilium-h64hf","kube-system/kube-controller-manager-172-233-223-240","kube-system/kube-proxy-jl9t6","kube-system/kube-apiserver-172-233-223-240","kube-system/kube-scheduler-172-233-223-240"] Aug 13 01:37:47.985463 kubelet[2688]: E0813 01:37:47.985425 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-fzpq4" Aug 13 01:37:47.985463 kubelet[2688]: E0813 01:37:47.985440 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-955pz" Aug 13 01:37:47.985463 kubelet[2688]: E0813 01:37:47.985453 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-nx2pw" Aug 13 01:37:47.985548 kubelet[2688]: E0813 01:37:47.985465 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-h64hf" Aug 13 01:37:47.985548 kubelet[2688]: E0813 01:37:47.985479 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-233-223-240" Aug 13 01:37:47.985548 kubelet[2688]: E0813 01:37:47.985491 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-jl9t6" Aug 13 01:37:47.985548 kubelet[2688]: E0813 01:37:47.985504 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-233-223-240" Aug 13 01:37:47.985548 kubelet[2688]: E0813 01:37:47.985515 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-233-223-240" Aug 13 01:37:47.985548 kubelet[2688]: I0813 01:37:47.985530 
2688 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 01:37:51.882236 systemd[1]: Started sshd@21-172.233.223.240:22-139.178.89.65:60826.service - OpenSSH per-connection server daemon (139.178.89.65:60826). Aug 13 01:37:52.211513 sshd[4133]: Accepted publickey for core from 139.178.89.65 port 60826 ssh2: RSA SHA256:i/TZLewXLzdHEK5VnGeUosSnCiLBbfMeziqy3I8laao Aug 13 01:37:52.212938 sshd-session[4133]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:37:52.218035 systemd-logind[1468]: New session 13 of user core. Aug 13 01:37:52.226018 systemd[1]: Started session-13.scope - Session 13 of User core. Aug 13 01:37:52.540266 sshd[4135]: Connection closed by 139.178.89.65 port 60826 Aug 13 01:37:52.540961 sshd-session[4133]: pam_unix(sshd:session): session closed for user core Aug 13 01:37:52.546763 systemd[1]: sshd@21-172.233.223.240:22-139.178.89.65:60826.service: Deactivated successfully. Aug 13 01:37:52.549933 systemd[1]: session-13.scope: Deactivated successfully. Aug 13 01:37:52.550800 systemd-logind[1468]: Session 13 logged out. Waiting for processes to exit. Aug 13 01:37:52.552012 systemd-logind[1468]: Removed session 13. Aug 13 01:37:52.608219 systemd[1]: Started sshd@22-172.233.223.240:22-139.178.89.65:60840.service - OpenSSH per-connection server daemon (139.178.89.65:60840). Aug 13 01:37:52.939569 sshd[4148]: Accepted publickey for core from 139.178.89.65 port 60840 ssh2: RSA SHA256:i/TZLewXLzdHEK5VnGeUosSnCiLBbfMeziqy3I8laao Aug 13 01:37:52.941507 sshd-session[4148]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:37:52.946874 systemd-logind[1468]: New session 14 of user core. Aug 13 01:37:52.955035 systemd[1]: Started session-14.scope - Session 14 of User core. Aug 13 01:37:53.312775 sshd[4150]: Connection closed by 139.178.89.65 port 60840 Aug 13 01:37:53.313912 sshd-session[4148]: pam_unix(sshd:session): session closed for user core Aug 13 01:37:53.319721 systemd[1]: sshd@22-172.233.223.240:22-139.178.89.65:60840.service: Deactivated successfully. Aug 13 01:37:53.323783 systemd[1]: session-14.scope: Deactivated successfully. Aug 13 01:37:53.324933 systemd-logind[1468]: Session 14 logged out. Waiting for processes to exit. Aug 13 01:37:53.326283 systemd-logind[1468]: Removed session 14. Aug 13 01:37:53.385912 systemd[1]: Started sshd@23-172.233.223.240:22-139.178.89.65:60856.service - OpenSSH per-connection server daemon (139.178.89.65:60856). Aug 13 01:37:53.721988 sshd[4160]: Accepted publickey for core from 139.178.89.65 port 60856 ssh2: RSA SHA256:i/TZLewXLzdHEK5VnGeUosSnCiLBbfMeziqy3I8laao Aug 13 01:37:53.724499 sshd-session[4160]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:37:53.731692 systemd-logind[1468]: New session 15 of user core. Aug 13 01:37:53.739109 systemd[1]: Started session-15.scope - Session 15 of User core. Aug 13 01:37:54.040563 sshd[4162]: Connection closed by 139.178.89.65 port 60856 Aug 13 01:37:54.041306 sshd-session[4160]: pam_unix(sshd:session): session closed for user core Aug 13 01:37:54.045374 systemd[1]: sshd@23-172.233.223.240:22-139.178.89.65:60856.service: Deactivated successfully. Aug 13 01:37:54.047987 systemd[1]: session-15.scope: Deactivated successfully. Aug 13 01:37:54.050184 systemd-logind[1468]: Session 15 logged out. Waiting for processes to exit. Aug 13 01:37:54.051560 systemd-logind[1468]: Removed session 15. 
Aug 13 01:37:55.724182 systemd[1]: Started sshd@24-172.233.223.240:22-60.166.31.198:56238.service - OpenSSH per-connection server daemon (60.166.31.198:56238). Aug 13 01:37:58.005773 kubelet[2688]: I0813 01:37:58.005730 2688 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:37:58.006263 kubelet[2688]: I0813 01:37:58.005798 2688 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:37:58.007964 kubelet[2688]: I0813 01:37:58.007848 2688 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 01:37:58.019317 kubelet[2688]: I0813 01:37:58.019278 2688 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:37:58.019425 kubelet[2688]: I0813 01:37:58.019407 2688 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-fzpq4","kube-system/coredns-668d6bf9bc-955pz","kube-system/coredns-668d6bf9bc-nx2pw","kube-system/cilium-h64hf","kube-system/kube-controller-manager-172-233-223-240","kube-system/kube-proxy-jl9t6","kube-system/kube-apiserver-172-233-223-240","kube-system/kube-scheduler-172-233-223-240"] Aug 13 01:37:58.019484 kubelet[2688]: E0813 01:37:58.019445 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-fzpq4" Aug 13 01:37:58.019484 kubelet[2688]: E0813 01:37:58.019456 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-955pz" Aug 13 01:37:58.019484 kubelet[2688]: E0813 01:37:58.019465 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-nx2pw" Aug 13 01:37:58.019484 kubelet[2688]: E0813 01:37:58.019475 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-h64hf" Aug 13 01:37:58.019484 kubelet[2688]: E0813 01:37:58.019483 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-233-223-240" Aug 13 01:37:58.019625 kubelet[2688]: E0813 01:37:58.019492 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-jl9t6" Aug 13 01:37:58.019625 kubelet[2688]: E0813 01:37:58.019501 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-233-223-240" Aug 13 01:37:58.019625 kubelet[2688]: E0813 01:37:58.019509 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-233-223-240" Aug 13 01:37:58.019625 kubelet[2688]: I0813 01:37:58.019520 2688 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 01:37:59.104138 systemd[1]: Started sshd@25-172.233.223.240:22-139.178.89.65:42696.service - OpenSSH per-connection server daemon (139.178.89.65:42696). Aug 13 01:37:59.435392 sshd[4177]: Accepted publickey for core from 139.178.89.65 port 42696 ssh2: RSA SHA256:i/TZLewXLzdHEK5VnGeUosSnCiLBbfMeziqy3I8laao Aug 13 01:37:59.437283 sshd-session[4177]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:37:59.443817 systemd-logind[1468]: New session 16 of user core. Aug 13 01:37:59.452084 systemd[1]: Started session-16.scope - Session 16 of User core. 
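Note the cadence of these reclaim passes: 01:36:07, :17, :27, :37, :47, :57, 01:37:07, and so on through 01:38:08, i.e. one pass per housekeeping interval, each with the same outcome while the node stays under ephemeral-storage pressure. A minimal sketch of such a monitoring loop, assuming the default 10-second interval (the kubelet's real loop does far more per tick):

package main

import (
	"fmt"
	"time"
)

func main() {
	// Assumed 10s eviction-monitoring interval.
	ticker := time.NewTicker(10 * time.Second)
	defer ticker.Stop()
	for i := 0; i < 3; i++ {
		<-ticker.C
		fmt.Println("attempting to reclaim ephemeral-storage; unable to evict any pods from the node")
	}
}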
Aug 13 01:37:59.741077 sshd[4179]: Connection closed by 139.178.89.65 port 42696 Aug 13 01:37:59.741787 sshd-session[4177]: pam_unix(sshd:session): session closed for user core Aug 13 01:37:59.745823 systemd-logind[1468]: Session 16 logged out. Waiting for processes to exit. Aug 13 01:37:59.747008 systemd[1]: sshd@25-172.233.223.240:22-139.178.89.65:42696.service: Deactivated successfully. Aug 13 01:37:59.749406 systemd[1]: session-16.scope: Deactivated successfully. Aug 13 01:37:59.750485 systemd-logind[1468]: Removed session 16. Aug 13 01:37:59.808154 systemd[1]: Started sshd@26-172.233.223.240:22-139.178.89.65:42700.service - OpenSSH per-connection server daemon (139.178.89.65:42700). Aug 13 01:38:00.148996 sshd[4191]: Accepted publickey for core from 139.178.89.65 port 42700 ssh2: RSA SHA256:i/TZLewXLzdHEK5VnGeUosSnCiLBbfMeziqy3I8laao Aug 13 01:38:00.150657 sshd-session[4191]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:38:00.155791 systemd-logind[1468]: New session 17 of user core. Aug 13 01:38:00.162030 systemd[1]: Started session-17.scope - Session 17 of User core. Aug 13 01:38:00.522192 sshd-session[4193]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=60.166.31.198 user=root Aug 13 01:38:00.740536 sshd[4194]: Connection closed by 139.178.89.65 port 42700 Aug 13 01:38:00.741457 sshd-session[4191]: pam_unix(sshd:session): session closed for user core Aug 13 01:38:00.746334 systemd-logind[1468]: Session 17 logged out. Waiting for processes to exit. Aug 13 01:38:00.747623 systemd[1]: sshd@26-172.233.223.240:22-139.178.89.65:42700.service: Deactivated successfully. Aug 13 01:38:00.750573 systemd[1]: session-17.scope: Deactivated successfully. Aug 13 01:38:00.752204 systemd-logind[1468]: Removed session 17. Aug 13 01:38:00.815946 systemd[1]: Started sshd@27-172.233.223.240:22-139.178.89.65:42710.service - OpenSSH per-connection server daemon (139.178.89.65:42710). Aug 13 01:38:01.152959 sshd[4204]: Accepted publickey for core from 139.178.89.65 port 42710 ssh2: RSA SHA256:i/TZLewXLzdHEK5VnGeUosSnCiLBbfMeziqy3I8laao Aug 13 01:38:01.154316 sshd-session[4204]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:38:01.159063 systemd-logind[1468]: New session 18 of user core. Aug 13 01:38:01.170141 systemd[1]: Started session-18.scope - Session 18 of User core. Aug 13 01:38:01.179984 kubelet[2688]: E0813 01:38:01.179010 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Aug 13 01:38:01.998812 sshd[4206]: Connection closed by 139.178.89.65 port 42710 Aug 13 01:38:01.999514 sshd-session[4204]: pam_unix(sshd:session): session closed for user core Aug 13 01:38:02.004491 systemd-logind[1468]: Session 18 logged out. Waiting for processes to exit. Aug 13 01:38:02.005205 systemd[1]: sshd@27-172.233.223.240:22-139.178.89.65:42710.service: Deactivated successfully. Aug 13 01:38:02.008235 systemd[1]: session-18.scope: Deactivated successfully. Aug 13 01:38:02.009641 systemd-logind[1468]: Removed session 18. Aug 13 01:38:02.076287 systemd[1]: Started sshd@28-172.233.223.240:22-139.178.89.65:42720.service - OpenSSH per-connection server daemon (139.178.89.65:42720). 
Aug 13 01:38:02.406624 sshd[4223]: Accepted publickey for core from 139.178.89.65 port 42720 ssh2: RSA SHA256:i/TZLewXLzdHEK5VnGeUosSnCiLBbfMeziqy3I8laao Aug 13 01:38:02.409031 sshd-session[4223]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:38:02.415508 systemd-logind[1468]: New session 19 of user core. Aug 13 01:38:02.425052 sshd[4174]: PAM: Permission denied for root from 60.166.31.198 Aug 13 01:38:02.425271 systemd[1]: Started session-19.scope - Session 19 of User core. Aug 13 01:38:02.823803 sshd[4225]: Connection closed by 139.178.89.65 port 42720 Aug 13 01:38:02.824606 sshd-session[4223]: pam_unix(sshd:session): session closed for user core Aug 13 01:38:02.829151 systemd-logind[1468]: Session 19 logged out. Waiting for processes to exit. Aug 13 01:38:02.830476 systemd[1]: sshd@28-172.233.223.240:22-139.178.89.65:42720.service: Deactivated successfully. Aug 13 01:38:02.833388 systemd[1]: session-19.scope: Deactivated successfully. Aug 13 01:38:02.835046 systemd-logind[1468]: Removed session 19. Aug 13 01:38:02.898196 systemd[1]: Started sshd@29-172.233.223.240:22-139.178.89.65:42734.service - OpenSSH per-connection server daemon (139.178.89.65:42734). Aug 13 01:38:03.122421 sshd[4174]: Connection closed by authenticating user root 60.166.31.198 port 56238 [preauth] Aug 13 01:38:03.124514 systemd[1]: sshd@24-172.233.223.240:22-60.166.31.198:56238.service: Deactivated successfully. Aug 13 01:38:03.237237 sshd[4235]: Accepted publickey for core from 139.178.89.65 port 42734 ssh2: RSA SHA256:i/TZLewXLzdHEK5VnGeUosSnCiLBbfMeziqy3I8laao Aug 13 01:38:03.239077 sshd-session[4235]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:38:03.245052 systemd-logind[1468]: New session 20 of user core. Aug 13 01:38:03.251093 systemd[1]: Started session-20.scope - Session 20 of User core. Aug 13 01:38:03.541538 sshd[4239]: Connection closed by 139.178.89.65 port 42734 Aug 13 01:38:03.542222 sshd-session[4235]: pam_unix(sshd:session): session closed for user core Aug 13 01:38:03.546796 systemd[1]: sshd@29-172.233.223.240:22-139.178.89.65:42734.service: Deactivated successfully. Aug 13 01:38:03.549340 systemd[1]: session-20.scope: Deactivated successfully. Aug 13 01:38:03.550228 systemd-logind[1468]: Session 20 logged out. Waiting for processes to exit. Aug 13 01:38:03.551451 systemd-logind[1468]: Removed session 20. 
Aug 13 01:38:04.178978 kubelet[2688]: E0813 01:38:04.178921 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Aug 13 01:38:06.180168 kubelet[2688]: E0813 01:38:06.179982 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Aug 13 01:38:08.049367 kubelet[2688]: I0813 01:38:08.049312 2688 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:38:08.049367 kubelet[2688]: I0813 01:38:08.049370 2688 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:38:08.052293 kubelet[2688]: I0813 01:38:08.052128 2688 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 01:38:08.068293 kubelet[2688]: I0813 01:38:08.068248 2688 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:38:08.068530 kubelet[2688]: I0813 01:38:08.068493 2688 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-fzpq4","kube-system/coredns-668d6bf9bc-955pz","kube-system/coredns-668d6bf9bc-nx2pw","kube-system/cilium-h64hf","kube-system/kube-controller-manager-172-233-223-240","kube-system/kube-proxy-jl9t6","kube-system/kube-apiserver-172-233-223-240","kube-system/kube-scheduler-172-233-223-240"] Aug 13 01:38:08.068581 kubelet[2688]: E0813 01:38:08.068540 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-fzpq4" Aug 13 01:38:08.068581 kubelet[2688]: E0813 01:38:08.068558 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-955pz" Aug 13 01:38:08.068581 kubelet[2688]: E0813 01:38:08.068571 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-nx2pw" Aug 13 01:38:08.068800 kubelet[2688]: E0813 01:38:08.068584 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-h64hf" Aug 13 01:38:08.068800 kubelet[2688]: E0813 01:38:08.068597 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-233-223-240" Aug 13 01:38:08.068800 kubelet[2688]: E0813 01:38:08.068612 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-jl9t6" Aug 13 01:38:08.068800 kubelet[2688]: E0813 01:38:08.068625 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-233-223-240" Aug 13 01:38:08.068800 kubelet[2688]: E0813 01:38:08.068637 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-233-223-240" Aug 13 01:38:08.068800 kubelet[2688]: I0813 01:38:08.068652 2688 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 01:38:08.612393 systemd[1]: Started sshd@30-172.233.223.240:22-139.178.89.65:42736.service - OpenSSH per-connection
server daemon (139.178.89.65:42736). Aug 13 01:38:08.946882 sshd[4254]: Accepted publickey for core from 139.178.89.65 port 42736 ssh2: RSA SHA256:i/TZLewXLzdHEK5VnGeUosSnCiLBbfMeziqy3I8laao Aug 13 01:38:08.949077 sshd-session[4254]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:38:08.955562 systemd-logind[1468]: New session 21 of user core. Aug 13 01:38:08.967068 systemd[1]: Started session-21.scope - Session 21 of User core. Aug 13 01:38:09.266747 sshd[4256]: Connection closed by 139.178.89.65 port 42736 Aug 13 01:38:09.267486 sshd-session[4254]: pam_unix(sshd:session): session closed for user core Aug 13 01:38:09.271811 systemd[1]: sshd@30-172.233.223.240:22-139.178.89.65:42736.service: Deactivated successfully. Aug 13 01:38:09.274039 systemd[1]: session-21.scope: Deactivated successfully. Aug 13 01:38:09.274950 systemd-logind[1468]: Session 21 logged out. Waiting for processes to exit. Aug 13 01:38:09.276795 systemd-logind[1468]: Removed session 21. Aug 13 01:38:14.332183 systemd[1]: Started sshd@31-172.233.223.240:22-139.178.89.65:50196.service - OpenSSH per-connection server daemon (139.178.89.65:50196). Aug 13 01:38:14.665469 sshd[4268]: Accepted publickey for core from 139.178.89.65 port 50196 ssh2: RSA SHA256:i/TZLewXLzdHEK5VnGeUosSnCiLBbfMeziqy3I8laao Aug 13 01:38:14.667360 sshd-session[4268]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:38:14.673872 systemd-logind[1468]: New session 22 of user core. Aug 13 01:38:14.677067 systemd[1]: Started session-22.scope - Session 22 of User core. Aug 13 01:38:14.970920 sshd[4270]: Connection closed by 139.178.89.65 port 50196 Aug 13 01:38:14.971631 sshd-session[4268]: pam_unix(sshd:session): session closed for user core Aug 13 01:38:14.977424 systemd[1]: sshd@31-172.233.223.240:22-139.178.89.65:50196.service: Deactivated successfully. Aug 13 01:38:14.980513 systemd[1]: session-22.scope: Deactivated successfully. Aug 13 01:38:14.981815 systemd-logind[1468]: Session 22 logged out. Waiting for processes to exit. Aug 13 01:38:14.983087 systemd-logind[1468]: Removed session 22. 
Aug 13 01:38:18.089126 kubelet[2688]: I0813 01:38:18.089066 2688 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:38:18.090465 kubelet[2688]: I0813 01:38:18.089147 2688 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:38:18.091979 kubelet[2688]: I0813 01:38:18.091693 2688 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 01:38:18.106488 kubelet[2688]: I0813 01:38:18.106453 2688 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:38:18.106635 kubelet[2688]: I0813 01:38:18.106616 2688 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-fzpq4","kube-system/coredns-668d6bf9bc-nx2pw","kube-system/coredns-668d6bf9bc-955pz","kube-system/cilium-h64hf","kube-system/kube-controller-manager-172-233-223-240","kube-system/kube-proxy-jl9t6","kube-system/kube-apiserver-172-233-223-240","kube-system/kube-scheduler-172-233-223-240"] Aug 13 01:38:18.106672 kubelet[2688]: E0813 01:38:18.106659 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-fzpq4" Aug 13 01:38:18.106672 kubelet[2688]: E0813 01:38:18.106671 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-nx2pw" Aug 13 01:38:18.106739 kubelet[2688]: E0813 01:38:18.106680 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-955pz" Aug 13 01:38:18.106739 kubelet[2688]: E0813 01:38:18.106689 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-h64hf" Aug 13 01:38:18.106739 kubelet[2688]: E0813 01:38:18.106698 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-233-223-240" Aug 13 01:38:18.106739 kubelet[2688]: E0813 01:38:18.106706 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-jl9t6" Aug 13 01:38:18.106739 kubelet[2688]: E0813 01:38:18.106714 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-233-223-240" Aug 13 01:38:18.106739 kubelet[2688]: E0813 01:38:18.106722 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-233-223-240" Aug 13 01:38:18.106739 kubelet[2688]: I0813 01:38:18.106732 2688 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 01:38:20.041147 systemd[1]: Started sshd@32-172.233.223.240:22-139.178.89.65:46718.service - OpenSSH per-connection server daemon (139.178.89.65:46718). Aug 13 01:38:20.373770 sshd[4284]: Accepted publickey for core from 139.178.89.65 port 46718 ssh2: RSA SHA256:i/TZLewXLzdHEK5VnGeUosSnCiLBbfMeziqy3I8laao Aug 13 01:38:20.375568 sshd-session[4284]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:38:20.381877 systemd-logind[1468]: New session 23 of user core. Aug 13 01:38:20.386086 systemd[1]: Started session-23.scope - Session 23 of User core. 
Aug 13 01:38:20.689176 sshd[4286]: Connection closed by 139.178.89.65 port 46718 Aug 13 01:38:20.690440 sshd-session[4284]: pam_unix(sshd:session): session closed for user core Aug 13 01:38:20.695353 systemd[1]: sshd@32-172.233.223.240:22-139.178.89.65:46718.service: Deactivated successfully. Aug 13 01:38:20.698472 systemd[1]: session-23.scope: Deactivated successfully. Aug 13 01:38:20.699639 systemd-logind[1468]: Session 23 logged out. Waiting for processes to exit. Aug 13 01:38:20.700662 systemd-logind[1468]: Removed session 23. Aug 13 01:38:23.957879 systemd[1]: Started sshd@33-172.233.223.240:22-49.13.136.222:32886.service - OpenSSH per-connection server daemon (49.13.136.222:32886). Aug 13 01:38:25.466701 sshd[4298]: Connection closed by 49.13.136.222 port 32886 [preauth] Aug 13 01:38:25.469035 systemd[1]: sshd@33-172.233.223.240:22-49.13.136.222:32886.service: Deactivated successfully. Aug 13 01:38:25.757253 systemd[1]: Started sshd@34-172.233.223.240:22-139.178.89.65:46734.service - OpenSSH per-connection server daemon (139.178.89.65:46734). Aug 13 01:38:26.090227 sshd[4306]: Accepted publickey for core from 139.178.89.65 port 46734 ssh2: RSA SHA256:i/TZLewXLzdHEK5VnGeUosSnCiLBbfMeziqy3I8laao Aug 13 01:38:26.092145 sshd-session[4306]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:38:26.098166 systemd-logind[1468]: New session 24 of user core. Aug 13 01:38:26.104058 systemd[1]: Started session-24.scope - Session 24 of User core. Aug 13 01:38:26.405966 sshd[4308]: Connection closed by 139.178.89.65 port 46734 Aug 13 01:38:26.407225 sshd-session[4306]: pam_unix(sshd:session): session closed for user core Aug 13 01:38:26.411786 systemd[1]: sshd@34-172.233.223.240:22-139.178.89.65:46734.service: Deactivated successfully. Aug 13 01:38:26.413996 systemd[1]: session-24.scope: Deactivated successfully. Aug 13 01:38:26.415682 systemd-logind[1468]: Session 24 logged out. Waiting for processes to exit. Aug 13 01:38:26.417543 systemd-logind[1468]: Removed session 24. 
Aug 13 01:38:28.126764 kubelet[2688]: I0813 01:38:28.126725 2688 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:38:28.126764 kubelet[2688]: I0813 01:38:28.126767 2688 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:38:28.129517 kubelet[2688]: I0813 01:38:28.129481 2688 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 01:38:28.142021 kubelet[2688]: I0813 01:38:28.141991 2688 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:38:28.142348 kubelet[2688]: I0813 01:38:28.142143 2688 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-fzpq4","kube-system/coredns-668d6bf9bc-955pz","kube-system/coredns-668d6bf9bc-nx2pw","kube-system/cilium-h64hf","kube-system/kube-controller-manager-172-233-223-240","kube-system/kube-proxy-jl9t6","kube-system/kube-apiserver-172-233-223-240","kube-system/kube-scheduler-172-233-223-240"] Aug 13 01:38:28.142348 kubelet[2688]: E0813 01:38:28.142175 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-fzpq4" Aug 13 01:38:28.142348 kubelet[2688]: E0813 01:38:28.142187 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-955pz" Aug 13 01:38:28.142348 kubelet[2688]: E0813 01:38:28.142196 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-nx2pw" Aug 13 01:38:28.142348 kubelet[2688]: E0813 01:38:28.142210 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-h64hf" Aug 13 01:38:28.142348 kubelet[2688]: E0813 01:38:28.142220 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-233-223-240" Aug 13 01:38:28.142348 kubelet[2688]: E0813 01:38:28.142229 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-jl9t6" Aug 13 01:38:28.142348 kubelet[2688]: E0813 01:38:28.142237 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-233-223-240" Aug 13 01:38:28.142348 kubelet[2688]: E0813 01:38:28.142245 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-233-223-240" Aug 13 01:38:28.142348 kubelet[2688]: I0813 01:38:28.142261 2688 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 01:38:31.473215 systemd[1]: Started sshd@35-172.233.223.240:22-139.178.89.65:54390.service - OpenSSH per-connection server daemon (139.178.89.65:54390). Aug 13 01:38:31.797198 sshd[4320]: Accepted publickey for core from 139.178.89.65 port 54390 ssh2: RSA SHA256:i/TZLewXLzdHEK5VnGeUosSnCiLBbfMeziqy3I8laao Aug 13 01:38:31.798848 sshd-session[4320]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:38:31.804334 systemd-logind[1468]: New session 25 of user core. Aug 13 01:38:31.815098 systemd[1]: Started session-25.scope - Session 25 of User core. Aug 13 01:38:32.097080 sshd[4322]: Connection closed by 139.178.89.65 port 54390 Aug 13 01:38:32.098136 sshd-session[4320]: pam_unix(sshd:session): session closed for user core Aug 13 01:38:32.102330 systemd-logind[1468]: Session 25 logged out. Waiting for processes to exit. 
Aug 13 01:38:32.103328 systemd[1]: sshd@35-172.233.223.240:22-139.178.89.65:54390.service: Deactivated successfully. Aug 13 01:38:32.105666 systemd[1]: session-25.scope: Deactivated successfully. Aug 13 01:38:32.106793 systemd-logind[1468]: Removed session 25. Aug 13 01:38:35.233140 systemd[1]: Started sshd@36-172.233.223.240:22-125.69.76.148:51044.service - OpenSSH per-connection server daemon (125.69.76.148:51044). Aug 13 01:38:37.162122 systemd[1]: Started sshd@37-172.233.223.240:22-139.178.89.65:54392.service - OpenSSH per-connection server daemon (139.178.89.65:54392). Aug 13 01:38:37.181868 kubelet[2688]: E0813 01:38:37.181554 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Aug 13 01:38:37.495256 sshd[4336]: Accepted publickey for core from 139.178.89.65 port 54392 ssh2: RSA SHA256:i/TZLewXLzdHEK5VnGeUosSnCiLBbfMeziqy3I8laao Aug 13 01:38:37.497008 sshd-session[4336]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:38:37.503032 systemd-logind[1468]: New session 26 of user core. Aug 13 01:38:37.512078 systemd[1]: Started session-26.scope - Session 26 of User core. Aug 13 01:38:37.800949 sshd[4340]: Connection closed by 139.178.89.65 port 54392 Aug 13 01:38:37.801963 sshd-session[4336]: pam_unix(sshd:session): session closed for user core Aug 13 01:38:37.806281 systemd-logind[1468]: Session 26 logged out. Waiting for processes to exit. Aug 13 01:38:37.807216 systemd[1]: sshd@37-172.233.223.240:22-139.178.89.65:54392.service: Deactivated successfully. Aug 13 01:38:37.810746 systemd[1]: session-26.scope: Deactivated successfully. Aug 13 01:38:37.812378 systemd-logind[1468]: Removed session 26. 
Aug 13 01:38:38.163136 kubelet[2688]: I0813 01:38:38.161684 2688 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:38:38.163136 kubelet[2688]: I0813 01:38:38.161726 2688 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:38:38.165319 kubelet[2688]: I0813 01:38:38.165036 2688 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 01:38:38.183322 kubelet[2688]: I0813 01:38:38.183284 2688 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:38:38.183682 kubelet[2688]: I0813 01:38:38.183416 2688 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-fzpq4","kube-system/coredns-668d6bf9bc-nx2pw","kube-system/coredns-668d6bf9bc-955pz","kube-system/cilium-h64hf","kube-system/kube-controller-manager-172-233-223-240","kube-system/kube-proxy-jl9t6","kube-system/kube-apiserver-172-233-223-240","kube-system/kube-scheduler-172-233-223-240"] Aug 13 01:38:38.183682 kubelet[2688]: E0813 01:38:38.183445 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-fzpq4" Aug 13 01:38:38.183682 kubelet[2688]: E0813 01:38:38.183459 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-nx2pw" Aug 13 01:38:38.183682 kubelet[2688]: E0813 01:38:38.183467 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-955pz" Aug 13 01:38:38.183682 kubelet[2688]: E0813 01:38:38.183477 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-h64hf" Aug 13 01:38:38.183682 kubelet[2688]: E0813 01:38:38.183486 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-233-223-240" Aug 13 01:38:38.183682 kubelet[2688]: E0813 01:38:38.183495 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-jl9t6" Aug 13 01:38:38.183682 kubelet[2688]: E0813 01:38:38.183502 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-233-223-240" Aug 13 01:38:38.183682 kubelet[2688]: E0813 01:38:38.183510 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-233-223-240" Aug 13 01:38:38.183682 kubelet[2688]: I0813 01:38:38.183521 2688 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 01:38:39.180126 kubelet[2688]: E0813 01:38:39.179324 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Aug 13 01:38:40.268919 sshd-session[4354]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=125.69.76.148 user=root Aug 13 01:38:42.859513 sshd[4333]: PAM: Permission denied for root from 125.69.76.148 Aug 13 01:38:42.865153 systemd[1]: Started sshd@38-172.233.223.240:22-139.178.89.65:39808.service - OpenSSH per-connection server daemon (139.178.89.65:39808). 
Aug 13 01:38:43.195950 sshd[4356]: Accepted publickey for core from 139.178.89.65 port 39808 ssh2: RSA SHA256:i/TZLewXLzdHEK5VnGeUosSnCiLBbfMeziqy3I8laao Aug 13 01:38:43.197655 sshd-session[4356]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:38:43.203510 systemd-logind[1468]: New session 27 of user core. Aug 13 01:38:43.211087 systemd[1]: Started session-27.scope - Session 27 of User core. Aug 13 01:38:43.503818 sshd[4358]: Connection closed by 139.178.89.65 port 39808 Aug 13 01:38:43.504833 sshd-session[4356]: pam_unix(sshd:session): session closed for user core Aug 13 01:38:43.509664 systemd[1]: sshd@38-172.233.223.240:22-139.178.89.65:39808.service: Deactivated successfully. Aug 13 01:38:43.513303 systemd[1]: session-27.scope: Deactivated successfully. Aug 13 01:38:43.514485 systemd-logind[1468]: Session 27 logged out. Waiting for processes to exit. Aug 13 01:38:43.515692 systemd-logind[1468]: Removed session 27. Aug 13 01:38:43.638868 sshd[4333]: Connection closed by authenticating user root 125.69.76.148 port 51044 [preauth] Aug 13 01:38:43.642104 systemd[1]: sshd@36-172.233.223.240:22-125.69.76.148:51044.service: Deactivated successfully. Aug 13 01:38:48.204471 kubelet[2688]: I0813 01:38:48.204437 2688 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:38:48.204471 kubelet[2688]: I0813 01:38:48.204475 2688 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:38:48.207947 kubelet[2688]: I0813 01:38:48.207928 2688 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 01:38:48.220050 kubelet[2688]: I0813 01:38:48.220018 2688 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:38:48.220186 kubelet[2688]: I0813 01:38:48.220164 2688 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-fzpq4","kube-system/coredns-668d6bf9bc-955pz","kube-system/coredns-668d6bf9bc-nx2pw","kube-system/cilium-h64hf","kube-system/kube-controller-manager-172-233-223-240","kube-system/kube-proxy-jl9t6","kube-system/kube-apiserver-172-233-223-240","kube-system/kube-scheduler-172-233-223-240"] Aug 13 01:38:48.220217 kubelet[2688]: E0813 01:38:48.220203 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-fzpq4" Aug 13 01:38:48.220241 kubelet[2688]: E0813 01:38:48.220217 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-955pz" Aug 13 01:38:48.220241 kubelet[2688]: E0813 01:38:48.220226 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-nx2pw" Aug 13 01:38:48.220241 kubelet[2688]: E0813 01:38:48.220236 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-h64hf" Aug 13 01:38:48.220341 kubelet[2688]: E0813 01:38:48.220244 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-233-223-240" Aug 13 01:38:48.220341 kubelet[2688]: E0813 01:38:48.220253 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-jl9t6" Aug 13 01:38:48.220341 kubelet[2688]: E0813 01:38:48.220262 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-233-223-240" Aug 13 
01:38:48.220341 kubelet[2688]: E0813 01:38:48.220270 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-233-223-240" Aug 13 01:38:48.220341 kubelet[2688]: I0813 01:38:48.220282 2688 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 01:38:48.574205 systemd[1]: Started sshd@39-172.233.223.240:22-139.178.89.65:39816.service - OpenSSH per-connection server daemon (139.178.89.65:39816). Aug 13 01:38:48.788158 systemd[1]: Started sshd@40-172.233.223.240:22-5.121.187.104:12725.service - OpenSSH per-connection server daemon (5.121.187.104:12725). Aug 13 01:38:48.910247 sshd[4374]: Accepted publickey for core from 139.178.89.65 port 39816 ssh2: RSA SHA256:i/TZLewXLzdHEK5VnGeUosSnCiLBbfMeziqy3I8laao Aug 13 01:38:48.911938 sshd-session[4374]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:38:48.927210 systemd-logind[1468]: New session 28 of user core. Aug 13 01:38:48.941097 systemd[1]: Started session-28.scope - Session 28 of User core. Aug 13 01:38:49.225690 sshd[4379]: Connection closed by 139.178.89.65 port 39816 Aug 13 01:38:49.226525 sshd-session[4374]: pam_unix(sshd:session): session closed for user core Aug 13 01:38:49.230960 systemd-logind[1468]: Session 28 logged out. Waiting for processes to exit. Aug 13 01:38:49.231781 systemd[1]: sshd@39-172.233.223.240:22-139.178.89.65:39816.service: Deactivated successfully. Aug 13 01:38:49.234261 systemd[1]: session-28.scope: Deactivated successfully. Aug 13 01:38:49.235351 systemd-logind[1468]: Removed session 28. Aug 13 01:38:50.536546 sshd[4377]: Connection reset by 5.121.187.104 port 12725 [preauth] Aug 13 01:38:50.538053 systemd[1]: sshd@40-172.233.223.240:22-5.121.187.104:12725.service: Deactivated successfully. Aug 13 01:38:53.180008 kubelet[2688]: E0813 01:38:53.179416 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Aug 13 01:38:54.299139 systemd[1]: Started sshd@41-172.233.223.240:22-139.178.89.65:38692.service - OpenSSH per-connection server daemon (139.178.89.65:38692). Aug 13 01:38:54.632767 sshd[4393]: Accepted publickey for core from 139.178.89.65 port 38692 ssh2: RSA SHA256:i/TZLewXLzdHEK5VnGeUosSnCiLBbfMeziqy3I8laao Aug 13 01:38:54.634430 sshd-session[4393]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:38:54.639732 systemd-logind[1468]: New session 29 of user core. Aug 13 01:38:54.650095 systemd[1]: Started session-29.scope - Session 29 of User core. Aug 13 01:38:54.973426 sshd[4395]: Connection closed by 139.178.89.65 port 38692 Aug 13 01:38:54.974140 sshd-session[4393]: pam_unix(sshd:session): session closed for user core Aug 13 01:38:54.978856 systemd-logind[1468]: Session 29 logged out. Waiting for processes to exit. Aug 13 01:38:54.979694 systemd[1]: sshd@41-172.233.223.240:22-139.178.89.65:38692.service: Deactivated successfully. Aug 13 01:38:54.982220 systemd[1]: session-29.scope: Deactivated successfully. Aug 13 01:38:54.983445 systemd-logind[1468]: Removed session 29. 
Aug 13 01:38:55.182002 kubelet[2688]: E0813 01:38:55.181970 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Aug 13 01:38:58.243057 kubelet[2688]: I0813 01:38:58.243002 2688 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:38:58.243057 kubelet[2688]: I0813 01:38:58.243061 2688 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:38:58.245374 kubelet[2688]: I0813 01:38:58.245277 2688 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 01:38:58.258585 kubelet[2688]: I0813 01:38:58.258551 2688 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:38:58.258782 kubelet[2688]: I0813 01:38:58.258753 2688 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-fzpq4","kube-system/coredns-668d6bf9bc-nx2pw","kube-system/coredns-668d6bf9bc-955pz","kube-system/cilium-h64hf","kube-system/kube-controller-manager-172-233-223-240","kube-system/kube-proxy-jl9t6","kube-system/kube-apiserver-172-233-223-240","kube-system/kube-scheduler-172-233-223-240"] Aug 13 01:38:58.258844 kubelet[2688]: E0813 01:38:58.258802 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-fzpq4" Aug 13 01:38:58.258844 kubelet[2688]: E0813 01:38:58.258822 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-nx2pw" Aug 13 01:38:58.258844 kubelet[2688]: E0813 01:38:58.258835 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-955pz" Aug 13 01:38:58.258966 kubelet[2688]: E0813 01:38:58.258849 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-h64hf" Aug 13 01:38:58.258966 kubelet[2688]: E0813 01:38:58.258883 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-233-223-240" Aug 13 01:38:58.258966 kubelet[2688]: E0813 01:38:58.258954 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-jl9t6" Aug 13 01:38:58.259052 kubelet[2688]: E0813 01:38:58.258968 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-233-223-240" Aug 13 01:38:58.259052 kubelet[2688]: E0813 01:38:58.258980 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-233-223-240" Aug 13 01:38:58.259052 kubelet[2688]: I0813 01:38:58.259035 2688 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 01:39:00.039235 systemd[1]: Started sshd@42-172.233.223.240:22-139.178.89.65:39570.service - OpenSSH per-connection server daemon (139.178.89.65:39570). Aug 13 01:39:00.367121 sshd[4407]: Accepted publickey for core from 139.178.89.65 port 39570 ssh2: RSA SHA256:i/TZLewXLzdHEK5VnGeUosSnCiLBbfMeziqy3I8laao Aug 13 01:39:00.368496 sshd-session[4407]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:39:00.373166 systemd-logind[1468]: New session 30 of user core. Aug 13 01:39:00.382091 systemd[1]: Started session-30.scope - Session 30 of User core. 
Aug 13 01:39:00.673781 sshd[4409]: Connection closed by 139.178.89.65 port 39570 Aug 13 01:39:00.674863 sshd-session[4407]: pam_unix(sshd:session): session closed for user core Aug 13 01:39:00.679433 systemd-logind[1468]: Session 30 logged out. Waiting for processes to exit. Aug 13 01:39:00.680176 systemd[1]: sshd@42-172.233.223.240:22-139.178.89.65:39570.service: Deactivated successfully. Aug 13 01:39:00.683143 systemd[1]: session-30.scope: Deactivated successfully. Aug 13 01:39:00.684233 systemd-logind[1468]: Removed session 30. Aug 13 01:39:05.746164 systemd[1]: Started sshd@43-172.233.223.240:22-139.178.89.65:39580.service - OpenSSH per-connection server daemon (139.178.89.65:39580). Aug 13 01:39:06.084549 sshd[4420]: Accepted publickey for core from 139.178.89.65 port 39580 ssh2: RSA SHA256:i/TZLewXLzdHEK5VnGeUosSnCiLBbfMeziqy3I8laao Aug 13 01:39:06.086661 sshd-session[4420]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:39:06.091458 systemd-logind[1468]: New session 31 of user core. Aug 13 01:39:06.095054 systemd[1]: Started session-31.scope - Session 31 of User core. Aug 13 01:39:06.398582 sshd[4422]: Connection closed by 139.178.89.65 port 39580 Aug 13 01:39:06.399747 sshd-session[4420]: pam_unix(sshd:session): session closed for user core Aug 13 01:39:06.404350 systemd[1]: sshd@43-172.233.223.240:22-139.178.89.65:39580.service: Deactivated successfully. Aug 13 01:39:06.407444 systemd[1]: session-31.scope: Deactivated successfully. Aug 13 01:39:06.408762 systemd-logind[1468]: Session 31 logged out. Waiting for processes to exit. Aug 13 01:39:06.409949 systemd-logind[1468]: Removed session 31. Aug 13 01:39:08.280860 kubelet[2688]: I0813 01:39:08.280815 2688 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:39:08.280860 kubelet[2688]: I0813 01:39:08.280864 2688 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:39:08.283393 kubelet[2688]: I0813 01:39:08.282988 2688 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 01:39:08.302145 kubelet[2688]: I0813 01:39:08.302113 2688 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:39:08.302298 kubelet[2688]: I0813 01:39:08.302244 2688 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-fzpq4","kube-system/coredns-668d6bf9bc-955pz","kube-system/coredns-668d6bf9bc-nx2pw","kube-system/cilium-h64hf","kube-system/kube-controller-manager-172-233-223-240","kube-system/kube-proxy-jl9t6","kube-system/kube-apiserver-172-233-223-240","kube-system/kube-scheduler-172-233-223-240"] Aug 13 01:39:08.302327 kubelet[2688]: E0813 01:39:08.302299 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-fzpq4" Aug 13 01:39:08.302327 kubelet[2688]: E0813 01:39:08.302312 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-955pz" Aug 13 01:39:08.302327 kubelet[2688]: E0813 01:39:08.302320 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-nx2pw" Aug 13 01:39:08.302418 kubelet[2688]: E0813 01:39:08.302331 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-h64hf" Aug 13 01:39:08.302418 kubelet[2688]: E0813 01:39:08.302359 2688 eviction_manager.go:609] "Eviction 
manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-233-223-240" Aug 13 01:39:08.302418 kubelet[2688]: E0813 01:39:08.302370 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-jl9t6" Aug 13 01:39:08.302418 kubelet[2688]: E0813 01:39:08.302378 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-233-223-240" Aug 13 01:39:08.302418 kubelet[2688]: E0813 01:39:08.302397 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-233-223-240" Aug 13 01:39:08.302418 kubelet[2688]: I0813 01:39:08.302407 2688 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 01:39:11.461166 systemd[1]: Started sshd@44-172.233.223.240:22-139.178.89.65:39232.service - OpenSSH per-connection server daemon (139.178.89.65:39232). Aug 13 01:39:11.797709 sshd[4434]: Accepted publickey for core from 139.178.89.65 port 39232 ssh2: RSA SHA256:i/TZLewXLzdHEK5VnGeUosSnCiLBbfMeziqy3I8laao Aug 13 01:39:11.799221 sshd-session[4434]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:39:11.803792 systemd-logind[1468]: New session 32 of user core. Aug 13 01:39:11.810079 systemd[1]: Started session-32.scope - Session 32 of User core. Aug 13 01:39:12.107562 sshd[4436]: Connection closed by 139.178.89.65 port 39232 Aug 13 01:39:12.108576 sshd-session[4434]: pam_unix(sshd:session): session closed for user core Aug 13 01:39:12.114234 systemd[1]: sshd@44-172.233.223.240:22-139.178.89.65:39232.service: Deactivated successfully. Aug 13 01:39:12.117722 systemd[1]: session-32.scope: Deactivated successfully. Aug 13 01:39:12.119205 systemd-logind[1468]: Session 32 logged out. Waiting for processes to exit. Aug 13 01:39:12.120410 systemd-logind[1468]: Removed session 32. Aug 13 01:39:17.178375 systemd[1]: Started sshd@45-172.233.223.240:22-139.178.89.65:39248.service - OpenSSH per-connection server daemon (139.178.89.65:39248). Aug 13 01:39:17.184485 kubelet[2688]: E0813 01:39:17.184031 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Aug 13 01:39:17.524504 sshd[4451]: Accepted publickey for core from 139.178.89.65 port 39248 ssh2: RSA SHA256:i/TZLewXLzdHEK5VnGeUosSnCiLBbfMeziqy3I8laao Aug 13 01:39:17.526879 sshd-session[4451]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:39:17.533121 systemd-logind[1468]: New session 33 of user core. Aug 13 01:39:17.539143 systemd[1]: Started session-33.scope - Session 33 of User core. Aug 13 01:39:17.842462 sshd[4453]: Connection closed by 139.178.89.65 port 39248 Aug 13 01:39:17.843517 sshd-session[4451]: pam_unix(sshd:session): session closed for user core Aug 13 01:39:17.848063 systemd-logind[1468]: Session 33 logged out. Waiting for processes to exit. Aug 13 01:39:17.849212 systemd[1]: sshd@45-172.233.223.240:22-139.178.89.65:39248.service: Deactivated successfully. Aug 13 01:39:17.851842 systemd[1]: session-33.scope: Deactivated successfully. Aug 13 01:39:17.853675 systemd-logind[1468]: Removed session 33. 
Aug 13 01:39:18.323932 kubelet[2688]: I0813 01:39:18.323776 2688 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:39:18.323932 kubelet[2688]: I0813 01:39:18.323837 2688 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:39:18.326723 kubelet[2688]: I0813 01:39:18.326634 2688 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 01:39:18.339295 kubelet[2688]: I0813 01:39:18.339270 2688 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:39:18.339491 kubelet[2688]: I0813 01:39:18.339414 2688 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-fzpq4","kube-system/coredns-668d6bf9bc-955pz","kube-system/coredns-668d6bf9bc-nx2pw","kube-system/cilium-h64hf","kube-system/kube-controller-manager-172-233-223-240","kube-system/kube-proxy-jl9t6","kube-system/kube-apiserver-172-233-223-240","kube-system/kube-scheduler-172-233-223-240"] Aug 13 01:39:18.339491 kubelet[2688]: E0813 01:39:18.339445 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-fzpq4" Aug 13 01:39:18.339491 kubelet[2688]: E0813 01:39:18.339458 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-955pz" Aug 13 01:39:18.339491 kubelet[2688]: E0813 01:39:18.339467 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-nx2pw" Aug 13 01:39:18.339491 kubelet[2688]: E0813 01:39:18.339478 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-h64hf" Aug 13 01:39:18.339491 kubelet[2688]: E0813 01:39:18.339487 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-233-223-240" Aug 13 01:39:18.339491 kubelet[2688]: E0813 01:39:18.339495 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-jl9t6" Aug 13 01:39:18.339857 kubelet[2688]: E0813 01:39:18.339503 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-233-223-240" Aug 13 01:39:18.339857 kubelet[2688]: E0813 01:39:18.339511 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-233-223-240" Aug 13 01:39:18.339857 kubelet[2688]: I0813 01:39:18.339522 2688 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 01:39:22.908142 systemd[1]: Started sshd@46-172.233.223.240:22-139.178.89.65:58278.service - OpenSSH per-connection server daemon (139.178.89.65:58278). Aug 13 01:39:23.235532 sshd[4465]: Accepted publickey for core from 139.178.89.65 port 58278 ssh2: RSA SHA256:i/TZLewXLzdHEK5VnGeUosSnCiLBbfMeziqy3I8laao Aug 13 01:39:23.237507 sshd-session[4465]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:39:23.243760 systemd-logind[1468]: New session 34 of user core. Aug 13 01:39:23.249140 systemd[1]: Started session-34.scope - Session 34 of User core. Aug 13 01:39:23.554775 sshd[4467]: Connection closed by 139.178.89.65 port 58278 Aug 13 01:39:23.556122 sshd-session[4465]: pam_unix(sshd:session): session closed for user core Aug 13 01:39:23.561360 systemd-logind[1468]: Session 34 logged out. Waiting for processes to exit. 
Aug 13 01:39:23.562491 systemd[1]: sshd@46-172.233.223.240:22-139.178.89.65:58278.service: Deactivated successfully. Aug 13 01:39:23.565665 systemd[1]: session-34.scope: Deactivated successfully. Aug 13 01:39:23.567199 systemd-logind[1468]: Removed session 34. Aug 13 01:39:26.610441 systemd[1]: Started sshd@47-172.233.223.240:22-107.0.200.227:36188.service - OpenSSH per-connection server daemon (107.0.200.227:36188). Aug 13 01:39:27.179807 kubelet[2688]: E0813 01:39:27.179396 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Aug 13 01:39:28.364805 kubelet[2688]: I0813 01:39:28.364726 2688 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:39:28.364805 kubelet[2688]: I0813 01:39:28.364777 2688 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:39:28.370377 kubelet[2688]: I0813 01:39:28.369625 2688 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 01:39:28.390607 kubelet[2688]: I0813 01:39:28.390562 2688 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:39:28.390908 kubelet[2688]: I0813 01:39:28.390824 2688 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-fzpq4","kube-system/coredns-668d6bf9bc-955pz","kube-system/coredns-668d6bf9bc-nx2pw","kube-system/cilium-h64hf","kube-system/kube-controller-manager-172-233-223-240","kube-system/kube-proxy-jl9t6","kube-system/kube-apiserver-172-233-223-240","kube-system/kube-scheduler-172-233-223-240"] Aug 13 01:39:28.390908 kubelet[2688]: E0813 01:39:28.390888 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-fzpq4" Aug 13 01:39:28.391098 kubelet[2688]: E0813 01:39:28.390932 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-955pz" Aug 13 01:39:28.391098 kubelet[2688]: E0813 01:39:28.390947 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-nx2pw" Aug 13 01:39:28.391098 kubelet[2688]: E0813 01:39:28.390962 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-h64hf" Aug 13 01:39:28.391098 kubelet[2688]: E0813 01:39:28.390971 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-233-223-240" Aug 13 01:39:28.391098 kubelet[2688]: E0813 01:39:28.390981 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-jl9t6" Aug 13 01:39:28.391098 kubelet[2688]: E0813 01:39:28.390989 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-233-223-240" Aug 13 01:39:28.391098 kubelet[2688]: E0813 01:39:28.390998 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-233-223-240" Aug 13 01:39:28.391098 kubelet[2688]: I0813 01:39:28.391009 2688 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 01:39:28.624236 systemd[1]: Started sshd@48-172.233.223.240:22-139.178.89.65:58290.service - OpenSSH per-connection server daemon (139.178.89.65:58290). 
Aug 13 01:39:28.948779 sshd[4482]: Accepted publickey for core from 139.178.89.65 port 58290 ssh2: RSA SHA256:i/TZLewXLzdHEK5VnGeUosSnCiLBbfMeziqy3I8laao Aug 13 01:39:28.950511 sshd-session[4482]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:39:28.955362 systemd-logind[1468]: New session 35 of user core. Aug 13 01:39:28.959134 systemd[1]: Started session-35.scope - Session 35 of User core. Aug 13 01:39:29.252174 sshd[4484]: Connection closed by 139.178.89.65 port 58290 Aug 13 01:39:29.252950 sshd-session[4482]: pam_unix(sshd:session): session closed for user core Aug 13 01:39:29.258057 systemd[1]: sshd@48-172.233.223.240:22-139.178.89.65:58290.service: Deactivated successfully. Aug 13 01:39:29.260777 systemd[1]: session-35.scope: Deactivated successfully. Aug 13 01:39:29.261747 systemd-logind[1468]: Session 35 logged out. Waiting for processes to exit. Aug 13 01:39:29.263486 systemd-logind[1468]: Removed session 35. Aug 13 01:39:31.838650 sshd[4479]: Invalid user teest from 107.0.200.227 port 36188 Aug 13 01:39:32.622042 sshd[4479]: Received disconnect from 107.0.200.227 port 36188:11: Bye Bye [preauth] Aug 13 01:39:32.622042 sshd[4479]: Disconnected from invalid user teest 107.0.200.227 port 36188 [preauth] Aug 13 01:39:32.624346 systemd[1]: sshd@47-172.233.223.240:22-107.0.200.227:36188.service: Deactivated successfully. Aug 13 01:39:33.179384 kubelet[2688]: E0813 01:39:33.178951 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Aug 13 01:39:33.180303 kubelet[2688]: E0813 01:39:33.180041 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Aug 13 01:39:34.322229 systemd[1]: Started sshd@49-172.233.223.240:22-139.178.89.65:36878.service - OpenSSH per-connection server daemon (139.178.89.65:36878). Aug 13 01:39:34.644229 sshd[4498]: Accepted publickey for core from 139.178.89.65 port 36878 ssh2: RSA SHA256:i/TZLewXLzdHEK5VnGeUosSnCiLBbfMeziqy3I8laao Aug 13 01:39:34.645868 sshd-session[4498]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:39:34.652130 systemd-logind[1468]: New session 36 of user core. Aug 13 01:39:34.659058 systemd[1]: Started session-36.scope - Session 36 of User core. Aug 13 01:39:34.939782 sshd[4500]: Connection closed by 139.178.89.65 port 36878 Aug 13 01:39:34.940277 sshd-session[4498]: pam_unix(sshd:session): session closed for user core Aug 13 01:39:34.946525 systemd[1]: sshd@49-172.233.223.240:22-139.178.89.65:36878.service: Deactivated successfully. Aug 13 01:39:34.949018 systemd[1]: session-36.scope: Deactivated successfully. Aug 13 01:39:34.949860 systemd-logind[1468]: Session 36 logged out. Waiting for processes to exit. Aug 13 01:39:34.951237 systemd-logind[1468]: Removed session 36. 
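Interleaved with the routine session traffic, the journal shows unattended probes: root logins denied at preauth from 60.166.31.198 and 125.69.76.148, and an invalid-user attempt ("teest") from 107.0.200.227 above. If one wanted to triage such lines offline, a small Go sketch like the following (a hypothetical helper, not anything running on this host) tallies preauth failures per source address:

```go
package main

import (
	"bufio"
	"fmt"
	"regexp"
	"strings"
)

// probeRE matches the two failure shapes seen in this journal:
// PAM denials for root and invalid-user attempts.
var probeRE = regexp.MustCompile(`(?:Permission denied for root from|Invalid user \S+ from) (\d+\.\d+\.\d+\.\d+)`)

func main() {
	// Sample lines copied from the journal above.
	journal := `sshd[4174]: PAM: Permission denied for root from 60.166.31.198
sshd[4333]: PAM: Permission denied for root from 125.69.76.148
sshd[4479]: Invalid user teest from 107.0.200.227 port 36188`

	counts := map[string]int{}
	sc := bufio.NewScanner(strings.NewReader(journal))
	for sc.Scan() {
		if m := probeRE.FindStringSubmatch(sc.Text()); m != nil {
			counts[m[1]]++
		}
	}
	for host, n := range counts {
		fmt.Printf("%s: %d failed attempt(s)\n", host, n)
	}
}
```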
Aug 13 01:39:38.411310 kubelet[2688]: I0813 01:39:38.411269 2688 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:39:38.411701 kubelet[2688]: I0813 01:39:38.411333 2688 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:39:38.413547 kubelet[2688]: I0813 01:39:38.413025 2688 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 01:39:38.432128 kubelet[2688]: I0813 01:39:38.432099 2688 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:39:38.432253 kubelet[2688]: I0813 01:39:38.432215 2688 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-fzpq4","kube-system/coredns-668d6bf9bc-955pz","kube-system/coredns-668d6bf9bc-nx2pw","kube-system/cilium-h64hf","kube-system/kube-controller-manager-172-233-223-240","kube-system/kube-proxy-jl9t6","kube-system/kube-apiserver-172-233-223-240","kube-system/kube-scheduler-172-233-223-240"] Aug 13 01:39:38.432314 kubelet[2688]: E0813 01:39:38.432255 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-fzpq4" Aug 13 01:39:38.432314 kubelet[2688]: E0813 01:39:38.432268 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-955pz" Aug 13 01:39:38.432314 kubelet[2688]: E0813 01:39:38.432277 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-nx2pw" Aug 13 01:39:38.432314 kubelet[2688]: E0813 01:39:38.432291 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-h64hf" Aug 13 01:39:38.432314 kubelet[2688]: E0813 01:39:38.432301 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-233-223-240" Aug 13 01:39:38.432314 kubelet[2688]: E0813 01:39:38.432309 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-jl9t6" Aug 13 01:39:38.432437 kubelet[2688]: E0813 01:39:38.432318 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-233-223-240" Aug 13 01:39:38.432437 kubelet[2688]: E0813 01:39:38.432327 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-233-223-240" Aug 13 01:39:38.432437 kubelet[2688]: I0813 01:39:38.432336 2688 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 01:39:40.007216 systemd[1]: Started sshd@50-172.233.223.240:22-139.178.89.65:34802.service - OpenSSH per-connection server daemon (139.178.89.65:34802). Aug 13 01:39:40.347730 sshd[4515]: Accepted publickey for core from 139.178.89.65 port 34802 ssh2: RSA SHA256:i/TZLewXLzdHEK5VnGeUosSnCiLBbfMeziqy3I8laao Aug 13 01:39:40.349332 sshd-session[4515]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:39:40.354794 systemd-logind[1468]: New session 37 of user core. Aug 13 01:39:40.362132 systemd[1]: Started session-37.scope - Session 37 of User core. Aug 13 01:39:40.650023 sshd[4517]: Connection closed by 139.178.89.65 port 34802 Aug 13 01:39:40.651002 sshd-session[4515]: pam_unix(sshd:session): session closed for user core Aug 13 01:39:40.654912 systemd-logind[1468]: Session 37 logged out. Waiting for processes to exit. 
Aug 13 01:39:40.655834 systemd[1]: sshd@50-172.233.223.240:22-139.178.89.65:34802.service: Deactivated successfully. Aug 13 01:39:40.658376 systemd[1]: session-37.scope: Deactivated successfully. Aug 13 01:39:40.659457 systemd-logind[1468]: Removed session 37. Aug 13 01:39:45.722168 systemd[1]: Started sshd@51-172.233.223.240:22-139.178.89.65:34810.service - OpenSSH per-connection server daemon (139.178.89.65:34810). Aug 13 01:39:46.060812 sshd[4533]: Accepted publickey for core from 139.178.89.65 port 34810 ssh2: RSA SHA256:i/TZLewXLzdHEK5VnGeUosSnCiLBbfMeziqy3I8laao Aug 13 01:39:46.063185 sshd-session[4533]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:39:46.070247 systemd-logind[1468]: New session 38 of user core. Aug 13 01:39:46.076066 systemd[1]: Started session-38.scope - Session 38 of User core. Aug 13 01:39:46.387232 sshd[4535]: Connection closed by 139.178.89.65 port 34810 Aug 13 01:39:46.388285 sshd-session[4533]: pam_unix(sshd:session): session closed for user core Aug 13 01:39:46.392792 systemd[1]: sshd@51-172.233.223.240:22-139.178.89.65:34810.service: Deactivated successfully. Aug 13 01:39:46.395738 systemd[1]: session-38.scope: Deactivated successfully. Aug 13 01:39:46.397214 systemd-logind[1468]: Session 38 logged out. Waiting for processes to exit. Aug 13 01:39:46.398286 systemd-logind[1468]: Removed session 38. Aug 13 01:39:48.454592 kubelet[2688]: I0813 01:39:48.454551 2688 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:39:48.455252 kubelet[2688]: I0813 01:39:48.454603 2688 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:39:48.462919 kubelet[2688]: I0813 01:39:48.462560 2688 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 01:39:48.478150 kubelet[2688]: I0813 01:39:48.478122 2688 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:39:48.478314 kubelet[2688]: I0813 01:39:48.478246 2688 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-fzpq4","kube-system/coredns-668d6bf9bc-955pz","kube-system/coredns-668d6bf9bc-nx2pw","kube-system/cilium-h64hf","kube-system/kube-controller-manager-172-233-223-240","kube-system/kube-proxy-jl9t6","kube-system/kube-apiserver-172-233-223-240","kube-system/kube-scheduler-172-233-223-240"] Aug 13 01:39:48.478314 kubelet[2688]: E0813 01:39:48.478278 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-fzpq4" Aug 13 01:39:48.478314 kubelet[2688]: E0813 01:39:48.478291 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-955pz" Aug 13 01:39:48.478314 kubelet[2688]: E0813 01:39:48.478300 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-nx2pw" Aug 13 01:39:48.478314 kubelet[2688]: E0813 01:39:48.478310 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-h64hf" Aug 13 01:39:48.478314 kubelet[2688]: E0813 01:39:48.478320 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-233-223-240" Aug 13 01:39:48.478488 kubelet[2688]: E0813 01:39:48.478329 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-jl9t6" Aug 13 
01:39:48.478488 kubelet[2688]: E0813 01:39:48.478338 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-233-223-240" Aug 13 01:39:48.478488 kubelet[2688]: E0813 01:39:48.478346 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-233-223-240" Aug 13 01:39:48.478488 kubelet[2688]: I0813 01:39:48.478356 2688 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 01:39:49.182385 kubelet[2688]: E0813 01:39:49.182297 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Aug 13 01:39:51.455544 systemd[1]: Started sshd@52-172.233.223.240:22-139.178.89.65:40852.service - OpenSSH per-connection server daemon (139.178.89.65:40852). Aug 13 01:39:51.788723 sshd[4547]: Accepted publickey for core from 139.178.89.65 port 40852 ssh2: RSA SHA256:i/TZLewXLzdHEK5VnGeUosSnCiLBbfMeziqy3I8laao Aug 13 01:39:51.790830 sshd-session[4547]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:39:51.796477 systemd-logind[1468]: New session 39 of user core. Aug 13 01:39:51.800034 systemd[1]: Started session-39.scope - Session 39 of User core. Aug 13 01:39:52.093160 sshd[4549]: Connection closed by 139.178.89.65 port 40852 Aug 13 01:39:52.094371 sshd-session[4547]: pam_unix(sshd:session): session closed for user core Aug 13 01:39:52.098677 systemd[1]: sshd@52-172.233.223.240:22-139.178.89.65:40852.service: Deactivated successfully. Aug 13 01:39:52.101409 systemd[1]: session-39.scope: Deactivated successfully. Aug 13 01:39:52.102158 systemd-logind[1468]: Session 39 logged out. Waiting for processes to exit. Aug 13 01:39:52.103638 systemd-logind[1468]: Removed session 39. Aug 13 01:39:55.180334 kubelet[2688]: E0813 01:39:55.179469 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Aug 13 01:39:57.166296 systemd[1]: Started sshd@53-172.233.223.240:22-139.178.89.65:40854.service - OpenSSH per-connection server daemon (139.178.89.65:40854). Aug 13 01:39:57.492791 sshd[4561]: Accepted publickey for core from 139.178.89.65 port 40854 ssh2: RSA SHA256:i/TZLewXLzdHEK5VnGeUosSnCiLBbfMeziqy3I8laao Aug 13 01:39:57.494760 sshd-session[4561]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:39:57.501579 systemd-logind[1468]: New session 40 of user core. Aug 13 01:39:57.512114 systemd[1]: Started session-40.scope - Session 40 of User core. Aug 13 01:39:57.808214 sshd[4563]: Connection closed by 139.178.89.65 port 40854 Aug 13 01:39:57.809293 sshd-session[4561]: pam_unix(sshd:session): session closed for user core Aug 13 01:39:57.814473 systemd[1]: sshd@53-172.233.223.240:22-139.178.89.65:40854.service: Deactivated successfully. Aug 13 01:39:57.818465 systemd[1]: session-40.scope: Deactivated successfully. Aug 13 01:39:57.819501 systemd-logind[1468]: Session 40 logged out. Waiting for processes to exit. Aug 13 01:39:57.821637 systemd-logind[1468]: Removed session 40. 
Aug 13 01:39:58.498783 kubelet[2688]: I0813 01:39:58.498746 2688 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:39:58.498783 kubelet[2688]: I0813 01:39:58.498786 2688 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:39:58.500718 kubelet[2688]: I0813 01:39:58.500693 2688 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 01:39:58.514583 kubelet[2688]: I0813 01:39:58.514554 2688 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:39:58.514722 kubelet[2688]: I0813 01:39:58.514678 2688 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-fzpq4","kube-system/coredns-668d6bf9bc-955pz","kube-system/coredns-668d6bf9bc-nx2pw","kube-system/cilium-h64hf","kube-system/kube-controller-manager-172-233-223-240","kube-system/kube-proxy-jl9t6","kube-system/kube-apiserver-172-233-223-240","kube-system/kube-scheduler-172-233-223-240"] Aug 13 01:39:58.514722 kubelet[2688]: E0813 01:39:58.514710 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-fzpq4" Aug 13 01:39:58.514722 kubelet[2688]: E0813 01:39:58.514722 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-955pz" Aug 13 01:39:58.514957 kubelet[2688]: E0813 01:39:58.514732 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-nx2pw" Aug 13 01:39:58.514957 kubelet[2688]: E0813 01:39:58.514741 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-h64hf" Aug 13 01:39:58.514957 kubelet[2688]: E0813 01:39:58.514749 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-233-223-240" Aug 13 01:39:58.514957 kubelet[2688]: E0813 01:39:58.514764 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-jl9t6" Aug 13 01:39:58.514957 kubelet[2688]: E0813 01:39:58.514772 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-233-223-240" Aug 13 01:39:58.514957 kubelet[2688]: E0813 01:39:58.514780 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-233-223-240" Aug 13 01:39:58.514957 kubelet[2688]: I0813 01:39:58.514790 2688 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 01:40:02.880152 systemd[1]: Started sshd@54-172.233.223.240:22-139.178.89.65:47684.service - OpenSSH per-connection server daemon (139.178.89.65:47684). Aug 13 01:40:03.214395 sshd[4576]: Accepted publickey for core from 139.178.89.65 port 47684 ssh2: RSA SHA256:i/TZLewXLzdHEK5VnGeUosSnCiLBbfMeziqy3I8laao Aug 13 01:40:03.216956 sshd-session[4576]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:40:03.223224 systemd-logind[1468]: New session 41 of user core. Aug 13 01:40:03.233148 systemd[1]: Started session-41.scope - Session 41 of User core. 
Aug 13 01:40:03.529543 sshd[4578]: Connection closed by 139.178.89.65 port 47684 Aug 13 01:40:03.530367 sshd-session[4576]: pam_unix(sshd:session): session closed for user core Aug 13 01:40:03.534202 systemd[1]: sshd@54-172.233.223.240:22-139.178.89.65:47684.service: Deactivated successfully. Aug 13 01:40:03.537046 systemd[1]: session-41.scope: Deactivated successfully. Aug 13 01:40:03.539014 systemd-logind[1468]: Session 41 logged out. Waiting for processes to exit. Aug 13 01:40:03.540653 systemd-logind[1468]: Removed session 41. Aug 13 01:40:08.535938 kubelet[2688]: I0813 01:40:08.535877 2688 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:40:08.536418 kubelet[2688]: I0813 01:40:08.535951 2688 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:40:08.537718 kubelet[2688]: I0813 01:40:08.537697 2688 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 01:40:08.549778 kubelet[2688]: I0813 01:40:08.549748 2688 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:40:08.550066 kubelet[2688]: I0813 01:40:08.550040 2688 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-fzpq4","kube-system/coredns-668d6bf9bc-955pz","kube-system/coredns-668d6bf9bc-nx2pw","kube-system/cilium-h64hf","kube-system/kube-controller-manager-172-233-223-240","kube-system/kube-proxy-jl9t6","kube-system/kube-apiserver-172-233-223-240","kube-system/kube-scheduler-172-233-223-240"] Aug 13 01:40:08.550142 kubelet[2688]: E0813 01:40:08.550098 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-fzpq4" Aug 13 01:40:08.550142 kubelet[2688]: E0813 01:40:08.550115 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-955pz" Aug 13 01:40:08.550142 kubelet[2688]: E0813 01:40:08.550128 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-nx2pw" Aug 13 01:40:08.550142 kubelet[2688]: E0813 01:40:08.550142 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-h64hf" Aug 13 01:40:08.550291 kubelet[2688]: E0813 01:40:08.550156 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-233-223-240" Aug 13 01:40:08.550291 kubelet[2688]: E0813 01:40:08.550168 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-jl9t6" Aug 13 01:40:08.550291 kubelet[2688]: E0813 01:40:08.550179 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-233-223-240" Aug 13 01:40:08.550291 kubelet[2688]: E0813 01:40:08.550189 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-233-223-240" Aug 13 01:40:08.550291 kubelet[2688]: I0813 01:40:08.550205 2688 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 01:40:08.589820 systemd[1]: Started sshd@55-172.233.223.240:22-139.178.89.65:47692.service - OpenSSH per-connection server daemon (139.178.89.65:47692). 
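The kubelet blocks repeating every ten seconds here are the eviction manager under ephemeral-storage pressure: it ranks all eight pods on the node for eviction, but every one of them is critical (the kube-apiserver, kube-controller-manager and kube-scheduler static pods plus cilium, coredns and kube-proxy, which carry system priority classes), so each cycle ends with "unable to evict any pods from the node". A minimal sketch of the skip rule, assuming the upstream notion of a critical pod (a static or mirror pod, or one whose priority is at or above the system-critical threshold of 2000000000); the types below are simplified stand-ins, not the kubelet's actual structs:

    package main

    import "fmt"

    // SystemCriticalPriority mirrors the priority Kubernetes assigns to the
    // system-cluster-critical / system-node-critical priority classes.
    const SystemCriticalPriority int32 = 2000000000

    // pod is a simplified stand-in for the kubelet's view of a pod.
    type pod struct {
        name     string
        static   bool  // static (manifest) pod, e.g. kube-apiserver
        priority int32 // resolved spec.priority
    }

    // isCritical approximates the check that produces the repeated
    // "Eviction manager: cannot evict a critical pod" lines above.
    func isCritical(p pod) bool {
        return p.static || p.priority >= SystemCriticalPriority
    }

    func main() {
        ranked := []pod{
            {"kube-system/cilium-operator-6c4d7847fc-fzpq4", false, SystemCriticalPriority},
            {"kube-system/kube-apiserver-172-233-223-240", true, SystemCriticalPriority},
        }
        for _, p := range ranked {
            if isCritical(p) {
                fmt.Printf("cannot evict a critical pod %q\n", p.name)
                continue
            }
            fmt.Printf("would evict %q\n", p.name)
        }
        // With every ranked pod critical, the cycle ends with
        // "unable to evict any pods from the node" and retries later.
    }

The practical consequence visible in this log: storage pressure on this node cannot be relieved by eviction at all, so only the container and image GC attempts at the head of each block could free space, and the 01:40:37 entries further down show those failing too.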
Aug 13 01:40:08.929035 sshd[4589]: Accepted publickey for core from 139.178.89.65 port 47692 ssh2: RSA SHA256:i/TZLewXLzdHEK5VnGeUosSnCiLBbfMeziqy3I8laao Aug 13 01:40:08.930910 sshd-session[4589]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:40:08.937555 systemd-logind[1468]: New session 42 of user core. Aug 13 01:40:08.944074 systemd[1]: Started session-42.scope - Session 42 of User core. Aug 13 01:40:09.239343 sshd[4591]: Connection closed by 139.178.89.65 port 47692 Aug 13 01:40:09.240092 sshd-session[4589]: pam_unix(sshd:session): session closed for user core Aug 13 01:40:09.244870 systemd[1]: sshd@55-172.233.223.240:22-139.178.89.65:47692.service: Deactivated successfully. Aug 13 01:40:09.247401 systemd[1]: session-42.scope: Deactivated successfully. Aug 13 01:40:09.248763 systemd-logind[1468]: Session 42 logged out. Waiting for processes to exit. Aug 13 01:40:09.250508 systemd-logind[1468]: Removed session 42. Aug 13 01:40:14.306199 systemd[1]: Started sshd@56-172.233.223.240:22-139.178.89.65:45468.service - OpenSSH per-connection server daemon (139.178.89.65:45468). Aug 13 01:40:14.632258 sshd[4603]: Accepted publickey for core from 139.178.89.65 port 45468 ssh2: RSA SHA256:i/TZLewXLzdHEK5VnGeUosSnCiLBbfMeziqy3I8laao Aug 13 01:40:14.633924 sshd-session[4603]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:40:14.639437 systemd-logind[1468]: New session 43 of user core. Aug 13 01:40:14.651154 systemd[1]: Started session-43.scope - Session 43 of User core. Aug 13 01:40:14.903294 systemd[1]: Started sshd@57-172.233.223.240:22-188.132.141.200:60098.service - OpenSSH per-connection server daemon (188.132.141.200:60098). Aug 13 01:40:14.934785 sshd[4605]: Connection closed by 139.178.89.65 port 45468 Aug 13 01:40:14.935563 sshd-session[4603]: pam_unix(sshd:session): session closed for user core Aug 13 01:40:14.940731 systemd[1]: sshd@56-172.233.223.240:22-139.178.89.65:45468.service: Deactivated successfully. Aug 13 01:40:14.943842 systemd[1]: session-43.scope: Deactivated successfully. Aug 13 01:40:14.945810 systemd-logind[1468]: Session 43 logged out. Waiting for processes to exit. Aug 13 01:40:14.947029 systemd-logind[1468]: Removed session 43. Aug 13 01:40:14.977246 sshd[4614]: Connection reset by 188.132.141.200 port 60098 [preauth] Aug 13 01:40:14.979945 systemd[1]: sshd@57-172.233.223.240:22-188.132.141.200:60098.service: Deactivated successfully. 
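Interleaved with the core sessions are unauthenticated probes that never pass preauth: the connection reset from 188.132.141.200 above, and later ones from 37.255.40.103 and 5.122.225.207, plus the invalid-user "odoo" attempt from 74.208.177.56; systemd starts and tears down a per-connection sshd@… unit for each. If it helps to triage this noise, a small hedged helper for tallying such probes from a captured journal — the input filename is a placeholder, not something from this log:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
        "strings"
    )

    var ipRe = regexp.MustCompile(`\b\d{1,3}(?:\.\d{1,3}){3}\b`)

    func main() {
        // journal.txt is a placeholder, e.g. the output of `journalctl -t sshd`.
        f, err := os.Open("journal.txt")
        if err != nil {
            panic(err)
        }
        defer f.Close()

        counts := map[string]int{}
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            line := sc.Text()
            if !strings.Contains(line, "[preauth]") && !strings.Contains(line, "Invalid user") {
                continue
            }
            // The last IP on the line is the peer; earlier ones may be the listener.
            if ips := ipRe.FindAllString(line, -1); len(ips) > 0 {
                counts[ips[len(ips)-1]]++
            }
        }
        for ip, n := range counts {
            fmt.Printf("%-15s %d probe(s)\n", ip, n)
        }
    }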
Aug 13 01:40:18.575666 kubelet[2688]: I0813 01:40:18.575573 2688 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:40:18.575666 kubelet[2688]: I0813 01:40:18.575644 2688 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:40:18.578519 kubelet[2688]: I0813 01:40:18.578498 2688 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 01:40:18.591632 kubelet[2688]: I0813 01:40:18.591595 2688 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:40:18.591802 kubelet[2688]: I0813 01:40:18.591753 2688 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-fzpq4","kube-system/coredns-668d6bf9bc-955pz","kube-system/coredns-668d6bf9bc-nx2pw","kube-system/cilium-h64hf","kube-system/kube-controller-manager-172-233-223-240","kube-system/kube-proxy-jl9t6","kube-system/kube-apiserver-172-233-223-240","kube-system/kube-scheduler-172-233-223-240"] Aug 13 01:40:18.591846 kubelet[2688]: E0813 01:40:18.591815 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-fzpq4" Aug 13 01:40:18.591846 kubelet[2688]: E0813 01:40:18.591829 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-955pz" Aug 13 01:40:18.591846 kubelet[2688]: E0813 01:40:18.591838 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-nx2pw" Aug 13 01:40:18.591960 kubelet[2688]: E0813 01:40:18.591849 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-h64hf" Aug 13 01:40:18.591960 kubelet[2688]: E0813 01:40:18.591859 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-233-223-240" Aug 13 01:40:18.591960 kubelet[2688]: E0813 01:40:18.591869 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-jl9t6" Aug 13 01:40:18.591960 kubelet[2688]: E0813 01:40:18.591877 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-233-223-240" Aug 13 01:40:18.591960 kubelet[2688]: E0813 01:40:18.591887 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-233-223-240" Aug 13 01:40:18.591960 kubelet[2688]: I0813 01:40:18.591936 2688 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 01:40:19.180022 kubelet[2688]: E0813 01:40:19.179355 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Aug 13 01:40:20.003164 systemd[1]: Started sshd@58-172.233.223.240:22-139.178.89.65:49816.service - OpenSSH per-connection server daemon (139.178.89.65:49816). Aug 13 01:40:20.330321 sshd[4624]: Accepted publickey for core from 139.178.89.65 port 49816 ssh2: RSA SHA256:i/TZLewXLzdHEK5VnGeUosSnCiLBbfMeziqy3I8laao Aug 13 01:40:20.331835 sshd-session[4624]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:40:20.337652 systemd-logind[1468]: New session 44 of user core. Aug 13 01:40:20.343065 systemd[1]: Started session-44.scope - Session 44 of User core. 
Aug 13 01:40:20.630311 sshd[4626]: Connection closed by 139.178.89.65 port 49816 Aug 13 01:40:20.631287 sshd-session[4624]: pam_unix(sshd:session): session closed for user core Aug 13 01:40:20.635664 systemd[1]: sshd@58-172.233.223.240:22-139.178.89.65:49816.service: Deactivated successfully. Aug 13 01:40:20.638061 systemd[1]: session-44.scope: Deactivated successfully. Aug 13 01:40:20.639204 systemd-logind[1468]: Session 44 logged out. Waiting for processes to exit. Aug 13 01:40:20.640300 systemd-logind[1468]: Removed session 44. Aug 13 01:40:23.179839 kubelet[2688]: E0813 01:40:23.179110 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Aug 13 01:40:25.701174 systemd[1]: Started sshd@59-172.233.223.240:22-139.178.89.65:49820.service - OpenSSH per-connection server daemon (139.178.89.65:49820). Aug 13 01:40:26.048285 sshd[4637]: Accepted publickey for core from 139.178.89.65 port 49820 ssh2: RSA SHA256:i/TZLewXLzdHEK5VnGeUosSnCiLBbfMeziqy3I8laao Aug 13 01:40:26.050052 sshd-session[4637]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:40:26.055964 systemd-logind[1468]: New session 45 of user core. Aug 13 01:40:26.064178 systemd[1]: Started session-45.scope - Session 45 of User core. Aug 13 01:40:26.348945 sshd[4639]: Connection closed by 139.178.89.65 port 49820 Aug 13 01:40:26.350158 sshd-session[4637]: pam_unix(sshd:session): session closed for user core Aug 13 01:40:26.355215 systemd-logind[1468]: Session 45 logged out. Waiting for processes to exit. Aug 13 01:40:26.356325 systemd[1]: sshd@59-172.233.223.240:22-139.178.89.65:49820.service: Deactivated successfully. Aug 13 01:40:26.358727 systemd[1]: session-45.scope: Deactivated successfully. Aug 13 01:40:26.359862 systemd-logind[1468]: Removed session 45. 
Aug 13 01:40:28.610530 kubelet[2688]: I0813 01:40:28.610479 2688 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:40:28.610530 kubelet[2688]: I0813 01:40:28.610519 2688 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:40:28.613002 kubelet[2688]: I0813 01:40:28.612963 2688 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 01:40:28.627116 kubelet[2688]: I0813 01:40:28.627072 2688 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:40:28.627226 kubelet[2688]: I0813 01:40:28.627197 2688 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-fzpq4","kube-system/coredns-668d6bf9bc-955pz","kube-system/coredns-668d6bf9bc-nx2pw","kube-system/cilium-h64hf","kube-system/kube-controller-manager-172-233-223-240","kube-system/kube-proxy-jl9t6","kube-system/kube-apiserver-172-233-223-240","kube-system/kube-scheduler-172-233-223-240"] Aug 13 01:40:28.627270 kubelet[2688]: E0813 01:40:28.627230 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-fzpq4" Aug 13 01:40:28.627270 kubelet[2688]: E0813 01:40:28.627243 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-955pz" Aug 13 01:40:28.627270 kubelet[2688]: E0813 01:40:28.627252 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-nx2pw" Aug 13 01:40:28.627270 kubelet[2688]: E0813 01:40:28.627261 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-h64hf" Aug 13 01:40:28.627270 kubelet[2688]: E0813 01:40:28.627271 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-233-223-240" Aug 13 01:40:28.627444 kubelet[2688]: E0813 01:40:28.627279 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-jl9t6" Aug 13 01:40:28.627444 kubelet[2688]: E0813 01:40:28.627288 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-233-223-240" Aug 13 01:40:28.627444 kubelet[2688]: E0813 01:40:28.627297 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-233-223-240" Aug 13 01:40:28.627444 kubelet[2688]: I0813 01:40:28.627307 2688 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 01:40:31.420361 systemd[1]: Started sshd@60-172.233.223.240:22-139.178.89.65:38374.service - OpenSSH per-connection server daemon (139.178.89.65:38374). Aug 13 01:40:31.770055 sshd[4652]: Accepted publickey for core from 139.178.89.65 port 38374 ssh2: RSA SHA256:i/TZLewXLzdHEK5VnGeUosSnCiLBbfMeziqy3I8laao Aug 13 01:40:31.771655 sshd-session[4652]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:40:31.777019 systemd-logind[1468]: New session 46 of user core. Aug 13 01:40:31.782036 systemd[1]: Started session-46.scope - Session 46 of User core. 
Aug 13 01:40:32.079223 sshd[4654]: Connection closed by 139.178.89.65 port 38374 Aug 13 01:40:32.080188 sshd-session[4652]: pam_unix(sshd:session): session closed for user core Aug 13 01:40:32.083580 systemd[1]: sshd@60-172.233.223.240:22-139.178.89.65:38374.service: Deactivated successfully. Aug 13 01:40:32.086098 systemd[1]: session-46.scope: Deactivated successfully. Aug 13 01:40:32.088103 systemd-logind[1468]: Session 46 logged out. Waiting for processes to exit. Aug 13 01:40:32.089593 systemd-logind[1468]: Removed session 46. Aug 13 01:40:37.145140 systemd[1]: Started sshd@61-172.233.223.240:22-139.178.89.65:38378.service - OpenSSH per-connection server daemon (139.178.89.65:38378). Aug 13 01:40:37.173165 kubelet[2688]: I0813 01:40:37.173106 2688 image_gc_manager.go:383] "Disk usage on image filesystem is over the high threshold, trying to free bytes down to the low threshold" usage=88 highThreshold=85 amountToFree=155945369 lowThreshold=80 Aug 13 01:40:37.173613 kubelet[2688]: E0813 01:40:37.173202 2688 kubelet.go:1551] "Image garbage collection failed multiple times in a row" err="Failed to garbage collect required amount of images. Attempted to free 155945369 bytes, but only found 0 bytes eligible to free." Aug 13 01:40:37.486257 sshd[4666]: Accepted publickey for core from 139.178.89.65 port 38378 ssh2: RSA SHA256:i/TZLewXLzdHEK5VnGeUosSnCiLBbfMeziqy3I8laao Aug 13 01:40:37.488169 sshd-session[4666]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:40:37.494989 systemd-logind[1468]: New session 47 of user core. Aug 13 01:40:37.501092 systemd[1]: Started session-47.scope - Session 47 of User core. Aug 13 01:40:37.791381 sshd[4670]: Connection closed by 139.178.89.65 port 38378 Aug 13 01:40:37.792442 sshd-session[4666]: pam_unix(sshd:session): session closed for user core Aug 13 01:40:37.796822 systemd[1]: sshd@61-172.233.223.240:22-139.178.89.65:38378.service: Deactivated successfully. Aug 13 01:40:37.799259 systemd[1]: session-47.scope: Deactivated successfully. Aug 13 01:40:37.800733 systemd-logind[1468]: Session 47 logged out. Waiting for processes to exit. Aug 13 01:40:37.802546 systemd-logind[1468]: Removed session 47. 
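The 01:40:37 pair is the companion failure to the eviction loop: the image filesystem is at 88% usage against a high threshold of 85%, so the kubelet tries to free space down to the low threshold of 80% and computes amountToFree=155945369 bytes (about 149 MiB), but finds 0 bytes of unused images, since everything on disk belongs to the critical pods above. A worked sketch of the arithmetic, assuming the usual formula amountToFree = usedBytes − lowThreshold×capacity; the capacity below is back-solved from the logged numbers (roughly 1.9 GB) and is an inference, not a logged value:

    package main

    import "fmt"

    func main() {
        // Back-solved from the log: the 8% gap between usage (88%) and the
        // low threshold (80%) equals 155945369 bytes.
        const capacity = 155945369 * 100 / 8 // ~1.9 GB, inferred, not logged
        const usagePct, lowPct = 88, 80

        used := capacity * usagePct / 100
        target := capacity * lowPct / 100
        amountToFree := used - target

        fmt.Printf("usage=%d%% low=%d%% amountToFree=%d bytes\n",
            usagePct, lowPct, amountToFree)
        // Reproduces the logged amountToFree=155945369; GC then fails because
        // "only found 0 bytes eligible to free".
    }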
Aug 13 01:40:38.179453 kubelet[2688]: E0813 01:40:38.179274 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Aug 13 01:40:38.647503 kubelet[2688]: I0813 01:40:38.647456 2688 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:40:38.648168 kubelet[2688]: I0813 01:40:38.647714 2688 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:40:38.650238 kubelet[2688]: I0813 01:40:38.650118 2688 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 01:40:38.674400 kubelet[2688]: I0813 01:40:38.674365 2688 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:40:38.674540 kubelet[2688]: I0813 01:40:38.674501 2688 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-fzpq4","kube-system/coredns-668d6bf9bc-955pz","kube-system/coredns-668d6bf9bc-nx2pw","kube-system/cilium-h64hf","kube-system/kube-controller-manager-172-233-223-240","kube-system/kube-proxy-jl9t6","kube-system/kube-apiserver-172-233-223-240","kube-system/kube-scheduler-172-233-223-240"] Aug 13 01:40:38.674540 kubelet[2688]: E0813 01:40:38.674533 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-fzpq4" Aug 13 01:40:38.674646 kubelet[2688]: E0813 01:40:38.674546 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-955pz" Aug 13 01:40:38.674646 kubelet[2688]: E0813 01:40:38.674555 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-nx2pw" Aug 13 01:40:38.674646 kubelet[2688]: E0813 01:40:38.674566 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-h64hf" Aug 13 01:40:38.674646 kubelet[2688]: E0813 01:40:38.674575 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-233-223-240" Aug 13 01:40:38.674646 kubelet[2688]: E0813 01:40:38.674584 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-jl9t6" Aug 13 01:40:38.674646 kubelet[2688]: E0813 01:40:38.674592 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-233-223-240" Aug 13 01:40:38.674646 kubelet[2688]: E0813 01:40:38.674600 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-233-223-240" Aug 13 01:40:38.674646 kubelet[2688]: I0813 01:40:38.674610 2688 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 01:40:42.864236 systemd[1]: Started sshd@62-172.233.223.240:22-139.178.89.65:48498.service - OpenSSH per-connection server daemon (139.178.89.65:48498). 
Aug 13 01:40:43.179618 kubelet[2688]: E0813 01:40:43.179492 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Aug 13 01:40:43.182268 kubelet[2688]: E0813 01:40:43.182118 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Aug 13 01:40:43.187529 sshd[4681]: Accepted publickey for core from 139.178.89.65 port 48498 ssh2: RSA SHA256:i/TZLewXLzdHEK5VnGeUosSnCiLBbfMeziqy3I8laao Aug 13 01:40:43.189601 sshd-session[4681]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:40:43.194216 systemd-logind[1468]: New session 48 of user core. Aug 13 01:40:43.200015 systemd[1]: Started session-48.scope - Session 48 of User core. Aug 13 01:40:43.492332 sshd[4683]: Connection closed by 139.178.89.65 port 48498 Aug 13 01:40:43.493251 sshd-session[4681]: pam_unix(sshd:session): session closed for user core Aug 13 01:40:43.497521 systemd[1]: sshd@62-172.233.223.240:22-139.178.89.65:48498.service: Deactivated successfully. Aug 13 01:40:43.500028 systemd[1]: session-48.scope: Deactivated successfully. Aug 13 01:40:43.502647 systemd-logind[1468]: Session 48 logged out. Waiting for processes to exit. Aug 13 01:40:43.503803 systemd-logind[1468]: Removed session 48. Aug 13 01:40:46.179159 kubelet[2688]: E0813 01:40:46.179126 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Aug 13 01:40:48.562145 systemd[1]: Started sshd@63-172.233.223.240:22-139.178.89.65:48512.service - OpenSSH per-connection server daemon (139.178.89.65:48512). 
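The recurring dns.go:153 "Nameserver limits exceeded" errors mean the node's resolv.conf lists more nameservers than the classic resolver limit of three, so the kubelet drops the extras and applies only 172.232.0.17, 172.232.0.16 and 172.232.0.21 to pods; it warns about silently ignored resolvers rather than reporting a resolution failure. A minimal sketch of the clamping behaviour, assuming a limit of 3 (the value the kubelet enforces, matching glibc's historical MAXNS); the fourth resolver below is hypothetical, since the log only shows the three that were applied:

    package main

    import "fmt"

    // maxDNSNameservers mirrors the limit applied when building a pod's
    // resolv.conf; resolvers beyond it are dropped with a warning.
    const maxDNSNameservers = 3

    func clampNameservers(ns []string) (applied []string, truncated bool) {
        if len(ns) <= maxDNSNameservers {
            return ns, false
        }
        return ns[:maxDNSNameservers], true
    }

    func main() {
        // Hypothetical host resolv.conf with one resolver too many.
        host := []string{"172.232.0.17", "172.232.0.16", "172.232.0.21", "1.1.1.1"}
        applied, truncated := clampNameservers(host)
        if truncated {
            fmt.Println("Nameserver limits were exceeded, some nameservers have been omitted")
        }
        fmt.Println("applied nameserver line:", applied)
    }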
Aug 13 01:40:48.695514 kubelet[2688]: I0813 01:40:48.695472 2688 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:40:48.695514 kubelet[2688]: I0813 01:40:48.695516 2688 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:40:48.697394 kubelet[2688]: I0813 01:40:48.697371 2688 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 01:40:48.710787 kubelet[2688]: I0813 01:40:48.710725 2688 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:40:48.711203 kubelet[2688]: I0813 01:40:48.711181 2688 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-fzpq4","kube-system/coredns-668d6bf9bc-955pz","kube-system/coredns-668d6bf9bc-nx2pw","kube-system/cilium-h64hf","kube-system/kube-controller-manager-172-233-223-240","kube-system/kube-proxy-jl9t6","kube-system/kube-apiserver-172-233-223-240","kube-system/kube-scheduler-172-233-223-240"] Aug 13 01:40:48.711295 kubelet[2688]: E0813 01:40:48.711281 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-fzpq4" Aug 13 01:40:48.711401 kubelet[2688]: E0813 01:40:48.711389 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-955pz" Aug 13 01:40:48.711520 kubelet[2688]: E0813 01:40:48.711485 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-nx2pw" Aug 13 01:40:48.711614 kubelet[2688]: E0813 01:40:48.711602 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-h64hf" Aug 13 01:40:48.711667 kubelet[2688]: E0813 01:40:48.711657 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-233-223-240" Aug 13 01:40:48.711771 kubelet[2688]: E0813 01:40:48.711759 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-jl9t6" Aug 13 01:40:48.711955 kubelet[2688]: E0813 01:40:48.711941 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-233-223-240" Aug 13 01:40:48.712053 kubelet[2688]: E0813 01:40:48.712041 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-233-223-240" Aug 13 01:40:48.712129 kubelet[2688]: I0813 01:40:48.712118 2688 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 01:40:48.893675 sshd[4697]: Accepted publickey for core from 139.178.89.65 port 48512 ssh2: RSA SHA256:i/TZLewXLzdHEK5VnGeUosSnCiLBbfMeziqy3I8laao Aug 13 01:40:48.896129 sshd-session[4697]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:40:48.903415 systemd-logind[1468]: New session 49 of user core. Aug 13 01:40:48.908052 systemd[1]: Started session-49.scope - Session 49 of User core. Aug 13 01:40:49.233789 sshd[4699]: Connection closed by 139.178.89.65 port 48512 Aug 13 01:40:49.234810 sshd-session[4697]: pam_unix(sshd:session): session closed for user core Aug 13 01:40:49.240744 systemd[1]: sshd@63-172.233.223.240:22-139.178.89.65:48512.service: Deactivated successfully. Aug 13 01:40:49.243778 systemd[1]: session-49.scope: Deactivated successfully. Aug 13 01:40:49.245092 systemd-logind[1468]: Session 49 logged out. 
Waiting for processes to exit. Aug 13 01:40:49.246290 systemd-logind[1468]: Removed session 49. Aug 13 01:40:54.305405 systemd[1]: Started sshd@64-172.233.223.240:22-139.178.89.65:46854.service - OpenSSH per-connection server daemon (139.178.89.65:46854). Aug 13 01:40:54.644345 sshd[4711]: Accepted publickey for core from 139.178.89.65 port 46854 ssh2: RSA SHA256:i/TZLewXLzdHEK5VnGeUosSnCiLBbfMeziqy3I8laao Aug 13 01:40:54.646118 sshd-session[4711]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:40:54.652369 systemd-logind[1468]: New session 50 of user core. Aug 13 01:40:54.662039 systemd[1]: Started session-50.scope - Session 50 of User core. Aug 13 01:40:54.959106 sshd[4713]: Connection closed by 139.178.89.65 port 46854 Aug 13 01:40:54.960236 sshd-session[4711]: pam_unix(sshd:session): session closed for user core Aug 13 01:40:54.965006 systemd[1]: sshd@64-172.233.223.240:22-139.178.89.65:46854.service: Deactivated successfully. Aug 13 01:40:54.967607 systemd[1]: session-50.scope: Deactivated successfully. Aug 13 01:40:54.968945 systemd-logind[1468]: Session 50 logged out. Waiting for processes to exit. Aug 13 01:40:54.970105 systemd-logind[1468]: Removed session 50. Aug 13 01:40:56.508159 systemd[1]: Started sshd@65-172.233.223.240:22-37.255.40.103:45704.service - OpenSSH per-connection server daemon (37.255.40.103:45704). Aug 13 01:40:57.486107 sshd[4725]: Connection closed by 37.255.40.103 port 45704 [preauth] Aug 13 01:40:57.488641 systemd[1]: sshd@65-172.233.223.240:22-37.255.40.103:45704.service: Deactivated successfully. Aug 13 01:40:58.733914 kubelet[2688]: I0813 01:40:58.733856 2688 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:40:58.733914 kubelet[2688]: I0813 01:40:58.733921 2688 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:40:58.736008 kubelet[2688]: I0813 01:40:58.735960 2688 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 01:40:58.749682 kubelet[2688]: I0813 01:40:58.749652 2688 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:40:58.749842 kubelet[2688]: I0813 01:40:58.749783 2688 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-fzpq4","kube-system/coredns-668d6bf9bc-955pz","kube-system/coredns-668d6bf9bc-nx2pw","kube-system/cilium-h64hf","kube-system/kube-controller-manager-172-233-223-240","kube-system/kube-proxy-jl9t6","kube-system/kube-apiserver-172-233-223-240","kube-system/kube-scheduler-172-233-223-240"] Aug 13 01:40:58.749842 kubelet[2688]: E0813 01:40:58.749815 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-fzpq4" Aug 13 01:40:58.749842 kubelet[2688]: E0813 01:40:58.749828 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-955pz" Aug 13 01:40:58.749842 kubelet[2688]: E0813 01:40:58.749838 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-nx2pw" Aug 13 01:40:58.749983 kubelet[2688]: E0813 01:40:58.749848 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-h64hf" Aug 13 01:40:58.749983 kubelet[2688]: E0813 01:40:58.749857 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" 
pod="kube-system/kube-controller-manager-172-233-223-240" Aug 13 01:40:58.749983 kubelet[2688]: E0813 01:40:58.749865 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-jl9t6" Aug 13 01:40:58.749983 kubelet[2688]: E0813 01:40:58.749874 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-233-223-240" Aug 13 01:40:58.749983 kubelet[2688]: E0813 01:40:58.749882 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-233-223-240" Aug 13 01:40:58.749983 kubelet[2688]: I0813 01:40:58.749937 2688 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 01:41:00.022143 systemd[1]: Started sshd@66-172.233.223.240:22-139.178.89.65:40732.service - OpenSSH per-connection server daemon (139.178.89.65:40732). Aug 13 01:41:00.370693 sshd[4730]: Accepted publickey for core from 139.178.89.65 port 40732 ssh2: RSA SHA256:i/TZLewXLzdHEK5VnGeUosSnCiLBbfMeziqy3I8laao Aug 13 01:41:00.372392 sshd-session[4730]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:41:00.380019 systemd-logind[1468]: New session 51 of user core. Aug 13 01:41:00.385081 systemd[1]: Started session-51.scope - Session 51 of User core. Aug 13 01:41:00.677406 sshd[4732]: Connection closed by 139.178.89.65 port 40732 Aug 13 01:41:00.678500 sshd-session[4730]: pam_unix(sshd:session): session closed for user core Aug 13 01:41:00.682109 systemd[1]: sshd@66-172.233.223.240:22-139.178.89.65:40732.service: Deactivated successfully. Aug 13 01:41:00.684507 systemd[1]: session-51.scope: Deactivated successfully. Aug 13 01:41:00.686732 systemd-logind[1468]: Session 51 logged out. Waiting for processes to exit. Aug 13 01:41:00.688301 systemd-logind[1468]: Removed session 51. Aug 13 01:41:05.179965 kubelet[2688]: E0813 01:41:05.179339 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Aug 13 01:41:05.180453 kubelet[2688]: E0813 01:41:05.180176 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Aug 13 01:41:05.743149 systemd[1]: Started sshd@67-172.233.223.240:22-139.178.89.65:40734.service - OpenSSH per-connection server daemon (139.178.89.65:40734). Aug 13 01:41:06.071358 sshd[4744]: Accepted publickey for core from 139.178.89.65 port 40734 ssh2: RSA SHA256:i/TZLewXLzdHEK5VnGeUosSnCiLBbfMeziqy3I8laao Aug 13 01:41:06.073172 sshd-session[4744]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:41:06.078885 systemd-logind[1468]: New session 52 of user core. Aug 13 01:41:06.084043 systemd[1]: Started session-52.scope - Session 52 of User core. Aug 13 01:41:06.381809 sshd[4746]: Connection closed by 139.178.89.65 port 40734 Aug 13 01:41:06.382830 sshd-session[4744]: pam_unix(sshd:session): session closed for user core Aug 13 01:41:06.387594 systemd[1]: sshd@67-172.233.223.240:22-139.178.89.65:40734.service: Deactivated successfully. Aug 13 01:41:06.390483 systemd[1]: session-52.scope: Deactivated successfully. Aug 13 01:41:06.391405 systemd-logind[1468]: Session 52 logged out. Waiting for processes to exit. Aug 13 01:41:06.392650 systemd-logind[1468]: Removed session 52. 
Aug 13 01:41:08.772108 kubelet[2688]: I0813 01:41:08.772055 2688 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:41:08.772787 kubelet[2688]: I0813 01:41:08.772137 2688 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:41:08.774359 kubelet[2688]: I0813 01:41:08.774336 2688 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 01:41:08.788245 kubelet[2688]: I0813 01:41:08.787983 2688 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:41:08.788245 kubelet[2688]: I0813 01:41:08.788100 2688 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-fzpq4","kube-system/coredns-668d6bf9bc-955pz","kube-system/coredns-668d6bf9bc-nx2pw","kube-system/cilium-h64hf","kube-system/kube-controller-manager-172-233-223-240","kube-system/kube-proxy-jl9t6","kube-system/kube-apiserver-172-233-223-240","kube-system/kube-scheduler-172-233-223-240"] Aug 13 01:41:08.788245 kubelet[2688]: E0813 01:41:08.788146 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-fzpq4" Aug 13 01:41:08.788245 kubelet[2688]: E0813 01:41:08.788159 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-955pz" Aug 13 01:41:08.788245 kubelet[2688]: E0813 01:41:08.788168 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-nx2pw" Aug 13 01:41:08.788245 kubelet[2688]: E0813 01:41:08.788179 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-h64hf" Aug 13 01:41:08.788245 kubelet[2688]: E0813 01:41:08.788187 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-233-223-240" Aug 13 01:41:08.788245 kubelet[2688]: E0813 01:41:08.788195 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-jl9t6" Aug 13 01:41:08.788245 kubelet[2688]: E0813 01:41:08.788205 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-233-223-240" Aug 13 01:41:08.788245 kubelet[2688]: E0813 01:41:08.788213 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-233-223-240" Aug 13 01:41:08.788245 kubelet[2688]: I0813 01:41:08.788224 2688 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 01:41:10.118685 systemd[1]: Started sshd@68-172.233.223.240:22-5.122.225.207:61978.service - OpenSSH per-connection server daemon (5.122.225.207:61978). Aug 13 01:41:10.782142 sshd[4758]: Connection closed by 5.122.225.207 port 61978 [preauth] Aug 13 01:41:10.785080 systemd[1]: sshd@68-172.233.223.240:22-5.122.225.207:61978.service: Deactivated successfully. Aug 13 01:41:11.452315 systemd[1]: Started sshd@69-172.233.223.240:22-139.178.89.65:41362.service - OpenSSH per-connection server daemon (139.178.89.65:41362). Aug 13 01:41:11.790015 sshd[4763]: Accepted publickey for core from 139.178.89.65 port 41362 ssh2: RSA SHA256:i/TZLewXLzdHEK5VnGeUosSnCiLBbfMeziqy3I8laao Aug 13 01:41:11.791848 sshd-session[4763]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:41:11.797198 systemd-logind[1468]: New session 53 of user core. 
Aug 13 01:41:11.805084 systemd[1]: Started session-53.scope - Session 53 of User core. Aug 13 01:41:12.093560 sshd[4765]: Connection closed by 139.178.89.65 port 41362 Aug 13 01:41:12.094591 sshd-session[4763]: pam_unix(sshd:session): session closed for user core Aug 13 01:41:12.097974 systemd[1]: sshd@69-172.233.223.240:22-139.178.89.65:41362.service: Deactivated successfully. Aug 13 01:41:12.100041 systemd[1]: session-53.scope: Deactivated successfully. Aug 13 01:41:12.101334 systemd-logind[1468]: Session 53 logged out. Waiting for processes to exit. Aug 13 01:41:12.102817 systemd-logind[1468]: Removed session 53. Aug 13 01:41:17.162323 systemd[1]: Started sshd@70-172.233.223.240:22-139.178.89.65:41368.service - OpenSSH per-connection server daemon (139.178.89.65:41368). Aug 13 01:41:17.494308 sshd[4779]: Accepted publickey for core from 139.178.89.65 port 41368 ssh2: RSA SHA256:i/TZLewXLzdHEK5VnGeUosSnCiLBbfMeziqy3I8laao Aug 13 01:41:17.496314 sshd-session[4779]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:41:17.502912 systemd-logind[1468]: New session 54 of user core. Aug 13 01:41:17.515182 systemd[1]: Started session-54.scope - Session 54 of User core. Aug 13 01:41:17.799824 sshd[4781]: Connection closed by 139.178.89.65 port 41368 Aug 13 01:41:17.801040 sshd-session[4779]: pam_unix(sshd:session): session closed for user core Aug 13 01:41:17.805509 systemd[1]: sshd@70-172.233.223.240:22-139.178.89.65:41368.service: Deactivated successfully. Aug 13 01:41:17.807946 systemd[1]: session-54.scope: Deactivated successfully. Aug 13 01:41:17.808994 systemd-logind[1468]: Session 54 logged out. Waiting for processes to exit. Aug 13 01:41:17.810451 systemd-logind[1468]: Removed session 54. Aug 13 01:41:18.811240 kubelet[2688]: I0813 01:41:18.811165 2688 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:41:18.812545 kubelet[2688]: I0813 01:41:18.811273 2688 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:41:18.813853 kubelet[2688]: I0813 01:41:18.813824 2688 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 01:41:18.826455 kubelet[2688]: I0813 01:41:18.826394 2688 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:41:18.826650 kubelet[2688]: I0813 01:41:18.826564 2688 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-fzpq4","kube-system/coredns-668d6bf9bc-955pz","kube-system/coredns-668d6bf9bc-nx2pw","kube-system/cilium-h64hf","kube-system/kube-controller-manager-172-233-223-240","kube-system/kube-proxy-jl9t6","kube-system/kube-apiserver-172-233-223-240","kube-system/kube-scheduler-172-233-223-240"] Aug 13 01:41:18.826650 kubelet[2688]: E0813 01:41:18.826636 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-fzpq4" Aug 13 01:41:18.826650 kubelet[2688]: E0813 01:41:18.826648 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-955pz" Aug 13 01:41:18.826849 kubelet[2688]: E0813 01:41:18.826658 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-nx2pw" Aug 13 01:41:18.826849 kubelet[2688]: E0813 01:41:18.826669 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-h64hf" Aug 13 
01:41:18.826849 kubelet[2688]: E0813 01:41:18.826678 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-233-223-240" Aug 13 01:41:18.826849 kubelet[2688]: E0813 01:41:18.826687 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-jl9t6" Aug 13 01:41:18.826849 kubelet[2688]: E0813 01:41:18.826696 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-233-223-240" Aug 13 01:41:18.826849 kubelet[2688]: E0813 01:41:18.826705 2688 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-233-223-240" Aug 13 01:41:18.826849 kubelet[2688]: I0813 01:41:18.826716 2688 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 01:41:19.562246 systemd[1]: Started sshd@71-172.233.223.240:22-74.208.177.56:40810.service - OpenSSH per-connection server daemon (74.208.177.56:40810). Aug 13 01:41:21.465623 sshd[4793]: Invalid user odoo from 74.208.177.56 port 40810 Aug 13 01:41:21.704645 sshd-session[4795]: pam_faillock(sshd:auth): User unknown Aug 13 01:41:21.709859 sshd[4793]: Postponed keyboard-interactive for invalid user odoo from 74.208.177.56 port 40810 ssh2 [preauth] Aug 13 01:41:22.143803 sshd-session[4795]: pam_unix(sshd:auth): check pass; user unknown Aug 13 01:41:22.143839 sshd-session[4795]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=74.208.177.56 Aug 13 01:41:22.144593 sshd-session[4795]: pam_faillock(sshd:auth): User unknown Aug 13 01:41:22.865132 systemd[1]: Started sshd@72-172.233.223.240:22-139.178.89.65:53054.service - OpenSSH per-connection server daemon (139.178.89.65:53054). Aug 13 01:41:23.189041 sshd[4797]: Accepted publickey for core from 139.178.89.65 port 53054 ssh2: RSA SHA256:i/TZLewXLzdHEK5VnGeUosSnCiLBbfMeziqy3I8laao Aug 13 01:41:23.191308 sshd-session[4797]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:41:23.197744 systemd-logind[1468]: New session 55 of user core. Aug 13 01:41:23.204087 systemd[1]: Started session-55.scope - Session 55 of User core. Aug 13 01:41:23.495359 sshd[4799]: Connection closed by 139.178.89.65 port 53054 Aug 13 01:41:23.496363 sshd-session[4797]: pam_unix(sshd:session): session closed for user core Aug 13 01:41:23.501203 systemd[1]: sshd@72-172.233.223.240:22-139.178.89.65:53054.service: Deactivated successfully. Aug 13 01:41:23.504024 systemd[1]: session-55.scope: Deactivated successfully. Aug 13 01:41:23.505373 systemd-logind[1468]: Session 55 logged out. Waiting for processes to exit. Aug 13 01:41:23.506459 systemd-logind[1468]: Removed session 55. Aug 13 01:41:23.565177 systemd[1]: Started sshd@73-172.233.223.240:22-139.178.89.65:53068.service - OpenSSH per-connection server daemon (139.178.89.65:53068). Aug 13 01:41:23.892140 sshd[4811]: Accepted publickey for core from 139.178.89.65 port 53068 ssh2: RSA SHA256:i/TZLewXLzdHEK5VnGeUosSnCiLBbfMeziqy3I8laao Aug 13 01:41:23.894099 sshd-session[4811]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:41:23.899817 systemd-logind[1468]: New session 56 of user core. Aug 13 01:41:23.906071 systemd[1]: Started session-56.scope - Session 56 of User core. 
Aug 13 01:41:24.308779 sshd[4793]: PAM: Permission denied for illegal user odoo from 74.208.177.56 Aug 13 01:41:24.309700 sshd[4793]: Failed keyboard-interactive/pam for invalid user odoo from 74.208.177.56 port 40810 ssh2 Aug 13 01:41:24.922679 sshd[4793]: Connection closed by invalid user odoo 74.208.177.56 port 40810 [preauth] Aug 13 01:41:24.925781 systemd[1]: sshd@71-172.233.223.240:22-74.208.177.56:40810.service: Deactivated successfully. Aug 13 01:41:25.400647 containerd[1492]: time="2025-08-13T01:41:25.400541839Z" level=info msg="StopContainer for \"4c5a52393190dcf53312fbcf1fb880eac6f17b53c286f74695b0d6a95474d197\" with timeout 30 (s)" Aug 13 01:41:25.402714 containerd[1492]: time="2025-08-13T01:41:25.401696216Z" level=info msg="Stop container \"4c5a52393190dcf53312fbcf1fb880eac6f17b53c286f74695b0d6a95474d197\" with signal terminated" Aug 13 01:41:25.461558 systemd[1]: cri-containerd-4c5a52393190dcf53312fbcf1fb880eac6f17b53c286f74695b0d6a95474d197.scope: Deactivated successfully. Aug 13 01:41:25.477379 containerd[1492]: time="2025-08-13T01:41:25.477286668Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 13 01:41:25.494555 containerd[1492]: time="2025-08-13T01:41:25.494408397Z" level=info msg="StopContainer for \"546c7767436a1dc4f94afb1bdfd1077c94517b184a528e5f2af942faa8fc32e9\" with timeout 2 (s)" Aug 13 01:41:25.494932 containerd[1492]: time="2025-08-13T01:41:25.494912726Z" level=info msg="Stop container \"546c7767436a1dc4f94afb1bdfd1077c94517b184a528e5f2af942faa8fc32e9\" with signal terminated" Aug 13 01:41:25.510285 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4c5a52393190dcf53312fbcf1fb880eac6f17b53c286f74695b0d6a95474d197-rootfs.mount: Deactivated successfully. Aug 13 01:41:25.514588 systemd-networkd[1404]: lxc_health: Link DOWN Aug 13 01:41:25.514598 systemd-networkd[1404]: lxc_health: Lost carrier Aug 13 01:41:25.523090 containerd[1492]: time="2025-08-13T01:41:25.522190736Z" level=info msg="shim disconnected" id=4c5a52393190dcf53312fbcf1fb880eac6f17b53c286f74695b0d6a95474d197 namespace=k8s.io Aug 13 01:41:25.523090 containerd[1492]: time="2025-08-13T01:41:25.522307116Z" level=warning msg="cleaning up after shim disconnected" id=4c5a52393190dcf53312fbcf1fb880eac6f17b53c286f74695b0d6a95474d197 namespace=k8s.io Aug 13 01:41:25.523090 containerd[1492]: time="2025-08-13T01:41:25.522325076Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 01:41:25.544473 systemd[1]: cri-containerd-546c7767436a1dc4f94afb1bdfd1077c94517b184a528e5f2af942faa8fc32e9.scope: Deactivated successfully. Aug 13 01:41:25.545047 systemd[1]: cri-containerd-546c7767436a1dc4f94afb1bdfd1077c94517b184a528e5f2af942faa8fc32e9.scope: Consumed 9.545s CPU time, 127.3M memory peak, 136K read from disk, 13.3M written to disk. Aug 13 01:41:25.588962 containerd[1492]: time="2025-08-13T01:41:25.588128105Z" level=warning msg="cleanup warnings time=\"2025-08-13T01:41:25Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Aug 13 01:41:25.596514 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-546c7767436a1dc4f94afb1bdfd1077c94517b184a528e5f2af942faa8fc32e9-rootfs.mount: Deactivated successfully. 
Aug 13 01:41:25.600739 containerd[1492]: time="2025-08-13T01:41:25.600686342Z" level=info msg="StopContainer for \"4c5a52393190dcf53312fbcf1fb880eac6f17b53c286f74695b0d6a95474d197\" returns successfully" Aug 13 01:41:25.604945 containerd[1492]: time="2025-08-13T01:41:25.602270289Z" level=info msg="StopPodSandbox for \"346cba9cd705d276e0400d13b8aeb4b2c300296152e4d0ea436c432483a4f04f\"" Aug 13 01:41:25.604945 containerd[1492]: time="2025-08-13T01:41:25.602378719Z" level=info msg="Container to stop \"4c5a52393190dcf53312fbcf1fb880eac6f17b53c286f74695b0d6a95474d197\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 01:41:25.605688 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-346cba9cd705d276e0400d13b8aeb4b2c300296152e4d0ea436c432483a4f04f-shm.mount: Deactivated successfully. Aug 13 01:41:25.605959 containerd[1492]: time="2025-08-13T01:41:25.605860133Z" level=info msg="shim disconnected" id=546c7767436a1dc4f94afb1bdfd1077c94517b184a528e5f2af942faa8fc32e9 namespace=k8s.io Aug 13 01:41:25.606012 containerd[1492]: time="2025-08-13T01:41:25.605959543Z" level=warning msg="cleaning up after shim disconnected" id=546c7767436a1dc4f94afb1bdfd1077c94517b184a528e5f2af942faa8fc32e9 namespace=k8s.io Aug 13 01:41:25.606012 containerd[1492]: time="2025-08-13T01:41:25.605969553Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 01:41:25.618973 systemd[1]: cri-containerd-346cba9cd705d276e0400d13b8aeb4b2c300296152e4d0ea436c432483a4f04f.scope: Deactivated successfully. Aug 13 01:41:25.638208 containerd[1492]: time="2025-08-13T01:41:25.638090964Z" level=info msg="StopContainer for \"546c7767436a1dc4f94afb1bdfd1077c94517b184a528e5f2af942faa8fc32e9\" returns successfully" Aug 13 01:41:25.638976 containerd[1492]: time="2025-08-13T01:41:25.638666903Z" level=info msg="StopPodSandbox for \"05e02bfba29da12e6471491c64dc5037307656f6452683abff5da808be77af01\"" Aug 13 01:41:25.638976 containerd[1492]: time="2025-08-13T01:41:25.638718793Z" level=info msg="Container to stop \"bd089fad863ce2ee900c48797cf47891aacd80fb8c1bc18c76aa87a09e74b2b2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 01:41:25.638976 containerd[1492]: time="2025-08-13T01:41:25.638759013Z" level=info msg="Container to stop \"7bc5e488d8adff79a8be4a34e8a66f69f1290d113ebc1fe0bfd9bd0022691ce5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 01:41:25.638976 containerd[1492]: time="2025-08-13T01:41:25.638768193Z" level=info msg="Container to stop \"13ed1e457eff096b7bb156fe78f1a1ba31a97b3ab2a11944a2cc98d03161e9ed\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 01:41:25.638976 containerd[1492]: time="2025-08-13T01:41:25.638777663Z" level=info msg="Container to stop \"9c594e253e1b3beeb74ab84a1dfbab06bfa93890d1ba3ccdbb40514bef3869e5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 01:41:25.638976 containerd[1492]: time="2025-08-13T01:41:25.638788383Z" level=info msg="Container to stop \"546c7767436a1dc4f94afb1bdfd1077c94517b184a528e5f2af942faa8fc32e9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 01:41:25.647701 systemd[1]: cri-containerd-05e02bfba29da12e6471491c64dc5037307656f6452683abff5da808be77af01.scope: Deactivated successfully. 
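The 01:41:25 burst is the cilium pods being torn down: containerd's CRI plugin receives StopContainer with a 30-second grace period for the operator container (4c5a…) and 2 seconds for the agent (546c…), sends SIGTERM ("with signal terminated"), the cilium health link drops (lxc_health: Link DOWN / Lost carrier), the shims exit, and the sandboxes are stopped; the "failed to reload cni configuration" error is expected once 05-cilium.conf is removed, and the kubelet reconciler then unmounts the pod's volumes in the UnmountVolume.TearDown lines that follow. A miniature sketch of the stop semantics, with a local child process standing in for the container, on the assumption that CRI grace periods behave as TERM first, KILL at the deadline:

    package main

    import (
        "fmt"
        "os/exec"
        "syscall"
        "time"
    )

    // stopGracefully mimics StopContainer(id, timeout): terminate first,
    // then kill if the grace period elapses before the process exits.
    func stopGracefully(cmd *exec.Cmd, grace time.Duration) error {
        if err := cmd.Process.Signal(syscall.SIGTERM); err != nil {
            return err
        }
        done := make(chan error, 1)
        go func() { done <- cmd.Wait() }()
        select {
        case err := <-done:
            return err // exited within the grace period
        case <-time.After(grace):
            _ = cmd.Process.Kill() // deadline passed: SIGKILL
            return <-done
        }
    }

    func main() {
        // "sleep 300" stands in for a container; a real runtime applies the
        // same TERM-then-KILL escalation to the container's init process.
        cmd := exec.Command("sleep", "300")
        if err := cmd.Start(); err != nil {
            panic(err)
        }
        fmt.Println(stopGracefully(cmd, 2*time.Second))
    }

The accounting line above ("Consumed 9.545s CPU time, 127.3M memory peak") is systemd's resource summary for the agent's scope unit as it is deactivated.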
Aug 13 01:41:25.659702 containerd[1492]: time="2025-08-13T01:41:25.659493875Z" level=info msg="shim disconnected" id=346cba9cd705d276e0400d13b8aeb4b2c300296152e4d0ea436c432483a4f04f namespace=k8s.io Aug 13 01:41:25.659702 containerd[1492]: time="2025-08-13T01:41:25.659542095Z" level=warning msg="cleaning up after shim disconnected" id=346cba9cd705d276e0400d13b8aeb4b2c300296152e4d0ea436c432483a4f04f namespace=k8s.io Aug 13 01:41:25.659702 containerd[1492]: time="2025-08-13T01:41:25.659550814Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 01:41:25.682006 containerd[1492]: time="2025-08-13T01:41:25.681936934Z" level=warning msg="cleanup warnings time=\"2025-08-13T01:41:25Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Aug 13 01:41:25.683612 containerd[1492]: time="2025-08-13T01:41:25.683567141Z" level=info msg="TearDown network for sandbox \"346cba9cd705d276e0400d13b8aeb4b2c300296152e4d0ea436c432483a4f04f\" successfully" Aug 13 01:41:25.683985 containerd[1492]: time="2025-08-13T01:41:25.683799790Z" level=info msg="StopPodSandbox for \"346cba9cd705d276e0400d13b8aeb4b2c300296152e4d0ea436c432483a4f04f\" returns successfully" Aug 13 01:41:25.689816 containerd[1492]: time="2025-08-13T01:41:25.689746289Z" level=info msg="shim disconnected" id=05e02bfba29da12e6471491c64dc5037307656f6452683abff5da808be77af01 namespace=k8s.io Aug 13 01:41:25.689816 containerd[1492]: time="2025-08-13T01:41:25.689795259Z" level=warning msg="cleaning up after shim disconnected" id=05e02bfba29da12e6471491c64dc5037307656f6452683abff5da808be77af01 namespace=k8s.io Aug 13 01:41:25.689816 containerd[1492]: time="2025-08-13T01:41:25.689804279Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 01:41:25.712849 containerd[1492]: time="2025-08-13T01:41:25.712795357Z" level=info msg="TearDown network for sandbox \"05e02bfba29da12e6471491c64dc5037307656f6452683abff5da808be77af01\" successfully" Aug 13 01:41:25.712849 containerd[1492]: time="2025-08-13T01:41:25.712831177Z" level=info msg="StopPodSandbox for \"05e02bfba29da12e6471491c64dc5037307656f6452683abff5da808be77af01\" returns successfully" Aug 13 01:41:25.760935 kubelet[2688]: I0813 01:41:25.760351 2688 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c67b6e0c-16a5-47ac-92fd-af9bf0169651-cilium-config-path\") pod \"c67b6e0c-16a5-47ac-92fd-af9bf0169651\" (UID: \"c67b6e0c-16a5-47ac-92fd-af9bf0169651\") " Aug 13 01:41:25.760935 kubelet[2688]: I0813 01:41:25.760628 2688 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6lgkw\" (UniqueName: \"kubernetes.io/projected/c67b6e0c-16a5-47ac-92fd-af9bf0169651-kube-api-access-6lgkw\") pod \"c67b6e0c-16a5-47ac-92fd-af9bf0169651\" (UID: \"c67b6e0c-16a5-47ac-92fd-af9bf0169651\") " Aug 13 01:41:25.766069 kubelet[2688]: I0813 01:41:25.765987 2688 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c67b6e0c-16a5-47ac-92fd-af9bf0169651-kube-api-access-6lgkw" (OuterVolumeSpecName: "kube-api-access-6lgkw") pod "c67b6e0c-16a5-47ac-92fd-af9bf0169651" (UID: "c67b6e0c-16a5-47ac-92fd-af9bf0169651"). InnerVolumeSpecName "kube-api-access-6lgkw". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 13 01:41:25.766258 kubelet[2688]: I0813 01:41:25.766213 2688 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c67b6e0c-16a5-47ac-92fd-af9bf0169651-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c67b6e0c-16a5-47ac-92fd-af9bf0169651" (UID: "c67b6e0c-16a5-47ac-92fd-af9bf0169651"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Aug 13 01:41:25.861706 kubelet[2688]: I0813 01:41:25.861607 2688 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b570fe5d-eb8b-4763-9890-9e7f066c4c2e-cilium-config-path\") pod \"b570fe5d-eb8b-4763-9890-9e7f066c4c2e\" (UID: \"b570fe5d-eb8b-4763-9890-9e7f066c4c2e\") " Aug 13 01:41:25.861706 kubelet[2688]: I0813 01:41:25.861698 2688 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b570fe5d-eb8b-4763-9890-9e7f066c4c2e-bpf-maps\") pod \"b570fe5d-eb8b-4763-9890-9e7f066c4c2e\" (UID: \"b570fe5d-eb8b-4763-9890-9e7f066c4c2e\") " Aug 13 01:41:25.862469 kubelet[2688]: I0813 01:41:25.861739 2688 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-smjzn\" (UniqueName: \"kubernetes.io/projected/b570fe5d-eb8b-4763-9890-9e7f066c4c2e-kube-api-access-smjzn\") pod \"b570fe5d-eb8b-4763-9890-9e7f066c4c2e\" (UID: \"b570fe5d-eb8b-4763-9890-9e7f066c4c2e\") " Aug 13 01:41:25.862469 kubelet[2688]: I0813 01:41:25.861766 2688 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b570fe5d-eb8b-4763-9890-9e7f066c4c2e-hostproc\") pod \"b570fe5d-eb8b-4763-9890-9e7f066c4c2e\" (UID: \"b570fe5d-eb8b-4763-9890-9e7f066c4c2e\") " Aug 13 01:41:25.862469 kubelet[2688]: I0813 01:41:25.861828 2688 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b570fe5d-eb8b-4763-9890-9e7f066c4c2e-host-proc-sys-net\") pod \"b570fe5d-eb8b-4763-9890-9e7f066c4c2e\" (UID: \"b570fe5d-eb8b-4763-9890-9e7f066c4c2e\") " Aug 13 01:41:25.862469 kubelet[2688]: I0813 01:41:25.861859 2688 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b570fe5d-eb8b-4763-9890-9e7f066c4c2e-host-proc-sys-kernel\") pod \"b570fe5d-eb8b-4763-9890-9e7f066c4c2e\" (UID: \"b570fe5d-eb8b-4763-9890-9e7f066c4c2e\") " Aug 13 01:41:25.862469 kubelet[2688]: I0813 01:41:25.861941 2688 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b570fe5d-eb8b-4763-9890-9e7f066c4c2e-cilium-run\") pod \"b570fe5d-eb8b-4763-9890-9e7f066c4c2e\" (UID: \"b570fe5d-eb8b-4763-9890-9e7f066c4c2e\") " Aug 13 01:41:25.862469 kubelet[2688]: I0813 01:41:25.861991 2688 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b570fe5d-eb8b-4763-9890-9e7f066c4c2e-clustermesh-secrets\") pod \"b570fe5d-eb8b-4763-9890-9e7f066c4c2e\" (UID: \"b570fe5d-eb8b-4763-9890-9e7f066c4c2e\") " Aug 13 01:41:25.862869 kubelet[2688]: I0813 01:41:25.862019 2688 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/b570fe5d-eb8b-4763-9890-9e7f066c4c2e-hubble-tls\") pod \"b570fe5d-eb8b-4763-9890-9e7f066c4c2e\" (UID: \"b570fe5d-eb8b-4763-9890-9e7f066c4c2e\") " Aug 13 01:41:25.862869 kubelet[2688]: I0813 01:41:25.862043 2688 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b570fe5d-eb8b-4763-9890-9e7f066c4c2e-cilium-cgroup\") pod \"b570fe5d-eb8b-4763-9890-9e7f066c4c2e\" (UID: \"b570fe5d-eb8b-4763-9890-9e7f066c4c2e\") " Aug 13 01:41:25.862869 kubelet[2688]: I0813 01:41:25.862070 2688 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b570fe5d-eb8b-4763-9890-9e7f066c4c2e-xtables-lock\") pod \"b570fe5d-eb8b-4763-9890-9e7f066c4c2e\" (UID: \"b570fe5d-eb8b-4763-9890-9e7f066c4c2e\") " Aug 13 01:41:25.862869 kubelet[2688]: I0813 01:41:25.862098 2688 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b570fe5d-eb8b-4763-9890-9e7f066c4c2e-etc-cni-netd\") pod \"b570fe5d-eb8b-4763-9890-9e7f066c4c2e\" (UID: \"b570fe5d-eb8b-4763-9890-9e7f066c4c2e\") " Aug 13 01:41:25.862869 kubelet[2688]: I0813 01:41:25.862120 2688 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b570fe5d-eb8b-4763-9890-9e7f066c4c2e-lib-modules\") pod \"b570fe5d-eb8b-4763-9890-9e7f066c4c2e\" (UID: \"b570fe5d-eb8b-4763-9890-9e7f066c4c2e\") " Aug 13 01:41:25.862869 kubelet[2688]: I0813 01:41:25.862145 2688 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b570fe5d-eb8b-4763-9890-9e7f066c4c2e-cni-path\") pod \"b570fe5d-eb8b-4763-9890-9e7f066c4c2e\" (UID: \"b570fe5d-eb8b-4763-9890-9e7f066c4c2e\") " Aug 13 01:41:25.863136 kubelet[2688]: I0813 01:41:25.862243 2688 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6lgkw\" (UniqueName: \"kubernetes.io/projected/c67b6e0c-16a5-47ac-92fd-af9bf0169651-kube-api-access-6lgkw\") on node \"172-233-223-240\" DevicePath \"\"" Aug 13 01:41:25.863136 kubelet[2688]: I0813 01:41:25.862264 2688 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c67b6e0c-16a5-47ac-92fd-af9bf0169651-cilium-config-path\") on node \"172-233-223-240\" DevicePath \"\"" Aug 13 01:41:25.863136 kubelet[2688]: I0813 01:41:25.862338 2688 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b570fe5d-eb8b-4763-9890-9e7f066c4c2e-cni-path" (OuterVolumeSpecName: "cni-path") pod "b570fe5d-eb8b-4763-9890-9e7f066c4c2e" (UID: "b570fe5d-eb8b-4763-9890-9e7f066c4c2e"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 01:41:25.863136 kubelet[2688]: I0813 01:41:25.862398 2688 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b570fe5d-eb8b-4763-9890-9e7f066c4c2e-hostproc" (OuterVolumeSpecName: "hostproc") pod "b570fe5d-eb8b-4763-9890-9e7f066c4c2e" (UID: "b570fe5d-eb8b-4763-9890-9e7f066c4c2e"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 01:41:25.863136 kubelet[2688]: I0813 01:41:25.862425 2688 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b570fe5d-eb8b-4763-9890-9e7f066c4c2e-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "b570fe5d-eb8b-4763-9890-9e7f066c4c2e" (UID: "b570fe5d-eb8b-4763-9890-9e7f066c4c2e"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 01:41:25.863311 kubelet[2688]: I0813 01:41:25.862452 2688 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b570fe5d-eb8b-4763-9890-9e7f066c4c2e-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "b570fe5d-eb8b-4763-9890-9e7f066c4c2e" (UID: "b570fe5d-eb8b-4763-9890-9e7f066c4c2e"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 01:41:25.863311 kubelet[2688]: I0813 01:41:25.862475 2688 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b570fe5d-eb8b-4763-9890-9e7f066c4c2e-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "b570fe5d-eb8b-4763-9890-9e7f066c4c2e" (UID: "b570fe5d-eb8b-4763-9890-9e7f066c4c2e"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 01:41:25.867384 kubelet[2688]: I0813 01:41:25.866742 2688 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b570fe5d-eb8b-4763-9890-9e7f066c4c2e-kube-api-access-smjzn" (OuterVolumeSpecName: "kube-api-access-smjzn") pod "b570fe5d-eb8b-4763-9890-9e7f066c4c2e" (UID: "b570fe5d-eb8b-4763-9890-9e7f066c4c2e"). InnerVolumeSpecName "kube-api-access-smjzn". PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 13 01:41:25.867384 kubelet[2688]: I0813 01:41:25.867228 2688 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b570fe5d-eb8b-4763-9890-9e7f066c4c2e-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "b570fe5d-eb8b-4763-9890-9e7f066c4c2e" (UID: "b570fe5d-eb8b-4763-9890-9e7f066c4c2e"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 01:41:25.867715 kubelet[2688]: I0813 01:41:25.867483 2688 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b570fe5d-eb8b-4763-9890-9e7f066c4c2e-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "b570fe5d-eb8b-4763-9890-9e7f066c4c2e" (UID: "b570fe5d-eb8b-4763-9890-9e7f066c4c2e"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 01:41:25.867715 kubelet[2688]: I0813 01:41:25.867584 2688 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b570fe5d-eb8b-4763-9890-9e7f066c4c2e-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "b570fe5d-eb8b-4763-9890-9e7f066c4c2e" (UID: "b570fe5d-eb8b-4763-9890-9e7f066c4c2e"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Aug 13 01:41:25.867715 kubelet[2688]: I0813 01:41:25.867618 2688 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b570fe5d-eb8b-4763-9890-9e7f066c4c2e-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "b570fe5d-eb8b-4763-9890-9e7f066c4c2e" (UID: "b570fe5d-eb8b-4763-9890-9e7f066c4c2e"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 01:41:25.867715 kubelet[2688]: I0813 01:41:25.867636 2688 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b570fe5d-eb8b-4763-9890-9e7f066c4c2e-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "b570fe5d-eb8b-4763-9890-9e7f066c4c2e" (UID: "b570fe5d-eb8b-4763-9890-9e7f066c4c2e"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 01:41:25.867715 kubelet[2688]: I0813 01:41:25.867656 2688 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b570fe5d-eb8b-4763-9890-9e7f066c4c2e-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "b570fe5d-eb8b-4763-9890-9e7f066c4c2e" (UID: "b570fe5d-eb8b-4763-9890-9e7f066c4c2e"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 01:41:25.871341 kubelet[2688]: I0813 01:41:25.871294 2688 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b570fe5d-eb8b-4763-9890-9e7f066c4c2e-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "b570fe5d-eb8b-4763-9890-9e7f066c4c2e" (UID: "b570fe5d-eb8b-4763-9890-9e7f066c4c2e"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 13 01:41:25.873019 kubelet[2688]: I0813 01:41:25.872966 2688 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b570fe5d-eb8b-4763-9890-9e7f066c4c2e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b570fe5d-eb8b-4763-9890-9e7f066c4c2e" (UID: "b570fe5d-eb8b-4763-9890-9e7f066c4c2e"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Aug 13 01:41:25.963088 kubelet[2688]: I0813 01:41:25.962838 2688 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b570fe5d-eb8b-4763-9890-9e7f066c4c2e-hubble-tls\") on node \"172-233-223-240\" DevicePath \"\"" Aug 13 01:41:25.963088 kubelet[2688]: I0813 01:41:25.962913 2688 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b570fe5d-eb8b-4763-9890-9e7f066c4c2e-cilium-cgroup\") on node \"172-233-223-240\" DevicePath \"\"" Aug 13 01:41:25.963088 kubelet[2688]: I0813 01:41:25.962928 2688 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b570fe5d-eb8b-4763-9890-9e7f066c4c2e-xtables-lock\") on node \"172-233-223-240\" DevicePath \"\"" Aug 13 01:41:25.963088 kubelet[2688]: I0813 01:41:25.962938 2688 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b570fe5d-eb8b-4763-9890-9e7f066c4c2e-clustermesh-secrets\") on node \"172-233-223-240\" DevicePath \"\"" Aug 13 01:41:25.963088 kubelet[2688]: I0813 01:41:25.962951 2688 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b570fe5d-eb8b-4763-9890-9e7f066c4c2e-cni-path\") on node \"172-233-223-240\" DevicePath \"\"" Aug 13 01:41:25.963088 kubelet[2688]: I0813 01:41:25.962961 2688 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b570fe5d-eb8b-4763-9890-9e7f066c4c2e-etc-cni-netd\") on node \"172-233-223-240\" DevicePath \"\"" Aug 13 01:41:25.963088 kubelet[2688]: I0813 01:41:25.962970 2688 reconciler_common.go:299] "Volume detached for volume 
\"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b570fe5d-eb8b-4763-9890-9e7f066c4c2e-lib-modules\") on node \"172-233-223-240\" DevicePath \"\"" Aug 13 01:41:25.963088 kubelet[2688]: I0813 01:41:25.962980 2688 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b570fe5d-eb8b-4763-9890-9e7f066c4c2e-cilium-config-path\") on node \"172-233-223-240\" DevicePath \"\"" Aug 13 01:41:25.963554 kubelet[2688]: I0813 01:41:25.962991 2688 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b570fe5d-eb8b-4763-9890-9e7f066c4c2e-bpf-maps\") on node \"172-233-223-240\" DevicePath \"\"" Aug 13 01:41:25.963554 kubelet[2688]: I0813 01:41:25.963002 2688 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-smjzn\" (UniqueName: \"kubernetes.io/projected/b570fe5d-eb8b-4763-9890-9e7f066c4c2e-kube-api-access-smjzn\") on node \"172-233-223-240\" DevicePath \"\"" Aug 13 01:41:25.963554 kubelet[2688]: I0813 01:41:25.963011 2688 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b570fe5d-eb8b-4763-9890-9e7f066c4c2e-hostproc\") on node \"172-233-223-240\" DevicePath \"\"" Aug 13 01:41:25.963554 kubelet[2688]: I0813 01:41:25.963021 2688 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b570fe5d-eb8b-4763-9890-9e7f066c4c2e-host-proc-sys-kernel\") on node \"172-233-223-240\" DevicePath \"\"" Aug 13 01:41:25.963554 kubelet[2688]: I0813 01:41:25.963031 2688 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b570fe5d-eb8b-4763-9890-9e7f066c4c2e-host-proc-sys-net\") on node \"172-233-223-240\" DevicePath \"\"" Aug 13 01:41:25.963554 kubelet[2688]: I0813 01:41:25.963040 2688 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b570fe5d-eb8b-4763-9890-9e7f066c4c2e-cilium-run\") on node \"172-233-223-240\" DevicePath \"\"" Aug 13 01:41:26.179803 kubelet[2688]: E0813 01:41:26.179758 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Aug 13 01:41:26.256920 kubelet[2688]: I0813 01:41:26.255034 2688 scope.go:117] "RemoveContainer" containerID="546c7767436a1dc4f94afb1bdfd1077c94517b184a528e5f2af942faa8fc32e9" Aug 13 01:41:26.258172 containerd[1492]: time="2025-08-13T01:41:26.258137930Z" level=info msg="RemoveContainer for \"546c7767436a1dc4f94afb1bdfd1077c94517b184a528e5f2af942faa8fc32e9\"" Aug 13 01:41:26.262446 systemd[1]: Removed slice kubepods-burstable-podb570fe5d_eb8b_4763_9890_9e7f066c4c2e.slice - libcontainer container kubepods-burstable-podb570fe5d_eb8b_4763_9890_9e7f066c4c2e.slice. Aug 13 01:41:26.262541 systemd[1]: kubepods-burstable-podb570fe5d_eb8b_4763_9890_9e7f066c4c2e.slice: Consumed 9.717s CPU time, 127.7M memory peak, 136K read from disk, 13.3M written to disk. 
Aug 13 01:41:26.267257 containerd[1492]: time="2025-08-13T01:41:26.267222104Z" level=info msg="RemoveContainer for \"546c7767436a1dc4f94afb1bdfd1077c94517b184a528e5f2af942faa8fc32e9\" returns successfully" Aug 13 01:41:26.268206 kubelet[2688]: I0813 01:41:26.268184 2688 scope.go:117] "RemoveContainer" containerID="9c594e253e1b3beeb74ab84a1dfbab06bfa93890d1ba3ccdbb40514bef3869e5" Aug 13 01:41:26.268203 systemd[1]: Removed slice kubepods-besteffort-podc67b6e0c_16a5_47ac_92fd_af9bf0169651.slice - libcontainer container kubepods-besteffort-podc67b6e0c_16a5_47ac_92fd_af9bf0169651.slice. Aug 13 01:41:26.269598 containerd[1492]: time="2025-08-13T01:41:26.269578900Z" level=info msg="RemoveContainer for \"9c594e253e1b3beeb74ab84a1dfbab06bfa93890d1ba3ccdbb40514bef3869e5\"" Aug 13 01:41:26.275097 containerd[1492]: time="2025-08-13T01:41:26.275070500Z" level=info msg="RemoveContainer for \"9c594e253e1b3beeb74ab84a1dfbab06bfa93890d1ba3ccdbb40514bef3869e5\" returns successfully" Aug 13 01:41:26.275222 kubelet[2688]: I0813 01:41:26.275202 2688 scope.go:117] "RemoveContainer" containerID="13ed1e457eff096b7bb156fe78f1a1ba31a97b3ab2a11944a2cc98d03161e9ed" Aug 13 01:41:26.276358 containerd[1492]: time="2025-08-13T01:41:26.276324627Z" level=info msg="RemoveContainer for \"13ed1e457eff096b7bb156fe78f1a1ba31a97b3ab2a11944a2cc98d03161e9ed\"" Aug 13 01:41:26.278946 containerd[1492]: time="2025-08-13T01:41:26.278916393Z" level=info msg="RemoveContainer for \"13ed1e457eff096b7bb156fe78f1a1ba31a97b3ab2a11944a2cc98d03161e9ed\" returns successfully" Aug 13 01:41:26.281290 kubelet[2688]: I0813 01:41:26.281242 2688 scope.go:117] "RemoveContainer" containerID="7bc5e488d8adff79a8be4a34e8a66f69f1290d113ebc1fe0bfd9bd0022691ce5" Aug 13 01:41:26.282051 containerd[1492]: time="2025-08-13T01:41:26.282026737Z" level=info msg="RemoveContainer for \"7bc5e488d8adff79a8be4a34e8a66f69f1290d113ebc1fe0bfd9bd0022691ce5\"" Aug 13 01:41:26.284491 containerd[1492]: time="2025-08-13T01:41:26.284448823Z" level=info msg="RemoveContainer for \"7bc5e488d8adff79a8be4a34e8a66f69f1290d113ebc1fe0bfd9bd0022691ce5\" returns successfully" Aug 13 01:41:26.284705 kubelet[2688]: I0813 01:41:26.284644 2688 scope.go:117] "RemoveContainer" containerID="bd089fad863ce2ee900c48797cf47891aacd80fb8c1bc18c76aa87a09e74b2b2" Aug 13 01:41:26.287763 containerd[1492]: time="2025-08-13T01:41:26.287727297Z" level=info msg="RemoveContainer for \"bd089fad863ce2ee900c48797cf47891aacd80fb8c1bc18c76aa87a09e74b2b2\"" Aug 13 01:41:26.290971 containerd[1492]: time="2025-08-13T01:41:26.290873991Z" level=info msg="RemoveContainer for \"bd089fad863ce2ee900c48797cf47891aacd80fb8c1bc18c76aa87a09e74b2b2\" returns successfully" Aug 13 01:41:26.291071 kubelet[2688]: I0813 01:41:26.291049 2688 scope.go:117] "RemoveContainer" containerID="546c7767436a1dc4f94afb1bdfd1077c94517b184a528e5f2af942faa8fc32e9" Aug 13 01:41:26.291368 containerd[1492]: time="2025-08-13T01:41:26.291270870Z" level=error msg="ContainerStatus for \"546c7767436a1dc4f94afb1bdfd1077c94517b184a528e5f2af942faa8fc32e9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"546c7767436a1dc4f94afb1bdfd1077c94517b184a528e5f2af942faa8fc32e9\": not found" Aug 13 01:41:26.292172 kubelet[2688]: E0813 01:41:26.292127 2688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"546c7767436a1dc4f94afb1bdfd1077c94517b184a528e5f2af942faa8fc32e9\": not found" 
containerID="546c7767436a1dc4f94afb1bdfd1077c94517b184a528e5f2af942faa8fc32e9" Aug 13 01:41:26.292343 kubelet[2688]: I0813 01:41:26.292168 2688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"546c7767436a1dc4f94afb1bdfd1077c94517b184a528e5f2af942faa8fc32e9"} err="failed to get container status \"546c7767436a1dc4f94afb1bdfd1077c94517b184a528e5f2af942faa8fc32e9\": rpc error: code = NotFound desc = an error occurred when try to find container \"546c7767436a1dc4f94afb1bdfd1077c94517b184a528e5f2af942faa8fc32e9\": not found" Aug 13 01:41:26.292343 kubelet[2688]: I0813 01:41:26.292296 2688 scope.go:117] "RemoveContainer" containerID="9c594e253e1b3beeb74ab84a1dfbab06bfa93890d1ba3ccdbb40514bef3869e5" Aug 13 01:41:26.293220 containerd[1492]: time="2025-08-13T01:41:26.293157027Z" level=error msg="ContainerStatus for \"9c594e253e1b3beeb74ab84a1dfbab06bfa93890d1ba3ccdbb40514bef3869e5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9c594e253e1b3beeb74ab84a1dfbab06bfa93890d1ba3ccdbb40514bef3869e5\": not found" Aug 13 01:41:26.293661 kubelet[2688]: E0813 01:41:26.293411 2688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9c594e253e1b3beeb74ab84a1dfbab06bfa93890d1ba3ccdbb40514bef3869e5\": not found" containerID="9c594e253e1b3beeb74ab84a1dfbab06bfa93890d1ba3ccdbb40514bef3869e5" Aug 13 01:41:26.293661 kubelet[2688]: I0813 01:41:26.293508 2688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9c594e253e1b3beeb74ab84a1dfbab06bfa93890d1ba3ccdbb40514bef3869e5"} err="failed to get container status \"9c594e253e1b3beeb74ab84a1dfbab06bfa93890d1ba3ccdbb40514bef3869e5\": rpc error: code = NotFound desc = an error occurred when try to find container \"9c594e253e1b3beeb74ab84a1dfbab06bfa93890d1ba3ccdbb40514bef3869e5\": not found" Aug 13 01:41:26.293661 kubelet[2688]: I0813 01:41:26.293533 2688 scope.go:117] "RemoveContainer" containerID="13ed1e457eff096b7bb156fe78f1a1ba31a97b3ab2a11944a2cc98d03161e9ed" Aug 13 01:41:26.293884 containerd[1492]: time="2025-08-13T01:41:26.293854895Z" level=error msg="ContainerStatus for \"13ed1e457eff096b7bb156fe78f1a1ba31a97b3ab2a11944a2cc98d03161e9ed\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"13ed1e457eff096b7bb156fe78f1a1ba31a97b3ab2a11944a2cc98d03161e9ed\": not found" Aug 13 01:41:26.294155 kubelet[2688]: E0813 01:41:26.294037 2688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"13ed1e457eff096b7bb156fe78f1a1ba31a97b3ab2a11944a2cc98d03161e9ed\": not found" containerID="13ed1e457eff096b7bb156fe78f1a1ba31a97b3ab2a11944a2cc98d03161e9ed" Aug 13 01:41:26.294155 kubelet[2688]: I0813 01:41:26.294055 2688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"13ed1e457eff096b7bb156fe78f1a1ba31a97b3ab2a11944a2cc98d03161e9ed"} err="failed to get container status \"13ed1e457eff096b7bb156fe78f1a1ba31a97b3ab2a11944a2cc98d03161e9ed\": rpc error: code = NotFound desc = an error occurred when try to find container \"13ed1e457eff096b7bb156fe78f1a1ba31a97b3ab2a11944a2cc98d03161e9ed\": not found" Aug 13 01:41:26.294155 kubelet[2688]: I0813 01:41:26.294117 2688 scope.go:117] "RemoveContainer" 
containerID="7bc5e488d8adff79a8be4a34e8a66f69f1290d113ebc1fe0bfd9bd0022691ce5" Aug 13 01:41:26.294448 containerd[1492]: time="2025-08-13T01:41:26.294384974Z" level=error msg="ContainerStatus for \"7bc5e488d8adff79a8be4a34e8a66f69f1290d113ebc1fe0bfd9bd0022691ce5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7bc5e488d8adff79a8be4a34e8a66f69f1290d113ebc1fe0bfd9bd0022691ce5\": not found" Aug 13 01:41:26.294627 kubelet[2688]: E0813 01:41:26.294555 2688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7bc5e488d8adff79a8be4a34e8a66f69f1290d113ebc1fe0bfd9bd0022691ce5\": not found" containerID="7bc5e488d8adff79a8be4a34e8a66f69f1290d113ebc1fe0bfd9bd0022691ce5" Aug 13 01:41:26.294627 kubelet[2688]: I0813 01:41:26.294593 2688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7bc5e488d8adff79a8be4a34e8a66f69f1290d113ebc1fe0bfd9bd0022691ce5"} err="failed to get container status \"7bc5e488d8adff79a8be4a34e8a66f69f1290d113ebc1fe0bfd9bd0022691ce5\": rpc error: code = NotFound desc = an error occurred when try to find container \"7bc5e488d8adff79a8be4a34e8a66f69f1290d113ebc1fe0bfd9bd0022691ce5\": not found" Aug 13 01:41:26.294627 kubelet[2688]: I0813 01:41:26.294608 2688 scope.go:117] "RemoveContainer" containerID="bd089fad863ce2ee900c48797cf47891aacd80fb8c1bc18c76aa87a09e74b2b2" Aug 13 01:41:26.295980 containerd[1492]: time="2025-08-13T01:41:26.294945173Z" level=error msg="ContainerStatus for \"bd089fad863ce2ee900c48797cf47891aacd80fb8c1bc18c76aa87a09e74b2b2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bd089fad863ce2ee900c48797cf47891aacd80fb8c1bc18c76aa87a09e74b2b2\": not found" Aug 13 01:41:26.296039 kubelet[2688]: E0813 01:41:26.295687 2688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bd089fad863ce2ee900c48797cf47891aacd80fb8c1bc18c76aa87a09e74b2b2\": not found" containerID="bd089fad863ce2ee900c48797cf47891aacd80fb8c1bc18c76aa87a09e74b2b2" Aug 13 01:41:26.296039 kubelet[2688]: I0813 01:41:26.295708 2688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bd089fad863ce2ee900c48797cf47891aacd80fb8c1bc18c76aa87a09e74b2b2"} err="failed to get container status \"bd089fad863ce2ee900c48797cf47891aacd80fb8c1bc18c76aa87a09e74b2b2\": rpc error: code = NotFound desc = an error occurred when try to find container \"bd089fad863ce2ee900c48797cf47891aacd80fb8c1bc18c76aa87a09e74b2b2\": not found" Aug 13 01:41:26.296039 kubelet[2688]: I0813 01:41:26.295723 2688 scope.go:117] "RemoveContainer" containerID="4c5a52393190dcf53312fbcf1fb880eac6f17b53c286f74695b0d6a95474d197" Aug 13 01:41:26.296872 containerd[1492]: time="2025-08-13T01:41:26.296807560Z" level=info msg="RemoveContainer for \"4c5a52393190dcf53312fbcf1fb880eac6f17b53c286f74695b0d6a95474d197\"" Aug 13 01:41:26.299597 containerd[1492]: time="2025-08-13T01:41:26.299570755Z" level=info msg="RemoveContainer for \"4c5a52393190dcf53312fbcf1fb880eac6f17b53c286f74695b0d6a95474d197\" returns successfully" Aug 13 01:41:26.299791 kubelet[2688]: I0813 01:41:26.299701 2688 scope.go:117] "RemoveContainer" containerID="4c5a52393190dcf53312fbcf1fb880eac6f17b53c286f74695b0d6a95474d197" Aug 13 01:41:26.299916 containerd[1492]: time="2025-08-13T01:41:26.299856165Z" level=error 
msg="ContainerStatus for \"4c5a52393190dcf53312fbcf1fb880eac6f17b53c286f74695b0d6a95474d197\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4c5a52393190dcf53312fbcf1fb880eac6f17b53c286f74695b0d6a95474d197\": not found" Aug 13 01:41:26.300045 kubelet[2688]: E0813 01:41:26.300004 2688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4c5a52393190dcf53312fbcf1fb880eac6f17b53c286f74695b0d6a95474d197\": not found" containerID="4c5a52393190dcf53312fbcf1fb880eac6f17b53c286f74695b0d6a95474d197" Aug 13 01:41:26.300045 kubelet[2688]: I0813 01:41:26.300031 2688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4c5a52393190dcf53312fbcf1fb880eac6f17b53c286f74695b0d6a95474d197"} err="failed to get container status \"4c5a52393190dcf53312fbcf1fb880eac6f17b53c286f74695b0d6a95474d197\": rpc error: code = NotFound desc = an error occurred when try to find container \"4c5a52393190dcf53312fbcf1fb880eac6f17b53c286f74695b0d6a95474d197\": not found" Aug 13 01:41:26.443377 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-05e02bfba29da12e6471491c64dc5037307656f6452683abff5da808be77af01-rootfs.mount: Deactivated successfully. Aug 13 01:41:26.443560 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-05e02bfba29da12e6471491c64dc5037307656f6452683abff5da808be77af01-shm.mount: Deactivated successfully. Aug 13 01:41:26.443673 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-346cba9cd705d276e0400d13b8aeb4b2c300296152e4d0ea436c432483a4f04f-rootfs.mount: Deactivated successfully. Aug 13 01:41:26.443781 systemd[1]: var-lib-kubelet-pods-b570fe5d\x2deb8b\x2d4763\x2d9890\x2d9e7f066c4c2e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dsmjzn.mount: Deactivated successfully. Aug 13 01:41:26.443870 systemd[1]: var-lib-kubelet-pods-c67b6e0c\x2d16a5\x2d47ac\x2d92fd\x2daf9bf0169651-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d6lgkw.mount: Deactivated successfully. Aug 13 01:41:26.444029 systemd[1]: var-lib-kubelet-pods-b570fe5d\x2deb8b\x2d4763\x2d9890\x2d9e7f066c4c2e-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Aug 13 01:41:26.444174 systemd[1]: var-lib-kubelet-pods-b570fe5d\x2deb8b\x2d4763\x2d9890\x2d9e7f066c4c2e-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Aug 13 01:41:27.180835 kubelet[2688]: I0813 01:41:27.180789 2688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b570fe5d-eb8b-4763-9890-9e7f066c4c2e" path="/var/lib/kubelet/pods/b570fe5d-eb8b-4763-9890-9e7f066c4c2e/volumes" Aug 13 01:41:27.182141 kubelet[2688]: I0813 01:41:27.181803 2688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c67b6e0c-16a5-47ac-92fd-af9bf0169651" path="/var/lib/kubelet/pods/c67b6e0c-16a5-47ac-92fd-af9bf0169651/volumes" Aug 13 01:41:27.396050 sshd[4813]: Connection closed by 139.178.89.65 port 53068 Aug 13 01:41:27.397098 sshd-session[4811]: pam_unix(sshd:session): session closed for user core Aug 13 01:41:27.402256 systemd[1]: sshd@73-172.233.223.240:22-139.178.89.65:53068.service: Deactivated successfully. Aug 13 01:41:27.405235 systemd[1]: session-56.scope: Deactivated successfully. Aug 13 01:41:27.406857 systemd-logind[1468]: Session 56 logged out. Waiting for processes to exit. Aug 13 01:41:27.408530 systemd-logind[1468]: Removed session 56. 
Aug 13 01:41:27.464194 systemd[1]: Started sshd@74-172.233.223.240:22-139.178.89.65:53082.service - OpenSSH per-connection server daemon (139.178.89.65:53082). Aug 13 01:41:27.788350 sshd[4978]: Accepted publickey for core from 139.178.89.65 port 53082 ssh2: RSA SHA256:i/TZLewXLzdHEK5VnGeUosSnCiLBbfMeziqy3I8laao Aug 13 01:41:27.790162 sshd-session[4978]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:41:27.796505 systemd-logind[1468]: New session 57 of user core. Aug 13 01:41:27.800024 systemd[1]: Started session-57.scope - Session 57 of User core. Aug 13 01:41:28.457931 kubelet[2688]: I0813 01:41:28.456508 2688 memory_manager.go:355] "RemoveStaleState removing state" podUID="c67b6e0c-16a5-47ac-92fd-af9bf0169651" containerName="cilium-operator" Aug 13 01:41:28.457931 kubelet[2688]: I0813 01:41:28.456559 2688 memory_manager.go:355] "RemoveStaleState removing state" podUID="b570fe5d-eb8b-4763-9890-9e7f066c4c2e" containerName="cilium-agent" Aug 13 01:41:28.473920 sshd[4980]: Connection closed by 139.178.89.65 port 53082 Aug 13 01:41:28.471178 sshd-session[4978]: pam_unix(sshd:session): session closed for user core Aug 13 01:41:28.470561 systemd[1]: Created slice kubepods-burstable-podbb9f1f00_fa65_41b7_b36a_e0f55cc63bd3.slice - libcontainer container kubepods-burstable-podbb9f1f00_fa65_41b7_b36a_e0f55cc63bd3.slice. Aug 13 01:41:28.478524 systemd[1]: sshd@74-172.233.223.240:22-139.178.89.65:53082.service: Deactivated successfully. Aug 13 01:41:28.484042 systemd[1]: session-57.scope: Deactivated successfully. Aug 13 01:41:28.487158 kubelet[2688]: E0813 01:41:28.486997 2688 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Aug 13 01:41:28.490375 systemd-logind[1468]: Session 57 logged out. Waiting for processes to exit. Aug 13 01:41:28.493961 systemd-logind[1468]: Removed session 57. Aug 13 01:41:28.543461 systemd[1]: Started sshd@75-172.233.223.240:22-139.178.89.65:53088.service - OpenSSH per-connection server daemon (139.178.89.65:53088). 
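
sshd identifies the client key for session 57 only by its fingerprint, SHA256:i/TZLewXLzdHEK5VnGeUosSnCiLBbfMeziqy3I8laao. To match such a log line back to an entry in authorized_keys you can recompute the fingerprint; golang.org/x/crypto/ssh produces the same SHA256:... form that sshd logs. A sketch (the file path is illustrative, and only the first key in the file is parsed):

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        // Parse the first public key in an authorized_keys file and print
        // the SHA256:... fingerprint that sshd logs on "Accepted publickey".
        raw, err := os.ReadFile("/home/core/.ssh/authorized_keys")
        if err != nil {
            panic(err)
        }
        pub, _, _, _, err := ssh.ParseAuthorizedKey(raw)
        if err != nil {
            panic(err)
        }
        fmt.Println(ssh.FingerprintSHA256(pub))
    }
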
Aug 13 01:41:28.581021 kubelet[2688]: I0813 01:41:28.580912 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bb9f1f00-fa65-41b7-b36a-e0f55cc63bd3-hostproc\") pod \"cilium-j2mh7\" (UID: \"bb9f1f00-fa65-41b7-b36a-e0f55cc63bd3\") " pod="kube-system/cilium-j2mh7" Aug 13 01:41:28.581021 kubelet[2688]: I0813 01:41:28.580964 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bb9f1f00-fa65-41b7-b36a-e0f55cc63bd3-cilium-run\") pod \"cilium-j2mh7\" (UID: \"bb9f1f00-fa65-41b7-b36a-e0f55cc63bd3\") " pod="kube-system/cilium-j2mh7" Aug 13 01:41:28.581021 kubelet[2688]: I0813 01:41:28.580992 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bb9f1f00-fa65-41b7-b36a-e0f55cc63bd3-etc-cni-netd\") pod \"cilium-j2mh7\" (UID: \"bb9f1f00-fa65-41b7-b36a-e0f55cc63bd3\") " pod="kube-system/cilium-j2mh7" Aug 13 01:41:28.581021 kubelet[2688]: I0813 01:41:28.581011 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bb9f1f00-fa65-41b7-b36a-e0f55cc63bd3-clustermesh-secrets\") pod \"cilium-j2mh7\" (UID: \"bb9f1f00-fa65-41b7-b36a-e0f55cc63bd3\") " pod="kube-system/cilium-j2mh7" Aug 13 01:41:28.581021 kubelet[2688]: I0813 01:41:28.581032 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/bb9f1f00-fa65-41b7-b36a-e0f55cc63bd3-cilium-ipsec-secrets\") pod \"cilium-j2mh7\" (UID: \"bb9f1f00-fa65-41b7-b36a-e0f55cc63bd3\") " pod="kube-system/cilium-j2mh7" Aug 13 01:41:28.581395 kubelet[2688]: I0813 01:41:28.581050 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/bb9f1f00-fa65-41b7-b36a-e0f55cc63bd3-host-proc-sys-kernel\") pod \"cilium-j2mh7\" (UID: \"bb9f1f00-fa65-41b7-b36a-e0f55cc63bd3\") " pod="kube-system/cilium-j2mh7" Aug 13 01:41:28.581395 kubelet[2688]: I0813 01:41:28.581069 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kw8p\" (UniqueName: \"kubernetes.io/projected/bb9f1f00-fa65-41b7-b36a-e0f55cc63bd3-kube-api-access-6kw8p\") pod \"cilium-j2mh7\" (UID: \"bb9f1f00-fa65-41b7-b36a-e0f55cc63bd3\") " pod="kube-system/cilium-j2mh7" Aug 13 01:41:28.581395 kubelet[2688]: I0813 01:41:28.581086 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bb9f1f00-fa65-41b7-b36a-e0f55cc63bd3-bpf-maps\") pod \"cilium-j2mh7\" (UID: \"bb9f1f00-fa65-41b7-b36a-e0f55cc63bd3\") " pod="kube-system/cilium-j2mh7" Aug 13 01:41:28.581395 kubelet[2688]: I0813 01:41:28.581100 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bb9f1f00-fa65-41b7-b36a-e0f55cc63bd3-cilium-cgroup\") pod \"cilium-j2mh7\" (UID: \"bb9f1f00-fa65-41b7-b36a-e0f55cc63bd3\") " pod="kube-system/cilium-j2mh7" Aug 13 01:41:28.581395 kubelet[2688]: I0813 01:41:28.581124 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bb9f1f00-fa65-41b7-b36a-e0f55cc63bd3-xtables-lock\") pod \"cilium-j2mh7\" (UID: \"bb9f1f00-fa65-41b7-b36a-e0f55cc63bd3\") " pod="kube-system/cilium-j2mh7" Aug 13 01:41:28.581395 kubelet[2688]: I0813 01:41:28.581141 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bb9f1f00-fa65-41b7-b36a-e0f55cc63bd3-host-proc-sys-net\") pod \"cilium-j2mh7\" (UID: \"bb9f1f00-fa65-41b7-b36a-e0f55cc63bd3\") " pod="kube-system/cilium-j2mh7" Aug 13 01:41:28.581563 kubelet[2688]: I0813 01:41:28.581160 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bb9f1f00-fa65-41b7-b36a-e0f55cc63bd3-cilium-config-path\") pod \"cilium-j2mh7\" (UID: \"bb9f1f00-fa65-41b7-b36a-e0f55cc63bd3\") " pod="kube-system/cilium-j2mh7" Aug 13 01:41:28.581563 kubelet[2688]: I0813 01:41:28.581175 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bb9f1f00-fa65-41b7-b36a-e0f55cc63bd3-hubble-tls\") pod \"cilium-j2mh7\" (UID: \"bb9f1f00-fa65-41b7-b36a-e0f55cc63bd3\") " pod="kube-system/cilium-j2mh7" Aug 13 01:41:28.581563 kubelet[2688]: I0813 01:41:28.581192 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bb9f1f00-fa65-41b7-b36a-e0f55cc63bd3-cni-path\") pod \"cilium-j2mh7\" (UID: \"bb9f1f00-fa65-41b7-b36a-e0f55cc63bd3\") " pod="kube-system/cilium-j2mh7" Aug 13 01:41:28.581563 kubelet[2688]: I0813 01:41:28.581208 2688 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bb9f1f00-fa65-41b7-b36a-e0f55cc63bd3-lib-modules\") pod \"cilium-j2mh7\" (UID: \"bb9f1f00-fa65-41b7-b36a-e0f55cc63bd3\") " pod="kube-system/cilium-j2mh7" Aug 13 01:41:28.786891 kubelet[2688]: E0813 01:41:28.786838 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Aug 13 01:41:28.788658 containerd[1492]: time="2025-08-13T01:41:28.787952125Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-j2mh7,Uid:bb9f1f00-fa65-41b7-b36a-e0f55cc63bd3,Namespace:kube-system,Attempt:0,}" Aug 13 01:41:28.813959 containerd[1492]: time="2025-08-13T01:41:28.813034040Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 01:41:28.813959 containerd[1492]: time="2025-08-13T01:41:28.813831708Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 01:41:28.813959 containerd[1492]: time="2025-08-13T01:41:28.813853518Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 01:41:28.814365 containerd[1492]: time="2025-08-13T01:41:28.814266698Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 01:41:28.838084 systemd[1]: Started cri-containerd-90613a907d39a685fb442418942ba2bc6d437a7277dcd1a3db803523451662b3.scope - libcontainer container 90613a907d39a685fb442418942ba2bc6d437a7277dcd1a3db803523451662b3. Aug 13 01:41:28.873334 kubelet[2688]: E0813 01:41:28.873214 2688 cadvisor_stats_provider.go:522] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbb9f1f00_fa65_41b7_b36a_e0f55cc63bd3.slice/cri-containerd-90613a907d39a685fb442418942ba2bc6d437a7277dcd1a3db803523451662b3.scope\": RecentStats: unable to find data in memory cache]" Aug 13 01:41:28.875383 kubelet[2688]: I0813 01:41:28.875356 2688 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:41:28.875475 kubelet[2688]: I0813 01:41:28.875447 2688 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:41:28.876021 containerd[1492]: time="2025-08-13T01:41:28.875885086Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-j2mh7,Uid:bb9f1f00-fa65-41b7-b36a-e0f55cc63bd3,Namespace:kube-system,Attempt:0,} returns sandbox id \"90613a907d39a685fb442418942ba2bc6d437a7277dcd1a3db803523451662b3\"" Aug 13 01:41:28.876645 kubelet[2688]: E0813 01:41:28.876620 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Aug 13 01:41:28.880347 containerd[1492]: time="2025-08-13T01:41:28.880268828Z" level=info msg="CreateContainer within sandbox \"90613a907d39a685fb442418942ba2bc6d437a7277dcd1a3db803523451662b3\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Aug 13 01:41:28.882135 containerd[1492]: time="2025-08-13T01:41:28.880769477Z" level=info msg="StopPodSandbox for \"346cba9cd705d276e0400d13b8aeb4b2c300296152e4d0ea436c432483a4f04f\"" Aug 13 01:41:28.882210 containerd[1492]: time="2025-08-13T01:41:28.882190875Z" level=info msg="TearDown network for sandbox \"346cba9cd705d276e0400d13b8aeb4b2c300296152e4d0ea436c432483a4f04f\" successfully" Aug 13 01:41:28.882210 containerd[1492]: time="2025-08-13T01:41:28.882203205Z" level=info msg="StopPodSandbox for \"346cba9cd705d276e0400d13b8aeb4b2c300296152e4d0ea436c432483a4f04f\" returns successfully" Aug 13 01:41:28.883115 containerd[1492]: time="2025-08-13T01:41:28.883073383Z" level=info msg="RemovePodSandbox for \"346cba9cd705d276e0400d13b8aeb4b2c300296152e4d0ea436c432483a4f04f\"" Aug 13 01:41:28.883115 containerd[1492]: time="2025-08-13T01:41:28.883111893Z" level=info msg="Forcibly stopping sandbox \"346cba9cd705d276e0400d13b8aeb4b2c300296152e4d0ea436c432483a4f04f\"" Aug 13 01:41:28.883295 containerd[1492]: time="2025-08-13T01:41:28.883164513Z" level=info msg="TearDown network for sandbox \"346cba9cd705d276e0400d13b8aeb4b2c300296152e4d0ea436c432483a4f04f\" successfully" Aug 13 01:41:28.886943 sshd[4991]: Accepted publickey for core from 139.178.89.65 port 53088 ssh2: RSA SHA256:i/TZLewXLzdHEK5VnGeUosSnCiLBbfMeziqy3I8laao Aug 13 01:41:28.891445 sshd-session[4991]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:41:28.892186 containerd[1492]: time="2025-08-13T01:41:28.891965647Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"346cba9cd705d276e0400d13b8aeb4b2c300296152e4d0ea436c432483a4f04f\": an error occurred when try to find sandbox: not 
found. Sending the event with nil podSandboxStatus." Aug 13 01:41:28.892186 containerd[1492]: time="2025-08-13T01:41:28.892036617Z" level=info msg="RemovePodSandbox \"346cba9cd705d276e0400d13b8aeb4b2c300296152e4d0ea436c432483a4f04f\" returns successfully" Aug 13 01:41:28.893578 containerd[1492]: time="2025-08-13T01:41:28.892722626Z" level=info msg="StopPodSandbox for \"05e02bfba29da12e6471491c64dc5037307656f6452683abff5da808be77af01\"" Aug 13 01:41:28.893578 containerd[1492]: time="2025-08-13T01:41:28.892802145Z" level=info msg="TearDown network for sandbox \"05e02bfba29da12e6471491c64dc5037307656f6452683abff5da808be77af01\" successfully" Aug 13 01:41:28.893578 containerd[1492]: time="2025-08-13T01:41:28.892813215Z" level=info msg="StopPodSandbox for \"05e02bfba29da12e6471491c64dc5037307656f6452683abff5da808be77af01\" returns successfully" Aug 13 01:41:28.894058 containerd[1492]: time="2025-08-13T01:41:28.893774494Z" level=info msg="RemovePodSandbox for \"05e02bfba29da12e6471491c64dc5037307656f6452683abff5da808be77af01\"" Aug 13 01:41:28.894058 containerd[1492]: time="2025-08-13T01:41:28.893803694Z" level=info msg="Forcibly stopping sandbox \"05e02bfba29da12e6471491c64dc5037307656f6452683abff5da808be77af01\"" Aug 13 01:41:28.894058 containerd[1492]: time="2025-08-13T01:41:28.893854874Z" level=info msg="TearDown network for sandbox \"05e02bfba29da12e6471491c64dc5037307656f6452683abff5da808be77af01\" successfully" Aug 13 01:41:28.906587 systemd-logind[1468]: New session 58 of user core. Aug 13 01:41:28.910972 containerd[1492]: time="2025-08-13T01:41:28.910812943Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"05e02bfba29da12e6471491c64dc5037307656f6452683abff5da808be77af01\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 01:41:28.911129 containerd[1492]: time="2025-08-13T01:41:28.911110212Z" level=info msg="RemovePodSandbox \"05e02bfba29da12e6471491c64dc5037307656f6452683abff5da808be77af01\" returns successfully" Aug 13 01:41:28.913654 systemd[1]: Started session-58.scope - Session 58 of User core. 
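
The "Forcibly stopping sandbox" / "not found. Sending the event with nil podSandboxStatus" pairs show RemovePodSandbox re-running against sandboxes whose shim state was already torn down at 01:41:26; containerd still reports success, so the periodic cleanup converges instead of wedging. The CRI sequence behind those messages looks roughly like this (a sketch; client wiring is assumed):

    package criutil

    import (
        "context"

        pb "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    // removeSandbox mirrors the StopPodSandbox -> RemovePodSandbox
    // sequence logged above. Both calls are idempotent, so stopping or
    // removing an already-gone sandbox still returns success.
    func removeSandbox(ctx context.Context, rt pb.RuntimeServiceClient, id string) error {
        if _, err := rt.StopPodSandbox(ctx, &pb.StopPodSandboxRequest{PodSandboxId: id}); err != nil {
            return err
        }
        _, err := rt.RemovePodSandbox(ctx, &pb.RemovePodSandboxRequest{PodSandboxId: id})
        return err
    }
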
Aug 13 01:41:28.915330 kubelet[2688]: I0813 01:41:28.914074 2688 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 01:41:28.917306 kubelet[2688]: I0813 01:41:28.917215 2688 image_gc_manager.go:487] "Removing image to free bytes" imageID="sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c" size=18897442 runtimeHandler="" Aug 13 01:41:28.917540 containerd[1492]: time="2025-08-13T01:41:28.917517481Z" level=info msg="RemoveImage \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Aug 13 01:41:28.922031 containerd[1492]: time="2025-08-13T01:41:28.922001343Z" level=info msg="ImageDelete event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Aug 13 01:41:28.923205 containerd[1492]: time="2025-08-13T01:41:28.922235762Z" level=info msg="CreateContainer within sandbox \"90613a907d39a685fb442418942ba2bc6d437a7277dcd1a3db803523451662b3\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"fda7f6150c9abdef3e2742096c891f47651d7dba7f584dd445e83e7a836aa30e\"" Aug 13 01:41:28.925992 containerd[1492]: time="2025-08-13T01:41:28.924402288Z" level=info msg="StartContainer for \"fda7f6150c9abdef3e2742096c891f47651d7dba7f584dd445e83e7a836aa30e\"" Aug 13 01:41:28.932250 containerd[1492]: time="2025-08-13T01:41:28.931966625Z" level=info msg="ImageDelete event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Aug 13 01:41:28.969778 containerd[1492]: time="2025-08-13T01:41:28.969734356Z" level=info msg="RemoveImage \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" returns successfully" Aug 13 01:41:28.970210 kubelet[2688]: I0813 01:41:28.970171 2688 image_gc_manager.go:487] "Removing image to free bytes" imageID="sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b" size=166719855 runtimeHandler="" Aug 13 01:41:28.970630 containerd[1492]: time="2025-08-13T01:41:28.970606635Z" level=info msg="RemoveImage \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Aug 13 01:41:28.971244 systemd[1]: Started cri-containerd-fda7f6150c9abdef3e2742096c891f47651d7dba7f584dd445e83e7a836aa30e.scope - libcontainer container fda7f6150c9abdef3e2742096c891f47651d7dba7f584dd445e83e7a836aa30e. Aug 13 01:41:28.972163 containerd[1492]: time="2025-08-13T01:41:28.971990872Z" level=info msg="ImageDelete event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Aug 13 01:41:28.973696 containerd[1492]: time="2025-08-13T01:41:28.973655849Z" level=info msg="ImageDelete event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Aug 13 01:41:29.012598 containerd[1492]: time="2025-08-13T01:41:29.012546859Z" level=info msg="StartContainer for \"fda7f6150c9abdef3e2742096c891f47651d7dba7f584dd445e83e7a836aa30e\" returns successfully" Aug 13 01:41:29.029178 containerd[1492]: time="2025-08-13T01:41:29.029050999Z" level=info msg="RemoveImage \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" returns successfully" Aug 13 01:41:29.029163 systemd[1]: cri-containerd-fda7f6150c9abdef3e2742096c891f47651d7dba7f584dd445e83e7a836aa30e.scope: Deactivated successfully. Aug 13 01:41:29.060423 kubelet[2688]: I0813 01:41:29.060149 2688 eviction_manager.go:383] "Eviction manager: able to reduce resource pressure without evicting pods." 
resourceName="ephemeral-storage" Aug 13 01:41:29.078966 containerd[1492]: time="2025-08-13T01:41:29.078858429Z" level=info msg="shim disconnected" id=fda7f6150c9abdef3e2742096c891f47651d7dba7f584dd445e83e7a836aa30e namespace=k8s.io Aug 13 01:41:29.078966 containerd[1492]: time="2025-08-13T01:41:29.078942549Z" level=warning msg="cleaning up after shim disconnected" id=fda7f6150c9abdef3e2742096c891f47651d7dba7f584dd445e83e7a836aa30e namespace=k8s.io Aug 13 01:41:29.078966 containerd[1492]: time="2025-08-13T01:41:29.078952569Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 01:41:29.140560 sshd[5041]: Connection closed by 139.178.89.65 port 53088 Aug 13 01:41:29.142182 sshd-session[4991]: pam_unix(sshd:session): session closed for user core Aug 13 01:41:29.145997 systemd[1]: sshd@75-172.233.223.240:22-139.178.89.65:53088.service: Deactivated successfully. Aug 13 01:41:29.149258 systemd[1]: session-58.scope: Deactivated successfully. Aug 13 01:41:29.151063 systemd-logind[1468]: Session 58 logged out. Waiting for processes to exit. Aug 13 01:41:29.153553 systemd-logind[1468]: Removed session 58. Aug 13 01:41:29.205465 systemd[1]: Started sshd@76-172.233.223.240:22-139.178.89.65:38016.service - OpenSSH per-connection server daemon (139.178.89.65:38016). Aug 13 01:41:29.269500 kubelet[2688]: E0813 01:41:29.269449 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Aug 13 01:41:29.270856 containerd[1492]: time="2025-08-13T01:41:29.270694143Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Aug 13 01:41:29.540660 sshd[5110]: Accepted publickey for core from 139.178.89.65 port 38016 ssh2: RSA SHA256:i/TZLewXLzdHEK5VnGeUosSnCiLBbfMeziqy3I8laao Aug 13 01:41:29.543467 sshd-session[5110]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:41:29.550185 systemd-logind[1468]: New session 59 of user core. Aug 13 01:41:29.556353 systemd[1]: Started session-59.scope - Session 59 of User core. 
Aug 13 01:41:30.147923 containerd[1492]: time="2025-08-13T01:41:30.147684053Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=7950" Aug 13 01:41:30.147923 containerd[1492]: time="2025-08-13T01:41:30.147846443Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:41:30.151202 containerd[1492]: time="2025-08-13T01:41:30.150727158Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 879.976425ms" Aug 13 01:41:30.151202 containerd[1492]: time="2025-08-13T01:41:30.150760208Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Aug 13 01:41:30.151202 containerd[1492]: time="2025-08-13T01:41:30.151087927Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:41:30.154875 containerd[1492]: time="2025-08-13T01:41:30.154668351Z" level=info msg="CreateContainer within sandbox \"90613a907d39a685fb442418942ba2bc6d437a7277dcd1a3db803523451662b3\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Aug 13 01:41:30.171450 containerd[1492]: time="2025-08-13T01:41:30.171391141Z" level=info msg="CreateContainer within sandbox \"90613a907d39a685fb442418942ba2bc6d437a7277dcd1a3db803523451662b3\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"0ec24b7d945cfac5d0f733347fac8adb14bf0c630e16192f19a8847e4b0525a6\"" Aug 13 01:41:30.172245 containerd[1492]: time="2025-08-13T01:41:30.172207739Z" level=info msg="StartContainer for \"0ec24b7d945cfac5d0f733347fac8adb14bf0c630e16192f19a8847e4b0525a6\"" Aug 13 01:41:30.224078 systemd[1]: Started cri-containerd-0ec24b7d945cfac5d0f733347fac8adb14bf0c630e16192f19a8847e4b0525a6.scope - libcontainer container 0ec24b7d945cfac5d0f733347fac8adb14bf0c630e16192f19a8847e4b0525a6. Aug 13 01:41:30.263004 containerd[1492]: time="2025-08-13T01:41:30.262461227Z" level=info msg="StartContainer for \"0ec24b7d945cfac5d0f733347fac8adb14bf0c630e16192f19a8847e4b0525a6\" returns successfully" Aug 13 01:41:30.276308 systemd[1]: cri-containerd-0ec24b7d945cfac5d0f733347fac8adb14bf0c630e16192f19a8847e4b0525a6.scope: Deactivated successfully. 
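
apply-sysctl-overwrites is the second of Cilium's init steps; like mount-cgroup before it, it runs to completion and its scope is deactivated immediately. Its role is to rewrite host sysctls whose distro defaults work against Cilium's datapath; rp_filter is the classic example, but the exact keys vary by release, so treat the path and value below as illustrative:

    package main

    import "os"

    // Overwrite host sysctls the way a run-once init container does:
    // write the desired value straight into /proc/sys. rp_filter is shown
    // purely as an example; the keys Cilium rewrites vary by release.
    func main() {
        sysctls := map[string]string{
            "/proc/sys/net/ipv4/conf/all/rp_filter": "0",
        }
        for path, val := range sysctls {
            if err := os.WriteFile(path, []byte(val), 0o644); err != nil {
                panic(err)
            }
        }
    }
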
Aug 13 01:41:30.279213 kubelet[2688]: E0813 01:41:30.277915 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Aug 13 01:41:30.338484 containerd[1492]: time="2025-08-13T01:41:30.338254231Z" level=info msg="shim disconnected" id=0ec24b7d945cfac5d0f733347fac8adb14bf0c630e16192f19a8847e4b0525a6 namespace=k8s.io Aug 13 01:41:30.338484 containerd[1492]: time="2025-08-13T01:41:30.338305621Z" level=warning msg="cleaning up after shim disconnected" id=0ec24b7d945cfac5d0f733347fac8adb14bf0c630e16192f19a8847e4b0525a6 namespace=k8s.io Aug 13 01:41:30.338484 containerd[1492]: time="2025-08-13T01:41:30.338316621Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 01:41:30.375013 containerd[1492]: time="2025-08-13T01:41:30.374075417Z" level=warning msg="cleanup warnings time=\"2025-08-13T01:41:30Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Aug 13 01:41:30.689863 systemd[1]: run-containerd-runc-k8s.io-0ec24b7d945cfac5d0f733347fac8adb14bf0c630e16192f19a8847e4b0525a6-runc.MQGRhJ.mount: Deactivated successfully. Aug 13 01:41:30.690090 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0ec24b7d945cfac5d0f733347fac8adb14bf0c630e16192f19a8847e4b0525a6-rootfs.mount: Deactivated successfully. Aug 13 01:41:31.280963 kubelet[2688]: E0813 01:41:31.280928 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Aug 13 01:41:31.283016 containerd[1492]: time="2025-08-13T01:41:31.282966756Z" level=info msg="CreateContainer within sandbox \"90613a907d39a685fb442418942ba2bc6d437a7277dcd1a3db803523451662b3\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Aug 13 01:41:31.300802 containerd[1492]: time="2025-08-13T01:41:31.300761294Z" level=info msg="CreateContainer within sandbox \"90613a907d39a685fb442418942ba2bc6d437a7277dcd1a3db803523451662b3\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"347920251f3fe2b9ddb1125b98ce206f0357b9f346c6d7a17b9aae33aa383fa0\"" Aug 13 01:41:31.302365 containerd[1492]: time="2025-08-13T01:41:31.302334661Z" level=info msg="StartContainer for \"347920251f3fe2b9ddb1125b98ce206f0357b9f346c6d7a17b9aae33aa383fa0\"" Aug 13 01:41:31.349045 systemd[1]: Started cri-containerd-347920251f3fe2b9ddb1125b98ce206f0357b9f346c6d7a17b9aae33aa383fa0.scope - libcontainer container 347920251f3fe2b9ddb1125b98ce206f0357b9f346c6d7a17b9aae33aa383fa0. Aug 13 01:41:31.383054 containerd[1492]: time="2025-08-13T01:41:31.382153988Z" level=info msg="StartContainer for \"347920251f3fe2b9ddb1125b98ce206f0357b9f346c6d7a17b9aae33aa383fa0\" returns successfully" Aug 13 01:41:31.388654 systemd[1]: cri-containerd-347920251f3fe2b9ddb1125b98ce206f0357b9f346c6d7a17b9aae33aa383fa0.scope: Deactivated successfully. 
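
mount-bpf-fs, started above as container 347920..., ensures the BPF filesystem is mounted at /sys/fs/bpf so that maps the agent pins there survive agent restarts; its scope deactivating within milliseconds is again the normal run-once pattern. The equivalent mount in Go (a sketch; Cilium's real implementation first checks whether bpffs is already mounted):

    package main

    import "golang.org/x/sys/unix"

    // Mount the BPF filesystem at /sys/fs/bpf so pinned maps outlive the
    // agent process. A robust implementation would consult /proc/mounts
    // first; here EBUSY (already mounted) is simply tolerated.
    func main() {
        if err := unix.Mount("bpffs", "/sys/fs/bpf", "bpf", 0, ""); err != nil && err != unix.EBUSY {
            panic(err)
        }
    }
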
Aug 13 01:41:31.411704 containerd[1492]: time="2025-08-13T01:41:31.411650006Z" level=info msg="shim disconnected" id=347920251f3fe2b9ddb1125b98ce206f0357b9f346c6d7a17b9aae33aa383fa0 namespace=k8s.io Aug 13 01:41:31.411704 containerd[1492]: time="2025-08-13T01:41:31.411697936Z" level=warning msg="cleaning up after shim disconnected" id=347920251f3fe2b9ddb1125b98ce206f0357b9f346c6d7a17b9aae33aa383fa0 namespace=k8s.io Aug 13 01:41:31.411704 containerd[1492]: time="2025-08-13T01:41:31.411707306Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 01:41:31.689326 systemd[1]: run-containerd-runc-k8s.io-347920251f3fe2b9ddb1125b98ce206f0357b9f346c6d7a17b9aae33aa383fa0-runc.bCoALZ.mount: Deactivated successfully. Aug 13 01:41:31.689461 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-347920251f3fe2b9ddb1125b98ce206f0357b9f346c6d7a17b9aae33aa383fa0-rootfs.mount: Deactivated successfully. Aug 13 01:41:32.284668 kubelet[2688]: E0813 01:41:32.283983 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Aug 13 01:41:32.286948 containerd[1492]: time="2025-08-13T01:41:32.286881951Z" level=info msg="CreateContainer within sandbox \"90613a907d39a685fb442418942ba2bc6d437a7277dcd1a3db803523451662b3\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Aug 13 01:41:32.302604 containerd[1492]: time="2025-08-13T01:41:32.302554343Z" level=info msg="CreateContainer within sandbox \"90613a907d39a685fb442418942ba2bc6d437a7277dcd1a3db803523451662b3\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"dd3ac8360ff88465bf0c72ee2f2f2291985ef9bb3bc9ae543e8b32cbc689b804\"" Aug 13 01:41:32.303933 containerd[1492]: time="2025-08-13T01:41:32.303241202Z" level=info msg="StartContainer for \"dd3ac8360ff88465bf0c72ee2f2f2291985ef9bb3bc9ae543e8b32cbc689b804\"" Aug 13 01:41:32.344060 systemd[1]: Started cri-containerd-dd3ac8360ff88465bf0c72ee2f2f2291985ef9bb3bc9ae543e8b32cbc689b804.scope - libcontainer container dd3ac8360ff88465bf0c72ee2f2f2291985ef9bb3bc9ae543e8b32cbc689b804. Aug 13 01:41:32.373259 systemd[1]: cri-containerd-dd3ac8360ff88465bf0c72ee2f2f2291985ef9bb3bc9ae543e8b32cbc689b804.scope: Deactivated successfully. Aug 13 01:41:32.375565 containerd[1492]: time="2025-08-13T01:41:32.375528923Z" level=info msg="StartContainer for \"dd3ac8360ff88465bf0c72ee2f2f2291985ef9bb3bc9ae543e8b32cbc689b804\" returns successfully" Aug 13 01:41:32.398314 containerd[1492]: time="2025-08-13T01:41:32.398231573Z" level=info msg="shim disconnected" id=dd3ac8360ff88465bf0c72ee2f2f2291985ef9bb3bc9ae543e8b32cbc689b804 namespace=k8s.io Aug 13 01:41:32.398314 containerd[1492]: time="2025-08-13T01:41:32.398306722Z" level=warning msg="cleaning up after shim disconnected" id=dd3ac8360ff88465bf0c72ee2f2f2291985ef9bb3bc9ae543e8b32cbc689b804 namespace=k8s.io Aug 13 01:41:32.398314 containerd[1492]: time="2025-08-13T01:41:32.398315682Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 01:41:32.689138 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dd3ac8360ff88465bf0c72ee2f2f2291985ef9bb3bc9ae543e8b32cbc689b804-rootfs.mount: Deactivated successfully. 
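
Each init step in this log follows the same four beats: CreateContainer, StartContainer, scope deactivation when the process exits, then shim and rootfs teardown (the recurring "failed to remove runc container ... exit status 255" cleanup warning along the way is cosmetic). The same progression is visible from the API side in the pod's init container statuses; a client-go sketch, with clientset construction omitted:

    package podinit

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // printInitProgress lists each init container of cilium-j2mh7 with its
    // exit code, mirroring the mount-cgroup, apply-sysctl-overwrites,
    // mount-bpf-fs, clean-cilium-state sequence in the log.
    func printInitProgress(ctx context.Context, cs *kubernetes.Clientset) error {
        pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "cilium-j2mh7", metav1.GetOptions{})
        if err != nil {
            return err
        }
        for _, st := range pod.Status.InitContainerStatuses {
            if t := st.State.Terminated; t != nil {
                fmt.Printf("%s exited with code %d\n", st.Name, t.ExitCode)
            } else {
                fmt.Printf("%s has not finished yet\n", st.Name)
            }
        }
        return nil
    }
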
Aug 13 01:41:33.237283 kubelet[2688]: I0813 01:41:33.237225 2688 setters.go:602] "Node became not ready" node="172-233-223-240" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T01:41:33Z","lastTransitionTime":"2025-08-13T01:41:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Aug 13 01:41:33.289261 kubelet[2688]: E0813 01:41:33.288864 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Aug 13 01:41:33.292154 containerd[1492]: time="2025-08-13T01:41:33.291792381Z" level=info msg="CreateContainer within sandbox \"90613a907d39a685fb442418942ba2bc6d437a7277dcd1a3db803523451662b3\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Aug 13 01:41:33.318967 containerd[1492]: time="2025-08-13T01:41:33.316847996Z" level=info msg="CreateContainer within sandbox \"90613a907d39a685fb442418942ba2bc6d437a7277dcd1a3db803523451662b3\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"5e7f5f8ab2c27ddc12780b52dc747a690a86ae53266325846cdda740e487760c\""
Aug 13 01:41:33.320153 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3383234672.mount: Deactivated successfully.
Aug 13 01:41:33.323311 containerd[1492]: time="2025-08-13T01:41:33.323106475Z" level=info msg="StartContainer for \"5e7f5f8ab2c27ddc12780b52dc747a690a86ae53266325846cdda740e487760c\""
Aug 13 01:41:33.361059 systemd[1]: Started cri-containerd-5e7f5f8ab2c27ddc12780b52dc747a690a86ae53266325846cdda740e487760c.scope - libcontainer container 5e7f5f8ab2c27ddc12780b52dc747a690a86ae53266325846cdda740e487760c.
Aug 13 01:41:33.404835 containerd[1492]: time="2025-08-13T01:41:33.404776940Z" level=info msg="StartContainer for \"5e7f5f8ab2c27ddc12780b52dc747a690a86ae53266325846cdda740e487760c\" returns successfully"
Aug 13 01:41:33.489701 kubelet[2688]: E0813 01:41:33.489508 2688 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Aug 13 01:41:34.024945 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Aug 13 01:41:34.297683 kubelet[2688]: E0813 01:41:34.297517 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Aug 13 01:41:34.319329 kubelet[2688]: I0813 01:41:34.319221 2688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-j2mh7" podStartSLOduration=5.437431256 podStartE2EDuration="6.319191187s" podCreationTimestamp="2025-08-13 01:41:28 +0000 UTC" firstStartedPulling="2025-08-13 01:41:29.270315554 +0000 UTC m=+352.224187029" lastFinishedPulling="2025-08-13 01:41:30.152075485 +0000 UTC m=+353.105946960" observedRunningTime="2025-08-13 01:41:34.316768711 +0000 UTC m=+357.270640186" watchObservedRunningTime="2025-08-13 01:41:34.319191187 +0000 UTC m=+357.273062662"
Aug 13 01:41:35.300545 kubelet[2688]: E0813 01:41:35.300500 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Aug 13 01:41:37.191736 systemd-networkd[1404]: lxc_health: Link UP
Aug 13 01:41:37.195253 systemd-networkd[1404]: lxc_health: Gained carrier
Aug 13 01:41:38.446139 systemd-networkd[1404]: lxc_health: Gained IPv6LL
Aug 13 01:41:38.791077 kubelet[2688]: E0813 01:41:38.791042 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Aug 13 01:41:39.312331 kubelet[2688]: E0813 01:41:39.311116 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Aug 13 01:41:39.794159 systemd[1]: Started sshd@77-172.233.223.240:22-91.99.206.168:45676.service - OpenSSH per-connection server daemon (91.99.206.168:45676).
Aug 13 01:41:40.038782 sshd[5914]: Unable to negotiate with 91.99.206.168 port 45676: no matching MAC found. Their offer: hmac-sha1-96 [preauth]
Aug 13 01:41:40.041030 systemd[1]: sshd@77-172.233.223.240:22-91.99.206.168:45676.service: Deactivated successfully.
Aug 13 01:41:40.315660 kubelet[2688]: E0813 01:41:40.313249 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Aug 13 01:41:41.180493 kubelet[2688]: E0813 01:41:41.180450 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Aug 13 01:41:42.517439 systemd[1]: run-containerd-runc-k8s.io-5e7f5f8ab2c27ddc12780b52dc747a690a86ae53266325846cdda740e487760c-runc.QLgRrk.mount: Deactivated successfully.
Aug 13 01:41:42.617947 sshd[5112]: Connection closed by 139.178.89.65 port 38016
Aug 13 01:41:42.619863 sshd-session[5110]: pam_unix(sshd:session): session closed for user core
Aug 13 01:41:42.624004 systemd[1]: sshd@76-172.233.223.240:22-139.178.89.65:38016.service: Deactivated successfully.
Aug 13 01:41:42.626655 systemd[1]: session-59.scope: Deactivated successfully.
Aug 13 01:41:42.628424 systemd-logind[1468]: Session 59 logged out. Waiting for processes to exit.
Aug 13 01:41:42.630764 systemd-logind[1468]: Removed session 59.
Aug 13 01:41:48.089180 systemd[1]: Started sshd@78-172.233.223.240:22-85.185.96.240:53800.service - OpenSSH per-connection server daemon (85.185.96.240:53800).
Aug 13 01:41:48.605393 sshd[5971]: Connection closed by 85.185.96.240 port 53800 [preauth]
Aug 13 01:41:48.606546 systemd[1]: sshd@78-172.233.223.240:22-85.185.96.240:53800.service: Deactivated successfully.