Aug 13 00:34:52.922522 kernel: Linux version 6.6.100-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Tue Aug 12 21:47:31 -00 2025 Aug 13 00:34:52.922554 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=ca71ea747c3f0d1de8a5ffcd0cfb9d0a1a4c4755719a09093b0248fa3902b433 Aug 13 00:34:52.922564 kernel: BIOS-provided physical RAM map: Aug 13 00:34:52.922570 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009f7ff] usable Aug 13 00:34:52.922576 kernel: BIOS-e820: [mem 0x000000000009f800-0x000000000009ffff] reserved Aug 13 00:34:52.922585 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Aug 13 00:34:52.922592 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable Aug 13 00:34:52.922598 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved Aug 13 00:34:52.922604 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Aug 13 00:34:52.922610 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Aug 13 00:34:52.922617 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Aug 13 00:34:52.922623 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Aug 13 00:34:52.922629 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000017fffffff] usable Aug 13 00:34:52.922635 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Aug 13 00:34:52.922645 kernel: NX (Execute Disable) protection: active Aug 13 00:34:52.922651 kernel: APIC: Static calls initialized Aug 13 00:34:52.922657 kernel: SMBIOS 2.8 present. 
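
Aside (not part of the log): the three e820 ranges flagged "usable" above are the RAM the hypervisor actually hands this guest. Summing them, as in the small Python sketch below, gives just under 4 GiB, which lines up with the Memory: 3964164K/4193772K line the kernel prints further down.

    # Sum the BIOS-e820 ranges reported as "usable"; the end addresses are inclusive.
    usable = [
        (0x0000000000000000, 0x000000000009f7ff),
        (0x0000000000100000, 0x000000007ffdcfff),
        (0x0000000100000000, 0x000000017fffffff),
    ]
    total = sum(end - start + 1 for start, end in usable)
    print(total, "bytes =", round(total / 2**30, 3), "GiB")   # 4294428672 bytes, roughly 4.0 GiB
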
Aug 13 00:34:52.922664 kernel: DMI: Linode Compute Instance/Standard PC (Q35 + ICH9, 2009), BIOS Not Specified Aug 13 00:34:52.922670 kernel: Hypervisor detected: KVM Aug 13 00:34:52.922680 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Aug 13 00:34:52.922686 kernel: kvm-clock: using sched offset of 4566213124 cycles Aug 13 00:34:52.922693 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Aug 13 00:34:52.922700 kernel: tsc: Detected 1999.996 MHz processor Aug 13 00:34:52.922706 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Aug 13 00:34:52.922713 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Aug 13 00:34:52.922720 kernel: last_pfn = 0x180000 max_arch_pfn = 0x400000000 Aug 13 00:34:52.923774 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Aug 13 00:34:52.923785 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Aug 13 00:34:52.923796 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000 Aug 13 00:34:52.923803 kernel: Using GB pages for direct mapping Aug 13 00:34:52.923809 kernel: ACPI: Early table checksum verification disabled Aug 13 00:34:52.923816 kernel: ACPI: RSDP 0x00000000000F5160 000014 (v00 BOCHS ) Aug 13 00:34:52.923822 kernel: ACPI: RSDT 0x000000007FFE2307 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 00:34:52.923829 kernel: ACPI: FACP 0x000000007FFE20F7 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 00:34:52.923835 kernel: ACPI: DSDT 0x000000007FFE0040 0020B7 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 00:34:52.923842 kernel: ACPI: FACS 0x000000007FFE0000 000040 Aug 13 00:34:52.923848 kernel: ACPI: APIC 0x000000007FFE21EB 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 00:34:52.923858 kernel: ACPI: HPET 0x000000007FFE226B 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 00:34:52.923864 kernel: ACPI: MCFG 0x000000007FFE22A3 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 00:34:52.923871 kernel: ACPI: WAET 0x000000007FFE22DF 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 00:34:52.923881 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe20f7-0x7ffe21ea] Aug 13 00:34:52.923888 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe20f6] Aug 13 00:34:52.923895 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f] Aug 13 00:34:52.923904 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe21eb-0x7ffe226a] Aug 13 00:34:52.923911 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe226b-0x7ffe22a2] Aug 13 00:34:52.923918 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe22a3-0x7ffe22de] Aug 13 00:34:52.923924 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe22df-0x7ffe2306] Aug 13 00:34:52.923931 kernel: No NUMA configuration found Aug 13 00:34:52.923938 kernel: Faking a node at [mem 0x0000000000000000-0x000000017fffffff] Aug 13 00:34:52.923945 kernel: NODE_DATA(0) allocated [mem 0x17fffa000-0x17fffffff] Aug 13 00:34:52.923952 kernel: Zone ranges: Aug 13 00:34:52.923959 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Aug 13 00:34:52.923969 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Aug 13 00:34:52.923975 kernel: Normal [mem 0x0000000100000000-0x000000017fffffff] Aug 13 00:34:52.923982 kernel: Movable zone start for each node Aug 13 00:34:52.923989 kernel: Early memory node ranges Aug 13 00:34:52.923996 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Aug 13 00:34:52.924003 kernel: node 0: [mem 
0x0000000000100000-0x000000007ffdcfff] Aug 13 00:34:52.924010 kernel: node 0: [mem 0x0000000100000000-0x000000017fffffff] Aug 13 00:34:52.924016 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000017fffffff] Aug 13 00:34:52.924023 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Aug 13 00:34:52.924032 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Aug 13 00:34:52.924039 kernel: On node 0, zone Normal: 35 pages in unavailable ranges Aug 13 00:34:52.924045 kernel: ACPI: PM-Timer IO Port: 0x608 Aug 13 00:34:52.924051 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Aug 13 00:34:52.924058 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Aug 13 00:34:52.924065 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Aug 13 00:34:52.924071 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Aug 13 00:34:52.924079 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Aug 13 00:34:52.924090 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Aug 13 00:34:52.924104 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Aug 13 00:34:52.924116 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Aug 13 00:34:52.924127 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Aug 13 00:34:52.924139 kernel: TSC deadline timer available Aug 13 00:34:52.924148 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Aug 13 00:34:52.924159 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Aug 13 00:34:52.924170 kernel: kvm-guest: KVM setup pv remote TLB flush Aug 13 00:34:52.924181 kernel: kvm-guest: setup PV sched yield Aug 13 00:34:52.924192 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Aug 13 00:34:52.924206 kernel: Booting paravirtualized kernel on KVM Aug 13 00:34:52.924213 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Aug 13 00:34:52.924220 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Aug 13 00:34:52.924227 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u1048576 Aug 13 00:34:52.924234 kernel: pcpu-alloc: s197096 r8192 d32280 u1048576 alloc=1*2097152 Aug 13 00:34:52.924241 kernel: pcpu-alloc: [0] 0 1 Aug 13 00:34:52.924248 kernel: kvm-guest: PV spinlocks enabled Aug 13 00:34:52.924254 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Aug 13 00:34:52.924262 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=ca71ea747c3f0d1de8a5ffcd0cfb9d0a1a4c4755719a09093b0248fa3902b433 Aug 13 00:34:52.924272 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Aug 13 00:34:52.924279 kernel: random: crng init done Aug 13 00:34:52.924285 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Aug 13 00:34:52.924292 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Aug 13 00:34:52.924298 kernel: Fallback order for Node 0: 0 Aug 13 00:34:52.924305 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 1031901 Aug 13 00:34:52.924312 kernel: Policy zone: Normal Aug 13 00:34:52.924318 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Aug 13 00:34:52.924327 kernel: software IO TLB: area num 2. Aug 13 00:34:52.924334 kernel: Memory: 3964164K/4193772K available (14336K kernel code, 2295K rwdata, 22872K rodata, 43504K init, 1572K bss, 229348K reserved, 0K cma-reserved) Aug 13 00:34:52.924344 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Aug 13 00:34:52.924355 kernel: ftrace: allocating 37942 entries in 149 pages Aug 13 00:34:52.924366 kernel: ftrace: allocated 149 pages with 4 groups Aug 13 00:34:52.924377 kernel: Dynamic Preempt: voluntary Aug 13 00:34:52.924389 kernel: rcu: Preemptible hierarchical RCU implementation. Aug 13 00:34:52.924401 kernel: rcu: RCU event tracing is enabled. Aug 13 00:34:52.924412 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Aug 13 00:34:52.924428 kernel: Trampoline variant of Tasks RCU enabled. Aug 13 00:34:52.924439 kernel: Rude variant of Tasks RCU enabled. Aug 13 00:34:52.924451 kernel: Tracing variant of Tasks RCU enabled. Aug 13 00:34:52.924463 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Aug 13 00:34:52.924469 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Aug 13 00:34:52.924476 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Aug 13 00:34:52.924483 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Aug 13 00:34:52.924489 kernel: Console: colour VGA+ 80x25 Aug 13 00:34:52.924496 kernel: printk: console [tty0] enabled Aug 13 00:34:52.924506 kernel: printk: console [ttyS0] enabled Aug 13 00:34:52.924522 kernel: ACPI: Core revision 20230628 Aug 13 00:34:52.924533 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Aug 13 00:34:52.924543 kernel: APIC: Switch to symmetric I/O mode setup Aug 13 00:34:52.924564 kernel: x2apic enabled Aug 13 00:34:52.924574 kernel: APIC: Switched APIC routing to: physical x2apic Aug 13 00:34:52.924581 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Aug 13 00:34:52.924588 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Aug 13 00:34:52.924595 kernel: kvm-guest: setup PV IPIs Aug 13 00:34:52.924602 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Aug 13 00:34:52.924609 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Aug 13 00:34:52.924616 kernel: Calibrating delay loop (skipped) preset value.. 
3999.99 BogoMIPS (lpj=1999996) Aug 13 00:34:52.924626 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Aug 13 00:34:52.924633 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Aug 13 00:34:52.924640 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Aug 13 00:34:52.924647 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Aug 13 00:34:52.924654 kernel: Spectre V2 : Mitigation: Retpolines Aug 13 00:34:52.924663 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Aug 13 00:34:52.924670 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Aug 13 00:34:52.924677 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Aug 13 00:34:52.924684 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Aug 13 00:34:52.924691 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Aug 13 00:34:52.924699 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Aug 13 00:34:52.924706 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Aug 13 00:34:52.924713 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM Aug 13 00:34:52.924722 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode Aug 13 00:34:52.924742 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Aug 13 00:34:52.925030 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Aug 13 00:34:52.925038 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Aug 13 00:34:52.925045 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers' Aug 13 00:34:52.925052 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Aug 13 00:34:52.925059 kernel: x86/fpu: xstate_offset[9]: 832, xstate_sizes[9]: 8 Aug 13 00:34:52.925066 kernel: x86/fpu: Enabled xstate features 0x207, context size is 840 bytes, using 'compacted' format. Aug 13 00:34:52.925073 kernel: Freeing SMP alternatives memory: 32K Aug 13 00:34:52.925084 kernel: pid_max: default: 32768 minimum: 301 Aug 13 00:34:52.925091 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Aug 13 00:34:52.925098 kernel: landlock: Up and running. Aug 13 00:34:52.925105 kernel: SELinux: Initializing. Aug 13 00:34:52.925112 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Aug 13 00:34:52.925119 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Aug 13 00:34:52.925126 kernel: smpboot: CPU0: AMD EPYC 7713 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1) Aug 13 00:34:52.925133 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Aug 13 00:34:52.925140 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Aug 13 00:34:52.925150 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Aug 13 00:34:52.925157 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Aug 13 00:34:52.925164 kernel: ... version: 0 Aug 13 00:34:52.925171 kernel: ... bit width: 48 Aug 13 00:34:52.925177 kernel: ... generic registers: 6 Aug 13 00:34:52.925184 kernel: ... value mask: 0000ffffffffffff Aug 13 00:34:52.925191 kernel: ... 
max period: 00007fffffffffff Aug 13 00:34:52.925198 kernel: ... fixed-purpose events: 0 Aug 13 00:34:52.925205 kernel: ... event mask: 000000000000003f Aug 13 00:34:52.925215 kernel: signal: max sigframe size: 3376 Aug 13 00:34:52.925222 kernel: rcu: Hierarchical SRCU implementation. Aug 13 00:34:52.925229 kernel: rcu: Max phase no-delay instances is 400. Aug 13 00:34:52.925236 kernel: smp: Bringing up secondary CPUs ... Aug 13 00:34:52.925243 kernel: smpboot: x86: Booting SMP configuration: Aug 13 00:34:52.925250 kernel: .... node #0, CPUs: #1 Aug 13 00:34:52.925257 kernel: smp: Brought up 1 node, 2 CPUs Aug 13 00:34:52.925264 kernel: smpboot: Max logical packages: 1 Aug 13 00:34:52.925271 kernel: smpboot: Total of 2 processors activated (7999.98 BogoMIPS) Aug 13 00:34:52.925281 kernel: devtmpfs: initialized Aug 13 00:34:52.925288 kernel: x86/mm: Memory block size: 128MB Aug 13 00:34:52.925295 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Aug 13 00:34:52.925302 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Aug 13 00:34:52.925309 kernel: pinctrl core: initialized pinctrl subsystem Aug 13 00:34:52.925316 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Aug 13 00:34:52.925322 kernel: audit: initializing netlink subsys (disabled) Aug 13 00:34:52.925329 kernel: audit: type=2000 audit(1755045292.420:1): state=initialized audit_enabled=0 res=1 Aug 13 00:34:52.925337 kernel: thermal_sys: Registered thermal governor 'step_wise' Aug 13 00:34:52.925346 kernel: thermal_sys: Registered thermal governor 'user_space' Aug 13 00:34:52.925353 kernel: cpuidle: using governor menu Aug 13 00:34:52.925360 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Aug 13 00:34:52.925366 kernel: dca service started, version 1.12.1 Aug 13 00:34:52.925373 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Aug 13 00:34:52.925380 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Aug 13 00:34:52.925387 kernel: PCI: Using configuration type 1 for base access Aug 13 00:34:52.925394 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
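
A side note, not from the log: the preset delay-loop figures above are self-consistent. lpj=1999996 is just the detected 1999.996 MHz TSC expressed in kHz, and the printed BogoMIPS value is lpj * HZ / 500000, which also implies this kernel runs with HZ=1000.

    lpj = 1_999_996              # "Calibrating delay loop (skipped) preset value.. (lpj=1999996)"
    HZ = 1000                    # implied by the numbers: 1999996 * 1000 / 500000 == 3999.992
    print(lpj * HZ / 500_000)    # 3999.992 -> "3999.99 BogoMIPS"
    print(2 * 3999.99)           # 7999.98  -> "Total of 2 processors activated (7999.98 BogoMIPS)"
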
Aug 13 00:34:52.925401 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Aug 13 00:34:52.925411 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Aug 13 00:34:52.925418 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Aug 13 00:34:52.925424 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Aug 13 00:34:52.925431 kernel: ACPI: Added _OSI(Module Device) Aug 13 00:34:52.925438 kernel: ACPI: Added _OSI(Processor Device) Aug 13 00:34:52.925445 kernel: ACPI: Added _OSI(Processor Aggregator Device) Aug 13 00:34:52.925452 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Aug 13 00:34:52.925459 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Aug 13 00:34:52.925466 kernel: ACPI: Interpreter enabled Aug 13 00:34:52.925475 kernel: ACPI: PM: (supports S0 S3 S5) Aug 13 00:34:52.925482 kernel: ACPI: Using IOAPIC for interrupt routing Aug 13 00:34:52.925489 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Aug 13 00:34:52.925496 kernel: PCI: Using E820 reservations for host bridge windows Aug 13 00:34:52.925508 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Aug 13 00:34:52.925520 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Aug 13 00:34:52.926640 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Aug 13 00:34:52.926855 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Aug 13 00:34:52.927019 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Aug 13 00:34:52.927037 kernel: PCI host bridge to bus 0000:00 Aug 13 00:34:52.927214 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Aug 13 00:34:52.927372 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Aug 13 00:34:52.927515 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Aug 13 00:34:52.927664 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window] Aug 13 00:34:52.927829 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Aug 13 00:34:52.927992 kernel: pci_bus 0000:00: root bus resource [mem 0x180000000-0x97fffffff window] Aug 13 00:34:52.928138 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Aug 13 00:34:52.928323 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Aug 13 00:34:52.928508 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Aug 13 00:34:52.928674 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref] Aug 13 00:34:52.928849 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff] Aug 13 00:34:52.928979 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref] Aug 13 00:34:52.929098 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Aug 13 00:34:52.929230 kernel: pci 0000:00:02.0: [1af4:1004] type 00 class 0x010000 Aug 13 00:34:52.929348 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc000-0xc03f] Aug 13 00:34:52.929470 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff] Aug 13 00:34:52.929619 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref] Aug 13 00:34:52.929827 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 Aug 13 00:34:52.930046 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f] Aug 13 00:34:52.930169 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff] Aug 13 00:34:52.930284 kernel: pci 0000:00:03.0: reg 0x20: [mem 
0xfe004000-0xfe007fff 64bit pref] Aug 13 00:34:52.930424 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref] Aug 13 00:34:52.930613 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Aug 13 00:34:52.931872 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Aug 13 00:34:52.932018 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Aug 13 00:34:52.932145 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0c0-0xc0df] Aug 13 00:34:52.932260 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd3000-0xfebd3fff] Aug 13 00:34:52.932382 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Aug 13 00:34:52.932495 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Aug 13 00:34:52.932511 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Aug 13 00:34:52.932525 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Aug 13 00:34:52.932535 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Aug 13 00:34:52.932547 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Aug 13 00:34:52.932554 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Aug 13 00:34:52.932561 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Aug 13 00:34:52.932568 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Aug 13 00:34:52.932575 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Aug 13 00:34:52.932582 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Aug 13 00:34:52.932589 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Aug 13 00:34:52.932595 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Aug 13 00:34:52.932602 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Aug 13 00:34:52.932612 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Aug 13 00:34:52.932619 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Aug 13 00:34:52.932625 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Aug 13 00:34:52.932632 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Aug 13 00:34:52.932639 kernel: iommu: Default domain type: Translated Aug 13 00:34:52.932646 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Aug 13 00:34:52.932654 kernel: PCI: Using ACPI for IRQ routing Aug 13 00:34:52.932661 kernel: PCI: pci_cache_line_size set to 64 bytes Aug 13 00:34:52.932667 kernel: e820: reserve RAM buffer [mem 0x0009f800-0x0009ffff] Aug 13 00:34:52.932677 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff] Aug 13 00:34:52.933847 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Aug 13 00:34:52.933970 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Aug 13 00:34:52.934131 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Aug 13 00:34:52.934142 kernel: vgaarb: loaded Aug 13 00:34:52.934150 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Aug 13 00:34:52.934157 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Aug 13 00:34:52.934164 kernel: clocksource: Switched to clocksource kvm-clock Aug 13 00:34:52.934176 kernel: VFS: Disk quotas dquot_6.6.0 Aug 13 00:34:52.934183 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Aug 13 00:34:52.934190 kernel: pnp: PnP ACPI init Aug 13 00:34:52.934349 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved Aug 13 00:34:52.934370 kernel: pnp: PnP ACPI: found 5 devices Aug 13 00:34:52.934384 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 
2085701024 ns Aug 13 00:34:52.934396 kernel: NET: Registered PF_INET protocol family Aug 13 00:34:52.934409 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Aug 13 00:34:52.934422 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Aug 13 00:34:52.934440 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Aug 13 00:34:52.934453 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Aug 13 00:34:52.934465 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Aug 13 00:34:52.934476 kernel: TCP: Hash tables configured (established 32768 bind 32768) Aug 13 00:34:52.934483 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Aug 13 00:34:52.934490 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Aug 13 00:34:52.934498 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Aug 13 00:34:52.934511 kernel: NET: Registered PF_XDP protocol family Aug 13 00:34:52.934668 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Aug 13 00:34:52.934875 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Aug 13 00:34:52.935002 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Aug 13 00:34:52.935108 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window] Aug 13 00:34:52.935233 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Aug 13 00:34:52.935362 kernel: pci_bus 0000:00: resource 9 [mem 0x180000000-0x97fffffff window] Aug 13 00:34:52.935374 kernel: PCI: CLS 0 bytes, default 64 Aug 13 00:34:52.935381 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Aug 13 00:34:52.935389 kernel: software IO TLB: mapped [mem 0x000000007bfdd000-0x000000007ffdd000] (64MB) Aug 13 00:34:52.935401 kernel: Initialise system trusted keyrings Aug 13 00:34:52.935409 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Aug 13 00:34:52.935415 kernel: Key type asymmetric registered Aug 13 00:34:52.935422 kernel: Asymmetric key parser 'x509' registered Aug 13 00:34:52.935429 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Aug 13 00:34:52.935436 kernel: io scheduler mq-deadline registered Aug 13 00:34:52.935443 kernel: io scheduler kyber registered Aug 13 00:34:52.935450 kernel: io scheduler bfq registered Aug 13 00:34:52.935457 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Aug 13 00:34:52.935467 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Aug 13 00:34:52.935475 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Aug 13 00:34:52.935482 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Aug 13 00:34:52.935489 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Aug 13 00:34:52.935496 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Aug 13 00:34:52.935508 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Aug 13 00:34:52.935520 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Aug 13 00:34:52.935656 kernel: rtc_cmos 00:03: RTC can wake from S4 Aug 13 00:34:52.935672 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Aug 13 00:34:52.936835 kernel: rtc_cmos 00:03: registered as rtc0 Aug 13 00:34:52.936952 kernel: rtc_cmos 00:03: setting system clock to 2025-08-13T00:34:52 UTC (1755045292) Aug 13 00:34:52.937060 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Aug 13 00:34:52.937070 
kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Aug 13 00:34:52.937077 kernel: NET: Registered PF_INET6 protocol family Aug 13 00:34:52.937084 kernel: Segment Routing with IPv6 Aug 13 00:34:52.937091 kernel: In-situ OAM (IOAM) with IPv6 Aug 13 00:34:52.937098 kernel: NET: Registered PF_PACKET protocol family Aug 13 00:34:52.937110 kernel: Key type dns_resolver registered Aug 13 00:34:52.937117 kernel: IPI shorthand broadcast: enabled Aug 13 00:34:52.937124 kernel: sched_clock: Marking stable (679003692, 214897549)->(936098802, -42197561) Aug 13 00:34:52.937131 kernel: registered taskstats version 1 Aug 13 00:34:52.937138 kernel: Loading compiled-in X.509 certificates Aug 13 00:34:52.937145 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.100-flatcar: dfd2b306eb54324ea79eea0261f8d493924aeeeb' Aug 13 00:34:52.937152 kernel: Key type .fscrypt registered Aug 13 00:34:52.937159 kernel: Key type fscrypt-provisioning registered Aug 13 00:34:52.937169 kernel: ima: No TPM chip found, activating TPM-bypass! Aug 13 00:34:52.937175 kernel: ima: Allocated hash algorithm: sha1 Aug 13 00:34:52.937182 kernel: ima: No architecture policies found Aug 13 00:34:52.937189 kernel: clk: Disabling unused clocks Aug 13 00:34:52.937196 kernel: Freeing unused kernel image (initmem) memory: 43504K Aug 13 00:34:52.937204 kernel: Write protecting the kernel read-only data: 38912k Aug 13 00:34:52.937211 kernel: Freeing unused kernel image (rodata/data gap) memory: 1704K Aug 13 00:34:52.937218 kernel: Run /init as init process Aug 13 00:34:52.937225 kernel: with arguments: Aug 13 00:34:52.937234 kernel: /init Aug 13 00:34:52.937240 kernel: with environment: Aug 13 00:34:52.937247 kernel: HOME=/ Aug 13 00:34:52.937254 kernel: TERM=linux Aug 13 00:34:52.937261 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Aug 13 00:34:52.937268 systemd[1]: Successfully made /usr/ read-only. Aug 13 00:34:52.937280 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Aug 13 00:34:52.937288 systemd[1]: Detected virtualization kvm. Aug 13 00:34:52.937297 systemd[1]: Detected architecture x86-64. Aug 13 00:34:52.937305 systemd[1]: Running in initrd. Aug 13 00:34:52.937312 systemd[1]: No hostname configured, using default hostname. Aug 13 00:34:52.937320 systemd[1]: Hostname set to . Aug 13 00:34:52.937327 systemd[1]: Initializing machine ID from random generator. Aug 13 00:34:52.937348 systemd[1]: Queued start job for default target initrd.target. Aug 13 00:34:52.937360 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 13 00:34:52.937368 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 13 00:34:52.937376 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Aug 13 00:34:52.937384 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Aug 13 00:34:52.937392 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Aug 13 00:34:52.937400 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... 
Aug 13 00:34:52.937409 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Aug 13 00:34:52.937419 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Aug 13 00:34:52.937427 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 13 00:34:52.937435 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Aug 13 00:34:52.937443 systemd[1]: Reached target paths.target - Path Units. Aug 13 00:34:52.937451 systemd[1]: Reached target slices.target - Slice Units. Aug 13 00:34:52.937458 systemd[1]: Reached target swap.target - Swaps. Aug 13 00:34:52.937466 systemd[1]: Reached target timers.target - Timer Units. Aug 13 00:34:52.937474 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Aug 13 00:34:52.937484 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Aug 13 00:34:52.937491 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Aug 13 00:34:52.937501 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Aug 13 00:34:52.937515 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Aug 13 00:34:52.937528 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Aug 13 00:34:52.937538 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Aug 13 00:34:52.937545 systemd[1]: Reached target sockets.target - Socket Units. Aug 13 00:34:52.937553 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Aug 13 00:34:52.937561 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Aug 13 00:34:52.937572 systemd[1]: Finished network-cleanup.service - Network Cleanup. Aug 13 00:34:52.937580 systemd[1]: Starting systemd-fsck-usr.service... Aug 13 00:34:52.937588 systemd[1]: Starting systemd-journald.service - Journal Service... Aug 13 00:34:52.937596 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Aug 13 00:34:52.937604 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 00:34:52.937611 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Aug 13 00:34:52.937619 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Aug 13 00:34:52.937630 systemd[1]: Finished systemd-fsck-usr.service. Aug 13 00:34:52.937662 systemd-journald[177]: Collecting audit messages is disabled. Aug 13 00:34:52.937685 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Aug 13 00:34:52.937694 systemd-journald[177]: Journal started Aug 13 00:34:52.937714 systemd-journald[177]: Runtime Journal (/run/log/journal/8467c332ebee43ec95298943b7461151) is 8M, max 78.3M, 70.3M free. Aug 13 00:34:52.932052 systemd-modules-load[178]: Inserted module 'overlay' Aug 13 00:34:52.940760 systemd[1]: Started systemd-journald.service - Journal Service. Aug 13 00:34:52.959777 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Aug 13 00:34:52.959957 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
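
Aside (not part of the log): the odd-looking device units above, such as dev-disk-by\x2dlabel-ROOT.device, are just systemd's path escaping: the leading '/' is dropped, '/' separators become '-', and other special characters become \xNN. A simplified Python sketch of that mapping, ignoring corner cases such as ':' and a leading '.':

    def escape_path(path):
        # Roughly what `systemd-escape --path` produces for a device path.
        parts = path.strip("/").split("/")
        return "-".join(
            "".join(c if c.isalnum() or c in "_." else "\\x%02x" % ord(c) for c in part)
            for part in parts
        )

    print(escape_path("/dev/disk/by-label/EFI-SYSTEM") + ".device")
    # dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device
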
Aug 13 00:34:53.010408 kernel: Bridge firewalling registered Aug 13 00:34:52.961144 systemd-modules-load[178]: Inserted module 'br_netfilter' Aug 13 00:34:53.017012 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Aug 13 00:34:53.017969 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 00:34:53.019249 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Aug 13 00:34:53.030906 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Aug 13 00:34:53.055010 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 13 00:34:53.058848 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Aug 13 00:34:53.060773 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 13 00:34:53.062393 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 13 00:34:53.065909 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Aug 13 00:34:53.072053 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 13 00:34:53.076914 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 13 00:34:53.079960 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Aug 13 00:34:53.088648 dracut-cmdline[210]: dracut-dracut-053 Aug 13 00:34:53.091922 dracut-cmdline[210]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=ca71ea747c3f0d1de8a5ffcd0cfb9d0a1a4c4755719a09093b0248fa3902b433 Aug 13 00:34:53.126208 systemd-resolved[214]: Positive Trust Anchors: Aug 13 00:34:53.126223 systemd-resolved[214]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 13 00:34:53.126249 systemd-resolved[214]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Aug 13 00:34:53.133384 systemd-resolved[214]: Defaulting to hostname 'linux'. Aug 13 00:34:53.136124 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Aug 13 00:34:53.136870 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Aug 13 00:34:53.166777 kernel: SCSI subsystem initialized Aug 13 00:34:53.175757 kernel: Loading iSCSI transport class v2.0-870. Aug 13 00:34:53.187758 kernel: iscsi: registered transport (tcp) Aug 13 00:34:53.210009 kernel: iscsi: registered transport (qla4xxx) Aug 13 00:34:53.210047 kernel: QLogic iSCSI HBA Driver Aug 13 00:34:53.259306 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Aug 13 00:34:53.263864 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... 
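
Aside, not from the log: the dracut-cmdline[210] message above carries the full kernel command line the initrd works from. A small Python sketch of the plain key=value split such kargs get; repeated keys like rootflags, mount.usrflags and console simply yield multiple values.

    cmdline = ("rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a "
               "mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 "
               "rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 "
               "console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai "
               "verity.usrhash=ca71ea747c3f0d1de8a5ffcd0cfb9d0a1a4c4755719a09093b0248fa3902b433")
    kargs = {}
    for token in cmdline.split():
        key, _, value = token.partition("=")   # "root=LABEL=ROOT" keeps "LABEL=ROOT" intact
        kargs.setdefault(key, []).append(value)
    print(kargs["root"])             # ['LABEL=ROOT']
    print(kargs["console"])          # ['ttyS0,115200n8', 'tty0']
    print(kargs["verity.usrhash"])   # ['ca71ea747c3f0d1de8a5ffcd0cfb9d0a1a4c4755719a09093b0248fa3902b433']
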
Aug 13 00:34:53.291155 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Aug 13 00:34:53.291194 kernel: device-mapper: uevent: version 1.0.3 Aug 13 00:34:53.291766 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Aug 13 00:34:53.337766 kernel: raid6: avx2x4 gen() 27900 MB/s Aug 13 00:34:53.355753 kernel: raid6: avx2x2 gen() 28762 MB/s Aug 13 00:34:53.374572 kernel: raid6: avx2x1 gen() 20406 MB/s Aug 13 00:34:53.374619 kernel: raid6: using algorithm avx2x2 gen() 28762 MB/s Aug 13 00:34:53.393502 kernel: raid6: .... xor() 26726 MB/s, rmw enabled Aug 13 00:34:53.393566 kernel: raid6: using avx2x2 recovery algorithm Aug 13 00:34:53.415768 kernel: xor: automatically using best checksumming function avx Aug 13 00:34:53.542752 kernel: Btrfs loaded, zoned=no, fsverity=no Aug 13 00:34:53.556457 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Aug 13 00:34:53.562856 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 13 00:34:53.578637 systemd-udevd[397]: Using default interface naming scheme 'v255'. Aug 13 00:34:53.583520 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 13 00:34:53.592058 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Aug 13 00:34:53.606382 dracut-pre-trigger[403]: rd.md=0: removing MD RAID activation Aug 13 00:34:53.636079 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Aug 13 00:34:53.641851 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Aug 13 00:34:53.701527 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Aug 13 00:34:53.709913 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Aug 13 00:34:53.721511 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Aug 13 00:34:53.724716 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Aug 13 00:34:53.726708 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 13 00:34:53.727252 systemd[1]: Reached target remote-fs.target - Remote File Systems. Aug 13 00:34:53.732892 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Aug 13 00:34:53.745308 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Aug 13 00:34:53.779153 kernel: cryptd: max_cpu_qlen set to 1000 Aug 13 00:34:53.794967 kernel: scsi host0: Virtio SCSI HBA Aug 13 00:34:53.800826 kernel: AVX2 version of gcm_enc/dec engaged. Aug 13 00:34:53.802063 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Aug 13 00:34:53.804514 kernel: libata version 3.00 loaded. Aug 13 00:34:53.804532 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Aug 13 00:34:53.802217 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 13 00:34:53.806104 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Aug 13 00:34:53.806749 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 13 00:34:53.806863 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 00:34:53.956843 kernel: AES CTR mode by8 optimization enabled Aug 13 00:34:53.809150 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... 
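
A brief aside, not part of the log: the raid6 lines above are the kernel benchmarking each AVX2 gen() routine and keeping the fastest one, which is why it settles on avx2x2 at 28762 MB/s.

    results = {"avx2x4": 27900, "avx2x2": 28762, "avx2x1": 20406}   # MB/s, taken from the log above
    best = max(results, key=results.get)
    print("raid6: using algorithm", best, "gen()", results[best], "MB/s")
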
Aug 13 00:34:53.928612 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 00:34:54.004056 kernel: sd 0:0:0:0: Power-on or device reset occurred Aug 13 00:34:54.004467 kernel: sd 0:0:0:0: [sda] 9297920 512-byte logical blocks: (4.76 GB/4.43 GiB) Aug 13 00:34:54.004670 kernel: sd 0:0:0:0: [sda] Write Protect is off Aug 13 00:34:54.004888 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08 Aug 13 00:34:54.005122 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Aug 13 00:34:54.006756 kernel: ahci 0000:00:1f.2: version 3.0 Aug 13 00:34:54.006924 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Aug 13 00:34:54.006943 kernel: GPT:9289727 != 9297919 Aug 13 00:34:54.006952 kernel: GPT:Alternate GPT header not at the end of the disk. Aug 13 00:34:54.006960 kernel: GPT:9289727 != 9297919 Aug 13 00:34:54.006969 kernel: GPT: Use GNU Parted to correct GPT errors. Aug 13 00:34:54.006978 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 13 00:34:54.006987 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Aug 13 00:34:54.007130 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Aug 13 00:34:54.007797 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Aug 13 00:34:54.007951 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Aug 13 00:34:54.017866 kernel: scsi host1: ahci Aug 13 00:34:54.018942 kernel: scsi host2: ahci Aug 13 00:34:54.019112 kernel: scsi host3: ahci Aug 13 00:34:54.020778 kernel: scsi host4: ahci Aug 13 00:34:54.022298 kernel: scsi host5: ahci Aug 13 00:34:54.022458 kernel: scsi host6: ahci Aug 13 00:34:54.022599 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3100 irq 46 Aug 13 00:34:54.022610 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3180 irq 46 Aug 13 00:34:54.022619 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3200 irq 46 Aug 13 00:34:54.022628 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3280 irq 46 Aug 13 00:34:54.022637 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3300 irq 46 Aug 13 00:34:54.022645 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3380 irq 46 Aug 13 00:34:54.071462 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Aug 13 00:34:54.108561 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by (udev-worker) (442) Aug 13 00:34:54.108578 kernel: BTRFS: device fsid 88a9bed3-d26b-40c9-82ba-dbb7d44acae7 devid 1 transid 45 /dev/sda3 scanned by (udev-worker) (453) Aug 13 00:34:54.108524 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 00:34:54.127961 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Aug 13 00:34:54.136252 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Aug 13 00:34:54.143057 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Aug 13 00:34:54.143634 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Aug 13 00:34:54.155850 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Aug 13 00:34:54.158955 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Aug 13 00:34:54.161039 disk-uuid[561]: Primary Header is updated. 
Aug 13 00:34:54.161039 disk-uuid[561]: Secondary Entries is updated. Aug 13 00:34:54.161039 disk-uuid[561]: Secondary Header is updated. Aug 13 00:34:54.165747 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 13 00:34:54.184647 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 13 00:34:54.328746 kernel: ata4: SATA link down (SStatus 0 SControl 300) Aug 13 00:34:54.328784 kernel: ata2: SATA link down (SStatus 0 SControl 300) Aug 13 00:34:54.334444 kernel: ata5: SATA link down (SStatus 0 SControl 300) Aug 13 00:34:54.334467 kernel: ata3: SATA link down (SStatus 0 SControl 300) Aug 13 00:34:54.334477 kernel: ata6: SATA link down (SStatus 0 SControl 300) Aug 13 00:34:54.334746 kernel: ata1: SATA link down (SStatus 0 SControl 300) Aug 13 00:34:55.178763 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 13 00:34:55.180147 disk-uuid[562]: The operation has completed successfully. Aug 13 00:34:55.236654 systemd[1]: disk-uuid.service: Deactivated successfully. Aug 13 00:34:55.236806 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Aug 13 00:34:55.273928 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Aug 13 00:34:55.277981 sh[584]: Success Aug 13 00:34:55.292612 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Aug 13 00:34:55.341149 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Aug 13 00:34:55.362455 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Aug 13 00:34:55.363308 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Aug 13 00:34:55.394174 kernel: BTRFS info (device dm-0): first mount of filesystem 88a9bed3-d26b-40c9-82ba-dbb7d44acae7 Aug 13 00:34:55.394217 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Aug 13 00:34:55.396187 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Aug 13 00:34:55.399541 kernel: BTRFS info (device dm-0): disabling log replay at mount time Aug 13 00:34:55.399555 kernel: BTRFS info (device dm-0): using free space tree Aug 13 00:34:55.408740 kernel: BTRFS info (device dm-0): enabling ssd optimizations Aug 13 00:34:55.410104 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Aug 13 00:34:55.411156 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Aug 13 00:34:55.419894 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Aug 13 00:34:55.422872 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Aug 13 00:34:55.444702 kernel: BTRFS info (device sda6): first mount of filesystem fdf7217d-4a76-4a93-98b1-684d9c141517 Aug 13 00:34:55.444743 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Aug 13 00:34:55.444755 kernel: BTRFS info (device sda6): using free space tree Aug 13 00:34:55.449741 kernel: BTRFS info (device sda6): enabling ssd optimizations Aug 13 00:34:55.449766 kernel: BTRFS info (device sda6): auto enabling async discard Aug 13 00:34:55.456769 kernel: BTRFS info (device sda6): last unmount of filesystem fdf7217d-4a76-4a93-98b1-684d9c141517 Aug 13 00:34:55.458250 systemd[1]: Finished ignition-setup.service - Ignition (setup). Aug 13 00:34:55.462894 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
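
Aside (not part of the log): the GPT warnings above are the usual sign of a small image written onto a larger virtual disk. The backup GPT header was found at LBA 9289727, where the original image ended, instead of at the disk's last LBA; disk-uuid.service then rewrites the secondary entries and header at the true end of /dev/sda, which is what the disk-uuid[561] messages record. The gap works out to 4 MiB:

    sector_size = 512
    disk_sectors = 9_297_920                  # "sd 0:0:0:0: [sda] 9297920 512-byte logical blocks"
    found_backup_lba = 9_289_727              # "GPT:9289727 != 9297919"
    expected_backup_lba = disk_sectors - 1    # last LBA, where GPT expects the backup header
    gap = expected_backup_lba - found_backup_lba
    print(gap, "sectors =", gap * sector_size // 2**20, "MiB")   # 8192 sectors = 4 MiB
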
Aug 13 00:34:55.531971 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Aug 13 00:34:55.540937 systemd[1]: Starting systemd-networkd.service - Network Configuration... Aug 13 00:34:55.550336 ignition[682]: Ignition 2.20.0 Aug 13 00:34:55.550823 ignition[682]: Stage: fetch-offline Aug 13 00:34:55.550871 ignition[682]: no configs at "/usr/lib/ignition/base.d" Aug 13 00:34:55.550882 ignition[682]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Aug 13 00:34:55.550975 ignition[682]: parsed url from cmdline: "" Aug 13 00:34:55.555033 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Aug 13 00:34:55.550979 ignition[682]: no config URL provided Aug 13 00:34:55.550984 ignition[682]: reading system config file "/usr/lib/ignition/user.ign" Aug 13 00:34:55.550994 ignition[682]: no config at "/usr/lib/ignition/user.ign" Aug 13 00:34:55.550999 ignition[682]: failed to fetch config: resource requires networking Aug 13 00:34:55.551148 ignition[682]: Ignition finished successfully Aug 13 00:34:55.582262 systemd-networkd[767]: lo: Link UP Aug 13 00:34:55.582274 systemd-networkd[767]: lo: Gained carrier Aug 13 00:34:55.584054 systemd-networkd[767]: Enumeration completed Aug 13 00:34:55.584444 systemd-networkd[767]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 00:34:55.584448 systemd-networkd[767]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 13 00:34:55.585572 systemd[1]: Started systemd-networkd.service - Network Configuration. Aug 13 00:34:55.586072 systemd-networkd[767]: eth0: Link UP Aug 13 00:34:55.586077 systemd-networkd[767]: eth0: Gained carrier Aug 13 00:34:55.586084 systemd-networkd[767]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 00:34:55.587362 systemd[1]: Reached target network.target - Network. Aug 13 00:34:55.593961 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Aug 13 00:34:55.607401 ignition[771]: Ignition 2.20.0 Aug 13 00:34:55.607814 ignition[771]: Stage: fetch Aug 13 00:34:55.607968 ignition[771]: no configs at "/usr/lib/ignition/base.d" Aug 13 00:34:55.607981 ignition[771]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Aug 13 00:34:55.608081 ignition[771]: parsed url from cmdline: "" Aug 13 00:34:55.608085 ignition[771]: no config URL provided Aug 13 00:34:55.608091 ignition[771]: reading system config file "/usr/lib/ignition/user.ign" Aug 13 00:34:55.608101 ignition[771]: no config at "/usr/lib/ignition/user.ign" Aug 13 00:34:55.608133 ignition[771]: PUT http://169.254.169.254/v1/token: attempt #1 Aug 13 00:34:55.608318 ignition[771]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable Aug 13 00:34:55.808536 ignition[771]: PUT http://169.254.169.254/v1/token: attempt #2 Aug 13 00:34:55.808817 ignition[771]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable Aug 13 00:34:56.128847 systemd-networkd[767]: eth0: DHCPv4 address 172.234.21.106/24, gateway 172.234.21.1 acquired from 23.194.118.56 Aug 13 00:34:56.209088 ignition[771]: PUT http://169.254.169.254/v1/token: attempt #3 Aug 13 00:34:56.305992 ignition[771]: PUT result: OK Aug 13 00:34:56.306635 ignition[771]: GET http://169.254.169.254/v1/user-data: attempt #1 Aug 13 00:34:56.434489 ignition[771]: GET result: OK Aug 13 00:34:56.434607 ignition[771]: parsing config with SHA512: 39d1834096471b696b71ae6755593233fee737afde823e193c6f5d0f9769103a76c5db9304d64b1f50bd74ec61345bf566db0839f59ea14bf9946ea680d2394a Aug 13 00:34:56.438430 unknown[771]: fetched base config from "system" Aug 13 00:34:56.438448 unknown[771]: fetched base config from "system" Aug 13 00:34:56.439100 ignition[771]: fetch: fetch complete Aug 13 00:34:56.438454 unknown[771]: fetched user config from "akamai" Aug 13 00:34:56.439107 ignition[771]: fetch: fetch passed Aug 13 00:34:56.441551 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Aug 13 00:34:56.439153 ignition[771]: Ignition finished successfully Aug 13 00:34:56.447940 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Aug 13 00:34:56.464425 ignition[778]: Ignition 2.20.0 Aug 13 00:34:56.464447 ignition[778]: Stage: kargs Aug 13 00:34:56.464649 ignition[778]: no configs at "/usr/lib/ignition/base.d" Aug 13 00:34:56.464662 ignition[778]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Aug 13 00:34:56.468191 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Aug 13 00:34:56.465432 ignition[778]: kargs: kargs passed Aug 13 00:34:56.465473 ignition[778]: Ignition finished successfully Aug 13 00:34:56.473968 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Aug 13 00:34:56.488126 ignition[785]: Ignition 2.20.0 Aug 13 00:34:56.488142 ignition[785]: Stage: disks Aug 13 00:34:56.488291 ignition[785]: no configs at "/usr/lib/ignition/base.d" Aug 13 00:34:56.488305 ignition[785]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Aug 13 00:34:56.489029 ignition[785]: disks: disks passed Aug 13 00:34:56.490575 systemd[1]: Finished ignition-disks.service - Ignition (disks). Aug 13 00:34:56.489069 ignition[785]: Ignition finished successfully Aug 13 00:34:56.491913 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Aug 13 00:34:56.513679 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. 
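
A side note, not from the log: the fetch stage above keeps retrying PUT http://169.254.169.254/v1/token until networking is up (attempts #1 and #2 fail with "network is unreachable" before the DHCPv4 lease arrives at 00:34:56, and attempt #3 succeeds), then GETs /v1/user-data and logs its SHA512. Below is a rough Python sketch of that flow; the Metadata-Token header names are an assumption based on the Linode/Akamai metadata API, since the log itself only shows the endpoints and the retries.

    import hashlib, time, urllib.request

    def fetch_user_data(base="http://169.254.169.254"):
        token = None
        for attempt in range(1, 10):          # the log shows "attempt #1" ... "attempt #3"
            try:
                req = urllib.request.Request(
                    f"{base}/v1/token", method="PUT",
                    headers={"Metadata-Token-Expiry-Seconds": "3600"})   # assumed header name
                token = urllib.request.urlopen(req, timeout=5).read().decode().strip()
                break
            except OSError:
                time.sleep(2)                 # network not up yet; retry as Ignition does
        req = urllib.request.Request(
            f"{base}/v1/user-data",
            headers={"Metadata-Token": token})                           # assumed header name
        data = urllib.request.urlopen(req, timeout=5).read()
        print("parsing config with SHA512:", hashlib.sha512(data).hexdigest())
        return data
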
Aug 13 00:34:56.514781 systemd[1]: Reached target local-fs.target - Local File Systems. Aug 13 00:34:56.515794 systemd[1]: Reached target sysinit.target - System Initialization. Aug 13 00:34:56.517028 systemd[1]: Reached target basic.target - Basic System. Aug 13 00:34:56.524917 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Aug 13 00:34:56.540918 systemd-fsck[793]: ROOT: clean, 14/553520 files, 52654/553472 blocks Aug 13 00:34:56.545052 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Aug 13 00:34:56.548913 systemd[1]: Mounting sysroot.mount - /sysroot... Aug 13 00:34:56.643869 kernel: EXT4-fs (sda9): mounted filesystem 27db109b-2440-48a3-909e-fd8973275523 r/w with ordered data mode. Quota mode: none. Aug 13 00:34:56.644331 systemd[1]: Mounted sysroot.mount - /sysroot. Aug 13 00:34:56.645621 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Aug 13 00:34:56.658885 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Aug 13 00:34:56.661575 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Aug 13 00:34:56.662480 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Aug 13 00:34:56.662538 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Aug 13 00:34:56.662571 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Aug 13 00:34:56.672935 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Aug 13 00:34:56.676054 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (801) Aug 13 00:34:56.676077 kernel: BTRFS info (device sda6): first mount of filesystem fdf7217d-4a76-4a93-98b1-684d9c141517 Aug 13 00:34:56.676089 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Aug 13 00:34:56.676099 kernel: BTRFS info (device sda6): using free space tree Aug 13 00:34:56.683744 kernel: BTRFS info (device sda6): enabling ssd optimizations Aug 13 00:34:56.683768 kernel: BTRFS info (device sda6): auto enabling async discard Aug 13 00:34:56.686110 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Aug 13 00:34:56.693872 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Aug 13 00:34:56.736100 initrd-setup-root[825]: cut: /sysroot/etc/passwd: No such file or directory Aug 13 00:34:56.742295 initrd-setup-root[832]: cut: /sysroot/etc/group: No such file or directory Aug 13 00:34:56.746428 initrd-setup-root[839]: cut: /sysroot/etc/shadow: No such file or directory Aug 13 00:34:56.751657 initrd-setup-root[846]: cut: /sysroot/etc/gshadow: No such file or directory Aug 13 00:34:56.849340 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Aug 13 00:34:56.856812 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Aug 13 00:34:56.859512 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Aug 13 00:34:56.866241 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Aug 13 00:34:56.867811 kernel: BTRFS info (device sda6): last unmount of filesystem fdf7217d-4a76-4a93-98b1-684d9c141517 Aug 13 00:34:56.894763 ignition[914]: INFO : Ignition 2.20.0 Aug 13 00:34:56.894763 ignition[914]: INFO : Stage: mount Aug 13 00:34:56.894763 ignition[914]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 13 00:34:56.894763 ignition[914]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Aug 13 00:34:56.900300 ignition[914]: INFO : mount: mount passed Aug 13 00:34:56.900300 ignition[914]: INFO : Ignition finished successfully Aug 13 00:34:56.898521 systemd[1]: Finished ignition-mount.service - Ignition (mount). Aug 13 00:34:56.905911 systemd[1]: Starting ignition-files.service - Ignition (files)... Aug 13 00:34:56.907707 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Aug 13 00:34:57.104991 systemd-networkd[767]: eth0: Gained IPv6LL Aug 13 00:34:57.653477 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Aug 13 00:34:57.669302 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (926) Aug 13 00:34:57.669334 kernel: BTRFS info (device sda6): first mount of filesystem fdf7217d-4a76-4a93-98b1-684d9c141517 Aug 13 00:34:57.669346 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Aug 13 00:34:57.672632 kernel: BTRFS info (device sda6): using free space tree Aug 13 00:34:57.677783 kernel: BTRFS info (device sda6): enabling ssd optimizations Aug 13 00:34:57.677906 kernel: BTRFS info (device sda6): auto enabling async discard Aug 13 00:34:57.680451 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Aug 13 00:34:57.701239 ignition[943]: INFO : Ignition 2.20.0 Aug 13 00:34:57.701239 ignition[943]: INFO : Stage: files Aug 13 00:34:57.702478 ignition[943]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 13 00:34:57.702478 ignition[943]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Aug 13 00:34:57.702478 ignition[943]: DEBUG : files: compiled without relabeling support, skipping Aug 13 00:34:57.704693 ignition[943]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Aug 13 00:34:57.704693 ignition[943]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Aug 13 00:34:57.706213 ignition[943]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Aug 13 00:34:57.706213 ignition[943]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Aug 13 00:34:57.706213 ignition[943]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Aug 13 00:34:57.705889 unknown[943]: wrote ssh authorized keys file for user: core Aug 13 00:34:57.709192 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Aug 13 00:34:57.709192 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Aug 13 00:34:57.969798 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Aug 13 00:34:58.466177 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Aug 13 00:34:58.466177 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Aug 13 00:34:58.468560 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: 
op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Aug 13 00:34:58.654601 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Aug 13 00:34:58.732801 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Aug 13 00:34:58.732801 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Aug 13 00:34:58.735200 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Aug 13 00:34:58.735200 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Aug 13 00:34:58.735200 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Aug 13 00:34:58.735200 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 13 00:34:58.735200 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 13 00:34:58.735200 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 13 00:34:58.735200 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 13 00:34:58.735200 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Aug 13 00:34:58.735200 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Aug 13 00:34:58.735200 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Aug 13 00:34:58.735200 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Aug 13 00:34:58.735200 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Aug 13 00:34:58.735200 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Aug 13 00:34:59.296984 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Aug 13 00:34:59.723699 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Aug 13 00:34:59.723699 ignition[943]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Aug 13 00:34:59.747676 ignition[943]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 13 00:34:59.747676 ignition[943]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 13 00:34:59.747676 ignition[943]: INFO : files: op(c): [finished] processing unit 
"prepare-helm.service" Aug 13 00:34:59.747676 ignition[943]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Aug 13 00:34:59.747676 ignition[943]: INFO : files: op(e): op(f): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Aug 13 00:34:59.747676 ignition[943]: INFO : files: op(e): op(f): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Aug 13 00:34:59.747676 ignition[943]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Aug 13 00:34:59.747676 ignition[943]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Aug 13 00:34:59.747676 ignition[943]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Aug 13 00:34:59.747676 ignition[943]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Aug 13 00:34:59.747676 ignition[943]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Aug 13 00:34:59.747676 ignition[943]: INFO : files: files passed Aug 13 00:34:59.747676 ignition[943]: INFO : Ignition finished successfully Aug 13 00:34:59.747258 systemd[1]: Finished ignition-files.service - Ignition (files). Aug 13 00:34:59.754873 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Aug 13 00:34:59.756345 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Aug 13 00:34:59.761200 systemd[1]: ignition-quench.service: Deactivated successfully. Aug 13 00:34:59.761323 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Aug 13 00:34:59.772924 initrd-setup-root-after-ignition[971]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 13 00:34:59.772924 initrd-setup-root-after-ignition[971]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Aug 13 00:34:59.774785 initrd-setup-root-after-ignition[975]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 13 00:34:59.777675 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 13 00:34:59.778601 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Aug 13 00:34:59.783818 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Aug 13 00:34:59.806484 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Aug 13 00:34:59.806581 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Aug 13 00:34:59.807707 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Aug 13 00:34:59.808753 systemd[1]: Reached target initrd.target - Initrd Default Target. Aug 13 00:34:59.809791 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Aug 13 00:34:59.810871 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Aug 13 00:34:59.824485 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 13 00:34:59.828824 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Aug 13 00:34:59.837713 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. 
Aug 13 00:34:59.838255 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 13 00:34:59.839343 systemd[1]: Stopped target timers.target - Timer Units. Aug 13 00:34:59.840310 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Aug 13 00:34:59.840419 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 13 00:34:59.841704 systemd[1]: Stopped target initrd.target - Initrd Default Target. Aug 13 00:34:59.842419 systemd[1]: Stopped target basic.target - Basic System. Aug 13 00:34:59.843393 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Aug 13 00:34:59.844249 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Aug 13 00:34:59.845077 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Aug 13 00:34:59.846030 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Aug 13 00:34:59.846974 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Aug 13 00:34:59.847953 systemd[1]: Stopped target sysinit.target - System Initialization. Aug 13 00:34:59.848877 systemd[1]: Stopped target local-fs.target - Local File Systems. Aug 13 00:34:59.849837 systemd[1]: Stopped target swap.target - Swaps. Aug 13 00:34:59.850853 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Aug 13 00:34:59.850950 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Aug 13 00:34:59.852165 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Aug 13 00:34:59.852776 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 13 00:34:59.853588 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Aug 13 00:34:59.853667 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 13 00:34:59.854559 systemd[1]: dracut-initqueue.service: Deactivated successfully. Aug 13 00:34:59.854637 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Aug 13 00:34:59.855908 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Aug 13 00:34:59.855996 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 13 00:34:59.856636 systemd[1]: ignition-files.service: Deactivated successfully. Aug 13 00:34:59.856719 systemd[1]: Stopped ignition-files.service - Ignition (files). Aug 13 00:34:59.868068 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Aug 13 00:34:59.870859 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Aug 13 00:34:59.871390 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Aug 13 00:34:59.871533 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Aug 13 00:34:59.873687 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Aug 13 00:34:59.873857 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Aug 13 00:34:59.881035 ignition[995]: INFO : Ignition 2.20.0 Aug 13 00:34:59.881035 ignition[995]: INFO : Stage: umount Aug 13 00:34:59.884386 ignition[995]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 13 00:34:59.884386 ignition[995]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Aug 13 00:34:59.884386 ignition[995]: INFO : umount: umount passed Aug 13 00:34:59.884386 ignition[995]: INFO : Ignition finished successfully Aug 13 00:34:59.885954 systemd[1]: initrd-cleanup.service: Deactivated successfully. 
Aug 13 00:34:59.886039 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Aug 13 00:34:59.887859 systemd[1]: ignition-mount.service: Deactivated successfully. Aug 13 00:34:59.887942 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Aug 13 00:34:59.893616 systemd[1]: ignition-disks.service: Deactivated successfully. Aug 13 00:34:59.893672 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Aug 13 00:34:59.895381 systemd[1]: ignition-kargs.service: Deactivated successfully. Aug 13 00:34:59.895425 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Aug 13 00:34:59.895958 systemd[1]: ignition-fetch.service: Deactivated successfully. Aug 13 00:34:59.895996 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Aug 13 00:34:59.896518 systemd[1]: Stopped target network.target - Network. Aug 13 00:34:59.897003 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Aug 13 00:34:59.897053 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Aug 13 00:34:59.897611 systemd[1]: Stopped target paths.target - Path Units. Aug 13 00:34:59.902294 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Aug 13 00:34:59.909242 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 13 00:34:59.920947 systemd[1]: Stopped target slices.target - Slice Units. Aug 13 00:34:59.921925 systemd[1]: Stopped target sockets.target - Socket Units. Aug 13 00:34:59.922974 systemd[1]: iscsid.socket: Deactivated successfully. Aug 13 00:34:59.923016 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Aug 13 00:34:59.923930 systemd[1]: iscsiuio.socket: Deactivated successfully. Aug 13 00:34:59.923976 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Aug 13 00:34:59.925050 systemd[1]: ignition-setup.service: Deactivated successfully. Aug 13 00:34:59.925096 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Aug 13 00:34:59.926261 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Aug 13 00:34:59.926299 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Aug 13 00:34:59.927162 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Aug 13 00:34:59.928087 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Aug 13 00:34:59.930553 systemd[1]: sysroot-boot.mount: Deactivated successfully. Aug 13 00:34:59.931116 systemd[1]: sysroot-boot.service: Deactivated successfully. Aug 13 00:34:59.931211 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Aug 13 00:34:59.932106 systemd[1]: initrd-setup-root.service: Deactivated successfully. Aug 13 00:34:59.932190 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Aug 13 00:34:59.935348 systemd[1]: systemd-resolved.service: Deactivated successfully. Aug 13 00:34:59.935458 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Aug 13 00:34:59.939024 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Aug 13 00:34:59.939246 systemd[1]: systemd-networkd.service: Deactivated successfully. Aug 13 00:34:59.939359 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Aug 13 00:34:59.941288 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Aug 13 00:34:59.941840 systemd[1]: systemd-networkd.socket: Deactivated successfully. 
Aug 13 00:34:59.941900 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Aug 13 00:34:59.950801 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Aug 13 00:34:59.952068 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Aug 13 00:34:59.952123 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Aug 13 00:34:59.952687 systemd[1]: systemd-sysctl.service: Deactivated successfully. Aug 13 00:34:59.952747 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Aug 13 00:34:59.953820 systemd[1]: systemd-modules-load.service: Deactivated successfully. Aug 13 00:34:59.953870 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Aug 13 00:34:59.954877 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Aug 13 00:34:59.954924 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 13 00:34:59.956363 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 13 00:34:59.959045 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Aug 13 00:34:59.959105 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Aug 13 00:34:59.967673 systemd[1]: network-cleanup.service: Deactivated successfully. Aug 13 00:34:59.968357 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Aug 13 00:34:59.973375 systemd[1]: systemd-udevd.service: Deactivated successfully. Aug 13 00:34:59.973514 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 13 00:34:59.974616 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Aug 13 00:34:59.974661 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Aug 13 00:34:59.975593 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Aug 13 00:34:59.975628 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Aug 13 00:34:59.976620 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Aug 13 00:34:59.976659 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Aug 13 00:34:59.977928 systemd[1]: dracut-cmdline.service: Deactivated successfully. Aug 13 00:34:59.977968 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Aug 13 00:34:59.978867 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Aug 13 00:34:59.978905 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 13 00:34:59.987834 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Aug 13 00:34:59.988498 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Aug 13 00:34:59.988541 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 13 00:34:59.989068 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Aug 13 00:34:59.989107 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Aug 13 00:34:59.989568 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Aug 13 00:34:59.989604 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Aug 13 00:34:59.990081 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 13 00:34:59.990118 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Aug 13 00:34:59.995443 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Aug 13 00:34:59.995500 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Aug 13 00:34:59.995829 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Aug 13 00:34:59.995921 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Aug 13 00:34:59.997270 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Aug 13 00:35:00.007828 systemd[1]: Starting initrd-switch-root.service - Switch Root... Aug 13 00:35:00.013472 systemd[1]: Switching root. Aug 13 00:35:00.042138 systemd-journald[177]: Journal stopped Aug 13 00:35:01.107413 systemd-journald[177]: Received SIGTERM from PID 1 (systemd). Aug 13 00:35:01.107440 kernel: SELinux: policy capability network_peer_controls=1 Aug 13 00:35:01.107453 kernel: SELinux: policy capability open_perms=1 Aug 13 00:35:01.107462 kernel: SELinux: policy capability extended_socket_class=1 Aug 13 00:35:01.107471 kernel: SELinux: policy capability always_check_network=0 Aug 13 00:35:01.107483 kernel: SELinux: policy capability cgroup_seclabel=1 Aug 13 00:35:01.107493 kernel: SELinux: policy capability nnp_nosuid_transition=1 Aug 13 00:35:01.107502 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Aug 13 00:35:01.107512 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Aug 13 00:35:01.107521 kernel: audit: type=1403 audit(1755045300.167:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Aug 13 00:35:01.107531 systemd[1]: Successfully loaded SELinux policy in 49.028ms. Aug 13 00:35:01.107545 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 13.147ms. Aug 13 00:35:01.107556 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Aug 13 00:35:01.107567 systemd[1]: Detected virtualization kvm. Aug 13 00:35:01.107577 systemd[1]: Detected architecture x86-64. Aug 13 00:35:01.107587 systemd[1]: Detected first boot. Aug 13 00:35:01.107600 systemd[1]: Initializing machine ID from random generator. Aug 13 00:35:01.107610 zram_generator::config[1039]: No configuration found. Aug 13 00:35:01.107621 kernel: Guest personality initialized and is inactive Aug 13 00:35:01.107632 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Aug 13 00:35:01.107641 kernel: Initialized host personality Aug 13 00:35:01.107651 kernel: NET: Registered PF_VSOCK protocol family Aug 13 00:35:01.107661 systemd[1]: Populated /etc with preset unit settings. Aug 13 00:35:01.107673 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Aug 13 00:35:01.107684 systemd[1]: initrd-switch-root.service: Deactivated successfully. Aug 13 00:35:01.107694 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Aug 13 00:35:01.107704 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Aug 13 00:35:01.107714 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Aug 13 00:35:01.109054 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Aug 13 00:35:01.109073 systemd[1]: Created slice system-getty.slice - Slice /system/getty. 
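After the switch to the real root, PID 1 loads the SELinux policy (the capability lines above) and relabels /dev/, /dev/shm/ and /run/. Whether the resulting mode is enforcing or permissive is not stated in these entries; one way to check on a running system is to read the selinuxfs node directly. A small sketch, assuming selinuxfs is mounted at its usual /sys/fs/selinux location:

    def selinux_mode(path: str = "/sys/fs/selinux/enforce") -> str:
        # The kernel exposes the current mode as "1" (enforcing) or
        # "0" (permissive); the file is absent if SELinux is disabled.
        try:
            with open(path) as f:
                return "enforcing" if f.read().strip() == "1" else "permissive"
        except FileNotFoundError:
            return "disabled"

    if __name__ == "__main__":
        print(selinux_mode())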
Aug 13 00:35:01.109088 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Aug 13 00:35:01.109099 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Aug 13 00:35:01.109109 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Aug 13 00:35:01.109119 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Aug 13 00:35:01.109131 systemd[1]: Created slice user.slice - User and Session Slice. Aug 13 00:35:01.109141 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 13 00:35:01.109151 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 13 00:35:01.109161 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Aug 13 00:35:01.109171 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Aug 13 00:35:01.109184 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Aug 13 00:35:01.109198 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Aug 13 00:35:01.109208 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Aug 13 00:35:01.109218 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 13 00:35:01.109229 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Aug 13 00:35:01.109239 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Aug 13 00:35:01.109280 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Aug 13 00:35:01.109293 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Aug 13 00:35:01.109303 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 13 00:35:01.109313 systemd[1]: Reached target remote-fs.target - Remote File Systems. Aug 13 00:35:01.109324 systemd[1]: Reached target slices.target - Slice Units. Aug 13 00:35:01.109334 systemd[1]: Reached target swap.target - Swaps. Aug 13 00:35:01.109344 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Aug 13 00:35:01.109354 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Aug 13 00:35:01.109365 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Aug 13 00:35:01.109375 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Aug 13 00:35:01.109390 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Aug 13 00:35:01.109400 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Aug 13 00:35:01.109410 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Aug 13 00:35:01.109421 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Aug 13 00:35:01.109434 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Aug 13 00:35:01.109444 systemd[1]: Mounting media.mount - External Media Directory... Aug 13 00:35:01.109454 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:35:01.109465 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Aug 13 00:35:01.109475 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... 
Aug 13 00:35:01.109485 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Aug 13 00:35:01.109496 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Aug 13 00:35:01.109506 systemd[1]: Reached target machines.target - Containers. Aug 13 00:35:01.109518 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Aug 13 00:35:01.109529 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 00:35:01.109539 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Aug 13 00:35:01.109549 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Aug 13 00:35:01.109560 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 13 00:35:01.109570 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Aug 13 00:35:01.109580 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 13 00:35:01.109590 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Aug 13 00:35:01.109600 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 13 00:35:01.109614 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Aug 13 00:35:01.109625 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Aug 13 00:35:01.109635 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Aug 13 00:35:01.109645 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Aug 13 00:35:01.109655 systemd[1]: Stopped systemd-fsck-usr.service. Aug 13 00:35:01.109666 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Aug 13 00:35:01.109676 kernel: fuse: init (API version 7.39) Aug 13 00:35:01.109689 systemd[1]: Starting systemd-journald.service - Journal Service... Aug 13 00:35:01.109699 kernel: loop: module loaded Aug 13 00:35:01.109709 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Aug 13 00:35:01.109719 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Aug 13 00:35:01.109744 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Aug 13 00:35:01.109754 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Aug 13 00:35:01.109764 kernel: ACPI: bus type drm_connector registered Aug 13 00:35:01.109793 systemd-journald[1130]: Collecting audit messages is disabled. Aug 13 00:35:01.109819 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Aug 13 00:35:01.109830 systemd[1]: verity-setup.service: Deactivated successfully. Aug 13 00:35:01.109841 systemd[1]: Stopped verity-setup.service. Aug 13 00:35:01.109851 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Aug 13 00:35:01.109895 systemd-journald[1130]: Journal started Aug 13 00:35:01.109918 systemd-journald[1130]: Runtime Journal (/run/log/journal/ac245d79666e4382ab0c3432802fa561) is 8M, max 78.3M, 70.3M free. Aug 13 00:35:00.784182 systemd[1]: Queued start job for default target multi-user.target. Aug 13 00:35:00.796155 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Aug 13 00:35:00.796570 systemd[1]: systemd-journald.service: Deactivated successfully. Aug 13 00:35:01.118757 systemd[1]: Started systemd-journald.service - Journal Service. Aug 13 00:35:01.120597 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Aug 13 00:35:01.121240 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Aug 13 00:35:01.121964 systemd[1]: Mounted media.mount - External Media Directory. Aug 13 00:35:01.124089 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Aug 13 00:35:01.124975 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Aug 13 00:35:01.125603 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Aug 13 00:35:01.126485 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Aug 13 00:35:01.127653 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Aug 13 00:35:01.128676 systemd[1]: modprobe@configfs.service: Deactivated successfully. Aug 13 00:35:01.128940 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Aug 13 00:35:01.130177 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 00:35:01.130427 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 13 00:35:01.131410 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 13 00:35:01.131658 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Aug 13 00:35:01.132593 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 00:35:01.132903 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 13 00:35:01.133871 systemd[1]: modprobe@fuse.service: Deactivated successfully. Aug 13 00:35:01.134074 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Aug 13 00:35:01.135078 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 00:35:01.135538 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 13 00:35:01.136630 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Aug 13 00:35:01.138078 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Aug 13 00:35:01.139025 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Aug 13 00:35:01.143675 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Aug 13 00:35:01.157455 systemd[1]: Reached target network-pre.target - Preparation for Network. Aug 13 00:35:01.164356 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Aug 13 00:35:01.171785 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Aug 13 00:35:01.172805 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Aug 13 00:35:01.172905 systemd[1]: Reached target local-fs.target - Local File Systems. Aug 13 00:35:01.175991 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. 
Aug 13 00:35:01.182259 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Aug 13 00:35:01.186818 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Aug 13 00:35:01.187474 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 00:35:01.193999 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Aug 13 00:35:01.196853 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Aug 13 00:35:01.197533 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 00:35:01.200902 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Aug 13 00:35:01.201832 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 13 00:35:01.209844 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 13 00:35:01.213716 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Aug 13 00:35:01.222851 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Aug 13 00:35:01.227477 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Aug 13 00:35:01.232527 systemd-journald[1130]: Time spent on flushing to /var/log/journal/ac245d79666e4382ab0c3432802fa561 is 33.288ms for 991 entries. Aug 13 00:35:01.232527 systemd-journald[1130]: System Journal (/var/log/journal/ac245d79666e4382ab0c3432802fa561) is 8M, max 195.6M, 187.6M free. Aug 13 00:35:01.273048 systemd-journald[1130]: Received client request to flush runtime journal. Aug 13 00:35:01.241163 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Aug 13 00:35:01.242287 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Aug 13 00:35:01.244421 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Aug 13 00:35:01.253930 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Aug 13 00:35:01.263911 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Aug 13 00:35:01.276150 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Aug 13 00:35:01.285746 kernel: loop0: detected capacity change from 0 to 224512 Aug 13 00:35:01.303837 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 13 00:35:01.305101 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Aug 13 00:35:01.308537 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Aug 13 00:35:01.317113 systemd-tmpfiles[1166]: ACLs are not supported, ignoring. Aug 13 00:35:01.317128 systemd-tmpfiles[1166]: ACLs are not supported, ignoring. Aug 13 00:35:01.318160 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Aug 13 00:35:01.326022 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Aug 13 00:35:01.335869 systemd[1]: Starting systemd-sysusers.service - Create System Users... 
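With systemd-journal-flush finished, the runtime journal in /run has been flushed into the persistent journal under /var/log/journal (the journald lines above report the size limits of both). Entries like the ones in this log can then be pulled back per unit and per boot; a small sketch, assuming journalctl is available on PATH:

    import subprocess

    def unit_log(unit: str) -> str:
        # -b restricts output to the current boot, -u selects one unit, and
        # short-precise keeps the microsecond timestamps seen in this log.
        result = subprocess.run(
            ["journalctl", "-b", "-u", unit, "--no-pager", "-o", "short-precise"],
            check=True, capture_output=True, text=True,
        )
        return result.stdout

    if __name__ == "__main__":
        print(unit_log("ignition-files.service"))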
Aug 13 00:35:01.342034 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Aug 13 00:35:01.356507 udevadm[1181]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Aug 13 00:35:01.376771 kernel: loop1: detected capacity change from 0 to 138176 Aug 13 00:35:01.387903 systemd[1]: Finished systemd-sysusers.service - Create System Users. Aug 13 00:35:01.403841 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Aug 13 00:35:01.422455 systemd-tmpfiles[1188]: ACLs are not supported, ignoring. Aug 13 00:35:01.422826 systemd-tmpfiles[1188]: ACLs are not supported, ignoring. Aug 13 00:35:01.435582 kernel: loop2: detected capacity change from 0 to 8 Aug 13 00:35:01.434823 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 13 00:35:01.454747 kernel: loop3: detected capacity change from 0 to 147912 Aug 13 00:35:01.496753 kernel: loop4: detected capacity change from 0 to 224512 Aug 13 00:35:01.518765 kernel: loop5: detected capacity change from 0 to 138176 Aug 13 00:35:01.538788 kernel: loop6: detected capacity change from 0 to 8 Aug 13 00:35:01.541787 kernel: loop7: detected capacity change from 0 to 147912 Aug 13 00:35:01.554679 (sd-merge)[1195]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-akamai'. Aug 13 00:35:01.555589 (sd-merge)[1195]: Merged extensions into '/usr'. Aug 13 00:35:01.564138 systemd[1]: Reload requested from client PID 1165 ('systemd-sysext') (unit systemd-sysext.service)... Aug 13 00:35:01.564156 systemd[1]: Reloading... Aug 13 00:35:01.673798 zram_generator::config[1232]: No configuration found. Aug 13 00:35:01.770802 ldconfig[1160]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Aug 13 00:35:01.818560 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 00:35:01.877435 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Aug 13 00:35:01.878106 systemd[1]: Reloading finished in 313 ms. Aug 13 00:35:01.898455 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Aug 13 00:35:01.899672 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Aug 13 00:35:01.900776 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Aug 13 00:35:01.913251 systemd[1]: Starting ensure-sysext.service... Aug 13 00:35:01.916866 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Aug 13 00:35:01.921840 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 13 00:35:01.944564 systemd[1]: Reload requested from client PID 1267 ('systemctl') (unit ensure-sysext.service)... Aug 13 00:35:01.944583 systemd[1]: Reloading... Aug 13 00:35:01.947905 systemd-tmpfiles[1268]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Aug 13 00:35:01.948151 systemd-tmpfiles[1268]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Aug 13 00:35:01.948992 systemd-tmpfiles[1268]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Aug 13 00:35:01.949217 systemd-tmpfiles[1268]: ACLs are not supported, ignoring. 
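The sd-merge lines above show systemd-sysext overlaying four extension images (containerd-flatcar, docker-flatcar, kubernetes, oem-akamai) onto /usr; the nearby loop0..loop7 "detected capacity change" messages are consistent with those images being attached as loop devices. An image is only merged if it carries a matching extension-release file. A rough sketch of what such a file might look like for the kubernetes.raw image linked from /etc/extensions earlier; the ID and SYSEXT_LEVEL values are illustrative, not read from the image:

    # /usr/lib/extension-release.d/extension-release.kubernetes  (inside the image)
    ID=flatcar
    SYSEXT_LEVEL=1.0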
Aug 13 00:35:01.949284 systemd-tmpfiles[1268]: ACLs are not supported, ignoring. Aug 13 00:35:01.958692 systemd-tmpfiles[1268]: Detected autofs mount point /boot during canonicalization of boot. Aug 13 00:35:01.959054 systemd-tmpfiles[1268]: Skipping /boot Aug 13 00:35:01.964300 systemd-udevd[1269]: Using default interface naming scheme 'v255'. Aug 13 00:35:01.982694 systemd-tmpfiles[1268]: Detected autofs mount point /boot during canonicalization of boot. Aug 13 00:35:01.985849 systemd-tmpfiles[1268]: Skipping /boot Aug 13 00:35:02.059755 zram_generator::config[1319]: No configuration found. Aug 13 00:35:02.189652 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 00:35:02.203754 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Aug 13 00:35:02.220748 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 45 scanned by (udev-worker) (1302) Aug 13 00:35:02.233791 kernel: ACPI: button: Power Button [PWRF] Aug 13 00:35:02.255358 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Aug 13 00:35:02.256072 systemd[1]: Reloading finished in 311 ms. Aug 13 00:35:02.266643 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 13 00:35:02.268492 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 13 00:35:02.288764 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Aug 13 00:35:02.304708 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Aug 13 00:35:02.304983 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Aug 13 00:35:02.305166 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Aug 13 00:35:02.342984 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Aug 13 00:35:02.359765 kernel: EDAC MC: Ver: 3.0.0 Aug 13 00:35:02.358465 systemd[1]: Finished ensure-sysext.service. Aug 13 00:35:02.359432 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:35:02.367147 systemd[1]: Starting audit-rules.service - Load Audit Rules... Aug 13 00:35:02.370690 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Aug 13 00:35:02.373902 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 00:35:02.385484 kernel: mousedev: PS/2 mouse device common for all mice Aug 13 00:35:02.386024 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 13 00:35:02.388659 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Aug 13 00:35:02.395029 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 13 00:35:02.401972 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 13 00:35:02.405178 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 00:35:02.408953 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... 
Aug 13 00:35:02.410156 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Aug 13 00:35:02.412157 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Aug 13 00:35:02.416315 systemd[1]: Starting systemd-networkd.service - Network Configuration... Aug 13 00:35:02.420977 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Aug 13 00:35:02.431018 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Aug 13 00:35:02.434496 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Aug 13 00:35:02.438260 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 00:35:02.439363 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:35:02.443049 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 00:35:02.443838 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 13 00:35:02.444987 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 13 00:35:02.445345 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Aug 13 00:35:02.447132 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 00:35:02.447660 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 13 00:35:02.448704 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 00:35:02.449589 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 13 00:35:02.457372 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Aug 13 00:35:02.471075 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Aug 13 00:35:02.479079 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Aug 13 00:35:02.490891 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Aug 13 00:35:02.491475 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 00:35:02.491546 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 13 00:35:02.494137 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Aug 13 00:35:02.504543 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Aug 13 00:35:02.511913 systemd[1]: Starting systemd-update-done.service - Update is Completed... Aug 13 00:35:02.514541 augenrules[1422]: No rules Aug 13 00:35:02.515052 systemd[1]: audit-rules.service: Deactivated successfully. Aug 13 00:35:02.515299 systemd[1]: Finished audit-rules.service - Load Audit Rules. Aug 13 00:35:02.518883 lvm[1416]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Aug 13 00:35:02.547545 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Aug 13 00:35:02.548318 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Aug 13 00:35:02.557919 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... 
Aug 13 00:35:02.559104 systemd[1]: Finished systemd-update-done.service - Update is Completed. Aug 13 00:35:02.563975 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Aug 13 00:35:02.573751 lvm[1430]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Aug 13 00:35:02.626197 systemd[1]: Started systemd-userdbd.service - User Database Manager. Aug 13 00:35:02.628840 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 00:35:02.630079 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Aug 13 00:35:02.635520 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Aug 13 00:35:02.688633 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Aug 13 00:35:02.689224 systemd[1]: Reached target time-set.target - System Time Set. Aug 13 00:35:02.701397 systemd-networkd[1391]: lo: Link UP Aug 13 00:35:02.701405 systemd-networkd[1391]: lo: Gained carrier Aug 13 00:35:02.703488 systemd-networkd[1391]: Enumeration completed Aug 13 00:35:02.704116 systemd-networkd[1391]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 00:35:02.704179 systemd-networkd[1391]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 13 00:35:02.704211 systemd[1]: Started systemd-networkd.service - Network Configuration. Aug 13 00:35:02.704821 systemd-networkd[1391]: eth0: Link UP Aug 13 00:35:02.704864 systemd-networkd[1391]: eth0: Gained carrier Aug 13 00:35:02.704915 systemd-networkd[1391]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 00:35:02.709946 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Aug 13 00:35:02.712149 systemd-resolved[1392]: Positive Trust Anchors: Aug 13 00:35:02.712354 systemd-resolved[1392]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 13 00:35:02.712444 systemd-resolved[1392]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Aug 13 00:35:02.713288 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Aug 13 00:35:02.719006 systemd-resolved[1392]: Defaulting to hostname 'linux'. Aug 13 00:35:02.720812 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Aug 13 00:35:02.721451 systemd[1]: Reached target network.target - Network. Aug 13 00:35:02.721951 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Aug 13 00:35:02.722482 systemd[1]: Reached target sysinit.target - System Initialization. Aug 13 00:35:02.723144 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. 
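eth0 is matched by the catch-all unit shipped in the image, /usr/lib/systemd/network/zz-default.network, which is why DHCPv4 configuration (the same 172.234.21.106/24 lease already seen in the initrd) is applied without any machine-specific network unit. The file itself is not reproduced in the log; a minimal catch-all unit of this shape typically looks like the sketch below, so treat it as an approximation rather than the exact Flatcar content:

    [Match]
    Name=*

    [Network]
    DHCP=yes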
Aug 13 00:35:02.724045 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Aug 13 00:35:02.724849 systemd[1]: Started logrotate.timer - Daily rotation of log files. Aug 13 00:35:02.725563 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Aug 13 00:35:02.726225 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Aug 13 00:35:02.726859 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Aug 13 00:35:02.726933 systemd[1]: Reached target paths.target - Path Units. Aug 13 00:35:02.727505 systemd[1]: Reached target timers.target - Timer Units. Aug 13 00:35:02.729145 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Aug 13 00:35:02.731442 systemd[1]: Starting docker.socket - Docker Socket for the API... Aug 13 00:35:02.734140 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Aug 13 00:35:02.734869 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Aug 13 00:35:02.735426 systemd[1]: Reached target ssh-access.target - SSH Access Available. Aug 13 00:35:02.738193 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Aug 13 00:35:02.739082 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Aug 13 00:35:02.740423 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Aug 13 00:35:02.741158 systemd[1]: Listening on docker.socket - Docker Socket for the API. Aug 13 00:35:02.742338 systemd[1]: Reached target sockets.target - Socket Units. Aug 13 00:35:02.742898 systemd[1]: Reached target basic.target - Basic System. Aug 13 00:35:02.743452 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Aug 13 00:35:02.743490 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Aug 13 00:35:02.744601 systemd[1]: Starting containerd.service - containerd container runtime... Aug 13 00:35:02.747864 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Aug 13 00:35:02.752193 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Aug 13 00:35:02.755862 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Aug 13 00:35:02.767892 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Aug 13 00:35:02.770874 jq[1454]: false Aug 13 00:35:02.770444 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Aug 13 00:35:02.772472 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Aug 13 00:35:02.780798 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Aug 13 00:35:02.782938 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Aug 13 00:35:02.787931 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Aug 13 00:35:02.800888 systemd[1]: Starting systemd-logind.service - User Login Management... 
Aug 13 00:35:02.802197 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Aug 13 00:35:02.804572 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Aug 13 00:35:02.808864 systemd[1]: Starting update-engine.service - Update Engine... Aug 13 00:35:02.817815 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Aug 13 00:35:02.829082 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Aug 13 00:35:02.829320 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Aug 13 00:35:02.846416 jq[1464]: true Aug 13 00:35:02.864252 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Aug 13 00:35:02.865840 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Aug 13 00:35:02.876377 dbus-daemon[1453]: [system] SELinux support is enabled Aug 13 00:35:02.882265 systemd[1]: Started dbus.service - D-Bus System Message Bus. Aug 13 00:35:02.884057 (ntainerd)[1479]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Aug 13 00:35:02.888063 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Aug 13 00:35:02.888101 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Aug 13 00:35:02.888894 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Aug 13 00:35:02.888922 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Aug 13 00:35:02.890884 coreos-metadata[1452]: Aug 13 00:35:02.890 INFO Putting http://169.254.169.254/v1/token: Attempt #1 Aug 13 00:35:02.905230 jq[1477]: true Aug 13 00:35:02.911079 tar[1469]: linux-amd64/LICENSE Aug 13 00:35:02.911284 tar[1469]: linux-amd64/helm Aug 13 00:35:02.929005 update_engine[1463]: I20250813 00:35:02.928930 1463 main.cc:92] Flatcar Update Engine starting Aug 13 00:35:02.936114 systemd[1]: Started update-engine.service - Update Engine. 
Aug 13 00:35:02.939360 update_engine[1463]: I20250813 00:35:02.938226 1463 update_check_scheduler.cc:74] Next update check in 3m31s Aug 13 00:35:02.940414 extend-filesystems[1455]: Found loop4 Aug 13 00:35:02.951594 extend-filesystems[1455]: Found loop5 Aug 13 00:35:02.951594 extend-filesystems[1455]: Found loop6 Aug 13 00:35:02.951594 extend-filesystems[1455]: Found loop7 Aug 13 00:35:02.951594 extend-filesystems[1455]: Found sda Aug 13 00:35:02.951594 extend-filesystems[1455]: Found sda1 Aug 13 00:35:02.951594 extend-filesystems[1455]: Found sda2 Aug 13 00:35:02.951594 extend-filesystems[1455]: Found sda3 Aug 13 00:35:02.951594 extend-filesystems[1455]: Found usr Aug 13 00:35:02.951594 extend-filesystems[1455]: Found sda4 Aug 13 00:35:02.951594 extend-filesystems[1455]: Found sda6 Aug 13 00:35:02.951594 extend-filesystems[1455]: Found sda7 Aug 13 00:35:02.951594 extend-filesystems[1455]: Found sda9 Aug 13 00:35:02.951594 extend-filesystems[1455]: Checking size of /dev/sda9 Aug 13 00:35:03.025330 kernel: EXT4-fs (sda9): resizing filesystem from 553472 to 555003 blocks Aug 13 00:35:03.025360 kernel: EXT4-fs (sda9): resized filesystem to 555003 Aug 13 00:35:02.944858 systemd[1]: Started locksmithd.service - Cluster reboot manager. Aug 13 00:35:03.025513 extend-filesystems[1455]: Resized partition /dev/sda9 Aug 13 00:35:02.945935 systemd[1]: motdgen.service: Deactivated successfully. Aug 13 00:35:03.026094 extend-filesystems[1508]: resize2fs 1.47.1 (20-May-2024) Aug 13 00:35:03.026094 extend-filesystems[1508]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Aug 13 00:35:03.026094 extend-filesystems[1508]: old_desc_blocks = 1, new_desc_blocks = 1 Aug 13 00:35:03.026094 extend-filesystems[1508]: The filesystem on /dev/sda9 is now 555003 (4k) blocks long. Aug 13 00:35:02.946174 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Aug 13 00:35:03.029543 extend-filesystems[1455]: Resized filesystem in /dev/sda9 Aug 13 00:35:02.975085 systemd-logind[1462]: Watching system buttons on /dev/input/event1 (Power Button) Aug 13 00:35:02.975105 systemd-logind[1462]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Aug 13 00:35:03.001472 systemd-logind[1462]: New seat seat0. Aug 13 00:35:03.020060 systemd[1]: Started systemd-logind.service - User Login Management. Aug 13 00:35:03.028199 systemd[1]: extend-filesystems.service: Deactivated successfully. Aug 13 00:35:03.028613 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Aug 13 00:35:03.040631 bash[1509]: Updated "/home/core/.ssh/authorized_keys" Aug 13 00:35:03.043323 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Aug 13 00:35:03.051021 systemd[1]: Starting sshkeys.service... Aug 13 00:35:03.083913 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Aug 13 00:35:03.096014 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Aug 13 00:35:03.134893 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 45 scanned by (udev-worker) (1310) Aug 13 00:35:03.175454 coreos-metadata[1515]: Aug 13 00:35:03.175 INFO Putting http://169.254.169.254/v1/token: Attempt #1 Aug 13 00:35:03.273062 systemd-networkd[1391]: eth0: DHCPv4 address 172.234.21.106/24, gateway 172.234.21.1 acquired from 23.194.118.56 Aug 13 00:35:03.274126 systemd-timesyncd[1394]: Network configuration changed, trying to establish connection. 
Aug 13 00:35:03.274526 dbus-daemon[1453]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1391 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Aug 13 00:35:03.287098 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Aug 13 00:35:03.307972 locksmithd[1489]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Aug 13 00:35:03.392755 containerd[1479]: time="2025-08-13T00:35:03.389428264Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Aug 13 00:35:03.450101 containerd[1479]: time="2025-08-13T00:35:03.450056635Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Aug 13 00:35:03.456743 containerd[1479]: time="2025-08-13T00:35:03.454856685Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.100-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Aug 13 00:35:03.456743 containerd[1479]: time="2025-08-13T00:35:03.454883585Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Aug 13 00:35:03.456743 containerd[1479]: time="2025-08-13T00:35:03.454897855Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Aug 13 00:35:03.456743 containerd[1479]: time="2025-08-13T00:35:03.455075845Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Aug 13 00:35:03.456743 containerd[1479]: time="2025-08-13T00:35:03.455091835Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Aug 13 00:35:03.456743 containerd[1479]: time="2025-08-13T00:35:03.455156465Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 00:35:03.456743 containerd[1479]: time="2025-08-13T00:35:03.455168735Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Aug 13 00:35:03.456743 containerd[1479]: time="2025-08-13T00:35:03.455404396Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 00:35:03.456743 containerd[1479]: time="2025-08-13T00:35:03.455417936Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Aug 13 00:35:03.456743 containerd[1479]: time="2025-08-13T00:35:03.455430346Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 00:35:03.456743 containerd[1479]: time="2025-08-13T00:35:03.455438896Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Aug 13 00:35:03.456948 containerd[1479]: time="2025-08-13T00:35:03.455526286Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." 
type=io.containerd.snapshotter.v1 Aug 13 00:35:03.456948 containerd[1479]: time="2025-08-13T00:35:03.455783947Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Aug 13 00:35:03.456948 containerd[1479]: time="2025-08-13T00:35:03.455933897Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 00:35:03.456948 containerd[1479]: time="2025-08-13T00:35:03.455944967Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Aug 13 00:35:03.456948 containerd[1479]: time="2025-08-13T00:35:03.456036927Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Aug 13 00:35:03.456948 containerd[1479]: time="2025-08-13T00:35:03.456088747Z" level=info msg="metadata content store policy set" policy=shared Aug 13 00:35:03.459480 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Aug 13 00:35:03.461083 containerd[1479]: time="2025-08-13T00:35:03.461054387Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Aug 13 00:35:03.461118 containerd[1479]: time="2025-08-13T00:35:03.461109737Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Aug 13 00:35:03.461139 containerd[1479]: time="2025-08-13T00:35:03.461127757Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Aug 13 00:35:03.461208 containerd[1479]: time="2025-08-13T00:35:03.461184238Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Aug 13 00:35:03.461242 containerd[1479]: time="2025-08-13T00:35:03.461209748Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Aug 13 00:35:03.461360 containerd[1479]: time="2025-08-13T00:35:03.461335438Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Aug 13 00:35:03.461830 containerd[1479]: time="2025-08-13T00:35:03.461806709Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Aug 13 00:35:03.461947 containerd[1479]: time="2025-08-13T00:35:03.461918819Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Aug 13 00:35:03.461980 containerd[1479]: time="2025-08-13T00:35:03.461948379Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Aug 13 00:35:03.461980 containerd[1479]: time="2025-08-13T00:35:03.461963129Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Aug 13 00:35:03.461980 containerd[1479]: time="2025-08-13T00:35:03.461975649Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Aug 13 00:35:03.462043 containerd[1479]: time="2025-08-13T00:35:03.461990739Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Aug 13 00:35:03.462043 containerd[1479]: time="2025-08-13T00:35:03.462002009Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." 
type=io.containerd.service.v1 Aug 13 00:35:03.462043 containerd[1479]: time="2025-08-13T00:35:03.462013359Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Aug 13 00:35:03.462043 containerd[1479]: time="2025-08-13T00:35:03.462026539Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Aug 13 00:35:03.462043 containerd[1479]: time="2025-08-13T00:35:03.462039369Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Aug 13 00:35:03.462118 containerd[1479]: time="2025-08-13T00:35:03.462052419Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Aug 13 00:35:03.462118 containerd[1479]: time="2025-08-13T00:35:03.462063479Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Aug 13 00:35:03.462118 containerd[1479]: time="2025-08-13T00:35:03.462082749Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Aug 13 00:35:03.462118 containerd[1479]: time="2025-08-13T00:35:03.462096719Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Aug 13 00:35:03.462118 containerd[1479]: time="2025-08-13T00:35:03.462108019Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Aug 13 00:35:03.462202 containerd[1479]: time="2025-08-13T00:35:03.462120419Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Aug 13 00:35:03.462202 containerd[1479]: time="2025-08-13T00:35:03.462132249Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Aug 13 00:35:03.462202 containerd[1479]: time="2025-08-13T00:35:03.462144279Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Aug 13 00:35:03.462202 containerd[1479]: time="2025-08-13T00:35:03.462154179Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Aug 13 00:35:03.462202 containerd[1479]: time="2025-08-13T00:35:03.462165499Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Aug 13 00:35:03.462202 containerd[1479]: time="2025-08-13T00:35:03.462176919Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Aug 13 00:35:03.462202 containerd[1479]: time="2025-08-13T00:35:03.462190020Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Aug 13 00:35:03.462202 containerd[1479]: time="2025-08-13T00:35:03.462199580Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Aug 13 00:35:03.462328 containerd[1479]: time="2025-08-13T00:35:03.462211240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Aug 13 00:35:03.462328 containerd[1479]: time="2025-08-13T00:35:03.462222340Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Aug 13 00:35:03.462328 containerd[1479]: time="2025-08-13T00:35:03.462234350Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." 
type=io.containerd.transfer.v1 Aug 13 00:35:03.462328 containerd[1479]: time="2025-08-13T00:35:03.462250960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Aug 13 00:35:03.462328 containerd[1479]: time="2025-08-13T00:35:03.462261510Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Aug 13 00:35:03.462328 containerd[1479]: time="2025-08-13T00:35:03.462270310Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Aug 13 00:35:03.462637 dbus-daemon[1453]: [system] Successfully activated service 'org.freedesktop.hostname1' Aug 13 00:35:03.463168 containerd[1479]: time="2025-08-13T00:35:03.463140851Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Aug 13 00:35:03.463204 containerd[1479]: time="2025-08-13T00:35:03.463166781Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Aug 13 00:35:03.463306 containerd[1479]: time="2025-08-13T00:35:03.463177131Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Aug 13 00:35:03.463333 containerd[1479]: time="2025-08-13T00:35:03.463305322Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Aug 13 00:35:03.463333 containerd[1479]: time="2025-08-13T00:35:03.463316872Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Aug 13 00:35:03.463333 containerd[1479]: time="2025-08-13T00:35:03.463327982Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Aug 13 00:35:03.463394 containerd[1479]: time="2025-08-13T00:35:03.463336812Z" level=info msg="NRI interface is disabled by configuration." Aug 13 00:35:03.463394 containerd[1479]: time="2025-08-13T00:35:03.463346732Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Aug 13 00:35:03.463621 containerd[1479]: time="2025-08-13T00:35:03.463571932Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Aug 13 00:35:03.463621 containerd[1479]: time="2025-08-13T00:35:03.463619232Z" level=info msg="Connect containerd service" Aug 13 00:35:03.463806 containerd[1479]: time="2025-08-13T00:35:03.463641302Z" level=info msg="using legacy CRI server" Aug 13 00:35:03.463806 containerd[1479]: time="2025-08-13T00:35:03.463648262Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Aug 13 00:35:03.463889 dbus-daemon[1453]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.8' (uid=0 pid=1528 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Aug 13 00:35:03.464643 containerd[1479]: time="2025-08-13T00:35:03.464615584Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Aug 13 00:35:03.467388 containerd[1479]: time="2025-08-13T00:35:03.465556386Z" level=error 
msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 13 00:35:03.467388 containerd[1479]: time="2025-08-13T00:35:03.466149067Z" level=info msg="Start subscribing containerd event" Aug 13 00:35:03.467388 containerd[1479]: time="2025-08-13T00:35:03.466189658Z" level=info msg="Start recovering state" Aug 13 00:35:03.467388 containerd[1479]: time="2025-08-13T00:35:03.466242398Z" level=info msg="Start event monitor" Aug 13 00:35:03.467388 containerd[1479]: time="2025-08-13T00:35:03.466257648Z" level=info msg="Start snapshots syncer" Aug 13 00:35:03.467388 containerd[1479]: time="2025-08-13T00:35:03.466265878Z" level=info msg="Start cni network conf syncer for default" Aug 13 00:35:03.467388 containerd[1479]: time="2025-08-13T00:35:03.466272398Z" level=info msg="Start streaming server" Aug 13 00:35:03.470028 containerd[1479]: time="2025-08-13T00:35:03.469807515Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Aug 13 00:35:03.470028 containerd[1479]: time="2025-08-13T00:35:03.469869595Z" level=info msg=serving... address=/run/containerd/containerd.sock Aug 13 00:35:03.470028 containerd[1479]: time="2025-08-13T00:35:03.469946255Z" level=info msg="containerd successfully booted in 0.084044s" Aug 13 00:35:03.471228 systemd[1]: Starting polkit.service - Authorization Manager... Aug 13 00:35:03.472363 systemd[1]: Started containerd.service - containerd container runtime. Aug 13 00:35:03.491833 sshd_keygen[1475]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Aug 13 00:35:03.493275 polkitd[1533]: Started polkitd version 121 Aug 13 00:35:03.500025 polkitd[1533]: Loading rules from directory /etc/polkit-1/rules.d Aug 13 00:35:03.500081 polkitd[1533]: Loading rules from directory /usr/share/polkit-1/rules.d Aug 13 00:35:03.502914 polkitd[1533]: Finished loading, compiling and executing 2 rules Aug 13 00:35:03.505060 dbus-daemon[1453]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Aug 13 00:35:03.505203 systemd[1]: Started polkit.service - Authorization Manager. Aug 13 00:35:03.506992 polkitd[1533]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Aug 13 00:35:03.518095 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Aug 13 00:35:03.527048 systemd[1]: Starting issuegen.service - Generate /run/issue... Aug 13 00:35:03.531418 systemd-resolved[1392]: System hostname changed to '172-234-21-106'. Aug 13 00:35:03.531525 systemd-hostnamed[1528]: Hostname set to <172-234-21-106> (transient) Aug 13 00:35:03.534253 systemd[1]: issuegen.service: Deactivated successfully. Aug 13 00:35:03.534593 systemd[1]: Finished issuegen.service - Generate /run/issue. Aug 13 00:35:03.542996 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Aug 13 00:35:03.554320 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Aug 13 00:35:03.563248 systemd[1]: Started getty@tty1.service - Getty on tty1. Aug 13 00:35:03.567930 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Aug 13 00:35:03.568648 systemd[1]: Reached target getty.target - Login Prompts. Aug 13 00:35:03.670185 tar[1469]: linux-amd64/README.md Aug 13 00:35:03.685250 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Aug 13 00:35:03.901143 coreos-metadata[1452]: Aug 13 00:35:03.901 INFO Putting http://169.254.169.254/v1/token: Attempt #2 Aug 13 00:35:04.001805 coreos-metadata[1452]: Aug 13 00:35:04.001 INFO Fetching http://169.254.169.254/v1/instance: Attempt #1 Aug 13 00:35:04.080965 systemd-networkd[1391]: eth0: Gained IPv6LL Aug 13 00:35:04.081702 systemd-timesyncd[1394]: Network configuration changed, trying to establish connection. Aug 13 00:35:04.085367 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Aug 13 00:35:04.086686 systemd[1]: Reached target network-online.target - Network is Online. Aug 13 00:35:04.097112 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:35:04.101007 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Aug 13 00:35:04.130840 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Aug 13 00:35:04.184663 coreos-metadata[1515]: Aug 13 00:35:04.184 INFO Putting http://169.254.169.254/v1/token: Attempt #2 Aug 13 00:35:04.222427 coreos-metadata[1452]: Aug 13 00:35:04.222 INFO Fetch successful Aug 13 00:35:04.222427 coreos-metadata[1452]: Aug 13 00:35:04.222 INFO Fetching http://169.254.169.254/v1/network: Attempt #1 Aug 13 00:35:04.285777 coreos-metadata[1515]: Aug 13 00:35:04.285 INFO Fetching http://169.254.169.254/v1/ssh-keys: Attempt #1 Aug 13 00:35:04.445658 coreos-metadata[1515]: Aug 13 00:35:04.445 INFO Fetch successful Aug 13 00:35:04.461718 update-ssh-keys[1578]: Updated "/home/core/.ssh/authorized_keys" Aug 13 00:35:04.462269 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Aug 13 00:35:04.466221 systemd[1]: Finished sshkeys.service. Aug 13 00:35:04.529809 coreos-metadata[1452]: Aug 13 00:35:04.529 INFO Fetch successful Aug 13 00:35:04.627568 systemd-timesyncd[1394]: Network configuration changed, trying to establish connection. Aug 13 00:35:04.629018 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Aug 13 00:35:04.630886 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Aug 13 00:35:05.003291 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:35:05.004677 systemd[1]: Reached target multi-user.target - Multi-User System. Aug 13 00:35:05.007378 systemd[1]: Startup finished in 804ms (kernel) + 7.470s (initrd) + 4.886s (userspace) = 13.162s. Aug 13 00:35:05.048209 (kubelet)[1606]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 00:35:05.521131 kubelet[1606]: E0813 00:35:05.521072 1606 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 00:35:05.524826 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 00:35:05.525044 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 00:35:05.525673 systemd[1]: kubelet.service: Consumed 870ms CPU time, 267.4M memory peak. Aug 13 00:35:06.321438 systemd-timesyncd[1394]: Network configuration changed, trying to establish connection. Aug 13 00:35:07.251593 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
Aug 13 00:35:07.258936 systemd[1]: Started sshd@0-172.234.21.106:22-139.178.89.65:40894.service - OpenSSH per-connection server daemon (139.178.89.65:40894). Aug 13 00:35:07.601899 sshd[1618]: Accepted publickey for core from 139.178.89.65 port 40894 ssh2: RSA SHA256:oQRmtR3l5PP4/Uo0wDjw46qIAkFbGTAbDRc2qvFX5SY Aug 13 00:35:07.604538 sshd-session[1618]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:35:07.616117 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Aug 13 00:35:07.627918 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Aug 13 00:35:07.635064 systemd-logind[1462]: New session 1 of user core. Aug 13 00:35:07.645775 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Aug 13 00:35:07.652206 systemd[1]: Starting user@500.service - User Manager for UID 500... Aug 13 00:35:07.656233 (systemd)[1622]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:35:07.658637 systemd-logind[1462]: New session c1 of user core. Aug 13 00:35:07.776749 systemd[1622]: Queued start job for default target default.target. Aug 13 00:35:07.786959 systemd[1622]: Created slice app.slice - User Application Slice. Aug 13 00:35:07.786988 systemd[1622]: Reached target paths.target - Paths. Aug 13 00:35:07.787030 systemd[1622]: Reached target timers.target - Timers. Aug 13 00:35:07.788416 systemd[1622]: Starting dbus.socket - D-Bus User Message Bus Socket... Aug 13 00:35:07.798794 systemd[1622]: Listening on dbus.socket - D-Bus User Message Bus Socket. Aug 13 00:35:07.798976 systemd[1622]: Reached target sockets.target - Sockets. Aug 13 00:35:07.799016 systemd[1622]: Reached target basic.target - Basic System. Aug 13 00:35:07.799062 systemd[1622]: Reached target default.target - Main User Target. Aug 13 00:35:07.799092 systemd[1622]: Startup finished in 134ms. Aug 13 00:35:07.799450 systemd[1]: Started user@500.service - User Manager for UID 500. Aug 13 00:35:07.809858 systemd[1]: Started session-1.scope - Session 1 of User core. Aug 13 00:35:08.071948 systemd[1]: Started sshd@1-172.234.21.106:22-139.178.89.65:40904.service - OpenSSH per-connection server daemon (139.178.89.65:40904). Aug 13 00:35:08.405173 sshd[1633]: Accepted publickey for core from 139.178.89.65 port 40904 ssh2: RSA SHA256:oQRmtR3l5PP4/Uo0wDjw46qIAkFbGTAbDRc2qvFX5SY Aug 13 00:35:08.407228 sshd-session[1633]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:35:08.415151 systemd-logind[1462]: New session 2 of user core. Aug 13 00:35:08.425882 systemd[1]: Started session-2.scope - Session 2 of User core. Aug 13 00:35:08.658187 sshd[1635]: Connection closed by 139.178.89.65 port 40904 Aug 13 00:35:08.659067 sshd-session[1633]: pam_unix(sshd:session): session closed for user core Aug 13 00:35:08.663578 systemd[1]: sshd@1-172.234.21.106:22-139.178.89.65:40904.service: Deactivated successfully. Aug 13 00:35:08.666479 systemd[1]: session-2.scope: Deactivated successfully. Aug 13 00:35:08.667376 systemd-logind[1462]: Session 2 logged out. Waiting for processes to exit. Aug 13 00:35:08.668568 systemd-logind[1462]: Removed session 2. Aug 13 00:35:08.724948 systemd[1]: Started sshd@2-172.234.21.106:22-139.178.89.65:40606.service - OpenSSH per-connection server daemon (139.178.89.65:40606). 
Aug 13 00:35:09.070366 sshd[1641]: Accepted publickey for core from 139.178.89.65 port 40606 ssh2: RSA SHA256:oQRmtR3l5PP4/Uo0wDjw46qIAkFbGTAbDRc2qvFX5SY Aug 13 00:35:09.072997 sshd-session[1641]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:35:09.078164 systemd-logind[1462]: New session 3 of user core. Aug 13 00:35:09.088868 systemd[1]: Started session-3.scope - Session 3 of User core. Aug 13 00:35:09.311966 sshd[1643]: Connection closed by 139.178.89.65 port 40606 Aug 13 00:35:09.312913 sshd-session[1641]: pam_unix(sshd:session): session closed for user core Aug 13 00:35:09.318236 systemd-logind[1462]: Session 3 logged out. Waiting for processes to exit. Aug 13 00:35:09.319163 systemd[1]: sshd@2-172.234.21.106:22-139.178.89.65:40606.service: Deactivated successfully. Aug 13 00:35:09.321990 systemd[1]: session-3.scope: Deactivated successfully. Aug 13 00:35:09.323047 systemd-logind[1462]: Removed session 3. Aug 13 00:35:09.381969 systemd[1]: Started sshd@3-172.234.21.106:22-139.178.89.65:40610.service - OpenSSH per-connection server daemon (139.178.89.65:40610). Aug 13 00:35:09.716586 sshd[1649]: Accepted publickey for core from 139.178.89.65 port 40610 ssh2: RSA SHA256:oQRmtR3l5PP4/Uo0wDjw46qIAkFbGTAbDRc2qvFX5SY Aug 13 00:35:09.718425 sshd-session[1649]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:35:09.723114 systemd-logind[1462]: New session 4 of user core. Aug 13 00:35:09.732885 systemd[1]: Started session-4.scope - Session 4 of User core. Aug 13 00:35:09.969434 sshd[1651]: Connection closed by 139.178.89.65 port 40610 Aug 13 00:35:09.970032 sshd-session[1649]: pam_unix(sshd:session): session closed for user core Aug 13 00:35:09.973424 systemd[1]: sshd@3-172.234.21.106:22-139.178.89.65:40610.service: Deactivated successfully. Aug 13 00:35:09.975769 systemd[1]: session-4.scope: Deactivated successfully. Aug 13 00:35:09.977511 systemd-logind[1462]: Session 4 logged out. Waiting for processes to exit. Aug 13 00:35:09.978589 systemd-logind[1462]: Removed session 4. Aug 13 00:35:10.031110 systemd[1]: Started sshd@4-172.234.21.106:22-139.178.89.65:40622.service - OpenSSH per-connection server daemon (139.178.89.65:40622). Aug 13 00:35:10.369340 sshd[1657]: Accepted publickey for core from 139.178.89.65 port 40622 ssh2: RSA SHA256:oQRmtR3l5PP4/Uo0wDjw46qIAkFbGTAbDRc2qvFX5SY Aug 13 00:35:10.373486 sshd-session[1657]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:35:10.378990 systemd-logind[1462]: New session 5 of user core. Aug 13 00:35:10.389857 systemd[1]: Started session-5.scope - Session 5 of User core. Aug 13 00:35:10.580764 sudo[1660]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Aug 13 00:35:10.581127 sudo[1660]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 00:35:10.602516 sudo[1660]: pam_unix(sudo:session): session closed for user root Aug 13 00:35:10.655415 sshd[1659]: Connection closed by 139.178.89.65 port 40622 Aug 13 00:35:10.656214 sshd-session[1657]: pam_unix(sshd:session): session closed for user core Aug 13 00:35:10.661688 systemd[1]: sshd@4-172.234.21.106:22-139.178.89.65:40622.service: Deactivated successfully. Aug 13 00:35:10.664243 systemd[1]: session-5.scope: Deactivated successfully. Aug 13 00:35:10.665218 systemd-logind[1462]: Session 5 logged out. Waiting for processes to exit. Aug 13 00:35:10.666250 systemd-logind[1462]: Removed session 5. 
Aug 13 00:35:10.721989 systemd[1]: Started sshd@5-172.234.21.106:22-139.178.89.65:40634.service - OpenSSH per-connection server daemon (139.178.89.65:40634). Aug 13 00:35:11.075206 sshd[1666]: Accepted publickey for core from 139.178.89.65 port 40634 ssh2: RSA SHA256:oQRmtR3l5PP4/Uo0wDjw46qIAkFbGTAbDRc2qvFX5SY Aug 13 00:35:11.077519 sshd-session[1666]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:35:11.082327 systemd-logind[1462]: New session 6 of user core. Aug 13 00:35:11.092857 systemd[1]: Started session-6.scope - Session 6 of User core. Aug 13 00:35:11.277444 sudo[1670]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Aug 13 00:35:11.277864 sudo[1670]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 00:35:11.281820 sudo[1670]: pam_unix(sudo:session): session closed for user root Aug 13 00:35:11.287460 sudo[1669]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Aug 13 00:35:11.287782 sudo[1669]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 00:35:11.301977 systemd[1]: Starting audit-rules.service - Load Audit Rules... Aug 13 00:35:11.328996 augenrules[1692]: No rules Aug 13 00:35:11.331018 systemd[1]: audit-rules.service: Deactivated successfully. Aug 13 00:35:11.331333 systemd[1]: Finished audit-rules.service - Load Audit Rules. Aug 13 00:35:11.332837 sudo[1669]: pam_unix(sudo:session): session closed for user root Aug 13 00:35:11.384868 sshd[1668]: Connection closed by 139.178.89.65 port 40634 Aug 13 00:35:11.385372 sshd-session[1666]: pam_unix(sshd:session): session closed for user core Aug 13 00:35:11.389693 systemd[1]: sshd@5-172.234.21.106:22-139.178.89.65:40634.service: Deactivated successfully. Aug 13 00:35:11.391675 systemd[1]: session-6.scope: Deactivated successfully. Aug 13 00:35:11.392361 systemd-logind[1462]: Session 6 logged out. Waiting for processes to exit. Aug 13 00:35:11.393273 systemd-logind[1462]: Removed session 6. Aug 13 00:35:11.443117 systemd[1]: Started sshd@6-172.234.21.106:22-139.178.89.65:40638.service - OpenSSH per-connection server daemon (139.178.89.65:40638). Aug 13 00:35:11.768825 sshd[1701]: Accepted publickey for core from 139.178.89.65 port 40638 ssh2: RSA SHA256:oQRmtR3l5PP4/Uo0wDjw46qIAkFbGTAbDRc2qvFX5SY Aug 13 00:35:11.770419 sshd-session[1701]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:35:11.775492 systemd-logind[1462]: New session 7 of user core. Aug 13 00:35:11.784863 systemd[1]: Started session-7.scope - Session 7 of User core. Aug 13 00:35:11.965568 sudo[1704]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Aug 13 00:35:11.965921 sudo[1704]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 00:35:12.218923 systemd[1]: Starting docker.service - Docker Application Container Engine... Aug 13 00:35:12.221394 (dockerd)[1721]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Aug 13 00:35:12.472178 dockerd[1721]: time="2025-08-13T00:35:12.471998405Z" level=info msg="Starting up" Aug 13 00:35:12.559409 dockerd[1721]: time="2025-08-13T00:35:12.559378890Z" level=info msg="Loading containers: start." 
Aug 13 00:35:12.717756 kernel: Initializing XFRM netlink socket Aug 13 00:35:12.738557 systemd-timesyncd[1394]: Network configuration changed, trying to establish connection. Aug 13 00:35:12.787608 systemd-networkd[1391]: docker0: Link UP Aug 13 00:35:12.816050 dockerd[1721]: time="2025-08-13T00:35:12.816014353Z" level=info msg="Loading containers: done." Aug 13 00:35:12.830758 dockerd[1721]: time="2025-08-13T00:35:12.829763560Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Aug 13 00:35:12.830758 dockerd[1721]: time="2025-08-13T00:35:12.829830051Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Aug 13 00:35:12.830758 dockerd[1721]: time="2025-08-13T00:35:12.829926361Z" level=info msg="Daemon has completed initialization" Aug 13 00:35:12.830872 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1827157449-merged.mount: Deactivated successfully. Aug 13 00:35:12.857120 dockerd[1721]: time="2025-08-13T00:35:12.857055565Z" level=info msg="API listen on /run/docker.sock" Aug 13 00:35:12.857240 systemd[1]: Started docker.service - Docker Application Container Engine. Aug 13 00:35:13.567883 systemd-timesyncd[1394]: Contacted time server [2600:3c01::f03c:91ff:fe96:2932]:123 (2.flatcar.pool.ntp.org). Aug 13 00:35:13.567924 systemd-timesyncd[1394]: Initial clock synchronization to Wed 2025-08-13 00:35:13.567695 UTC. Aug 13 00:35:13.568262 systemd-resolved[1392]: Clock change detected. Flushing caches. Aug 13 00:35:14.053603 containerd[1479]: time="2025-08-13T00:35:14.053567340Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.7\"" Aug 13 00:35:14.953741 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2963686060.mount: Deactivated successfully. 
Aug 13 00:35:16.088696 containerd[1479]: time="2025-08-13T00:35:16.087687607Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:35:16.088696 containerd[1479]: time="2025-08-13T00:35:16.088651829Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.7: active requests=0, bytes read=28799994" Aug 13 00:35:16.089331 containerd[1479]: time="2025-08-13T00:35:16.089271190Z" level=info msg="ImageCreate event name:\"sha256:761ae2258f1825c2079bd41bcc1da2c9bda8b5e902aa147c14896491dfca0f16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:35:16.091345 containerd[1479]: time="2025-08-13T00:35:16.091320434Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e04f6223d52f8041c46ef4545ccaf07894b1ca5851506a9142706d4206911f64\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:35:16.092386 containerd[1479]: time="2025-08-13T00:35:16.092363676Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.7\" with image id \"sha256:761ae2258f1825c2079bd41bcc1da2c9bda8b5e902aa147c14896491dfca0f16\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e04f6223d52f8041c46ef4545ccaf07894b1ca5851506a9142706d4206911f64\", size \"28796794\" in 2.038760066s" Aug 13 00:35:16.092456 containerd[1479]: time="2025-08-13T00:35:16.092440476Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.7\" returns image reference \"sha256:761ae2258f1825c2079bd41bcc1da2c9bda8b5e902aa147c14896491dfca0f16\"" Aug 13 00:35:16.093355 containerd[1479]: time="2025-08-13T00:35:16.093326688Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.7\"" Aug 13 00:35:16.454845 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Aug 13 00:35:16.460128 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:35:16.629962 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:35:16.630234 (kubelet)[1970]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 00:35:16.677718 kubelet[1970]: E0813 00:35:16.677659 1970 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 00:35:16.683754 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 00:35:16.684036 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 00:35:16.684682 systemd[1]: kubelet.service: Consumed 192ms CPU time, 111.2M memory peak. 
Aug 13 00:35:17.562673 containerd[1479]: time="2025-08-13T00:35:17.562592376Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:35:17.563827 containerd[1479]: time="2025-08-13T00:35:17.563558908Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.7: active requests=0, bytes read=24783636" Aug 13 00:35:17.565837 containerd[1479]: time="2025-08-13T00:35:17.564306670Z" level=info msg="ImageCreate event name:\"sha256:87f922d0bde0db7ffcb2174ba37bdab8fdd169a41e1882fe5aa308bb57e44fda\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:35:17.566775 containerd[1479]: time="2025-08-13T00:35:17.566726574Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6c7f288ab0181e496606a43dbade954819af2b1e1c0552becf6903436e16ea75\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:35:17.568432 containerd[1479]: time="2025-08-13T00:35:17.567688836Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.7\" with image id \"sha256:87f922d0bde0db7ffcb2174ba37bdab8fdd169a41e1882fe5aa308bb57e44fda\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6c7f288ab0181e496606a43dbade954819af2b1e1c0552becf6903436e16ea75\", size \"26385470\" in 1.474328598s" Aug 13 00:35:17.568432 containerd[1479]: time="2025-08-13T00:35:17.567739416Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.7\" returns image reference \"sha256:87f922d0bde0db7ffcb2174ba37bdab8fdd169a41e1882fe5aa308bb57e44fda\"" Aug 13 00:35:17.568889 containerd[1479]: time="2025-08-13T00:35:17.568857009Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.7\"" Aug 13 00:35:19.100076 containerd[1479]: time="2025-08-13T00:35:19.099998530Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:35:19.101107 containerd[1479]: time="2025-08-13T00:35:19.101063602Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.7: active requests=0, bytes read=19176921" Aug 13 00:35:19.101844 containerd[1479]: time="2025-08-13T00:35:19.101773234Z" level=info msg="ImageCreate event name:\"sha256:36cc9c80994ebf29b8e1a366d7e736b273a6c6a60bacb5446944cc0953416245\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:35:19.104301 containerd[1479]: time="2025-08-13T00:35:19.104244379Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:1c35a970b4450b4285531495be82cda1f6549952f70d6e3de8db57c20a3da4ce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:35:19.105505 containerd[1479]: time="2025-08-13T00:35:19.105297581Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.7\" with image id \"sha256:36cc9c80994ebf29b8e1a366d7e736b273a6c6a60bacb5446944cc0953416245\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:1c35a970b4450b4285531495be82cda1f6549952f70d6e3de8db57c20a3da4ce\", size \"20778773\" in 1.536405722s" Aug 13 00:35:19.105505 containerd[1479]: time="2025-08-13T00:35:19.105325501Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.7\" returns image reference \"sha256:36cc9c80994ebf29b8e1a366d7e736b273a6c6a60bacb5446944cc0953416245\"" Aug 13 00:35:19.106019 
containerd[1479]: time="2025-08-13T00:35:19.106000162Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.7\"" Aug 13 00:35:20.280066 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1817182078.mount: Deactivated successfully. Aug 13 00:35:20.571286 containerd[1479]: time="2025-08-13T00:35:20.570949001Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:35:20.571990 containerd[1479]: time="2025-08-13T00:35:20.571946223Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.7: active requests=0, bytes read=30895380" Aug 13 00:35:20.572659 containerd[1479]: time="2025-08-13T00:35:20.572635895Z" level=info msg="ImageCreate event name:\"sha256:d5bc66d8682fdab0735e869a3f77730df378af7fd2505c1f4d6374ad3dbd181c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:35:20.574149 containerd[1479]: time="2025-08-13T00:35:20.574128928Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8d589a18b5424f77a784ef2f00feffac0ef210414100822f1c120f0d7221def3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:35:20.574786 containerd[1479]: time="2025-08-13T00:35:20.574746329Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.7\" with image id \"sha256:d5bc66d8682fdab0735e869a3f77730df378af7fd2505c1f4d6374ad3dbd181c\", repo tag \"registry.k8s.io/kube-proxy:v1.32.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:8d589a18b5424f77a784ef2f00feffac0ef210414100822f1c120f0d7221def3\", size \"30894399\" in 1.468544296s" Aug 13 00:35:20.574849 containerd[1479]: time="2025-08-13T00:35:20.574784189Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.7\" returns image reference \"sha256:d5bc66d8682fdab0735e869a3f77730df378af7fd2505c1f4d6374ad3dbd181c\"" Aug 13 00:35:20.575770 containerd[1479]: time="2025-08-13T00:35:20.575726081Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Aug 13 00:35:21.335491 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1699547309.mount: Deactivated successfully. 
Aug 13 00:35:21.973838 containerd[1479]: time="2025-08-13T00:35:21.971991973Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:35:21.973838 containerd[1479]: time="2025-08-13T00:35:21.972897975Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Aug 13 00:35:21.973838 containerd[1479]: time="2025-08-13T00:35:21.972935565Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:35:21.975840 containerd[1479]: time="2025-08-13T00:35:21.975441350Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:35:21.976570 containerd[1479]: time="2025-08-13T00:35:21.976546402Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.400791061s" Aug 13 00:35:21.976644 containerd[1479]: time="2025-08-13T00:35:21.976628982Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Aug 13 00:35:21.977612 containerd[1479]: time="2025-08-13T00:35:21.977578764Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Aug 13 00:35:22.684298 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount854340339.mount: Deactivated successfully. 
Aug 13 00:35:22.688729 containerd[1479]: time="2025-08-13T00:35:22.688083595Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:35:22.688729 containerd[1479]: time="2025-08-13T00:35:22.688616646Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Aug 13 00:35:22.690204 containerd[1479]: time="2025-08-13T00:35:22.689164277Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:35:22.691492 containerd[1479]: time="2025-08-13T00:35:22.690706710Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:35:22.691492 containerd[1479]: time="2025-08-13T00:35:22.691386071Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 713.777147ms" Aug 13 00:35:22.691492 containerd[1479]: time="2025-08-13T00:35:22.691408431Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Aug 13 00:35:22.692125 containerd[1479]: time="2025-08-13T00:35:22.692107893Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Aug 13 00:35:23.466182 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4101778251.mount: Deactivated successfully. Aug 13 00:35:24.999925 containerd[1479]: time="2025-08-13T00:35:24.998620725Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:35:24.999925 containerd[1479]: time="2025-08-13T00:35:24.999146726Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551360" Aug 13 00:35:25.001886 containerd[1479]: time="2025-08-13T00:35:25.001855741Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:35:25.004665 containerd[1479]: time="2025-08-13T00:35:25.004639277Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:35:25.005887 containerd[1479]: time="2025-08-13T00:35:25.005830849Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.313678666s" Aug 13 00:35:25.006103 containerd[1479]: time="2025-08-13T00:35:25.006084980Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Aug 13 00:35:26.934500 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Aug 13 00:35:26.942350 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:35:27.105068 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:35:27.107953 (kubelet)[2131]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 00:35:27.139837 kubelet[2131]: E0813 00:35:27.139709 2131 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 00:35:27.143086 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 00:35:27.143239 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 00:35:27.143539 systemd[1]: kubelet.service: Consumed 165ms CPU time, 110.1M memory peak. Aug 13 00:35:27.577113 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:35:27.577224 systemd[1]: kubelet.service: Consumed 165ms CPU time, 110.1M memory peak. Aug 13 00:35:27.583955 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:35:27.611892 systemd[1]: Reload requested from client PID 2145 ('systemctl') (unit session-7.scope)... Aug 13 00:35:27.611907 systemd[1]: Reloading... Aug 13 00:35:27.739858 zram_generator::config[2188]: No configuration found. Aug 13 00:35:27.839288 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 00:35:27.933995 systemd[1]: Reloading finished in 321 ms. Aug 13 00:35:27.977454 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:35:27.981059 (kubelet)[2235]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 13 00:35:27.983860 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:35:27.984627 systemd[1]: kubelet.service: Deactivated successfully. Aug 13 00:35:27.984879 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:35:27.984908 systemd[1]: kubelet.service: Consumed 127ms CPU time, 99.3M memory peak. Aug 13 00:35:27.989076 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:35:28.126913 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:35:28.134080 (kubelet)[2247]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 13 00:35:28.164500 kubelet[2247]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 00:35:28.164500 kubelet[2247]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Aug 13 00:35:28.164500 kubelet[2247]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 00:35:28.164754 kubelet[2247]: I0813 00:35:28.164548 2247 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 00:35:28.328777 kubelet[2247]: I0813 00:35:28.328757 2247 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Aug 13 00:35:28.329175 kubelet[2247]: I0813 00:35:28.328831 2247 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 00:35:28.329175 kubelet[2247]: I0813 00:35:28.329115 2247 server.go:954] "Client rotation is on, will bootstrap in background" Aug 13 00:35:28.352507 kubelet[2247]: E0813 00:35:28.352486 2247 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.234.21.106:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.234.21.106:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:35:28.353665 kubelet[2247]: I0813 00:35:28.353545 2247 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 00:35:28.360443 kubelet[2247]: E0813 00:35:28.360418 2247 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Aug 13 00:35:28.360719 kubelet[2247]: I0813 00:35:28.360560 2247 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Aug 13 00:35:28.363963 kubelet[2247]: I0813 00:35:28.363937 2247 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 13 00:35:28.366485 kubelet[2247]: I0813 00:35:28.366174 2247 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 00:35:28.366485 kubelet[2247]: I0813 00:35:28.366204 2247 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-234-21-106","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Aug 13 00:35:28.366485 kubelet[2247]: I0813 00:35:28.366330 2247 topology_manager.go:138] "Creating topology manager with none policy" Aug 13 00:35:28.366485 kubelet[2247]: I0813 00:35:28.366337 2247 container_manager_linux.go:304] "Creating device plugin manager" Aug 13 00:35:28.366660 kubelet[2247]: I0813 00:35:28.366435 2247 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:35:28.369723 kubelet[2247]: I0813 00:35:28.369710 2247 kubelet.go:446] "Attempting to sync node with API server" Aug 13 00:35:28.369963 kubelet[2247]: I0813 00:35:28.369951 2247 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 00:35:28.370051 kubelet[2247]: I0813 00:35:28.370020 2247 kubelet.go:352] "Adding apiserver pod source" Aug 13 00:35:28.370051 kubelet[2247]: I0813 00:35:28.370038 2247 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 00:35:28.373328 kubelet[2247]: I0813 00:35:28.372880 2247 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Aug 13 00:35:28.373328 kubelet[2247]: I0813 00:35:28.373192 2247 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 13 00:35:28.373934 kubelet[2247]: W0813 00:35:28.373732 2247 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
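The entries above show the first kubelet start (PID 2131) aborting because /var/lib/kubelet/config.yaml does not exist, and the restarted kubelet (PID 2247) then running with the systemd cgroup driver, the /etc/kubernetes/manifests static pod path, and the hard-eviction thresholds printed in its node config dump. A minimal sketch of what such a config file can look like on a kubeadm-style node; the values simply mirror the thresholds logged above and are illustrative, not read from this host:

# Sketch only: on a kubeadm-provisioned node this file is written by kubeadm, not by hand.
cat <<'EOF'
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
staticPodPath: /etc/kubernetes/manifests
evictionHard:
  memory.available: "100Mi"
  nodefs.available: "10%"
  nodefs.inodesFree: "5%"
  imagefs.available: "15%"
  imagefs.inodesFree: "5%"
EOF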
Aug 13 00:35:28.375603 kubelet[2247]: I0813 00:35:28.375293 2247 watchdog_linux.go:99] "Systemd watchdog is not enabled" Aug 13 00:35:28.375603 kubelet[2247]: I0813 00:35:28.375323 2247 server.go:1287] "Started kubelet" Aug 13 00:35:28.375603 kubelet[2247]: W0813 00:35:28.375417 2247 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.234.21.106:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-234-21-106&limit=500&resourceVersion=0": dial tcp 172.234.21.106:6443: connect: connection refused Aug 13 00:35:28.375603 kubelet[2247]: E0813 00:35:28.375452 2247 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.234.21.106:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-234-21-106&limit=500&resourceVersion=0\": dial tcp 172.234.21.106:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:35:28.383798 kubelet[2247]: I0813 00:35:28.383625 2247 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 00:35:28.385199 kubelet[2247]: I0813 00:35:28.384880 2247 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 00:35:28.385788 kubelet[2247]: I0813 00:35:28.385462 2247 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 00:35:28.385788 kubelet[2247]: W0813 00:35:28.385603 2247 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.234.21.106:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.234.21.106:6443: connect: connection refused Aug 13 00:35:28.385788 kubelet[2247]: E0813 00:35:28.385635 2247 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.234.21.106:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.234.21.106:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:35:28.387427 kubelet[2247]: I0813 00:35:28.386760 2247 volume_manager.go:297] "Starting Kubelet Volume Manager" Aug 13 00:35:28.387427 kubelet[2247]: E0813 00:35:28.386862 2247 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-234-21-106\" not found" Aug 13 00:35:28.387427 kubelet[2247]: I0813 00:35:28.387137 2247 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Aug 13 00:35:28.387427 kubelet[2247]: I0813 00:35:28.387170 2247 reconciler.go:26] "Reconciler: start to sync state" Aug 13 00:35:28.388072 kubelet[2247]: I0813 00:35:28.388052 2247 server.go:479] "Adding debug handlers to kubelet server" Aug 13 00:35:28.388854 kubelet[2247]: W0813 00:35:28.388441 2247 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.234.21.106:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.234.21.106:6443: connect: connection refused Aug 13 00:35:28.388854 kubelet[2247]: E0813 00:35:28.388476 2247 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.234.21.106:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.234.21.106:6443: connect: connection refused" 
logger="UnhandledError" Aug 13 00:35:28.388854 kubelet[2247]: E0813 00:35:28.388523 2247 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.234.21.106:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-234-21-106?timeout=10s\": dial tcp 172.234.21.106:6443: connect: connection refused" interval="200ms" Aug 13 00:35:28.388854 kubelet[2247]: I0813 00:35:28.388033 2247 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 00:35:28.389065 kubelet[2247]: I0813 00:35:28.389051 2247 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 00:35:28.390741 kubelet[2247]: E0813 00:35:28.389696 2247 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.234.21.106:6443/api/v1/namespaces/default/events\": dial tcp 172.234.21.106:6443: connect: connection refused" event="&Event{ObjectMeta:{172-234-21-106.185b2c6d7284e2bb default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172-234-21-106,UID:172-234-21-106,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172-234-21-106,},FirstTimestamp:2025-08-13 00:35:28.375308987 +0000 UTC m=+0.238203248,LastTimestamp:2025-08-13 00:35:28.375308987 +0000 UTC m=+0.238203248,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172-234-21-106,}" Aug 13 00:35:28.390874 kubelet[2247]: I0813 00:35:28.390820 2247 factory.go:221] Registration of the systemd container factory successfully Aug 13 00:35:28.390900 kubelet[2247]: I0813 00:35:28.390883 2247 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 00:35:28.392141 kubelet[2247]: I0813 00:35:28.392127 2247 factory.go:221] Registration of the containerd container factory successfully Aug 13 00:35:28.403091 kubelet[2247]: I0813 00:35:28.403059 2247 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 13 00:35:28.404936 kubelet[2247]: I0813 00:35:28.404913 2247 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Aug 13 00:35:28.404936 kubelet[2247]: I0813 00:35:28.404933 2247 status_manager.go:227] "Starting to sync pod status with apiserver" Aug 13 00:35:28.405004 kubelet[2247]: I0813 00:35:28.404946 2247 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Aug 13 00:35:28.405004 kubelet[2247]: I0813 00:35:28.404952 2247 kubelet.go:2382] "Starting kubelet main sync loop" Aug 13 00:35:28.405004 kubelet[2247]: E0813 00:35:28.404993 2247 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 00:35:28.413313 kubelet[2247]: E0813 00:35:28.413246 2247 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 13 00:35:28.413413 kubelet[2247]: W0813 00:35:28.413388 2247 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.234.21.106:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.234.21.106:6443: connect: connection refused Aug 13 00:35:28.413541 kubelet[2247]: E0813 00:35:28.413459 2247 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.234.21.106:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.234.21.106:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:35:28.418067 kubelet[2247]: I0813 00:35:28.418050 2247 cpu_manager.go:221] "Starting CPU manager" policy="none" Aug 13 00:35:28.418067 kubelet[2247]: I0813 00:35:28.418064 2247 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Aug 13 00:35:28.418187 kubelet[2247]: I0813 00:35:28.418077 2247 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:35:28.419957 kubelet[2247]: I0813 00:35:28.419943 2247 policy_none.go:49] "None policy: Start" Aug 13 00:35:28.419957 kubelet[2247]: I0813 00:35:28.419959 2247 memory_manager.go:186] "Starting memorymanager" policy="None" Aug 13 00:35:28.420016 kubelet[2247]: I0813 00:35:28.419970 2247 state_mem.go:35] "Initializing new in-memory state store" Aug 13 00:35:28.425420 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Aug 13 00:35:28.434440 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Aug 13 00:35:28.437365 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Aug 13 00:35:28.447540 kubelet[2247]: I0813 00:35:28.447503 2247 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 13 00:35:28.447916 kubelet[2247]: I0813 00:35:28.447656 2247 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 00:35:28.447916 kubelet[2247]: I0813 00:35:28.447671 2247 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 00:35:28.447916 kubelet[2247]: I0813 00:35:28.447837 2247 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 00:35:28.448621 kubelet[2247]: E0813 00:35:28.448575 2247 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Aug 13 00:35:28.448721 kubelet[2247]: E0813 00:35:28.448710 2247 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172-234-21-106\" not found" Aug 13 00:35:28.512450 systemd[1]: Created slice kubepods-burstable-pod83cfa807fbcf74d70f707816b9e345df.slice - libcontainer container kubepods-burstable-pod83cfa807fbcf74d70f707816b9e345df.slice. Aug 13 00:35:28.523900 kubelet[2247]: E0813 00:35:28.523876 2247 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-234-21-106\" not found" node="172-234-21-106" Aug 13 00:35:28.526689 systemd[1]: Created slice kubepods-burstable-podd0113a95571209fdb49d8c44cb5e74e9.slice - libcontainer container kubepods-burstable-podd0113a95571209fdb49d8c44cb5e74e9.slice. 
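The systemd lines above show the kubelet carving out the cgroup hierarchy it manages: kubepods.slice with burstable and besteffort child slices, plus one burstable slice per static control-plane pod, all driven through the systemd cgroup driver on cgroup v2 (per the node config logged earlier). A quick way to inspect that hierarchy on a similar host; the paths assume the unified cgroup v2 mount at /sys/fs/cgroup:

# Show the kubelet-managed slices and the per-pod child cgroups.
systemctl status kubepods.slice --no-pager
ls /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/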
Aug 13 00:35:28.528565 kubelet[2247]: E0813 00:35:28.528546 2247 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-234-21-106\" not found" node="172-234-21-106" Aug 13 00:35:28.530595 systemd[1]: Created slice kubepods-burstable-pod4aa71c4abc9f96c1b4f90b7128033a12.slice - libcontainer container kubepods-burstable-pod4aa71c4abc9f96c1b4f90b7128033a12.slice. Aug 13 00:35:28.532475 kubelet[2247]: E0813 00:35:28.532459 2247 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-234-21-106\" not found" node="172-234-21-106" Aug 13 00:35:28.549307 kubelet[2247]: I0813 00:35:28.549293 2247 kubelet_node_status.go:75] "Attempting to register node" node="172-234-21-106" Aug 13 00:35:28.549507 kubelet[2247]: E0813 00:35:28.549492 2247 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.234.21.106:6443/api/v1/nodes\": dial tcp 172.234.21.106:6443: connect: connection refused" node="172-234-21-106" Aug 13 00:35:28.589031 kubelet[2247]: E0813 00:35:28.589007 2247 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.234.21.106:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-234-21-106?timeout=10s\": dial tcp 172.234.21.106:6443: connect: connection refused" interval="400ms" Aug 13 00:35:28.687904 kubelet[2247]: I0813 00:35:28.687800 2247 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d0113a95571209fdb49d8c44cb5e74e9-k8s-certs\") pod \"kube-controller-manager-172-234-21-106\" (UID: \"d0113a95571209fdb49d8c44cb5e74e9\") " pod="kube-system/kube-controller-manager-172-234-21-106" Aug 13 00:35:28.687904 kubelet[2247]: I0813 00:35:28.687845 2247 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d0113a95571209fdb49d8c44cb5e74e9-kubeconfig\") pod \"kube-controller-manager-172-234-21-106\" (UID: \"d0113a95571209fdb49d8c44cb5e74e9\") " pod="kube-system/kube-controller-manager-172-234-21-106" Aug 13 00:35:28.687904 kubelet[2247]: I0813 00:35:28.687863 2247 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4aa71c4abc9f96c1b4f90b7128033a12-kubeconfig\") pod \"kube-scheduler-172-234-21-106\" (UID: \"4aa71c4abc9f96c1b4f90b7128033a12\") " pod="kube-system/kube-scheduler-172-234-21-106" Aug 13 00:35:28.687904 kubelet[2247]: I0813 00:35:28.687877 2247 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/83cfa807fbcf74d70f707816b9e345df-ca-certs\") pod \"kube-apiserver-172-234-21-106\" (UID: \"83cfa807fbcf74d70f707816b9e345df\") " pod="kube-system/kube-apiserver-172-234-21-106" Aug 13 00:35:28.687904 kubelet[2247]: I0813 00:35:28.687891 2247 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/83cfa807fbcf74d70f707816b9e345df-usr-share-ca-certificates\") pod \"kube-apiserver-172-234-21-106\" (UID: \"83cfa807fbcf74d70f707816b9e345df\") " pod="kube-system/kube-apiserver-172-234-21-106" Aug 13 00:35:28.688051 kubelet[2247]: I0813 00:35:28.687905 2247 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d0113a95571209fdb49d8c44cb5e74e9-flexvolume-dir\") pod \"kube-controller-manager-172-234-21-106\" (UID: \"d0113a95571209fdb49d8c44cb5e74e9\") " pod="kube-system/kube-controller-manager-172-234-21-106" Aug 13 00:35:28.688051 kubelet[2247]: I0813 00:35:28.687917 2247 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/83cfa807fbcf74d70f707816b9e345df-k8s-certs\") pod \"kube-apiserver-172-234-21-106\" (UID: \"83cfa807fbcf74d70f707816b9e345df\") " pod="kube-system/kube-apiserver-172-234-21-106" Aug 13 00:35:28.688051 kubelet[2247]: I0813 00:35:28.687930 2247 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d0113a95571209fdb49d8c44cb5e74e9-ca-certs\") pod \"kube-controller-manager-172-234-21-106\" (UID: \"d0113a95571209fdb49d8c44cb5e74e9\") " pod="kube-system/kube-controller-manager-172-234-21-106" Aug 13 00:35:28.688051 kubelet[2247]: I0813 00:35:28.687944 2247 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d0113a95571209fdb49d8c44cb5e74e9-usr-share-ca-certificates\") pod \"kube-controller-manager-172-234-21-106\" (UID: \"d0113a95571209fdb49d8c44cb5e74e9\") " pod="kube-system/kube-controller-manager-172-234-21-106" Aug 13 00:35:28.752399 kubelet[2247]: I0813 00:35:28.752347 2247 kubelet_node_status.go:75] "Attempting to register node" node="172-234-21-106" Aug 13 00:35:28.752830 kubelet[2247]: E0813 00:35:28.752787 2247 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.234.21.106:6443/api/v1/nodes\": dial tcp 172.234.21.106:6443: connect: connection refused" node="172-234-21-106" Aug 13 00:35:28.824341 kubelet[2247]: E0813 00:35:28.824270 2247 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 00:35:28.825121 containerd[1479]: time="2025-08-13T00:35:28.825070896Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-234-21-106,Uid:83cfa807fbcf74d70f707816b9e345df,Namespace:kube-system,Attempt:0,}" Aug 13 00:35:28.829311 kubelet[2247]: E0813 00:35:28.829285 2247 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 00:35:28.829702 containerd[1479]: time="2025-08-13T00:35:28.829661865Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-234-21-106,Uid:d0113a95571209fdb49d8c44cb5e74e9,Namespace:kube-system,Attempt:0,}" Aug 13 00:35:28.833613 kubelet[2247]: E0813 00:35:28.833582 2247 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 00:35:28.834380 containerd[1479]: time="2025-08-13T00:35:28.834134784Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-234-21-106,Uid:4aa71c4abc9f96c1b4f90b7128033a12,Namespace:kube-system,Attempt:0,}" Aug 13 00:35:28.990456 kubelet[2247]: E0813 00:35:28.990267 2247 controller.go:145] 
"Failed to ensure lease exists, will retry" err="Get \"https://172.234.21.106:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-234-21-106?timeout=10s\": dial tcp 172.234.21.106:6443: connect: connection refused" interval="800ms" Aug 13 00:35:29.154967 kubelet[2247]: I0813 00:35:29.154903 2247 kubelet_node_status.go:75] "Attempting to register node" node="172-234-21-106" Aug 13 00:35:29.155449 kubelet[2247]: E0813 00:35:29.155389 2247 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.234.21.106:6443/api/v1/nodes\": dial tcp 172.234.21.106:6443: connect: connection refused" node="172-234-21-106" Aug 13 00:35:29.356538 kubelet[2247]: W0813 00:35:29.356362 2247 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.234.21.106:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.234.21.106:6443: connect: connection refused Aug 13 00:35:29.356538 kubelet[2247]: E0813 00:35:29.356433 2247 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.234.21.106:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.234.21.106:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:35:29.457381 kubelet[2247]: W0813 00:35:29.457310 2247 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.234.21.106:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-234-21-106&limit=500&resourceVersion=0": dial tcp 172.234.21.106:6443: connect: connection refused Aug 13 00:35:29.457381 kubelet[2247]: E0813 00:35:29.457386 2247 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.234.21.106:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-234-21-106&limit=500&resourceVersion=0\": dial tcp 172.234.21.106:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:35:29.582495 kubelet[2247]: W0813 00:35:29.582402 2247 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.234.21.106:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.234.21.106:6443: connect: connection refused Aug 13 00:35:29.582495 kubelet[2247]: E0813 00:35:29.582470 2247 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.234.21.106:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.234.21.106:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:35:29.585097 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4191165198.mount: Deactivated successfully. 
Aug 13 00:35:29.589606 containerd[1479]: time="2025-08-13T00:35:29.589575295Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 00:35:29.590652 containerd[1479]: time="2025-08-13T00:35:29.590630207Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 00:35:29.591941 containerd[1479]: time="2025-08-13T00:35:29.591903769Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Aug 13 00:35:29.592421 containerd[1479]: time="2025-08-13T00:35:29.592393750Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Aug 13 00:35:29.593153 containerd[1479]: time="2025-08-13T00:35:29.593127352Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 00:35:29.594250 containerd[1479]: time="2025-08-13T00:35:29.594169554Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 00:35:29.594672 containerd[1479]: time="2025-08-13T00:35:29.594641485Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Aug 13 00:35:29.597936 containerd[1479]: time="2025-08-13T00:35:29.596998799Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 00:35:29.597936 containerd[1479]: time="2025-08-13T00:35:29.597548481Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 767.764666ms" Aug 13 00:35:29.599851 containerd[1479]: time="2025-08-13T00:35:29.599795665Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 774.606579ms" Aug 13 00:35:29.601281 containerd[1479]: time="2025-08-13T00:35:29.601258988Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 767.042494ms" Aug 13 00:35:29.677432 containerd[1479]: time="2025-08-13T00:35:29.676114688Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:35:29.677432 containerd[1479]: time="2025-08-13T00:35:29.677200930Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:35:29.677432 containerd[1479]: time="2025-08-13T00:35:29.677216690Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:35:29.678109 containerd[1479]: time="2025-08-13T00:35:29.677299700Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:35:29.678826 containerd[1479]: time="2025-08-13T00:35:29.678668473Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:35:29.678826 containerd[1479]: time="2025-08-13T00:35:29.678717223Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:35:29.678826 containerd[1479]: time="2025-08-13T00:35:29.678726203Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:35:29.682654 containerd[1479]: time="2025-08-13T00:35:29.682567781Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:35:29.684098 containerd[1479]: time="2025-08-13T00:35:29.683898333Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:35:29.684098 containerd[1479]: time="2025-08-13T00:35:29.683940733Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:35:29.684098 containerd[1479]: time="2025-08-13T00:35:29.683952273Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:35:29.684098 containerd[1479]: time="2025-08-13T00:35:29.684008883Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:35:29.709946 systemd[1]: Started cri-containerd-1ae3838890a0947f1063fb473cdf0468c7622b8ef8182b85f3e0a713228a4311.scope - libcontainer container 1ae3838890a0947f1063fb473cdf0468c7622b8ef8182b85f3e0a713228a4311. Aug 13 00:35:29.711513 systemd[1]: Started cri-containerd-437648ff848b895ba0ae94d58a49116f2dfe7ed899c5e4d16349ed2ea1be016d.scope - libcontainer container 437648ff848b895ba0ae94d58a49116f2dfe7ed899c5e4d16349ed2ea1be016d. Aug 13 00:35:29.715406 systemd[1]: Started cri-containerd-c14ab53b825a05861f3009ef5b71753011ab08fc853197a2fd301f37a4467a02.scope - libcontainer container c14ab53b825a05861f3009ef5b71753011ab08fc853197a2fd301f37a4467a02. 
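The containerd entries show the three pod sandboxes being backed by the registry.k8s.io/pause:3.8 image, each with its own runc v2 shim that systemd tracks as a cri-containerd-<id>.scope unit. Illustrative commands to see the same state from the host; the socket path and the k8s.io namespace are containerd defaults assumed here, not read from this machine:

# List the pinned pause image and the per-sandbox shim scopes.
ctr --address /run/containerd/containerd.sock -n k8s.io images ls | grep 'pause:3.8'
systemctl list-units --no-pager 'cri-containerd-*.scope'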
Aug 13 00:35:29.735725 kubelet[2247]: W0813 00:35:29.735672 2247 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.234.21.106:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.234.21.106:6443: connect: connection refused Aug 13 00:35:29.735789 kubelet[2247]: E0813 00:35:29.735738 2247 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.234.21.106:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.234.21.106:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:35:29.761056 containerd[1479]: time="2025-08-13T00:35:29.760991427Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-234-21-106,Uid:d0113a95571209fdb49d8c44cb5e74e9,Namespace:kube-system,Attempt:0,} returns sandbox id \"1ae3838890a0947f1063fb473cdf0468c7622b8ef8182b85f3e0a713228a4311\"" Aug 13 00:35:29.762118 kubelet[2247]: E0813 00:35:29.762098 2247 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 00:35:29.769627 containerd[1479]: time="2025-08-13T00:35:29.769577644Z" level=info msg="CreateContainer within sandbox \"1ae3838890a0947f1063fb473cdf0468c7622b8ef8182b85f3e0a713228a4311\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Aug 13 00:35:29.777121 containerd[1479]: time="2025-08-13T00:35:29.777098760Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-234-21-106,Uid:83cfa807fbcf74d70f707816b9e345df,Namespace:kube-system,Attempt:0,} returns sandbox id \"437648ff848b895ba0ae94d58a49116f2dfe7ed899c5e4d16349ed2ea1be016d\"" Aug 13 00:35:29.777911 kubelet[2247]: E0813 00:35:29.777895 2247 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 00:35:29.781670 containerd[1479]: time="2025-08-13T00:35:29.781499028Z" level=info msg="CreateContainer within sandbox \"437648ff848b895ba0ae94d58a49116f2dfe7ed899c5e4d16349ed2ea1be016d\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Aug 13 00:35:29.785080 containerd[1479]: time="2025-08-13T00:35:29.785053785Z" level=info msg="CreateContainer within sandbox \"1ae3838890a0947f1063fb473cdf0468c7622b8ef8182b85f3e0a713228a4311\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"b6a05e937f5978d3c7c90e7c2ef000bbf3aaf8b01d32a6756181aa42188e3410\"" Aug 13 00:35:29.785513 containerd[1479]: time="2025-08-13T00:35:29.785492496Z" level=info msg="StartContainer for \"b6a05e937f5978d3c7c90e7c2ef000bbf3aaf8b01d32a6756181aa42188e3410\"" Aug 13 00:35:29.791710 kubelet[2247]: E0813 00:35:29.791684 2247 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.234.21.106:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-234-21-106?timeout=10s\": dial tcp 172.234.21.106:6443: connect: connection refused" interval="1.6s" Aug 13 00:35:29.793685 containerd[1479]: time="2025-08-13T00:35:29.793642293Z" level=info msg="CreateContainer within sandbox \"437648ff848b895ba0ae94d58a49116f2dfe7ed899c5e4d16349ed2ea1be016d\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id 
\"38e67ad9c1939dd2a8a1b6205838f2814636c7a048bfcf937a7dc2e88e462ea8\"" Aug 13 00:35:29.794404 containerd[1479]: time="2025-08-13T00:35:29.794344984Z" level=info msg="StartContainer for \"38e67ad9c1939dd2a8a1b6205838f2814636c7a048bfcf937a7dc2e88e462ea8\"" Aug 13 00:35:29.812419 containerd[1479]: time="2025-08-13T00:35:29.812300640Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-234-21-106,Uid:4aa71c4abc9f96c1b4f90b7128033a12,Namespace:kube-system,Attempt:0,} returns sandbox id \"c14ab53b825a05861f3009ef5b71753011ab08fc853197a2fd301f37a4467a02\"" Aug 13 00:35:29.813408 kubelet[2247]: E0813 00:35:29.812990 2247 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 00:35:29.817728 containerd[1479]: time="2025-08-13T00:35:29.817678991Z" level=info msg="CreateContainer within sandbox \"c14ab53b825a05861f3009ef5b71753011ab08fc853197a2fd301f37a4467a02\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Aug 13 00:35:29.818930 systemd[1]: Started cri-containerd-b6a05e937f5978d3c7c90e7c2ef000bbf3aaf8b01d32a6756181aa42188e3410.scope - libcontainer container b6a05e937f5978d3c7c90e7c2ef000bbf3aaf8b01d32a6756181aa42188e3410. Aug 13 00:35:29.830063 systemd[1]: Started cri-containerd-38e67ad9c1939dd2a8a1b6205838f2814636c7a048bfcf937a7dc2e88e462ea8.scope - libcontainer container 38e67ad9c1939dd2a8a1b6205838f2814636c7a048bfcf937a7dc2e88e462ea8. Aug 13 00:35:29.833926 containerd[1479]: time="2025-08-13T00:35:29.833618943Z" level=info msg="CreateContainer within sandbox \"c14ab53b825a05861f3009ef5b71753011ab08fc853197a2fd301f37a4467a02\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"53abce846ca2fb115fe8822bb47abbdc7b7055a2f0ca15726f3167f28692948c\"" Aug 13 00:35:29.834873 containerd[1479]: time="2025-08-13T00:35:29.834405634Z" level=info msg="StartContainer for \"53abce846ca2fb115fe8822bb47abbdc7b7055a2f0ca15726f3167f28692948c\"" Aug 13 00:35:29.864954 systemd[1]: Started cri-containerd-53abce846ca2fb115fe8822bb47abbdc7b7055a2f0ca15726f3167f28692948c.scope - libcontainer container 53abce846ca2fb115fe8822bb47abbdc7b7055a2f0ca15726f3167f28692948c. 
Aug 13 00:35:29.878504 containerd[1479]: time="2025-08-13T00:35:29.878481592Z" level=info msg="StartContainer for \"b6a05e937f5978d3c7c90e7c2ef000bbf3aaf8b01d32a6756181aa42188e3410\" returns successfully" Aug 13 00:35:29.904872 containerd[1479]: time="2025-08-13T00:35:29.904800255Z" level=info msg="StartContainer for \"38e67ad9c1939dd2a8a1b6205838f2814636c7a048bfcf937a7dc2e88e462ea8\" returns successfully" Aug 13 00:35:29.928012 containerd[1479]: time="2025-08-13T00:35:29.927611850Z" level=info msg="StartContainer for \"53abce846ca2fb115fe8822bb47abbdc7b7055a2f0ca15726f3167f28692948c\" returns successfully" Aug 13 00:35:29.958574 kubelet[2247]: I0813 00:35:29.958252 2247 kubelet_node_status.go:75] "Attempting to register node" node="172-234-21-106" Aug 13 00:35:29.958574 kubelet[2247]: E0813 00:35:29.958520 2247 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.234.21.106:6443/api/v1/nodes\": dial tcp 172.234.21.106:6443: connect: connection refused" node="172-234-21-106" Aug 13 00:35:30.429513 kubelet[2247]: E0813 00:35:30.429496 2247 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-234-21-106\" not found" node="172-234-21-106" Aug 13 00:35:30.431177 kubelet[2247]: E0813 00:35:30.430672 2247 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 00:35:30.433291 kubelet[2247]: E0813 00:35:30.433121 2247 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-234-21-106\" not found" node="172-234-21-106" Aug 13 00:35:30.433291 kubelet[2247]: E0813 00:35:30.433191 2247 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 00:35:30.434157 kubelet[2247]: E0813 00:35:30.434051 2247 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-234-21-106\" not found" node="172-234-21-106" Aug 13 00:35:30.434157 kubelet[2247]: E0813 00:35:30.434122 2247 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 00:35:31.395749 kubelet[2247]: E0813 00:35:31.395673 2247 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172-234-21-106\" not found" node="172-234-21-106" Aug 13 00:35:31.437167 kubelet[2247]: E0813 00:35:31.437100 2247 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "172-234-21-106" not found Aug 13 00:35:31.442093 kubelet[2247]: E0813 00:35:31.442078 2247 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-234-21-106\" not found" node="172-234-21-106" Aug 13 00:35:31.442196 kubelet[2247]: E0813 00:35:31.442185 2247 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-234-21-106\" not found" node="172-234-21-106" Aug 13 00:35:31.442266 kubelet[2247]: E0813 00:35:31.442254 2247 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 00:35:31.442341 kubelet[2247]: E0813 00:35:31.442329 2247 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 00:35:31.560409 kubelet[2247]: I0813 00:35:31.560376 2247 kubelet_node_status.go:75] "Attempting to register node" node="172-234-21-106" Aug 13 00:35:31.568247 kubelet[2247]: I0813 00:35:31.568221 2247 kubelet_node_status.go:78] "Successfully registered node" node="172-234-21-106" Aug 13 00:35:31.568247 kubelet[2247]: E0813 00:35:31.568243 2247 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"172-234-21-106\": node \"172-234-21-106\" not found" Aug 13 00:35:31.578260 kubelet[2247]: E0813 00:35:31.578232 2247 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-234-21-106\" not found" Aug 13 00:35:31.678941 kubelet[2247]: E0813 00:35:31.678835 2247 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-234-21-106\" not found" Aug 13 00:35:31.779090 kubelet[2247]: E0813 00:35:31.779047 2247 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-234-21-106\" not found" Aug 13 00:35:31.879844 kubelet[2247]: E0813 00:35:31.879776 2247 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-234-21-106\" not found" Aug 13 00:35:31.980849 kubelet[2247]: E0813 00:35:31.980753 2247 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-234-21-106\" not found" Aug 13 00:35:32.081650 kubelet[2247]: E0813 00:35:32.081604 2247 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-234-21-106\" not found" Aug 13 00:35:32.187967 kubelet[2247]: I0813 00:35:32.187934 2247 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-234-21-106" Aug 13 00:35:32.193837 kubelet[2247]: I0813 00:35:32.193671 2247 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-234-21-106" Aug 13 00:35:32.196754 kubelet[2247]: I0813 00:35:32.196715 2247 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-234-21-106" Aug 13 00:35:32.385120 kubelet[2247]: I0813 00:35:32.384969 2247 apiserver.go:52] "Watching apiserver" Aug 13 00:35:32.387170 kubelet[2247]: E0813 00:35:32.387050 2247 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 00:35:32.387295 kubelet[2247]: I0813 00:35:32.387282 2247 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Aug 13 00:35:32.439472 kubelet[2247]: I0813 00:35:32.439453 2247 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-234-21-106" Aug 13 00:35:32.440106 kubelet[2247]: I0813 00:35:32.439731 2247 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-234-21-106" Aug 13 00:35:32.444723 kubelet[2247]: E0813 00:35:32.444665 2247 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-234-21-106\" already exists" pod="kube-system/kube-apiserver-172-234-21-106" Aug 13 00:35:32.444986 kubelet[2247]: E0813 00:35:32.444967 2247 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 00:35:32.445361 kubelet[2247]: E0813 00:35:32.445338 2247 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-234-21-106\" already exists" pod="kube-system/kube-scheduler-172-234-21-106" Aug 13 00:35:32.445453 kubelet[2247]: E0813 00:35:32.445424 2247 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 00:35:33.076359 systemd[1]: Reload requested from client PID 2514 ('systemctl') (unit session-7.scope)... Aug 13 00:35:33.076379 systemd[1]: Reloading... Aug 13 00:35:33.161889 zram_generator::config[2559]: No configuration found. Aug 13 00:35:33.268077 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 00:35:33.365397 systemd[1]: Reloading finished in 288 ms. Aug 13 00:35:33.383556 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:35:33.400800 systemd[1]: kubelet.service: Deactivated successfully. Aug 13 00:35:33.401095 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:35:33.401145 systemd[1]: kubelet.service: Consumed 596ms CPU time, 131.4M memory peak. Aug 13 00:35:33.407316 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:35:33.571906 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:35:33.576126 (kubelet)[2610]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 13 00:35:33.609944 kubelet[2610]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 00:35:33.609944 kubelet[2610]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Aug 13 00:35:33.609944 kubelet[2610]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 00:35:33.610284 kubelet[2610]: I0813 00:35:33.609951 2610 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 00:35:33.615829 kubelet[2610]: I0813 00:35:33.615584 2610 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Aug 13 00:35:33.615829 kubelet[2610]: I0813 00:35:33.615600 2610 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 00:35:33.615829 kubelet[2610]: I0813 00:35:33.615736 2610 server.go:954] "Client rotation is on, will bootstrap in background" Aug 13 00:35:33.617164 kubelet[2610]: I0813 00:35:33.617130 2610 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Aug 13 00:35:33.619572 kubelet[2610]: I0813 00:35:33.619157 2610 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 00:35:33.622175 kubelet[2610]: E0813 00:35:33.622148 2610 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Aug 13 00:35:33.622207 kubelet[2610]: I0813 00:35:33.622171 2610 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Aug 13 00:35:33.626227 kubelet[2610]: I0813 00:35:33.626118 2610 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Aug 13 00:35:33.626301 kubelet[2610]: I0813 00:35:33.626284 2610 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 00:35:33.626410 kubelet[2610]: I0813 00:35:33.626301 2610 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-234-21-106","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Aug 13 00:35:33.626479 kubelet[2610]: I0813 00:35:33.626415 2610 topology_manager.go:138] "Creating topology manager with none policy" Aug 13 00:35:33.626479 kubelet[2610]: I0813 00:35:33.626421 2610 container_manager_linux.go:304] "Creating device plugin manager" Aug 13 00:35:33.626479 kubelet[2610]: I0813 00:35:33.626457 2610 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:35:33.626586 kubelet[2610]: I0813 00:35:33.626575 2610 kubelet.go:446] "Attempting to sync node with API server" Aug 13 00:35:33.626615 kubelet[2610]: I0813 00:35:33.626595 2610 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 00:35:33.631019 kubelet[2610]: I0813 00:35:33.628837 2610 kubelet.go:352] "Adding apiserver pod source" Aug 13 00:35:33.631019 kubelet[2610]: I0813 00:35:33.628860 2610 apiserver.go:42] "Waiting for node sync before 
watching apiserver pods" Aug 13 00:35:33.631261 kubelet[2610]: I0813 00:35:33.631171 2610 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Aug 13 00:35:33.631874 kubelet[2610]: I0813 00:35:33.631581 2610 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 13 00:35:33.634327 kubelet[2610]: I0813 00:35:33.634312 2610 watchdog_linux.go:99] "Systemd watchdog is not enabled" Aug 13 00:35:33.634415 kubelet[2610]: I0813 00:35:33.634405 2610 server.go:1287] "Started kubelet" Aug 13 00:35:33.638677 kubelet[2610]: I0813 00:35:33.638662 2610 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 00:35:33.642183 kubelet[2610]: E0813 00:35:33.642162 2610 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 13 00:35:33.645191 kubelet[2610]: I0813 00:35:33.645155 2610 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 00:35:33.646384 kubelet[2610]: I0813 00:35:33.646370 2610 server.go:479] "Adding debug handlers to kubelet server" Aug 13 00:35:33.647128 kubelet[2610]: I0813 00:35:33.645204 2610 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Aug 13 00:35:33.647249 kubelet[2610]: I0813 00:35:33.647217 2610 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 00:35:33.647451 kubelet[2610]: I0813 00:35:33.647438 2610 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 00:35:33.647637 kubelet[2610]: I0813 00:35:33.647622 2610 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 00:35:33.648578 kubelet[2610]: I0813 00:35:33.645191 2610 volume_manager.go:297] "Starting Kubelet Volume Manager" Aug 13 00:35:33.648578 kubelet[2610]: I0813 00:35:33.648027 2610 reconciler.go:26] "Reconciler: start to sync state" Aug 13 00:35:33.649653 kubelet[2610]: I0813 00:35:33.649566 2610 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 13 00:35:33.650661 kubelet[2610]: I0813 00:35:33.650247 2610 factory.go:221] Registration of the systemd container factory successfully Aug 13 00:35:33.650794 kubelet[2610]: I0813 00:35:33.650776 2610 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 00:35:33.651045 kubelet[2610]: I0813 00:35:33.651024 2610 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Aug 13 00:35:33.651089 kubelet[2610]: I0813 00:35:33.651051 2610 status_manager.go:227] "Starting to sync pod status with apiserver" Aug 13 00:35:33.651089 kubelet[2610]: I0813 00:35:33.651065 2610 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Aug 13 00:35:33.651089 kubelet[2610]: I0813 00:35:33.651071 2610 kubelet.go:2382] "Starting kubelet main sync loop" Aug 13 00:35:33.651157 kubelet[2610]: E0813 00:35:33.651110 2610 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 00:35:33.653722 kubelet[2610]: I0813 00:35:33.653707 2610 factory.go:221] Registration of the containerd container factory successfully Aug 13 00:35:33.694313 kubelet[2610]: I0813 00:35:33.694293 2610 cpu_manager.go:221] "Starting CPU manager" policy="none" Aug 13 00:35:33.694313 kubelet[2610]: I0813 00:35:33.694308 2610 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Aug 13 00:35:33.694436 kubelet[2610]: I0813 00:35:33.694323 2610 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:35:33.694457 kubelet[2610]: I0813 00:35:33.694443 2610 state_mem.go:88] "Updated default CPUSet" cpuSet="" Aug 13 00:35:33.694476 kubelet[2610]: I0813 00:35:33.694452 2610 state_mem.go:96] "Updated CPUSet assignments" assignments={} Aug 13 00:35:33.694476 kubelet[2610]: I0813 00:35:33.694469 2610 policy_none.go:49] "None policy: Start" Aug 13 00:35:33.694514 kubelet[2610]: I0813 00:35:33.694477 2610 memory_manager.go:186] "Starting memorymanager" policy="None" Aug 13 00:35:33.694514 kubelet[2610]: I0813 00:35:33.694487 2610 state_mem.go:35] "Initializing new in-memory state store" Aug 13 00:35:33.694604 kubelet[2610]: I0813 00:35:33.694566 2610 state_mem.go:75] "Updated machine memory state" Aug 13 00:35:33.698605 kubelet[2610]: I0813 00:35:33.698580 2610 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 13 00:35:33.698740 kubelet[2610]: I0813 00:35:33.698720 2610 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 00:35:33.698775 kubelet[2610]: I0813 00:35:33.698736 2610 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 00:35:33.702732 kubelet[2610]: I0813 00:35:33.702705 2610 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 00:35:33.704643 kubelet[2610]: E0813 00:35:33.704236 2610 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Aug 13 00:35:33.752613 kubelet[2610]: I0813 00:35:33.752257 2610 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-234-21-106" Aug 13 00:35:33.752613 kubelet[2610]: I0813 00:35:33.752297 2610 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-234-21-106" Aug 13 00:35:33.752613 kubelet[2610]: I0813 00:35:33.752470 2610 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-234-21-106" Aug 13 00:35:33.756671 kubelet[2610]: E0813 00:35:33.756642 2610 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-234-21-106\" already exists" pod="kube-system/kube-scheduler-172-234-21-106" Aug 13 00:35:33.757794 kubelet[2610]: E0813 00:35:33.757775 2610 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-172-234-21-106\" already exists" pod="kube-system/kube-controller-manager-172-234-21-106" Aug 13 00:35:33.757906 kubelet[2610]: E0813 00:35:33.757842 2610 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-234-21-106\" already exists" pod="kube-system/kube-apiserver-172-234-21-106" Aug 13 00:35:33.811067 kubelet[2610]: I0813 00:35:33.811043 2610 kubelet_node_status.go:75] "Attempting to register node" node="172-234-21-106" Aug 13 00:35:33.818892 kubelet[2610]: I0813 00:35:33.818867 2610 kubelet_node_status.go:124] "Node was previously registered" node="172-234-21-106" Aug 13 00:35:33.818933 kubelet[2610]: I0813 00:35:33.818908 2610 kubelet_node_status.go:78] "Successfully registered node" node="172-234-21-106" Aug 13 00:35:33.849930 kubelet[2610]: I0813 00:35:33.849911 2610 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/83cfa807fbcf74d70f707816b9e345df-ca-certs\") pod \"kube-apiserver-172-234-21-106\" (UID: \"83cfa807fbcf74d70f707816b9e345df\") " pod="kube-system/kube-apiserver-172-234-21-106" Aug 13 00:35:33.850040 kubelet[2610]: I0813 00:35:33.849939 2610 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/83cfa807fbcf74d70f707816b9e345df-k8s-certs\") pod \"kube-apiserver-172-234-21-106\" (UID: \"83cfa807fbcf74d70f707816b9e345df\") " pod="kube-system/kube-apiserver-172-234-21-106" Aug 13 00:35:33.850040 kubelet[2610]: I0813 00:35:33.849957 2610 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/83cfa807fbcf74d70f707816b9e345df-usr-share-ca-certificates\") pod \"kube-apiserver-172-234-21-106\" (UID: \"83cfa807fbcf74d70f707816b9e345df\") " pod="kube-system/kube-apiserver-172-234-21-106" Aug 13 00:35:33.850040 kubelet[2610]: I0813 00:35:33.849975 2610 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d0113a95571209fdb49d8c44cb5e74e9-flexvolume-dir\") pod \"kube-controller-manager-172-234-21-106\" (UID: \"d0113a95571209fdb49d8c44cb5e74e9\") " pod="kube-system/kube-controller-manager-172-234-21-106" Aug 13 00:35:33.850040 kubelet[2610]: I0813 00:35:33.849991 2610 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/4aa71c4abc9f96c1b4f90b7128033a12-kubeconfig\") pod \"kube-scheduler-172-234-21-106\" (UID: \"4aa71c4abc9f96c1b4f90b7128033a12\") " pod="kube-system/kube-scheduler-172-234-21-106" Aug 13 00:35:33.850040 kubelet[2610]: I0813 00:35:33.850008 2610 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d0113a95571209fdb49d8c44cb5e74e9-ca-certs\") pod \"kube-controller-manager-172-234-21-106\" (UID: \"d0113a95571209fdb49d8c44cb5e74e9\") " pod="kube-system/kube-controller-manager-172-234-21-106" Aug 13 00:35:33.850196 kubelet[2610]: I0813 00:35:33.850024 2610 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d0113a95571209fdb49d8c44cb5e74e9-k8s-certs\") pod \"kube-controller-manager-172-234-21-106\" (UID: \"d0113a95571209fdb49d8c44cb5e74e9\") " pod="kube-system/kube-controller-manager-172-234-21-106" Aug 13 00:35:33.850196 kubelet[2610]: I0813 00:35:33.850038 2610 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d0113a95571209fdb49d8c44cb5e74e9-kubeconfig\") pod \"kube-controller-manager-172-234-21-106\" (UID: \"d0113a95571209fdb49d8c44cb5e74e9\") " pod="kube-system/kube-controller-manager-172-234-21-106" Aug 13 00:35:33.850196 kubelet[2610]: I0813 00:35:33.850055 2610 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d0113a95571209fdb49d8c44cb5e74e9-usr-share-ca-certificates\") pod \"kube-controller-manager-172-234-21-106\" (UID: \"d0113a95571209fdb49d8c44cb5e74e9\") " pod="kube-system/kube-controller-manager-172-234-21-106" Aug 13 00:35:34.058866 kubelet[2610]: E0813 00:35:34.057636 2610 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 00:35:34.058866 kubelet[2610]: E0813 00:35:34.057956 2610 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 00:35:34.058866 kubelet[2610]: E0813 00:35:34.058106 2610 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 00:35:34.085270 sudo[2643]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Aug 13 00:35:34.085645 sudo[2643]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Aug 13 00:35:34.240650 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
Aug 13 00:35:34.562461 sudo[2643]: pam_unix(sudo:session): session closed for user root Aug 13 00:35:34.629270 kubelet[2610]: I0813 00:35:34.629208 2610 apiserver.go:52] "Watching apiserver" Aug 13 00:35:34.647280 kubelet[2610]: I0813 00:35:34.647234 2610 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Aug 13 00:35:34.683033 kubelet[2610]: I0813 00:35:34.682429 2610 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-234-21-106" Aug 13 00:35:34.683340 kubelet[2610]: E0813 00:35:34.683315 2610 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 00:35:34.683907 kubelet[2610]: I0813 00:35:34.683559 2610 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-234-21-106" Aug 13 00:35:34.689909 kubelet[2610]: E0813 00:35:34.689879 2610 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-234-21-106\" already exists" pod="kube-system/kube-scheduler-172-234-21-106" Aug 13 00:35:34.690114 kubelet[2610]: E0813 00:35:34.690101 2610 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 00:35:34.691584 kubelet[2610]: E0813 00:35:34.691560 2610 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-234-21-106\" already exists" pod="kube-system/kube-apiserver-172-234-21-106" Aug 13 00:35:34.692315 kubelet[2610]: E0813 00:35:34.692253 2610 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 00:35:34.716537 kubelet[2610]: I0813 00:35:34.716432 2610 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-172-234-21-106" podStartSLOduration=2.716423836 podStartE2EDuration="2.716423836s" podCreationTimestamp="2025-08-13 00:35:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:35:34.710710145 +0000 UTC m=+1.130193111" watchObservedRunningTime="2025-08-13 00:35:34.716423836 +0000 UTC m=+1.135906792" Aug 13 00:35:34.723529 kubelet[2610]: I0813 00:35:34.723403 2610 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-172-234-21-106" podStartSLOduration=2.72339514 podStartE2EDuration="2.72339514s" podCreationTimestamp="2025-08-13 00:35:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:35:34.717018047 +0000 UTC m=+1.136522703" watchObservedRunningTime="2025-08-13 00:35:34.72339514 +0000 UTC m=+1.142878106" Aug 13 00:35:34.723866 kubelet[2610]: I0813 00:35:34.723657 2610 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-172-234-21-106" podStartSLOduration=2.72364928 podStartE2EDuration="2.72364928s" podCreationTimestamp="2025-08-13 00:35:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:35:34.72336075 +0000 UTC m=+1.142843716" watchObservedRunningTime="2025-08-13 00:35:34.72364928 
+0000 UTC m=+1.143132246" Aug 13 00:35:35.685211 kubelet[2610]: E0813 00:35:35.684456 2610 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 00:35:35.685211 kubelet[2610]: E0813 00:35:35.684925 2610 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 00:35:35.685211 kubelet[2610]: E0813 00:35:35.685151 2610 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 00:35:35.858195 sudo[1704]: pam_unix(sudo:session): session closed for user root Aug 13 00:35:35.908191 sshd[1703]: Connection closed by 139.178.89.65 port 40638 Aug 13 00:35:35.909058 sshd-session[1701]: pam_unix(sshd:session): session closed for user core Aug 13 00:35:35.914617 systemd-logind[1462]: Session 7 logged out. Waiting for processes to exit. Aug 13 00:35:35.915126 systemd[1]: sshd@6-172.234.21.106:22-139.178.89.65:40638.service: Deactivated successfully. Aug 13 00:35:35.917768 systemd[1]: session-7.scope: Deactivated successfully. Aug 13 00:35:35.918061 systemd[1]: session-7.scope: Consumed 4.380s CPU time, 261.2M memory peak. Aug 13 00:35:35.919412 systemd-logind[1462]: Removed session 7. Aug 13 00:35:36.686197 kubelet[2610]: E0813 00:35:36.686123 2610 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 00:35:39.894609 kubelet[2610]: E0813 00:35:39.894254 2610 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 00:35:40.122143 kubelet[2610]: I0813 00:35:40.122003 2610 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Aug 13 00:35:40.122621 containerd[1479]: time="2025-08-13T00:35:40.122560046Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Aug 13 00:35:40.123617 kubelet[2610]: I0813 00:35:40.123343 2610 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Aug 13 00:35:40.692306 kubelet[2610]: E0813 00:35:40.692269 2610 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 00:35:40.814205 systemd[1]: Created slice kubepods-besteffort-pod46a2226d_3bc8_48d0_8639_3d3893002e44.slice - libcontainer container kubepods-besteffort-pod46a2226d_3bc8_48d0_8639_3d3893002e44.slice. Aug 13 00:35:40.829380 systemd[1]: Created slice kubepods-burstable-pod478772a7_5d49_49d8_8ca6_a2ff4e5373c1.slice - libcontainer container kubepods-burstable-pod478772a7_5d49_49d8_8ca6_a2ff4e5373c1.slice. 
Aug 13 00:35:40.890374 kubelet[2610]: I0813 00:35:40.890330 2610 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/46a2226d-3bc8-48d0-8639-3d3893002e44-xtables-lock\") pod \"kube-proxy-thk97\" (UID: \"46a2226d-3bc8-48d0-8639-3d3893002e44\") " pod="kube-system/kube-proxy-thk97" Aug 13 00:35:40.890643 kubelet[2610]: I0813 00:35:40.890626 2610 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bnw7\" (UniqueName: \"kubernetes.io/projected/478772a7-5d49-49d8-8ca6-a2ff4e5373c1-kube-api-access-5bnw7\") pod \"cilium-rn4ll\" (UID: \"478772a7-5d49-49d8-8ca6-a2ff4e5373c1\") " pod="kube-system/cilium-rn4ll" Aug 13 00:35:40.891106 kubelet[2610]: I0813 00:35:40.890692 2610 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/478772a7-5d49-49d8-8ca6-a2ff4e5373c1-cilium-run\") pod \"cilium-rn4ll\" (UID: \"478772a7-5d49-49d8-8ca6-a2ff4e5373c1\") " pod="kube-system/cilium-rn4ll" Aug 13 00:35:40.891106 kubelet[2610]: I0813 00:35:40.890711 2610 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/478772a7-5d49-49d8-8ca6-a2ff4e5373c1-xtables-lock\") pod \"cilium-rn4ll\" (UID: \"478772a7-5d49-49d8-8ca6-a2ff4e5373c1\") " pod="kube-system/cilium-rn4ll" Aug 13 00:35:40.891106 kubelet[2610]: I0813 00:35:40.890729 2610 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/478772a7-5d49-49d8-8ca6-a2ff4e5373c1-hostproc\") pod \"cilium-rn4ll\" (UID: \"478772a7-5d49-49d8-8ca6-a2ff4e5373c1\") " pod="kube-system/cilium-rn4ll" Aug 13 00:35:40.891106 kubelet[2610]: I0813 00:35:40.890743 2610 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/478772a7-5d49-49d8-8ca6-a2ff4e5373c1-etc-cni-netd\") pod \"cilium-rn4ll\" (UID: \"478772a7-5d49-49d8-8ca6-a2ff4e5373c1\") " pod="kube-system/cilium-rn4ll" Aug 13 00:35:40.891106 kubelet[2610]: I0813 00:35:40.890759 2610 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/478772a7-5d49-49d8-8ca6-a2ff4e5373c1-clustermesh-secrets\") pod \"cilium-rn4ll\" (UID: \"478772a7-5d49-49d8-8ca6-a2ff4e5373c1\") " pod="kube-system/cilium-rn4ll" Aug 13 00:35:40.891106 kubelet[2610]: I0813 00:35:40.890778 2610 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/46a2226d-3bc8-48d0-8639-3d3893002e44-kube-proxy\") pod \"kube-proxy-thk97\" (UID: \"46a2226d-3bc8-48d0-8639-3d3893002e44\") " pod="kube-system/kube-proxy-thk97" Aug 13 00:35:40.891277 kubelet[2610]: I0813 00:35:40.890794 2610 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/46a2226d-3bc8-48d0-8639-3d3893002e44-lib-modules\") pod \"kube-proxy-thk97\" (UID: \"46a2226d-3bc8-48d0-8639-3d3893002e44\") " pod="kube-system/kube-proxy-thk97" Aug 13 00:35:40.891277 kubelet[2610]: I0813 00:35:40.890833 2610 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/478772a7-5d49-49d8-8ca6-a2ff4e5373c1-host-proc-sys-kernel\") pod \"cilium-rn4ll\" (UID: \"478772a7-5d49-49d8-8ca6-a2ff4e5373c1\") " pod="kube-system/cilium-rn4ll" Aug 13 00:35:40.891277 kubelet[2610]: I0813 00:35:40.890849 2610 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/478772a7-5d49-49d8-8ca6-a2ff4e5373c1-hubble-tls\") pod \"cilium-rn4ll\" (UID: \"478772a7-5d49-49d8-8ca6-a2ff4e5373c1\") " pod="kube-system/cilium-rn4ll" Aug 13 00:35:40.891277 kubelet[2610]: I0813 00:35:40.890865 2610 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/478772a7-5d49-49d8-8ca6-a2ff4e5373c1-lib-modules\") pod \"cilium-rn4ll\" (UID: \"478772a7-5d49-49d8-8ca6-a2ff4e5373c1\") " pod="kube-system/cilium-rn4ll" Aug 13 00:35:40.891277 kubelet[2610]: I0813 00:35:40.890882 2610 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmzsq\" (UniqueName: \"kubernetes.io/projected/46a2226d-3bc8-48d0-8639-3d3893002e44-kube-api-access-tmzsq\") pod \"kube-proxy-thk97\" (UID: \"46a2226d-3bc8-48d0-8639-3d3893002e44\") " pod="kube-system/kube-proxy-thk97" Aug 13 00:35:40.891427 kubelet[2610]: I0813 00:35:40.890898 2610 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/478772a7-5d49-49d8-8ca6-a2ff4e5373c1-host-proc-sys-net\") pod \"cilium-rn4ll\" (UID: \"478772a7-5d49-49d8-8ca6-a2ff4e5373c1\") " pod="kube-system/cilium-rn4ll" Aug 13 00:35:40.891427 kubelet[2610]: I0813 00:35:40.890915 2610 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/478772a7-5d49-49d8-8ca6-a2ff4e5373c1-cilium-config-path\") pod \"cilium-rn4ll\" (UID: \"478772a7-5d49-49d8-8ca6-a2ff4e5373c1\") " pod="kube-system/cilium-rn4ll" Aug 13 00:35:40.891427 kubelet[2610]: I0813 00:35:40.890933 2610 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/478772a7-5d49-49d8-8ca6-a2ff4e5373c1-bpf-maps\") pod \"cilium-rn4ll\" (UID: \"478772a7-5d49-49d8-8ca6-a2ff4e5373c1\") " pod="kube-system/cilium-rn4ll" Aug 13 00:35:40.891427 kubelet[2610]: I0813 00:35:40.890949 2610 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/478772a7-5d49-49d8-8ca6-a2ff4e5373c1-cni-path\") pod \"cilium-rn4ll\" (UID: \"478772a7-5d49-49d8-8ca6-a2ff4e5373c1\") " pod="kube-system/cilium-rn4ll" Aug 13 00:35:40.891427 kubelet[2610]: I0813 00:35:40.890963 2610 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/478772a7-5d49-49d8-8ca6-a2ff4e5373c1-cilium-cgroup\") pod \"cilium-rn4ll\" (UID: \"478772a7-5d49-49d8-8ca6-a2ff4e5373c1\") " pod="kube-system/cilium-rn4ll" Aug 13 00:35:41.121690 kubelet[2610]: E0813 00:35:41.121656 2610 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 00:35:41.123933 containerd[1479]: time="2025-08-13T00:35:41.122506845Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-thk97,Uid:46a2226d-3bc8-48d0-8639-3d3893002e44,Namespace:kube-system,Attempt:0,}" Aug 13 00:35:41.136336 kubelet[2610]: E0813 00:35:41.134296 2610 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 00:35:41.136512 containerd[1479]: time="2025-08-13T00:35:41.134634269Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rn4ll,Uid:478772a7-5d49-49d8-8ca6-a2ff4e5373c1,Namespace:kube-system,Attempt:0,}" Aug 13 00:35:41.163014 containerd[1479]: time="2025-08-13T00:35:41.161343913Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:35:41.163014 containerd[1479]: time="2025-08-13T00:35:41.161397553Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:35:41.163014 containerd[1479]: time="2025-08-13T00:35:41.161434953Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:35:41.163014 containerd[1479]: time="2025-08-13T00:35:41.161543023Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:35:41.199958 systemd[1]: Started cri-containerd-28f49ceec51d590f3c768ffcfa51cf331a045241ca891ca47ca0e35a69ddcbf4.scope - libcontainer container 28f49ceec51d590f3c768ffcfa51cf331a045241ca891ca47ca0e35a69ddcbf4. Aug 13 00:35:41.203529 containerd[1479]: time="2025-08-13T00:35:41.201105242Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:35:41.203529 containerd[1479]: time="2025-08-13T00:35:41.202059114Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:35:41.203529 containerd[1479]: time="2025-08-13T00:35:41.202072254Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:35:41.203529 containerd[1479]: time="2025-08-13T00:35:41.202160274Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:35:41.241241 systemd[1]: Created slice kubepods-besteffort-pod77c4921b_346d_4f4e_9172_cbd21b4d587d.slice - libcontainer container kubepods-besteffort-pod77c4921b_346d_4f4e_9172_cbd21b4d587d.slice. Aug 13 00:35:41.256944 systemd[1]: Started cri-containerd-e616278a1e3701fdd7dde0606247c6c2ec72bf485196b9ac8356c7da85f1a3eb.scope - libcontainer container e616278a1e3701fdd7dde0606247c6c2ec72bf485196b9ac8356c7da85f1a3eb. 
Aug 13 00:35:41.282772 containerd[1479]: time="2025-08-13T00:35:41.282710846Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-thk97,Uid:46a2226d-3bc8-48d0-8639-3d3893002e44,Namespace:kube-system,Attempt:0,} returns sandbox id \"28f49ceec51d590f3c768ffcfa51cf331a045241ca891ca47ca0e35a69ddcbf4\"" Aug 13 00:35:41.284002 kubelet[2610]: E0813 00:35:41.283784 2610 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 00:35:41.286699 containerd[1479]: time="2025-08-13T00:35:41.286656123Z" level=info msg="CreateContainer within sandbox \"28f49ceec51d590f3c768ffcfa51cf331a045241ca891ca47ca0e35a69ddcbf4\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Aug 13 00:35:41.295036 kubelet[2610]: I0813 00:35:41.294956 2610 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2946c\" (UniqueName: \"kubernetes.io/projected/77c4921b-346d-4f4e-9172-cbd21b4d587d-kube-api-access-2946c\") pod \"cilium-operator-6c4d7847fc-628vf\" (UID: \"77c4921b-346d-4f4e-9172-cbd21b4d587d\") " pod="kube-system/cilium-operator-6c4d7847fc-628vf" Aug 13 00:35:41.295036 kubelet[2610]: I0813 00:35:41.295002 2610 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/77c4921b-346d-4f4e-9172-cbd21b4d587d-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-628vf\" (UID: \"77c4921b-346d-4f4e-9172-cbd21b4d587d\") " pod="kube-system/cilium-operator-6c4d7847fc-628vf" Aug 13 00:35:41.297369 containerd[1479]: time="2025-08-13T00:35:41.297153324Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rn4ll,Uid:478772a7-5d49-49d8-8ca6-a2ff4e5373c1,Namespace:kube-system,Attempt:0,} returns sandbox id \"e616278a1e3701fdd7dde0606247c6c2ec72bf485196b9ac8356c7da85f1a3eb\"" Aug 13 00:35:41.298339 kubelet[2610]: E0813 00:35:41.298321 2610 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 00:35:41.300447 containerd[1479]: time="2025-08-13T00:35:41.300252251Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Aug 13 00:35:41.304001 containerd[1479]: time="2025-08-13T00:35:41.303978668Z" level=info msg="CreateContainer within sandbox \"28f49ceec51d590f3c768ffcfa51cf331a045241ca891ca47ca0e35a69ddcbf4\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d5e4e10277b85eadef25c95528f68c87a5a8b665129bf3c10d9915d680976d81\"" Aug 13 00:35:41.305703 containerd[1479]: time="2025-08-13T00:35:41.304539979Z" level=info msg="StartContainer for \"d5e4e10277b85eadef25c95528f68c87a5a8b665129bf3c10d9915d680976d81\"" Aug 13 00:35:41.334937 systemd[1]: Started cri-containerd-d5e4e10277b85eadef25c95528f68c87a5a8b665129bf3c10d9915d680976d81.scope - libcontainer container d5e4e10277b85eadef25c95528f68c87a5a8b665129bf3c10d9915d680976d81. 
Aug 13 00:35:41.367562 containerd[1479]: time="2025-08-13T00:35:41.367439475Z" level=info msg="StartContainer for \"d5e4e10277b85eadef25c95528f68c87a5a8b665129bf3c10d9915d680976d81\" returns successfully" Aug 13 00:35:41.544754 kubelet[2610]: E0813 00:35:41.544031 2610 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 00:35:41.544899 containerd[1479]: time="2025-08-13T00:35:41.544371049Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-628vf,Uid:77c4921b-346d-4f4e-9172-cbd21b4d587d,Namespace:kube-system,Attempt:0,}" Aug 13 00:35:41.570472 containerd[1479]: time="2025-08-13T00:35:41.570200840Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:35:41.570472 containerd[1479]: time="2025-08-13T00:35:41.570252070Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:35:41.570472 containerd[1479]: time="2025-08-13T00:35:41.570372091Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:35:41.571394 containerd[1479]: time="2025-08-13T00:35:41.571275613Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:35:41.594966 systemd[1]: Started cri-containerd-b47050fd74853a639c63c00b6225e98a027d5d0b9eacd703fec4e25c9326452b.scope - libcontainer container b47050fd74853a639c63c00b6225e98a027d5d0b9eacd703fec4e25c9326452b. Aug 13 00:35:41.638275 containerd[1479]: time="2025-08-13T00:35:41.638180226Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-628vf,Uid:77c4921b-346d-4f4e-9172-cbd21b4d587d,Namespace:kube-system,Attempt:0,} returns sandbox id \"b47050fd74853a639c63c00b6225e98a027d5d0b9eacd703fec4e25c9326452b\"" Aug 13 00:35:41.641983 kubelet[2610]: E0813 00:35:41.638983 2610 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 00:35:41.696541 kubelet[2610]: E0813 00:35:41.696501 2610 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 00:35:41.706686 kubelet[2610]: I0813 00:35:41.706625 2610 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-thk97" podStartSLOduration=1.706611893 podStartE2EDuration="1.706611893s" podCreationTimestamp="2025-08-13 00:35:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:35:41.706561633 +0000 UTC m=+8.126044609" watchObservedRunningTime="2025-08-13 00:35:41.706611893 +0000 UTC m=+8.126094869" Aug 13 00:35:44.214735 kubelet[2610]: E0813 00:35:44.214604 2610 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 00:35:44.701549 kubelet[2610]: E0813 00:35:44.701528 2610 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 00:35:44.736282 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1534139219.mount: Deactivated successfully. Aug 13 00:35:45.526425 kubelet[2610]: E0813 00:35:45.526080 2610 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 00:35:46.152388 containerd[1479]: time="2025-08-13T00:35:46.152337211Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:35:46.153259 containerd[1479]: time="2025-08-13T00:35:46.153217252Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Aug 13 00:35:46.153855 containerd[1479]: time="2025-08-13T00:35:46.153797669Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:35:46.155212 containerd[1479]: time="2025-08-13T00:35:46.155182830Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 4.854887379s" Aug 13 00:35:46.155300 containerd[1479]: time="2025-08-13T00:35:46.155211549Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Aug 13 00:35:46.156835 containerd[1479]: time="2025-08-13T00:35:46.156681968Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Aug 13 00:35:46.158370 containerd[1479]: time="2025-08-13T00:35:46.158335843Z" level=info msg="CreateContainer within sandbox \"e616278a1e3701fdd7dde0606247c6c2ec72bf485196b9ac8356c7da85f1a3eb\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Aug 13 00:35:46.175255 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1471505746.mount: Deactivated successfully. Aug 13 00:35:46.176520 containerd[1479]: time="2025-08-13T00:35:46.176472177Z" level=info msg="CreateContainer within sandbox \"e616278a1e3701fdd7dde0606247c6c2ec72bf485196b9ac8356c7da85f1a3eb\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"b8a4a5b5fd000ae1e336d6035592374f9ce0e60ae5f840fd5bbdf6994d828d36\"" Aug 13 00:35:46.177371 containerd[1479]: time="2025-08-13T00:35:46.177244950Z" level=info msg="StartContainer for \"b8a4a5b5fd000ae1e336d6035592374f9ce0e60ae5f840fd5bbdf6994d828d36\"" Aug 13 00:35:46.206934 systemd[1]: Started cri-containerd-b8a4a5b5fd000ae1e336d6035592374f9ce0e60ae5f840fd5bbdf6994d828d36.scope - libcontainer container b8a4a5b5fd000ae1e336d6035592374f9ce0e60ae5f840fd5bbdf6994d828d36. 
Aug 13 00:35:46.239689 containerd[1479]: time="2025-08-13T00:35:46.239632371Z" level=info msg="StartContainer for \"b8a4a5b5fd000ae1e336d6035592374f9ce0e60ae5f840fd5bbdf6994d828d36\" returns successfully" Aug 13 00:35:46.282247 systemd[1]: cri-containerd-b8a4a5b5fd000ae1e336d6035592374f9ce0e60ae5f840fd5bbdf6994d828d36.scope: Deactivated successfully. Aug 13 00:35:46.283236 systemd[1]: cri-containerd-b8a4a5b5fd000ae1e336d6035592374f9ce0e60ae5f840fd5bbdf6994d828d36.scope: Consumed 27ms CPU time, 6.6M memory peak, 2M written to disk. Aug 13 00:35:46.364948 containerd[1479]: time="2025-08-13T00:35:46.364762344Z" level=info msg="shim disconnected" id=b8a4a5b5fd000ae1e336d6035592374f9ce0e60ae5f840fd5bbdf6994d828d36 namespace=k8s.io Aug 13 00:35:46.364948 containerd[1479]: time="2025-08-13T00:35:46.364856753Z" level=warning msg="cleaning up after shim disconnected" id=b8a4a5b5fd000ae1e336d6035592374f9ce0e60ae5f840fd5bbdf6994d828d36 namespace=k8s.io Aug 13 00:35:46.364948 containerd[1479]: time="2025-08-13T00:35:46.364866713Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 00:35:46.705119 kubelet[2610]: E0813 00:35:46.704725 2610 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 00:35:46.707721 containerd[1479]: time="2025-08-13T00:35:46.707672249Z" level=info msg="CreateContainer within sandbox \"e616278a1e3701fdd7dde0606247c6c2ec72bf485196b9ac8356c7da85f1a3eb\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Aug 13 00:35:46.719488 containerd[1479]: time="2025-08-13T00:35:46.719388949Z" level=info msg="CreateContainer within sandbox \"e616278a1e3701fdd7dde0606247c6c2ec72bf485196b9ac8356c7da85f1a3eb\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"f49b0c6b69a5d7891d8c7d948b1b9617caf1442cf1edbf5f4bc33a4c3a88bc45\"" Aug 13 00:35:46.719992 containerd[1479]: time="2025-08-13T00:35:46.719961997Z" level=info msg="StartContainer for \"f49b0c6b69a5d7891d8c7d948b1b9617caf1442cf1edbf5f4bc33a4c3a88bc45\"" Aug 13 00:35:46.754109 systemd[1]: Started cri-containerd-f49b0c6b69a5d7891d8c7d948b1b9617caf1442cf1edbf5f4bc33a4c3a88bc45.scope - libcontainer container f49b0c6b69a5d7891d8c7d948b1b9617caf1442cf1edbf5f4bc33a4c3a88bc45. Aug 13 00:35:46.789225 containerd[1479]: time="2025-08-13T00:35:46.789193922Z" level=info msg="StartContainer for \"f49b0c6b69a5d7891d8c7d948b1b9617caf1442cf1edbf5f4bc33a4c3a88bc45\" returns successfully" Aug 13 00:35:46.804844 systemd[1]: systemd-sysctl.service: Deactivated successfully. Aug 13 00:35:46.805524 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Aug 13 00:35:46.805873 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Aug 13 00:35:46.814126 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 13 00:35:46.814340 systemd[1]: cri-containerd-f49b0c6b69a5d7891d8c7d948b1b9617caf1442cf1edbf5f4bc33a4c3a88bc45.scope: Deactivated successfully. Aug 13 00:35:46.833573 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Aug 13 00:35:46.843330 containerd[1479]: time="2025-08-13T00:35:46.843148632Z" level=info msg="shim disconnected" id=f49b0c6b69a5d7891d8c7d948b1b9617caf1442cf1edbf5f4bc33a4c3a88bc45 namespace=k8s.io Aug 13 00:35:46.843330 containerd[1479]: time="2025-08-13T00:35:46.843199071Z" level=warning msg="cleaning up after shim disconnected" id=f49b0c6b69a5d7891d8c7d948b1b9617caf1442cf1edbf5f4bc33a4c3a88bc45 namespace=k8s.io Aug 13 00:35:46.843330 containerd[1479]: time="2025-08-13T00:35:46.843206941Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 00:35:47.171392 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b8a4a5b5fd000ae1e336d6035592374f9ce0e60ae5f840fd5bbdf6994d828d36-rootfs.mount: Deactivated successfully. Aug 13 00:35:47.707165 kubelet[2610]: E0813 00:35:47.707097 2610 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 00:35:47.712608 containerd[1479]: time="2025-08-13T00:35:47.711406587Z" level=info msg="CreateContainer within sandbox \"e616278a1e3701fdd7dde0606247c6c2ec72bf485196b9ac8356c7da85f1a3eb\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Aug 13 00:35:47.734121 containerd[1479]: time="2025-08-13T00:35:47.734076867Z" level=info msg="CreateContainer within sandbox \"e616278a1e3701fdd7dde0606247c6c2ec72bf485196b9ac8356c7da85f1a3eb\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"a9f19125b7185d4bf1b5f46a5abe2a3acd93d00418442a7ec04025683514fd4c\"" Aug 13 00:35:47.735902 containerd[1479]: time="2025-08-13T00:35:47.734672436Z" level=info msg="StartContainer for \"a9f19125b7185d4bf1b5f46a5abe2a3acd93d00418442a7ec04025683514fd4c\"" Aug 13 00:35:47.769979 systemd[1]: Started cri-containerd-a9f19125b7185d4bf1b5f46a5abe2a3acd93d00418442a7ec04025683514fd4c.scope - libcontainer container a9f19125b7185d4bf1b5f46a5abe2a3acd93d00418442a7ec04025683514fd4c. Aug 13 00:35:47.812742 containerd[1479]: time="2025-08-13T00:35:47.812612788Z" level=info msg="StartContainer for \"a9f19125b7185d4bf1b5f46a5abe2a3acd93d00418442a7ec04025683514fd4c\" returns successfully" Aug 13 00:35:47.814933 systemd[1]: cri-containerd-a9f19125b7185d4bf1b5f46a5abe2a3acd93d00418442a7ec04025683514fd4c.scope: Deactivated successfully. Aug 13 00:35:47.842505 containerd[1479]: time="2025-08-13T00:35:47.842404587Z" level=info msg="shim disconnected" id=a9f19125b7185d4bf1b5f46a5abe2a3acd93d00418442a7ec04025683514fd4c namespace=k8s.io Aug 13 00:35:47.842505 containerd[1479]: time="2025-08-13T00:35:47.842495045Z" level=warning msg="cleaning up after shim disconnected" id=a9f19125b7185d4bf1b5f46a5abe2a3acd93d00418442a7ec04025683514fd4c namespace=k8s.io Aug 13 00:35:47.842505 containerd[1479]: time="2025-08-13T00:35:47.842506015Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 00:35:48.172589 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a9f19125b7185d4bf1b5f46a5abe2a3acd93d00418442a7ec04025683514fd4c-rootfs.mount: Deactivated successfully. 
Aug 13 00:35:48.383651 containerd[1479]: time="2025-08-13T00:35:48.383605807Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:35:48.384304 containerd[1479]: time="2025-08-13T00:35:48.384270365Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Aug 13 00:35:48.384801 containerd[1479]: time="2025-08-13T00:35:48.384750915Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:35:48.386114 containerd[1479]: time="2025-08-13T00:35:48.386023192Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.229313815s" Aug 13 00:35:48.386114 containerd[1479]: time="2025-08-13T00:35:48.386051272Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Aug 13 00:35:48.389258 containerd[1479]: time="2025-08-13T00:35:48.389228942Z" level=info msg="CreateContainer within sandbox \"b47050fd74853a639c63c00b6225e98a027d5d0b9eacd703fec4e25c9326452b\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Aug 13 00:35:48.401842 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1469381217.mount: Deactivated successfully. Aug 13 00:35:48.404707 containerd[1479]: time="2025-08-13T00:35:48.404625258Z" level=info msg="CreateContainer within sandbox \"b47050fd74853a639c63c00b6225e98a027d5d0b9eacd703fec4e25c9326452b\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"e72f06b25ae00074344af6753fb0423253667a0015e52f35d4050f8b60cbe65d\"" Aug 13 00:35:48.405583 containerd[1479]: time="2025-08-13T00:35:48.405255516Z" level=info msg="StartContainer for \"e72f06b25ae00074344af6753fb0423253667a0015e52f35d4050f8b60cbe65d\"" Aug 13 00:35:48.437933 systemd[1]: Started cri-containerd-e72f06b25ae00074344af6753fb0423253667a0015e52f35d4050f8b60cbe65d.scope - libcontainer container e72f06b25ae00074344af6753fb0423253667a0015e52f35d4050f8b60cbe65d. Aug 13 00:35:48.466325 containerd[1479]: time="2025-08-13T00:35:48.466276289Z" level=info msg="StartContainer for \"e72f06b25ae00074344af6753fb0423253667a0015e52f35d4050f8b60cbe65d\" returns successfully" Aug 13 00:35:48.524839 update_engine[1463]: I20250813 00:35:48.524277 1463 update_attempter.cc:509] Updating boot flags... 
Aug 13 00:35:48.591888 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 45 scanned by (udev-worker) (3246) Aug 13 00:35:48.714109 kubelet[2610]: E0813 00:35:48.713716 2610 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 00:35:48.720000 kubelet[2610]: E0813 00:35:48.719350 2610 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 00:35:48.725642 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 45 scanned by (udev-worker) (3247) Aug 13 00:35:48.726547 containerd[1479]: time="2025-08-13T00:35:48.726504579Z" level=info msg="CreateContainer within sandbox \"e616278a1e3701fdd7dde0606247c6c2ec72bf485196b9ac8356c7da85f1a3eb\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Aug 13 00:35:48.751831 containerd[1479]: time="2025-08-13T00:35:48.750126172Z" level=info msg="CreateContainer within sandbox \"e616278a1e3701fdd7dde0606247c6c2ec72bf485196b9ac8356c7da85f1a3eb\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"4f6b99724028ba307eeccb07b975ce209ee774cf105307493406930a5e80543c\"" Aug 13 00:35:48.764740 containerd[1479]: time="2025-08-13T00:35:48.764698863Z" level=info msg="StartContainer for \"4f6b99724028ba307eeccb07b975ce209ee774cf105307493406930a5e80543c\"" Aug 13 00:35:48.846549 kubelet[2610]: I0813 00:35:48.846476 2610 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-628vf" podStartSLOduration=1.101657948 podStartE2EDuration="7.845788333s" podCreationTimestamp="2025-08-13 00:35:41 +0000 UTC" firstStartedPulling="2025-08-13 00:35:41.642526975 +0000 UTC m=+8.062009941" lastFinishedPulling="2025-08-13 00:35:48.38665736 +0000 UTC m=+14.806140326" observedRunningTime="2025-08-13 00:35:48.787255115 +0000 UTC m=+15.206738081" watchObservedRunningTime="2025-08-13 00:35:48.845788333 +0000 UTC m=+15.265271299" Aug 13 00:35:48.865632 systemd[1]: Started cri-containerd-4f6b99724028ba307eeccb07b975ce209ee774cf105307493406930a5e80543c.scope - libcontainer container 4f6b99724028ba307eeccb07b975ce209ee774cf105307493406930a5e80543c. Aug 13 00:35:48.879790 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 45 scanned by (udev-worker) (3247) Aug 13 00:35:48.967099 systemd[1]: cri-containerd-4f6b99724028ba307eeccb07b975ce209ee774cf105307493406930a5e80543c.scope: Deactivated successfully. 
Aug 13 00:35:48.972012 containerd[1479]: time="2025-08-13T00:35:48.971955442Z" level=info msg="StartContainer for \"4f6b99724028ba307eeccb07b975ce209ee774cf105307493406930a5e80543c\" returns successfully" Aug 13 00:35:49.045905 containerd[1479]: time="2025-08-13T00:35:49.045838873Z" level=info msg="shim disconnected" id=4f6b99724028ba307eeccb07b975ce209ee774cf105307493406930a5e80543c namespace=k8s.io Aug 13 00:35:49.046330 containerd[1479]: time="2025-08-13T00:35:49.046136958Z" level=warning msg="cleaning up after shim disconnected" id=4f6b99724028ba307eeccb07b975ce209ee774cf105307493406930a5e80543c namespace=k8s.io Aug 13 00:35:49.046330 containerd[1479]: time="2025-08-13T00:35:49.046152908Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 00:35:49.173986 systemd[1]: run-containerd-runc-k8s.io-e72f06b25ae00074344af6753fb0423253667a0015e52f35d4050f8b60cbe65d-runc.9It4nN.mount: Deactivated successfully. Aug 13 00:35:49.727424 kubelet[2610]: E0813 00:35:49.727356 2610 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 00:35:49.728156 kubelet[2610]: E0813 00:35:49.728035 2610 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 00:35:49.732759 containerd[1479]: time="2025-08-13T00:35:49.732697687Z" level=info msg="CreateContainer within sandbox \"e616278a1e3701fdd7dde0606247c6c2ec72bf485196b9ac8356c7da85f1a3eb\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Aug 13 00:35:49.752879 containerd[1479]: time="2025-08-13T00:35:49.752642634Z" level=info msg="CreateContainer within sandbox \"e616278a1e3701fdd7dde0606247c6c2ec72bf485196b9ac8356c7da85f1a3eb\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"1a6c168293e43b6c34ad70146802b2a9187bbd4c824bee53163f2fc5abcfce65\"" Aug 13 00:35:49.754962 containerd[1479]: time="2025-08-13T00:35:49.754929235Z" level=info msg="StartContainer for \"1a6c168293e43b6c34ad70146802b2a9187bbd4c824bee53163f2fc5abcfce65\"" Aug 13 00:35:49.800961 systemd[1]: Started cri-containerd-1a6c168293e43b6c34ad70146802b2a9187bbd4c824bee53163f2fc5abcfce65.scope - libcontainer container 1a6c168293e43b6c34ad70146802b2a9187bbd4c824bee53163f2fc5abcfce65. Aug 13 00:35:49.841676 containerd[1479]: time="2025-08-13T00:35:49.841606594Z" level=info msg="StartContainer for \"1a6c168293e43b6c34ad70146802b2a9187bbd4c824bee53163f2fc5abcfce65\" returns successfully" Aug 13 00:35:50.002180 kubelet[2610]: I0813 00:35:50.002014 2610 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Aug 13 00:35:50.058459 systemd[1]: Created slice kubepods-burstable-pod92d40f0a_2cbd_4952_b95a_62cf2f643256.slice - libcontainer container kubepods-burstable-pod92d40f0a_2cbd_4952_b95a_62cf2f643256.slice. 
Aug 13 00:35:50.060698 kubelet[2610]: I0813 00:35:50.059515 2610 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/92d40f0a-2cbd-4952-b95a-62cf2f643256-config-volume\") pod \"coredns-668d6bf9bc-qvgll\" (UID: \"92d40f0a-2cbd-4952-b95a-62cf2f643256\") " pod="kube-system/coredns-668d6bf9bc-qvgll" Aug 13 00:35:50.060698 kubelet[2610]: I0813 00:35:50.059583 2610 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtk7w\" (UniqueName: \"kubernetes.io/projected/92d40f0a-2cbd-4952-b95a-62cf2f643256-kube-api-access-rtk7w\") pod \"coredns-668d6bf9bc-qvgll\" (UID: \"92d40f0a-2cbd-4952-b95a-62cf2f643256\") " pod="kube-system/coredns-668d6bf9bc-qvgll" Aug 13 00:35:50.069844 systemd[1]: Created slice kubepods-burstable-pod66d05e9b_5741_4d57_8eb5_dc4f12e8ac9e.slice - libcontainer container kubepods-burstable-pod66d05e9b_5741_4d57_8eb5_dc4f12e8ac9e.slice. Aug 13 00:35:50.160115 kubelet[2610]: I0813 00:35:50.160055 2610 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6ds7\" (UniqueName: \"kubernetes.io/projected/66d05e9b-5741-4d57-8eb5-dc4f12e8ac9e-kube-api-access-f6ds7\") pod \"coredns-668d6bf9bc-hmcbw\" (UID: \"66d05e9b-5741-4d57-8eb5-dc4f12e8ac9e\") " pod="kube-system/coredns-668d6bf9bc-hmcbw" Aug 13 00:35:50.161100 kubelet[2610]: I0813 00:35:50.160375 2610 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/66d05e9b-5741-4d57-8eb5-dc4f12e8ac9e-config-volume\") pod \"coredns-668d6bf9bc-hmcbw\" (UID: \"66d05e9b-5741-4d57-8eb5-dc4f12e8ac9e\") " pod="kube-system/coredns-668d6bf9bc-hmcbw" Aug 13 00:35:50.365952 kubelet[2610]: E0813 00:35:50.365743 2610 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 00:35:50.368321 containerd[1479]: time="2025-08-13T00:35:50.368246095Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qvgll,Uid:92d40f0a-2cbd-4952-b95a-62cf2f643256,Namespace:kube-system,Attempt:0,}" Aug 13 00:35:50.375433 kubelet[2610]: E0813 00:35:50.375382 2610 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 00:35:50.378063 containerd[1479]: time="2025-08-13T00:35:50.378007028Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-hmcbw,Uid:66d05e9b-5741-4d57-8eb5-dc4f12e8ac9e,Namespace:kube-system,Attempt:0,}" Aug 13 00:35:50.740410 kubelet[2610]: E0813 00:35:50.740330 2610 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 00:35:50.755297 kubelet[2610]: I0813 00:35:50.754969 2610 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-rn4ll" podStartSLOduration=5.898339667 podStartE2EDuration="10.754945676s" podCreationTimestamp="2025-08-13 00:35:40 +0000 UTC" firstStartedPulling="2025-08-13 00:35:41.299581479 +0000 UTC m=+7.719064455" lastFinishedPulling="2025-08-13 00:35:46.156187498 +0000 UTC m=+12.575670464" observedRunningTime="2025-08-13 00:35:50.753664596 
+0000 UTC m=+17.173147572" watchObservedRunningTime="2025-08-13 00:35:50.754945676 +0000 UTC m=+17.174428642" Aug 13 00:35:51.741184 kubelet[2610]: E0813 00:35:51.741133 2610 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 00:35:52.276153 systemd-networkd[1391]: cilium_host: Link UP Aug 13 00:35:52.276346 systemd-networkd[1391]: cilium_net: Link UP Aug 13 00:35:52.276591 systemd-networkd[1391]: cilium_net: Gained carrier Aug 13 00:35:52.276779 systemd-networkd[1391]: cilium_host: Gained carrier Aug 13 00:35:52.276974 systemd-networkd[1391]: cilium_net: Gained IPv6LL Aug 13 00:35:52.277142 systemd-networkd[1391]: cilium_host: Gained IPv6LL Aug 13 00:35:52.394871 systemd-networkd[1391]: cilium_vxlan: Link UP Aug 13 00:35:52.394883 systemd-networkd[1391]: cilium_vxlan: Gained carrier Aug 13 00:35:52.621834 kernel: NET: Registered PF_ALG protocol family Aug 13 00:35:52.747256 kubelet[2610]: E0813 00:35:52.747177 2610 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 00:35:53.306692 systemd-networkd[1391]: lxc_health: Link UP Aug 13 00:35:53.328619 systemd-networkd[1391]: lxc_health: Gained carrier Aug 13 00:35:53.449215 kernel: eth0: renamed from tmp76027 Aug 13 00:35:53.456980 systemd-networkd[1391]: lxc4c0fda326e1d: Link UP Aug 13 00:35:53.464866 kernel: eth0: renamed from tmpc30c2 Aug 13 00:35:53.474498 systemd-networkd[1391]: lxce956f66aea81: Link UP Aug 13 00:35:53.477459 systemd-networkd[1391]: lxc4c0fda326e1d: Gained carrier Aug 13 00:35:53.480538 systemd-networkd[1391]: lxce956f66aea81: Gained carrier Aug 13 00:35:53.782380 kubelet[2610]: I0813 00:35:53.782316 2610 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 00:35:53.782380 kubelet[2610]: I0813 00:35:53.782379 2610 container_gc.go:86] "Attempting to delete unused containers" Aug 13 00:35:53.786974 kubelet[2610]: I0813 00:35:53.786685 2610 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 00:35:53.801545 kubelet[2610]: I0813 00:35:53.801505 2610 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 00:35:53.801660 kubelet[2610]: I0813 00:35:53.801624 2610 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/coredns-668d6bf9bc-qvgll","kube-system/coredns-668d6bf9bc-hmcbw","kube-system/cilium-operator-6c4d7847fc-628vf","kube-system/kube-controller-manager-172-234-21-106","kube-system/kube-proxy-thk97","kube-system/kube-apiserver-172-234-21-106","kube-system/cilium-rn4ll","kube-system/kube-scheduler-172-234-21-106"] Aug 13 00:35:53.801693 kubelet[2610]: E0813 00:35:53.801682 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-qvgll" Aug 13 00:35:53.801716 kubelet[2610]: E0813 00:35:53.801696 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-hmcbw" Aug 13 00:35:53.801716 kubelet[2610]: E0813 00:35:53.801708 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-628vf" Aug 13 00:35:53.801762 kubelet[2610]: E0813 00:35:53.801719 2610 eviction_manager.go:609] "Eviction manager: cannot evict a 
critical pod" pod="kube-system/kube-controller-manager-172-234-21-106" Aug 13 00:35:53.801762 kubelet[2610]: E0813 00:35:53.801729 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-thk97" Aug 13 00:35:53.801762 kubelet[2610]: E0813 00:35:53.801738 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-21-106" Aug 13 00:35:53.801762 kubelet[2610]: E0813 00:35:53.801747 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-rn4ll" Aug 13 00:35:53.801762 kubelet[2610]: E0813 00:35:53.801755 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-21-106" Aug 13 00:35:53.801762 kubelet[2610]: I0813 00:35:53.801765 2610 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 00:35:54.295959 systemd-networkd[1391]: cilium_vxlan: Gained IPv6LL Aug 13 00:35:54.684278 systemd-networkd[1391]: lxce956f66aea81: Gained IPv6LL Aug 13 00:35:55.000429 systemd-networkd[1391]: lxc_health: Gained IPv6LL Aug 13 00:35:55.064881 systemd-networkd[1391]: lxc4c0fda326e1d: Gained IPv6LL Aug 13 00:35:55.144953 kubelet[2610]: E0813 00:35:55.144905 2610 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 00:35:56.482864 containerd[1479]: time="2025-08-13T00:35:56.482675540Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:35:56.482864 containerd[1479]: time="2025-08-13T00:35:56.482749729Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:35:56.482864 containerd[1479]: time="2025-08-13T00:35:56.482772969Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:35:56.485880 containerd[1479]: time="2025-08-13T00:35:56.484986097Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:35:56.494095 containerd[1479]: time="2025-08-13T00:35:56.494014674Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:35:56.495172 containerd[1479]: time="2025-08-13T00:35:56.494161603Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:35:56.495172 containerd[1479]: time="2025-08-13T00:35:56.494198092Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:35:56.495737 containerd[1479]: time="2025-08-13T00:35:56.495351360Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:35:56.531968 systemd[1]: Started cri-containerd-76027291e26860d3723984f94387104b856343a7f5f6f781e8622055a5565e83.scope - libcontainer container 76027291e26860d3723984f94387104b856343a7f5f6f781e8622055a5565e83. 
Aug 13 00:35:56.543980 systemd[1]: Started cri-containerd-c30c24be6154c42d5c8727eb15c9f9e59d678b7c33fd34006d50379cb08daa9c.scope - libcontainer container c30c24be6154c42d5c8727eb15c9f9e59d678b7c33fd34006d50379cb08daa9c. Aug 13 00:35:56.589391 containerd[1479]: time="2025-08-13T00:35:56.589274250Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qvgll,Uid:92d40f0a-2cbd-4952-b95a-62cf2f643256,Namespace:kube-system,Attempt:0,} returns sandbox id \"c30c24be6154c42d5c8727eb15c9f9e59d678b7c33fd34006d50379cb08daa9c\"" Aug 13 00:35:56.590937 kubelet[2610]: E0813 00:35:56.590433 2610 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 00:35:56.594745 containerd[1479]: time="2025-08-13T00:35:56.594567006Z" level=info msg="CreateContainer within sandbox \"c30c24be6154c42d5c8727eb15c9f9e59d678b7c33fd34006d50379cb08daa9c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 00:35:56.617494 containerd[1479]: time="2025-08-13T00:35:56.617462952Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-hmcbw,Uid:66d05e9b-5741-4d57-8eb5-dc4f12e8ac9e,Namespace:kube-system,Attempt:0,} returns sandbox id \"76027291e26860d3723984f94387104b856343a7f5f6f781e8622055a5565e83\"" Aug 13 00:35:56.617913 containerd[1479]: time="2025-08-13T00:35:56.617841178Z" level=info msg="CreateContainer within sandbox \"c30c24be6154c42d5c8727eb15c9f9e59d678b7c33fd34006d50379cb08daa9c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2ba1a83245dd6983bf2e5165b04846c735603346aebd66b2ca0afef01eb9e05b\"" Aug 13 00:35:56.619046 kubelet[2610]: E0813 00:35:56.619006 2610 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 00:35:56.619838 containerd[1479]: time="2025-08-13T00:35:56.619801298Z" level=info msg="StartContainer for \"2ba1a83245dd6983bf2e5165b04846c735603346aebd66b2ca0afef01eb9e05b\"" Aug 13 00:35:56.622222 containerd[1479]: time="2025-08-13T00:35:56.622153144Z" level=info msg="CreateContainer within sandbox \"76027291e26860d3723984f94387104b856343a7f5f6f781e8622055a5565e83\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 00:35:56.636784 containerd[1479]: time="2025-08-13T00:35:56.636713315Z" level=info msg="CreateContainer within sandbox \"76027291e26860d3723984f94387104b856343a7f5f6f781e8622055a5565e83\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"47bdd6ee8fbd9dce40547ae5e40f331955dac2bf514e4a19aca162a3a931bf03\"" Aug 13 00:35:56.638575 containerd[1479]: time="2025-08-13T00:35:56.637912643Z" level=info msg="StartContainer for \"47bdd6ee8fbd9dce40547ae5e40f331955dac2bf514e4a19aca162a3a931bf03\"" Aug 13 00:35:56.667022 systemd[1]: Started cri-containerd-2ba1a83245dd6983bf2e5165b04846c735603346aebd66b2ca0afef01eb9e05b.scope - libcontainer container 2ba1a83245dd6983bf2e5165b04846c735603346aebd66b2ca0afef01eb9e05b. Aug 13 00:35:56.692782 systemd[1]: Started cri-containerd-47bdd6ee8fbd9dce40547ae5e40f331955dac2bf514e4a19aca162a3a931bf03.scope - libcontainer container 47bdd6ee8fbd9dce40547ae5e40f331955dac2bf514e4a19aca162a3a931bf03. 
Aug 13 00:35:56.708138 containerd[1479]: time="2025-08-13T00:35:56.708071846Z" level=info msg="StartContainer for \"2ba1a83245dd6983bf2e5165b04846c735603346aebd66b2ca0afef01eb9e05b\" returns successfully" Aug 13 00:35:56.735894 containerd[1479]: time="2025-08-13T00:35:56.735152629Z" level=info msg="StartContainer for \"47bdd6ee8fbd9dce40547ae5e40f331955dac2bf514e4a19aca162a3a931bf03\" returns successfully" Aug 13 00:35:56.755788 kubelet[2610]: E0813 00:35:56.755262 2610 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 00:35:56.757866 kubelet[2610]: E0813 00:35:56.757851 2610 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 00:35:56.798470 kubelet[2610]: I0813 00:35:56.798413 2610 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-qvgll" podStartSLOduration=15.798391752 podStartE2EDuration="15.798391752s" podCreationTimestamp="2025-08-13 00:35:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:35:56.798024816 +0000 UTC m=+23.217507772" watchObservedRunningTime="2025-08-13 00:35:56.798391752 +0000 UTC m=+23.217874718" Aug 13 00:35:56.798571 kubelet[2610]: I0813 00:35:56.798501 2610 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-hmcbw" podStartSLOduration=15.798492911 podStartE2EDuration="15.798492911s" podCreationTimestamp="2025-08-13 00:35:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:35:56.777513115 +0000 UTC m=+23.196996081" watchObservedRunningTime="2025-08-13 00:35:56.798492911 +0000 UTC m=+23.217975877" Aug 13 00:35:57.759244 kubelet[2610]: E0813 00:35:57.759209 2610 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 00:35:57.760178 kubelet[2610]: E0813 00:35:57.759694 2610 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 00:35:58.761451 kubelet[2610]: E0813 00:35:58.761410 2610 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 00:35:58.762734 kubelet[2610]: E0813 00:35:58.761565 2610 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 00:36:01.938186 kubelet[2610]: I0813 00:36:01.937750 2610 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 00:36:01.938823 kubelet[2610]: E0813 00:36:01.938335 2610 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 00:36:02.768316 kubelet[2610]: E0813 00:36:02.768262 2610 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 00:36:03.821785 kubelet[2610]: I0813 00:36:03.821749 2610 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 00:36:03.821785 kubelet[2610]: I0813 00:36:03.821792 2610 container_gc.go:86] "Attempting to delete unused containers" Aug 13 00:36:03.824311 kubelet[2610]: I0813 00:36:03.823902 2610 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 00:36:03.840262 kubelet[2610]: I0813 00:36:03.840229 2610 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 00:36:03.840699 kubelet[2610]: I0813 00:36:03.840520 2610 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-628vf","kube-system/coredns-668d6bf9bc-qvgll","kube-system/coredns-668d6bf9bc-hmcbw","kube-system/cilium-rn4ll","kube-system/kube-controller-manager-172-234-21-106","kube-system/kube-proxy-thk97","kube-system/kube-apiserver-172-234-21-106","kube-system/kube-scheduler-172-234-21-106"] Aug 13 00:36:03.840699 kubelet[2610]: E0813 00:36:03.840580 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-628vf" Aug 13 00:36:03.840699 kubelet[2610]: E0813 00:36:03.840594 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-qvgll" Aug 13 00:36:03.840699 kubelet[2610]: E0813 00:36:03.840626 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-hmcbw" Aug 13 00:36:03.840699 kubelet[2610]: E0813 00:36:03.840641 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-rn4ll" Aug 13 00:36:03.840699 kubelet[2610]: E0813 00:36:03.840651 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-21-106" Aug 13 00:36:03.840699 kubelet[2610]: E0813 00:36:03.840661 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-thk97" Aug 13 00:36:03.840699 kubelet[2610]: E0813 00:36:03.840674 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-21-106" Aug 13 00:36:03.841170 kubelet[2610]: E0813 00:36:03.840684 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-21-106" Aug 13 00:36:03.841170 kubelet[2610]: I0813 00:36:03.840962 2610 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 00:36:13.857970 kubelet[2610]: I0813 00:36:13.857896 2610 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 00:36:13.857970 kubelet[2610]: I0813 00:36:13.857966 2610 container_gc.go:86] "Attempting to delete unused containers" Aug 13 00:36:13.861105 kubelet[2610]: I0813 00:36:13.860700 2610 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 00:36:13.874149 kubelet[2610]: I0813 00:36:13.874112 2610 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 00:36:13.874359 kubelet[2610]: I0813 00:36:13.874265 2610 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" 
pods=["kube-system/cilium-operator-6c4d7847fc-628vf","kube-system/coredns-668d6bf9bc-qvgll","kube-system/coredns-668d6bf9bc-hmcbw","kube-system/cilium-rn4ll","kube-system/kube-controller-manager-172-234-21-106","kube-system/kube-proxy-thk97","kube-system/kube-apiserver-172-234-21-106","kube-system/kube-scheduler-172-234-21-106"] Aug 13 00:36:13.874359 kubelet[2610]: E0813 00:36:13.874302 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-628vf" Aug 13 00:36:13.874359 kubelet[2610]: E0813 00:36:13.874315 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-qvgll" Aug 13 00:36:13.874359 kubelet[2610]: E0813 00:36:13.874324 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-hmcbw" Aug 13 00:36:13.874359 kubelet[2610]: E0813 00:36:13.874336 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-rn4ll" Aug 13 00:36:13.874359 kubelet[2610]: E0813 00:36:13.874346 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-21-106" Aug 13 00:36:13.874359 kubelet[2610]: E0813 00:36:13.874355 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-thk97" Aug 13 00:36:13.874359 kubelet[2610]: E0813 00:36:13.874364 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-21-106" Aug 13 00:36:13.874539 kubelet[2610]: E0813 00:36:13.874376 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-21-106" Aug 13 00:36:13.874539 kubelet[2610]: I0813 00:36:13.874387 2610 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 00:36:23.893598 kubelet[2610]: I0813 00:36:23.893542 2610 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 00:36:23.893598 kubelet[2610]: I0813 00:36:23.893596 2610 container_gc.go:86] "Attempting to delete unused containers" Aug 13 00:36:23.901831 kubelet[2610]: I0813 00:36:23.901107 2610 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 00:36:23.916298 kubelet[2610]: I0813 00:36:23.916260 2610 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 00:36:23.916520 kubelet[2610]: I0813 00:36:23.916392 2610 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-628vf","kube-system/coredns-668d6bf9bc-qvgll","kube-system/coredns-668d6bf9bc-hmcbw","kube-system/cilium-rn4ll","kube-system/kube-controller-manager-172-234-21-106","kube-system/kube-proxy-thk97","kube-system/kube-apiserver-172-234-21-106","kube-system/kube-scheduler-172-234-21-106"] Aug 13 00:36:23.916520 kubelet[2610]: E0813 00:36:23.916430 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-628vf" Aug 13 00:36:23.916520 kubelet[2610]: E0813 00:36:23.916444 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-qvgll" Aug 13 00:36:23.916520 kubelet[2610]: E0813 00:36:23.916453 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-hmcbw" Aug 13 
00:36:23.916520 kubelet[2610]: E0813 00:36:23.916463 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-rn4ll" Aug 13 00:36:23.916520 kubelet[2610]: E0813 00:36:23.916473 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-21-106" Aug 13 00:36:23.916520 kubelet[2610]: E0813 00:36:23.916482 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-thk97" Aug 13 00:36:23.916520 kubelet[2610]: E0813 00:36:23.916490 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-21-106" Aug 13 00:36:23.916520 kubelet[2610]: E0813 00:36:23.916499 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-21-106" Aug 13 00:36:23.916520 kubelet[2610]: I0813 00:36:23.916508 2610 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 00:36:33.935183 kubelet[2610]: I0813 00:36:33.934180 2610 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 00:36:33.935183 kubelet[2610]: I0813 00:36:33.934230 2610 container_gc.go:86] "Attempting to delete unused containers" Aug 13 00:36:33.937595 kubelet[2610]: I0813 00:36:33.937432 2610 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 00:36:33.949230 kubelet[2610]: I0813 00:36:33.949200 2610 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 00:36:33.949455 kubelet[2610]: I0813 00:36:33.949409 2610 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-628vf","kube-system/coredns-668d6bf9bc-qvgll","kube-system/coredns-668d6bf9bc-hmcbw","kube-system/cilium-rn4ll","kube-system/kube-controller-manager-172-234-21-106","kube-system/kube-proxy-thk97","kube-system/kube-apiserver-172-234-21-106","kube-system/kube-scheduler-172-234-21-106"] Aug 13 00:36:33.949537 kubelet[2610]: E0813 00:36:33.949519 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-628vf" Aug 13 00:36:33.949576 kubelet[2610]: E0813 00:36:33.949566 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-qvgll" Aug 13 00:36:33.949617 kubelet[2610]: E0813 00:36:33.949581 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-hmcbw" Aug 13 00:36:33.949617 kubelet[2610]: E0813 00:36:33.949593 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-rn4ll" Aug 13 00:36:33.949617 kubelet[2610]: E0813 00:36:33.949603 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-21-106" Aug 13 00:36:33.949617 kubelet[2610]: E0813 00:36:33.949612 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-thk97" Aug 13 00:36:33.949715 kubelet[2610]: E0813 00:36:33.949648 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-21-106" Aug 13 00:36:33.949715 kubelet[2610]: E0813 00:36:33.949659 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" 
pod="kube-system/kube-scheduler-172-234-21-106" Aug 13 00:36:33.949715 kubelet[2610]: I0813 00:36:33.949670 2610 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 00:36:43.964879 kubelet[2610]: I0813 00:36:43.964834 2610 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 00:36:43.964879 kubelet[2610]: I0813 00:36:43.964879 2610 container_gc.go:86] "Attempting to delete unused containers" Aug 13 00:36:43.968082 kubelet[2610]: I0813 00:36:43.967639 2610 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 00:36:43.979572 kubelet[2610]: I0813 00:36:43.979545 2610 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 00:36:43.979682 kubelet[2610]: I0813 00:36:43.979659 2610 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-628vf","kube-system/coredns-668d6bf9bc-qvgll","kube-system/coredns-668d6bf9bc-hmcbw","kube-system/cilium-rn4ll","kube-system/kube-proxy-thk97","kube-system/kube-controller-manager-172-234-21-106","kube-system/kube-apiserver-172-234-21-106","kube-system/kube-scheduler-172-234-21-106"] Aug 13 00:36:43.979741 kubelet[2610]: E0813 00:36:43.979697 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-628vf" Aug 13 00:36:43.979741 kubelet[2610]: E0813 00:36:43.979709 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-qvgll" Aug 13 00:36:43.979741 kubelet[2610]: E0813 00:36:43.979717 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-hmcbw" Aug 13 00:36:43.979741 kubelet[2610]: E0813 00:36:43.979726 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-rn4ll" Aug 13 00:36:43.979741 kubelet[2610]: E0813 00:36:43.979733 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-thk97" Aug 13 00:36:43.979741 kubelet[2610]: E0813 00:36:43.979741 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-21-106" Aug 13 00:36:43.979913 kubelet[2610]: E0813 00:36:43.979749 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-21-106" Aug 13 00:36:43.979913 kubelet[2610]: E0813 00:36:43.979757 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-21-106" Aug 13 00:36:43.979913 kubelet[2610]: I0813 00:36:43.979766 2610 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 00:36:46.651760 kubelet[2610]: E0813 00:36:46.651698 2610 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 00:36:53.998275 kubelet[2610]: I0813 00:36:53.998230 2610 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 00:36:53.998275 kubelet[2610]: I0813 00:36:53.998278 2610 container_gc.go:86] "Attempting to delete unused containers" Aug 13 00:36:54.001421 kubelet[2610]: I0813 00:36:54.001400 2610 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 00:36:54.017173 
kubelet[2610]: I0813 00:36:54.017148 2610 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 00:36:54.017320 kubelet[2610]: I0813 00:36:54.017294 2610 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-628vf","kube-system/coredns-668d6bf9bc-hmcbw","kube-system/coredns-668d6bf9bc-qvgll","kube-system/cilium-rn4ll","kube-system/kube-controller-manager-172-234-21-106","kube-system/kube-proxy-thk97","kube-system/kube-apiserver-172-234-21-106","kube-system/kube-scheduler-172-234-21-106"] Aug 13 00:36:54.017351 kubelet[2610]: E0813 00:36:54.017332 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-628vf" Aug 13 00:36:54.017351 kubelet[2610]: E0813 00:36:54.017346 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-hmcbw" Aug 13 00:36:54.017402 kubelet[2610]: E0813 00:36:54.017356 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-qvgll" Aug 13 00:36:54.017402 kubelet[2610]: E0813 00:36:54.017365 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-rn4ll" Aug 13 00:36:54.017402 kubelet[2610]: E0813 00:36:54.017374 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-21-106" Aug 13 00:36:54.017402 kubelet[2610]: E0813 00:36:54.017382 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-thk97" Aug 13 00:36:54.017402 kubelet[2610]: E0813 00:36:54.017391 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-21-106" Aug 13 00:36:54.017402 kubelet[2610]: E0813 00:36:54.017399 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-21-106" Aug 13 00:36:54.017531 kubelet[2610]: I0813 00:36:54.017409 2610 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 00:36:56.652043 kubelet[2610]: E0813 00:36:56.651961 2610 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 00:36:59.652925 kubelet[2610]: E0813 00:36:59.652201 2610 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 00:37:04.034503 kubelet[2610]: I0813 00:37:04.034463 2610 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 00:37:04.034503 kubelet[2610]: I0813 00:37:04.034510 2610 container_gc.go:86] "Attempting to delete unused containers" Aug 13 00:37:04.036688 kubelet[2610]: I0813 00:37:04.036619 2610 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 00:37:04.046528 kubelet[2610]: I0813 00:37:04.046496 2610 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 00:37:04.046621 kubelet[2610]: I0813 00:37:04.046603 2610 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" 
pods=["kube-system/cilium-operator-6c4d7847fc-628vf","kube-system/coredns-668d6bf9bc-qvgll","kube-system/coredns-668d6bf9bc-hmcbw","kube-system/cilium-rn4ll","kube-system/kube-proxy-thk97","kube-system/kube-controller-manager-172-234-21-106","kube-system/kube-apiserver-172-234-21-106","kube-system/kube-scheduler-172-234-21-106"] Aug 13 00:37:04.046675 kubelet[2610]: E0813 00:37:04.046640 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-628vf" Aug 13 00:37:04.046675 kubelet[2610]: E0813 00:37:04.046651 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-qvgll" Aug 13 00:37:04.046675 kubelet[2610]: E0813 00:37:04.046659 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-hmcbw" Aug 13 00:37:04.046675 kubelet[2610]: E0813 00:37:04.046668 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-rn4ll" Aug 13 00:37:04.046675 kubelet[2610]: E0813 00:37:04.046677 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-thk97" Aug 13 00:37:04.046774 kubelet[2610]: E0813 00:37:04.046686 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-21-106" Aug 13 00:37:04.046774 kubelet[2610]: E0813 00:37:04.046694 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-21-106" Aug 13 00:37:04.046774 kubelet[2610]: E0813 00:37:04.046701 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-21-106" Aug 13 00:37:04.046774 kubelet[2610]: I0813 00:37:04.046710 2610 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 00:37:04.652187 kubelet[2610]: E0813 00:37:04.652097 2610 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 00:37:05.653029 kubelet[2610]: E0813 00:37:05.652375 2610 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 00:37:14.070288 kubelet[2610]: I0813 00:37:14.070237 2610 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 00:37:14.070288 kubelet[2610]: I0813 00:37:14.070285 2610 container_gc.go:86] "Attempting to delete unused containers" Aug 13 00:37:14.072985 kubelet[2610]: I0813 00:37:14.072960 2610 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 00:37:14.085627 kubelet[2610]: I0813 00:37:14.085591 2610 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 00:37:14.085734 kubelet[2610]: I0813 00:37:14.085714 2610 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-628vf","kube-system/coredns-668d6bf9bc-qvgll","kube-system/coredns-668d6bf9bc-hmcbw","kube-system/cilium-rn4ll","kube-system/kube-proxy-thk97","kube-system/kube-controller-manager-172-234-21-106","kube-system/kube-apiserver-172-234-21-106","kube-system/kube-scheduler-172-234-21-106"] Aug 13 00:37:14.085770 kubelet[2610]: E0813 
00:37:14.085750 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-628vf" Aug 13 00:37:14.085770 kubelet[2610]: E0813 00:37:14.085763 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-qvgll" Aug 13 00:37:14.085834 kubelet[2610]: E0813 00:37:14.085772 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-hmcbw" Aug 13 00:37:14.085834 kubelet[2610]: E0813 00:37:14.085782 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-rn4ll" Aug 13 00:37:14.085834 kubelet[2610]: E0813 00:37:14.085791 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-thk97" Aug 13 00:37:14.085834 kubelet[2610]: E0813 00:37:14.085800 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-21-106" Aug 13 00:37:14.085834 kubelet[2610]: E0813 00:37:14.085827 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-21-106" Aug 13 00:37:14.085834 kubelet[2610]: E0813 00:37:14.085836 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-21-106" Aug 13 00:37:14.085964 kubelet[2610]: I0813 00:37:14.085846 2610 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 00:37:23.653289 kubelet[2610]: E0813 00:37:23.652163 2610 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 00:37:24.102230 kubelet[2610]: I0813 00:37:24.102118 2610 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 00:37:24.102230 kubelet[2610]: I0813 00:37:24.102163 2610 container_gc.go:86] "Attempting to delete unused containers" Aug 13 00:37:24.104467 kubelet[2610]: I0813 00:37:24.104447 2610 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 00:37:24.114217 kubelet[2610]: I0813 00:37:24.114195 2610 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 00:37:24.114307 kubelet[2610]: I0813 00:37:24.114290 2610 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-628vf","kube-system/coredns-668d6bf9bc-qvgll","kube-system/coredns-668d6bf9bc-hmcbw","kube-system/cilium-rn4ll","kube-system/kube-controller-manager-172-234-21-106","kube-system/kube-proxy-thk97","kube-system/kube-apiserver-172-234-21-106","kube-system/kube-scheduler-172-234-21-106"] Aug 13 00:37:24.114344 kubelet[2610]: E0813 00:37:24.114326 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-628vf" Aug 13 00:37:24.114344 kubelet[2610]: E0813 00:37:24.114338 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-qvgll" Aug 13 00:37:24.114457 kubelet[2610]: E0813 00:37:24.114347 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-hmcbw" Aug 13 00:37:24.114457 kubelet[2610]: E0813 00:37:24.114356 2610 eviction_manager.go:609] "Eviction manager: cannot evict a 
critical pod" pod="kube-system/cilium-rn4ll" Aug 13 00:37:24.114457 kubelet[2610]: E0813 00:37:24.114365 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-21-106" Aug 13 00:37:24.114457 kubelet[2610]: E0813 00:37:24.114373 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-thk97" Aug 13 00:37:24.114457 kubelet[2610]: E0813 00:37:24.114381 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-21-106" Aug 13 00:37:24.114457 kubelet[2610]: E0813 00:37:24.114388 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-21-106" Aug 13 00:37:24.114457 kubelet[2610]: I0813 00:37:24.114397 2610 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 00:37:28.652765 kubelet[2610]: E0813 00:37:28.652685 2610 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 00:37:29.652866 kubelet[2610]: E0813 00:37:29.652260 2610 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 00:37:29.929080 systemd[1]: Started sshd@7-172.234.21.106:22-139.178.89.65:50018.service - OpenSSH per-connection server daemon (139.178.89.65:50018). Aug 13 00:37:30.256094 sshd[3999]: Accepted publickey for core from 139.178.89.65 port 50018 ssh2: RSA SHA256:oQRmtR3l5PP4/Uo0wDjw46qIAkFbGTAbDRc2qvFX5SY Aug 13 00:37:30.258225 sshd-session[3999]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:37:30.263764 systemd-logind[1462]: New session 8 of user core. Aug 13 00:37:30.269931 systemd[1]: Started session-8.scope - Session 8 of User core. Aug 13 00:37:30.571684 sshd[4001]: Connection closed by 139.178.89.65 port 50018 Aug 13 00:37:30.572694 sshd-session[3999]: pam_unix(sshd:session): session closed for user core Aug 13 00:37:30.577100 systemd[1]: sshd@7-172.234.21.106:22-139.178.89.65:50018.service: Deactivated successfully. Aug 13 00:37:30.579573 systemd[1]: session-8.scope: Deactivated successfully. Aug 13 00:37:30.580547 systemd-logind[1462]: Session 8 logged out. Waiting for processes to exit. Aug 13 00:37:30.581388 systemd-logind[1462]: Removed session 8. 
Aug 13 00:37:34.134744 kubelet[2610]: I0813 00:37:34.134683 2610 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 00:37:34.134744 kubelet[2610]: I0813 00:37:34.134731 2610 container_gc.go:86] "Attempting to delete unused containers" Aug 13 00:37:34.137384 kubelet[2610]: I0813 00:37:34.137335 2610 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 00:37:34.140022 kubelet[2610]: I0813 00:37:34.139990 2610 image_gc_manager.go:487] "Removing image to free bytes" imageID="sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136" size=320368 runtimeHandler="" Aug 13 00:37:34.140461 containerd[1479]: time="2025-08-13T00:37:34.140269187Z" level=info msg="RemoveImage \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Aug 13 00:37:34.141706 containerd[1479]: time="2025-08-13T00:37:34.141665567Z" level=info msg="ImageDelete event name:\"registry.k8s.io/pause:3.10\"" Aug 13 00:37:34.142626 containerd[1479]: time="2025-08-13T00:37:34.142588815Z" level=info msg="ImageDelete event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\"" Aug 13 00:37:34.143231 containerd[1479]: time="2025-08-13T00:37:34.143127007Z" level=info msg="ImageDelete event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Aug 13 00:37:34.149026 containerd[1479]: time="2025-08-13T00:37:34.148980758Z" level=info msg="RemoveImage \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" returns successfully" Aug 13 00:37:34.149437 kubelet[2610]: I0813 00:37:34.149362 2610 image_gc_manager.go:487] "Removing image to free bytes" imageID="sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc" size=57680541 runtimeHandler="" Aug 13 00:37:34.149679 containerd[1479]: time="2025-08-13T00:37:34.149650619Z" level=info msg="RemoveImage \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Aug 13 00:37:34.150444 containerd[1479]: time="2025-08-13T00:37:34.150391689Z" level=info msg="ImageDelete event name:\"registry.k8s.io/etcd:3.5.16-0\"" Aug 13 00:37:34.150996 containerd[1479]: time="2025-08-13T00:37:34.150962671Z" level=info msg="ImageDelete event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\"" Aug 13 00:37:34.151412 containerd[1479]: time="2025-08-13T00:37:34.151379986Z" level=info msg="ImageDelete event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Aug 13 00:37:34.219947 containerd[1479]: time="2025-08-13T00:37:34.219902409Z" level=info msg="RemoveImage \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" returns successfully" Aug 13 00:37:34.232114 kubelet[2610]: I0813 00:37:34.232071 2610 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 00:37:34.232215 kubelet[2610]: I0813 00:37:34.232192 2610 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-628vf","kube-system/coredns-668d6bf9bc-qvgll","kube-system/coredns-668d6bf9bc-hmcbw","kube-system/cilium-rn4ll","kube-system/kube-controller-manager-172-234-21-106","kube-system/kube-proxy-thk97","kube-system/kube-apiserver-172-234-21-106","kube-system/kube-scheduler-172-234-21-106"] Aug 13 00:37:34.232274 kubelet[2610]: E0813 00:37:34.232235 2610 eviction_manager.go:609] "Eviction manager: cannot evict a 
critical pod" pod="kube-system/cilium-operator-6c4d7847fc-628vf" Aug 13 00:37:34.232274 kubelet[2610]: E0813 00:37:34.232249 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-qvgll" Aug 13 00:37:34.232274 kubelet[2610]: E0813 00:37:34.232258 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-hmcbw" Aug 13 00:37:34.232274 kubelet[2610]: E0813 00:37:34.232269 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-rn4ll" Aug 13 00:37:34.232274 kubelet[2610]: E0813 00:37:34.232279 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-21-106" Aug 13 00:37:34.232410 kubelet[2610]: E0813 00:37:34.232290 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-thk97" Aug 13 00:37:34.232410 kubelet[2610]: E0813 00:37:34.232300 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-21-106" Aug 13 00:37:34.232410 kubelet[2610]: E0813 00:37:34.232308 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-21-106" Aug 13 00:37:34.232410 kubelet[2610]: I0813 00:37:34.232318 2610 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 00:37:35.642325 systemd[1]: Started sshd@8-172.234.21.106:22-139.178.89.65:50030.service - OpenSSH per-connection server daemon (139.178.89.65:50030). Aug 13 00:37:35.969434 sshd[4016]: Accepted publickey for core from 139.178.89.65 port 50030 ssh2: RSA SHA256:oQRmtR3l5PP4/Uo0wDjw46qIAkFbGTAbDRc2qvFX5SY Aug 13 00:37:35.971593 sshd-session[4016]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:37:35.977897 systemd-logind[1462]: New session 9 of user core. Aug 13 00:37:35.982956 systemd[1]: Started session-9.scope - Session 9 of User core. Aug 13 00:37:36.279458 sshd[4018]: Connection closed by 139.178.89.65 port 50030 Aug 13 00:37:36.280334 sshd-session[4016]: pam_unix(sshd:session): session closed for user core Aug 13 00:37:36.285272 systemd[1]: sshd@8-172.234.21.106:22-139.178.89.65:50030.service: Deactivated successfully. Aug 13 00:37:36.288052 systemd[1]: session-9.scope: Deactivated successfully. Aug 13 00:37:36.288931 systemd-logind[1462]: Session 9 logged out. Waiting for processes to exit. Aug 13 00:37:36.290449 systemd-logind[1462]: Removed session 9. Aug 13 00:37:41.356576 systemd[1]: Started sshd@9-172.234.21.106:22-139.178.89.65:41736.service - OpenSSH per-connection server daemon (139.178.89.65:41736). Aug 13 00:37:41.701830 sshd[4031]: Accepted publickey for core from 139.178.89.65 port 41736 ssh2: RSA SHA256:oQRmtR3l5PP4/Uo0wDjw46qIAkFbGTAbDRc2qvFX5SY Aug 13 00:37:41.703659 sshd-session[4031]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:37:41.708885 systemd-logind[1462]: New session 10 of user core. Aug 13 00:37:41.712947 systemd[1]: Started session-10.scope - Session 10 of User core. Aug 13 00:37:42.021504 sshd[4035]: Connection closed by 139.178.89.65 port 41736 Aug 13 00:37:42.022722 sshd-session[4031]: pam_unix(sshd:session): session closed for user core Aug 13 00:37:42.027717 systemd-logind[1462]: Session 10 logged out. Waiting for processes to exit. 
Aug 13 00:37:42.029072 systemd[1]: sshd@9-172.234.21.106:22-139.178.89.65:41736.service: Deactivated successfully. Aug 13 00:37:42.031450 systemd[1]: session-10.scope: Deactivated successfully. Aug 13 00:37:42.033871 systemd-logind[1462]: Removed session 10. Aug 13 00:37:44.255869 kubelet[2610]: I0813 00:37:44.255787 2610 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 00:37:44.255869 kubelet[2610]: I0813 00:37:44.255872 2610 container_gc.go:86] "Attempting to delete unused containers" Aug 13 00:37:44.259648 kubelet[2610]: I0813 00:37:44.259586 2610 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 00:37:44.271693 kubelet[2610]: I0813 00:37:44.271670 2610 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 00:37:44.271834 kubelet[2610]: I0813 00:37:44.271799 2610 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-628vf","kube-system/coredns-668d6bf9bc-qvgll","kube-system/coredns-668d6bf9bc-hmcbw","kube-system/cilium-rn4ll","kube-system/kube-controller-manager-172-234-21-106","kube-system/kube-proxy-thk97","kube-system/kube-apiserver-172-234-21-106","kube-system/kube-scheduler-172-234-21-106"] Aug 13 00:37:44.271880 kubelet[2610]: E0813 00:37:44.271854 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-628vf" Aug 13 00:37:44.271880 kubelet[2610]: E0813 00:37:44.271869 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-qvgll" Aug 13 00:37:44.271880 kubelet[2610]: E0813 00:37:44.271879 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-hmcbw" Aug 13 00:37:44.271941 kubelet[2610]: E0813 00:37:44.271890 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-rn4ll" Aug 13 00:37:44.271941 kubelet[2610]: E0813 00:37:44.271899 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-21-106" Aug 13 00:37:44.271941 kubelet[2610]: E0813 00:37:44.271908 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-thk97" Aug 13 00:37:44.271941 kubelet[2610]: E0813 00:37:44.271920 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-21-106" Aug 13 00:37:44.271941 kubelet[2610]: E0813 00:37:44.271929 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-21-106" Aug 13 00:37:44.271941 kubelet[2610]: I0813 00:37:44.271938 2610 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 00:37:47.087024 systemd[1]: Started sshd@10-172.234.21.106:22-139.178.89.65:41748.service - OpenSSH per-connection server daemon (139.178.89.65:41748). Aug 13 00:37:47.415057 sshd[4048]: Accepted publickey for core from 139.178.89.65 port 41748 ssh2: RSA SHA256:oQRmtR3l5PP4/Uo0wDjw46qIAkFbGTAbDRc2qvFX5SY Aug 13 00:37:47.417111 sshd-session[4048]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:37:47.421869 systemd-logind[1462]: New session 11 of user core. Aug 13 00:37:47.430953 systemd[1]: Started session-11.scope - Session 11 of User core. 
Aug 13 00:37:47.715314 sshd[4050]: Connection closed by 139.178.89.65 port 41748 Aug 13 00:37:47.716209 sshd-session[4048]: pam_unix(sshd:session): session closed for user core Aug 13 00:37:47.720778 systemd-logind[1462]: Session 11 logged out. Waiting for processes to exit. Aug 13 00:37:47.721523 systemd[1]: sshd@10-172.234.21.106:22-139.178.89.65:41748.service: Deactivated successfully. Aug 13 00:37:47.724156 systemd[1]: session-11.scope: Deactivated successfully. Aug 13 00:37:47.725169 systemd-logind[1462]: Removed session 11. Aug 13 00:37:47.773157 systemd[1]: Started sshd@11-172.234.21.106:22-139.178.89.65:41752.service - OpenSSH per-connection server daemon (139.178.89.65:41752). Aug 13 00:37:48.107832 sshd[4063]: Accepted publickey for core from 139.178.89.65 port 41752 ssh2: RSA SHA256:oQRmtR3l5PP4/Uo0wDjw46qIAkFbGTAbDRc2qvFX5SY Aug 13 00:37:48.109768 sshd-session[4063]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:37:48.116132 systemd-logind[1462]: New session 12 of user core. Aug 13 00:37:48.121964 systemd[1]: Started session-12.scope - Session 12 of User core. Aug 13 00:37:48.432317 sshd[4065]: Connection closed by 139.178.89.65 port 41752 Aug 13 00:37:48.433480 sshd-session[4063]: pam_unix(sshd:session): session closed for user core Aug 13 00:37:48.438420 systemd[1]: sshd@11-172.234.21.106:22-139.178.89.65:41752.service: Deactivated successfully. Aug 13 00:37:48.441846 systemd[1]: session-12.scope: Deactivated successfully. Aug 13 00:37:48.442786 systemd-logind[1462]: Session 12 logged out. Waiting for processes to exit. Aug 13 00:37:48.444129 systemd-logind[1462]: Removed session 12. Aug 13 00:37:48.500278 systemd[1]: Started sshd@12-172.234.21.106:22-139.178.89.65:41766.service - OpenSSH per-connection server daemon (139.178.89.65:41766). Aug 13 00:37:48.833656 sshd[4075]: Accepted publickey for core from 139.178.89.65 port 41766 ssh2: RSA SHA256:oQRmtR3l5PP4/Uo0wDjw46qIAkFbGTAbDRc2qvFX5SY Aug 13 00:37:48.835487 sshd-session[4075]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:37:48.841109 systemd-logind[1462]: New session 13 of user core. Aug 13 00:37:48.843932 systemd[1]: Started session-13.scope - Session 13 of User core. Aug 13 00:37:49.150917 sshd[4077]: Connection closed by 139.178.89.65 port 41766 Aug 13 00:37:49.151700 sshd-session[4075]: pam_unix(sshd:session): session closed for user core Aug 13 00:37:49.157340 systemd-logind[1462]: Session 13 logged out. Waiting for processes to exit. Aug 13 00:37:49.158424 systemd[1]: sshd@12-172.234.21.106:22-139.178.89.65:41766.service: Deactivated successfully. Aug 13 00:37:49.161944 systemd[1]: session-13.scope: Deactivated successfully. Aug 13 00:37:49.162963 systemd-logind[1462]: Removed session 13. Aug 13 00:37:54.224027 systemd[1]: Started sshd@13-172.234.21.106:22-139.178.89.65:39814.service - OpenSSH per-connection server daemon (139.178.89.65:39814). 
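Every "Accepted publickey for core" line above carries the same SHA256 fingerprint. A small sketch for matching such a log line back to an entry in an authorized_keys file, assuming the golang.org/x/crypto/ssh package; the file path is a placeholder, not taken from this node:

// fingerprint.go — print the SHA256:<base64> fingerprint sshd logs for each key.
package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	data, err := os.ReadFile("/home/core/.ssh/authorized_keys")
	if err != nil {
		log.Fatal(err)
	}
	for len(data) > 0 {
		pub, comment, _, rest, err := ssh.ParseAuthorizedKey(data)
		if err != nil {
			break // trailing blank or comment lines; good enough for a sketch
		}
		// ssh.FingerprintSHA256 produces the same "SHA256:<base64>" form sshd logs.
		fmt.Printf("%s %s %s\n", ssh.FingerprintSHA256(pub), pub.Type(), comment)
		data = rest
	}
}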
Aug 13 00:37:54.291566 kubelet[2610]: I0813 00:37:54.289690 2610 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 00:37:54.291566 kubelet[2610]: I0813 00:37:54.289726 2610 container_gc.go:86] "Attempting to delete unused containers" Aug 13 00:37:54.306004 kubelet[2610]: I0813 00:37:54.305978 2610 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 00:37:54.319618 kubelet[2610]: I0813 00:37:54.319591 2610 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 00:37:54.319755 kubelet[2610]: I0813 00:37:54.319737 2610 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-628vf","kube-system/coredns-668d6bf9bc-qvgll","kube-system/coredns-668d6bf9bc-hmcbw","kube-system/cilium-rn4ll","kube-system/kube-controller-manager-172-234-21-106","kube-system/kube-proxy-thk97","kube-system/kube-apiserver-172-234-21-106","kube-system/kube-scheduler-172-234-21-106"] Aug 13 00:37:54.319782 kubelet[2610]: E0813 00:37:54.319773 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-628vf" Aug 13 00:37:54.319854 kubelet[2610]: E0813 00:37:54.319784 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-qvgll" Aug 13 00:37:54.319854 kubelet[2610]: E0813 00:37:54.319796 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-hmcbw" Aug 13 00:37:54.319854 kubelet[2610]: E0813 00:37:54.319833 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-rn4ll" Aug 13 00:37:54.319854 kubelet[2610]: E0813 00:37:54.319844 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-21-106" Aug 13 00:37:54.319854 kubelet[2610]: E0813 00:37:54.319852 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-thk97" Aug 13 00:37:54.319961 kubelet[2610]: E0813 00:37:54.319861 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-21-106" Aug 13 00:37:54.319961 kubelet[2610]: E0813 00:37:54.319869 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-21-106" Aug 13 00:37:54.319961 kubelet[2610]: I0813 00:37:54.319879 2610 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 00:37:54.553564 sshd[4089]: Accepted publickey for core from 139.178.89.65 port 39814 ssh2: RSA SHA256:oQRmtR3l5PP4/Uo0wDjw46qIAkFbGTAbDRc2qvFX5SY Aug 13 00:37:54.555269 sshd-session[4089]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:37:54.560580 systemd-logind[1462]: New session 14 of user core. Aug 13 00:37:54.567942 systemd[1]: Started session-14.scope - Session 14 of User core. Aug 13 00:37:54.879633 sshd[4091]: Connection closed by 139.178.89.65 port 39814 Aug 13 00:37:54.880100 sshd-session[4089]: pam_unix(sshd:session): session closed for user core Aug 13 00:37:54.885383 systemd-logind[1462]: Session 14 logged out. Waiting for processes to exit. Aug 13 00:37:54.886489 systemd[1]: sshd@13-172.234.21.106:22-139.178.89.65:39814.service: Deactivated successfully. 
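Each eviction cycle above ranks the kube-system pods for ephemeral-storage reclaim and then refuses every one of them, so the cycle ends with "unable to evict any pods from the node": static control-plane pods and pods carrying a system-node-critical or system-cluster-critical priority class are never evicted for this purpose. A minimal illustration of that skip logic, not the kubelet's code; the priority classes assigned below are the usual defaults for these workloads and are assumed, not read from this cluster:

// evict_skip.go — why every ranked pod is rejected as "a critical pod".
package main

import "fmt"

type pod struct {
	name          string
	static        bool   // static control-plane pods (kube-apiserver, etc.)
	priorityClass string // e.g. "system-node-critical", "system-cluster-critical"
}

func critical(p pod) bool {
	return p.static ||
		p.priorityClass == "system-node-critical" ||
		p.priorityClass == "system-cluster-critical"
}

func main() {
	// Candidates in the order of the "pods ranked for eviction" line above.
	ranked := []pod{
		{name: "kube-system/cilium-operator-6c4d7847fc-628vf", priorityClass: "system-cluster-critical"},
		{name: "kube-system/coredns-668d6bf9bc-qvgll", priorityClass: "system-cluster-critical"},
		{name: "kube-system/coredns-668d6bf9bc-hmcbw", priorityClass: "system-cluster-critical"},
		{name: "kube-system/cilium-rn4ll", priorityClass: "system-node-critical"},
		{name: "kube-system/kube-controller-manager-172-234-21-106", static: true},
		{name: "kube-system/kube-proxy-thk97", priorityClass: "system-node-critical"},
		{name: "kube-system/kube-apiserver-172-234-21-106", static: true},
		{name: "kube-system/kube-scheduler-172-234-21-106", static: true},
	}
	evicted := 0
	for _, p := range ranked {
		if critical(p) {
			fmt.Printf("cannot evict a critical pod: %s\n", p.name)
			continue
		}
		evicted++
	}
	if evicted == 0 {
		fmt.Println("unable to evict any pods from the node")
	}
}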
Aug 13 00:37:54.889094 systemd[1]: session-14.scope: Deactivated successfully. Aug 13 00:37:54.890321 systemd-logind[1462]: Removed session 14. Aug 13 00:37:54.945333 systemd[1]: Started sshd@14-172.234.21.106:22-139.178.89.65:39826.service - OpenSSH per-connection server daemon (139.178.89.65:39826). Aug 13 00:37:55.266423 sshd[4103]: Accepted publickey for core from 139.178.89.65 port 39826 ssh2: RSA SHA256:oQRmtR3l5PP4/Uo0wDjw46qIAkFbGTAbDRc2qvFX5SY Aug 13 00:37:55.268279 sshd-session[4103]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:37:55.273850 systemd-logind[1462]: New session 15 of user core. Aug 13 00:37:55.283950 systemd[1]: Started session-15.scope - Session 15 of User core. Aug 13 00:37:55.595599 sshd[4105]: Connection closed by 139.178.89.65 port 39826 Aug 13 00:37:55.596905 sshd-session[4103]: pam_unix(sshd:session): session closed for user core Aug 13 00:37:55.601106 systemd[1]: sshd@14-172.234.21.106:22-139.178.89.65:39826.service: Deactivated successfully. Aug 13 00:37:55.604043 systemd[1]: session-15.scope: Deactivated successfully. Aug 13 00:37:55.606746 systemd-logind[1462]: Session 15 logged out. Waiting for processes to exit. Aug 13 00:37:55.608581 systemd-logind[1462]: Removed session 15. Aug 13 00:37:55.659146 systemd[1]: Started sshd@15-172.234.21.106:22-139.178.89.65:39842.service - OpenSSH per-connection server daemon (139.178.89.65:39842). Aug 13 00:37:55.986003 sshd[4115]: Accepted publickey for core from 139.178.89.65 port 39842 ssh2: RSA SHA256:oQRmtR3l5PP4/Uo0wDjw46qIAkFbGTAbDRc2qvFX5SY Aug 13 00:37:55.988148 sshd-session[4115]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:37:55.993653 systemd-logind[1462]: New session 16 of user core. Aug 13 00:37:55.999946 systemd[1]: Started session-16.scope - Session 16 of User core. Aug 13 00:37:56.790841 sshd[4117]: Connection closed by 139.178.89.65 port 39842 Aug 13 00:37:56.791983 sshd-session[4115]: pam_unix(sshd:session): session closed for user core Aug 13 00:37:56.796959 systemd[1]: sshd@15-172.234.21.106:22-139.178.89.65:39842.service: Deactivated successfully. Aug 13 00:37:56.799632 systemd[1]: session-16.scope: Deactivated successfully. Aug 13 00:37:56.801690 systemd-logind[1462]: Session 16 logged out. Waiting for processes to exit. Aug 13 00:37:56.803318 systemd-logind[1462]: Removed session 16. Aug 13 00:37:56.856060 systemd[1]: Started sshd@16-172.234.21.106:22-139.178.89.65:39856.service - OpenSSH per-connection server daemon (139.178.89.65:39856). Aug 13 00:37:57.192568 sshd[4135]: Accepted publickey for core from 139.178.89.65 port 39856 ssh2: RSA SHA256:oQRmtR3l5PP4/Uo0wDjw46qIAkFbGTAbDRc2qvFX5SY Aug 13 00:37:57.194628 sshd-session[4135]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:37:57.199787 systemd-logind[1462]: New session 17 of user core. Aug 13 00:37:57.208947 systemd[1]: Started session-17.scope - Session 17 of User core. Aug 13 00:37:57.614966 sshd[4137]: Connection closed by 139.178.89.65 port 39856 Aug 13 00:37:57.616772 sshd-session[4135]: pam_unix(sshd:session): session closed for user core Aug 13 00:37:57.622192 systemd[1]: sshd@16-172.234.21.106:22-139.178.89.65:39856.service: Deactivated successfully. Aug 13 00:37:57.625382 systemd[1]: session-17.scope: Deactivated successfully. Aug 13 00:37:57.626331 systemd-logind[1462]: Session 17 logged out. Waiting for processes to exit. Aug 13 00:37:57.627687 systemd-logind[1462]: Removed session 17. 
Aug 13 00:37:57.674009 systemd[1]: Started sshd@17-172.234.21.106:22-139.178.89.65:39866.service - OpenSSH per-connection server daemon (139.178.89.65:39866). Aug 13 00:37:57.994926 sshd[4147]: Accepted publickey for core from 139.178.89.65 port 39866 ssh2: RSA SHA256:oQRmtR3l5PP4/Uo0wDjw46qIAkFbGTAbDRc2qvFX5SY Aug 13 00:37:57.995574 sshd-session[4147]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:37:58.001335 systemd-logind[1462]: New session 18 of user core. Aug 13 00:37:58.006954 systemd[1]: Started session-18.scope - Session 18 of User core. Aug 13 00:37:58.298689 sshd[4149]: Connection closed by 139.178.89.65 port 39866 Aug 13 00:37:58.299492 sshd-session[4147]: pam_unix(sshd:session): session closed for user core Aug 13 00:37:58.304941 systemd[1]: sshd@17-172.234.21.106:22-139.178.89.65:39866.service: Deactivated successfully. Aug 13 00:37:58.307495 systemd[1]: session-18.scope: Deactivated successfully. Aug 13 00:37:58.308528 systemd-logind[1462]: Session 18 logged out. Waiting for processes to exit. Aug 13 00:37:58.310391 systemd-logind[1462]: Removed session 18. Aug 13 00:38:03.373217 systemd[1]: Started sshd@18-172.234.21.106:22-139.178.89.65:36908.service - OpenSSH per-connection server daemon (139.178.89.65:36908). Aug 13 00:38:03.708878 sshd[4163]: Accepted publickey for core from 139.178.89.65 port 36908 ssh2: RSA SHA256:oQRmtR3l5PP4/Uo0wDjw46qIAkFbGTAbDRc2qvFX5SY Aug 13 00:38:03.710387 sshd-session[4163]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:38:03.715552 systemd-logind[1462]: New session 19 of user core. Aug 13 00:38:03.722974 systemd[1]: Started session-19.scope - Session 19 of User core. Aug 13 00:38:04.014924 sshd[4165]: Connection closed by 139.178.89.65 port 36908 Aug 13 00:38:04.015867 sshd-session[4163]: pam_unix(sshd:session): session closed for user core Aug 13 00:38:04.021124 systemd[1]: sshd@18-172.234.21.106:22-139.178.89.65:36908.service: Deactivated successfully. Aug 13 00:38:04.024176 systemd[1]: session-19.scope: Deactivated successfully. Aug 13 00:38:04.025652 systemd-logind[1462]: Session 19 logged out. Waiting for processes to exit. Aug 13 00:38:04.026755 systemd-logind[1462]: Removed session 19. 
Aug 13 00:38:04.341933 kubelet[2610]: I0813 00:38:04.341743 2610 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 00:38:04.341933 kubelet[2610]: I0813 00:38:04.341790 2610 container_gc.go:86] "Attempting to delete unused containers" Aug 13 00:38:04.344045 kubelet[2610]: I0813 00:38:04.344030 2610 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 00:38:04.355069 kubelet[2610]: I0813 00:38:04.355052 2610 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 00:38:04.355163 kubelet[2610]: I0813 00:38:04.355146 2610 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-628vf","kube-system/coredns-668d6bf9bc-qvgll","kube-system/coredns-668d6bf9bc-hmcbw","kube-system/cilium-rn4ll","kube-system/kube-controller-manager-172-234-21-106","kube-system/kube-proxy-thk97","kube-system/kube-apiserver-172-234-21-106","kube-system/kube-scheduler-172-234-21-106"] Aug 13 00:38:04.355195 kubelet[2610]: E0813 00:38:04.355179 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-628vf" Aug 13 00:38:04.355195 kubelet[2610]: E0813 00:38:04.355191 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-qvgll" Aug 13 00:38:04.355243 kubelet[2610]: E0813 00:38:04.355199 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-hmcbw" Aug 13 00:38:04.355243 kubelet[2610]: E0813 00:38:04.355210 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-rn4ll" Aug 13 00:38:04.355243 kubelet[2610]: E0813 00:38:04.355218 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-21-106" Aug 13 00:38:04.355243 kubelet[2610]: E0813 00:38:04.355226 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-thk97" Aug 13 00:38:04.355243 kubelet[2610]: E0813 00:38:04.355234 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-21-106" Aug 13 00:38:04.355243 kubelet[2610]: E0813 00:38:04.355242 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-21-106" Aug 13 00:38:04.355363 kubelet[2610]: I0813 00:38:04.355251 2610 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 00:38:07.655440 kubelet[2610]: E0813 00:38:07.655281 2610 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 00:38:09.083026 systemd[1]: Started sshd@19-172.234.21.106:22-139.178.89.65:39532.service - OpenSSH per-connection server daemon (139.178.89.65:39532). Aug 13 00:38:09.419945 sshd[4177]: Accepted publickey for core from 139.178.89.65 port 39532 ssh2: RSA SHA256:oQRmtR3l5PP4/Uo0wDjw46qIAkFbGTAbDRc2qvFX5SY Aug 13 00:38:09.421901 sshd-session[4177]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:38:09.427010 systemd-logind[1462]: New session 20 of user core. Aug 13 00:38:09.433935 systemd[1]: Started session-20.scope - Session 20 of User core. 
Aug 13 00:38:09.771836 sshd[4179]: Connection closed by 139.178.89.65 port 39532 Aug 13 00:38:09.772222 sshd-session[4177]: pam_unix(sshd:session): session closed for user core Aug 13 00:38:09.778530 systemd[1]: sshd@19-172.234.21.106:22-139.178.89.65:39532.service: Deactivated successfully. Aug 13 00:38:09.781012 systemd[1]: session-20.scope: Deactivated successfully. Aug 13 00:38:09.782908 systemd-logind[1462]: Session 20 logged out. Waiting for processes to exit. Aug 13 00:38:09.784278 systemd-logind[1462]: Removed session 20. Aug 13 00:38:11.655310 kubelet[2610]: E0813 00:38:11.654923 2610 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 00:38:13.652569 kubelet[2610]: E0813 00:38:13.652022 2610 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 00:38:14.371642 kubelet[2610]: I0813 00:38:14.371602 2610 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 00:38:14.371642 kubelet[2610]: I0813 00:38:14.371646 2610 container_gc.go:86] "Attempting to delete unused containers" Aug 13 00:38:14.373231 kubelet[2610]: I0813 00:38:14.372926 2610 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 00:38:14.382910 kubelet[2610]: I0813 00:38:14.382877 2610 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 00:38:14.383068 kubelet[2610]: I0813 00:38:14.383035 2610 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-628vf","kube-system/coredns-668d6bf9bc-qvgll","kube-system/coredns-668d6bf9bc-hmcbw","kube-system/cilium-rn4ll","kube-system/kube-controller-manager-172-234-21-106","kube-system/kube-proxy-thk97","kube-system/kube-apiserver-172-234-21-106","kube-system/kube-scheduler-172-234-21-106"] Aug 13 00:38:14.383098 kubelet[2610]: E0813 00:38:14.383069 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-628vf" Aug 13 00:38:14.383098 kubelet[2610]: E0813 00:38:14.383082 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-qvgll" Aug 13 00:38:14.383098 kubelet[2610]: E0813 00:38:14.383090 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-hmcbw" Aug 13 00:38:14.383196 kubelet[2610]: E0813 00:38:14.383100 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-rn4ll" Aug 13 00:38:14.383196 kubelet[2610]: E0813 00:38:14.383109 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-21-106" Aug 13 00:38:14.383196 kubelet[2610]: E0813 00:38:14.383118 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-thk97" Aug 13 00:38:14.383196 kubelet[2610]: E0813 00:38:14.383126 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-21-106" Aug 13 00:38:14.383196 kubelet[2610]: E0813 00:38:14.383134 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" 
pod="kube-system/kube-scheduler-172-234-21-106" Aug 13 00:38:14.383196 kubelet[2610]: I0813 00:38:14.383143 2610 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 00:38:14.844063 systemd[1]: Started sshd@20-172.234.21.106:22-139.178.89.65:39542.service - OpenSSH per-connection server daemon (139.178.89.65:39542). Aug 13 00:38:15.180624 sshd[4193]: Accepted publickey for core from 139.178.89.65 port 39542 ssh2: RSA SHA256:oQRmtR3l5PP4/Uo0wDjw46qIAkFbGTAbDRc2qvFX5SY Aug 13 00:38:15.182332 sshd-session[4193]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:38:15.187076 systemd-logind[1462]: New session 21 of user core. Aug 13 00:38:15.191039 systemd[1]: Started session-21.scope - Session 21 of User core. Aug 13 00:38:15.487856 sshd[4195]: Connection closed by 139.178.89.65 port 39542 Aug 13 00:38:15.488553 sshd-session[4193]: pam_unix(sshd:session): session closed for user core Aug 13 00:38:15.493676 systemd-logind[1462]: Session 21 logged out. Waiting for processes to exit. Aug 13 00:38:15.494616 systemd[1]: sshd@20-172.234.21.106:22-139.178.89.65:39542.service: Deactivated successfully. Aug 13 00:38:15.496740 systemd[1]: session-21.scope: Deactivated successfully. Aug 13 00:38:15.497768 systemd-logind[1462]: Removed session 21. Aug 13 00:38:20.557043 systemd[1]: Started sshd@21-172.234.21.106:22-139.178.89.65:44774.service - OpenSSH per-connection server daemon (139.178.89.65:44774). Aug 13 00:38:20.893422 sshd[4207]: Accepted publickey for core from 139.178.89.65 port 44774 ssh2: RSA SHA256:oQRmtR3l5PP4/Uo0wDjw46qIAkFbGTAbDRc2qvFX5SY Aug 13 00:38:20.894951 sshd-session[4207]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:38:20.899696 systemd-logind[1462]: New session 22 of user core. Aug 13 00:38:20.909009 systemd[1]: Started session-22.scope - Session 22 of User core. Aug 13 00:38:21.197950 sshd[4211]: Connection closed by 139.178.89.65 port 44774 Aug 13 00:38:21.198796 sshd-session[4207]: pam_unix(sshd:session): session closed for user core Aug 13 00:38:21.202203 systemd[1]: sshd@21-172.234.21.106:22-139.178.89.65:44774.service: Deactivated successfully. Aug 13 00:38:21.204466 systemd[1]: session-22.scope: Deactivated successfully. Aug 13 00:38:21.207462 systemd-logind[1462]: Session 22 logged out. Waiting for processes to exit. Aug 13 00:38:21.209173 systemd-logind[1462]: Removed session 22. 
Aug 13 00:38:21.652576 kubelet[2610]: E0813 00:38:21.652511 2610 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 00:38:23.653520 kubelet[2610]: E0813 00:38:23.653467 2610 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 00:38:24.398581 kubelet[2610]: I0813 00:38:24.398548 2610 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 00:38:24.398581 kubelet[2610]: I0813 00:38:24.398587 2610 container_gc.go:86] "Attempting to delete unused containers" Aug 13 00:38:24.400416 kubelet[2610]: I0813 00:38:24.400080 2610 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 00:38:24.409691 kubelet[2610]: I0813 00:38:24.409666 2610 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 00:38:24.409993 kubelet[2610]: I0813 00:38:24.409969 2610 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-628vf","kube-system/coredns-668d6bf9bc-qvgll","kube-system/coredns-668d6bf9bc-hmcbw","kube-system/cilium-rn4ll","kube-system/kube-controller-manager-172-234-21-106","kube-system/kube-proxy-thk97","kube-system/kube-apiserver-172-234-21-106","kube-system/kube-scheduler-172-234-21-106"] Aug 13 00:38:24.410023 kubelet[2610]: E0813 00:38:24.410003 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-628vf" Aug 13 00:38:24.410023 kubelet[2610]: E0813 00:38:24.410016 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-qvgll" Aug 13 00:38:24.410078 kubelet[2610]: E0813 00:38:24.410024 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-hmcbw" Aug 13 00:38:24.410078 kubelet[2610]: E0813 00:38:24.410035 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-rn4ll" Aug 13 00:38:24.410078 kubelet[2610]: E0813 00:38:24.410044 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-21-106" Aug 13 00:38:24.410078 kubelet[2610]: E0813 00:38:24.410051 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-thk97" Aug 13 00:38:24.410078 kubelet[2610]: E0813 00:38:24.410059 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-21-106" Aug 13 00:38:24.410078 kubelet[2610]: E0813 00:38:24.410066 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-21-106" Aug 13 00:38:24.410078 kubelet[2610]: I0813 00:38:24.410076 2610 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 00:38:26.265022 systemd[1]: Started sshd@22-172.234.21.106:22-139.178.89.65:44782.service - OpenSSH per-connection server daemon (139.178.89.65:44782). 
Aug 13 00:38:26.598047 sshd[4223]: Accepted publickey for core from 139.178.89.65 port 44782 ssh2: RSA SHA256:oQRmtR3l5PP4/Uo0wDjw46qIAkFbGTAbDRc2qvFX5SY Aug 13 00:38:26.599582 sshd-session[4223]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:38:26.604645 systemd-logind[1462]: New session 23 of user core. Aug 13 00:38:26.611929 systemd[1]: Started session-23.scope - Session 23 of User core. Aug 13 00:38:26.905280 sshd[4225]: Connection closed by 139.178.89.65 port 44782 Aug 13 00:38:26.905913 sshd-session[4223]: pam_unix(sshd:session): session closed for user core Aug 13 00:38:26.909199 systemd[1]: sshd@22-172.234.21.106:22-139.178.89.65:44782.service: Deactivated successfully. Aug 13 00:38:26.912015 systemd[1]: session-23.scope: Deactivated successfully. Aug 13 00:38:26.913936 systemd-logind[1462]: Session 23 logged out. Waiting for processes to exit. Aug 13 00:38:26.915310 systemd-logind[1462]: Removed session 23. Aug 13 00:38:31.968030 systemd[1]: Started sshd@23-172.234.21.106:22-139.178.89.65:48164.service - OpenSSH per-connection server daemon (139.178.89.65:48164). Aug 13 00:38:32.302670 sshd[4237]: Accepted publickey for core from 139.178.89.65 port 48164 ssh2: RSA SHA256:oQRmtR3l5PP4/Uo0wDjw46qIAkFbGTAbDRc2qvFX5SY Aug 13 00:38:32.304144 sshd-session[4237]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:38:32.308428 systemd-logind[1462]: New session 24 of user core. Aug 13 00:38:32.313929 systemd[1]: Started session-24.scope - Session 24 of User core. Aug 13 00:38:32.591707 sshd[4239]: Connection closed by 139.178.89.65 port 48164 Aug 13 00:38:32.592571 sshd-session[4237]: pam_unix(sshd:session): session closed for user core Aug 13 00:38:32.596412 systemd[1]: sshd@23-172.234.21.106:22-139.178.89.65:48164.service: Deactivated successfully. Aug 13 00:38:32.598243 systemd[1]: session-24.scope: Deactivated successfully. Aug 13 00:38:32.599126 systemd-logind[1462]: Session 24 logged out. Waiting for processes to exit. Aug 13 00:38:32.600400 systemd-logind[1462]: Removed session 24. 
Aug 13 00:38:32.652344 kubelet[2610]: E0813 00:38:32.652319 2610 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 00:38:34.425255 kubelet[2610]: I0813 00:38:34.425219 2610 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 00:38:34.425255 kubelet[2610]: I0813 00:38:34.425257 2610 container_gc.go:86] "Attempting to delete unused containers" Aug 13 00:38:34.427262 kubelet[2610]: I0813 00:38:34.427248 2610 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 00:38:34.437455 kubelet[2610]: I0813 00:38:34.437429 2610 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 00:38:34.437563 kubelet[2610]: I0813 00:38:34.437546 2610 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-628vf","kube-system/coredns-668d6bf9bc-qvgll","kube-system/coredns-668d6bf9bc-hmcbw","kube-system/cilium-rn4ll","kube-system/kube-controller-manager-172-234-21-106","kube-system/kube-proxy-thk97","kube-system/kube-apiserver-172-234-21-106","kube-system/kube-scheduler-172-234-21-106"] Aug 13 00:38:34.437609 kubelet[2610]: E0813 00:38:34.437577 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-628vf" Aug 13 00:38:34.437609 kubelet[2610]: E0813 00:38:34.437588 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-qvgll" Aug 13 00:38:34.437609 kubelet[2610]: E0813 00:38:34.437597 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-hmcbw" Aug 13 00:38:34.437609 kubelet[2610]: E0813 00:38:34.437607 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-rn4ll" Aug 13 00:38:34.437701 kubelet[2610]: E0813 00:38:34.437616 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-21-106" Aug 13 00:38:34.437701 kubelet[2610]: E0813 00:38:34.437625 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-thk97" Aug 13 00:38:34.437701 kubelet[2610]: E0813 00:38:34.437632 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-21-106" Aug 13 00:38:34.437701 kubelet[2610]: E0813 00:38:34.437639 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-21-106" Aug 13 00:38:34.437701 kubelet[2610]: I0813 00:38:34.437648 2610 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 00:38:34.526767 update_engine[1463]: I20250813 00:38:34.526710 1463 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Aug 13 00:38:34.527068 update_engine[1463]: I20250813 00:38:34.526765 1463 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Aug 13 00:38:34.527068 update_engine[1463]: I20250813 00:38:34.527005 1463 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Aug 13 00:38:34.527666 update_engine[1463]: I20250813 00:38:34.527635 1463 omaha_request_params.cc:62] Current group set to stable Aug 13 00:38:34.528318 
update_engine[1463]: I20250813 00:38:34.527752 1463 update_attempter.cc:499] Already updated boot flags. Skipping. Aug 13 00:38:34.528318 update_engine[1463]: I20250813 00:38:34.527774 1463 update_attempter.cc:643] Scheduling an action processor start. Aug 13 00:38:34.528318 update_engine[1463]: I20250813 00:38:34.527797 1463 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Aug 13 00:38:34.528318 update_engine[1463]: I20250813 00:38:34.528040 1463 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Aug 13 00:38:34.528318 update_engine[1463]: I20250813 00:38:34.528113 1463 omaha_request_action.cc:271] Posting an Omaha request to disabled Aug 13 00:38:34.528318 update_engine[1463]: I20250813 00:38:34.528129 1463 omaha_request_action.cc:272] Request: Aug 13 00:38:34.528318 update_engine[1463]: Aug 13 00:38:34.528318 update_engine[1463]: Aug 13 00:38:34.528318 update_engine[1463]: Aug 13 00:38:34.528318 update_engine[1463]: Aug 13 00:38:34.528318 update_engine[1463]: Aug 13 00:38:34.528318 update_engine[1463]: Aug 13 00:38:34.528318 update_engine[1463]: Aug 13 00:38:34.528318 update_engine[1463]: Aug 13 00:38:34.528318 update_engine[1463]: I20250813 00:38:34.528141 1463 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Aug 13 00:38:34.528889 locksmithd[1489]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Aug 13 00:38:34.529739 update_engine[1463]: I20250813 00:38:34.529709 1463 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Aug 13 00:38:34.530325 update_engine[1463]: I20250813 00:38:34.530287 1463 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Aug 13 00:38:34.552594 update_engine[1463]: E20250813 00:38:34.552557 1463 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Aug 13 00:38:34.552643 update_engine[1463]: I20250813 00:38:34.552623 1463 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Aug 13 00:38:36.652033 kubelet[2610]: E0813 00:38:36.652001 2610 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 00:38:37.657158 systemd[1]: Started sshd@24-172.234.21.106:22-139.178.89.65:48168.service - OpenSSH per-connection server daemon (139.178.89.65:48168). Aug 13 00:38:37.992593 sshd[4253]: Accepted publickey for core from 139.178.89.65 port 48168 ssh2: RSA SHA256:oQRmtR3l5PP4/Uo0wDjw46qIAkFbGTAbDRc2qvFX5SY Aug 13 00:38:37.994522 sshd-session[4253]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:38:37.999506 systemd-logind[1462]: New session 25 of user core. Aug 13 00:38:38.008944 systemd[1]: Started session-25.scope - Session 25 of User core. Aug 13 00:38:38.297018 sshd[4255]: Connection closed by 139.178.89.65 port 48168 Aug 13 00:38:38.297836 sshd-session[4253]: pam_unix(sshd:session): session closed for user core Aug 13 00:38:38.301799 systemd-logind[1462]: Session 25 logged out. Waiting for processes to exit. Aug 13 00:38:38.302607 systemd[1]: sshd@24-172.234.21.106:22-139.178.89.65:48168.service: Deactivated successfully. Aug 13 00:38:38.304943 systemd[1]: session-25.scope: Deactivated successfully. Aug 13 00:38:38.306227 systemd-logind[1462]: Removed session 25. 
Aug 13 00:38:43.368239 systemd[1]: Started sshd@25-172.234.21.106:22-139.178.89.65:51736.service - OpenSSH per-connection server daemon (139.178.89.65:51736). Aug 13 00:38:43.690842 sshd[4268]: Accepted publickey for core from 139.178.89.65 port 51736 ssh2: RSA SHA256:oQRmtR3l5PP4/Uo0wDjw46qIAkFbGTAbDRc2qvFX5SY Aug 13 00:38:43.692712 sshd-session[4268]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:38:43.696821 systemd-logind[1462]: New session 26 of user core. Aug 13 00:38:43.701931 systemd[1]: Started session-26.scope - Session 26 of User core. Aug 13 00:38:43.984339 sshd[4270]: Connection closed by 139.178.89.65 port 51736 Aug 13 00:38:43.985377 sshd-session[4268]: pam_unix(sshd:session): session closed for user core Aug 13 00:38:43.989454 systemd-logind[1462]: Session 26 logged out. Waiting for processes to exit. Aug 13 00:38:43.990271 systemd[1]: sshd@25-172.234.21.106:22-139.178.89.65:51736.service: Deactivated successfully. Aug 13 00:38:43.992415 systemd[1]: session-26.scope: Deactivated successfully. Aug 13 00:38:43.994358 systemd-logind[1462]: Removed session 26. Aug 13 00:38:44.453692 kubelet[2610]: I0813 00:38:44.453653 2610 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 00:38:44.453692 kubelet[2610]: I0813 00:38:44.453686 2610 container_gc.go:86] "Attempting to delete unused containers" Aug 13 00:38:44.457092 kubelet[2610]: I0813 00:38:44.456023 2610 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 00:38:44.466215 kubelet[2610]: I0813 00:38:44.466198 2610 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 00:38:44.466305 kubelet[2610]: I0813 00:38:44.466289 2610 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-628vf","kube-system/coredns-668d6bf9bc-qvgll","kube-system/coredns-668d6bf9bc-hmcbw","kube-system/cilium-rn4ll","kube-system/kube-controller-manager-172-234-21-106","kube-system/kube-proxy-thk97","kube-system/kube-apiserver-172-234-21-106","kube-system/kube-scheduler-172-234-21-106"] Aug 13 00:38:44.466342 kubelet[2610]: E0813 00:38:44.466318 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-628vf" Aug 13 00:38:44.466342 kubelet[2610]: E0813 00:38:44.466328 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-qvgll" Aug 13 00:38:44.466342 kubelet[2610]: E0813 00:38:44.466336 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-hmcbw" Aug 13 00:38:44.466414 kubelet[2610]: E0813 00:38:44.466351 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-rn4ll" Aug 13 00:38:44.466414 kubelet[2610]: E0813 00:38:44.466359 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-21-106" Aug 13 00:38:44.466414 kubelet[2610]: E0813 00:38:44.466367 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-thk97" Aug 13 00:38:44.466414 kubelet[2610]: E0813 00:38:44.466374 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-21-106" Aug 13 00:38:44.466414 kubelet[2610]: E0813 00:38:44.466382 2610 eviction_manager.go:609] 
"Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-21-106" Aug 13 00:38:44.466414 kubelet[2610]: I0813 00:38:44.466391 2610 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 00:38:44.524723 update_engine[1463]: I20250813 00:38:44.524665 1463 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Aug 13 00:38:44.525031 update_engine[1463]: I20250813 00:38:44.524982 1463 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Aug 13 00:38:44.525237 update_engine[1463]: I20250813 00:38:44.525205 1463 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Aug 13 00:38:44.525891 update_engine[1463]: E20250813 00:38:44.525861 1463 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Aug 13 00:38:44.525933 update_engine[1463]: I20250813 00:38:44.525914 1463 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Aug 13 00:38:49.049230 systemd[1]: Started sshd@26-172.234.21.106:22-139.178.89.65:48816.service - OpenSSH per-connection server daemon (139.178.89.65:48816). Aug 13 00:38:49.369647 sshd[4282]: Accepted publickey for core from 139.178.89.65 port 48816 ssh2: RSA SHA256:oQRmtR3l5PP4/Uo0wDjw46qIAkFbGTAbDRc2qvFX5SY Aug 13 00:38:49.371325 sshd-session[4282]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:38:49.376620 systemd-logind[1462]: New session 27 of user core. Aug 13 00:38:49.384001 systemd[1]: Started session-27.scope - Session 27 of User core. Aug 13 00:38:49.654632 kubelet[2610]: E0813 00:38:49.654510 2610 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 00:38:49.661707 sshd[4284]: Connection closed by 139.178.89.65 port 48816 Aug 13 00:38:49.663210 sshd-session[4282]: pam_unix(sshd:session): session closed for user core Aug 13 00:38:49.667454 systemd[1]: sshd@26-172.234.21.106:22-139.178.89.65:48816.service: Deactivated successfully. Aug 13 00:38:49.670104 systemd[1]: session-27.scope: Deactivated successfully. Aug 13 00:38:49.671410 systemd-logind[1462]: Session 27 logged out. Waiting for processes to exit. Aug 13 00:38:49.672656 systemd-logind[1462]: Removed session 27. 
Aug 13 00:38:54.484665 kubelet[2610]: I0813 00:38:54.484449 2610 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 00:38:54.484665 kubelet[2610]: I0813 00:38:54.484486 2610 container_gc.go:86] "Attempting to delete unused containers" Aug 13 00:38:54.487169 kubelet[2610]: I0813 00:38:54.487144 2610 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 00:38:54.498722 kubelet[2610]: I0813 00:38:54.498706 2610 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 00:38:54.498871 kubelet[2610]: I0813 00:38:54.498854 2610 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-628vf","kube-system/coredns-668d6bf9bc-qvgll","kube-system/coredns-668d6bf9bc-hmcbw","kube-system/cilium-rn4ll","kube-system/kube-controller-manager-172-234-21-106","kube-system/kube-proxy-thk97","kube-system/kube-apiserver-172-234-21-106","kube-system/kube-scheduler-172-234-21-106"] Aug 13 00:38:54.498929 kubelet[2610]: E0813 00:38:54.498884 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-628vf" Aug 13 00:38:54.498929 kubelet[2610]: E0813 00:38:54.498895 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-qvgll" Aug 13 00:38:54.498929 kubelet[2610]: E0813 00:38:54.498903 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-hmcbw" Aug 13 00:38:54.498929 kubelet[2610]: E0813 00:38:54.498912 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-rn4ll" Aug 13 00:38:54.498929 kubelet[2610]: E0813 00:38:54.498919 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-21-106" Aug 13 00:38:54.498929 kubelet[2610]: E0813 00:38:54.498927 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-thk97" Aug 13 00:38:54.498929 kubelet[2610]: E0813 00:38:54.498935 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-21-106" Aug 13 00:38:54.499102 kubelet[2610]: E0813 00:38:54.498942 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-21-106" Aug 13 00:38:54.499102 kubelet[2610]: I0813 00:38:54.498951 2610 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 00:38:54.528638 update_engine[1463]: I20250813 00:38:54.528586 1463 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Aug 13 00:38:54.529195 update_engine[1463]: I20250813 00:38:54.529093 1463 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Aug 13 00:38:54.529339 update_engine[1463]: I20250813 00:38:54.529294 1463 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Aug 13 00:38:54.529903 update_engine[1463]: E20250813 00:38:54.529878 1463 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Aug 13 00:38:54.529973 update_engine[1463]: I20250813 00:38:54.529929 1463 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Aug 13 00:38:54.726217 systemd[1]: Started sshd@27-172.234.21.106:22-139.178.89.65:48832.service - OpenSSH per-connection server daemon (139.178.89.65:48832). 
Aug 13 00:38:55.045977 sshd[4296]: Accepted publickey for core from 139.178.89.65 port 48832 ssh2: RSA SHA256:oQRmtR3l5PP4/Uo0wDjw46qIAkFbGTAbDRc2qvFX5SY Aug 13 00:38:55.047575 sshd-session[4296]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:38:55.052964 systemd-logind[1462]: New session 28 of user core. Aug 13 00:38:55.056000 systemd[1]: Started session-28.scope - Session 28 of User core. Aug 13 00:38:55.339689 sshd[4298]: Connection closed by 139.178.89.65 port 48832 Aug 13 00:38:55.340357 sshd-session[4296]: pam_unix(sshd:session): session closed for user core Aug 13 00:38:55.344237 systemd-logind[1462]: Session 28 logged out. Waiting for processes to exit. Aug 13 00:38:55.345371 systemd[1]: sshd@27-172.234.21.106:22-139.178.89.65:48832.service: Deactivated successfully. Aug 13 00:38:55.347500 systemd[1]: session-28.scope: Deactivated successfully. Aug 13 00:38:55.348861 systemd-logind[1462]: Removed session 28. Aug 13 00:39:00.405039 systemd[1]: Started sshd@28-172.234.21.106:22-139.178.89.65:49898.service - OpenSSH per-connection server daemon (139.178.89.65:49898). Aug 13 00:39:00.724390 sshd[4310]: Accepted publickey for core from 139.178.89.65 port 49898 ssh2: RSA SHA256:oQRmtR3l5PP4/Uo0wDjw46qIAkFbGTAbDRc2qvFX5SY Aug 13 00:39:00.725697 sshd-session[4310]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:39:00.730232 systemd-logind[1462]: New session 29 of user core. Aug 13 00:39:00.738926 systemd[1]: Started session-29.scope - Session 29 of User core. Aug 13 00:39:01.022223 sshd[4312]: Connection closed by 139.178.89.65 port 49898 Aug 13 00:39:01.022859 sshd-session[4310]: pam_unix(sshd:session): session closed for user core Aug 13 00:39:01.027452 systemd[1]: sshd@28-172.234.21.106:22-139.178.89.65:49898.service: Deactivated successfully. Aug 13 00:39:01.030439 systemd[1]: session-29.scope: Deactivated successfully. Aug 13 00:39:01.031501 systemd-logind[1462]: Session 29 logged out. Waiting for processes to exit. Aug 13 00:39:01.032669 systemd-logind[1462]: Removed session 29. Aug 13 00:39:04.514304 kubelet[2610]: I0813 00:39:04.514267 2610 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 00:39:04.514304 kubelet[2610]: I0813 00:39:04.514307 2610 container_gc.go:86] "Attempting to delete unused containers" Aug 13 00:39:04.516675 kubelet[2610]: I0813 00:39:04.516630 2610 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 00:39:04.527360 update_engine[1463]: I20250813 00:39:04.526857 1463 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Aug 13 00:39:04.527360 update_engine[1463]: I20250813 00:39:04.527098 1463 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Aug 13 00:39:04.527360 update_engine[1463]: I20250813 00:39:04.527321 1463 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Aug 13 00:39:04.527917 kubelet[2610]: I0813 00:39:04.527785 2610 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 00:39:04.527958 kubelet[2610]: I0813 00:39:04.527931 2610 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-628vf","kube-system/coredns-668d6bf9bc-qvgll","kube-system/coredns-668d6bf9bc-hmcbw","kube-system/cilium-rn4ll","kube-system/kube-controller-manager-172-234-21-106","kube-system/kube-proxy-thk97","kube-system/kube-apiserver-172-234-21-106","kube-system/kube-scheduler-172-234-21-106"] Aug 13 00:39:04.527991 kubelet[2610]: E0813 00:39:04.527967 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-628vf" Aug 13 00:39:04.527991 kubelet[2610]: E0813 00:39:04.527982 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-qvgll" Aug 13 00:39:04.528033 kubelet[2610]: E0813 00:39:04.527995 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-hmcbw" Aug 13 00:39:04.528033 kubelet[2610]: E0813 00:39:04.528009 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-rn4ll" Aug 13 00:39:04.528033 kubelet[2610]: E0813 00:39:04.528024 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-21-106" Aug 13 00:39:04.528131 kubelet[2610]: E0813 00:39:04.528037 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-thk97" Aug 13 00:39:04.528131 kubelet[2610]: E0813 00:39:04.528047 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-21-106" Aug 13 00:39:04.528131 kubelet[2610]: E0813 00:39:04.528058 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-21-106" Aug 13 00:39:04.528131 kubelet[2610]: I0813 00:39:04.528072 2610 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 00:39:04.528303 update_engine[1463]: E20250813 00:39:04.528262 1463 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Aug 13 00:39:04.528367 update_engine[1463]: I20250813 00:39:04.528330 1463 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Aug 13 00:39:04.528367 update_engine[1463]: I20250813 00:39:04.528345 1463 omaha_request_action.cc:617] Omaha request response: Aug 13 00:39:04.528463 update_engine[1463]: E20250813 00:39:04.528431 1463 omaha_request_action.cc:636] Omaha request network transfer failed. Aug 13 00:39:04.528574 update_engine[1463]: I20250813 00:39:04.528465 1463 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Aug 13 00:39:04.528574 update_engine[1463]: I20250813 00:39:04.528472 1463 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Aug 13 00:39:04.528574 update_engine[1463]: I20250813 00:39:04.528478 1463 update_attempter.cc:306] Processing Done. Aug 13 00:39:04.528574 update_engine[1463]: E20250813 00:39:04.528491 1463 update_attempter.cc:619] Update failed. 
Aug 13 00:39:04.528574 update_engine[1463]: I20250813 00:39:04.528498 1463 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Aug 13 00:39:04.528574 update_engine[1463]: I20250813 00:39:04.528507 1463 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Aug 13 00:39:04.528574 update_engine[1463]: I20250813 00:39:04.528515 1463 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Aug 13 00:39:04.528896 update_engine[1463]: I20250813 00:39:04.528869 1463 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Aug 13 00:39:04.528939 update_engine[1463]: I20250813 00:39:04.528900 1463 omaha_request_action.cc:271] Posting an Omaha request to disabled Aug 13 00:39:04.528939 update_engine[1463]: I20250813 00:39:04.528908 1463 omaha_request_action.cc:272] Request: Aug 13 00:39:04.528939 update_engine[1463]: Aug 13 00:39:04.528939 update_engine[1463]: Aug 13 00:39:04.528939 update_engine[1463]: Aug 13 00:39:04.528939 update_engine[1463]: Aug 13 00:39:04.528939 update_engine[1463]: Aug 13 00:39:04.528939 update_engine[1463]: Aug 13 00:39:04.528939 update_engine[1463]: I20250813 00:39:04.528914 1463 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Aug 13 00:39:04.529199 update_engine[1463]: I20250813 00:39:04.529059 1463 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Aug 13 00:39:04.529244 update_engine[1463]: I20250813 00:39:04.529218 1463 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Aug 13 00:39:04.529348 locksmithd[1489]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Aug 13 00:39:04.530009 update_engine[1463]: E20250813 00:39:04.529963 1463 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Aug 13 00:39:04.530049 update_engine[1463]: I20250813 00:39:04.530025 1463 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Aug 13 00:39:04.530049 update_engine[1463]: I20250813 00:39:04.530036 1463 omaha_request_action.cc:617] Omaha request response: Aug 13 00:39:04.530098 update_engine[1463]: I20250813 00:39:04.530046 1463 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Aug 13 00:39:04.530098 update_engine[1463]: I20250813 00:39:04.530055 1463 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Aug 13 00:39:04.530098 update_engine[1463]: I20250813 00:39:04.530062 1463 update_attempter.cc:306] Processing Done. Aug 13 00:39:04.530098 update_engine[1463]: I20250813 00:39:04.530071 1463 update_attempter.cc:310] Error event sent. Aug 13 00:39:04.530098 update_engine[1463]: I20250813 00:39:04.530084 1463 update_check_scheduler.cc:74] Next update check in 44m38s Aug 13 00:39:04.530346 locksmithd[1489]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Aug 13 00:39:06.085030 systemd[1]: Started sshd@29-172.234.21.106:22-139.178.89.65:49902.service - OpenSSH per-connection server daemon (139.178.89.65:49902). Aug 13 00:39:06.404106 sshd[4324]: Accepted publickey for core from 139.178.89.65 port 49902 ssh2: RSA SHA256:oQRmtR3l5PP4/Uo0wDjw46qIAkFbGTAbDRc2qvFX5SY Aug 13 00:39:06.405567 sshd-session[4324]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:39:06.409971 systemd-logind[1462]: New session 30 of user core. 
Aug 13 00:39:06.416925 systemd[1]: Started session-30.scope - Session 30 of User core. Aug 13 00:39:06.689998 sshd[4326]: Connection closed by 139.178.89.65 port 49902 Aug 13 00:39:06.690857 sshd-session[4324]: pam_unix(sshd:session): session closed for user core Aug 13 00:39:06.694198 systemd-logind[1462]: Session 30 logged out. Waiting for processes to exit. Aug 13 00:39:06.694931 systemd[1]: sshd@29-172.234.21.106:22-139.178.89.65:49902.service: Deactivated successfully. Aug 13 00:39:06.696699 systemd[1]: session-30.scope: Deactivated successfully. Aug 13 00:39:06.697727 systemd-logind[1462]: Removed session 30. Aug 13 00:39:11.763046 systemd[1]: Started sshd@30-172.234.21.106:22-139.178.89.65:39030.service - OpenSSH per-connection server daemon (139.178.89.65:39030). Aug 13 00:39:12.098704 sshd[4340]: Accepted publickey for core from 139.178.89.65 port 39030 ssh2: RSA SHA256:oQRmtR3l5PP4/Uo0wDjw46qIAkFbGTAbDRc2qvFX5SY Aug 13 00:39:12.100225 sshd-session[4340]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:39:12.104972 systemd-logind[1462]: New session 31 of user core. Aug 13 00:39:12.107942 systemd[1]: Started session-31.scope - Session 31 of User core. Aug 13 00:39:12.406131 sshd[4342]: Connection closed by 139.178.89.65 port 39030 Aug 13 00:39:12.406721 sshd-session[4340]: pam_unix(sshd:session): session closed for user core Aug 13 00:39:12.410409 systemd[1]: sshd@30-172.234.21.106:22-139.178.89.65:39030.service: Deactivated successfully. Aug 13 00:39:12.413005 systemd[1]: session-31.scope: Deactivated successfully. Aug 13 00:39:12.414602 systemd-logind[1462]: Session 31 logged out. Waiting for processes to exit. Aug 13 00:39:12.415717 systemd-logind[1462]: Removed session 31. Aug 13 00:39:14.549232 kubelet[2610]: I0813 00:39:14.549176 2610 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 00:39:14.549232 kubelet[2610]: I0813 00:39:14.549217 2610 container_gc.go:86] "Attempting to delete unused containers" Aug 13 00:39:14.551982 kubelet[2610]: I0813 00:39:14.551123 2610 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 00:39:14.565738 kubelet[2610]: I0813 00:39:14.565709 2610 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 00:39:14.565846 kubelet[2610]: I0813 00:39:14.565828 2610 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-628vf","kube-system/coredns-668d6bf9bc-qvgll","kube-system/coredns-668d6bf9bc-hmcbw","kube-system/cilium-rn4ll","kube-system/kube-controller-manager-172-234-21-106","kube-system/kube-proxy-thk97","kube-system/kube-apiserver-172-234-21-106","kube-system/kube-scheduler-172-234-21-106"] Aug 13 00:39:14.565909 kubelet[2610]: E0813 00:39:14.565858 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-628vf" Aug 13 00:39:14.565909 kubelet[2610]: E0813 00:39:14.565870 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-qvgll" Aug 13 00:39:14.565909 kubelet[2610]: E0813 00:39:14.565878 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-hmcbw" Aug 13 00:39:14.565909 kubelet[2610]: E0813 00:39:14.565887 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-rn4ll" Aug 13 00:39:14.565909 
kubelet[2610]: E0813 00:39:14.565896 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-21-106" Aug 13 00:39:14.565909 kubelet[2610]: E0813 00:39:14.565904 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-thk97" Aug 13 00:39:14.565909 kubelet[2610]: E0813 00:39:14.565913 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-21-106" Aug 13 00:39:14.565909 kubelet[2610]: E0813 00:39:14.565922 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-21-106" Aug 13 00:39:14.566128 kubelet[2610]: I0813 00:39:14.565931 2610 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 00:39:15.653035 kubelet[2610]: E0813 00:39:15.652312 2610 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 00:39:16.652169 kubelet[2610]: E0813 00:39:16.652137 2610 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 00:39:17.476012 systemd[1]: Started sshd@31-172.234.21.106:22-139.178.89.65:39040.service - OpenSSH per-connection server daemon (139.178.89.65:39040). Aug 13 00:39:17.807256 sshd[4355]: Accepted publickey for core from 139.178.89.65 port 39040 ssh2: RSA SHA256:oQRmtR3l5PP4/Uo0wDjw46qIAkFbGTAbDRc2qvFX5SY Aug 13 00:39:17.808702 sshd-session[4355]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:39:17.813832 systemd-logind[1462]: New session 32 of user core. Aug 13 00:39:17.818972 systemd[1]: Started session-32.scope - Session 32 of User core. Aug 13 00:39:18.107899 sshd[4358]: Connection closed by 139.178.89.65 port 39040 Aug 13 00:39:18.108655 sshd-session[4355]: pam_unix(sshd:session): session closed for user core Aug 13 00:39:18.113093 systemd-logind[1462]: Session 32 logged out. Waiting for processes to exit. Aug 13 00:39:18.114090 systemd[1]: sshd@31-172.234.21.106:22-139.178.89.65:39040.service: Deactivated successfully. Aug 13 00:39:18.116420 systemd[1]: session-32.scope: Deactivated successfully. Aug 13 00:39:18.117696 systemd-logind[1462]: Removed session 32. Aug 13 00:39:23.174030 systemd[1]: Started sshd@32-172.234.21.106:22-139.178.89.65:40690.service - OpenSSH per-connection server daemon (139.178.89.65:40690). Aug 13 00:39:23.506922 sshd[4370]: Accepted publickey for core from 139.178.89.65 port 40690 ssh2: RSA SHA256:oQRmtR3l5PP4/Uo0wDjw46qIAkFbGTAbDRc2qvFX5SY Aug 13 00:39:23.508372 sshd-session[4370]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:39:23.513478 systemd-logind[1462]: New session 33 of user core. Aug 13 00:39:23.524137 systemd[1]: Started session-33.scope - Session 33 of User core. Aug 13 00:39:23.810244 sshd[4372]: Connection closed by 139.178.89.65 port 40690 Aug 13 00:39:23.811006 sshd-session[4370]: pam_unix(sshd:session): session closed for user core Aug 13 00:39:23.814563 systemd-logind[1462]: Session 33 logged out. Waiting for processes to exit. Aug 13 00:39:23.815368 systemd[1]: sshd@32-172.234.21.106:22-139.178.89.65:40690.service: Deactivated successfully. 
Aug 13 00:39:23.817310 systemd[1]: session-33.scope: Deactivated successfully. Aug 13 00:39:23.818654 systemd-logind[1462]: Removed session 33. Aug 13 00:39:24.582085 kubelet[2610]: I0813 00:39:24.582050 2610 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 00:39:24.582085 kubelet[2610]: I0813 00:39:24.582085 2610 container_gc.go:86] "Attempting to delete unused containers" Aug 13 00:39:24.583775 kubelet[2610]: I0813 00:39:24.583755 2610 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 00:39:24.597289 kubelet[2610]: I0813 00:39:24.597269 2610 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 00:39:24.597390 kubelet[2610]: I0813 00:39:24.597374 2610 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-628vf","kube-system/coredns-668d6bf9bc-qvgll","kube-system/coredns-668d6bf9bc-hmcbw","kube-system/cilium-rn4ll","kube-system/kube-controller-manager-172-234-21-106","kube-system/kube-proxy-thk97","kube-system/kube-apiserver-172-234-21-106","kube-system/kube-scheduler-172-234-21-106"] Aug 13 00:39:24.597450 kubelet[2610]: E0813 00:39:24.597404 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-628vf" Aug 13 00:39:24.597450 kubelet[2610]: E0813 00:39:24.597415 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-qvgll" Aug 13 00:39:24.597450 kubelet[2610]: E0813 00:39:24.597423 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-hmcbw" Aug 13 00:39:24.597450 kubelet[2610]: E0813 00:39:24.597431 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-rn4ll" Aug 13 00:39:24.597450 kubelet[2610]: E0813 00:39:24.597439 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-21-106" Aug 13 00:39:24.597450 kubelet[2610]: E0813 00:39:24.597446 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-thk97" Aug 13 00:39:24.597450 kubelet[2610]: E0813 00:39:24.597454 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-21-106" Aug 13 00:39:24.597590 kubelet[2610]: E0813 00:39:24.597461 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-21-106" Aug 13 00:39:24.597590 kubelet[2610]: I0813 00:39:24.597470 2610 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 00:39:28.876025 systemd[1]: Started sshd@33-172.234.21.106:22-139.178.89.65:40698.service - OpenSSH per-connection server daemon (139.178.89.65:40698). Aug 13 00:39:29.195879 sshd[4384]: Accepted publickey for core from 139.178.89.65 port 40698 ssh2: RSA SHA256:oQRmtR3l5PP4/Uo0wDjw46qIAkFbGTAbDRc2qvFX5SY Aug 13 00:39:29.197060 sshd-session[4384]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:39:29.201299 systemd-logind[1462]: New session 34 of user core. Aug 13 00:39:29.208929 systemd[1]: Started session-34.scope - Session 34 of User core. 
Aug 13 00:39:29.481894 sshd[4386]: Connection closed by 139.178.89.65 port 40698 Aug 13 00:39:29.482501 sshd-session[4384]: pam_unix(sshd:session): session closed for user core Aug 13 00:39:29.487203 systemd-logind[1462]: Session 34 logged out. Waiting for processes to exit. Aug 13 00:39:29.488018 systemd[1]: sshd@33-172.234.21.106:22-139.178.89.65:40698.service: Deactivated successfully. Aug 13 00:39:29.489978 systemd[1]: session-34.scope: Deactivated successfully. Aug 13 00:39:29.490914 systemd-logind[1462]: Removed session 34. Aug 13 00:39:29.653278 kubelet[2610]: E0813 00:39:29.651799 2610 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 00:39:29.653278 kubelet[2610]: E0813 00:39:29.653268 2610 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 00:39:34.549070 systemd[1]: Started sshd@34-172.234.21.106:22-139.178.89.65:50260.service - OpenSSH per-connection server daemon (139.178.89.65:50260). Aug 13 00:39:34.615105 kubelet[2610]: I0813 00:39:34.615078 2610 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 00:39:34.615105 kubelet[2610]: I0813 00:39:34.615117 2610 container_gc.go:86] "Attempting to delete unused containers" Aug 13 00:39:34.619082 kubelet[2610]: I0813 00:39:34.617728 2610 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 00:39:34.633838 kubelet[2610]: I0813 00:39:34.632670 2610 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 00:39:34.633838 kubelet[2610]: I0813 00:39:34.633733 2610 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-628vf","kube-system/coredns-668d6bf9bc-qvgll","kube-system/coredns-668d6bf9bc-hmcbw","kube-system/cilium-rn4ll","kube-system/kube-controller-manager-172-234-21-106","kube-system/kube-proxy-thk97","kube-system/kube-apiserver-172-234-21-106","kube-system/kube-scheduler-172-234-21-106"] Aug 13 00:39:34.633838 kubelet[2610]: E0813 00:39:34.633783 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-628vf" Aug 13 00:39:34.633838 kubelet[2610]: E0813 00:39:34.633828 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-qvgll" Aug 13 00:39:34.633838 kubelet[2610]: E0813 00:39:34.633840 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-hmcbw" Aug 13 00:39:34.634044 kubelet[2610]: E0813 00:39:34.633879 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-rn4ll" Aug 13 00:39:34.634044 kubelet[2610]: E0813 00:39:34.633908 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-21-106" Aug 13 00:39:34.634044 kubelet[2610]: E0813 00:39:34.633957 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-thk97" Aug 13 00:39:34.634044 kubelet[2610]: E0813 00:39:34.633983 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-21-106" Aug 13 00:39:34.634044 
kubelet[2610]: E0813 00:39:34.633993 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-21-106" Aug 13 00:39:34.634044 kubelet[2610]: I0813 00:39:34.634005 2610 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 00:39:34.876595 sshd[4400]: Accepted publickey for core from 139.178.89.65 port 50260 ssh2: RSA SHA256:oQRmtR3l5PP4/Uo0wDjw46qIAkFbGTAbDRc2qvFX5SY Aug 13 00:39:34.878077 sshd-session[4400]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:39:34.882208 systemd-logind[1462]: New session 35 of user core. Aug 13 00:39:34.886939 systemd[1]: Started session-35.scope - Session 35 of User core. Aug 13 00:39:35.174165 sshd[4402]: Connection closed by 139.178.89.65 port 50260 Aug 13 00:39:35.174920 sshd-session[4400]: pam_unix(sshd:session): session closed for user core Aug 13 00:39:35.182435 systemd-logind[1462]: Session 35 logged out. Waiting for processes to exit. Aug 13 00:39:35.182662 systemd[1]: sshd@34-172.234.21.106:22-139.178.89.65:50260.service: Deactivated successfully. Aug 13 00:39:35.185409 systemd[1]: session-35.scope: Deactivated successfully. Aug 13 00:39:35.186772 systemd-logind[1462]: Removed session 35. Aug 13 00:39:35.653201 kubelet[2610]: E0813 00:39:35.652488 2610 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 00:39:40.239950 systemd[1]: Started sshd@35-172.234.21.106:22-139.178.89.65:37930.service - OpenSSH per-connection server daemon (139.178.89.65:37930). Aug 13 00:39:40.581273 sshd[4414]: Accepted publickey for core from 139.178.89.65 port 37930 ssh2: RSA SHA256:oQRmtR3l5PP4/Uo0wDjw46qIAkFbGTAbDRc2qvFX5SY Aug 13 00:39:40.582667 sshd-session[4414]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:39:40.588189 systemd-logind[1462]: New session 36 of user core. Aug 13 00:39:40.601947 systemd[1]: Started session-36.scope - Session 36 of User core. Aug 13 00:39:40.891199 sshd[4416]: Connection closed by 139.178.89.65 port 37930 Aug 13 00:39:40.891867 sshd-session[4414]: pam_unix(sshd:session): session closed for user core Aug 13 00:39:40.895619 systemd-logind[1462]: Session 36 logged out. Waiting for processes to exit. Aug 13 00:39:40.896583 systemd[1]: sshd@35-172.234.21.106:22-139.178.89.65:37930.service: Deactivated successfully. Aug 13 00:39:40.898513 systemd[1]: session-36.scope: Deactivated successfully. Aug 13 00:39:40.899595 systemd-logind[1462]: Removed session 36. 
Aug 13 00:39:43.653110 kubelet[2610]: E0813 00:39:43.652538 2610 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 00:39:44.649464 kubelet[2610]: I0813 00:39:44.649427 2610 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 00:39:44.649464 kubelet[2610]: I0813 00:39:44.649466 2610 container_gc.go:86] "Attempting to delete unused containers" Aug 13 00:39:44.651179 kubelet[2610]: I0813 00:39:44.650764 2610 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 00:39:44.660763 kubelet[2610]: I0813 00:39:44.660744 2610 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 00:39:44.661181 kubelet[2610]: I0813 00:39:44.660905 2610 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-628vf","kube-system/coredns-668d6bf9bc-qvgll","kube-system/coredns-668d6bf9bc-hmcbw","kube-system/cilium-rn4ll","kube-system/kube-controller-manager-172-234-21-106","kube-system/kube-proxy-thk97","kube-system/kube-apiserver-172-234-21-106","kube-system/kube-scheduler-172-234-21-106"] Aug 13 00:39:44.661181 kubelet[2610]: E0813 00:39:44.660935 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-628vf" Aug 13 00:39:44.661181 kubelet[2610]: E0813 00:39:44.660946 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-qvgll" Aug 13 00:39:44.661181 kubelet[2610]: E0813 00:39:44.660954 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-hmcbw" Aug 13 00:39:44.661181 kubelet[2610]: E0813 00:39:44.660964 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-rn4ll" Aug 13 00:39:44.661181 kubelet[2610]: E0813 00:39:44.660972 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-21-106" Aug 13 00:39:44.661181 kubelet[2610]: E0813 00:39:44.660980 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-thk97" Aug 13 00:39:44.661181 kubelet[2610]: E0813 00:39:44.660988 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-21-106" Aug 13 00:39:44.661181 kubelet[2610]: E0813 00:39:44.660996 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-21-106" Aug 13 00:39:44.661181 kubelet[2610]: I0813 00:39:44.661006 2610 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 00:39:45.958712 systemd[1]: Started sshd@36-172.234.21.106:22-139.178.89.65:37936.service - OpenSSH per-connection server daemon (139.178.89.65:37936). Aug 13 00:39:46.279277 sshd[4430]: Accepted publickey for core from 139.178.89.65 port 37936 ssh2: RSA SHA256:oQRmtR3l5PP4/Uo0wDjw46qIAkFbGTAbDRc2qvFX5SY Aug 13 00:39:46.280637 sshd-session[4430]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:39:46.285911 systemd-logind[1462]: New session 37 of user core. Aug 13 00:39:46.292924 systemd[1]: Started session-37.scope - Session 37 of User core. 
Aug 13 00:39:46.572898 sshd[4432]: Connection closed by 139.178.89.65 port 37936 Aug 13 00:39:46.573752 sshd-session[4430]: pam_unix(sshd:session): session closed for user core Aug 13 00:39:46.577615 systemd[1]: sshd@36-172.234.21.106:22-139.178.89.65:37936.service: Deactivated successfully. Aug 13 00:39:46.579687 systemd[1]: session-37.scope: Deactivated successfully. Aug 13 00:39:46.580381 systemd-logind[1462]: Session 37 logged out. Waiting for processes to exit. Aug 13 00:39:46.581588 systemd-logind[1462]: Removed session 37. Aug 13 00:39:47.652877 kubelet[2610]: E0813 00:39:47.652279 2610 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 00:39:51.639174 systemd[1]: Started sshd@37-172.234.21.106:22-139.178.89.65:38932.service - OpenSSH per-connection server daemon (139.178.89.65:38932). Aug 13 00:39:51.970630 sshd[4444]: Accepted publickey for core from 139.178.89.65 port 38932 ssh2: RSA SHA256:oQRmtR3l5PP4/Uo0wDjw46qIAkFbGTAbDRc2qvFX5SY Aug 13 00:39:51.971891 sshd-session[4444]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:39:51.976134 systemd-logind[1462]: New session 38 of user core. Aug 13 00:39:51.983958 systemd[1]: Started session-38.scope - Session 38 of User core. Aug 13 00:39:52.269546 sshd[4446]: Connection closed by 139.178.89.65 port 38932 Aug 13 00:39:52.270207 sshd-session[4444]: pam_unix(sshd:session): session closed for user core Aug 13 00:39:52.274121 systemd-logind[1462]: Session 38 logged out. Waiting for processes to exit. Aug 13 00:39:52.275120 systemd[1]: sshd@37-172.234.21.106:22-139.178.89.65:38932.service: Deactivated successfully. Aug 13 00:39:52.277653 systemd[1]: session-38.scope: Deactivated successfully. Aug 13 00:39:52.278833 systemd-logind[1462]: Removed session 38. 
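The recurring "Nameserver limits exceeded" errors mean the node's resolv.conf carries more nameservers than the three-entry resolver limit the kubelet enforces for pod DNS, so only the first three (172.232.0.19, 172.232.0.20, 172.232.0.15) are applied and the rest are dropped. A small sketch of that truncation, assuming a hypothetical maxNameservers constant of 3:

package main

import "fmt"

// maxNameservers is the three-entry resolver limit assumed here; the kubelet
// warning in the journal indicates more than this many were configured.
const maxNameservers = 3

// applyNameserverLimit keeps only the first maxNameservers entries and
// reports whether any had to be dropped.
func applyNameserverLimit(servers []string) (applied []string, truncated bool) {
	if len(servers) <= maxNameservers {
		return servers, false
	}
	return servers[:maxNameservers], true
}

func main() {
	// The first three addresses are the ones the log says were applied; the
	// fourth is a made-up example of an entry that would be omitted.
	servers := []string{"172.232.0.19", "172.232.0.20", "172.232.0.15", "192.0.2.53"}
	applied, truncated := applyNameserverLimit(servers)
	fmt.Println("applied:", applied, "truncated:", truncated)
}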
Aug 13 00:39:54.675988 kubelet[2610]: I0813 00:39:54.675948 2610 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 00:39:54.675988 kubelet[2610]: I0813 00:39:54.675986 2610 container_gc.go:86] "Attempting to delete unused containers" Aug 13 00:39:54.677274 kubelet[2610]: I0813 00:39:54.677259 2610 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 00:39:54.685781 kubelet[2610]: I0813 00:39:54.685765 2610 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 00:39:54.685881 kubelet[2610]: I0813 00:39:54.685868 2610 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-628vf","kube-system/coredns-668d6bf9bc-qvgll","kube-system/coredns-668d6bf9bc-hmcbw","kube-system/cilium-rn4ll","kube-system/kube-controller-manager-172-234-21-106","kube-system/kube-proxy-thk97","kube-system/kube-apiserver-172-234-21-106","kube-system/kube-scheduler-172-234-21-106"] Aug 13 00:39:54.685950 kubelet[2610]: E0813 00:39:54.685897 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-628vf" Aug 13 00:39:54.685950 kubelet[2610]: E0813 00:39:54.685907 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-qvgll" Aug 13 00:39:54.685950 kubelet[2610]: E0813 00:39:54.685916 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-hmcbw" Aug 13 00:39:54.685950 kubelet[2610]: E0813 00:39:54.685925 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-rn4ll" Aug 13 00:39:54.685950 kubelet[2610]: E0813 00:39:54.685933 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-21-106" Aug 13 00:39:54.685950 kubelet[2610]: E0813 00:39:54.685942 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-thk97" Aug 13 00:39:54.685950 kubelet[2610]: E0813 00:39:54.685950 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-21-106" Aug 13 00:39:54.686136 kubelet[2610]: E0813 00:39:54.685957 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-21-106" Aug 13 00:39:54.686136 kubelet[2610]: I0813 00:39:54.685967 2610 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 00:39:57.333004 systemd[1]: Started sshd@38-172.234.21.106:22-139.178.89.65:38946.service - OpenSSH per-connection server daemon (139.178.89.65:38946). Aug 13 00:39:57.652840 kubelet[2610]: E0813 00:39:57.652770 2610 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 00:39:57.658164 sshd[4457]: Accepted publickey for core from 139.178.89.65 port 38946 ssh2: RSA SHA256:oQRmtR3l5PP4/Uo0wDjw46qIAkFbGTAbDRc2qvFX5SY Aug 13 00:39:57.663341 sshd-session[4457]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:39:57.672685 systemd-logind[1462]: New session 39 of user core. Aug 13 00:39:57.676938 systemd[1]: Started session-39.scope - Session 39 of User core. 
Aug 13 00:39:57.950362 sshd[4459]: Connection closed by 139.178.89.65 port 38946 Aug 13 00:39:57.951172 sshd-session[4457]: pam_unix(sshd:session): session closed for user core Aug 13 00:39:57.955070 systemd[1]: sshd@38-172.234.21.106:22-139.178.89.65:38946.service: Deactivated successfully. Aug 13 00:39:57.957518 systemd[1]: session-39.scope: Deactivated successfully. Aug 13 00:39:57.958255 systemd-logind[1462]: Session 39 logged out. Waiting for processes to exit. Aug 13 00:39:57.959277 systemd-logind[1462]: Removed session 39. Aug 13 00:40:03.015998 systemd[1]: Started sshd@39-172.234.21.106:22-139.178.89.65:34214.service - OpenSSH per-connection server daemon (139.178.89.65:34214). Aug 13 00:40:03.345316 sshd[4473]: Accepted publickey for core from 139.178.89.65 port 34214 ssh2: RSA SHA256:oQRmtR3l5PP4/Uo0wDjw46qIAkFbGTAbDRc2qvFX5SY Aug 13 00:40:03.346693 sshd-session[4473]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:40:03.350629 systemd-logind[1462]: New session 40 of user core. Aug 13 00:40:03.354962 systemd[1]: Started session-40.scope - Session 40 of User core. Aug 13 00:40:03.647198 sshd[4475]: Connection closed by 139.178.89.65 port 34214 Aug 13 00:40:03.647895 sshd-session[4473]: pam_unix(sshd:session): session closed for user core Aug 13 00:40:03.652384 systemd[1]: sshd@39-172.234.21.106:22-139.178.89.65:34214.service: Deactivated successfully. Aug 13 00:40:03.655088 systemd[1]: session-40.scope: Deactivated successfully. Aug 13 00:40:03.657048 systemd-logind[1462]: Session 40 logged out. Waiting for processes to exit. Aug 13 00:40:03.657925 systemd-logind[1462]: Removed session 40. Aug 13 00:40:04.701723 kubelet[2610]: I0813 00:40:04.701679 2610 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 00:40:04.701723 kubelet[2610]: I0813 00:40:04.701716 2610 container_gc.go:86] "Attempting to delete unused containers" Aug 13 00:40:04.702897 kubelet[2610]: I0813 00:40:04.702879 2610 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 00:40:04.713457 kubelet[2610]: I0813 00:40:04.713438 2610 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 00:40:04.713567 kubelet[2610]: I0813 00:40:04.713549 2610 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-628vf","kube-system/coredns-668d6bf9bc-qvgll","kube-system/coredns-668d6bf9bc-hmcbw","kube-system/cilium-rn4ll","kube-system/kube-controller-manager-172-234-21-106","kube-system/kube-proxy-thk97","kube-system/kube-apiserver-172-234-21-106","kube-system/kube-scheduler-172-234-21-106"] Aug 13 00:40:04.713615 kubelet[2610]: E0813 00:40:04.713581 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-628vf" Aug 13 00:40:04.713615 kubelet[2610]: E0813 00:40:04.713592 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-qvgll" Aug 13 00:40:04.713615 kubelet[2610]: E0813 00:40:04.713600 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-hmcbw" Aug 13 00:40:04.713615 kubelet[2610]: E0813 00:40:04.713610 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-rn4ll" Aug 13 00:40:04.713615 kubelet[2610]: E0813 00:40:04.713617 2610 eviction_manager.go:609] "Eviction manager: 
cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-21-106" Aug 13 00:40:04.713615 kubelet[2610]: E0813 00:40:04.713625 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-thk97" Aug 13 00:40:04.713615 kubelet[2610]: E0813 00:40:04.713633 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-21-106" Aug 13 00:40:04.713951 kubelet[2610]: E0813 00:40:04.713640 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-21-106" Aug 13 00:40:04.713951 kubelet[2610]: I0813 00:40:04.713649 2610 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 00:40:08.706498 systemd[1]: Started sshd@40-172.234.21.106:22-139.178.89.65:34226.service - OpenSSH per-connection server daemon (139.178.89.65:34226). Aug 13 00:40:09.032647 sshd[4487]: Accepted publickey for core from 139.178.89.65 port 34226 ssh2: RSA SHA256:oQRmtR3l5PP4/Uo0wDjw46qIAkFbGTAbDRc2qvFX5SY Aug 13 00:40:09.034148 sshd-session[4487]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:40:09.038677 systemd-logind[1462]: New session 41 of user core. Aug 13 00:40:09.044081 systemd[1]: Started session-41.scope - Session 41 of User core. Aug 13 00:40:09.338609 sshd[4489]: Connection closed by 139.178.89.65 port 34226 Aug 13 00:40:09.339382 sshd-session[4487]: pam_unix(sshd:session): session closed for user core Aug 13 00:40:09.342335 systemd[1]: sshd@40-172.234.21.106:22-139.178.89.65:34226.service: Deactivated successfully. Aug 13 00:40:09.344638 systemd[1]: session-41.scope: Deactivated successfully. Aug 13 00:40:09.346428 systemd-logind[1462]: Session 41 logged out. Waiting for processes to exit. Aug 13 00:40:09.347578 systemd-logind[1462]: Removed session 41. Aug 13 00:40:14.406017 systemd[1]: Started sshd@41-172.234.21.106:22-139.178.89.65:36564.service - OpenSSH per-connection server daemon (139.178.89.65:36564). 
Aug 13 00:40:14.729872 kubelet[2610]: I0813 00:40:14.729753 2610 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 00:40:14.730278 kubelet[2610]: I0813 00:40:14.730261 2610 container_gc.go:86] "Attempting to delete unused containers" Aug 13 00:40:14.731265 kubelet[2610]: I0813 00:40:14.731248 2610 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 00:40:14.738459 sshd[4503]: Accepted publickey for core from 139.178.89.65 port 36564 ssh2: RSA SHA256:oQRmtR3l5PP4/Uo0wDjw46qIAkFbGTAbDRc2qvFX5SY Aug 13 00:40:14.740120 sshd-session[4503]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:40:14.741065 kubelet[2610]: I0813 00:40:14.740472 2610 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 00:40:14.741065 kubelet[2610]: I0813 00:40:14.740566 2610 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-628vf","kube-system/coredns-668d6bf9bc-qvgll","kube-system/coredns-668d6bf9bc-hmcbw","kube-system/cilium-rn4ll","kube-system/kube-controller-manager-172-234-21-106","kube-system/kube-proxy-thk97","kube-system/kube-apiserver-172-234-21-106","kube-system/kube-scheduler-172-234-21-106"] Aug 13 00:40:14.741065 kubelet[2610]: E0813 00:40:14.740593 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-628vf" Aug 13 00:40:14.741065 kubelet[2610]: E0813 00:40:14.740604 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-qvgll" Aug 13 00:40:14.741065 kubelet[2610]: E0813 00:40:14.740612 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-hmcbw" Aug 13 00:40:14.741065 kubelet[2610]: E0813 00:40:14.740620 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-rn4ll" Aug 13 00:40:14.741065 kubelet[2610]: E0813 00:40:14.740629 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-21-106" Aug 13 00:40:14.741065 kubelet[2610]: E0813 00:40:14.740637 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-thk97" Aug 13 00:40:14.741065 kubelet[2610]: E0813 00:40:14.740645 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-21-106" Aug 13 00:40:14.741065 kubelet[2610]: E0813 00:40:14.740652 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-21-106" Aug 13 00:40:14.741065 kubelet[2610]: I0813 00:40:14.740661 2610 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 00:40:14.745451 systemd-logind[1462]: New session 42 of user core. Aug 13 00:40:14.749368 systemd[1]: Started session-42.scope - Session 42 of User core. Aug 13 00:40:15.033783 sshd[4505]: Connection closed by 139.178.89.65 port 36564 Aug 13 00:40:15.035205 sshd-session[4503]: pam_unix(sshd:session): session closed for user core Aug 13 00:40:15.039542 systemd[1]: sshd@41-172.234.21.106:22-139.178.89.65:36564.service: Deactivated successfully. Aug 13 00:40:15.041643 systemd[1]: session-42.scope: Deactivated successfully. Aug 13 00:40:15.042576 systemd-logind[1462]: Session 42 logged out. 
Waiting for processes to exit. Aug 13 00:40:15.043628 systemd-logind[1462]: Removed session 42. Aug 13 00:40:20.103048 systemd[1]: Started sshd@42-172.234.21.106:22-139.178.89.65:58746.service - OpenSSH per-connection server daemon (139.178.89.65:58746). Aug 13 00:40:20.436852 sshd[4517]: Accepted publickey for core from 139.178.89.65 port 58746 ssh2: RSA SHA256:oQRmtR3l5PP4/Uo0wDjw46qIAkFbGTAbDRc2qvFX5SY Aug 13 00:40:20.438263 sshd-session[4517]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:40:20.442566 systemd-logind[1462]: New session 43 of user core. Aug 13 00:40:20.447929 systemd[1]: Started session-43.scope - Session 43 of User core. Aug 13 00:40:20.737520 sshd[4519]: Connection closed by 139.178.89.65 port 58746 Aug 13 00:40:20.738228 sshd-session[4517]: pam_unix(sshd:session): session closed for user core Aug 13 00:40:20.742170 systemd[1]: sshd@42-172.234.21.106:22-139.178.89.65:58746.service: Deactivated successfully. Aug 13 00:40:20.744422 systemd[1]: session-43.scope: Deactivated successfully. Aug 13 00:40:20.745582 systemd-logind[1462]: Session 43 logged out. Waiting for processes to exit. Aug 13 00:40:20.746755 systemd-logind[1462]: Removed session 43. Aug 13 00:40:24.756215 kubelet[2610]: I0813 00:40:24.756176 2610 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 00:40:24.756215 kubelet[2610]: I0813 00:40:24.756214 2610 container_gc.go:86] "Attempting to delete unused containers" Aug 13 00:40:24.757428 kubelet[2610]: I0813 00:40:24.757408 2610 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 00:40:24.766635 kubelet[2610]: I0813 00:40:24.766618 2610 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 00:40:24.766738 kubelet[2610]: I0813 00:40:24.766722 2610 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-628vf","kube-system/coredns-668d6bf9bc-qvgll","kube-system/coredns-668d6bf9bc-hmcbw","kube-system/cilium-rn4ll","kube-system/kube-controller-manager-172-234-21-106","kube-system/kube-proxy-thk97","kube-system/kube-apiserver-172-234-21-106","kube-system/kube-scheduler-172-234-21-106"] Aug 13 00:40:24.766767 kubelet[2610]: E0813 00:40:24.766750 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-628vf" Aug 13 00:40:24.766767 kubelet[2610]: E0813 00:40:24.766762 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-qvgll" Aug 13 00:40:24.766767 kubelet[2610]: E0813 00:40:24.766769 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-hmcbw" Aug 13 00:40:24.766862 kubelet[2610]: E0813 00:40:24.766784 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-rn4ll" Aug 13 00:40:24.766862 kubelet[2610]: E0813 00:40:24.766792 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-21-106" Aug 13 00:40:24.766862 kubelet[2610]: E0813 00:40:24.766799 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-thk97" Aug 13 00:40:24.766862 kubelet[2610]: E0813 00:40:24.766829 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" 
pod="kube-system/kube-apiserver-172-234-21-106" Aug 13 00:40:24.766862 kubelet[2610]: E0813 00:40:24.766837 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-21-106" Aug 13 00:40:24.766862 kubelet[2610]: I0813 00:40:24.766847 2610 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 00:40:25.808026 systemd[1]: Started sshd@43-172.234.21.106:22-139.178.89.65:58762.service - OpenSSH per-connection server daemon (139.178.89.65:58762). Aug 13 00:40:26.141684 sshd[4531]: Accepted publickey for core from 139.178.89.65 port 58762 ssh2: RSA SHA256:oQRmtR3l5PP4/Uo0wDjw46qIAkFbGTAbDRc2qvFX5SY Aug 13 00:40:26.143230 sshd-session[4531]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:40:26.147687 systemd-logind[1462]: New session 44 of user core. Aug 13 00:40:26.154922 systemd[1]: Started session-44.scope - Session 44 of User core. Aug 13 00:40:26.442312 sshd[4533]: Connection closed by 139.178.89.65 port 58762 Aug 13 00:40:26.443297 sshd-session[4531]: pam_unix(sshd:session): session closed for user core Aug 13 00:40:26.446966 systemd-logind[1462]: Session 44 logged out. Waiting for processes to exit. Aug 13 00:40:26.447734 systemd[1]: sshd@43-172.234.21.106:22-139.178.89.65:58762.service: Deactivated successfully. Aug 13 00:40:26.450033 systemd[1]: session-44.scope: Deactivated successfully. Aug 13 00:40:26.451071 systemd-logind[1462]: Removed session 44. Aug 13 00:40:31.511007 systemd[1]: Started sshd@44-172.234.21.106:22-139.178.89.65:35086.service - OpenSSH per-connection server daemon (139.178.89.65:35086). Aug 13 00:40:31.844120 sshd[4545]: Accepted publickey for core from 139.178.89.65 port 35086 ssh2: RSA SHA256:oQRmtR3l5PP4/Uo0wDjw46qIAkFbGTAbDRc2qvFX5SY Aug 13 00:40:31.845244 sshd-session[4545]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:40:31.849431 systemd-logind[1462]: New session 45 of user core. Aug 13 00:40:31.857929 systemd[1]: Started session-45.scope - Session 45 of User core. Aug 13 00:40:32.139466 sshd[4547]: Connection closed by 139.178.89.65 port 35086 Aug 13 00:40:32.140080 sshd-session[4545]: pam_unix(sshd:session): session closed for user core Aug 13 00:40:32.144863 systemd[1]: sshd@44-172.234.21.106:22-139.178.89.65:35086.service: Deactivated successfully. Aug 13 00:40:32.145121 systemd-logind[1462]: Session 45 logged out. Waiting for processes to exit. Aug 13 00:40:32.147249 systemd[1]: session-45.scope: Deactivated successfully. Aug 13 00:40:32.148290 systemd-logind[1462]: Removed session 45. Aug 13 00:40:33.644161 kubelet[2610]: I0813 00:40:33.644108 2610 image_gc_manager.go:383] "Disk usage on image filesystem is over the high threshold, trying to free bytes down to the low threshold" usage=88 highThreshold=85 amountToFree=155949465 lowThreshold=80 Aug 13 00:40:33.644161 kubelet[2610]: E0813 00:40:33.644153 2610 kubelet.go:1551] "Image garbage collection failed multiple times in a row" err="Failed to garbage collect required amount of images. Attempted to free 155949465 bytes, but only found 0 bytes eligible to free." 
Aug 13 00:40:33.652502 kubelet[2610]: E0813 00:40:33.651854 2610 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 00:40:34.783777 kubelet[2610]: I0813 00:40:34.783739 2610 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 00:40:34.783777 kubelet[2610]: I0813 00:40:34.783773 2610 container_gc.go:86] "Attempting to delete unused containers" Aug 13 00:40:34.785095 kubelet[2610]: I0813 00:40:34.785075 2610 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 00:40:34.798701 kubelet[2610]: I0813 00:40:34.798667 2610 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 00:40:34.798768 kubelet[2610]: I0813 00:40:34.798756 2610 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-628vf","kube-system/coredns-668d6bf9bc-qvgll","kube-system/coredns-668d6bf9bc-hmcbw","kube-system/cilium-rn4ll","kube-system/kube-controller-manager-172-234-21-106","kube-system/kube-proxy-thk97","kube-system/kube-apiserver-172-234-21-106","kube-system/kube-scheduler-172-234-21-106"] Aug 13 00:40:34.798826 kubelet[2610]: E0813 00:40:34.798782 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-628vf" Aug 13 00:40:34.798826 kubelet[2610]: E0813 00:40:34.798792 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-qvgll" Aug 13 00:40:34.798826 kubelet[2610]: E0813 00:40:34.798800 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-hmcbw" Aug 13 00:40:34.798896 kubelet[2610]: E0813 00:40:34.798836 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-rn4ll" Aug 13 00:40:34.798896 kubelet[2610]: E0813 00:40:34.798845 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-21-106" Aug 13 00:40:34.798896 kubelet[2610]: E0813 00:40:34.798852 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-thk97" Aug 13 00:40:34.798896 kubelet[2610]: E0813 00:40:34.798859 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-21-106" Aug 13 00:40:34.798896 kubelet[2610]: E0813 00:40:34.798867 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-21-106" Aug 13 00:40:34.798896 kubelet[2610]: I0813 00:40:34.798876 2610 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 00:40:37.202009 systemd[1]: Started sshd@45-172.234.21.106:22-139.178.89.65:35088.service - OpenSSH per-connection server daemon (139.178.89.65:35088). Aug 13 00:40:37.520692 sshd[4561]: Accepted publickey for core from 139.178.89.65 port 35088 ssh2: RSA SHA256:oQRmtR3l5PP4/Uo0wDjw46qIAkFbGTAbDRc2qvFX5SY Aug 13 00:40:37.522007 sshd-session[4561]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:40:37.526601 systemd-logind[1462]: New session 46 of user core. Aug 13 00:40:37.531122 systemd[1]: Started session-46.scope - Session 46 of User core. 
Aug 13 00:40:37.820493 sshd[4563]: Connection closed by 139.178.89.65 port 35088 Aug 13 00:40:37.821287 sshd-session[4561]: pam_unix(sshd:session): session closed for user core Aug 13 00:40:37.825016 systemd[1]: sshd@45-172.234.21.106:22-139.178.89.65:35088.service: Deactivated successfully. Aug 13 00:40:37.827059 systemd[1]: session-46.scope: Deactivated successfully. Aug 13 00:40:37.827673 systemd-logind[1462]: Session 46 logged out. Waiting for processes to exit. Aug 13 00:40:37.828721 systemd-logind[1462]: Removed session 46. Aug 13 00:40:42.885032 systemd[1]: Started sshd@46-172.234.21.106:22-139.178.89.65:53610.service - OpenSSH per-connection server daemon (139.178.89.65:53610). Aug 13 00:40:43.208581 sshd[4577]: Accepted publickey for core from 139.178.89.65 port 53610 ssh2: RSA SHA256:oQRmtR3l5PP4/Uo0wDjw46qIAkFbGTAbDRc2qvFX5SY Aug 13 00:40:43.209905 sshd-session[4577]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:40:43.214955 systemd-logind[1462]: New session 47 of user core. Aug 13 00:40:43.220927 systemd[1]: Started session-47.scope - Session 47 of User core. Aug 13 00:40:43.502167 sshd[4579]: Connection closed by 139.178.89.65 port 53610 Aug 13 00:40:43.502930 sshd-session[4577]: pam_unix(sshd:session): session closed for user core Aug 13 00:40:43.506281 systemd-logind[1462]: Session 47 logged out. Waiting for processes to exit. Aug 13 00:40:43.506984 systemd[1]: sshd@46-172.234.21.106:22-139.178.89.65:53610.service: Deactivated successfully. Aug 13 00:40:43.508944 systemd[1]: session-47.scope: Deactivated successfully. Aug 13 00:40:43.510190 systemd-logind[1462]: Removed session 47. Aug 13 00:40:44.652077 kubelet[2610]: E0813 00:40:44.652039 2610 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 00:40:44.814387 kubelet[2610]: I0813 00:40:44.814347 2610 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 00:40:44.814387 kubelet[2610]: I0813 00:40:44.814384 2610 container_gc.go:86] "Attempting to delete unused containers" Aug 13 00:40:44.815421 kubelet[2610]: I0813 00:40:44.815396 2610 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 00:40:44.824360 kubelet[2610]: I0813 00:40:44.824338 2610 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 00:40:44.824435 kubelet[2610]: I0813 00:40:44.824419 2610 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-628vf","kube-system/coredns-668d6bf9bc-qvgll","kube-system/coredns-668d6bf9bc-hmcbw","kube-system/cilium-rn4ll","kube-system/kube-controller-manager-172-234-21-106","kube-system/kube-proxy-thk97","kube-system/kube-apiserver-172-234-21-106","kube-system/kube-scheduler-172-234-21-106"] Aug 13 00:40:44.824479 kubelet[2610]: E0813 00:40:44.824449 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-628vf" Aug 13 00:40:44.824479 kubelet[2610]: E0813 00:40:44.824460 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-qvgll" Aug 13 00:40:44.824479 kubelet[2610]: E0813 00:40:44.824468 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-hmcbw" Aug 13 00:40:44.824479 
kubelet[2610]: E0813 00:40:44.824476 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-rn4ll" Aug 13 00:40:44.824566 kubelet[2610]: E0813 00:40:44.824484 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-21-106" Aug 13 00:40:44.824566 kubelet[2610]: E0813 00:40:44.824492 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-thk97" Aug 13 00:40:44.824566 kubelet[2610]: E0813 00:40:44.824500 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-21-106" Aug 13 00:40:44.824566 kubelet[2610]: E0813 00:40:44.824508 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-21-106" Aug 13 00:40:44.824566 kubelet[2610]: I0813 00:40:44.824516 2610 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 00:40:47.652592 kubelet[2610]: E0813 00:40:47.651878 2610 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 00:40:48.567291 systemd[1]: Started sshd@47-172.234.21.106:22-139.178.89.65:53626.service - OpenSSH per-connection server daemon (139.178.89.65:53626). Aug 13 00:40:48.652152 kubelet[2610]: E0813 00:40:48.652126 2610 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 00:40:48.902708 sshd[4592]: Accepted publickey for core from 139.178.89.65 port 53626 ssh2: RSA SHA256:oQRmtR3l5PP4/Uo0wDjw46qIAkFbGTAbDRc2qvFX5SY Aug 13 00:40:48.904168 sshd-session[4592]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:40:48.908788 systemd-logind[1462]: New session 48 of user core. Aug 13 00:40:48.913935 systemd[1]: Started session-48.scope - Session 48 of User core. Aug 13 00:40:49.194160 sshd[4594]: Connection closed by 139.178.89.65 port 53626 Aug 13 00:40:49.195146 sshd-session[4592]: pam_unix(sshd:session): session closed for user core Aug 13 00:40:49.198187 systemd[1]: sshd@47-172.234.21.106:22-139.178.89.65:53626.service: Deactivated successfully. Aug 13 00:40:49.200178 systemd[1]: session-48.scope: Deactivated successfully. Aug 13 00:40:49.202488 systemd-logind[1462]: Session 48 logged out. Waiting for processes to exit. Aug 13 00:40:49.204596 systemd-logind[1462]: Removed session 48. Aug 13 00:40:54.261031 systemd[1]: Started sshd@48-172.234.21.106:22-139.178.89.65:44052.service - OpenSSH per-connection server daemon (139.178.89.65:44052). Aug 13 00:40:54.578224 sshd[4606]: Accepted publickey for core from 139.178.89.65 port 44052 ssh2: RSA SHA256:oQRmtR3l5PP4/Uo0wDjw46qIAkFbGTAbDRc2qvFX5SY Aug 13 00:40:54.579691 sshd-session[4606]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:40:54.584106 systemd-logind[1462]: New session 49 of user core. Aug 13 00:40:54.588945 systemd[1]: Started session-49.scope - Session 49 of User core. 
Aug 13 00:40:54.840885 kubelet[2610]: I0813 00:40:54.840629 2610 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 00:40:54.840885 kubelet[2610]: I0813 00:40:54.840666 2610 container_gc.go:86] "Attempting to delete unused containers" Aug 13 00:40:54.842392 kubelet[2610]: I0813 00:40:54.842362 2610 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 00:40:54.852367 kubelet[2610]: I0813 00:40:54.852341 2610 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 00:40:54.852515 kubelet[2610]: I0813 00:40:54.852460 2610 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-628vf","kube-system/coredns-668d6bf9bc-qvgll","kube-system/coredns-668d6bf9bc-hmcbw","kube-system/cilium-rn4ll","kube-system/kube-controller-manager-172-234-21-106","kube-system/kube-proxy-thk97","kube-system/kube-apiserver-172-234-21-106","kube-system/kube-scheduler-172-234-21-106"] Aug 13 00:40:54.852515 kubelet[2610]: E0813 00:40:54.852492 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-628vf" Aug 13 00:40:54.852515 kubelet[2610]: E0813 00:40:54.852503 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-qvgll" Aug 13 00:40:54.852515 kubelet[2610]: E0813 00:40:54.852512 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-hmcbw" Aug 13 00:40:54.852614 kubelet[2610]: E0813 00:40:54.852521 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-rn4ll" Aug 13 00:40:54.852614 kubelet[2610]: E0813 00:40:54.852530 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-21-106" Aug 13 00:40:54.852614 kubelet[2610]: E0813 00:40:54.852538 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-thk97" Aug 13 00:40:54.852614 kubelet[2610]: E0813 00:40:54.852546 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-21-106" Aug 13 00:40:54.852614 kubelet[2610]: E0813 00:40:54.852554 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-21-106" Aug 13 00:40:54.852614 kubelet[2610]: I0813 00:40:54.852563 2610 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 00:40:54.871035 sshd[4608]: Connection closed by 139.178.89.65 port 44052 Aug 13 00:40:54.871490 sshd-session[4606]: pam_unix(sshd:session): session closed for user core Aug 13 00:40:54.875671 systemd[1]: sshd@48-172.234.21.106:22-139.178.89.65:44052.service: Deactivated successfully. Aug 13 00:40:54.878047 systemd[1]: session-49.scope: Deactivated successfully. Aug 13 00:40:54.878786 systemd-logind[1462]: Session 49 logged out. Waiting for processes to exit. Aug 13 00:40:54.880426 systemd-logind[1462]: Removed session 49. 
Aug 13 00:40:57.652702 kubelet[2610]: E0813 00:40:57.652290 2610 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 00:40:58.651950 kubelet[2610]: E0813 00:40:58.651921 2610 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 00:40:59.938019 systemd[1]: Started sshd@49-172.234.21.106:22-139.178.89.65:54742.service - OpenSSH per-connection server daemon (139.178.89.65:54742). Aug 13 00:41:00.261285 sshd[4620]: Accepted publickey for core from 139.178.89.65 port 54742 ssh2: RSA SHA256:oQRmtR3l5PP4/Uo0wDjw46qIAkFbGTAbDRc2qvFX5SY Aug 13 00:41:00.262470 sshd-session[4620]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:41:00.266826 systemd-logind[1462]: New session 50 of user core. Aug 13 00:41:00.274923 systemd[1]: Started session-50.scope - Session 50 of User core. Aug 13 00:41:00.553009 sshd[4622]: Connection closed by 139.178.89.65 port 54742 Aug 13 00:41:00.553877 sshd-session[4620]: pam_unix(sshd:session): session closed for user core Aug 13 00:41:00.557138 systemd-logind[1462]: Session 50 logged out. Waiting for processes to exit. Aug 13 00:41:00.557898 systemd[1]: sshd@49-172.234.21.106:22-139.178.89.65:54742.service: Deactivated successfully. Aug 13 00:41:00.560024 systemd[1]: session-50.scope: Deactivated successfully. Aug 13 00:41:00.561378 systemd-logind[1462]: Removed session 50. Aug 13 00:41:04.868409 kubelet[2610]: I0813 00:41:04.868333 2610 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 00:41:04.868409 kubelet[2610]: I0813 00:41:04.868370 2610 container_gc.go:86] "Attempting to delete unused containers" Aug 13 00:41:04.869597 kubelet[2610]: I0813 00:41:04.869577 2610 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 00:41:04.879305 kubelet[2610]: I0813 00:41:04.879279 2610 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 00:41:04.879398 kubelet[2610]: I0813 00:41:04.879377 2610 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-628vf","kube-system/coredns-668d6bf9bc-qvgll","kube-system/coredns-668d6bf9bc-hmcbw","kube-system/cilium-rn4ll","kube-system/kube-controller-manager-172-234-21-106","kube-system/kube-proxy-thk97","kube-system/kube-apiserver-172-234-21-106","kube-system/kube-scheduler-172-234-21-106"] Aug 13 00:41:04.879428 kubelet[2610]: E0813 00:41:04.879408 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-628vf" Aug 13 00:41:04.879428 kubelet[2610]: E0813 00:41:04.879420 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-qvgll" Aug 13 00:41:04.879480 kubelet[2610]: E0813 00:41:04.879428 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-hmcbw" Aug 13 00:41:04.879480 kubelet[2610]: E0813 00:41:04.879438 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-rn4ll" Aug 13 00:41:04.879480 kubelet[2610]: E0813 00:41:04.879446 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" 
pod="kube-system/kube-controller-manager-172-234-21-106" Aug 13 00:41:04.879480 kubelet[2610]: E0813 00:41:04.879454 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-thk97" Aug 13 00:41:04.879480 kubelet[2610]: E0813 00:41:04.879462 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-21-106" Aug 13 00:41:04.879480 kubelet[2610]: E0813 00:41:04.879470 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-21-106" Aug 13 00:41:04.879480 kubelet[2610]: I0813 00:41:04.879479 2610 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 00:41:05.619047 systemd[1]: Started sshd@50-172.234.21.106:22-139.178.89.65:54744.service - OpenSSH per-connection server daemon (139.178.89.65:54744). Aug 13 00:41:05.947389 sshd[4635]: Accepted publickey for core from 139.178.89.65 port 54744 ssh2: RSA SHA256:oQRmtR3l5PP4/Uo0wDjw46qIAkFbGTAbDRc2qvFX5SY Aug 13 00:41:05.948879 sshd-session[4635]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:41:05.952993 systemd-logind[1462]: New session 51 of user core. Aug 13 00:41:05.959941 systemd[1]: Started session-51.scope - Session 51 of User core. Aug 13 00:41:06.245322 sshd[4637]: Connection closed by 139.178.89.65 port 54744 Aug 13 00:41:06.246192 sshd-session[4635]: pam_unix(sshd:session): session closed for user core Aug 13 00:41:06.249962 systemd-logind[1462]: Session 51 logged out. Waiting for processes to exit. Aug 13 00:41:06.250766 systemd[1]: sshd@50-172.234.21.106:22-139.178.89.65:54744.service: Deactivated successfully. Aug 13 00:41:06.253127 systemd[1]: session-51.scope: Deactivated successfully. Aug 13 00:41:06.254116 systemd-logind[1462]: Removed session 51. Aug 13 00:41:11.312045 systemd[1]: Started sshd@51-172.234.21.106:22-139.178.89.65:45050.service - OpenSSH per-connection server daemon (139.178.89.65:45050). Aug 13 00:41:11.645295 sshd[4650]: Accepted publickey for core from 139.178.89.65 port 45050 ssh2: RSA SHA256:oQRmtR3l5PP4/Uo0wDjw46qIAkFbGTAbDRc2qvFX5SY Aug 13 00:41:11.646729 sshd-session[4650]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:41:11.651881 systemd-logind[1462]: New session 52 of user core. Aug 13 00:41:11.656113 systemd[1]: Started session-52.scope - Session 52 of User core. Aug 13 00:41:11.663676 kubelet[2610]: E0813 00:41:11.655914 2610 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 00:41:11.947730 sshd[4654]: Connection closed by 139.178.89.65 port 45050 Aug 13 00:41:11.948433 sshd-session[4650]: pam_unix(sshd:session): session closed for user core Aug 13 00:41:11.952054 systemd[1]: sshd@51-172.234.21.106:22-139.178.89.65:45050.service: Deactivated successfully. Aug 13 00:41:11.954326 systemd[1]: session-52.scope: Deactivated successfully. Aug 13 00:41:11.956120 systemd-logind[1462]: Session 52 logged out. Waiting for processes to exit. Aug 13 00:41:11.957373 systemd-logind[1462]: Removed session 52. 
Aug 13 00:41:14.894477 kubelet[2610]: I0813 00:41:14.894439 2610 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 00:41:14.894477 kubelet[2610]: I0813 00:41:14.894475 2610 container_gc.go:86] "Attempting to delete unused containers" Aug 13 00:41:14.895599 kubelet[2610]: I0813 00:41:14.895575 2610 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 00:41:14.904361 kubelet[2610]: I0813 00:41:14.904338 2610 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 00:41:14.904443 kubelet[2610]: I0813 00:41:14.904422 2610 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-628vf","kube-system/coredns-668d6bf9bc-qvgll","kube-system/coredns-668d6bf9bc-hmcbw","kube-system/cilium-rn4ll","kube-system/kube-controller-manager-172-234-21-106","kube-system/kube-proxy-thk97","kube-system/kube-apiserver-172-234-21-106","kube-system/kube-scheduler-172-234-21-106"] Aug 13 00:41:14.904479 kubelet[2610]: E0813 00:41:14.904451 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-628vf" Aug 13 00:41:14.904479 kubelet[2610]: E0813 00:41:14.904462 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-qvgll" Aug 13 00:41:14.904479 kubelet[2610]: E0813 00:41:14.904469 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-hmcbw" Aug 13 00:41:14.904479 kubelet[2610]: E0813 00:41:14.904478 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-rn4ll" Aug 13 00:41:14.904551 kubelet[2610]: E0813 00:41:14.904489 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-21-106" Aug 13 00:41:14.904551 kubelet[2610]: E0813 00:41:14.904499 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-thk97" Aug 13 00:41:14.904551 kubelet[2610]: E0813 00:41:14.904506 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-21-106" Aug 13 00:41:14.904551 kubelet[2610]: E0813 00:41:14.904513 2610 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-21-106" Aug 13 00:41:14.904551 kubelet[2610]: I0813 00:41:14.904522 2610 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 00:41:17.011058 systemd[1]: Started sshd@52-172.234.21.106:22-139.178.89.65:45062.service - OpenSSH per-connection server daemon (139.178.89.65:45062). Aug 13 00:41:17.335478 sshd[4666]: Accepted publickey for core from 139.178.89.65 port 45062 ssh2: RSA SHA256:oQRmtR3l5PP4/Uo0wDjw46qIAkFbGTAbDRc2qvFX5SY Aug 13 00:41:17.337610 sshd-session[4666]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:41:17.342608 systemd-logind[1462]: New session 53 of user core. Aug 13 00:41:17.346968 systemd[1]: Started session-53.scope - Session 53 of User core. Aug 13 00:41:17.641488 sshd[4668]: Connection closed by 139.178.89.65 port 45062 Aug 13 00:41:17.642117 sshd-session[4666]: pam_unix(sshd:session): session closed for user core Aug 13 00:41:17.645657 systemd[1]: sshd@52-172.234.21.106:22-139.178.89.65:45062.service: Deactivated successfully. 
Aug 13 00:41:17.648085 systemd[1]: session-53.scope: Deactivated successfully. Aug 13 00:41:17.649874 systemd-logind[1462]: Session 53 logged out. Waiting for processes to exit. Aug 13 00:41:17.651219 systemd-logind[1462]: Removed session 53. Aug 13 00:41:17.706083 systemd[1]: Started sshd@53-172.234.21.106:22-139.178.89.65:45064.service - OpenSSH per-connection server daemon (139.178.89.65:45064). Aug 13 00:41:18.033040 sshd[4680]: Accepted publickey for core from 139.178.89.65 port 45064 ssh2: RSA SHA256:oQRmtR3l5PP4/Uo0wDjw46qIAkFbGTAbDRc2qvFX5SY Aug 13 00:41:18.035163 sshd-session[4680]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:41:18.040459 systemd-logind[1462]: New session 54 of user core. Aug 13 00:41:18.047993 systemd[1]: Started session-54.scope - Session 54 of User core. Aug 13 00:41:19.516669 containerd[1479]: time="2025-08-13T00:41:19.516560842Z" level=info msg="StopContainer for \"e72f06b25ae00074344af6753fb0423253667a0015e52f35d4050f8b60cbe65d\" with timeout 30 (s)" Aug 13 00:41:19.520030 containerd[1479]: time="2025-08-13T00:41:19.520003694Z" level=info msg="Stop container \"e72f06b25ae00074344af6753fb0423253667a0015e52f35d4050f8b60cbe65d\" with signal terminated" Aug 13 00:41:19.529404 containerd[1479]: time="2025-08-13T00:41:19.529362842Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 13 00:41:19.535895 systemd[1]: cri-containerd-e72f06b25ae00074344af6753fb0423253667a0015e52f35d4050f8b60cbe65d.scope: Deactivated successfully. Aug 13 00:41:19.539975 containerd[1479]: time="2025-08-13T00:41:19.539942617Z" level=info msg="StopContainer for \"1a6c168293e43b6c34ad70146802b2a9187bbd4c824bee53163f2fc5abcfce65\" with timeout 2 (s)" Aug 13 00:41:19.540420 containerd[1479]: time="2025-08-13T00:41:19.540364437Z" level=info msg="Stop container \"1a6c168293e43b6c34ad70146802b2a9187bbd4c824bee53163f2fc5abcfce65\" with signal terminated" Aug 13 00:41:19.554371 systemd-networkd[1391]: lxc_health: Link DOWN Aug 13 00:41:19.554414 systemd-networkd[1391]: lxc_health: Lost carrier Aug 13 00:41:19.579297 systemd[1]: cri-containerd-1a6c168293e43b6c34ad70146802b2a9187bbd4c824bee53163f2fc5abcfce65.scope: Deactivated successfully. Aug 13 00:41:19.582223 systemd[1]: cri-containerd-1a6c168293e43b6c34ad70146802b2a9187bbd4c824bee53163f2fc5abcfce65.scope: Consumed 6.932s CPU time, 125.8M memory peak, 136K read from disk, 13.3M written to disk. Aug 13 00:41:19.588566 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e72f06b25ae00074344af6753fb0423253667a0015e52f35d4050f8b60cbe65d-rootfs.mount: Deactivated successfully. 
Aug 13 00:41:19.593101 containerd[1479]: time="2025-08-13T00:41:19.593058953Z" level=info msg="shim disconnected" id=e72f06b25ae00074344af6753fb0423253667a0015e52f35d4050f8b60cbe65d namespace=k8s.io Aug 13 00:41:19.593435 containerd[1479]: time="2025-08-13T00:41:19.593177302Z" level=warning msg="cleaning up after shim disconnected" id=e72f06b25ae00074344af6753fb0423253667a0015e52f35d4050f8b60cbe65d namespace=k8s.io Aug 13 00:41:19.593435 containerd[1479]: time="2025-08-13T00:41:19.593189032Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 00:41:19.612679 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1a6c168293e43b6c34ad70146802b2a9187bbd4c824bee53163f2fc5abcfce65-rootfs.mount: Deactivated successfully. Aug 13 00:41:19.616434 containerd[1479]: time="2025-08-13T00:41:19.616338549Z" level=info msg="StopContainer for \"e72f06b25ae00074344af6753fb0423253667a0015e52f35d4050f8b60cbe65d\" returns successfully" Aug 13 00:41:19.617312 containerd[1479]: time="2025-08-13T00:41:19.617288747Z" level=info msg="StopPodSandbox for \"b47050fd74853a639c63c00b6225e98a027d5d0b9eacd703fec4e25c9326452b\"" Aug 13 00:41:19.617466 containerd[1479]: time="2025-08-13T00:41:19.617316107Z" level=info msg="Container to stop \"e72f06b25ae00074344af6753fb0423253667a0015e52f35d4050f8b60cbe65d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 00:41:19.619582 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b47050fd74853a639c63c00b6225e98a027d5d0b9eacd703fec4e25c9326452b-shm.mount: Deactivated successfully. Aug 13 00:41:19.620210 containerd[1479]: time="2025-08-13T00:41:19.620045030Z" level=info msg="shim disconnected" id=1a6c168293e43b6c34ad70146802b2a9187bbd4c824bee53163f2fc5abcfce65 namespace=k8s.io Aug 13 00:41:19.620210 containerd[1479]: time="2025-08-13T00:41:19.620085180Z" level=warning msg="cleaning up after shim disconnected" id=1a6c168293e43b6c34ad70146802b2a9187bbd4c824bee53163f2fc5abcfce65 namespace=k8s.io Aug 13 00:41:19.620210 containerd[1479]: time="2025-08-13T00:41:19.620094060Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 00:41:19.626452 systemd[1]: cri-containerd-b47050fd74853a639c63c00b6225e98a027d5d0b9eacd703fec4e25c9326452b.scope: Deactivated successfully. 
Aug 13 00:41:19.641143 containerd[1479]: time="2025-08-13T00:41:19.641097260Z" level=info msg="StopContainer for \"1a6c168293e43b6c34ad70146802b2a9187bbd4c824bee53163f2fc5abcfce65\" returns successfully" Aug 13 00:41:19.641648 containerd[1479]: time="2025-08-13T00:41:19.641611049Z" level=info msg="StopPodSandbox for \"e616278a1e3701fdd7dde0606247c6c2ec72bf485196b9ac8356c7da85f1a3eb\"" Aug 13 00:41:19.641697 containerd[1479]: time="2025-08-13T00:41:19.641652139Z" level=info msg="Container to stop \"b8a4a5b5fd000ae1e336d6035592374f9ce0e60ae5f840fd5bbdf6994d828d36\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 00:41:19.641884 containerd[1479]: time="2025-08-13T00:41:19.641675159Z" level=info msg="Container to stop \"4f6b99724028ba307eeccb07b975ce209ee774cf105307493406930a5e80543c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 00:41:19.643656 containerd[1479]: time="2025-08-13T00:41:19.641796339Z" level=info msg="Container to stop \"1a6c168293e43b6c34ad70146802b2a9187bbd4c824bee53163f2fc5abcfce65\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 00:41:19.643656 containerd[1479]: time="2025-08-13T00:41:19.643643074Z" level=info msg="Container to stop \"f49b0c6b69a5d7891d8c7d948b1b9617caf1442cf1edbf5f4bc33a4c3a88bc45\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 00:41:19.643752 containerd[1479]: time="2025-08-13T00:41:19.643654144Z" level=info msg="Container to stop \"a9f19125b7185d4bf1b5f46a5abe2a3acd93d00418442a7ec04025683514fd4c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 00:41:19.647271 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e616278a1e3701fdd7dde0606247c6c2ec72bf485196b9ac8356c7da85f1a3eb-shm.mount: Deactivated successfully. Aug 13 00:41:19.658166 systemd[1]: cri-containerd-e616278a1e3701fdd7dde0606247c6c2ec72bf485196b9ac8356c7da85f1a3eb.scope: Deactivated successfully. 
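The teardown above follows the usual CRI order: each container is stopped with its grace timeout (a 30 s grace period for the first container, 2 s for the cilium agent's), its shim exits and is cleaned up, and only then is the owning pod sandbox stopped and its network torn down. A rough sketch of that sequence against a hypothetical runtime interface (the names are illustrative, not the real cri-api types):

package main

import (
	"context"
	"fmt"
	"time"
)

// criRuntime is a hypothetical stand-in for the CRI runtime service; it
// exists only to illustrate the stop order visible in the containerd
// messages above.
type criRuntime interface {
	StopContainer(ctx context.Context, containerID string, grace time.Duration) error
	StopPodSandbox(ctx context.Context, sandboxID string) error
}

// containerStop pairs a container ID with the grace period used to stop it.
type containerStop struct {
	id    string
	grace time.Duration
}

// stopPod stops each container with its grace period first, then stops the
// sandbox, mirroring the StopContainer -> StopPodSandbox sequence in the log.
func stopPod(ctx context.Context, rt criRuntime, sandboxID string, containers []containerStop) error {
	for _, c := range containers {
		if err := rt.StopContainer(ctx, c.id, c.grace); err != nil {
			return fmt.Errorf("stop container %s: %w", c.id, err)
		}
	}
	return rt.StopPodSandbox(ctx, sandboxID)
}

// loggingRuntime only prints what it would do; a real implementation would
// talk to containerd over its CRI socket.
type loggingRuntime struct{}

func (loggingRuntime) StopContainer(_ context.Context, id string, grace time.Duration) error {
	fmt.Printf("StopContainer %s with timeout %s\n", id, grace)
	return nil
}

func (loggingRuntime) StopPodSandbox(_ context.Context, id string) error {
	fmt.Printf("StopPodSandbox %s\n", id)
	return nil
}

func main() {
	ctx := context.Background()
	rt := loggingRuntime{}
	// IDs are shortened from the log; the grace periods match the 30 s and
	// 2 s timeouts shown there, one container per pod sandbox.
	_ = stopPod(ctx, rt, "b47050fd7485", []containerStop{{id: "e72f06b25ae0", grace: 30 * time.Second}})
	_ = stopPod(ctx, rt, "e616278a1e37", []containerStop{{id: "1a6c168293e4", grace: 2 * time.Second}})
}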
Aug 13 00:41:19.686919 containerd[1479]: time="2025-08-13T00:41:19.686868974Z" level=info msg="shim disconnected" id=b47050fd74853a639c63c00b6225e98a027d5d0b9eacd703fec4e25c9326452b namespace=k8s.io Aug 13 00:41:19.687209 containerd[1479]: time="2025-08-13T00:41:19.687191952Z" level=warning msg="cleaning up after shim disconnected" id=b47050fd74853a639c63c00b6225e98a027d5d0b9eacd703fec4e25c9326452b namespace=k8s.io Aug 13 00:41:19.687427 containerd[1479]: time="2025-08-13T00:41:19.687411502Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 00:41:19.697503 containerd[1479]: time="2025-08-13T00:41:19.697443569Z" level=info msg="shim disconnected" id=e616278a1e3701fdd7dde0606247c6c2ec72bf485196b9ac8356c7da85f1a3eb namespace=k8s.io Aug 13 00:41:19.697503 containerd[1479]: time="2025-08-13T00:41:19.697496589Z" level=warning msg="cleaning up after shim disconnected" id=e616278a1e3701fdd7dde0606247c6c2ec72bf485196b9ac8356c7da85f1a3eb namespace=k8s.io Aug 13 00:41:19.697679 containerd[1479]: time="2025-08-13T00:41:19.697506738Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 00:41:19.710046 containerd[1479]: time="2025-08-13T00:41:19.710023539Z" level=info msg="TearDown network for sandbox \"b47050fd74853a639c63c00b6225e98a027d5d0b9eacd703fec4e25c9326452b\" successfully" Aug 13 00:41:19.710139 containerd[1479]: time="2025-08-13T00:41:19.710124349Z" level=info msg="StopPodSandbox for \"b47050fd74853a639c63c00b6225e98a027d5d0b9eacd703fec4e25c9326452b\" returns successfully" Aug 13 00:41:19.716751 containerd[1479]: time="2025-08-13T00:41:19.716717273Z" level=info msg="TearDown network for sandbox \"e616278a1e3701fdd7dde0606247c6c2ec72bf485196b9ac8356c7da85f1a3eb\" successfully" Aug 13 00:41:19.716751 containerd[1479]: time="2025-08-13T00:41:19.716744813Z" level=info msg="StopPodSandbox for \"e616278a1e3701fdd7dde0606247c6c2ec72bf485196b9ac8356c7da85f1a3eb\" returns successfully" Aug 13 00:41:19.818159 kubelet[2610]: I0813 00:41:19.818063 2610 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/478772a7-5d49-49d8-8ca6-a2ff4e5373c1-cilium-cgroup\") pod \"478772a7-5d49-49d8-8ca6-a2ff4e5373c1\" (UID: \"478772a7-5d49-49d8-8ca6-a2ff4e5373c1\") " Aug 13 00:41:19.818590 kubelet[2610]: I0813 00:41:19.818571 2610 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/478772a7-5d49-49d8-8ca6-a2ff4e5373c1-xtables-lock\") pod \"478772a7-5d49-49d8-8ca6-a2ff4e5373c1\" (UID: \"478772a7-5d49-49d8-8ca6-a2ff4e5373c1\") " Aug 13 00:41:19.818722 kubelet[2610]: I0813 00:41:19.818689 2610 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/478772a7-5d49-49d8-8ca6-a2ff4e5373c1-clustermesh-secrets\") pod \"478772a7-5d49-49d8-8ca6-a2ff4e5373c1\" (UID: \"478772a7-5d49-49d8-8ca6-a2ff4e5373c1\") " Aug 13 00:41:19.818836 kubelet[2610]: I0813 00:41:19.818800 2610 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/478772a7-5d49-49d8-8ca6-a2ff4e5373c1-hubble-tls\") pod \"478772a7-5d49-49d8-8ca6-a2ff4e5373c1\" (UID: \"478772a7-5d49-49d8-8ca6-a2ff4e5373c1\") " Aug 13 00:41:19.818931 kubelet[2610]: I0813 00:41:19.818919 2610 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/478772a7-5d49-49d8-8ca6-a2ff4e5373c1-bpf-maps\") pod \"478772a7-5d49-49d8-8ca6-a2ff4e5373c1\" (UID: \"478772a7-5d49-49d8-8ca6-a2ff4e5373c1\") " Aug 13 00:41:19.819041 kubelet[2610]: I0813 00:41:19.819008 2610 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5bnw7\" (UniqueName: \"kubernetes.io/projected/478772a7-5d49-49d8-8ca6-a2ff4e5373c1-kube-api-access-5bnw7\") pod \"478772a7-5d49-49d8-8ca6-a2ff4e5373c1\" (UID: \"478772a7-5d49-49d8-8ca6-a2ff4e5373c1\") " Aug 13 00:41:19.819615 kubelet[2610]: I0813 00:41:19.819140 2610 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/478772a7-5d49-49d8-8ca6-a2ff4e5373c1-cni-path\") pod \"478772a7-5d49-49d8-8ca6-a2ff4e5373c1\" (UID: \"478772a7-5d49-49d8-8ca6-a2ff4e5373c1\") " Aug 13 00:41:19.819615 kubelet[2610]: I0813 00:41:19.819163 2610 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/478772a7-5d49-49d8-8ca6-a2ff4e5373c1-host-proc-sys-kernel\") pod \"478772a7-5d49-49d8-8ca6-a2ff4e5373c1\" (UID: \"478772a7-5d49-49d8-8ca6-a2ff4e5373c1\") " Aug 13 00:41:19.819615 kubelet[2610]: I0813 00:41:19.819178 2610 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/478772a7-5d49-49d8-8ca6-a2ff4e5373c1-hostproc\") pod \"478772a7-5d49-49d8-8ca6-a2ff4e5373c1\" (UID: \"478772a7-5d49-49d8-8ca6-a2ff4e5373c1\") " Aug 13 00:41:19.819615 kubelet[2610]: I0813 00:41:19.819191 2610 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/478772a7-5d49-49d8-8ca6-a2ff4e5373c1-etc-cni-netd\") pod \"478772a7-5d49-49d8-8ca6-a2ff4e5373c1\" (UID: \"478772a7-5d49-49d8-8ca6-a2ff4e5373c1\") " Aug 13 00:41:19.819615 kubelet[2610]: I0813 00:41:19.819210 2610 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/478772a7-5d49-49d8-8ca6-a2ff4e5373c1-cilium-config-path\") pod \"478772a7-5d49-49d8-8ca6-a2ff4e5373c1\" (UID: \"478772a7-5d49-49d8-8ca6-a2ff4e5373c1\") " Aug 13 00:41:19.819615 kubelet[2610]: I0813 00:41:19.819227 2610 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/478772a7-5d49-49d8-8ca6-a2ff4e5373c1-cilium-run\") pod \"478772a7-5d49-49d8-8ca6-a2ff4e5373c1\" (UID: \"478772a7-5d49-49d8-8ca6-a2ff4e5373c1\") " Aug 13 00:41:19.819851 kubelet[2610]: I0813 00:41:19.819241 2610 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/478772a7-5d49-49d8-8ca6-a2ff4e5373c1-host-proc-sys-net\") pod \"478772a7-5d49-49d8-8ca6-a2ff4e5373c1\" (UID: \"478772a7-5d49-49d8-8ca6-a2ff4e5373c1\") " Aug 13 00:41:19.819851 kubelet[2610]: I0813 00:41:19.819258 2610 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2946c\" (UniqueName: \"kubernetes.io/projected/77c4921b-346d-4f4e-9172-cbd21b4d587d-kube-api-access-2946c\") pod \"77c4921b-346d-4f4e-9172-cbd21b4d587d\" (UID: \"77c4921b-346d-4f4e-9172-cbd21b4d587d\") " Aug 13 00:41:19.819851 kubelet[2610]: I0813 00:41:19.819274 2610 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/77c4921b-346d-4f4e-9172-cbd21b4d587d-cilium-config-path\") pod \"77c4921b-346d-4f4e-9172-cbd21b4d587d\" (UID: \"77c4921b-346d-4f4e-9172-cbd21b4d587d\") " Aug 13 00:41:19.819851 kubelet[2610]: I0813 00:41:19.819288 2610 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/478772a7-5d49-49d8-8ca6-a2ff4e5373c1-lib-modules\") pod \"478772a7-5d49-49d8-8ca6-a2ff4e5373c1\" (UID: \"478772a7-5d49-49d8-8ca6-a2ff4e5373c1\") " Aug 13 00:41:19.821043 kubelet[2610]: I0813 00:41:19.818131 2610 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/478772a7-5d49-49d8-8ca6-a2ff4e5373c1-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "478772a7-5d49-49d8-8ca6-a2ff4e5373c1" (UID: "478772a7-5d49-49d8-8ca6-a2ff4e5373c1"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:41:19.821098 kubelet[2610]: I0813 00:41:19.819340 2610 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/478772a7-5d49-49d8-8ca6-a2ff4e5373c1-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "478772a7-5d49-49d8-8ca6-a2ff4e5373c1" (UID: "478772a7-5d49-49d8-8ca6-a2ff4e5373c1"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:41:19.821127 kubelet[2610]: I0813 00:41:19.821092 2610 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/478772a7-5d49-49d8-8ca6-a2ff4e5373c1-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "478772a7-5d49-49d8-8ca6-a2ff4e5373c1" (UID: "478772a7-5d49-49d8-8ca6-a2ff4e5373c1"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:41:19.821127 kubelet[2610]: I0813 00:41:19.821119 2610 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/478772a7-5d49-49d8-8ca6-a2ff4e5373c1-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "478772a7-5d49-49d8-8ca6-a2ff4e5373c1" (UID: "478772a7-5d49-49d8-8ca6-a2ff4e5373c1"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:41:19.821172 kubelet[2610]: I0813 00:41:19.821135 2610 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/478772a7-5d49-49d8-8ca6-a2ff4e5373c1-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "478772a7-5d49-49d8-8ca6-a2ff4e5373c1" (UID: "478772a7-5d49-49d8-8ca6-a2ff4e5373c1"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:41:19.821545 kubelet[2610]: I0813 00:41:19.821523 2610 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/478772a7-5d49-49d8-8ca6-a2ff4e5373c1-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "478772a7-5d49-49d8-8ca6-a2ff4e5373c1" (UID: "478772a7-5d49-49d8-8ca6-a2ff4e5373c1"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:41:19.825592 kubelet[2610]: I0813 00:41:19.821674 2610 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/478772a7-5d49-49d8-8ca6-a2ff4e5373c1-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "478772a7-5d49-49d8-8ca6-a2ff4e5373c1" (UID: "478772a7-5d49-49d8-8ca6-a2ff4e5373c1"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:41:19.825592 kubelet[2610]: I0813 00:41:19.822736 2610 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/478772a7-5d49-49d8-8ca6-a2ff4e5373c1-cni-path" (OuterVolumeSpecName: "cni-path") pod "478772a7-5d49-49d8-8ca6-a2ff4e5373c1" (UID: "478772a7-5d49-49d8-8ca6-a2ff4e5373c1"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:41:19.825592 kubelet[2610]: I0813 00:41:19.822751 2610 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/478772a7-5d49-49d8-8ca6-a2ff4e5373c1-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "478772a7-5d49-49d8-8ca6-a2ff4e5373c1" (UID: "478772a7-5d49-49d8-8ca6-a2ff4e5373c1"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:41:19.825592 kubelet[2610]: I0813 00:41:19.822764 2610 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/478772a7-5d49-49d8-8ca6-a2ff4e5373c1-hostproc" (OuterVolumeSpecName: "hostproc") pod "478772a7-5d49-49d8-8ca6-a2ff4e5373c1" (UID: "478772a7-5d49-49d8-8ca6-a2ff4e5373c1"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:41:19.827543 kubelet[2610]: I0813 00:41:19.827398 2610 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/478772a7-5d49-49d8-8ca6-a2ff4e5373c1-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "478772a7-5d49-49d8-8ca6-a2ff4e5373c1" (UID: "478772a7-5d49-49d8-8ca6-a2ff4e5373c1"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Aug 13 00:41:19.827543 kubelet[2610]: I0813 00:41:19.827506 2610 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/478772a7-5d49-49d8-8ca6-a2ff4e5373c1-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "478772a7-5d49-49d8-8ca6-a2ff4e5373c1" (UID: "478772a7-5d49-49d8-8ca6-a2ff4e5373c1"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 13 00:41:19.828781 kubelet[2610]: I0813 00:41:19.828762 2610 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77c4921b-346d-4f4e-9172-cbd21b4d587d-kube-api-access-2946c" (OuterVolumeSpecName: "kube-api-access-2946c") pod "77c4921b-346d-4f4e-9172-cbd21b4d587d" (UID: "77c4921b-346d-4f4e-9172-cbd21b4d587d"). InnerVolumeSpecName "kube-api-access-2946c". PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 13 00:41:19.830746 kubelet[2610]: I0813 00:41:19.829756 2610 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/478772a7-5d49-49d8-8ca6-a2ff4e5373c1-kube-api-access-5bnw7" (OuterVolumeSpecName: "kube-api-access-5bnw7") pod "478772a7-5d49-49d8-8ca6-a2ff4e5373c1" (UID: "478772a7-5d49-49d8-8ca6-a2ff4e5373c1"). InnerVolumeSpecName "kube-api-access-5bnw7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 13 00:41:19.830746 kubelet[2610]: I0813 00:41:19.830196 2610 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/478772a7-5d49-49d8-8ca6-a2ff4e5373c1-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "478772a7-5d49-49d8-8ca6-a2ff4e5373c1" (UID: "478772a7-5d49-49d8-8ca6-a2ff4e5373c1"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Aug 13 00:41:19.832763 kubelet[2610]: I0813 00:41:19.832740 2610 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/77c4921b-346d-4f4e-9172-cbd21b4d587d-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "77c4921b-346d-4f4e-9172-cbd21b4d587d" (UID: "77c4921b-346d-4f4e-9172-cbd21b4d587d"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Aug 13 00:41:19.920513 kubelet[2610]: I0813 00:41:19.920458 2610 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/478772a7-5d49-49d8-8ca6-a2ff4e5373c1-host-proc-sys-kernel\") on node \"172-234-21-106\" DevicePath \"\"" Aug 13 00:41:19.920513 kubelet[2610]: I0813 00:41:19.920486 2610 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/478772a7-5d49-49d8-8ca6-a2ff4e5373c1-etc-cni-netd\") on node \"172-234-21-106\" DevicePath \"\"" Aug 13 00:41:19.920513 kubelet[2610]: I0813 00:41:19.920497 2610 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/478772a7-5d49-49d8-8ca6-a2ff4e5373c1-cilium-config-path\") on node \"172-234-21-106\" DevicePath \"\"" Aug 13 00:41:19.920513 kubelet[2610]: I0813 00:41:19.920507 2610 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/478772a7-5d49-49d8-8ca6-a2ff4e5373c1-cilium-run\") on node \"172-234-21-106\" DevicePath \"\"" Aug 13 00:41:19.920513 kubelet[2610]: I0813 00:41:19.920516 2610 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/478772a7-5d49-49d8-8ca6-a2ff4e5373c1-hostproc\") on node \"172-234-21-106\" DevicePath \"\"" Aug 13 00:41:19.920706 kubelet[2610]: I0813 00:41:19.920525 2610 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2946c\" (UniqueName: \"kubernetes.io/projected/77c4921b-346d-4f4e-9172-cbd21b4d587d-kube-api-access-2946c\") on node \"172-234-21-106\" DevicePath \"\"" Aug 13 00:41:19.920706 kubelet[2610]: I0813 00:41:19.920534 2610 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/77c4921b-346d-4f4e-9172-cbd21b4d587d-cilium-config-path\") on node \"172-234-21-106\" DevicePath \"\"" Aug 13 00:41:19.920706 kubelet[2610]: I0813 00:41:19.920542 2610 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/478772a7-5d49-49d8-8ca6-a2ff4e5373c1-lib-modules\") on node \"172-234-21-106\" DevicePath \"\"" Aug 13 00:41:19.920706 kubelet[2610]: I0813 00:41:19.920550 2610 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/478772a7-5d49-49d8-8ca6-a2ff4e5373c1-host-proc-sys-net\") on node \"172-234-21-106\" DevicePath \"\"" Aug 13 00:41:19.920706 kubelet[2610]: I0813 00:41:19.920559 2610 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/478772a7-5d49-49d8-8ca6-a2ff4e5373c1-cilium-cgroup\") on node \"172-234-21-106\" DevicePath \"\"" Aug 13 00:41:19.920706 kubelet[2610]: I0813 00:41:19.920567 2610 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/478772a7-5d49-49d8-8ca6-a2ff4e5373c1-xtables-lock\") on node \"172-234-21-106\" DevicePath \"\"" 
Aug 13 00:41:19.920706 kubelet[2610]: I0813 00:41:19.920574 2610 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/478772a7-5d49-49d8-8ca6-a2ff4e5373c1-clustermesh-secrets\") on node \"172-234-21-106\" DevicePath \"\"" Aug 13 00:41:19.920706 kubelet[2610]: I0813 00:41:19.920582 2610 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/478772a7-5d49-49d8-8ca6-a2ff4e5373c1-hubble-tls\") on node \"172-234-21-106\" DevicePath \"\"" Aug 13 00:41:19.920946 kubelet[2610]: I0813 00:41:19.920590 2610 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/478772a7-5d49-49d8-8ca6-a2ff4e5373c1-bpf-maps\") on node \"172-234-21-106\" DevicePath \"\"" Aug 13 00:41:19.920946 kubelet[2610]: I0813 00:41:19.920598 2610 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5bnw7\" (UniqueName: \"kubernetes.io/projected/478772a7-5d49-49d8-8ca6-a2ff4e5373c1-kube-api-access-5bnw7\") on node \"172-234-21-106\" DevicePath \"\"" Aug 13 00:41:19.920946 kubelet[2610]: I0813 00:41:19.920606 2610 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/478772a7-5d49-49d8-8ca6-a2ff4e5373c1-cni-path\") on node \"172-234-21-106\" DevicePath \"\"" Aug 13 00:41:20.336528 kubelet[2610]: I0813 00:41:20.335719 2610 scope.go:117] "RemoveContainer" containerID="e72f06b25ae00074344af6753fb0423253667a0015e52f35d4050f8b60cbe65d" Aug 13 00:41:20.338695 containerd[1479]: time="2025-08-13T00:41:20.338652806Z" level=info msg="RemoveContainer for \"e72f06b25ae00074344af6753fb0423253667a0015e52f35d4050f8b60cbe65d\"" Aug 13 00:41:20.343855 containerd[1479]: time="2025-08-13T00:41:20.343096776Z" level=info msg="RemoveContainer for \"e72f06b25ae00074344af6753fb0423253667a0015e52f35d4050f8b60cbe65d\" returns successfully" Aug 13 00:41:20.343968 kubelet[2610]: I0813 00:41:20.343929 2610 scope.go:117] "RemoveContainer" containerID="e72f06b25ae00074344af6753fb0423253667a0015e52f35d4050f8b60cbe65d" Aug 13 00:41:20.344174 containerd[1479]: time="2025-08-13T00:41:20.344113573Z" level=error msg="ContainerStatus for \"e72f06b25ae00074344af6753fb0423253667a0015e52f35d4050f8b60cbe65d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e72f06b25ae00074344af6753fb0423253667a0015e52f35d4050f8b60cbe65d\": not found" Aug 13 00:41:20.345676 kubelet[2610]: E0813 00:41:20.344817 2610 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e72f06b25ae00074344af6753fb0423253667a0015e52f35d4050f8b60cbe65d\": not found" containerID="e72f06b25ae00074344af6753fb0423253667a0015e52f35d4050f8b60cbe65d" Aug 13 00:41:20.345676 kubelet[2610]: I0813 00:41:20.344847 2610 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e72f06b25ae00074344af6753fb0423253667a0015e52f35d4050f8b60cbe65d"} err="failed to get container status \"e72f06b25ae00074344af6753fb0423253667a0015e52f35d4050f8b60cbe65d\": rpc error: code = NotFound desc = an error occurred when try to find container \"e72f06b25ae00074344af6753fb0423253667a0015e52f35d4050f8b60cbe65d\": not found" Aug 13 00:41:20.345676 kubelet[2610]: I0813 00:41:20.344912 2610 scope.go:117] "RemoveContainer" containerID="1a6c168293e43b6c34ad70146802b2a9187bbd4c824bee53163f2fc5abcfce65" Aug 13 00:41:20.345989 systemd[1]: 
Removed slice kubepods-besteffort-pod77c4921b_346d_4f4e_9172_cbd21b4d587d.slice - libcontainer container kubepods-besteffort-pod77c4921b_346d_4f4e_9172_cbd21b4d587d.slice. Aug 13 00:41:20.348784 containerd[1479]: time="2025-08-13T00:41:20.348765362Z" level=info msg="RemoveContainer for \"1a6c168293e43b6c34ad70146802b2a9187bbd4c824bee53163f2fc5abcfce65\"" Aug 13 00:41:20.349625 systemd[1]: Removed slice kubepods-burstable-pod478772a7_5d49_49d8_8ca6_a2ff4e5373c1.slice - libcontainer container kubepods-burstable-pod478772a7_5d49_49d8_8ca6_a2ff4e5373c1.slice. Aug 13 00:41:20.349716 systemd[1]: kubepods-burstable-pod478772a7_5d49_49d8_8ca6_a2ff4e5373c1.slice: Consumed 7.035s CPU time, 126.2M memory peak, 136K read from disk, 15.3M written to disk. Aug 13 00:41:20.353661 containerd[1479]: time="2025-08-13T00:41:20.353636591Z" level=info msg="RemoveContainer for \"1a6c168293e43b6c34ad70146802b2a9187bbd4c824bee53163f2fc5abcfce65\" returns successfully" Aug 13 00:41:20.354251 kubelet[2610]: I0813 00:41:20.354229 2610 scope.go:117] "RemoveContainer" containerID="4f6b99724028ba307eeccb07b975ce209ee774cf105307493406930a5e80543c" Aug 13 00:41:20.355341 containerd[1479]: time="2025-08-13T00:41:20.355320837Z" level=info msg="RemoveContainer for \"4f6b99724028ba307eeccb07b975ce209ee774cf105307493406930a5e80543c\"" Aug 13 00:41:20.358247 containerd[1479]: time="2025-08-13T00:41:20.358229730Z" level=info msg="RemoveContainer for \"4f6b99724028ba307eeccb07b975ce209ee774cf105307493406930a5e80543c\" returns successfully" Aug 13 00:41:20.359118 kubelet[2610]: I0813 00:41:20.359077 2610 scope.go:117] "RemoveContainer" containerID="a9f19125b7185d4bf1b5f46a5abe2a3acd93d00418442a7ec04025683514fd4c" Aug 13 00:41:20.360089 containerd[1479]: time="2025-08-13T00:41:20.360011466Z" level=info msg="RemoveContainer for \"a9f19125b7185d4bf1b5f46a5abe2a3acd93d00418442a7ec04025683514fd4c\"" Aug 13 00:41:20.363050 containerd[1479]: time="2025-08-13T00:41:20.362932380Z" level=info msg="RemoveContainer for \"a9f19125b7185d4bf1b5f46a5abe2a3acd93d00418442a7ec04025683514fd4c\" returns successfully" Aug 13 00:41:20.363287 kubelet[2610]: I0813 00:41:20.363166 2610 scope.go:117] "RemoveContainer" containerID="f49b0c6b69a5d7891d8c7d948b1b9617caf1442cf1edbf5f4bc33a4c3a88bc45" Aug 13 00:41:20.364185 containerd[1479]: time="2025-08-13T00:41:20.364152796Z" level=info msg="RemoveContainer for \"f49b0c6b69a5d7891d8c7d948b1b9617caf1442cf1edbf5f4bc33a4c3a88bc45\"" Aug 13 00:41:20.367445 containerd[1479]: time="2025-08-13T00:41:20.367417158Z" level=info msg="RemoveContainer for \"f49b0c6b69a5d7891d8c7d948b1b9617caf1442cf1edbf5f4bc33a4c3a88bc45\" returns successfully" Aug 13 00:41:20.367916 kubelet[2610]: I0813 00:41:20.367895 2610 scope.go:117] "RemoveContainer" containerID="b8a4a5b5fd000ae1e336d6035592374f9ce0e60ae5f840fd5bbdf6994d828d36" Aug 13 00:41:20.368746 containerd[1479]: time="2025-08-13T00:41:20.368678056Z" level=info msg="RemoveContainer for \"b8a4a5b5fd000ae1e336d6035592374f9ce0e60ae5f840fd5bbdf6994d828d36\"" Aug 13 00:41:20.372491 containerd[1479]: time="2025-08-13T00:41:20.372277747Z" level=info msg="RemoveContainer for \"b8a4a5b5fd000ae1e336d6035592374f9ce0e60ae5f840fd5bbdf6994d828d36\" returns successfully" Aug 13 00:41:20.372749 kubelet[2610]: I0813 00:41:20.372698 2610 scope.go:117] "RemoveContainer" containerID="1a6c168293e43b6c34ad70146802b2a9187bbd4c824bee53163f2fc5abcfce65" Aug 13 00:41:20.373062 containerd[1479]: time="2025-08-13T00:41:20.373003116Z" level=error msg="ContainerStatus for 
\"1a6c168293e43b6c34ad70146802b2a9187bbd4c824bee53163f2fc5abcfce65\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1a6c168293e43b6c34ad70146802b2a9187bbd4c824bee53163f2fc5abcfce65\": not found" Aug 13 00:41:20.373267 kubelet[2610]: E0813 00:41:20.373247 2610 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1a6c168293e43b6c34ad70146802b2a9187bbd4c824bee53163f2fc5abcfce65\": not found" containerID="1a6c168293e43b6c34ad70146802b2a9187bbd4c824bee53163f2fc5abcfce65" Aug 13 00:41:20.373599 kubelet[2610]: I0813 00:41:20.373404 2610 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1a6c168293e43b6c34ad70146802b2a9187bbd4c824bee53163f2fc5abcfce65"} err="failed to get container status \"1a6c168293e43b6c34ad70146802b2a9187bbd4c824bee53163f2fc5abcfce65\": rpc error: code = NotFound desc = an error occurred when try to find container \"1a6c168293e43b6c34ad70146802b2a9187bbd4c824bee53163f2fc5abcfce65\": not found" Aug 13 00:41:20.373599 kubelet[2610]: I0813 00:41:20.373423 2610 scope.go:117] "RemoveContainer" containerID="4f6b99724028ba307eeccb07b975ce209ee774cf105307493406930a5e80543c" Aug 13 00:41:20.373667 containerd[1479]: time="2025-08-13T00:41:20.373536815Z" level=error msg="ContainerStatus for \"4f6b99724028ba307eeccb07b975ce209ee774cf105307493406930a5e80543c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4f6b99724028ba307eeccb07b975ce209ee774cf105307493406930a5e80543c\": not found" Aug 13 00:41:20.373800 kubelet[2610]: E0813 00:41:20.373736 2610 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4f6b99724028ba307eeccb07b975ce209ee774cf105307493406930a5e80543c\": not found" containerID="4f6b99724028ba307eeccb07b975ce209ee774cf105307493406930a5e80543c" Aug 13 00:41:20.373800 kubelet[2610]: I0813 00:41:20.373756 2610 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4f6b99724028ba307eeccb07b975ce209ee774cf105307493406930a5e80543c"} err="failed to get container status \"4f6b99724028ba307eeccb07b975ce209ee774cf105307493406930a5e80543c\": rpc error: code = NotFound desc = an error occurred when try to find container \"4f6b99724028ba307eeccb07b975ce209ee774cf105307493406930a5e80543c\": not found" Aug 13 00:41:20.373800 kubelet[2610]: I0813 00:41:20.373768 2610 scope.go:117] "RemoveContainer" containerID="a9f19125b7185d4bf1b5f46a5abe2a3acd93d00418442a7ec04025683514fd4c" Aug 13 00:41:20.374594 containerd[1479]: time="2025-08-13T00:41:20.374554272Z" level=error msg="ContainerStatus for \"a9f19125b7185d4bf1b5f46a5abe2a3acd93d00418442a7ec04025683514fd4c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a9f19125b7185d4bf1b5f46a5abe2a3acd93d00418442a7ec04025683514fd4c\": not found" Aug 13 00:41:20.374884 kubelet[2610]: E0813 00:41:20.374743 2610 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a9f19125b7185d4bf1b5f46a5abe2a3acd93d00418442a7ec04025683514fd4c\": not found" containerID="a9f19125b7185d4bf1b5f46a5abe2a3acd93d00418442a7ec04025683514fd4c" Aug 13 00:41:20.374884 kubelet[2610]: I0813 00:41:20.374759 2610 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"a9f19125b7185d4bf1b5f46a5abe2a3acd93d00418442a7ec04025683514fd4c"} err="failed to get container status \"a9f19125b7185d4bf1b5f46a5abe2a3acd93d00418442a7ec04025683514fd4c\": rpc error: code = NotFound desc = an error occurred when try to find container \"a9f19125b7185d4bf1b5f46a5abe2a3acd93d00418442a7ec04025683514fd4c\": not found" Aug 13 00:41:20.374884 kubelet[2610]: I0813 00:41:20.374783 2610 scope.go:117] "RemoveContainer" containerID="f49b0c6b69a5d7891d8c7d948b1b9617caf1442cf1edbf5f4bc33a4c3a88bc45" Aug 13 00:41:20.375076 containerd[1479]: time="2025-08-13T00:41:20.375047661Z" level=error msg="ContainerStatus for \"f49b0c6b69a5d7891d8c7d948b1b9617caf1442cf1edbf5f4bc33a4c3a88bc45\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f49b0c6b69a5d7891d8c7d948b1b9617caf1442cf1edbf5f4bc33a4c3a88bc45\": not found" Aug 13 00:41:20.375182 kubelet[2610]: E0813 00:41:20.375160 2610 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f49b0c6b69a5d7891d8c7d948b1b9617caf1442cf1edbf5f4bc33a4c3a88bc45\": not found" containerID="f49b0c6b69a5d7891d8c7d948b1b9617caf1442cf1edbf5f4bc33a4c3a88bc45" Aug 13 00:41:20.375269 kubelet[2610]: I0813 00:41:20.375176 2610 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f49b0c6b69a5d7891d8c7d948b1b9617caf1442cf1edbf5f4bc33a4c3a88bc45"} err="failed to get container status \"f49b0c6b69a5d7891d8c7d948b1b9617caf1442cf1edbf5f4bc33a4c3a88bc45\": rpc error: code = NotFound desc = an error occurred when try to find container \"f49b0c6b69a5d7891d8c7d948b1b9617caf1442cf1edbf5f4bc33a4c3a88bc45\": not found" Aug 13 00:41:20.375269 kubelet[2610]: I0813 00:41:20.375256 2610 scope.go:117] "RemoveContainer" containerID="b8a4a5b5fd000ae1e336d6035592374f9ce0e60ae5f840fd5bbdf6994d828d36" Aug 13 00:41:20.375417 containerd[1479]: time="2025-08-13T00:41:20.375391159Z" level=error msg="ContainerStatus for \"b8a4a5b5fd000ae1e336d6035592374f9ce0e60ae5f840fd5bbdf6994d828d36\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b8a4a5b5fd000ae1e336d6035592374f9ce0e60ae5f840fd5bbdf6994d828d36\": not found" Aug 13 00:41:20.375545 kubelet[2610]: E0813 00:41:20.375522 2610 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b8a4a5b5fd000ae1e336d6035592374f9ce0e60ae5f840fd5bbdf6994d828d36\": not found" containerID="b8a4a5b5fd000ae1e336d6035592374f9ce0e60ae5f840fd5bbdf6994d828d36" Aug 13 00:41:20.375596 kubelet[2610]: I0813 00:41:20.375546 2610 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b8a4a5b5fd000ae1e336d6035592374f9ce0e60ae5f840fd5bbdf6994d828d36"} err="failed to get container status \"b8a4a5b5fd000ae1e336d6035592374f9ce0e60ae5f840fd5bbdf6994d828d36\": rpc error: code = NotFound desc = an error occurred when try to find container \"b8a4a5b5fd000ae1e336d6035592374f9ce0e60ae5f840fd5bbdf6994d828d36\": not found" Aug 13 00:41:20.513554 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b47050fd74853a639c63c00b6225e98a027d5d0b9eacd703fec4e25c9326452b-rootfs.mount: Deactivated successfully. 
Aug 13 00:41:20.513671 systemd[1]: var-lib-kubelet-pods-77c4921b\x2d346d\x2d4f4e\x2d9172\x2dcbd21b4d587d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2946c.mount: Deactivated successfully. Aug 13 00:41:20.513754 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e616278a1e3701fdd7dde0606247c6c2ec72bf485196b9ac8356c7da85f1a3eb-rootfs.mount: Deactivated successfully. Aug 13 00:41:20.513862 systemd[1]: var-lib-kubelet-pods-478772a7\x2d5d49\x2d49d8\x2d8ca6\x2da2ff4e5373c1-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d5bnw7.mount: Deactivated successfully. Aug 13 00:41:20.513947 systemd[1]: var-lib-kubelet-pods-478772a7\x2d5d49\x2d49d8\x2d8ca6\x2da2ff4e5373c1-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Aug 13 00:41:20.514022 systemd[1]: var-lib-kubelet-pods-478772a7\x2d5d49\x2d49d8\x2d8ca6\x2da2ff4e5373c1-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Aug 13 00:41:21.515665 sshd[4682]: Connection closed by 139.178.89.65 port 45064 Aug 13 00:41:21.516400 sshd-session[4680]: pam_unix(sshd:session): session closed for user core Aug 13 00:41:21.520488 systemd[1]: sshd@53-172.234.21.106:22-139.178.89.65:45064.service: Deactivated successfully. Aug 13 00:41:21.522468 systemd[1]: session-54.scope: Deactivated successfully. Aug 13 00:41:21.523449 systemd-logind[1462]: Session 54 logged out. Waiting for processes to exit. Aug 13 00:41:21.524573 systemd-logind[1462]: Removed session 54. Aug 13 00:41:21.583010 systemd[1]: Started sshd@54-172.234.21.106:22-139.178.89.65:56732.service - OpenSSH per-connection server daemon (139.178.89.65:56732). Aug 13 00:41:21.654354 kubelet[2610]: I0813 00:41:21.654315 2610 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="478772a7-5d49-49d8-8ca6-a2ff4e5373c1" path="/var/lib/kubelet/pods/478772a7-5d49-49d8-8ca6-a2ff4e5373c1/volumes" Aug 13 00:41:21.655105 kubelet[2610]: I0813 00:41:21.655084 2610 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="77c4921b-346d-4f4e-9172-cbd21b4d587d" path="/var/lib/kubelet/pods/77c4921b-346d-4f4e-9172-cbd21b4d587d/volumes" Aug 13 00:41:21.911563 sshd[4848]: Accepted publickey for core from 139.178.89.65 port 56732 ssh2: RSA SHA256:oQRmtR3l5PP4/Uo0wDjw46qIAkFbGTAbDRc2qvFX5SY Aug 13 00:41:21.913248 sshd-session[4848]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:41:21.918116 systemd-logind[1462]: New session 55 of user core. Aug 13 00:41:21.925938 systemd[1]: Started session-55.scope - Session 55 of User core. Aug 13 00:41:22.518513 kubelet[2610]: I0813 00:41:22.517640 2610 memory_manager.go:355] "RemoveStaleState removing state" podUID="478772a7-5d49-49d8-8ca6-a2ff4e5373c1" containerName="cilium-agent" Aug 13 00:41:22.518513 kubelet[2610]: I0813 00:41:22.517665 2610 memory_manager.go:355] "RemoveStaleState removing state" podUID="77c4921b-346d-4f4e-9172-cbd21b4d587d" containerName="cilium-operator" Aug 13 00:41:22.527536 systemd[1]: Created slice kubepods-burstable-pod2a2a3526_398e_4c70_9c02_a98b28c8e95d.slice - libcontainer container kubepods-burstable-pod2a2a3526_398e_4c70_9c02_a98b28c8e95d.slice. 
Aug 13 00:41:22.529523 kubelet[2610]: I0813 00:41:22.529483 2610 status_manager.go:890] "Failed to get status for pod" podUID="2a2a3526-398e-4c70-9c02-a98b28c8e95d" pod="kube-system/cilium-8gccx" err="pods \"cilium-8gccx\" is forbidden: User \"system:node:172-234-21-106\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node '172-234-21-106' and this object" Aug 13 00:41:22.554484 sshd[4850]: Connection closed by 139.178.89.65 port 56732 Aug 13 00:41:22.555202 sshd-session[4848]: pam_unix(sshd:session): session closed for user core Aug 13 00:41:22.559085 systemd[1]: sshd@54-172.234.21.106:22-139.178.89.65:56732.service: Deactivated successfully. Aug 13 00:41:22.561063 systemd[1]: session-55.scope: Deactivated successfully. Aug 13 00:41:22.562915 systemd-logind[1462]: Session 55 logged out. Waiting for processes to exit. Aug 13 00:41:22.564344 systemd-logind[1462]: Removed session 55. Aug 13 00:41:22.611107 systemd[1]: Started sshd@55-172.234.21.106:22-139.178.89.65:56748.service - OpenSSH per-connection server daemon (139.178.89.65:56748). Aug 13 00:41:22.637165 kubelet[2610]: I0813 00:41:22.637134 2610 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2a2a3526-398e-4c70-9c02-a98b28c8e95d-cni-path\") pod \"cilium-8gccx\" (UID: \"2a2a3526-398e-4c70-9c02-a98b28c8e95d\") " pod="kube-system/cilium-8gccx" Aug 13 00:41:22.637246 kubelet[2610]: I0813 00:41:22.637164 2610 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2a2a3526-398e-4c70-9c02-a98b28c8e95d-cilium-config-path\") pod \"cilium-8gccx\" (UID: \"2a2a3526-398e-4c70-9c02-a98b28c8e95d\") " pod="kube-system/cilium-8gccx" Aug 13 00:41:22.637246 kubelet[2610]: I0813 00:41:22.637183 2610 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/2a2a3526-398e-4c70-9c02-a98b28c8e95d-cilium-ipsec-secrets\") pod \"cilium-8gccx\" (UID: \"2a2a3526-398e-4c70-9c02-a98b28c8e95d\") " pod="kube-system/cilium-8gccx" Aug 13 00:41:22.637246 kubelet[2610]: I0813 00:41:22.637197 2610 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2a2a3526-398e-4c70-9c02-a98b28c8e95d-host-proc-sys-net\") pod \"cilium-8gccx\" (UID: \"2a2a3526-398e-4c70-9c02-a98b28c8e95d\") " pod="kube-system/cilium-8gccx" Aug 13 00:41:22.637246 kubelet[2610]: I0813 00:41:22.637211 2610 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2a2a3526-398e-4c70-9c02-a98b28c8e95d-cilium-cgroup\") pod \"cilium-8gccx\" (UID: \"2a2a3526-398e-4c70-9c02-a98b28c8e95d\") " pod="kube-system/cilium-8gccx" Aug 13 00:41:22.637246 kubelet[2610]: I0813 00:41:22.637225 2610 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2a2a3526-398e-4c70-9c02-a98b28c8e95d-lib-modules\") pod \"cilium-8gccx\" (UID: \"2a2a3526-398e-4c70-9c02-a98b28c8e95d\") " pod="kube-system/cilium-8gccx" Aug 13 00:41:22.637246 kubelet[2610]: I0813 00:41:22.637236 2610 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" 
(UniqueName: \"kubernetes.io/host-path/2a2a3526-398e-4c70-9c02-a98b28c8e95d-xtables-lock\") pod \"cilium-8gccx\" (UID: \"2a2a3526-398e-4c70-9c02-a98b28c8e95d\") " pod="kube-system/cilium-8gccx" Aug 13 00:41:22.637381 kubelet[2610]: I0813 00:41:22.637250 2610 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2a2a3526-398e-4c70-9c02-a98b28c8e95d-clustermesh-secrets\") pod \"cilium-8gccx\" (UID: \"2a2a3526-398e-4c70-9c02-a98b28c8e95d\") " pod="kube-system/cilium-8gccx" Aug 13 00:41:22.637381 kubelet[2610]: I0813 00:41:22.637263 2610 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2a2a3526-398e-4c70-9c02-a98b28c8e95d-host-proc-sys-kernel\") pod \"cilium-8gccx\" (UID: \"2a2a3526-398e-4c70-9c02-a98b28c8e95d\") " pod="kube-system/cilium-8gccx" Aug 13 00:41:22.637381 kubelet[2610]: I0813 00:41:22.637279 2610 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vllzt\" (UniqueName: \"kubernetes.io/projected/2a2a3526-398e-4c70-9c02-a98b28c8e95d-kube-api-access-vllzt\") pod \"cilium-8gccx\" (UID: \"2a2a3526-398e-4c70-9c02-a98b28c8e95d\") " pod="kube-system/cilium-8gccx" Aug 13 00:41:22.637381 kubelet[2610]: I0813 00:41:22.637291 2610 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2a2a3526-398e-4c70-9c02-a98b28c8e95d-hubble-tls\") pod \"cilium-8gccx\" (UID: \"2a2a3526-398e-4c70-9c02-a98b28c8e95d\") " pod="kube-system/cilium-8gccx" Aug 13 00:41:22.637381 kubelet[2610]: I0813 00:41:22.637318 2610 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2a2a3526-398e-4c70-9c02-a98b28c8e95d-etc-cni-netd\") pod \"cilium-8gccx\" (UID: \"2a2a3526-398e-4c70-9c02-a98b28c8e95d\") " pod="kube-system/cilium-8gccx" Aug 13 00:41:22.637381 kubelet[2610]: I0813 00:41:22.637342 2610 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2a2a3526-398e-4c70-9c02-a98b28c8e95d-bpf-maps\") pod \"cilium-8gccx\" (UID: \"2a2a3526-398e-4c70-9c02-a98b28c8e95d\") " pod="kube-system/cilium-8gccx" Aug 13 00:41:22.637509 kubelet[2610]: I0813 00:41:22.637368 2610 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2a2a3526-398e-4c70-9c02-a98b28c8e95d-cilium-run\") pod \"cilium-8gccx\" (UID: \"2a2a3526-398e-4c70-9c02-a98b28c8e95d\") " pod="kube-system/cilium-8gccx" Aug 13 00:41:22.637509 kubelet[2610]: I0813 00:41:22.637393 2610 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2a2a3526-398e-4c70-9c02-a98b28c8e95d-hostproc\") pod \"cilium-8gccx\" (UID: \"2a2a3526-398e-4c70-9c02-a98b28c8e95d\") " pod="kube-system/cilium-8gccx" Aug 13 00:41:22.651726 kubelet[2610]: E0813 00:41:22.651707 2610 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 00:41:22.832052 kubelet[2610]: E0813 00:41:22.831041 2610 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 00:41:22.832575 containerd[1479]: time="2025-08-13T00:41:22.831602398Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8gccx,Uid:2a2a3526-398e-4c70-9c02-a98b28c8e95d,Namespace:kube-system,Attempt:0,}" Aug 13 00:41:22.851954 containerd[1479]: time="2025-08-13T00:41:22.851895480Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:41:22.852064 containerd[1479]: time="2025-08-13T00:41:22.851975070Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:41:22.852534 containerd[1479]: time="2025-08-13T00:41:22.852405879Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:41:22.852534 containerd[1479]: time="2025-08-13T00:41:22.852475308Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:41:22.873930 systemd[1]: Started cri-containerd-1b1506a75e5da8c8e00b8a813ac32f3923896a513642ec9a8070146b9bfa4d4e.scope - libcontainer container 1b1506a75e5da8c8e00b8a813ac32f3923896a513642ec9a8070146b9bfa4d4e. Aug 13 00:41:22.896521 containerd[1479]: time="2025-08-13T00:41:22.896276816Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8gccx,Uid:2a2a3526-398e-4c70-9c02-a98b28c8e95d,Namespace:kube-system,Attempt:0,} returns sandbox id \"1b1506a75e5da8c8e00b8a813ac32f3923896a513642ec9a8070146b9bfa4d4e\"" Aug 13 00:41:22.897262 kubelet[2610]: E0813 00:41:22.897137 2610 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 00:41:22.899440 containerd[1479]: time="2025-08-13T00:41:22.899317728Z" level=info msg="CreateContainer within sandbox \"1b1506a75e5da8c8e00b8a813ac32f3923896a513642ec9a8070146b9bfa4d4e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Aug 13 00:41:22.907570 containerd[1479]: time="2025-08-13T00:41:22.907549389Z" level=info msg="CreateContainer within sandbox \"1b1506a75e5da8c8e00b8a813ac32f3923896a513642ec9a8070146b9bfa4d4e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"4d6863dbbbf0bbd0ba5eb9fb1c76cbf59190c5065091469809999d4eceaaabd7\"" Aug 13 00:41:22.908658 containerd[1479]: time="2025-08-13T00:41:22.907970788Z" level=info msg="StartContainer for \"4d6863dbbbf0bbd0ba5eb9fb1c76cbf59190c5065091469809999d4eceaaabd7\"" Aug 13 00:41:22.933959 systemd[1]: Started cri-containerd-4d6863dbbbf0bbd0ba5eb9fb1c76cbf59190c5065091469809999d4eceaaabd7.scope - libcontainer container 4d6863dbbbf0bbd0ba5eb9fb1c76cbf59190c5065091469809999d4eceaaabd7. Aug 13 00:41:22.935859 sshd[4861]: Accepted publickey for core from 139.178.89.65 port 56748 ssh2: RSA SHA256:oQRmtR3l5PP4/Uo0wDjw46qIAkFbGTAbDRc2qvFX5SY Aug 13 00:41:22.937885 sshd-session[4861]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:41:22.942818 systemd-logind[1462]: New session 56 of user core. Aug 13 00:41:22.949049 systemd[1]: Started session-56.scope - Session 56 of User core. 
Aug 13 00:41:22.972286 containerd[1479]: time="2025-08-13T00:41:22.972058397Z" level=info msg="StartContainer for \"4d6863dbbbf0bbd0ba5eb9fb1c76cbf59190c5065091469809999d4eceaaabd7\" returns successfully" Aug 13 00:41:22.984344 systemd[1]: cri-containerd-4d6863dbbbf0bbd0ba5eb9fb1c76cbf59190c5065091469809999d4eceaaabd7.scope: Deactivated successfully. Aug 13 00:41:23.009581 containerd[1479]: time="2025-08-13T00:41:23.009527219Z" level=info msg="shim disconnected" id=4d6863dbbbf0bbd0ba5eb9fb1c76cbf59190c5065091469809999d4eceaaabd7 namespace=k8s.io Aug 13 00:41:23.009822 containerd[1479]: time="2025-08-13T00:41:23.009584719Z" level=warning msg="cleaning up after shim disconnected" id=4d6863dbbbf0bbd0ba5eb9fb1c76cbf59190c5065091469809999d4eceaaabd7 namespace=k8s.io Aug 13 00:41:23.009822 containerd[1479]: time="2025-08-13T00:41:23.009594219Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 00:41:23.171952 sshd[4932]: Connection closed by 139.178.89.65 port 56748 Aug 13 00:41:23.172552 sshd-session[4861]: pam_unix(sshd:session): session closed for user core Aug 13 00:41:23.175573 systemd[1]: sshd@55-172.234.21.106:22-139.178.89.65:56748.service: Deactivated successfully. Aug 13 00:41:23.177433 systemd[1]: session-56.scope: Deactivated successfully. Aug 13 00:41:23.178955 systemd-logind[1462]: Session 56 logged out. Waiting for processes to exit. Aug 13 00:41:23.179953 systemd-logind[1462]: Removed session 56. Aug 13 00:41:23.240002 systemd[1]: Started sshd@56-172.234.21.106:22-139.178.89.65:56750.service - OpenSSH per-connection server daemon (139.178.89.65:56750). Aug 13 00:41:23.353606 kubelet[2610]: E0813 00:41:23.353584 2610 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 00:41:23.358154 containerd[1479]: time="2025-08-13T00:41:23.357921064Z" level=info msg="CreateContainer within sandbox \"1b1506a75e5da8c8e00b8a813ac32f3923896a513642ec9a8070146b9bfa4d4e\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Aug 13 00:41:23.368622 containerd[1479]: time="2025-08-13T00:41:23.368586879Z" level=info msg="CreateContainer within sandbox \"1b1506a75e5da8c8e00b8a813ac32f3923896a513642ec9a8070146b9bfa4d4e\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"8d94bceca26230718e1445986f7691cdeb9c395c94088c12d9c3f6319f506110\"" Aug 13 00:41:23.370145 containerd[1479]: time="2025-08-13T00:41:23.370126746Z" level=info msg="StartContainer for \"8d94bceca26230718e1445986f7691cdeb9c395c94088c12d9c3f6319f506110\"" Aug 13 00:41:23.396895 systemd[1]: Started cri-containerd-8d94bceca26230718e1445986f7691cdeb9c395c94088c12d9c3f6319f506110.scope - libcontainer container 8d94bceca26230718e1445986f7691cdeb9c395c94088c12d9c3f6319f506110. Aug 13 00:41:23.424525 containerd[1479]: time="2025-08-13T00:41:23.424145608Z" level=info msg="StartContainer for \"8d94bceca26230718e1445986f7691cdeb9c395c94088c12d9c3f6319f506110\" returns successfully" Aug 13 00:41:23.433832 systemd[1]: cri-containerd-8d94bceca26230718e1445986f7691cdeb9c395c94088c12d9c3f6319f506110.scope: Deactivated successfully. 
Aug 13 00:41:23.452754 containerd[1479]: time="2025-08-13T00:41:23.452702621Z" level=info msg="shim disconnected" id=8d94bceca26230718e1445986f7691cdeb9c395c94088c12d9c3f6319f506110 namespace=k8s.io Aug 13 00:41:23.453030 containerd[1479]: time="2025-08-13T00:41:23.452768750Z" level=warning msg="cleaning up after shim disconnected" id=8d94bceca26230718e1445986f7691cdeb9c395c94088c12d9c3f6319f506110 namespace=k8s.io Aug 13 00:41:23.453030 containerd[1479]: time="2025-08-13T00:41:23.452778720Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 00:41:23.573865 sshd[4980]: Accepted publickey for core from 139.178.89.65 port 56750 ssh2: RSA SHA256:oQRmtR3l5PP4/Uo0wDjw46qIAkFbGTAbDRc2qvFX5SY Aug 13 00:41:23.575190 sshd-session[4980]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:41:23.579651 systemd-logind[1462]: New session 57 of user core. Aug 13 00:41:23.587927 systemd[1]: Started session-57.scope - Session 57 of User core. Aug 13 00:41:23.789173 kubelet[2610]: E0813 00:41:23.788998 2610 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Aug 13 00:41:24.352910 kubelet[2610]: E0813 00:41:24.352873 2610 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 00:41:24.355696 containerd[1479]: time="2025-08-13T00:41:24.354891364Z" level=info msg="CreateContainer within sandbox \"1b1506a75e5da8c8e00b8a813ac32f3923896a513642ec9a8070146b9bfa4d4e\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Aug 13 00:41:24.375881 containerd[1479]: time="2025-08-13T00:41:24.375844504Z" level=info msg="CreateContainer within sandbox \"1b1506a75e5da8c8e00b8a813ac32f3923896a513642ec9a8070146b9bfa4d4e\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"fa3a5bc33adb57ef3fcb91c6e7ea9d6153fda20ce0378277bda68795980a8128\"" Aug 13 00:41:24.376687 containerd[1479]: time="2025-08-13T00:41:24.376230933Z" level=info msg="StartContainer for \"fa3a5bc33adb57ef3fcb91c6e7ea9d6153fda20ce0378277bda68795980a8128\"" Aug 13 00:41:24.402929 systemd[1]: Started cri-containerd-fa3a5bc33adb57ef3fcb91c6e7ea9d6153fda20ce0378277bda68795980a8128.scope - libcontainer container fa3a5bc33adb57ef3fcb91c6e7ea9d6153fda20ce0378277bda68795980a8128. Aug 13 00:41:24.432180 containerd[1479]: time="2025-08-13T00:41:24.432083070Z" level=info msg="StartContainer for \"fa3a5bc33adb57ef3fcb91c6e7ea9d6153fda20ce0378277bda68795980a8128\" returns successfully" Aug 13 00:41:24.434705 systemd[1]: cri-containerd-fa3a5bc33adb57ef3fcb91c6e7ea9d6153fda20ce0378277bda68795980a8128.scope: Deactivated successfully. 
Aug 13 00:41:24.465238 containerd[1479]: time="2025-08-13T00:41:24.465195302Z" level=info msg="shim disconnected" id=fa3a5bc33adb57ef3fcb91c6e7ea9d6153fda20ce0378277bda68795980a8128 namespace=k8s.io Aug 13 00:41:24.465440 containerd[1479]: time="2025-08-13T00:41:24.465410602Z" level=warning msg="cleaning up after shim disconnected" id=fa3a5bc33adb57ef3fcb91c6e7ea9d6153fda20ce0378277bda68795980a8128 namespace=k8s.io Aug 13 00:41:24.465440 containerd[1479]: time="2025-08-13T00:41:24.465429652Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 00:41:24.477420 containerd[1479]: time="2025-08-13T00:41:24.477383794Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:41:24Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Aug 13 00:41:24.743128 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fa3a5bc33adb57ef3fcb91c6e7ea9d6153fda20ce0378277bda68795980a8128-rootfs.mount: Deactivated successfully. Aug 13 00:41:24.918849 kubelet[2610]: I0813 00:41:24.918798 2610 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 00:41:24.918996 kubelet[2610]: I0813 00:41:24.918856 2610 container_gc.go:86] "Attempting to delete unused containers" Aug 13 00:41:24.920066 containerd[1479]: time="2025-08-13T00:41:24.920018824Z" level=info msg="StopPodSandbox for \"e616278a1e3701fdd7dde0606247c6c2ec72bf485196b9ac8356c7da85f1a3eb\"" Aug 13 00:41:24.920127 containerd[1479]: time="2025-08-13T00:41:24.920101783Z" level=info msg="TearDown network for sandbox \"e616278a1e3701fdd7dde0606247c6c2ec72bf485196b9ac8356c7da85f1a3eb\" successfully" Aug 13 00:41:24.920127 containerd[1479]: time="2025-08-13T00:41:24.920112803Z" level=info msg="StopPodSandbox for \"e616278a1e3701fdd7dde0606247c6c2ec72bf485196b9ac8356c7da85f1a3eb\" returns successfully" Aug 13 00:41:24.920513 containerd[1479]: time="2025-08-13T00:41:24.920469283Z" level=info msg="RemovePodSandbox for \"e616278a1e3701fdd7dde0606247c6c2ec72bf485196b9ac8356c7da85f1a3eb\"" Aug 13 00:41:24.920513 containerd[1479]: time="2025-08-13T00:41:24.920496982Z" level=info msg="Forcibly stopping sandbox \"e616278a1e3701fdd7dde0606247c6c2ec72bf485196b9ac8356c7da85f1a3eb\"" Aug 13 00:41:24.920602 containerd[1479]: time="2025-08-13T00:41:24.920561592Z" level=info msg="TearDown network for sandbox \"e616278a1e3701fdd7dde0606247c6c2ec72bf485196b9ac8356c7da85f1a3eb\" successfully" Aug 13 00:41:24.923759 containerd[1479]: time="2025-08-13T00:41:24.923735384Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e616278a1e3701fdd7dde0606247c6c2ec72bf485196b9ac8356c7da85f1a3eb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Aug 13 00:41:24.923876 containerd[1479]: time="2025-08-13T00:41:24.923771894Z" level=info msg="RemovePodSandbox \"e616278a1e3701fdd7dde0606247c6c2ec72bf485196b9ac8356c7da85f1a3eb\" returns successfully" Aug 13 00:41:24.924240 containerd[1479]: time="2025-08-13T00:41:24.924094414Z" level=info msg="StopPodSandbox for \"b47050fd74853a639c63c00b6225e98a027d5d0b9eacd703fec4e25c9326452b\"" Aug 13 00:41:24.924240 containerd[1479]: time="2025-08-13T00:41:24.924176423Z" level=info msg="TearDown network for sandbox \"b47050fd74853a639c63c00b6225e98a027d5d0b9eacd703fec4e25c9326452b\" successfully" Aug 13 00:41:24.924240 containerd[1479]: time="2025-08-13T00:41:24.924186913Z" level=info msg="StopPodSandbox for \"b47050fd74853a639c63c00b6225e98a027d5d0b9eacd703fec4e25c9326452b\" returns successfully" Aug 13 00:41:24.924496 containerd[1479]: time="2025-08-13T00:41:24.924463783Z" level=info msg="RemovePodSandbox for \"b47050fd74853a639c63c00b6225e98a027d5d0b9eacd703fec4e25c9326452b\"" Aug 13 00:41:24.924496 containerd[1479]: time="2025-08-13T00:41:24.924487183Z" level=info msg="Forcibly stopping sandbox \"b47050fd74853a639c63c00b6225e98a027d5d0b9eacd703fec4e25c9326452b\"" Aug 13 00:41:24.924648 containerd[1479]: time="2025-08-13T00:41:24.924543953Z" level=info msg="TearDown network for sandbox \"b47050fd74853a639c63c00b6225e98a027d5d0b9eacd703fec4e25c9326452b\" successfully" Aug 13 00:41:24.926919 containerd[1479]: time="2025-08-13T00:41:24.926894736Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b47050fd74853a639c63c00b6225e98a027d5d0b9eacd703fec4e25c9326452b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 00:41:24.926998 containerd[1479]: time="2025-08-13T00:41:24.926925686Z" level=info msg="RemovePodSandbox \"b47050fd74853a639c63c00b6225e98a027d5d0b9eacd703fec4e25c9326452b\" returns successfully" Aug 13 00:41:24.927636 kubelet[2610]: I0813 00:41:24.927606 2610 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 00:41:24.928707 kubelet[2610]: I0813 00:41:24.928676 2610 image_gc_manager.go:487] "Removing image to free bytes" imageID="sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c" size=18897442 runtimeHandler="" Aug 13 00:41:24.928935 containerd[1479]: time="2025-08-13T00:41:24.928912082Z" level=info msg="RemoveImage \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Aug 13 00:41:24.929671 containerd[1479]: time="2025-08-13T00:41:24.929589201Z" level=info msg="ImageDelete event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Aug 13 00:41:24.930120 containerd[1479]: time="2025-08-13T00:41:24.930103219Z" level=info msg="ImageDelete event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Aug 13 00:41:24.947087 containerd[1479]: time="2025-08-13T00:41:24.947051540Z" level=info msg="RemoveImage \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" returns successfully" Aug 13 00:41:24.954963 kubelet[2610]: I0813 00:41:24.954942 2610 eviction_manager.go:383] "Eviction manager: able to reduce resource pressure without evicting pods." 
resourceName="ephemeral-storage" Aug 13 00:41:25.355489 kubelet[2610]: E0813 00:41:25.355462 2610 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 00:41:25.357828 containerd[1479]: time="2025-08-13T00:41:25.357777104Z" level=info msg="CreateContainer within sandbox \"1b1506a75e5da8c8e00b8a813ac32f3923896a513642ec9a8070146b9bfa4d4e\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Aug 13 00:41:25.371173 containerd[1479]: time="2025-08-13T00:41:25.371132772Z" level=info msg="CreateContainer within sandbox \"1b1506a75e5da8c8e00b8a813ac32f3923896a513642ec9a8070146b9bfa4d4e\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"085ff41a23af7befdb172277e31708cf13ffb53e5147e7e2b783d2970088f173\"" Aug 13 00:41:25.372412 containerd[1479]: time="2025-08-13T00:41:25.372377509Z" level=info msg="StartContainer for \"085ff41a23af7befdb172277e31708cf13ffb53e5147e7e2b783d2970088f173\"" Aug 13 00:41:25.404931 systemd[1]: Started cri-containerd-085ff41a23af7befdb172277e31708cf13ffb53e5147e7e2b783d2970088f173.scope - libcontainer container 085ff41a23af7befdb172277e31708cf13ffb53e5147e7e2b783d2970088f173. Aug 13 00:41:25.428564 containerd[1479]: time="2025-08-13T00:41:25.428522305Z" level=info msg="StartContainer for \"085ff41a23af7befdb172277e31708cf13ffb53e5147e7e2b783d2970088f173\" returns successfully" Aug 13 00:41:25.429151 systemd[1]: cri-containerd-085ff41a23af7befdb172277e31708cf13ffb53e5147e7e2b783d2970088f173.scope: Deactivated successfully. Aug 13 00:41:25.448652 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-085ff41a23af7befdb172277e31708cf13ffb53e5147e7e2b783d2970088f173-rootfs.mount: Deactivated successfully. 
Aug 13 00:41:25.451036 containerd[1479]: time="2025-08-13T00:41:25.450979271Z" level=info msg="shim disconnected" id=085ff41a23af7befdb172277e31708cf13ffb53e5147e7e2b783d2970088f173 namespace=k8s.io Aug 13 00:41:25.451274 containerd[1479]: time="2025-08-13T00:41:25.451254281Z" level=warning msg="cleaning up after shim disconnected" id=085ff41a23af7befdb172277e31708cf13ffb53e5147e7e2b783d2970088f173 namespace=k8s.io Aug 13 00:41:25.451274 containerd[1479]: time="2025-08-13T00:41:25.451270451Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 00:41:26.358728 kubelet[2610]: E0813 00:41:26.358693 2610 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 00:41:26.362463 containerd[1479]: time="2025-08-13T00:41:26.362434735Z" level=info msg="CreateContainer within sandbox \"1b1506a75e5da8c8e00b8a813ac32f3923896a513642ec9a8070146b9bfa4d4e\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Aug 13 00:41:26.376961 containerd[1479]: time="2025-08-13T00:41:26.376929970Z" level=info msg="CreateContainer within sandbox \"1b1506a75e5da8c8e00b8a813ac32f3923896a513642ec9a8070146b9bfa4d4e\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b9647300d5f083e0e3a60c8555dd69666efd987ccb87802625e4f458db9262a7\"" Aug 13 00:41:26.377493 containerd[1479]: time="2025-08-13T00:41:26.377475008Z" level=info msg="StartContainer for \"b9647300d5f083e0e3a60c8555dd69666efd987ccb87802625e4f458db9262a7\"" Aug 13 00:41:26.407924 systemd[1]: Started cri-containerd-b9647300d5f083e0e3a60c8555dd69666efd987ccb87802625e4f458db9262a7.scope - libcontainer container b9647300d5f083e0e3a60c8555dd69666efd987ccb87802625e4f458db9262a7. Aug 13 00:41:26.436923 containerd[1479]: time="2025-08-13T00:41:26.436883745Z" level=info msg="StartContainer for \"b9647300d5f083e0e3a60c8555dd69666efd987ccb87802625e4f458db9262a7\" returns successfully" Aug 13 00:41:26.861850 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Aug 13 00:41:27.363745 kubelet[2610]: E0813 00:41:27.363537 2610 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 00:41:27.378901 kubelet[2610]: I0813 00:41:27.378544 2610 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-8gccx" podStartSLOduration=5.37853071 podStartE2EDuration="5.37853071s" podCreationTimestamp="2025-08-13 00:41:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:41:27.374712399 +0000 UTC m=+353.794195365" watchObservedRunningTime="2025-08-13 00:41:27.37853071 +0000 UTC m=+353.798013676" Aug 13 00:41:28.834206 kubelet[2610]: E0813 00:41:28.832685 2610 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 00:41:29.665071 systemd-networkd[1391]: lxc_health: Link UP Aug 13 00:41:29.665395 systemd-networkd[1391]: lxc_health: Gained carrier Aug 13 00:41:30.018380 systemd[1]: run-containerd-runc-k8s.io-b9647300d5f083e0e3a60c8555dd69666efd987ccb87802625e4f458db9262a7-runc.cNPgsO.mount: Deactivated successfully. 
Aug 13 00:41:30.833337 kubelet[2610]: E0813 00:41:30.833018 2610 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 00:41:31.192158 systemd-networkd[1391]: lxc_health: Gained IPv6LL Aug 13 00:41:31.373834 kubelet[2610]: E0813 00:41:31.372766 2610 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 00:41:32.188370 kubelet[2610]: E0813 00:41:32.188106 2610 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:57602->127.0.0.1:46471: write tcp 127.0.0.1:57602->127.0.0.1:46471: write: broken pipe Aug 13 00:41:32.375312 kubelet[2610]: E0813 00:41:32.374311 2610 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.19 172.232.0.20 172.232.0.15" Aug 13 00:41:34.246013 systemd[1]: run-containerd-runc-k8s.io-b9647300d5f083e0e3a60c8555dd69666efd987ccb87802625e4f458db9262a7-runc.zI4ovv.mount: Deactivated successfully. Aug 13 00:41:35.677124 kubelet[2610]: I0813 00:41:35.677083 2610 ???:1] "http: TLS handshake error from 198.235.24.103:58638: client sent an HTTP request to an HTTPS server" Aug 13 00:41:36.358386 systemd[1]: run-containerd-runc-k8s.io-b9647300d5f083e0e3a60c8555dd69666efd987ccb87802625e4f458db9262a7-runc.7oT7Vt.mount: Deactivated successfully. Aug 13 00:41:36.455461 sshd[5044]: Connection closed by 139.178.89.65 port 56750 Aug 13 00:41:36.456118 sshd-session[4980]: pam_unix(sshd:session): session closed for user core Aug 13 00:41:36.460240 systemd[1]: sshd@56-172.234.21.106:22-139.178.89.65:56750.service: Deactivated successfully. Aug 13 00:41:36.462256 systemd[1]: session-57.scope: Deactivated successfully. Aug 13 00:41:36.463150 systemd-logind[1462]: Session 57 logged out. Waiting for processes to exit. Aug 13 00:41:36.464142 systemd-logind[1462]: Removed session 57.