Aug 13 01:03:57.939574 kernel: Linux version 6.6.100-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Tue Aug 12 21:47:31 -00 2025
Aug 13 01:03:57.940707 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=ca71ea747c3f0d1de8a5ffcd0cfb9d0a1a4c4755719a09093b0248fa3902b433
Aug 13 01:03:57.940723 kernel: BIOS-provided physical RAM map:
Aug 13 01:03:57.940732 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009f7ff] usable
Aug 13 01:03:57.940741 kernel: BIOS-e820: [mem 0x000000000009f800-0x000000000009ffff] reserved
Aug 13 01:03:57.940755 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Aug 13 01:03:57.940766 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable
Aug 13 01:03:57.940775 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved
Aug 13 01:03:57.940784 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Aug 13 01:03:57.940793 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Aug 13 01:03:57.940803 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Aug 13 01:03:57.940813 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Aug 13 01:03:57.940821 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000017fffffff] usable
Aug 13 01:03:57.940831 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Aug 13 01:03:57.940847 kernel: NX (Execute Disable) protection: active
Aug 13 01:03:57.940857 kernel: APIC: Static calls initialized
Aug 13 01:03:57.940867 kernel: SMBIOS 2.8 present.
Aug 13 01:03:57.940877 kernel: DMI: Linode Compute Instance/Standard PC (Q35 + ICH9, 2009), BIOS Not Specified
Aug 13 01:03:57.940887 kernel: Hypervisor detected: KVM
Aug 13 01:03:57.940901 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Aug 13 01:03:57.940911 kernel: kvm-clock: using sched offset of 4528931760 cycles
Aug 13 01:03:57.940922 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Aug 13 01:03:57.940932 kernel: tsc: Detected 2000.000 MHz processor
Aug 13 01:03:57.940943 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Aug 13 01:03:57.940954 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Aug 13 01:03:57.940964 kernel: last_pfn = 0x180000 max_arch_pfn = 0x400000000
Aug 13 01:03:57.940975 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Aug 13 01:03:57.940986 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Aug 13 01:03:57.941000 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
Aug 13 01:03:57.941011 kernel: Using GB pages for direct mapping
Aug 13 01:03:57.941022 kernel: ACPI: Early table checksum verification disabled
Aug 13 01:03:57.941032 kernel: ACPI: RSDP 0x00000000000F5160 000014 (v00 BOCHS )
Aug 13 01:03:57.941042 kernel: ACPI: RSDT 0x000000007FFE2307 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 01:03:57.941053 kernel: ACPI: FACP 0x000000007FFE20F7 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 01:03:57.941063 kernel: ACPI: DSDT 0x000000007FFE0040 0020B7 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 01:03:57.941074 kernel: ACPI: FACS 0x000000007FFE0000 000040
Aug 13 01:03:57.941084 kernel: ACPI: APIC 0x000000007FFE21EB 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 01:03:57.941098 kernel: ACPI: HPET 0x000000007FFE226B 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 01:03:57.941108 kernel: ACPI: MCFG 0x000000007FFE22A3 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 01:03:57.941119 kernel: ACPI: WAET 0x000000007FFE22DF 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 01:03:57.941133 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe20f7-0x7ffe21ea]
Aug 13 01:03:57.941144 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe20f6]
Aug 13 01:03:57.941155 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Aug 13 01:03:57.941168 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe21eb-0x7ffe226a]
Aug 13 01:03:57.941180 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe226b-0x7ffe22a2]
Aug 13 01:03:57.941191 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe22a3-0x7ffe22de]
Aug 13 01:03:57.941201 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe22df-0x7ffe2306]
Aug 13 01:03:57.941213 kernel: No NUMA configuration found
Aug 13 01:03:57.941224 kernel: Faking a node at [mem 0x0000000000000000-0x000000017fffffff]
Aug 13 01:03:57.941235 kernel: NODE_DATA(0) allocated [mem 0x17fffa000-0x17fffffff]
Aug 13 01:03:57.941246 kernel: Zone ranges:
Aug 13 01:03:57.941257 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Aug 13 01:03:57.941272 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Aug 13 01:03:57.941283 kernel: Normal [mem 0x0000000100000000-0x000000017fffffff]
Aug 13 01:03:57.941294 kernel: Movable zone start for each node
Aug 13 01:03:57.941306 kernel: Early memory node ranges
Aug 13 01:03:57.941317 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Aug 13 01:03:57.941327 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff]
Aug 13 01:03:57.941339 kernel: node 0: [mem 0x0000000100000000-0x000000017fffffff]
Aug 13 01:03:57.941350 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000017fffffff]
Aug 13 01:03:57.941361 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Aug 13 01:03:57.941376 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Aug 13 01:03:57.941387 kernel: On node 0, zone Normal: 35 pages in unavailable ranges
Aug 13 01:03:57.941398 kernel: ACPI: PM-Timer IO Port: 0x608
Aug 13 01:03:57.941409 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Aug 13 01:03:57.941420 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Aug 13 01:03:57.941431 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Aug 13 01:03:57.941442 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Aug 13 01:03:57.941453 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Aug 13 01:03:57.941464 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Aug 13 01:03:57.941478 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Aug 13 01:03:57.941488 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Aug 13 01:03:57.941498 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Aug 13 01:03:57.941509 kernel: TSC deadline timer available
Aug 13 01:03:57.941520 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Aug 13 01:03:57.941530 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Aug 13 01:03:57.941541 kernel: kvm-guest: KVM setup pv remote TLB flush
Aug 13 01:03:57.941552 kernel: kvm-guest: setup PV sched yield
Aug 13 01:03:57.941563 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Aug 13 01:03:57.941578 kernel: Booting paravirtualized kernel on KVM
Aug 13 01:03:57.945270 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Aug 13 01:03:57.945293 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Aug 13 01:03:57.945306 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u1048576
Aug 13 01:03:57.945319 kernel: pcpu-alloc: s197096 r8192 d32280 u1048576 alloc=1*2097152
Aug 13 01:03:57.945329 kernel: pcpu-alloc: [0] 0 1
Aug 13 01:03:57.945341 kernel: kvm-guest: PV spinlocks enabled
Aug 13 01:03:57.945352 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Aug 13 01:03:57.945365 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=ca71ea747c3f0d1de8a5ffcd0cfb9d0a1a4c4755719a09093b0248fa3902b433
Aug 13 01:03:57.945384 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Aug 13 01:03:57.945396 kernel: random: crng init done
Aug 13 01:03:57.945408 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Aug 13 01:03:57.945420 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Aug 13 01:03:57.945431 kernel: Fallback order for Node 0: 0
Aug 13 01:03:57.945443 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1031901
Aug 13 01:03:57.945455 kernel: Policy zone: Normal
Aug 13 01:03:57.945467 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Aug 13 01:03:57.945482 kernel: software IO TLB: area num 2.
Aug 13 01:03:57.945494 kernel: Memory: 3964164K/4193772K available (14336K kernel code, 2295K rwdata, 22872K rodata, 43504K init, 1572K bss, 229348K reserved, 0K cma-reserved)
Aug 13 01:03:57.945505 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Aug 13 01:03:57.945516 kernel: ftrace: allocating 37942 entries in 149 pages
Aug 13 01:03:57.945527 kernel: ftrace: allocated 149 pages with 4 groups
Aug 13 01:03:57.945538 kernel: Dynamic Preempt: voluntary
Aug 13 01:03:57.945549 kernel: rcu: Preemptible hierarchical RCU implementation.
Aug 13 01:03:57.945561 kernel: rcu: RCU event tracing is enabled.
Aug 13 01:03:57.945573 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Aug 13 01:03:57.945588 kernel: Trampoline variant of Tasks RCU enabled.
Aug 13 01:03:57.945627 kernel: Rude variant of Tasks RCU enabled.
Aug 13 01:03:57.945638 kernel: Tracing variant of Tasks RCU enabled.
Aug 13 01:03:57.945649 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Aug 13 01:03:57.945660 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Aug 13 01:03:57.945671 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Aug 13 01:03:57.945683 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Aug 13 01:03:57.945694 kernel: Console: colour VGA+ 80x25
Aug 13 01:03:57.945705 kernel: printk: console [tty0] enabled
Aug 13 01:03:57.945716 kernel: printk: console [ttyS0] enabled
Aug 13 01:03:57.945731 kernel: ACPI: Core revision 20230628
Aug 13 01:03:57.945742 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Aug 13 01:03:57.945754 kernel: APIC: Switch to symmetric I/O mode setup
Aug 13 01:03:57.945778 kernel: x2apic enabled
Aug 13 01:03:57.945793 kernel: APIC: Switched APIC routing to: physical x2apic
Aug 13 01:03:57.945805 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Aug 13 01:03:57.945816 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Aug 13 01:03:57.945828 kernel: kvm-guest: setup PV IPIs
Aug 13 01:03:57.945839 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Aug 13 01:03:57.945850 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Aug 13 01:03:57.945862 kernel: Calibrating delay loop (skipped) preset value.. 4000.00 BogoMIPS (lpj=2000000)
Aug 13 01:03:57.945879 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Aug 13 01:03:57.945891 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Aug 13 01:03:57.945903 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Aug 13 01:03:57.945915 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Aug 13 01:03:57.945928 kernel: Spectre V2 : Mitigation: Retpolines
Aug 13 01:03:57.945943 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Aug 13 01:03:57.945955 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Aug 13 01:03:57.945968 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Aug 13 01:03:57.945980 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Aug 13 01:03:57.945992 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Aug 13 01:03:57.946005 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Aug 13 01:03:57.946017 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Aug 13 01:03:57.946029 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Aug 13 01:03:57.946044 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Aug 13 01:03:57.946056 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Aug 13 01:03:57.946068 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Aug 13 01:03:57.946080 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Aug 13 01:03:57.946092 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Aug 13 01:03:57.946103 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Aug 13 01:03:57.946115 kernel: x86/fpu: xstate_offset[9]: 832, xstate_sizes[9]: 8
Aug 13 01:03:57.946127 kernel: x86/fpu: Enabled xstate features 0x207, context size is 840 bytes, using 'compacted' format.
Aug 13 01:03:57.946139 kernel: Freeing SMP alternatives memory: 32K
Aug 13 01:03:57.946156 kernel: pid_max: default: 32768 minimum: 301
Aug 13 01:03:57.946168 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Aug 13 01:03:57.946180 kernel: landlock: Up and running.
Aug 13 01:03:57.946192 kernel: SELinux: Initializing.
Aug 13 01:03:57.946204 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Aug 13 01:03:57.946216 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Aug 13 01:03:57.946228 kernel: smpboot: CPU0: AMD EPYC 7713 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Aug 13 01:03:57.946239 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Aug 13 01:03:57.946251 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Aug 13 01:03:57.946267 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Aug 13 01:03:57.946279 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Aug 13 01:03:57.946290 kernel: ... version: 0
Aug 13 01:03:57.946303 kernel: ... bit width: 48
Aug 13 01:03:57.946341 kernel: ... generic registers: 6
Aug 13 01:03:57.946353 kernel: ... value mask: 0000ffffffffffff
Aug 13 01:03:57.946365 kernel: ... max period: 00007fffffffffff
Aug 13 01:03:57.946377 kernel: ... fixed-purpose events: 0
Aug 13 01:03:57.946389 kernel: ... event mask: 000000000000003f
Aug 13 01:03:57.946405 kernel: signal: max sigframe size: 3376
Aug 13 01:03:57.946417 kernel: rcu: Hierarchical SRCU implementation.
Aug 13 01:03:57.946429 kernel: rcu: Max phase no-delay instances is 400.
Aug 13 01:03:57.946441 kernel: smp: Bringing up secondary CPUs ...
Aug 13 01:03:57.946453 kernel: smpboot: x86: Booting SMP configuration:
Aug 13 01:03:57.946465 kernel: .... node #0, CPUs: #1
Aug 13 01:03:57.946476 kernel: smp: Brought up 1 node, 2 CPUs
Aug 13 01:03:57.946488 kernel: smpboot: Max logical packages: 1
Aug 13 01:03:57.946500 kernel: smpboot: Total of 2 processors activated (8000.00 BogoMIPS)
Aug 13 01:03:57.946516 kernel: devtmpfs: initialized
Aug 13 01:03:57.946529 kernel: x86/mm: Memory block size: 128MB
Aug 13 01:03:57.946541 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Aug 13 01:03:57.946553 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Aug 13 01:03:57.946566 kernel: pinctrl core: initialized pinctrl subsystem
Aug 13 01:03:57.946579 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Aug 13 01:03:57.946617 kernel: audit: initializing netlink subsys (disabled)
Aug 13 01:03:57.946659 kernel: audit: type=2000 audit(1755047037.579:1): state=initialized audit_enabled=0 res=1
Aug 13 01:03:57.946673 kernel: thermal_sys: Registered thermal governor 'step_wise'
Aug 13 01:03:57.946690 kernel: thermal_sys: Registered thermal governor 'user_space'
Aug 13 01:03:57.946701 kernel: cpuidle: using governor menu
Aug 13 01:03:57.946712 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Aug 13 01:03:57.946724 kernel: dca service started, version 1.12.1
Aug 13 01:03:57.946735 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Aug 13 01:03:57.946746 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Aug 13 01:03:57.946758 kernel: PCI: Using configuration type 1 for base access
Aug 13 01:03:57.946769 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Aug 13 01:03:57.946780 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Aug 13 01:03:57.946794 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Aug 13 01:03:57.946806 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Aug 13 01:03:57.946817 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Aug 13 01:03:57.946828 kernel: ACPI: Added _OSI(Module Device)
Aug 13 01:03:57.946839 kernel: ACPI: Added _OSI(Processor Device)
Aug 13 01:03:57.946851 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Aug 13 01:03:57.946863 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Aug 13 01:03:57.946874 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Aug 13 01:03:57.946885 kernel: ACPI: Interpreter enabled
Aug 13 01:03:57.946900 kernel: ACPI: PM: (supports S0 S3 S5)
Aug 13 01:03:57.946912 kernel: ACPI: Using IOAPIC for interrupt routing
Aug 13 01:03:57.946924 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Aug 13 01:03:57.946936 kernel: PCI: Using E820 reservations for host bridge windows
Aug 13 01:03:57.946949 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Aug 13 01:03:57.946961 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Aug 13 01:03:57.947218 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Aug 13 01:03:57.947418 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Aug 13 01:03:57.947633 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Aug 13 01:03:57.947652 kernel: PCI host bridge to bus 0000:00
Aug 13 01:03:57.947837 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Aug 13 01:03:57.948011 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Aug 13 01:03:57.948166 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Aug 13 01:03:57.948335 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Aug 13 01:03:57.948491 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Aug 13 01:03:57.948737 kernel: pci_bus 0000:00: root bus resource [mem 0x180000000-0x97fffffff window]
Aug 13 01:03:57.948908 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Aug 13 01:03:57.949121 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Aug 13 01:03:57.949329 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Aug 13 01:03:57.949523 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Aug 13 01:03:57.951264 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Aug 13 01:03:57.951461 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Aug 13 01:03:57.951710 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Aug 13 01:03:57.952052 kernel: pci 0000:00:02.0: [1af4:1004] type 00 class 0x010000
Aug 13 01:03:57.952234 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc000-0xc03f]
Aug 13 01:03:57.952415 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Aug 13 01:03:57.952782 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Aug 13 01:03:57.952998 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Aug 13 01:03:57.953192 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f]
Aug 13 01:03:57.953374 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Aug 13 01:03:57.953565 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Aug 13 01:03:57.953795 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Aug 13 01:03:57.953983 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Aug 13 01:03:57.954161 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Aug 13 01:03:57.954365 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Aug 13 01:03:57.954563 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0c0-0xc0df]
Aug 13 01:03:57.954774 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd3000-0xfebd3fff]
Aug 13 01:03:57.954972 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Aug 13 01:03:57.955167 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Aug 13 01:03:57.955187 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Aug 13 01:03:57.955201 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Aug 13 01:03:57.955214 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Aug 13 01:03:57.955232 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Aug 13 01:03:57.955244 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Aug 13 01:03:57.955256 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Aug 13 01:03:57.955267 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Aug 13 01:03:57.955280 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Aug 13 01:03:57.955292 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Aug 13 01:03:57.955304 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Aug 13 01:03:57.955315 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Aug 13 01:03:57.955327 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Aug 13 01:03:57.955343 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Aug 13 01:03:57.955355 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Aug 13 01:03:57.955367 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Aug 13 01:03:57.955379 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Aug 13 01:03:57.955390 kernel: iommu: Default domain type: Translated
Aug 13 01:03:57.955402 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Aug 13 01:03:57.955413 kernel: PCI: Using ACPI for IRQ routing
Aug 13 01:03:57.955425 kernel: PCI: pci_cache_line_size set to 64 bytes
Aug 13 01:03:57.955438 kernel: e820: reserve RAM buffer [mem 0x0009f800-0x0009ffff]
Aug 13 01:03:57.955455 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff]
Aug 13 01:03:57.955701 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Aug 13 01:03:57.955892 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Aug 13 01:03:57.956081 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Aug 13 01:03:57.956101 kernel: vgaarb: loaded
Aug 13 01:03:57.956115 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Aug 13 01:03:57.956127 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Aug 13 01:03:57.956139 kernel: clocksource: Switched to clocksource kvm-clock
Aug 13 01:03:57.956157 kernel: VFS: Disk quotas dquot_6.6.0
Aug 13 01:03:57.956169 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Aug 13 01:03:57.956181 kernel: pnp: PnP ACPI init
Aug 13 01:03:57.956402 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Aug 13 01:03:57.956422 kernel: pnp: PnP ACPI: found 5 devices
Aug 13 01:03:57.956436 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Aug 13 01:03:57.956449 kernel: NET: Registered PF_INET protocol family
Aug 13 01:03:57.956461 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Aug 13 01:03:57.956473 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Aug 13 01:03:57.956491 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Aug 13 01:03:57.956503 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Aug 13 01:03:57.956515 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Aug 13 01:03:57.956527 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Aug 13 01:03:57.956539 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Aug 13 01:03:57.956551 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Aug 13 01:03:57.956562 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Aug 13 01:03:57.956574 kernel: NET: Registered PF_XDP protocol family
Aug 13 01:03:57.956751 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Aug 13 01:03:57.956913 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Aug 13 01:03:57.957078 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Aug 13 01:03:57.957251 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Aug 13 01:03:57.957426 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Aug 13 01:03:57.957586 kernel: pci_bus 0000:00: resource 9 [mem 0x180000000-0x97fffffff window]
Aug 13 01:03:57.957623 kernel: PCI: CLS 0 bytes, default 64
Aug 13 01:03:57.957636 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Aug 13 01:03:57.957649 kernel: software IO TLB: mapped [mem 0x000000007bfdd000-0x000000007ffdd000] (64MB)
Aug 13 01:03:57.957666 kernel: Initialise system trusted keyrings
Aug 13 01:03:57.957678 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Aug 13 01:03:57.957690 kernel: Key type asymmetric registered
Aug 13 01:03:57.957703 kernel: Asymmetric key parser 'x509' registered
Aug 13 01:03:57.957715 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Aug 13 01:03:57.957727 kernel: io scheduler mq-deadline registered
Aug 13 01:03:57.957739 kernel: io scheduler kyber registered
Aug 13 01:03:57.957751 kernel: io scheduler bfq registered
Aug 13 01:03:57.957764 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Aug 13 01:03:57.957781 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Aug 13 01:03:57.957794 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Aug 13 01:03:57.957807 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Aug 13 01:03:57.957819 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Aug 13 01:03:57.957832 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Aug 13 01:03:57.957845 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Aug 13 01:03:57.957856 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Aug 13 01:03:57.957868 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Aug 13 01:03:57.958055 kernel: rtc_cmos 00:03: RTC can wake from S4
Aug 13 01:03:57.958220 kernel: rtc_cmos 00:03: registered as rtc0
Aug 13 01:03:57.958375 kernel: rtc_cmos 00:03: setting system clock to 2025-08-13T01:03:57 UTC (1755047037)
Aug 13 01:03:57.958526 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Aug 13 01:03:57.958544 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Aug 13 01:03:57.958556 kernel: NET: Registered PF_INET6 protocol family
Aug 13 01:03:57.958568 kernel: Segment Routing with IPv6
Aug 13 01:03:57.958580 kernel: In-situ OAM (IOAM) with IPv6
Aug 13 01:03:57.958622 kernel: NET: Registered PF_PACKET protocol family
Aug 13 01:03:57.958641 kernel: Key type dns_resolver registered
Aug 13 01:03:57.958654 kernel: IPI shorthand broadcast: enabled
Aug 13 01:03:57.958666 kernel: sched_clock: Marking stable (680002655, 208894110)->(929267775, -40371010)
Aug 13 01:03:57.958678 kernel: registered taskstats version 1
Aug 13 01:03:57.958690 kernel: Loading compiled-in X.509 certificates
Aug 13 01:03:57.958702 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.100-flatcar: dfd2b306eb54324ea79eea0261f8d493924aeeeb'
Aug 13 01:03:57.958714 kernel: Key type .fscrypt registered
Aug 13 01:03:57.958726 kernel: Key type fscrypt-provisioning registered
Aug 13 01:03:57.958742 kernel: ima: No TPM chip found, activating TPM-bypass!
Aug 13 01:03:57.958754 kernel: ima: Allocated hash algorithm: sha1
Aug 13 01:03:57.958765 kernel: ima: No architecture policies found
Aug 13 01:03:57.958777 kernel: clk: Disabling unused clocks
Aug 13 01:03:57.958789 kernel: Freeing unused kernel image (initmem) memory: 43504K
Aug 13 01:03:57.958801 kernel: Write protecting the kernel read-only data: 38912k
Aug 13 01:03:57.958812 kernel: Freeing unused kernel image (rodata/data gap) memory: 1704K
Aug 13 01:03:57.958824 kernel: Run /init as init process
Aug 13 01:03:57.958836 kernel: with arguments:
Aug 13 01:03:57.958847 kernel: /init
Aug 13 01:03:57.958863 kernel: with environment:
Aug 13 01:03:57.958874 kernel: HOME=/
Aug 13 01:03:57.958885 kernel: TERM=linux
Aug 13 01:03:57.958897 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Aug 13 01:03:57.958911 systemd[1]: Successfully made /usr/ read-only.
Aug 13 01:03:57.958928 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Aug 13 01:03:57.958943 systemd[1]: Detected virtualization kvm.
Aug 13 01:03:57.958960 systemd[1]: Detected architecture x86-64.
Aug 13 01:03:57.958973 systemd[1]: Running in initrd.
Aug 13 01:03:57.958986 systemd[1]: No hostname configured, using default hostname.
Aug 13 01:03:57.959000 systemd[1]: Hostname set to .
Aug 13 01:03:57.959014 systemd[1]: Initializing machine ID from random generator.
Aug 13 01:03:57.959048 systemd[1]: Queued start job for default target initrd.target.
Aug 13 01:03:57.959068 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 13 01:03:57.959082 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 13 01:03:57.959096 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Aug 13 01:03:57.959109 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Aug 13 01:03:57.959122 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Aug 13 01:03:57.959136 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Aug 13 01:03:57.959151 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Aug 13 01:03:57.959168 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Aug 13 01:03:57.959181 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 13 01:03:57.959194 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Aug 13 01:03:57.959207 systemd[1]: Reached target paths.target - Path Units.
Aug 13 01:03:57.959220 systemd[1]: Reached target slices.target - Slice Units.
Aug 13 01:03:57.959232 systemd[1]: Reached target swap.target - Swaps.
Aug 13 01:03:57.959245 systemd[1]: Reached target timers.target - Timer Units.
Aug 13 01:03:57.959258 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Aug 13 01:03:57.959272 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Aug 13 01:03:57.959289 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Aug 13 01:03:57.959303 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Aug 13 01:03:57.959316 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Aug 13 01:03:57.959330 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Aug 13 01:03:57.959343 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 13 01:03:57.959357 systemd[1]: Reached target sockets.target - Socket Units.
Aug 13 01:03:57.959370 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Aug 13 01:03:57.959384 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Aug 13 01:03:57.959402 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Aug 13 01:03:57.959416 systemd[1]: Starting systemd-fsck-usr.service...
Aug 13 01:03:57.959430 systemd[1]: Starting systemd-journald.service - Journal Service...
Aug 13 01:03:57.959446 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Aug 13 01:03:57.959459 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 01:03:57.959473 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Aug 13 01:03:57.959487 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 13 01:03:57.959538 systemd-journald[177]: Collecting audit messages is disabled.
Aug 13 01:03:57.959576 systemd[1]: Finished systemd-fsck-usr.service.
Aug 13 01:03:57.959616 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Aug 13 01:03:57.959632 systemd-journald[177]: Journal started
Aug 13 01:03:57.959661 systemd-journald[177]: Runtime Journal (/run/log/journal/2ca91f89de37412aabbc85d8fb808d1e) is 8M, max 78.3M, 70.3M free.
Aug 13 01:03:57.934652 systemd-modules-load[178]: Inserted module 'overlay'
Aug 13 01:03:58.007287 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Aug 13 01:03:58.007316 kernel: Bridge firewalling registered
Aug 13 01:03:57.977027 systemd-modules-load[178]: Inserted module 'br_netfilter'
Aug 13 01:03:58.019692 systemd[1]: Started systemd-journald.service - Journal Service.
Aug 13 01:03:58.020543 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Aug 13 01:03:58.022389 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 01:03:58.025894 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug 13 01:03:58.032754 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 13 01:03:58.038752 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Aug 13 01:03:58.042707 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Aug 13 01:03:58.064771 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Aug 13 01:03:58.066081 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Aug 13 01:03:58.071183 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 13 01:03:58.082358 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 01:03:58.093006 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Aug 13 01:03:58.094500 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Aug 13 01:03:58.097733 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Aug 13 01:03:58.105070 dracut-cmdline[213]: dracut-dracut-053
Aug 13 01:03:58.109407 dracut-cmdline[213]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=ca71ea747c3f0d1de8a5ffcd0cfb9d0a1a4c4755719a09093b0248fa3902b433
Aug 13 01:03:58.143067 systemd-resolved[217]: Positive Trust Anchors:
Aug 13 01:03:58.143081 systemd-resolved[217]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug 13 01:03:58.143107 systemd-resolved[217]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Aug 13 01:03:58.146967 systemd-resolved[217]: Defaulting to hostname 'linux'.
Aug 13 01:03:58.150293 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Aug 13 01:03:58.151259 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Aug 13 01:03:58.182618 kernel: SCSI subsystem initialized
Aug 13 01:03:58.191617 kernel: Loading iSCSI transport class v2.0-870.
Aug 13 01:03:58.202628 kernel: iscsi: registered transport (tcp)
Aug 13 01:03:58.222886 kernel: iscsi: registered transport (qla4xxx)
Aug 13 01:03:58.222929 kernel: QLogic iSCSI HBA Driver
Aug 13 01:03:58.269290 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Aug 13 01:03:58.275731 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Aug 13 01:03:58.301019 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Aug 13 01:03:58.301067 kernel: device-mapper: uevent: version 1.0.3
Aug 13 01:03:58.304202 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Aug 13 01:03:58.344765 kernel: raid6: avx2x4 gen() 24350 MB/s
Aug 13 01:03:58.362615 kernel: raid6: avx2x2 gen() 23219 MB/s
Aug 13 01:03:58.381183 kernel: raid6: avx2x1 gen() 13875 MB/s
Aug 13 01:03:58.381205 kernel: raid6: using algorithm avx2x4 gen() 24350 MB/s
Aug 13 01:03:58.399969 kernel: raid6: .... xor() 3027 MB/s, rmw enabled
Aug 13 01:03:58.400001 kernel: raid6: using avx2x2 recovery algorithm
Aug 13 01:03:58.420620 kernel: xor: automatically using best checksumming function avx
Aug 13 01:03:58.542631 kernel: Btrfs loaded, zoned=no, fsverity=no
Aug 13 01:03:58.555545 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Aug 13 01:03:58.560736 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 13 01:03:58.577689 systemd-udevd[398]: Using default interface naming scheme 'v255'.
Aug 13 01:03:58.582529 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 13 01:03:58.589718 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Aug 13 01:03:58.602889 dracut-pre-trigger[407]: rd.md=0: removing MD RAID activation
Aug 13 01:03:58.632040 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Aug 13 01:03:58.638720 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Aug 13 01:03:58.701070 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 13 01:03:58.707714 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Aug 13 01:03:58.720426 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Aug 13 01:03:58.722362 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Aug 13 01:03:58.722932 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 13 01:03:58.723486 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Aug 13 01:03:58.731747 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Aug 13 01:03:58.746379 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Aug 13 01:03:58.769636 kernel: cryptd: max_cpu_qlen set to 1000
Aug 13 01:03:58.783700 kernel: scsi host0: Virtio SCSI HBA
Aug 13 01:03:58.786049 kernel: libata version 3.00 loaded.
Aug 13 01:03:58.803610 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Aug 13 01:03:58.807615 kernel: AVX2 version of gcm_enc/dec engaged.
Aug 13 01:03:58.807779 kernel: ahci 0000:00:1f.2: version 3.0
Aug 13 01:03:58.808723 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Aug 13 01:03:58.813808 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Aug 13 01:03:58.813976 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Aug 13 01:03:58.818880 kernel: AES CTR mode by8 optimization enabled
Aug 13 01:03:58.818902 kernel: scsi host1: ahci
Aug 13 01:03:58.821133 kernel: scsi host2: ahci
Aug 13 01:03:58.824716 kernel: scsi host3: ahci
Aug 13 01:03:58.824888 kernel: scsi host4: ahci
Aug 13 01:03:58.828713 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Aug 13 01:03:58.829950 kernel: scsi host5: ahci
Aug 13 01:03:58.830283 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 01:03:58.833216 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 13 01:03:58.985949 kernel: scsi host6: ahci
Aug 13 01:03:58.986144 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3100 irq 29
Aug 13 01:03:58.986157 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3180 irq 29
Aug 13 01:03:58.986167 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3200 irq 29
Aug 13 01:03:58.986176 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3280 irq 29
Aug 13 01:03:58.986185 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3300 irq 29
Aug 13 01:03:58.986195 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3380 irq 29
Aug 13 01:03:58.985144 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 13 01:03:58.985296 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 01:03:58.987091 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 01:03:59.017730 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 01:03:59.076931 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 01:03:59.083729 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 13 01:03:59.098630 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 01:03:59.150781 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Aug 13 01:03:59.150825 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Aug 13 01:03:59.150836 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Aug 13 01:03:59.150846 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Aug 13 01:03:59.150855 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Aug 13 01:03:59.151612 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Aug 13 01:03:59.172634 kernel: sd 0:0:0:0: Power-on or device reset occurred
Aug 13 01:03:59.198196 kernel: sd 0:0:0:0: [sda] 9297920 512-byte logical blocks: (4.76 GB/4.43 GiB)
Aug 13 01:03:59.198409 kernel: sd 0:0:0:0: [sda] Write Protect is off
Aug 13 01:03:59.198554 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08
Aug 13 01:03:59.198758 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Aug 13 01:03:59.205740 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Aug 13 01:03:59.205761 kernel: GPT:9289727 != 9297919
Aug 13 01:03:59.205771 kernel: GPT:Alternate GPT header not at the end of the disk.
Aug 13 01:03:59.207317 kernel: GPT:9289727 != 9297919
Aug 13 01:03:59.208368 kernel: GPT: Use GNU Parted to correct GPT errors.
Aug 13 01:03:59.210820 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Aug 13 01:03:59.212361 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Aug 13 01:03:59.248608 kernel: BTRFS: device fsid 88a9bed3-d26b-40c9-82ba-dbb7d44acae7 devid 1 transid 45 /dev/sda3 scanned by (udev-worker) (448)
Aug 13 01:03:59.251608 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by (udev-worker) (457)
Aug 13 01:03:59.266575 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Aug 13 01:03:59.280970 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Aug 13 01:03:59.288535 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
Aug 13 01:03:59.289194 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
Aug 13 01:03:59.298020 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Aug 13 01:03:59.303718 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Aug 13 01:03:59.308772 disk-uuid[572]: Primary Header is updated.
Aug 13 01:03:59.308772 disk-uuid[572]: Secondary Entries is updated.
Aug 13 01:03:59.308772 disk-uuid[572]: Secondary Header is updated.
Aug 13 01:03:59.313621 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Aug 13 01:03:59.318620 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Aug 13 01:04:00.322644 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Aug 13 01:04:00.323237 disk-uuid[573]: The operation has completed successfully.
Aug 13 01:04:00.376898 systemd[1]: disk-uuid.service: Deactivated successfully.
Aug 13 01:04:00.377017 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Aug 13 01:04:00.404714 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Aug 13 01:04:00.408412 sh[587]: Success
Aug 13 01:04:00.423950 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Aug 13 01:04:00.473166 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Aug 13 01:04:00.486683 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Aug 13 01:04:00.488138 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Aug 13 01:04:00.507932 kernel: BTRFS info (device dm-0): first mount of filesystem 88a9bed3-d26b-40c9-82ba-dbb7d44acae7
Aug 13 01:04:00.507961 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Aug 13 01:04:00.507972 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Aug 13 01:04:00.510056 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Aug 13 01:04:00.511563 kernel: BTRFS info (device dm-0): using free space tree
Aug 13 01:04:00.519615 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Aug 13 01:04:00.520670 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Aug 13 01:04:00.522475 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Aug 13 01:04:00.533694 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Aug 13 01:04:00.535575 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Aug 13 01:04:00.559110 kernel: BTRFS info (device sda6): first mount of filesystem fdf7217d-4a76-4a93-98b1-684d9c141517
Aug 13 01:04:00.559141 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Aug 13 01:04:00.559152 kernel: BTRFS info (device sda6): using free space tree
Aug 13 01:04:00.563601 kernel: BTRFS info (device sda6): enabling ssd optimizations
Aug 13 01:04:00.563625 kernel: BTRFS info (device sda6): auto enabling async discard
Aug 13 01:04:00.570283 kernel: BTRFS info (device sda6): last unmount of filesystem fdf7217d-4a76-4a93-98b1-684d9c141517
Aug 13 01:04:00.574069 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Aug 13 01:04:00.581940 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Aug 13 01:04:00.656943 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Aug 13 01:04:00.660119 ignition[696]: Ignition 2.20.0
Aug 13 01:04:00.660131 ignition[696]: Stage: fetch-offline
Aug 13 01:04:00.660167 ignition[696]: no configs at "/usr/lib/ignition/base.d"
Aug 13 01:04:00.660177 ignition[696]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Aug 13 01:04:00.660254 ignition[696]: parsed url from cmdline: ""
Aug 13 01:04:00.660258 ignition[696]: no config URL provided
Aug 13 01:04:00.660263 ignition[696]: reading system config file "/usr/lib/ignition/user.ign"
Aug 13 01:04:00.660271 ignition[696]: no config at "/usr/lib/ignition/user.ign"
Aug 13 01:04:00.660276 ignition[696]: failed to fetch config: resource requires networking
Aug 13 01:04:00.660425 ignition[696]: Ignition finished successfully
Aug 13 01:04:00.670945 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Aug 13 01:04:00.672520 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Aug 13 01:04:00.696801 systemd-networkd[771]: lo: Link UP
Aug 13 01:04:00.696813 systemd-networkd[771]: lo: Gained carrier
Aug 13 01:04:00.698381 systemd-networkd[771]: Enumeration completed
Aug 13 01:04:00.698453 systemd[1]: Started systemd-networkd.service - Network Configuration.
Aug 13 01:04:00.699046 systemd-networkd[771]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 13 01:04:00.699051 systemd-networkd[771]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Aug 13 01:04:00.700254 systemd-networkd[771]: eth0: Link UP
Aug 13 01:04:00.700254 systemd[1]: Reached target network.target - Network.
Aug 13 01:04:00.700258 systemd-networkd[771]: eth0: Gained carrier
Aug 13 01:04:00.700265 systemd-networkd[771]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 13 01:04:00.706313 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Aug 13 01:04:00.717687 ignition[775]: Ignition 2.20.0
Aug 13 01:04:00.717697 ignition[775]: Stage: fetch
Aug 13 01:04:00.717832 ignition[775]: no configs at "/usr/lib/ignition/base.d"
Aug 13 01:04:00.717843 ignition[775]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Aug 13 01:04:00.717930 ignition[775]: parsed url from cmdline: ""
Aug 13 01:04:00.717934 ignition[775]: no config URL provided
Aug 13 01:04:00.717938 ignition[775]: reading system config file "/usr/lib/ignition/user.ign"
Aug 13 01:04:00.717947 ignition[775]: no config at "/usr/lib/ignition/user.ign"
Aug 13 01:04:00.717967 ignition[775]: PUT http://169.254.169.254/v1/token: attempt #1
Aug 13 01:04:00.718101 ignition[775]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Aug 13 01:04:00.918839 ignition[775]: PUT http://169.254.169.254/v1/token: attempt #2
Aug 13 01:04:00.919058 ignition[775]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Aug 13 01:04:01.239691 systemd-networkd[771]: eth0: DHCPv4 address 172.234.214.143/24, gateway 172.234.214.1 acquired from 23.40.197.137
Aug 13 01:04:01.319238 ignition[775]: PUT http://169.254.169.254/v1/token: attempt #3
Aug 13 01:04:01.428659 ignition[775]: PUT result: OK
Aug 13 01:04:01.428730 ignition[775]: GET http://169.254.169.254/v1/user-data: attempt #1
Aug 13 01:04:01.541859 ignition[775]: GET result: OK
Aug 13 01:04:01.541958 ignition[775]: parsing config with SHA512: 8917af75ea08b5fcdf6f138df95761bd75a2b4850d63085d9c14184a142a38ca0f106195b09e9e7507a1857c08e36a924e1a518771ae9220b31202e20493426f
Aug 13 01:04:01.546042 unknown[775]: fetched base config from "system"
Aug 13 01:04:01.546054 unknown[775]: fetched base config from "system"
Aug 13 01:04:01.550015 ignition[775]: fetch: fetch complete
Aug 13 01:04:01.546060 unknown[775]: fetched user config from "akamai"
Aug 13 01:04:01.550024 ignition[775]: fetch: fetch passed
Aug 13 01:04:01.553556 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Aug 13 01:04:01.550338 ignition[775]: Ignition finished successfully
Aug 13 01:04:01.558989 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Aug 13 01:04:01.584664 ignition[783]: Ignition 2.20.0
Aug 13 01:04:01.585350 ignition[783]: Stage: kargs
Aug 13 01:04:01.585501 ignition[783]: no configs at "/usr/lib/ignition/base.d"
Aug 13 01:04:01.585512 ignition[783]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Aug 13 01:04:01.586426 ignition[783]: kargs: kargs passed
Aug 13 01:04:01.586468 ignition[783]: Ignition finished successfully
Aug 13 01:04:01.590426 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Aug 13 01:04:01.596741 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Aug 13 01:04:01.609441 ignition[789]: Ignition 2.20.0
Aug 13 01:04:01.609455 ignition[789]: Stage: disks
Aug 13 01:04:01.610695 ignition[789]: no configs at "/usr/lib/ignition/base.d"
Aug 13 01:04:01.610731 ignition[789]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Aug 13 01:04:01.612842 ignition[789]: disks: disks passed
Aug 13 01:04:01.612932 ignition[789]: Ignition finished successfully
Aug 13 01:04:01.614967 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Aug 13 01:04:01.636262 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Aug 13 01:04:01.637963 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Aug 13 01:04:01.639658 systemd[1]: Reached target local-fs.target - Local File Systems.
Aug 13 01:04:01.641032 systemd[1]: Reached target sysinit.target - System Initialization.
Aug 13 01:04:01.642204 systemd[1]: Reached target basic.target - Basic System.
Aug 13 01:04:01.654788 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Aug 13 01:04:01.674645 systemd-fsck[797]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Aug 13 01:04:01.678106 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Aug 13 01:04:01.685698 systemd[1]: Mounting sysroot.mount - /sysroot...
Aug 13 01:04:01.777622 kernel: EXT4-fs (sda9): mounted filesystem 27db109b-2440-48a3-909e-fd8973275523 r/w with ordered data mode. Quota mode: none.
Aug 13 01:04:01.778049 systemd[1]: Mounted sysroot.mount - /sysroot.
Aug 13 01:04:01.779221 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Aug 13 01:04:01.785663 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Aug 13 01:04:01.787695 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Aug 13 01:04:01.788951 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Aug 13 01:04:01.788997 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Aug 13 01:04:01.789020 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Aug 13 01:04:01.799140 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Aug 13 01:04:01.802707 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (805)
Aug 13 01:04:01.802732 kernel: BTRFS info (device sda6): first mount of filesystem fdf7217d-4a76-4a93-98b1-684d9c141517
Aug 13 01:04:01.802743 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Aug 13 01:04:01.802753 kernel: BTRFS info (device sda6): using free space tree
Aug 13 01:04:01.810604 kernel: BTRFS info (device sda6): enabling ssd optimizations
Aug 13 01:04:01.810642 kernel: BTRFS info (device sda6): auto enabling async discard
Aug 13 01:04:01.812839 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Aug 13 01:04:01.824717 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Aug 13 01:04:01.849734 systemd-networkd[771]: eth0: Gained IPv6LL
Aug 13 01:04:01.870801 initrd-setup-root[829]: cut: /sysroot/etc/passwd: No such file or directory
Aug 13 01:04:01.876866 initrd-setup-root[836]: cut: /sysroot/etc/group: No such file or directory
Aug 13 01:04:01.882959 initrd-setup-root[843]: cut: /sysroot/etc/shadow: No such file or directory
Aug 13 01:04:01.886645 initrd-setup-root[850]: cut: /sysroot/etc/gshadow: No such file or directory
Aug 13 01:04:01.983141 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Aug 13 01:04:01.989870 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Aug 13 01:04:01.992731 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Aug 13 01:04:02.001381 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Aug 13 01:04:02.005126 kernel: BTRFS info (device sda6): last unmount of filesystem fdf7217d-4a76-4a93-98b1-684d9c141517
Aug 13 01:04:02.024451 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Aug 13 01:04:02.037645 ignition[918]: INFO : Ignition 2.20.0
Aug 13 01:04:02.037645 ignition[918]: INFO : Stage: mount
Aug 13 01:04:02.037645 ignition[918]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 13 01:04:02.037645 ignition[918]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Aug 13 01:04:02.040471 ignition[918]: INFO : mount: mount passed
Aug 13 01:04:02.040471 ignition[918]: INFO : Ignition finished successfully
Aug 13 01:04:02.041353 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Aug 13 01:04:02.046701 systemd[1]: Starting ignition-files.service - Ignition (files)...
Aug 13 01:04:02.785835 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Aug 13 01:04:02.805634 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (929)
Aug 13 01:04:02.809051 kernel: BTRFS info (device sda6): first mount of filesystem fdf7217d-4a76-4a93-98b1-684d9c141517
Aug 13 01:04:02.809111 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Aug 13 01:04:02.810793 kernel: BTRFS info (device sda6): using free space tree
Aug 13 01:04:02.816938 kernel: BTRFS info (device sda6): enabling ssd optimizations
Aug 13 01:04:02.816970 kernel: BTRFS info (device sda6): auto enabling async discard
Aug 13 01:04:02.820974 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Aug 13 01:04:02.849242 ignition[946]: INFO : Ignition 2.20.0
Aug 13 01:04:02.850081 ignition[946]: INFO : Stage: files
Aug 13 01:04:02.850750 ignition[946]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 13 01:04:02.852208 ignition[946]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Aug 13 01:04:02.852208 ignition[946]: DEBUG : files: compiled without relabeling support, skipping
Aug 13 01:04:02.853654 ignition[946]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Aug 13 01:04:02.853654 ignition[946]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Aug 13 01:04:02.855742 ignition[946]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Aug 13 01:04:02.856729 ignition[946]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Aug 13 01:04:02.857514 ignition[946]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Aug 13 01:04:02.857120 unknown[946]: wrote ssh authorized keys file for user: core
Aug 13 01:04:02.859126 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Aug 13 01:04:02.859126 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Aug 13 01:04:03.293757 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Aug 13 01:04:03.625353 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Aug 13 01:04:03.625353 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Aug 13 01:04:03.627535 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Aug 13 01:04:03.837761 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Aug 13 01:04:03.932357 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Aug 13 01:04:03.932357 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Aug 13 01:04:03.935280 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Aug 13 01:04:03.935280 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Aug 13 01:04:03.935280 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Aug 13 01:04:03.935280 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Aug 13 01:04:03.935280 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Aug 13 01:04:03.935280 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Aug 13 01:04:03.935280 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Aug 13 01:04:03.935280 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Aug 13 01:04:03.935280 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Aug 13 01:04:03.935280 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Aug 13 01:04:03.935280 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Aug 13 01:04:03.935280 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Aug 13 01:04:03.935280 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1
Aug 13 01:04:04.343930 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Aug 13 01:04:04.762390 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Aug 13 01:04:04.762390 ignition[946]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Aug 13 01:04:04.786168 ignition[946]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Aug 13 01:04:04.786168 ignition[946]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Aug 13 01:04:04.786168 ignition[946]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Aug 13 01:04:04.786168 ignition[946]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Aug 13 01:04:04.786168 ignition[946]: INFO : files: op(e): op(f): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Aug 13 01:04:04.786168 ignition[946]: INFO : files: op(e): op(f): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Aug 13 01:04:04.786168 ignition[946]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Aug 13 01:04:04.786168 ignition[946]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Aug 13 01:04:04.786168 ignition[946]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Aug 13 01:04:04.786168 ignition[946]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
Aug 13 01:04:04.786168 ignition[946]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
Aug 13 01:04:04.786168 ignition[946]: INFO : files: files passed
Aug 13 01:04:04.786168 ignition[946]: INFO : Ignition finished successfully
Aug 13 01:04:04.766279 systemd[1]: Finished ignition-files.service - Ignition (files).
Aug 13 01:04:04.797786 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Aug 13 01:04:04.800723 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Aug 13 01:04:04.804464 systemd[1]: ignition-quench.service: Deactivated successfully.
Aug 13 01:04:04.804580 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Aug 13 01:04:04.820053 initrd-setup-root-after-ignition[975]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Aug 13 01:04:04.820053 initrd-setup-root-after-ignition[975]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Aug 13 01:04:04.821770 initrd-setup-root-after-ignition[979]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Aug 13 01:04:04.822897 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Aug 13 01:04:04.823632 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Aug 13 01:04:04.836733 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Aug 13 01:04:04.855282 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Aug 13 01:04:04.855392 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Aug 13 01:04:04.856383 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Aug 13 01:04:04.857835 systemd[1]: Reached target initrd.target - Initrd Default Target.
Aug 13 01:04:04.858992 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Aug 13 01:04:04.860340 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Aug 13 01:04:04.873801 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Aug 13 01:04:04.878703 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Aug 13 01:04:04.886678 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Aug 13 01:04:04.887299 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 13 01:04:04.888010 systemd[1]: Stopped target timers.target - Timer Units.
Aug 13 01:04:04.889170 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Aug 13 01:04:04.889304 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Aug 13 01:04:04.890660 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Aug 13 01:04:04.891381 systemd[1]: Stopped target basic.target - Basic System.
Aug 13 01:04:04.892391 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Aug 13 01:04:04.893539 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Aug 13 01:04:04.894558 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Aug 13 01:04:04.895525 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Aug 13 01:04:04.896694 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Aug 13 01:04:04.897856 systemd[1]: Stopped target sysinit.target - System Initialization.
Aug 13 01:04:04.898986 systemd[1]: Stopped target local-fs.target - Local File Systems.
Aug 13 01:04:04.900099 systemd[1]: Stopped target swap.target - Swaps.
Aug 13 01:04:04.901194 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Aug 13 01:04:04.901291 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Aug 13 01:04:04.902968 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Aug 13 01:04:04.903725 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 13 01:04:04.904668 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Aug 13 01:04:04.904985 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 13 01:04:04.905862 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Aug 13 01:04:04.905957 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Aug 13 01:04:04.907655 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Aug 13 01:04:04.907759 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Aug 13 01:04:04.908863 systemd[1]: ignition-files.service: Deactivated successfully.
Aug 13 01:04:04.908958 systemd[1]: Stopped ignition-files.service - Ignition (files).
Aug 13 01:04:04.919718 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Aug 13 01:04:04.920953 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Aug 13 01:04:04.921066 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 13 01:04:04.924212 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Aug 13 01:04:04.925427 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Aug 13 01:04:04.926315 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 13 01:04:04.927848 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Aug 13 01:04:04.928466 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Aug 13 01:04:04.935634 ignition[999]: INFO : Ignition 2.20.0
Aug 13 01:04:04.935634 ignition[999]: INFO : Stage: umount
Aug 13 01:04:04.935634 ignition[999]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 13 01:04:04.935634 ignition[999]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Aug 13 01:04:04.939719 ignition[999]: INFO : umount: umount passed
Aug 13 01:04:04.939719 ignition[999]: INFO : Ignition finished successfully
Aug 13 01:04:04.938137 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Aug 13 01:04:04.938246 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Aug 13 01:04:04.939433 systemd[1]: ignition-mount.service: Deactivated successfully.
Aug 13 01:04:04.939531 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Aug 13 01:04:04.943499 systemd[1]: ignition-disks.service: Deactivated successfully.
Aug 13 01:04:04.943604 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Aug 13 01:04:04.946167 systemd[1]: ignition-kargs.service: Deactivated successfully.
Aug 13 01:04:04.946217 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Aug 13 01:04:04.949984 systemd[1]: ignition-fetch.service: Deactivated successfully.
Aug 13 01:04:04.950032 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Aug 13 01:04:04.973317 systemd[1]: Stopped target network.target - Network.
Aug 13 01:04:04.974454 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Aug 13 01:04:04.974512 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Aug 13 01:04:04.975582 systemd[1]: Stopped target paths.target - Path Units.
Aug 13 01:04:04.976834 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Aug 13 01:04:04.977018 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 13 01:04:04.977988 systemd[1]: Stopped target slices.target - Slice Units.
Aug 13 01:04:04.978956 systemd[1]: Stopped target sockets.target - Socket Units.
Aug 13 01:04:04.980121 systemd[1]: iscsid.socket: Deactivated successfully.
Aug 13 01:04:04.980163 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Aug 13 01:04:04.981314 systemd[1]: iscsiuio.socket: Deactivated successfully.
Aug 13 01:04:04.981355 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Aug 13 01:04:04.982417 systemd[1]: ignition-setup.service: Deactivated successfully.
Aug 13 01:04:04.982503 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Aug 13 01:04:04.983541 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Aug 13 01:04:04.983618 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Aug 13 01:04:04.984926 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Aug 13 01:04:04.985980 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Aug 13 01:04:04.988316 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Aug 13 01:04:04.988926 systemd[1]: sysroot-boot.service: Deactivated successfully.
Aug 13 01:04:04.989025 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Aug 13 01:04:04.991862 systemd[1]: systemd-resolved.service: Deactivated successfully.
Aug 13 01:04:04.991972 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Aug 13 01:04:04.998180 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Aug 13 01:04:04.998455 systemd[1]: systemd-networkd.service: Deactivated successfully.
Aug 13 01:04:04.998568 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Aug 13 01:04:05.000221 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Aug 13 01:04:05.001581 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Aug 13 01:04:05.001709 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Aug 13 01:04:05.002942 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Aug 13 01:04:05.002995 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Aug 13 01:04:05.009702 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Aug 13 01:04:05.010299 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Aug 13 01:04:05.010350 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Aug 13 01:04:05.012351 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Aug 13 01:04:05.012401 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Aug 13 01:04:05.013683 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Aug 13 01:04:05.013734 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Aug 13 01:04:05.014806 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Aug 13 01:04:05.014853 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Aug 13 01:04:05.016236 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 13 01:04:05.018923 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Aug 13 01:04:05.019180 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Aug 13 01:04:05.029651 systemd[1]: systemd-udevd.service: Deactivated successfully.
Aug 13 01:04:05.029818 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 13 01:04:05.031610 systemd[1]: network-cleanup.service: Deactivated successfully.
Aug 13 01:04:05.031708 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Aug 13 01:04:05.033172 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Aug 13 01:04:05.033240 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Aug 13 01:04:05.034288 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Aug 13 01:04:05.034326 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 13 01:04:05.035392 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Aug 13 01:04:05.035441 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Aug 13 01:04:05.037081 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Aug 13 01:04:05.037130 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Aug 13 01:04:05.038236 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Aug 13 01:04:05.038285 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 01:04:05.047955 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Aug 13 01:04:05.048498 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Aug 13 01:04:05.048549 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 13 01:04:05.049700 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 13 01:04:05.049746 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 01:04:05.054566 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Aug 13 01:04:05.054641 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Aug 13 01:04:05.055031 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Aug 13 01:04:05.055122 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Aug 13 01:04:05.056751 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Aug 13 01:04:05.066730 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Aug 13 01:04:05.073253 systemd[1]: Switching root.
Aug 13 01:04:05.099565 systemd-journald[177]: Journal stopped
Aug 13 01:04:06.131251 systemd-journald[177]: Received SIGTERM from PID 1 (systemd).
Aug 13 01:04:06.131278 kernel: SELinux: policy capability network_peer_controls=1
Aug 13 01:04:06.131290 kernel: SELinux: policy capability open_perms=1
Aug 13 01:04:06.131299 kernel: SELinux: policy capability extended_socket_class=1
Aug 13 01:04:06.131308 kernel: SELinux: policy capability always_check_network=0
Aug 13 01:04:06.131320 kernel: SELinux: policy capability cgroup_seclabel=1
Aug 13 01:04:06.131329 kernel: SELinux: policy capability nnp_nosuid_transition=1
Aug 13 01:04:06.131338 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Aug 13 01:04:06.131347 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Aug 13 01:04:06.131356 kernel: audit: type=1403 audit(1755047045.205:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Aug 13 01:04:06.131366 systemd[1]: Successfully loaded SELinux policy in 45.169ms.
Aug 13 01:04:06.131378 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 11.270ms.
Aug 13 01:04:06.131389 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Aug 13 01:04:06.131399 systemd[1]: Detected virtualization kvm.
Aug 13 01:04:06.131409 systemd[1]: Detected architecture x86-64.
Aug 13 01:04:06.131419 systemd[1]: Detected first boot.
Aug 13 01:04:06.131432 systemd[1]: Initializing machine ID from random generator.
Aug 13 01:04:06.131441 zram_generator::config[1044]: No configuration found.
Aug 13 01:04:06.131452 kernel: Guest personality initialized and is inactive
Aug 13 01:04:06.131461 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Aug 13 01:04:06.131470 kernel: Initialized host personality
Aug 13 01:04:06.131480 kernel: NET: Registered PF_VSOCK protocol family
Aug 13 01:04:06.131489 systemd[1]: Populated /etc with preset unit settings.
Aug 13 01:04:06.131502 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Aug 13 01:04:06.131512 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Aug 13 01:04:06.131521 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Aug 13 01:04:06.131531 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Aug 13 01:04:06.131541 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Aug 13 01:04:06.131550 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Aug 13 01:04:06.131560 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Aug 13 01:04:06.131572 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Aug 13 01:04:06.131582 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Aug 13 01:04:06.131606 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Aug 13 01:04:06.131617 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Aug 13 01:04:06.131627 systemd[1]: Created slice user.slice - User and Session Slice.
Aug 13 01:04:06.131637 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 13 01:04:06.131646 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 13 01:04:06.131656 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Aug 13 01:04:06.131666 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Aug 13 01:04:06.131678 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Aug 13 01:04:06.131691 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Aug 13 01:04:06.131701 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Aug 13 01:04:06.131714 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 13 01:04:06.131724 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Aug 13 01:04:06.131734 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Aug 13 01:04:06.131744 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Aug 13 01:04:06.131756 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Aug 13 01:04:06.131766 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 13 01:04:06.131776 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Aug 13 01:04:06.131786 systemd[1]: Reached target slices.target - Slice Units.
Aug 13 01:04:06.131796 systemd[1]: Reached target swap.target - Swaps.
Aug 13 01:04:06.131806 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Aug 13 01:04:06.131816 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Aug 13 01:04:06.131826 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Aug 13 01:04:06.131836 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Aug 13 01:04:06.131848 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Aug 13 01:04:06.131858 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 13 01:04:06.131868 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Aug 13 01:04:06.131878 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Aug 13 01:04:06.131890 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Aug 13 01:04:06.131900 systemd[1]: Mounting media.mount - External Media Directory...
Aug 13 01:04:06.131910 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 01:04:06.131920 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Aug 13 01:04:06.131931 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Aug 13 01:04:06.131941 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Aug 13 01:04:06.131951 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Aug 13 01:04:06.131962 systemd[1]: Reached target machines.target - Containers.
Aug 13 01:04:06.131974 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Aug 13 01:04:06.131984 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 13 01:04:06.131994 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Aug 13 01:04:06.132004 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Aug 13 01:04:06.132014 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 13 01:04:06.132024 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Aug 13 01:04:06.132034 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 13 01:04:06.132044 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Aug 13 01:04:06.132054 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 13 01:04:06.132066 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Aug 13 01:04:06.132076 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Aug 13 01:04:06.132086 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Aug 13 01:04:06.132096 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Aug 13 01:04:06.132106 systemd[1]: Stopped systemd-fsck-usr.service.
Aug 13 01:04:06.132116 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Aug 13 01:04:06.132126 kernel: fuse: init (API version 7.39)
Aug 13 01:04:06.132139 systemd[1]: Starting systemd-journald.service - Journal Service...
Aug 13 01:04:06.132149 kernel: loop: module loaded
Aug 13 01:04:06.132159 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Aug 13 01:04:06.132169 kernel: ACPI: bus type drm_connector registered
Aug 13 01:04:06.132178 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Aug 13 01:04:06.132188 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Aug 13 01:04:06.132198 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Aug 13 01:04:06.132209 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Aug 13 01:04:06.132237 systemd-journald[1135]: Collecting audit messages is disabled.
Aug 13 01:04:06.132259 systemd-journald[1135]: Journal started
Aug 13 01:04:06.132279 systemd-journald[1135]: Runtime Journal (/run/log/journal/87aa9beaf05645ad888dc62af32768ed) is 8M, max 78.3M, 70.3M free.
Aug 13 01:04:05.835207 systemd[1]: Queued start job for default target multi-user.target.
Aug 13 01:04:05.846569 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Aug 13 01:04:05.847044 systemd[1]: systemd-journald.service: Deactivated successfully.
Aug 13 01:04:06.142643 systemd[1]: verity-setup.service: Deactivated successfully.
Aug 13 01:04:06.147103 systemd[1]: Stopped verity-setup.service.
Aug 13 01:04:06.147134 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 01:04:06.154748 systemd[1]: Started systemd-journald.service - Journal Service.
Aug 13 01:04:06.163320 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Aug 13 01:04:06.164983 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Aug 13 01:04:06.165663 systemd[1]: Mounted media.mount - External Media Directory.
Aug 13 01:04:06.166466 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Aug 13 01:04:06.167098 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Aug 13 01:04:06.167935 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Aug 13 01:04:06.168967 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Aug 13 01:04:06.169941 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 13 01:04:06.170917 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Aug 13 01:04:06.171192 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Aug 13 01:04:06.172192 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 13 01:04:06.172462 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 13 01:04:06.173960 systemd[1]: modprobe@drm.service: Deactivated successfully.
Aug 13 01:04:06.174249 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Aug 13 01:04:06.175442 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 13 01:04:06.175794 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 13 01:04:06.177006 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Aug 13 01:04:06.177281 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Aug 13 01:04:06.178155 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 13 01:04:06.178422 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 13 01:04:06.179606 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Aug 13 01:04:06.180565 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Aug 13 01:04:06.182077 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Aug 13 01:04:06.183222 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Aug 13 01:04:06.199971 systemd[1]: Reached target network-pre.target - Preparation for Network.
Aug 13 01:04:06.207235 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Aug 13 01:04:06.211694 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Aug 13 01:04:06.212341 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Aug 13 01:04:06.212423 systemd[1]: Reached target local-fs.target - Local File Systems.
Aug 13 01:04:06.213951 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Aug 13 01:04:06.222723 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Aug 13 01:04:06.227766 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Aug 13 01:04:06.230258 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 13 01:04:06.237058 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Aug 13 01:04:06.240367 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Aug 13 01:04:06.242309 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Aug 13 01:04:06.247791 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Aug 13 01:04:06.248371 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Aug 13 01:04:06.250692 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Aug 13 01:04:06.254968 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Aug 13 01:04:06.258690 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Aug 13 01:04:06.262411 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 13 01:04:06.264302 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Aug 13 01:04:06.267658 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Aug 13 01:04:06.269111 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Aug 13 01:04:06.283031 systemd-journald[1135]: Time spent on flushing to /var/log/journal/87aa9beaf05645ad888dc62af32768ed is 49.433ms for 993 entries.
Aug 13 01:04:06.283031 systemd-journald[1135]: System Journal (/var/log/journal/87aa9beaf05645ad888dc62af32768ed) is 8M, max 195.6M, 187.6M free.
Aug 13 01:04:06.350355 systemd-journald[1135]: Received client request to flush runtime journal.
Aug 13 01:04:06.350582 kernel: loop0: detected capacity change from 0 to 147912
Aug 13 01:04:06.291881 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Aug 13 01:04:06.294832 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Aug 13 01:04:06.297939 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Aug 13 01:04:06.310423 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Aug 13 01:04:06.348757 udevadm[1176]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Aug 13 01:04:06.356279 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Aug 13 01:04:06.368039 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Aug 13 01:04:06.384638 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Aug 13 01:04:06.389531 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Aug 13 01:04:06.392575 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Aug 13 01:04:06.402031 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Aug 13 01:04:06.411294 kernel: loop1: detected capacity change from 0 to 138176
Aug 13 01:04:06.443204 systemd-tmpfiles[1189]: ACLs are not supported, ignoring.
Aug 13 01:04:06.443901 systemd-tmpfiles[1189]: ACLs are not supported, ignoring.
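For scale, the journald flush report above works out to roughly 50 microseconds per entry, assuming the reported figures are exact:

    flush_ms, entries = 49.433, 993
    print(f"{flush_ms / entries * 1000:.1f} us per entry")  # ~49.8 us per flushed entry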
Aug 13 01:04:06.456621 kernel: loop2: detected capacity change from 0 to 221472
Aug 13 01:04:06.456125 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 13 01:04:06.508625 kernel: loop3: detected capacity change from 0 to 8
Aug 13 01:04:06.525974 kernel: loop4: detected capacity change from 0 to 147912
Aug 13 01:04:06.554693 kernel: loop5: detected capacity change from 0 to 138176
Aug 13 01:04:06.574237 kernel: loop6: detected capacity change from 0 to 221472
Aug 13 01:04:06.603113 kernel: loop7: detected capacity change from 0 to 8
Aug 13 01:04:06.604223 (sd-merge)[1195]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-akamai'.
Aug 13 01:04:06.606798 (sd-merge)[1195]: Merged extensions into '/usr'.
Aug 13 01:04:06.615446 systemd[1]: Reload requested from client PID 1170 ('systemd-sysext') (unit systemd-sysext.service)...
Aug 13 01:04:06.615534 systemd[1]: Reloading...
Aug 13 01:04:06.713735 zram_generator::config[1219]: No configuration found.
Aug 13 01:04:06.846031 ldconfig[1165]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Aug 13 01:04:06.856217 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 13 01:04:06.913976 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Aug 13 01:04:06.914306 systemd[1]: Reloading finished in 296 ms.
Aug 13 01:04:06.929719 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Aug 13 01:04:06.930664 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Aug 13 01:04:06.931506 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Aug 13 01:04:06.944036 systemd[1]: Starting ensure-sysext.service...
Aug 13 01:04:06.947727 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Aug 13 01:04:06.956096 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 13 01:04:06.965110 systemd[1]: Reload requested from client PID 1267 ('systemctl') (unit ensure-sysext.service)...
Aug 13 01:04:06.965188 systemd[1]: Reloading...
Aug 13 01:04:06.984248 systemd-tmpfiles[1268]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Aug 13 01:04:06.984495 systemd-tmpfiles[1268]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Aug 13 01:04:06.986271 systemd-tmpfiles[1268]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Aug 13 01:04:06.986537 systemd-tmpfiles[1268]: ACLs are not supported, ignoring.
Aug 13 01:04:06.987507 systemd-tmpfiles[1268]: ACLs are not supported, ignoring.
Aug 13 01:04:06.994831 systemd-tmpfiles[1268]: Detected autofs mount point /boot during canonicalization of boot.
Aug 13 01:04:06.994843 systemd-tmpfiles[1268]: Skipping /boot
Aug 13 01:04:07.002308 systemd-udevd[1269]: Using default interface naming scheme 'v255'.
Aug 13 01:04:07.014517 systemd-tmpfiles[1268]: Detected autofs mount point /boot during canonicalization of boot.
Aug 13 01:04:07.014527 systemd-tmpfiles[1268]: Skipping /boot
Aug 13 01:04:07.081630 zram_generator::config[1311]: No configuration found.
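The (sd-merge) lines above are systemd-sysext overlaying the four extension images onto /usr, after which PID 1 is asked to reload its unit set. A rough Python sketch of the discovery step, assuming the search directories named in the systemd-sysext man page; the kubernetes.raw symlink written by the Ignition files stage earlier is picked up exactly this way:

    from pathlib import Path

    # Directories systemd-sysext scans for extension images (per its man page).
    SEARCH_PATHS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

    def list_extension_images() -> list[Path]:
        images = []
        for d in map(Path, SEARCH_PATHS):
            if d.is_dir():
                # Raw disk images and plain directory trees are both accepted.
                images += sorted(p for p in d.iterdir()
                                 if p.suffix == ".raw" or p.is_dir())
        return images

    print(list_extension_images())  # e.g. [PosixPath('/etc/extensions/kubernetes.raw')]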
Aug 13 01:04:07.207664 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Aug 13 01:04:07.226744 kernel: ACPI: button: Power Button [PWRF]
Aug 13 01:04:07.249504 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 13 01:04:07.269604 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Aug 13 01:04:07.269868 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Aug 13 01:04:07.270049 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Aug 13 01:04:07.277612 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Aug 13 01:04:07.295610 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 45 scanned by (udev-worker) (1329)
Aug 13 01:04:07.301258 kernel: EDAC MC: Ver: 3.0.0
Aug 13 01:04:07.330018 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Aug 13 01:04:07.330330 systemd[1]: Reloading finished in 364 ms.
Aug 13 01:04:07.348997 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 13 01:04:07.373007 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Aug 13 01:04:07.413659 kernel: mousedev: PS/2 mouse device common for all mice
Aug 13 01:04:07.411498 systemd[1]: Finished ensure-sysext.service.
Aug 13 01:04:07.432339 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Aug 13 01:04:07.436576 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Aug 13 01:04:07.445289 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 01:04:07.453282 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Aug 13 01:04:07.456865 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Aug 13 01:04:07.457544 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 13 01:04:07.461664 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Aug 13 01:04:07.465632 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 13 01:04:07.470466 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Aug 13 01:04:07.474906 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 13 01:04:07.480548 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 13 01:04:07.481434 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 13 01:04:07.483752 lvm[1379]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Aug 13 01:04:07.489193 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Aug 13 01:04:07.490647 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Aug 13 01:04:07.493511 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Aug 13 01:04:07.504407 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Aug 13 01:04:07.512815 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Aug 13 01:04:07.522507 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Aug 13 01:04:07.526696 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Aug 13 01:04:07.529974 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 01:04:07.531841 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 01:04:07.534657 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Aug 13 01:04:07.535885 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 13 01:04:07.536342 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 13 01:04:07.538054 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 13 01:04:07.538650 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 13 01:04:07.551760 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Aug 13 01:04:07.562766 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Aug 13 01:04:07.563872 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Aug 13 01:04:07.568497 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Aug 13 01:04:07.569767 systemd[1]: modprobe@drm.service: Deactivated successfully.
Aug 13 01:04:07.570893 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Aug 13 01:04:07.572096 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 13 01:04:07.572647 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 13 01:04:07.574570 lvm[1400]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Aug 13 01:04:07.578870 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Aug 13 01:04:07.584644 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Aug 13 01:04:07.585497 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Aug 13 01:04:07.598364 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Aug 13 01:04:07.607889 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Aug 13 01:04:07.618191 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Aug 13 01:04:07.633448 augenrules[1424]: No rules
Aug 13 01:04:07.634063 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Aug 13 01:04:07.635210 systemd[1]: audit-rules.service: Deactivated successfully.
Aug 13 01:04:07.635460 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Aug 13 01:04:07.645775 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Aug 13 01:04:07.663660 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Aug 13 01:04:07.668961 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Aug 13 01:04:07.742698 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 01:04:07.787689 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Aug 13 01:04:07.789735 systemd[1]: Reached target time-set.target - System Time Set.
Aug 13 01:04:07.797187 systemd-networkd[1391]: lo: Link UP
Aug 13 01:04:07.797663 systemd-networkd[1391]: lo: Gained carrier
Aug 13 01:04:07.800306 systemd-networkd[1391]: Enumeration completed
Aug 13 01:04:07.800383 systemd[1]: Started systemd-networkd.service - Network Configuration.
Aug 13 01:04:07.800995 systemd-networkd[1391]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 13 01:04:07.801000 systemd-networkd[1391]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Aug 13 01:04:07.801577 systemd-networkd[1391]: eth0: Link UP
Aug 13 01:04:07.801588 systemd-networkd[1391]: eth0: Gained carrier
Aug 13 01:04:07.803527 systemd-networkd[1391]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 13 01:04:07.809374 systemd-resolved[1392]: Positive Trust Anchors:
Aug 13 01:04:07.809391 systemd-resolved[1392]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug 13 01:04:07.809417 systemd-resolved[1392]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Aug 13 01:04:07.809717 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Aug 13 01:04:07.811768 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Aug 13 01:04:07.817369 systemd-resolved[1392]: Defaulting to hostname 'linux'.
Aug 13 01:04:07.819626 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Aug 13 01:04:07.820447 systemd[1]: Reached target network.target - Network.
Aug 13 01:04:07.821030 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Aug 13 01:04:07.821653 systemd[1]: Reached target sysinit.target - System Initialization.
Aug 13 01:04:07.822315 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Aug 13 01:04:07.823130 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Aug 13 01:04:07.823955 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Aug 13 01:04:07.824647 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Aug 13 01:04:07.825211 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Aug 13 01:04:07.825968 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
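The positive trust anchor loaded above is the IANA root-zone KSK-2017 DS record. Splitting its fields makes the structure explicit: key tag 20326, algorithm 8 (RSA/SHA-256), digest type 2 (a SHA-256 digest of the root DNSKEY):

    ds = (". IN DS 20326 8 2 "
          "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d")
    owner, cls, rrtype, key_tag, algorithm, digest_type, digest = ds.split()
    assert (key_tag, algorithm, digest_type) == ("20326", "8", "2")
    assert len(digest) == 64  # 32-byte SHA-256 digest, hex-encoded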
Aug 13 01:04:07.825999 systemd[1]: Reached target paths.target - Path Units.
Aug 13 01:04:07.826500 systemd[1]: Reached target timers.target - Timer Units.
Aug 13 01:04:07.828237 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Aug 13 01:04:07.830960 systemd[1]: Starting docker.socket - Docker Socket for the API...
Aug 13 01:04:07.834565 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Aug 13 01:04:07.835473 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Aug 13 01:04:07.836080 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Aug 13 01:04:07.839373 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Aug 13 01:04:07.840406 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Aug 13 01:04:07.842052 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Aug 13 01:04:07.843017 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Aug 13 01:04:07.844314 systemd[1]: Reached target sockets.target - Socket Units.
Aug 13 01:04:07.844864 systemd[1]: Reached target basic.target - Basic System.
Aug 13 01:04:07.845420 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Aug 13 01:04:07.845458 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Aug 13 01:04:07.849770 systemd[1]: Starting containerd.service - containerd container runtime...
Aug 13 01:04:07.852169 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Aug 13 01:04:07.855747 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Aug 13 01:04:07.859704 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Aug 13 01:04:07.863767 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Aug 13 01:04:07.865850 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Aug 13 01:04:07.867307 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Aug 13 01:04:07.875727 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Aug 13 01:04:07.884794 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Aug 13 01:04:07.888000 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Aug 13 01:04:07.907476 jq[1452]: false
Aug 13 01:04:07.913566 systemd[1]: Starting systemd-logind.service - User Login Management...
Aug 13 01:04:07.914477 extend-filesystems[1453]: Found loop4
Aug 13 01:04:07.914477 extend-filesystems[1453]: Found loop5
Aug 13 01:04:07.914477 extend-filesystems[1453]: Found loop6
Aug 13 01:04:07.914477 extend-filesystems[1453]: Found loop7
Aug 13 01:04:07.914477 extend-filesystems[1453]: Found sda
Aug 13 01:04:07.914477 extend-filesystems[1453]: Found sda1
Aug 13 01:04:07.914477 extend-filesystems[1453]: Found sda2
Aug 13 01:04:07.914477 extend-filesystems[1453]: Found sda3
Aug 13 01:04:07.914477 extend-filesystems[1453]: Found usr
Aug 13 01:04:07.914477 extend-filesystems[1453]: Found sda4
Aug 13 01:04:07.914477 extend-filesystems[1453]: Found sda6
Aug 13 01:04:07.914477 extend-filesystems[1453]: Found sda7
Aug 13 01:04:07.914477 extend-filesystems[1453]: Found sda9
Aug 13 01:04:07.914477 extend-filesystems[1453]: Checking size of /dev/sda9
Aug 13 01:04:08.012665 kernel: EXT4-fs (sda9): resizing filesystem from 553472 to 555003 blocks
Aug 13 01:04:08.012765 kernel: EXT4-fs (sda9): resized filesystem to 555003
Aug 13 01:04:08.012867 coreos-metadata[1450]: Aug 13 01:04:07.964 INFO Putting http://169.254.169.254/v1/token: Attempt #1
Aug 13 01:04:07.916047 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Aug 13 01:04:07.957283 dbus-daemon[1451]: [system] SELinux support is enabled
Aug 13 01:04:08.019242 extend-filesystems[1453]: Resized partition /dev/sda9
Aug 13 01:04:07.916562 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Aug 13 01:04:08.024634 extend-filesystems[1471]: resize2fs 1.47.1 (20-May-2024)
Aug 13 01:04:08.024634 extend-filesystems[1471]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Aug 13 01:04:08.024634 extend-filesystems[1471]: old_desc_blocks = 1, new_desc_blocks = 1
Aug 13 01:04:08.024634 extend-filesystems[1471]: The filesystem on /dev/sda9 is now 555003 (4k) blocks long.
Aug 13 01:04:07.925030 systemd[1]: Starting update-engine.service - Update Engine...
Aug 13 01:04:08.039239 extend-filesystems[1453]: Resized filesystem in /dev/sda9
Aug 13 01:04:07.935672 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Aug 13 01:04:08.047618 jq[1470]: true
Aug 13 01:04:07.943933 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Aug 13 01:04:08.047863 update_engine[1466]: I20250813 01:04:07.989552 1466 main.cc:92] Flatcar Update Engine starting
Aug 13 01:04:08.047863 update_engine[1466]: I20250813 01:04:07.994457 1466 update_check_scheduler.cc:74] Next update check in 11m2s
Aug 13 01:04:07.944204 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Aug 13 01:04:07.954034 systemd[1]: extend-filesystems.service: Deactivated successfully.
Aug 13 01:04:08.064424 tar[1474]: linux-amd64/helm
Aug 13 01:04:07.954282 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Aug 13 01:04:08.064928 jq[1484]: true
Aug 13 01:04:07.971454 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Aug 13 01:04:07.982650 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Aug 13 01:04:07.983079 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
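The resize above grows the root filesystem from 553472 to 555003 blocks of 4 KiB, matching the kernel's EXT4-fs messages. The arithmetic, for reference:

    old_blocks, new_blocks, block_size = 553472, 555003, 4096
    print((new_blocks - old_blocks) * block_size // 1024, "KiB added")  # 6124 KiB
    print(round(new_blocks * block_size / 2**30, 2), "GiB total")       # 2.12 GiB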
Aug 13 01:04:08.026355 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Aug 13 01:04:08.026418 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Aug 13 01:04:08.030190 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Aug 13 01:04:08.030210 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Aug 13 01:04:08.038494 (ntainerd)[1485]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Aug 13 01:04:08.051254 systemd[1]: motdgen.service: Deactivated successfully.
Aug 13 01:04:08.052443 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Aug 13 01:04:08.076774 systemd[1]: Started update-engine.service - Update Engine.
Aug 13 01:04:08.090711 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Aug 13 01:04:08.097624 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 45 scanned by (udev-worker) (1328)
Aug 13 01:04:08.195184 systemd-logind[1460]: Watching system buttons on /dev/input/event1 (Power Button)
Aug 13 01:04:08.196949 systemd-logind[1460]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Aug 13 01:04:08.197365 systemd-logind[1460]: New seat seat0.
Aug 13 01:04:08.201819 systemd[1]: Started systemd-logind.service - User Login Management.
Aug 13 01:04:08.203983 bash[1515]: Updated "/home/core/.ssh/authorized_keys"
Aug 13 01:04:08.209649 sshd_keygen[1477]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Aug 13 01:04:08.210338 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Aug 13 01:04:08.231124 systemd[1]: Starting sshkeys.service...
Aug 13 01:04:08.249754 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Aug 13 01:04:08.259523 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Aug 13 01:04:08.261043 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Aug 13 01:04:08.278157 systemd[1]: Starting issuegen.service - Generate /run/issue...
Aug 13 01:04:08.285859 locksmithd[1497]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Aug 13 01:04:08.306609 systemd[1]: issuegen.service: Deactivated successfully.
Aug 13 01:04:08.307115 systemd[1]: Finished issuegen.service - Generate /run/issue.
Aug 13 01:04:08.319875 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Aug 13 01:04:08.344587 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Aug 13 01:04:08.348331 coreos-metadata[1525]: Aug 13 01:04:08.348 INFO Putting http://169.254.169.254/v1/token: Attempt #1
Aug 13 01:04:08.350923 systemd[1]: Started getty@tty1.service - Getty on tty1.
Aug 13 01:04:08.360876 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Aug 13 01:04:08.361312 systemd-networkd[1391]: eth0: DHCPv4 address 172.234.214.143/24, gateway 172.234.214.1 acquired from 23.40.197.137
Aug 13 01:04:08.361615 systemd[1]: Reached target getty.target - Login Prompts.
Aug 13 01:04:08.362171 systemd-timesyncd[1393]: Network configuration changed, trying to establish connection.
Aug 13 01:04:08.363160 dbus-daemon[1451]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1391 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Aug 13 01:04:08.365150 containerd[1485]: time="2025-08-13T01:04:08.365012066Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Aug 13 01:04:08.373074 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Aug 13 01:04:08.403252 containerd[1485]: time="2025-08-13T01:04:08.403210047Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Aug 13 01:04:08.409455 containerd[1485]: time="2025-08-13T01:04:08.409422004Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.100-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Aug 13 01:04:08.409455 containerd[1485]: time="2025-08-13T01:04:08.409452314Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Aug 13 01:04:08.409507 containerd[1485]: time="2025-08-13T01:04:08.409471844Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Aug 13 01:04:08.409688 containerd[1485]: time="2025-08-13T01:04:08.409658953Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Aug 13 01:04:08.409688 containerd[1485]: time="2025-08-13T01:04:08.409683633Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Aug 13 01:04:08.409774 containerd[1485]: time="2025-08-13T01:04:08.409747703Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Aug 13 01:04:08.409774 containerd[1485]: time="2025-08-13T01:04:08.409768423Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Aug 13 01:04:08.410081 containerd[1485]: time="2025-08-13T01:04:08.409948713Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Aug 13 01:04:08.410081 containerd[1485]: time="2025-08-13T01:04:08.409966783Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Aug 13 01:04:08.410081 containerd[1485]: time="2025-08-13T01:04:08.409978673Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Aug 13 01:04:08.410081 containerd[1485]: time="2025-08-13T01:04:08.409987153Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Aug 13 01:04:08.410081 containerd[1485]: time="2025-08-13T01:04:08.410073243Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Aug 13 01:04:08.410311 containerd[1485]: time="2025-08-13T01:04:08.410285563Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Aug 13 01:04:08.410487 containerd[1485]: time="2025-08-13T01:04:08.410431483Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Aug 13 01:04:08.410487 containerd[1485]: time="2025-08-13T01:04:08.410448943Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Aug 13 01:04:08.410617 containerd[1485]: time="2025-08-13T01:04:08.410540223Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Aug 13 01:04:08.410640 containerd[1485]: time="2025-08-13T01:04:08.410616383Z" level=info msg="metadata content store policy set" policy=shared
Aug 13 01:04:08.420267 containerd[1485]: time="2025-08-13T01:04:08.420100758Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Aug 13 01:04:08.420267 containerd[1485]: time="2025-08-13T01:04:08.420141568Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Aug 13 01:04:08.420267 containerd[1485]: time="2025-08-13T01:04:08.420155508Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Aug 13 01:04:08.420267 containerd[1485]: time="2025-08-13T01:04:08.420168868Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Aug 13 01:04:08.420267 containerd[1485]: time="2025-08-13T01:04:08.420180108Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Aug 13 01:04:08.420380 containerd[1485]: time="2025-08-13T01:04:08.420295268Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Aug 13 01:04:08.420622 containerd[1485]: time="2025-08-13T01:04:08.420459138Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Aug 13 01:04:08.420622 containerd[1485]: time="2025-08-13T01:04:08.420563728Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Aug 13 01:04:08.420622 containerd[1485]: time="2025-08-13T01:04:08.420577888Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Aug 13 01:04:08.420622 containerd[1485]: time="2025-08-13T01:04:08.420615368Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Aug 13 01:04:08.420884 containerd[1485]: time="2025-08-13T01:04:08.420635618Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Aug 13 01:04:08.420884 containerd[1485]: time="2025-08-13T01:04:08.420647018Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Aug 13 01:04:08.420884 containerd[1485]: time="2025-08-13T01:04:08.420848178Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Aug 13 01:04:08.420884 containerd[1485]: time="2025-08-13T01:04:08.420859368Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Aug 13 01:04:08.420884 containerd[1485]: time="2025-08-13T01:04:08.420871048Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Aug 13 01:04:08.420884 containerd[1485]: time="2025-08-13T01:04:08.420882088Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Aug 13 01:04:08.420982 containerd[1485]: time="2025-08-13T01:04:08.420892488Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Aug 13 01:04:08.420982 containerd[1485]: time="2025-08-13T01:04:08.420902288Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Aug 13 01:04:08.420982 containerd[1485]: time="2025-08-13T01:04:08.420918418Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Aug 13 01:04:08.420982 containerd[1485]: time="2025-08-13T01:04:08.420928968Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Aug 13 01:04:08.420982 containerd[1485]: time="2025-08-13T01:04:08.420942258Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Aug 13 01:04:08.420982 containerd[1485]: time="2025-08-13T01:04:08.420952628Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Aug 13 01:04:08.420982 containerd[1485]: time="2025-08-13T01:04:08.420962908Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Aug 13 01:04:08.420982 containerd[1485]: time="2025-08-13T01:04:08.420973038Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Aug 13 01:04:08.420982 containerd[1485]: time="2025-08-13T01:04:08.420983728Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Aug 13 01:04:08.421127 containerd[1485]: time="2025-08-13T01:04:08.420994478Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Aug 13 01:04:08.421127 containerd[1485]: time="2025-08-13T01:04:08.421005478Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Aug 13 01:04:08.421127 containerd[1485]: time="2025-08-13T01:04:08.421019148Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Aug 13 01:04:08.421127 containerd[1485]: time="2025-08-13T01:04:08.421032098Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Aug 13 01:04:08.421127 containerd[1485]: time="2025-08-13T01:04:08.421041688Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Aug 13 01:04:08.421127 containerd[1485]: time="2025-08-13T01:04:08.421052088Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Aug 13 01:04:08.421127 containerd[1485]: time="2025-08-13T01:04:08.421063508Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Aug 13 01:04:08.421127 containerd[1485]: time="2025-08-13T01:04:08.421080258Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Aug 13 01:04:08.421127 containerd[1485]: time="2025-08-13T01:04:08.421090308Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Aug 13 01:04:08.421127 containerd[1485]: time="2025-08-13T01:04:08.421099238Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Aug 13 01:04:08.421282 containerd[1485]: time="2025-08-13T01:04:08.421136308Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Aug 13 01:04:08.421282 containerd[1485]: time="2025-08-13T01:04:08.421149208Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Aug 13 01:04:08.421282 containerd[1485]: time="2025-08-13T01:04:08.421158878Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Aug 13 01:04:08.421282 containerd[1485]: time="2025-08-13T01:04:08.421168548Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Aug 13 01:04:08.421282 containerd[1485]: time="2025-08-13T01:04:08.421176298Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Aug 13 01:04:08.421282 containerd[1485]: time="2025-08-13T01:04:08.421186338Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Aug 13 01:04:08.421282 containerd[1485]: time="2025-08-13T01:04:08.421195318Z" level=info msg="NRI interface is disabled by configuration."
Aug 13 01:04:08.421282 containerd[1485]: time="2025-08-13T01:04:08.421203728Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Aug 13 01:04:08.422405 containerd[1485]: time="2025-08-13T01:04:08.421412638Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Aug 13 01:04:08.422405 containerd[1485]: time="2025-08-13T01:04:08.421455818Z" level=info msg="Connect containerd service"
Aug 13 01:04:08.422405 containerd[1485]: time="2025-08-13T01:04:08.421494488Z" level=info msg="using legacy CRI server"
Aug 13 01:04:08.422405 containerd[1485]: time="2025-08-13T01:04:08.421501668Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Aug 13 01:04:08.422405 containerd[1485]: time="2025-08-13T01:04:08.421580237Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Aug 13 01:04:08.423347 containerd[1485]: time="2025-08-13T01:04:08.422504107Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Aug 13 01:04:08.423347 containerd[1485]: time="2025-08-13T01:04:08.422640537Z" level=info msg="Start subscribing containerd event"
Aug 13 01:04:08.423347 containerd[1485]: time="2025-08-13T01:04:08.422906637Z" level=info msg="Start recovering state"
Aug 13 01:04:08.423347 containerd[1485]: time="2025-08-13T01:04:08.422979987Z" level=info msg="Start event monitor"
Aug 13 01:04:08.423347 containerd[1485]: time="2025-08-13T01:04:08.422992557Z" level=info msg="Start snapshots syncer"
Aug 13 01:04:08.423347 containerd[1485]: time="2025-08-13T01:04:08.423002597Z" level=info msg="Start cni network conf syncer for default"
Aug 13 01:04:08.423347 containerd[1485]: time="2025-08-13T01:04:08.423011387Z" level=info msg="Start streaming server"
Aug 13 01:04:08.424806 containerd[1485]: time="2025-08-13T01:04:08.424743166Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Aug 13 01:04:08.424838 containerd[1485]: time="2025-08-13T01:04:08.424827326Z" level=info msg=serving... address=/run/containerd/containerd.sock
Aug 13 01:04:08.426285 containerd[1485]: time="2025-08-13T01:04:08.424940766Z" level=info msg="containerd successfully booted in 0.060993s"
Aug 13 01:04:08.424976 systemd[1]: Started containerd.service - containerd container runtime.
Aug 13 01:04:08.465165 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Aug 13 01:04:08.466991 dbus-daemon[1451]: [system] Successfully activated service 'org.freedesktop.hostname1'
Aug 13 01:04:08.467315 dbus-daemon[1451]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.8' (uid=0 pid=1543 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Aug 13 01:04:08.475329 systemd[1]: Starting polkit.service - Authorization Manager...
Aug 13 01:04:08.484083 polkitd[1547]: Started polkitd version 121
Aug 13 01:04:08.487410 polkitd[1547]: Loading rules from directory /etc/polkit-1/rules.d
Aug 13 01:04:08.487464 polkitd[1547]: Loading rules from directory /usr/share/polkit-1/rules.d
Aug 13 01:04:08.488080 polkitd[1547]: Finished loading, compiling and executing 2 rules
Aug 13 01:04:08.488402 dbus-daemon[1451]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Aug 13 01:04:08.488510 systemd[1]: Started polkit.service - Authorization Manager.
Aug 13 01:04:08.489286 polkitd[1547]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Aug 13 01:04:08.497628 systemd-resolved[1392]: System hostname changed to '172-234-214-143'.
Aug 13 01:04:08.497753 systemd-hostnamed[1543]: Hostname set to <172-234-214-143> (transient)
Aug 13 01:04:08.674168 tar[1474]: linux-amd64/LICENSE
Aug 13 01:04:08.674168 tar[1474]: linux-amd64/README.md
Aug 13 01:04:08.685056 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Aug 13 01:04:08.888996 systemd-networkd[1391]: eth0: Gained IPv6LL
Aug 13 01:04:08.889589 systemd-timesyncd[1393]: Network configuration changed, trying to establish connection.
Aug 13 01:04:08.890983 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Aug 13 01:04:08.892698 systemd[1]: Reached target network-online.target - Network is Online.
Aug 13 01:04:08.898790 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 13 01:04:08.903661 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Aug 13 01:04:08.922891 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Aug 13 01:04:08.974451 coreos-metadata[1450]: Aug 13 01:04:08.974 INFO Putting http://169.254.169.254/v1/token: Attempt #2
Aug 13 01:04:09.062865 coreos-metadata[1450]: Aug 13 01:04:09.062 INFO Fetching http://169.254.169.254/v1/instance: Attempt #1
Aug 13 01:04:09.249727 coreos-metadata[1450]: Aug 13 01:04:09.249 INFO Fetch successful
Aug 13 01:04:09.249933 coreos-metadata[1450]: Aug 13 01:04:09.249 INFO Fetching http://169.254.169.254/v1/network: Attempt #1
Aug 13 01:04:09.358931 coreos-metadata[1525]: Aug 13 01:04:09.358 INFO Putting http://169.254.169.254/v1/token: Attempt #2
Aug 13 01:04:09.505770 coreos-metadata[1450]: Aug 13 01:04:09.505 INFO Fetch successful
Aug 13 01:04:09.529361 coreos-metadata[1525]: Aug 13 01:04:09.529 INFO Fetching http://169.254.169.254/v1/ssh-keys: Attempt #1
Aug 13 01:04:09.579513 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Aug 13 01:04:09.581120 systemd-timesyncd[1393]: Network configuration changed, trying to establish connection.
Aug 13 01:04:09.581823 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Aug 13 01:04:09.660332 coreos-metadata[1525]: Aug 13 01:04:09.660 INFO Fetch successful
Aug 13 01:04:09.673496 update-ssh-keys[1594]: Updated "/home/core/.ssh/authorized_keys"
Aug 13 01:04:09.674139 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Aug 13 01:04:09.676156 systemd[1]: Finished sshkeys.service.
Aug 13 01:04:09.737040 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 13 01:04:09.738081 systemd[1]: Reached target multi-user.target - Multi-User System.
Aug 13 01:04:09.740537 systemd[1]: Startup finished in 822ms (kernel) + 7.493s (initrd) + 4.579s (userspace) = 12.894s.
Aug 13 01:04:09.779881 (kubelet)[1603]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Aug 13 01:04:10.250293 kubelet[1603]: E0813 01:04:10.250163 1603 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 13 01:04:10.253973 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 13 01:04:10.254168 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 13 01:04:10.254561 systemd[1]: kubelet.service: Consumed 842ms CPU time, 266.6M memory peak.
Aug 13 01:04:11.642583 systemd-timesyncd[1393]: Network configuration changed, trying to establish connection.
Aug 13 01:04:12.189498 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Aug 13 01:04:12.194820 systemd[1]: Started sshd@0-172.234.214.143:22-139.178.89.65:41262.service - OpenSSH per-connection server daemon (139.178.89.65:41262).
Aug 13 01:04:12.531542 sshd[1615]: Accepted publickey for core from 139.178.89.65 port 41262 ssh2: RSA SHA256:oX6/jK8RmEU+rk8eVm22B42TdBvg0UX27UDC2BuKWWY
Aug 13 01:04:12.532282 sshd-session[1615]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:04:12.538436 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Aug 13 01:04:12.543785 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Aug 13 01:04:12.550601 systemd-logind[1460]: New session 1 of user core.
Aug 13 01:04:12.555775 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Aug 13 01:04:12.563812 systemd[1]: Starting user@500.service - User Manager for UID 500...
Aug 13 01:04:12.566673 (systemd)[1619]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Aug 13 01:04:12.568842 systemd-logind[1460]: New session c1 of user core.
Aug 13 01:04:12.697212 systemd[1619]: Queued start job for default target default.target.
Aug 13 01:04:12.704878 systemd[1619]: Created slice app.slice - User Application Slice.
Aug 13 01:04:12.704907 systemd[1619]: Reached target paths.target - Paths.
Aug 13 01:04:12.704947 systemd[1619]: Reached target timers.target - Timers.
Aug 13 01:04:12.706369 systemd[1619]: Starting dbus.socket - D-Bus User Message Bus Socket...
Aug 13 01:04:12.716977 systemd[1619]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Aug 13 01:04:12.717092 systemd[1619]: Reached target sockets.target - Sockets.
Aug 13 01:04:12.717129 systemd[1619]: Reached target basic.target - Basic System.
Aug 13 01:04:12.717170 systemd[1619]: Reached target default.target - Main User Target.
Aug 13 01:04:12.717199 systemd[1619]: Startup finished in 142ms.
Aug 13 01:04:12.717472 systemd[1]: Started user@500.service - User Manager for UID 500.
Aug 13 01:04:12.725754 systemd[1]: Started session-1.scope - Session 1 of User core.
Aug 13 01:04:12.984827 systemd[1]: Started sshd@1-172.234.214.143:22-139.178.89.65:41264.service - OpenSSH per-connection server daemon (139.178.89.65:41264).
Aug 13 01:04:13.308752 sshd[1630]: Accepted publickey for core from 139.178.89.65 port 41264 ssh2: RSA SHA256:oX6/jK8RmEU+rk8eVm22B42TdBvg0UX27UDC2BuKWWY
Aug 13 01:04:13.310746 sshd-session[1630]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:04:13.316190 systemd-logind[1460]: New session 2 of user core.
Aug 13 01:04:13.326750 systemd[1]: Started session-2.scope - Session 2 of User core.
Aug 13 01:04:13.551966 sshd[1632]: Connection closed by 139.178.89.65 port 41264
Aug 13 01:04:13.552662 sshd-session[1630]: pam_unix(sshd:session): session closed for user core
Aug 13 01:04:13.556608 systemd-logind[1460]: Session 2 logged out. Waiting for processes to exit.
Aug 13 01:04:13.557719 systemd[1]: sshd@1-172.234.214.143:22-139.178.89.65:41264.service: Deactivated successfully.
Aug 13 01:04:13.559764 systemd[1]: session-2.scope: Deactivated successfully.
Aug 13 01:04:13.561054 systemd-logind[1460]: Removed session 2.
Aug 13 01:04:13.621788 systemd[1]: Started sshd@2-172.234.214.143:22-139.178.89.65:41280.service - OpenSSH per-connection server daemon (139.178.89.65:41280).
Aug 13 01:04:13.947887 sshd[1638]: Accepted publickey for core from 139.178.89.65 port 41280 ssh2: RSA SHA256:oX6/jK8RmEU+rk8eVm22B42TdBvg0UX27UDC2BuKWWY
Aug 13 01:04:13.949844 sshd-session[1638]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:04:13.955909 systemd-logind[1460]: New session 3 of user core.
Aug 13 01:04:13.967749 systemd[1]: Started session-3.scope - Session 3 of User core.
Aug 13 01:04:14.192251 sshd[1640]: Connection closed by 139.178.89.65 port 41280
Aug 13 01:04:14.192929 sshd-session[1638]: pam_unix(sshd:session): session closed for user core
Aug 13 01:04:14.197382 systemd[1]: sshd@2-172.234.214.143:22-139.178.89.65:41280.service: Deactivated successfully.
Aug 13 01:04:14.199214 systemd[1]: session-3.scope: Deactivated successfully.
Aug 13 01:04:14.200154 systemd-logind[1460]: Session 3 logged out. Waiting for processes to exit.
Aug 13 01:04:14.201237 systemd-logind[1460]: Removed session 3.
Aug 13 01:04:14.255792 systemd[1]: Started sshd@3-172.234.214.143:22-139.178.89.65:41286.service - OpenSSH per-connection server daemon (139.178.89.65:41286).
Aug 13 01:04:14.587554 sshd[1646]: Accepted publickey for core from 139.178.89.65 port 41286 ssh2: RSA SHA256:oX6/jK8RmEU+rk8eVm22B42TdBvg0UX27UDC2BuKWWY
Aug 13 01:04:14.589428 sshd-session[1646]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:04:14.593292 systemd-logind[1460]: New session 4 of user core.
Aug 13 01:04:14.596729 systemd[1]: Started session-4.scope - Session 4 of User core.
Aug 13 01:04:14.833456 sshd[1648]: Connection closed by 139.178.89.65 port 41286
Aug 13 01:04:14.834837 sshd-session[1646]: pam_unix(sshd:session): session closed for user core
Aug 13 01:04:14.838272 systemd[1]: sshd@3-172.234.214.143:22-139.178.89.65:41286.service: Deactivated successfully.
Aug 13 01:04:14.840514 systemd[1]: session-4.scope: Deactivated successfully.
Aug 13 01:04:14.842070 systemd-logind[1460]: Session 4 logged out. Waiting for processes to exit.
Aug 13 01:04:14.843296 systemd-logind[1460]: Removed session 4.
Aug 13 01:04:14.896910 systemd[1]: Started sshd@4-172.234.214.143:22-139.178.89.65:41294.service - OpenSSH per-connection server daemon (139.178.89.65:41294).
Aug 13 01:04:15.215984 sshd[1654]: Accepted publickey for core from 139.178.89.65 port 41294 ssh2: RSA SHA256:oX6/jK8RmEU+rk8eVm22B42TdBvg0UX27UDC2BuKWWY
Aug 13 01:04:15.218134 sshd-session[1654]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:04:15.222680 systemd-logind[1460]: New session 5 of user core.
Aug 13 01:04:15.228710 systemd[1]: Started session-5.scope - Session 5 of User core.
Aug 13 01:04:15.416906 sudo[1657]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Aug 13 01:04:15.417224 sudo[1657]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Aug 13 01:04:15.431899 sudo[1657]: pam_unix(sudo:session): session closed for user root
Aug 13 01:04:15.481662 sshd[1656]: Connection closed by 139.178.89.65 port 41294
Aug 13 01:04:15.482505 sshd-session[1654]: pam_unix(sshd:session): session closed for user core
Aug 13 01:04:15.485439 systemd[1]: sshd@4-172.234.214.143:22-139.178.89.65:41294.service: Deactivated successfully.
Aug 13 01:04:15.487118 systemd[1]: session-5.scope: Deactivated successfully.
Aug 13 01:04:15.488405 systemd-logind[1460]: Session 5 logged out. Waiting for processes to exit.
Aug 13 01:04:15.489401 systemd-logind[1460]: Removed session 5.
Aug 13 01:04:15.549825 systemd[1]: Started sshd@5-172.234.214.143:22-139.178.89.65:41308.service - OpenSSH per-connection server daemon (139.178.89.65:41308).
Aug 13 01:04:15.879906 sshd[1663]: Accepted publickey for core from 139.178.89.65 port 41308 ssh2: RSA SHA256:oX6/jK8RmEU+rk8eVm22B42TdBvg0UX27UDC2BuKWWY
Aug 13 01:04:15.881690 sshd-session[1663]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:04:15.886502 systemd-logind[1460]: New session 6 of user core.
Aug 13 01:04:15.893737 systemd[1]: Started session-6.scope - Session 6 of User core.
Aug 13 01:04:16.080354 sudo[1667]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Aug 13 01:04:16.080714 sudo[1667]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Aug 13 01:04:16.084638 sudo[1667]: pam_unix(sudo:session): session closed for user root
Aug 13 01:04:16.090273 sudo[1666]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Aug 13 01:04:16.090577 sudo[1666]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Aug 13 01:04:16.109853 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Aug 13 01:04:16.139087 augenrules[1689]: No rules
Aug 13 01:04:16.140763 systemd[1]: audit-rules.service: Deactivated successfully.
Aug 13 01:04:16.141050 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Aug 13 01:04:16.142348 sudo[1666]: pam_unix(sudo:session): session closed for user root
Aug 13 01:04:16.193960 sshd[1665]: Connection closed by 139.178.89.65 port 41308
Aug 13 01:04:16.194577 sshd-session[1663]: pam_unix(sshd:session): session closed for user core
Aug 13 01:04:16.198196 systemd-logind[1460]: Session 6 logged out. Waiting for processes to exit.
Aug 13 01:04:16.198454 systemd[1]: sshd@5-172.234.214.143:22-139.178.89.65:41308.service: Deactivated successfully.
Aug 13 01:04:16.200359 systemd[1]: session-6.scope: Deactivated successfully.
Aug 13 01:04:16.201142 systemd-logind[1460]: Removed session 6.
Aug 13 01:04:16.261798 systemd[1]: Started sshd@6-172.234.214.143:22-139.178.89.65:41316.service - OpenSSH per-connection server daemon (139.178.89.65:41316).
Aug 13 01:04:16.594134 sshd[1698]: Accepted publickey for core from 139.178.89.65 port 41316 ssh2: RSA SHA256:oX6/jK8RmEU+rk8eVm22B42TdBvg0UX27UDC2BuKWWY
Aug 13 01:04:16.595834 sshd-session[1698]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:04:16.600423 systemd-logind[1460]: New session 7 of user core.
Aug 13 01:04:16.612727 systemd[1]: Started session-7.scope - Session 7 of User core.
Aug 13 01:04:16.795204 sudo[1701]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Aug 13 01:04:16.795520 sudo[1701]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Aug 13 01:04:17.059021 systemd[1]: Starting docker.service - Docker Application Container Engine...
Aug 13 01:04:17.059177 (dockerd)[1719]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Aug 13 01:04:17.307684 dockerd[1719]: time="2025-08-13T01:04:17.307625423Z" level=info msg="Starting up"
Aug 13 01:04:17.363848 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2510712547-merged.mount: Deactivated successfully.
Aug 13 01:04:17.387904 dockerd[1719]: time="2025-08-13T01:04:17.387877133Z" level=info msg="Loading containers: start."
Aug 13 01:04:17.533622 kernel: Initializing XFRM netlink socket
Aug 13 01:04:17.555691 systemd-timesyncd[1393]: Network configuration changed, trying to establish connection.
Aug 13 01:04:17.565498 systemd-timesyncd[1393]: Network configuration changed, trying to establish connection.
Aug 13 01:04:17.608471 systemd-networkd[1391]: docker0: Link UP
Aug 13 01:04:17.608698 systemd-timesyncd[1393]: Network configuration changed, trying to establish connection.
Aug 13 01:04:17.626960 dockerd[1719]: time="2025-08-13T01:04:17.626886974Z" level=info msg="Loading containers: done."
Aug 13 01:04:17.640226 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3426160387-merged.mount: Deactivated successfully.
Aug 13 01:04:17.642315 dockerd[1719]: time="2025-08-13T01:04:17.642279786Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Aug 13 01:04:17.642383 dockerd[1719]: time="2025-08-13T01:04:17.642357456Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
Aug 13 01:04:17.642474 dockerd[1719]: time="2025-08-13T01:04:17.642452896Z" level=info msg="Daemon has completed initialization"
Aug 13 01:04:17.666870 dockerd[1719]: time="2025-08-13T01:04:17.666363794Z" level=info msg="API listen on /run/docker.sock"
Aug 13 01:04:17.666480 systemd[1]: Started docker.service - Docker Application Container Engine.
Aug 13 01:04:18.202959 containerd[1485]: time="2025-08-13T01:04:18.202911236Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.11\""
Aug 13 01:04:19.085572 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1179527917.mount: Deactivated successfully.
Aug 13 01:04:20.466982 containerd[1485]: time="2025-08-13T01:04:20.465822054Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 01:04:20.466982 containerd[1485]: time="2025-08-13T01:04:20.466926273Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.11: active requests=0, bytes read=28077759"
Aug 13 01:04:20.467656 containerd[1485]: time="2025-08-13T01:04:20.467628263Z" level=info msg="ImageCreate event name:\"sha256:ea7fa3cfabed1b85e7de8e0a02356b6dcb7708442d6e4600d68abaebe1e9b1fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 01:04:20.470169 containerd[1485]: time="2025-08-13T01:04:20.470143132Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:a3d1c4440817725a1b503a7ccce94f3dce2b208ebf257b405dc2d97817df3dde\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 01:04:20.471503 containerd[1485]: time="2025-08-13T01:04:20.471475931Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.11\" with image id \"sha256:ea7fa3cfabed1b85e7de8e0a02356b6dcb7708442d6e4600d68abaebe1e9b1fc\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:a3d1c4440817725a1b503a7ccce94f3dce2b208ebf257b405dc2d97817df3dde\", size \"28074559\" in 2.268517435s"
Aug 13 01:04:20.471548 containerd[1485]: time="2025-08-13T01:04:20.471507211Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.11\" returns image reference \"sha256:ea7fa3cfabed1b85e7de8e0a02356b6dcb7708442d6e4600d68abaebe1e9b1fc\""
Aug 13 01:04:20.472243 containerd[1485]: time="2025-08-13T01:04:20.472201421Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.11\""
Aug 13 01:04:20.504646 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Aug 13 01:04:20.509912 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 13 01:04:20.659777 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 13 01:04:20.665903 (kubelet)[1967]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Aug 13 01:04:20.705667 kubelet[1967]: E0813 01:04:20.705545 1967 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 13 01:04:20.711520 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 13 01:04:20.712385 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 13 01:04:20.712942 systemd[1]: kubelet.service: Consumed 188ms CPU time, 108.7M memory peak.
Aug 13 01:04:21.982869 containerd[1485]: time="2025-08-13T01:04:21.982808585Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 01:04:21.983791 containerd[1485]: time="2025-08-13T01:04:21.983759115Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.11: active requests=0, bytes read=24713245"
Aug 13 01:04:21.984529 containerd[1485]: time="2025-08-13T01:04:21.984502834Z" level=info msg="ImageCreate event name:\"sha256:c057eceea4b436b01f9ce394734cfb06f13b2a3688c3983270e99743370b6051\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 01:04:21.986613 containerd[1485]: time="2025-08-13T01:04:21.986419713Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:0f19de157f3d251f5ddeb6e9d026895bc55cb02592874b326fa345c57e5e2848\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 01:04:21.987865 containerd[1485]: time="2025-08-13T01:04:21.987314133Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.11\" with image id \"sha256:c057eceea4b436b01f9ce394734cfb06f13b2a3688c3983270e99743370b6051\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:0f19de157f3d251f5ddeb6e9d026895bc55cb02592874b326fa345c57e5e2848\", size \"26315079\" in 1.515084842s"
Aug 13 01:04:21.987865 containerd[1485]: time="2025-08-13T01:04:21.987341273Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.11\" returns image reference \"sha256:c057eceea4b436b01f9ce394734cfb06f13b2a3688c3983270e99743370b6051\""
Aug 13 01:04:21.988084 containerd[1485]: time="2025-08-13T01:04:21.988056093Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.11\""
Aug 13 01:04:23.287171 containerd[1485]: time="2025-08-13T01:04:23.286241803Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 01:04:23.287171 containerd[1485]: time="2025-08-13T01:04:23.286897413Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.11: active requests=0, bytes read=18783700"
Aug 13 01:04:23.287619 containerd[1485]: time="2025-08-13T01:04:23.287561873Z" level=info msg="ImageCreate event name:\"sha256:64e6a0b453108c87da0bb61473b35fd54078119a09edc56a4c8cb31602437c58\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 01:04:23.289518 containerd[1485]: time="2025-08-13T01:04:23.289497502Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:1a9b59b3bfa6c1f1911f6f865a795620c461d079e413061bb71981cadd67f39d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 01:04:23.290370 containerd[1485]: time="2025-08-13T01:04:23.290344721Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.11\" with image id \"sha256:64e6a0b453108c87da0bb61473b35fd54078119a09edc56a4c8cb31602437c58\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:1a9b59b3bfa6c1f1911f6f865a795620c461d079e413061bb71981cadd67f39d\", size \"20385552\" in 1.302259128s"
Aug 13 01:04:23.290412 containerd[1485]: time="2025-08-13T01:04:23.290372751Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.11\" returns image reference \"sha256:64e6a0b453108c87da0bb61473b35fd54078119a09edc56a4c8cb31602437c58\""
Aug 13 01:04:23.290875 containerd[1485]: time="2025-08-13T01:04:23.290852511Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.11\""
Aug 13 01:04:24.526542 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3158423499.mount: Deactivated successfully.
Aug 13 01:04:24.824199 containerd[1485]: time="2025-08-13T01:04:24.823862284Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 01:04:24.825414 containerd[1485]: time="2025-08-13T01:04:24.825280414Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.11: active requests=0, bytes read=30383612"
Aug 13 01:04:24.825414 containerd[1485]: time="2025-08-13T01:04:24.825299394Z" level=info msg="ImageCreate event name:\"sha256:0cec28fd5c3c446ec52e2886ddea38bf7f7e17755aa5d0095d50d3df5914a8fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 01:04:24.827210 containerd[1485]: time="2025-08-13T01:04:24.827176923Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:a31da847792c5e7e92e91b78da1ad21d693e4b2b48d0e9f4610c8764dc2a5d79\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 01:04:24.827860 containerd[1485]: time="2025-08-13T01:04:24.827838952Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.11\" with image id \"sha256:0cec28fd5c3c446ec52e2886ddea38bf7f7e17755aa5d0095d50d3df5914a8fd\", repo tag \"registry.k8s.io/kube-proxy:v1.31.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:a31da847792c5e7e92e91b78da1ad21d693e4b2b48d0e9f4610c8764dc2a5d79\", size \"30382631\" in 1.536960271s"
Aug 13 01:04:24.827933 containerd[1485]: time="2025-08-13T01:04:24.827918522Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.11\" returns image reference \"sha256:0cec28fd5c3c446ec52e2886ddea38bf7f7e17755aa5d0095d50d3df5914a8fd\""
Aug 13 01:04:24.828605 containerd[1485]: time="2025-08-13T01:04:24.828554662Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Aug 13 01:04:25.529491 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount55121894.mount: Deactivated successfully.
Aug 13 01:04:26.209240 containerd[1485]: time="2025-08-13T01:04:26.209164862Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 01:04:26.210579 containerd[1485]: time="2025-08-13T01:04:26.210540861Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241"
Aug 13 01:04:26.211028 containerd[1485]: time="2025-08-13T01:04:26.210964861Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 01:04:26.213945 containerd[1485]: time="2025-08-13T01:04:26.213905859Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 01:04:26.215188 containerd[1485]: time="2025-08-13T01:04:26.215024449Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.386443297s"
Aug 13 01:04:26.215188 containerd[1485]: time="2025-08-13T01:04:26.215056139Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Aug 13 01:04:26.216259 containerd[1485]: time="2025-08-13T01:04:26.216222158Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Aug 13 01:04:26.909183 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2464398233.mount: Deactivated successfully.
Aug 13 01:04:26.914396 containerd[1485]: time="2025-08-13T01:04:26.913574499Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 01:04:26.914396 containerd[1485]: time="2025-08-13T01:04:26.914368509Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Aug 13 01:04:26.914583 containerd[1485]: time="2025-08-13T01:04:26.914551339Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 01:04:26.916569 containerd[1485]: time="2025-08-13T01:04:26.916513738Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 01:04:26.917940 containerd[1485]: time="2025-08-13T01:04:26.917901847Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 701.648049ms"
Aug 13 01:04:26.917940 containerd[1485]: time="2025-08-13T01:04:26.917929757Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Aug 13 01:04:26.918416 containerd[1485]: time="2025-08-13T01:04:26.918390897Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Aug 13 01:04:27.637043 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3846025314.mount: Deactivated successfully.
Aug 13 01:04:28.833180 containerd[1485]: time="2025-08-13T01:04:28.833106429Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 01:04:28.834307 containerd[1485]: time="2025-08-13T01:04:28.834261849Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780013"
Aug 13 01:04:28.835008 containerd[1485]: time="2025-08-13T01:04:28.834967668Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 01:04:28.838619 containerd[1485]: time="2025-08-13T01:04:28.837326647Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 01:04:28.838619 containerd[1485]: time="2025-08-13T01:04:28.838485667Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 1.92006671s"
Aug 13 01:04:28.838619 containerd[1485]: time="2025-08-13T01:04:28.838510467Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\""
Aug 13 01:04:30.704329 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 13 01:04:30.704474 systemd[1]: kubelet.service: Consumed 188ms CPU time, 108.7M memory peak.
Aug 13 01:04:30.712357 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 13 01:04:30.743823 systemd[1]: Reload requested from client PID 2127 ('systemctl') (unit session-7.scope)...
Aug 13 01:04:30.743843 systemd[1]: Reloading...
Aug 13 01:04:30.891615 zram_generator::config[2172]: No configuration found.
Aug 13 01:04:30.998622 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 13 01:04:31.089885 systemd[1]: Reloading finished in 345 ms.
Aug 13 01:04:31.134560 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 13 01:04:31.140135 (kubelet)[2216]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Aug 13 01:04:31.142312 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 13 01:04:31.143542 systemd[1]: kubelet.service: Deactivated successfully.
Aug 13 01:04:31.143849 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 13 01:04:31.143894 systemd[1]: kubelet.service: Consumed 138ms CPU time, 98.3M memory peak.
Aug 13 01:04:31.145715 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 13 01:04:31.308710 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 13 01:04:31.310681 (kubelet)[2229]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Aug 13 01:04:31.345334 kubelet[2229]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 13 01:04:31.345334 kubelet[2229]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Aug 13 01:04:31.345334 kubelet[2229]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 13 01:04:31.345633 kubelet[2229]: I0813 01:04:31.345373 2229 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Aug 13 01:04:31.569666 kubelet[2229]: I0813 01:04:31.569139 2229 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Aug 13 01:04:31.569666 kubelet[2229]: I0813 01:04:31.569166 2229 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Aug 13 01:04:31.569666 kubelet[2229]: I0813 01:04:31.569436 2229 server.go:934] "Client rotation is on, will bootstrap in background"
Aug 13 01:04:31.593277 kubelet[2229]: E0813 01:04:31.593258 2229 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.234.214.143:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.234.214.143:6443: connect: connection refused" logger="UnhandledError"
Aug 13 01:04:31.594293 kubelet[2229]: I0813 01:04:31.594256 2229 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Aug 13 01:04:31.599583 kubelet[2229]: E0813 01:04:31.599559 2229 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Aug 13 01:04:31.599583 kubelet[2229]: I0813 01:04:31.599579 2229 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Aug 13 01:04:31.604577 kubelet[2229]: I0813 01:04:31.604548 2229 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Aug 13 01:04:31.605377 kubelet[2229]: I0813 01:04:31.605350 2229 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Aug 13 01:04:31.605510 kubelet[2229]: I0813 01:04:31.605474 2229 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Aug 13 01:04:31.605649 kubelet[2229]: I0813 01:04:31.605501 2229 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-234-214-143","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Aug 13 01:04:31.605748 kubelet[2229]: I0813 01:04:31.605660 2229 topology_manager.go:138] "Creating topology manager with none policy"
Aug 13 01:04:31.605748 kubelet[2229]: I0813 01:04:31.605669 2229 container_manager_linux.go:300] "Creating device plugin manager"
Aug 13 01:04:31.605784 kubelet[2229]: I0813 01:04:31.605755 2229 state_mem.go:36] "Initialized new in-memory state store"
Aug 13 01:04:31.608856 kubelet[2229]: I0813 01:04:31.608655 2229 kubelet.go:408] "Attempting to sync node with API server"
Aug 13 01:04:31.608856 kubelet[2229]: I0813 01:04:31.608683 2229 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Aug 13 01:04:31.608856 kubelet[2229]: I0813 01:04:31.608711 2229 kubelet.go:314] "Adding apiserver pod source"
Aug 13 01:04:31.608856 kubelet[2229]: I0813 01:04:31.608729 2229 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Aug 13 01:04:31.614845 kubelet[2229]: W0813 01:04:31.614798 2229 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.234.214.143:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-234-214-143&limit=500&resourceVersion=0": dial tcp 172.234.214.143:6443: connect: connection refused
Aug 13 01:04:31.614845 kubelet[2229]: E0813 01:04:31.614842 2229 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.234.214.143:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-234-214-143&limit=500&resourceVersion=0\": dial tcp 172.234.214.143:6443: connect: connection refused" logger="UnhandledError"
Aug 13 01:04:31.616929 kubelet[2229]: W0813 01:04:31.616890 2229 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.234.214.143:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.234.214.143:6443: connect: connection refused
Aug 13 01:04:31.616983 kubelet[2229]: E0813 01:04:31.616935 2229 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.234.214.143:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.234.214.143:6443: connect: connection refused" logger="UnhandledError"
Aug 13 01:04:31.617009 kubelet[2229]: I0813 01:04:31.616999 2229 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Aug 13 01:04:31.618217 kubelet[2229]: I0813 01:04:31.617804 2229 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Aug 13 01:04:31.618363 kubelet[2229]: W0813 01:04:31.618345 2229 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Aug 13 01:04:31.622298 kubelet[2229]: I0813 01:04:31.622284 2229 server.go:1274] "Started kubelet"
Aug 13 01:04:31.623437 kubelet[2229]: I0813 01:04:31.623413 2229 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Aug 13 01:04:31.629632 kubelet[2229]: I0813 01:04:31.629376 2229 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Aug 13 01:04:31.630176 kubelet[2229]: I0813 01:04:31.630162 2229 server.go:449] "Adding debug handlers to kubelet server"
Aug 13 01:04:31.630670 kubelet[2229]: E0813 01:04:31.629706 2229 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.234.214.143:6443/api/v1/namespaces/default/events\": dial tcp 172.234.214.143:6443: connect: connection refused" event="&Event{ObjectMeta:{172-234-214-143.185b2e03542261f0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172-234-214-143,UID:172-234-214-143,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172-234-214-143,},FirstTimestamp:2025-08-13 01:04:31.622259184 +0000 UTC m=+0.308221016,LastTimestamp:2025-08-13 01:04:31.622259184 +0000 UTC m=+0.308221016,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172-234-214-143,}"
Aug 13 01:04:31.631154 kubelet[2229]: I0813 01:04:31.631127 2229 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Aug 13 01:04:31.631334 kubelet[2229]: I0813 01:04:31.631304 2229 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Aug 13 01:04:31.632153 kubelet[2229]: I0813 01:04:31.631451 2229 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Aug 13 01:04:31.632629 kubelet[2229]: I0813 01:04:31.632582 2229 volume_manager.go:289] "Starting Kubelet Volume Manager"
Aug 13 01:04:31.632873 kubelet[2229]: E0813 01:04:31.632859 2229 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172-234-214-143\" not found"
Aug 13 01:04:31.633246 kubelet[2229]: I0813 01:04:31.633224 2229 factory.go:221] Registration of the systemd container factory successfully
Aug 13 01:04:31.633319 kubelet[2229]: I0813 01:04:31.633299 2229 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Aug 13 01:04:31.634255 kubelet[2229]: E0813 01:04:31.634234 2229 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.234.214.143:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-234-214-143?timeout=10s\": dial tcp 172.234.214.143:6443: connect: connection refused" interval="200ms"
Aug 13 01:04:31.634628 kubelet[2229]: I0813 01:04:31.634330 2229 reconciler.go:26] "Reconciler: start to sync state"
Aug 13 01:04:31.634628 kubelet[2229]: I0813 01:04:31.634355 2229 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Aug 13 01:04:31.634628 kubelet[2229]: W0813 01:04:31.634552 2229 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.234.214.143:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.234.214.143:6443: connect: connection refused
Aug 13 01:04:31.634895 kubelet[2229]: E0813 01:04:31.634582 2229 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.234.214.143:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.234.214.143:6443: connect: connection refused" logger="UnhandledError"
Aug 13 01:04:31.635011 kubelet[2229]: I0813 01:04:31.634991 2229 factory.go:221] Registration of the containerd container factory successfully
Aug 13 01:04:31.640358 kubelet[2229]: E0813 01:04:31.640337 2229 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Aug 13 01:04:31.643794 kubelet[2229]: I0813 01:04:31.643770 2229 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Aug 13 01:04:31.645621 kubelet[2229]: I0813 01:04:31.644919 2229 kubelet_network_linux.go:50] "Initialized iptables rules."
protocol="IPv6" Aug 13 01:04:31.645621 kubelet[2229]: I0813 01:04:31.644935 2229 status_manager.go:217] "Starting to sync pod status with apiserver" Aug 13 01:04:31.645621 kubelet[2229]: I0813 01:04:31.644947 2229 kubelet.go:2321] "Starting kubelet main sync loop" Aug 13 01:04:31.645621 kubelet[2229]: E0813 01:04:31.644980 2229 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 01:04:31.652489 kubelet[2229]: W0813 01:04:31.652462 2229 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.234.214.143:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.234.214.143:6443: connect: connection refused Aug 13 01:04:31.652545 kubelet[2229]: E0813 01:04:31.652495 2229 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.234.214.143:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.234.214.143:6443: connect: connection refused" logger="UnhandledError" Aug 13 01:04:31.663894 kubelet[2229]: I0813 01:04:31.663881 2229 cpu_manager.go:214] "Starting CPU manager" policy="none" Aug 13 01:04:31.663894 kubelet[2229]: I0813 01:04:31.663892 2229 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Aug 13 01:04:31.663960 kubelet[2229]: I0813 01:04:31.663906 2229 state_mem.go:36] "Initialized new in-memory state store" Aug 13 01:04:31.665738 kubelet[2229]: I0813 01:04:31.665724 2229 policy_none.go:49] "None policy: Start" Aug 13 01:04:31.666550 kubelet[2229]: I0813 01:04:31.666531 2229 memory_manager.go:170] "Starting memorymanager" policy="None" Aug 13 01:04:31.666639 kubelet[2229]: I0813 01:04:31.666556 2229 state_mem.go:35] "Initializing new in-memory state store" Aug 13 01:04:31.672803 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Aug 13 01:04:31.684008 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Aug 13 01:04:31.687251 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Aug 13 01:04:31.695841 kubelet[2229]: I0813 01:04:31.695491 2229 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 13 01:04:31.695841 kubelet[2229]: I0813 01:04:31.695650 2229 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 01:04:31.695841 kubelet[2229]: I0813 01:04:31.695659 2229 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 01:04:31.695841 kubelet[2229]: I0813 01:04:31.695789 2229 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 01:04:31.699630 kubelet[2229]: E0813 01:04:31.699531 2229 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172-234-214-143\" not found" Aug 13 01:04:31.755492 systemd[1]: Created slice kubepods-burstable-podac1fcd0bf3b82a8b6660ddc6945eadb8.slice - libcontainer container kubepods-burstable-podac1fcd0bf3b82a8b6660ddc6945eadb8.slice. Aug 13 01:04:31.771169 systemd[1]: Created slice kubepods-burstable-pod7920843ffc9fb1fea7377c14037f5431.slice - libcontainer container kubepods-burstable-pod7920843ffc9fb1fea7377c14037f5431.slice. 
Aug 13 01:04:31.777919 systemd[1]: Created slice kubepods-burstable-pod32f00bc872b8c32f7a96994c9fea5737.slice - libcontainer container kubepods-burstable-pod32f00bc872b8c32f7a96994c9fea5737.slice. Aug 13 01:04:31.796900 kubelet[2229]: I0813 01:04:31.796876 2229 kubelet_node_status.go:72] "Attempting to register node" node="172-234-214-143" Aug 13 01:04:31.797130 kubelet[2229]: E0813 01:04:31.797112 2229 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.234.214.143:6443/api/v1/nodes\": dial tcp 172.234.214.143:6443: connect: connection refused" node="172-234-214-143" Aug 13 01:04:31.834783 kubelet[2229]: E0813 01:04:31.834698 2229 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.234.214.143:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-234-214-143?timeout=10s\": dial tcp 172.234.214.143:6443: connect: connection refused" interval="400ms" Aug 13 01:04:31.835783 kubelet[2229]: I0813 01:04:31.835748 2229 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/32f00bc872b8c32f7a96994c9fea5737-ca-certs\") pod \"kube-controller-manager-172-234-214-143\" (UID: \"32f00bc872b8c32f7a96994c9fea5737\") " pod="kube-system/kube-controller-manager-172-234-214-143" Aug 13 01:04:31.835783 kubelet[2229]: I0813 01:04:31.835770 2229 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/32f00bc872b8c32f7a96994c9fea5737-flexvolume-dir\") pod \"kube-controller-manager-172-234-214-143\" (UID: \"32f00bc872b8c32f7a96994c9fea5737\") " pod="kube-system/kube-controller-manager-172-234-214-143" Aug 13 01:04:31.835907 kubelet[2229]: I0813 01:04:31.835786 2229 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/32f00bc872b8c32f7a96994c9fea5737-k8s-certs\") pod \"kube-controller-manager-172-234-214-143\" (UID: \"32f00bc872b8c32f7a96994c9fea5737\") " pod="kube-system/kube-controller-manager-172-234-214-143" Aug 13 01:04:31.835907 kubelet[2229]: I0813 01:04:31.835799 2229 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ac1fcd0bf3b82a8b6660ddc6945eadb8-kubeconfig\") pod \"kube-scheduler-172-234-214-143\" (UID: \"ac1fcd0bf3b82a8b6660ddc6945eadb8\") " pod="kube-system/kube-scheduler-172-234-214-143" Aug 13 01:04:31.835907 kubelet[2229]: I0813 01:04:31.835810 2229 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7920843ffc9fb1fea7377c14037f5431-ca-certs\") pod \"kube-apiserver-172-234-214-143\" (UID: \"7920843ffc9fb1fea7377c14037f5431\") " pod="kube-system/kube-apiserver-172-234-214-143" Aug 13 01:04:31.835907 kubelet[2229]: I0813 01:04:31.835822 2229 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7920843ffc9fb1fea7377c14037f5431-k8s-certs\") pod \"kube-apiserver-172-234-214-143\" (UID: \"7920843ffc9fb1fea7377c14037f5431\") " pod="kube-system/kube-apiserver-172-234-214-143" Aug 13 01:04:31.835907 kubelet[2229]: I0813 01:04:31.835833 2229 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7920843ffc9fb1fea7377c14037f5431-usr-share-ca-certificates\") pod \"kube-apiserver-172-234-214-143\" (UID: \"7920843ffc9fb1fea7377c14037f5431\") " pod="kube-system/kube-apiserver-172-234-214-143" Aug 13 01:04:31.836173 kubelet[2229]: I0813 01:04:31.835845 2229 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/32f00bc872b8c32f7a96994c9fea5737-kubeconfig\") pod \"kube-controller-manager-172-234-214-143\" (UID: \"32f00bc872b8c32f7a96994c9fea5737\") " pod="kube-system/kube-controller-manager-172-234-214-143" Aug 13 01:04:31.836173 kubelet[2229]: I0813 01:04:31.835857 2229 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/32f00bc872b8c32f7a96994c9fea5737-usr-share-ca-certificates\") pod \"kube-controller-manager-172-234-214-143\" (UID: \"32f00bc872b8c32f7a96994c9fea5737\") " pod="kube-system/kube-controller-manager-172-234-214-143" Aug 13 01:04:31.999386 kubelet[2229]: I0813 01:04:31.999339 2229 kubelet_node_status.go:72] "Attempting to register node" node="172-234-214-143" Aug 13 01:04:31.999825 kubelet[2229]: E0813 01:04:31.999792 2229 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.234.214.143:6443/api/v1/nodes\": dial tcp 172.234.214.143:6443: connect: connection refused" node="172-234-214-143" Aug 13 01:04:32.069693 kubelet[2229]: E0813 01:04:32.069656 2229 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:04:32.070770 containerd[1485]: time="2025-08-13T01:04:32.070729490Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-234-214-143,Uid:ac1fcd0bf3b82a8b6660ddc6945eadb8,Namespace:kube-system,Attempt:0,}" Aug 13 01:04:32.075761 kubelet[2229]: E0813 01:04:32.075725 2229 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:04:32.076374 containerd[1485]: time="2025-08-13T01:04:32.076330597Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-234-214-143,Uid:7920843ffc9fb1fea7377c14037f5431,Namespace:kube-system,Attempt:0,}" Aug 13 01:04:32.081315 kubelet[2229]: E0813 01:04:32.081278 2229 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:04:32.081966 containerd[1485]: time="2025-08-13T01:04:32.081871934Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-234-214-143,Uid:32f00bc872b8c32f7a96994c9fea5737,Namespace:kube-system,Attempt:0,}" Aug 13 01:04:32.235968 kubelet[2229]: E0813 01:04:32.235769 2229 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.234.214.143:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-234-214-143?timeout=10s\": dial tcp 172.234.214.143:6443: connect: connection refused" interval="800ms" Aug 13 01:04:32.402503 kubelet[2229]: I0813 01:04:32.402466 2229 kubelet_node_status.go:72] "Attempting to register node" node="172-234-214-143" Aug 13 01:04:32.402945 kubelet[2229]: E0813 
01:04:32.402749 2229 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.234.214.143:6443/api/v1/nodes\": dial tcp 172.234.214.143:6443: connect: connection refused" node="172-234-214-143" Aug 13 01:04:32.592696 kubelet[2229]: W0813 01:04:32.592426 2229 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.234.214.143:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.234.214.143:6443: connect: connection refused Aug 13 01:04:32.592696 kubelet[2229]: E0813 01:04:32.592497 2229 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.234.214.143:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.234.214.143:6443: connect: connection refused" logger="UnhandledError" Aug 13 01:04:32.737800 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount902029368.mount: Deactivated successfully. Aug 13 01:04:32.743197 containerd[1485]: time="2025-08-13T01:04:32.743160424Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 01:04:32.744245 containerd[1485]: time="2025-08-13T01:04:32.744226873Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 01:04:32.745084 containerd[1485]: time="2025-08-13T01:04:32.745055843Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Aug 13 01:04:32.745655 containerd[1485]: time="2025-08-13T01:04:32.745622073Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Aug 13 01:04:32.749832 containerd[1485]: time="2025-08-13T01:04:32.749801950Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 01:04:32.751815 containerd[1485]: time="2025-08-13T01:04:32.751775019Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Aug 13 01:04:32.753008 containerd[1485]: time="2025-08-13T01:04:32.752988199Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 01:04:32.754860 containerd[1485]: time="2025-08-13T01:04:32.754434508Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 683.485618ms" Aug 13 01:04:32.756630 containerd[1485]: time="2025-08-13T01:04:32.756313007Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 01:04:32.756726 containerd[1485]: time="2025-08-13T01:04:32.756689617Z" level=info msg="Pulled 
image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 680.26765ms" Aug 13 01:04:32.784516 containerd[1485]: time="2025-08-13T01:04:32.783802663Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 701.813629ms" Aug 13 01:04:32.847303 containerd[1485]: time="2025-08-13T01:04:32.846688852Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 01:04:32.847303 containerd[1485]: time="2025-08-13T01:04:32.846754402Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 01:04:32.847303 containerd[1485]: time="2025-08-13T01:04:32.846767162Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 01:04:32.847303 containerd[1485]: time="2025-08-13T01:04:32.846995192Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 01:04:32.847681 containerd[1485]: time="2025-08-13T01:04:32.847454912Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 01:04:32.850656 containerd[1485]: time="2025-08-13T01:04:32.847036222Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 01:04:32.850656 containerd[1485]: time="2025-08-13T01:04:32.850481360Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 01:04:32.850656 containerd[1485]: time="2025-08-13T01:04:32.850554720Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 01:04:32.856616 containerd[1485]: time="2025-08-13T01:04:32.855726547Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 01:04:32.856616 containerd[1485]: time="2025-08-13T01:04:32.855806997Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 01:04:32.856616 containerd[1485]: time="2025-08-13T01:04:32.855860347Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 01:04:32.856616 containerd[1485]: time="2025-08-13T01:04:32.855989067Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 01:04:32.863264 kubelet[2229]: W0813 01:04:32.863210 2229 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.234.214.143:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-234-214-143&limit=500&resourceVersion=0": dial tcp 172.234.214.143:6443: connect: connection refused Aug 13 01:04:32.863375 kubelet[2229]: E0813 01:04:32.863355 2229 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.234.214.143:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-234-214-143&limit=500&resourceVersion=0\": dial tcp 172.234.214.143:6443: connect: connection refused" logger="UnhandledError" Aug 13 01:04:32.880741 systemd[1]: Started cri-containerd-1d1846c27a9554b2c176e8939b1e2c56b959609b4b064a124eda7ae877a0f421.scope - libcontainer container 1d1846c27a9554b2c176e8939b1e2c56b959609b4b064a124eda7ae877a0f421. Aug 13 01:04:32.882086 systemd[1]: Started cri-containerd-7757c6c4b9de913efaa6ffc08db9c80fd17612d4491907b60146f20829f13eaf.scope - libcontainer container 7757c6c4b9de913efaa6ffc08db9c80fd17612d4491907b60146f20829f13eaf. Aug 13 01:04:32.891754 systemd[1]: Started cri-containerd-bb76897242aa95abc0a75b32facdb89be0f93a2c42770300d17f77ad475fbbf7.scope - libcontainer container bb76897242aa95abc0a75b32facdb89be0f93a2c42770300d17f77ad475fbbf7. Aug 13 01:04:32.898960 kubelet[2229]: W0813 01:04:32.898901 2229 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.234.214.143:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.234.214.143:6443: connect: connection refused Aug 13 01:04:32.899279 kubelet[2229]: E0813 01:04:32.899245 2229 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.234.214.143:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.234.214.143:6443: connect: connection refused" logger="UnhandledError" Aug 13 01:04:32.941051 containerd[1485]: time="2025-08-13T01:04:32.941010185Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-234-214-143,Uid:7920843ffc9fb1fea7377c14037f5431,Namespace:kube-system,Attempt:0,} returns sandbox id \"1d1846c27a9554b2c176e8939b1e2c56b959609b4b064a124eda7ae877a0f421\"" Aug 13 01:04:32.944615 kubelet[2229]: E0813 01:04:32.943372 2229 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:04:32.947892 containerd[1485]: time="2025-08-13T01:04:32.947583732Z" level=info msg="CreateContainer within sandbox \"1d1846c27a9554b2c176e8939b1e2c56b959609b4b064a124eda7ae877a0f421\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Aug 13 01:04:32.964870 containerd[1485]: time="2025-08-13T01:04:32.964848343Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-234-214-143,Uid:ac1fcd0bf3b82a8b6660ddc6945eadb8,Namespace:kube-system,Attempt:0,} returns sandbox id \"7757c6c4b9de913efaa6ffc08db9c80fd17612d4491907b60146f20829f13eaf\"" Aug 13 01:04:32.965378 kubelet[2229]: E0813 01:04:32.965358 2229 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, 
the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:04:32.967063 containerd[1485]: time="2025-08-13T01:04:32.967017762Z" level=info msg="CreateContainer within sandbox \"7757c6c4b9de913efaa6ffc08db9c80fd17612d4491907b60146f20829f13eaf\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Aug 13 01:04:32.968005 containerd[1485]: time="2025-08-13T01:04:32.967984091Z" level=info msg="CreateContainer within sandbox \"1d1846c27a9554b2c176e8939b1e2c56b959609b4b064a124eda7ae877a0f421\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"9bf1262acb374ce3640727b4889e52912bbcfa956d2756b72337d05c46c8966c\"" Aug 13 01:04:32.969052 containerd[1485]: time="2025-08-13T01:04:32.969033201Z" level=info msg="StartContainer for \"9bf1262acb374ce3640727b4889e52912bbcfa956d2756b72337d05c46c8966c\"" Aug 13 01:04:32.995412 containerd[1485]: time="2025-08-13T01:04:32.995391508Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-234-214-143,Uid:32f00bc872b8c32f7a96994c9fea5737,Namespace:kube-system,Attempt:0,} returns sandbox id \"bb76897242aa95abc0a75b32facdb89be0f93a2c42770300d17f77ad475fbbf7\"" Aug 13 01:04:32.996736 kubelet[2229]: E0813 01:04:32.996566 2229 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:04:32.997110 systemd[1]: Started cri-containerd-9bf1262acb374ce3640727b4889e52912bbcfa956d2756b72337d05c46c8966c.scope - libcontainer container 9bf1262acb374ce3640727b4889e52912bbcfa956d2756b72337d05c46c8966c. Aug 13 01:04:32.998850 containerd[1485]: time="2025-08-13T01:04:32.998769036Z" level=info msg="CreateContainer within sandbox \"7757c6c4b9de913efaa6ffc08db9c80fd17612d4491907b60146f20829f13eaf\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"ab2e12f245a3fcfcd22af3edd4582ad6809921cc97a9aca2471778980d31437c\"" Aug 13 01:04:32.999287 containerd[1485]: time="2025-08-13T01:04:32.999268286Z" level=info msg="StartContainer for \"ab2e12f245a3fcfcd22af3edd4582ad6809921cc97a9aca2471778980d31437c\"" Aug 13 01:04:32.999993 containerd[1485]: time="2025-08-13T01:04:32.999898425Z" level=info msg="CreateContainer within sandbox \"bb76897242aa95abc0a75b32facdb89be0f93a2c42770300d17f77ad475fbbf7\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Aug 13 01:04:33.014191 containerd[1485]: time="2025-08-13T01:04:33.014167258Z" level=info msg="CreateContainer within sandbox \"bb76897242aa95abc0a75b32facdb89be0f93a2c42770300d17f77ad475fbbf7\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"0292467a476d4f37427aaaecc8fe5063dbfc256cc1cba94459523b3ecc7a2508\"" Aug 13 01:04:33.015970 containerd[1485]: time="2025-08-13T01:04:33.015007078Z" level=info msg="StartContainer for \"0292467a476d4f37427aaaecc8fe5063dbfc256cc1cba94459523b3ecc7a2508\"" Aug 13 01:04:33.036698 kubelet[2229]: E0813 01:04:33.036667 2229 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.234.214.143:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-234-214-143?timeout=10s\": dial tcp 172.234.214.143:6443: connect: connection refused" interval="1.6s" Aug 13 01:04:33.044836 systemd[1]: Started cri-containerd-ab2e12f245a3fcfcd22af3edd4582ad6809921cc97a9aca2471778980d31437c.scope - libcontainer container ab2e12f245a3fcfcd22af3edd4582ad6809921cc97a9aca2471778980d31437c. 
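
[annotation] The RunPodSandbox / CreateContainer / StartContainer entries above and below trace the standard CRI sequence between kubelet and containerd: first a pause sandbox, then the real container inside it, then a start. A hedged Go sketch of the same three calls against containerd's CRI socket using k8s.io/cri-api; the socket path is the conventional default and the scheduler image tag is an assumption for illustration (the log only names the component, not its image):

```go
package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx := context.Background()
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	rt := runtimeapi.NewRuntimeServiceClient(conn)

	// 1. RunPodSandbox: the pause sandbox the log reports for kube-scheduler.
	sandboxCfg := &runtimeapi.PodSandboxConfig{
		Metadata: &runtimeapi.PodSandboxMetadata{
			Name:      "kube-scheduler-172-234-214-143",
			Uid:       "ac1fcd0bf3b82a8b6660ddc6945eadb8",
			Namespace: "kube-system",
			Attempt:   0,
		},
	}
	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
	if err != nil {
		log.Fatal(err)
	}

	// 2. CreateContainer inside that sandbox (image tag assumed, not in the log).
	ctr, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sb.PodSandboxId,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "kube-scheduler"},
			Image:    &runtimeapi.ImageSpec{Image: "registry.k8s.io/kube-scheduler:v1.31.8"},
		},
		SandboxConfig: sandboxCfg,
	})
	if err != nil {
		log.Fatal(err)
	}

	// 3. StartContainer, matching the "StartContainer ... returns successfully" lines.
	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: ctr.ContainerId}); err != nil {
		log.Fatal(err)
	}
}
```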
Aug 13 01:04:33.058707 systemd[1]: Started cri-containerd-0292467a476d4f37427aaaecc8fe5063dbfc256cc1cba94459523b3ecc7a2508.scope - libcontainer container 0292467a476d4f37427aaaecc8fe5063dbfc256cc1cba94459523b3ecc7a2508. Aug 13 01:04:33.064927 containerd[1485]: time="2025-08-13T01:04:33.064883573Z" level=info msg="StartContainer for \"9bf1262acb374ce3640727b4889e52912bbcfa956d2756b72337d05c46c8966c\" returns successfully" Aug 13 01:04:33.109364 containerd[1485]: time="2025-08-13T01:04:33.108899611Z" level=info msg="StartContainer for \"ab2e12f245a3fcfcd22af3edd4582ad6809921cc97a9aca2471778980d31437c\" returns successfully" Aug 13 01:04:33.184237 containerd[1485]: time="2025-08-13T01:04:33.184140913Z" level=info msg="StartContainer for \"0292467a476d4f37427aaaecc8fe5063dbfc256cc1cba94459523b3ecc7a2508\" returns successfully" Aug 13 01:04:33.205423 kubelet[2229]: I0813 01:04:33.205181 2229 kubelet_node_status.go:72] "Attempting to register node" node="172-234-214-143" Aug 13 01:04:33.668803 kubelet[2229]: E0813 01:04:33.668728 2229 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:04:33.669811 kubelet[2229]: E0813 01:04:33.669785 2229 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:04:33.676235 kubelet[2229]: E0813 01:04:33.676207 2229 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:04:34.529754 kubelet[2229]: I0813 01:04:34.529708 2229 kubelet_node_status.go:75] "Successfully registered node" node="172-234-214-143" Aug 13 01:04:34.529754 kubelet[2229]: E0813 01:04:34.529744 2229 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"172-234-214-143\": node \"172-234-214-143\" not found" Aug 13 01:04:34.552455 kubelet[2229]: E0813 01:04:34.552410 2229 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172-234-214-143\" not found" Aug 13 01:04:34.653354 kubelet[2229]: E0813 01:04:34.653307 2229 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172-234-214-143\" not found" Aug 13 01:04:34.678581 kubelet[2229]: E0813 01:04:34.678512 2229 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:04:34.754022 kubelet[2229]: E0813 01:04:34.753997 2229 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172-234-214-143\" not found" Aug 13 01:04:34.854679 kubelet[2229]: E0813 01:04:34.854536 2229 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172-234-214-143\" not found" Aug 13 01:04:34.954709 kubelet[2229]: E0813 01:04:34.954647 2229 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172-234-214-143\" not found" Aug 13 01:04:35.055330 kubelet[2229]: E0813 01:04:35.055289 2229 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172-234-214-143\" not found" Aug 13 01:04:35.156405 kubelet[2229]: E0813 01:04:35.156265 2229 kubelet_node_status.go:453] "Error getting the current node from 
lister" err="node \"172-234-214-143\" not found" Aug 13 01:04:35.257228 kubelet[2229]: E0813 01:04:35.257185 2229 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172-234-214-143\" not found" Aug 13 01:04:35.357557 kubelet[2229]: E0813 01:04:35.357523 2229 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172-234-214-143\" not found" Aug 13 01:04:35.458372 kubelet[2229]: E0813 01:04:35.458094 2229 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172-234-214-143\" not found" Aug 13 01:04:35.558555 kubelet[2229]: E0813 01:04:35.558513 2229 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172-234-214-143\" not found" Aug 13 01:04:35.658806 kubelet[2229]: E0813 01:04:35.658753 2229 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172-234-214-143\" not found" Aug 13 01:04:35.760050 kubelet[2229]: E0813 01:04:35.759917 2229 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172-234-214-143\" not found" Aug 13 01:04:35.860740 kubelet[2229]: E0813 01:04:35.860684 2229 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172-234-214-143\" not found" Aug 13 01:04:35.961249 kubelet[2229]: E0813 01:04:35.961192 2229 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172-234-214-143\" not found" Aug 13 01:04:36.261640 systemd[1]: Reload requested from client PID 2503 ('systemctl') (unit session-7.scope)... Aug 13 01:04:36.261660 systemd[1]: Reloading... Aug 13 01:04:36.399551 zram_generator::config[2548]: No configuration found. Aug 13 01:04:36.521633 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 01:04:36.618063 kubelet[2229]: I0813 01:04:36.618018 2229 apiserver.go:52] "Watching apiserver" Aug 13 01:04:36.626367 systemd[1]: Reloading finished in 364 ms. Aug 13 01:04:36.634717 kubelet[2229]: I0813 01:04:36.634586 2229 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Aug 13 01:04:36.660263 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 01:04:36.680564 systemd[1]: kubelet.service: Deactivated successfully. Aug 13 01:04:36.680947 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 01:04:36.681037 systemd[1]: kubelet.service: Consumed 665ms CPU time, 131M memory peak. Aug 13 01:04:36.686796 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 01:04:36.868855 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 01:04:36.873480 (kubelet)[2599]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 13 01:04:36.934238 kubelet[2599]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 01:04:36.934572 kubelet[2599]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Aug 13 01:04:36.934657 kubelet[2599]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 01:04:36.934854 kubelet[2599]: I0813 01:04:36.934815 2599 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 01:04:36.943303 kubelet[2599]: I0813 01:04:36.943253 2599 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Aug 13 01:04:36.943303 kubelet[2599]: I0813 01:04:36.943292 2599 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 01:04:36.943558 kubelet[2599]: I0813 01:04:36.943528 2599 server.go:934] "Client rotation is on, will bootstrap in background" Aug 13 01:04:36.944868 kubelet[2599]: I0813 01:04:36.944830 2599 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Aug 13 01:04:36.947210 kubelet[2599]: I0813 01:04:36.947076 2599 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 01:04:36.950849 kubelet[2599]: E0813 01:04:36.950813 2599 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Aug 13 01:04:36.950849 kubelet[2599]: I0813 01:04:36.950843 2599 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Aug 13 01:04:36.955915 kubelet[2599]: I0813 01:04:36.955300 2599 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 13 01:04:36.955915 kubelet[2599]: I0813 01:04:36.955436 2599 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Aug 13 01:04:36.955915 kubelet[2599]: I0813 01:04:36.955559 2599 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 01:04:36.955915 kubelet[2599]: I0813 01:04:36.955586 2599 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-234-214-143","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Aug 13 01:04:36.956081 kubelet[2599]: I0813 01:04:36.955878 2599 topology_manager.go:138] "Creating topology manager with none policy" Aug 13 01:04:36.956081 kubelet[2599]: I0813 01:04:36.955890 2599 container_manager_linux.go:300] "Creating device plugin manager" Aug 13 01:04:36.956301 kubelet[2599]: I0813 01:04:36.956276 2599 state_mem.go:36] "Initialized new in-memory state store" Aug 13 01:04:36.956703 kubelet[2599]: I0813 01:04:36.956677 2599 kubelet.go:408] "Attempting to sync node with API server" Aug 13 01:04:36.956703 kubelet[2599]: I0813 01:04:36.956704 2599 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 01:04:36.956827 kubelet[2599]: I0813 01:04:36.956741 2599 kubelet.go:314] "Adding apiserver pod source" Aug 13 01:04:36.956827 kubelet[2599]: I0813 01:04:36.956753 2599 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 01:04:36.959779 kubelet[2599]: I0813 01:04:36.959734 2599 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Aug 13 01:04:36.960120 kubelet[2599]: I0813 01:04:36.960095 2599 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 13 01:04:36.960556 kubelet[2599]: I0813 01:04:36.960531 2599 server.go:1274] "Started kubelet" Aug 13 01:04:36.963976 kubelet[2599]: I0813 01:04:36.963935 2599 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 01:04:36.973658 kubelet[2599]: I0813 
01:04:36.973350 2599 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 01:04:36.974634 kubelet[2599]: I0813 01:04:36.974579 2599 server.go:449] "Adding debug handlers to kubelet server" Aug 13 01:04:36.975554 kubelet[2599]: I0813 01:04:36.975512 2599 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 01:04:36.975953 kubelet[2599]: I0813 01:04:36.975804 2599 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 01:04:36.976058 kubelet[2599]: I0813 01:04:36.976032 2599 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 01:04:36.977602 kubelet[2599]: I0813 01:04:36.977525 2599 volume_manager.go:289] "Starting Kubelet Volume Manager" Aug 13 01:04:36.977817 kubelet[2599]: E0813 01:04:36.977748 2599 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172-234-214-143\" not found" Aug 13 01:04:36.979796 kubelet[2599]: I0813 01:04:36.979468 2599 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Aug 13 01:04:36.979796 kubelet[2599]: I0813 01:04:36.979630 2599 reconciler.go:26] "Reconciler: start to sync state" Aug 13 01:04:36.985245 kubelet[2599]: I0813 01:04:36.984677 2599 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 13 01:04:36.986421 kubelet[2599]: I0813 01:04:36.985989 2599 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Aug 13 01:04:36.986421 kubelet[2599]: I0813 01:04:36.986022 2599 status_manager.go:217] "Starting to sync pod status with apiserver" Aug 13 01:04:36.986421 kubelet[2599]: I0813 01:04:36.986042 2599 kubelet.go:2321] "Starting kubelet main sync loop" Aug 13 01:04:36.986421 kubelet[2599]: E0813 01:04:36.986089 2599 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 01:04:36.987806 kubelet[2599]: I0813 01:04:36.987503 2599 factory.go:221] Registration of the systemd container factory successfully Aug 13 01:04:36.987924 kubelet[2599]: I0813 01:04:36.987903 2599 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 01:04:36.990124 kubelet[2599]: I0813 01:04:36.990108 2599 factory.go:221] Registration of the containerd container factory successfully Aug 13 01:04:37.057910 kubelet[2599]: I0813 01:04:37.057878 2599 cpu_manager.go:214] "Starting CPU manager" policy="none" Aug 13 01:04:37.057910 kubelet[2599]: I0813 01:04:37.057899 2599 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Aug 13 01:04:37.058174 kubelet[2599]: I0813 01:04:37.058115 2599 state_mem.go:36] "Initialized new in-memory state store" Aug 13 01:04:37.058319 kubelet[2599]: I0813 01:04:37.058294 2599 state_mem.go:88] "Updated default CPUSet" cpuSet="" Aug 13 01:04:37.058345 kubelet[2599]: I0813 01:04:37.058313 2599 state_mem.go:96] "Updated CPUSet assignments" assignments={} Aug 13 01:04:37.058345 kubelet[2599]: I0813 01:04:37.058340 2599 policy_none.go:49] "None policy: Start" Aug 13 01:04:37.059636 kubelet[2599]: I0813 01:04:37.059330 2599 memory_manager.go:170] "Starting memorymanager" policy="None" Aug 13 01:04:37.059636 kubelet[2599]: I0813 01:04:37.059355 2599 
state_mem.go:35] "Initializing new in-memory state store" Aug 13 01:04:37.059636 kubelet[2599]: I0813 01:04:37.059523 2599 state_mem.go:75] "Updated machine memory state" Aug 13 01:04:37.065371 kubelet[2599]: I0813 01:04:37.064941 2599 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 13 01:04:37.065371 kubelet[2599]: I0813 01:04:37.065102 2599 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 01:04:37.065371 kubelet[2599]: I0813 01:04:37.065116 2599 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 01:04:37.065371 kubelet[2599]: I0813 01:04:37.065291 2599 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 01:04:37.170873 kubelet[2599]: I0813 01:04:37.169642 2599 kubelet_node_status.go:72] "Attempting to register node" node="172-234-214-143" Aug 13 01:04:37.177494 kubelet[2599]: I0813 01:04:37.177470 2599 kubelet_node_status.go:111] "Node was previously registered" node="172-234-214-143" Aug 13 01:04:37.177692 kubelet[2599]: I0813 01:04:37.177662 2599 kubelet_node_status.go:75] "Successfully registered node" node="172-234-214-143" Aug 13 01:04:37.180500 kubelet[2599]: I0813 01:04:37.179844 2599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/32f00bc872b8c32f7a96994c9fea5737-flexvolume-dir\") pod \"kube-controller-manager-172-234-214-143\" (UID: \"32f00bc872b8c32f7a96994c9fea5737\") " pod="kube-system/kube-controller-manager-172-234-214-143" Aug 13 01:04:37.180500 kubelet[2599]: I0813 01:04:37.179879 2599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/32f00bc872b8c32f7a96994c9fea5737-k8s-certs\") pod \"kube-controller-manager-172-234-214-143\" (UID: \"32f00bc872b8c32f7a96994c9fea5737\") " pod="kube-system/kube-controller-manager-172-234-214-143" Aug 13 01:04:37.180500 kubelet[2599]: I0813 01:04:37.179918 2599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/32f00bc872b8c32f7a96994c9fea5737-kubeconfig\") pod \"kube-controller-manager-172-234-214-143\" (UID: \"32f00bc872b8c32f7a96994c9fea5737\") " pod="kube-system/kube-controller-manager-172-234-214-143" Aug 13 01:04:37.180500 kubelet[2599]: I0813 01:04:37.180134 2599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ac1fcd0bf3b82a8b6660ddc6945eadb8-kubeconfig\") pod \"kube-scheduler-172-234-214-143\" (UID: \"ac1fcd0bf3b82a8b6660ddc6945eadb8\") " pod="kube-system/kube-scheduler-172-234-214-143" Aug 13 01:04:37.180500 kubelet[2599]: I0813 01:04:37.180154 2599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7920843ffc9fb1fea7377c14037f5431-ca-certs\") pod \"kube-apiserver-172-234-214-143\" (UID: \"7920843ffc9fb1fea7377c14037f5431\") " pod="kube-system/kube-apiserver-172-234-214-143" Aug 13 01:04:37.180690 kubelet[2599]: I0813 01:04:37.180169 2599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7920843ffc9fb1fea7377c14037f5431-k8s-certs\") pod \"kube-apiserver-172-234-214-143\" 
(UID: \"7920843ffc9fb1fea7377c14037f5431\") " pod="kube-system/kube-apiserver-172-234-214-143" Aug 13 01:04:37.180690 kubelet[2599]: I0813 01:04:37.180188 2599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7920843ffc9fb1fea7377c14037f5431-usr-share-ca-certificates\") pod \"kube-apiserver-172-234-214-143\" (UID: \"7920843ffc9fb1fea7377c14037f5431\") " pod="kube-system/kube-apiserver-172-234-214-143" Aug 13 01:04:37.180690 kubelet[2599]: I0813 01:04:37.180209 2599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/32f00bc872b8c32f7a96994c9fea5737-ca-certs\") pod \"kube-controller-manager-172-234-214-143\" (UID: \"32f00bc872b8c32f7a96994c9fea5737\") " pod="kube-system/kube-controller-manager-172-234-214-143" Aug 13 01:04:37.180690 kubelet[2599]: I0813 01:04:37.180238 2599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/32f00bc872b8c32f7a96994c9fea5737-usr-share-ca-certificates\") pod \"kube-controller-manager-172-234-214-143\" (UID: \"32f00bc872b8c32f7a96994c9fea5737\") " pod="kube-system/kube-controller-manager-172-234-214-143" Aug 13 01:04:37.260472 sudo[2634]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Aug 13 01:04:37.260939 sudo[2634]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Aug 13 01:04:37.395223 kubelet[2599]: E0813 01:04:37.395166 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:04:37.396449 kubelet[2599]: E0813 01:04:37.395885 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:04:37.396449 kubelet[2599]: E0813 01:04:37.396013 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:04:37.755273 sudo[2634]: pam_unix(sudo:session): session closed for user root Aug 13 01:04:37.962033 kubelet[2599]: I0813 01:04:37.961999 2599 apiserver.go:52] "Watching apiserver" Aug 13 01:04:37.980533 kubelet[2599]: I0813 01:04:37.980470 2599 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Aug 13 01:04:38.032548 kubelet[2599]: E0813 01:04:38.030562 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:04:38.033203 kubelet[2599]: E0813 01:04:38.032827 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:04:38.038438 kubelet[2599]: E0813 01:04:38.038422 2599 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-172-234-214-143\" already exists" pod="kube-system/kube-apiserver-172-234-214-143" Aug 13 01:04:38.038979 kubelet[2599]: E0813 01:04:38.038959 2599 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:04:38.056574 kubelet[2599]: I0813 01:04:38.056384 2599 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-172-234-214-143" podStartSLOduration=1.056346667 podStartE2EDuration="1.056346667s" podCreationTimestamp="2025-08-13 01:04:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 01:04:38.054920667 +0000 UTC m=+1.176400722" watchObservedRunningTime="2025-08-13 01:04:38.056346667 +0000 UTC m=+1.177826722" Aug 13 01:04:38.071710 kubelet[2599]: I0813 01:04:38.071162 2599 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-172-234-214-143" podStartSLOduration=1.071152499 podStartE2EDuration="1.071152499s" podCreationTimestamp="2025-08-13 01:04:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 01:04:38.064358903 +0000 UTC m=+1.185838958" watchObservedRunningTime="2025-08-13 01:04:38.071152499 +0000 UTC m=+1.192632564" Aug 13 01:04:38.079160 kubelet[2599]: I0813 01:04:38.078692 2599 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-172-234-214-143" podStartSLOduration=1.078683905 podStartE2EDuration="1.078683905s" podCreationTimestamp="2025-08-13 01:04:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 01:04:38.071215369 +0000 UTC m=+1.192695424" watchObservedRunningTime="2025-08-13 01:04:38.078683905 +0000 UTC m=+1.200163970" Aug 13 01:04:38.523985 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Aug 13 01:04:39.014715 sudo[1701]: pam_unix(sudo:session): session closed for user root Aug 13 01:04:39.031674 kubelet[2599]: E0813 01:04:39.031647 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:04:39.032134 kubelet[2599]: E0813 01:04:39.032117 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:04:39.066428 sshd[1700]: Connection closed by 139.178.89.65 port 41316 Aug 13 01:04:39.066968 sshd-session[1698]: pam_unix(sshd:session): session closed for user core Aug 13 01:04:39.071320 systemd[1]: sshd@6-172.234.214.143:22-139.178.89.65:41316.service: Deactivated successfully. Aug 13 01:04:39.073902 systemd[1]: session-7.scope: Deactivated successfully. Aug 13 01:04:39.074289 systemd[1]: session-7.scope: Consumed 3.705s CPU time, 257.9M memory peak. Aug 13 01:04:39.075654 systemd-logind[1460]: Session 7 logged out. Waiting for processes to exit. Aug 13 01:04:39.076514 systemd-logind[1460]: Removed session 7. 
Aug 13 01:04:42.442525 kubelet[2599]: E0813 01:04:42.442477 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:04:43.209251 kubelet[2599]: I0813 01:04:43.209193 2599 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Aug 13 01:04:43.209567 containerd[1485]: time="2025-08-13T01:04:43.209495989Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Aug 13 01:04:43.210628 kubelet[2599]: I0813 01:04:43.209651 2599 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Aug 13 01:04:44.180794 systemd[1]: Created slice kubepods-besteffort-pod59493087_f0d7_4953_bafd_458c2e8eb23e.slice - libcontainer container kubepods-besteffort-pod59493087_f0d7_4953_bafd_458c2e8eb23e.slice. Aug 13 01:04:44.196967 systemd[1]: Created slice kubepods-burstable-pod0619f9c5_cbd0_435a_928c_03a8fb6c230e.slice - libcontainer container kubepods-burstable-pod0619f9c5_cbd0_435a_928c_03a8fb6c230e.slice. Aug 13 01:04:44.227862 kubelet[2599]: I0813 01:04:44.227804 2599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/59493087-f0d7-4953-bafd-458c2e8eb23e-kube-proxy\") pod \"kube-proxy-qzrh6\" (UID: \"59493087-f0d7-4953-bafd-458c2e8eb23e\") " pod="kube-system/kube-proxy-qzrh6" Aug 13 01:04:44.227862 kubelet[2599]: I0813 01:04:44.227833 2599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0619f9c5-cbd0-435a-928c-03a8fb6c230e-cni-path\") pod \"cilium-tp96b\" (UID: \"0619f9c5-cbd0-435a-928c-03a8fb6c230e\") " pod="kube-system/cilium-tp96b" Aug 13 01:04:44.227862 kubelet[2599]: I0813 01:04:44.227851 2599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0619f9c5-cbd0-435a-928c-03a8fb6c230e-etc-cni-netd\") pod \"cilium-tp96b\" (UID: \"0619f9c5-cbd0-435a-928c-03a8fb6c230e\") " pod="kube-system/cilium-tp96b" Aug 13 01:04:44.227862 kubelet[2599]: I0813 01:04:44.227866 2599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0619f9c5-cbd0-435a-928c-03a8fb6c230e-cilium-config-path\") pod \"cilium-tp96b\" (UID: \"0619f9c5-cbd0-435a-928c-03a8fb6c230e\") " pod="kube-system/cilium-tp96b" Aug 13 01:04:44.228564 kubelet[2599]: I0813 01:04:44.227879 2599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0619f9c5-cbd0-435a-928c-03a8fb6c230e-cilium-run\") pod \"cilium-tp96b\" (UID: \"0619f9c5-cbd0-435a-928c-03a8fb6c230e\") " pod="kube-system/cilium-tp96b" Aug 13 01:04:44.228564 kubelet[2599]: I0813 01:04:44.227894 2599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0619f9c5-cbd0-435a-928c-03a8fb6c230e-host-proc-sys-net\") pod \"cilium-tp96b\" (UID: \"0619f9c5-cbd0-435a-928c-03a8fb6c230e\") " pod="kube-system/cilium-tp96b" Aug 13 01:04:44.228564 kubelet[2599]: I0813 01:04:44.227907 2599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0619f9c5-cbd0-435a-928c-03a8fb6c230e-host-proc-sys-kernel\") pod \"cilium-tp96b\" (UID: \"0619f9c5-cbd0-435a-928c-03a8fb6c230e\") " pod="kube-system/cilium-tp96b" Aug 13 01:04:44.228564 kubelet[2599]: I0813 01:04:44.227920 2599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrwmn\" (UniqueName: \"kubernetes.io/projected/0619f9c5-cbd0-435a-928c-03a8fb6c230e-kube-api-access-rrwmn\") pod \"cilium-tp96b\" (UID: \"0619f9c5-cbd0-435a-928c-03a8fb6c230e\") " pod="kube-system/cilium-tp96b" Aug 13 01:04:44.228564 kubelet[2599]: I0813 01:04:44.227935 2599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhvx5\" (UniqueName: \"kubernetes.io/projected/59493087-f0d7-4953-bafd-458c2e8eb23e-kube-api-access-bhvx5\") pod \"kube-proxy-qzrh6\" (UID: \"59493087-f0d7-4953-bafd-458c2e8eb23e\") " pod="kube-system/kube-proxy-qzrh6" Aug 13 01:04:44.228733 kubelet[2599]: I0813 01:04:44.227948 2599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0619f9c5-cbd0-435a-928c-03a8fb6c230e-bpf-maps\") pod \"cilium-tp96b\" (UID: \"0619f9c5-cbd0-435a-928c-03a8fb6c230e\") " pod="kube-system/cilium-tp96b" Aug 13 01:04:44.228733 kubelet[2599]: I0813 01:04:44.227961 2599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0619f9c5-cbd0-435a-928c-03a8fb6c230e-hostproc\") pod \"cilium-tp96b\" (UID: \"0619f9c5-cbd0-435a-928c-03a8fb6c230e\") " pod="kube-system/cilium-tp96b" Aug 13 01:04:44.228733 kubelet[2599]: I0813 01:04:44.227974 2599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0619f9c5-cbd0-435a-928c-03a8fb6c230e-cilium-cgroup\") pod \"cilium-tp96b\" (UID: \"0619f9c5-cbd0-435a-928c-03a8fb6c230e\") " pod="kube-system/cilium-tp96b" Aug 13 01:04:44.228733 kubelet[2599]: I0813 01:04:44.227987 2599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0619f9c5-cbd0-435a-928c-03a8fb6c230e-clustermesh-secrets\") pod \"cilium-tp96b\" (UID: \"0619f9c5-cbd0-435a-928c-03a8fb6c230e\") " pod="kube-system/cilium-tp96b" Aug 13 01:04:44.228733 kubelet[2599]: I0813 01:04:44.228001 2599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0619f9c5-cbd0-435a-928c-03a8fb6c230e-hubble-tls\") pod \"cilium-tp96b\" (UID: \"0619f9c5-cbd0-435a-928c-03a8fb6c230e\") " pod="kube-system/cilium-tp96b" Aug 13 01:04:44.228733 kubelet[2599]: I0813 01:04:44.228014 2599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0619f9c5-cbd0-435a-928c-03a8fb6c230e-lib-modules\") pod \"cilium-tp96b\" (UID: \"0619f9c5-cbd0-435a-928c-03a8fb6c230e\") " pod="kube-system/cilium-tp96b" Aug 13 01:04:44.228857 kubelet[2599]: I0813 01:04:44.228028 2599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/59493087-f0d7-4953-bafd-458c2e8eb23e-xtables-lock\") pod \"kube-proxy-qzrh6\" (UID: 
\"59493087-f0d7-4953-bafd-458c2e8eb23e\") " pod="kube-system/kube-proxy-qzrh6" Aug 13 01:04:44.228857 kubelet[2599]: I0813 01:04:44.228040 2599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/59493087-f0d7-4953-bafd-458c2e8eb23e-lib-modules\") pod \"kube-proxy-qzrh6\" (UID: \"59493087-f0d7-4953-bafd-458c2e8eb23e\") " pod="kube-system/kube-proxy-qzrh6" Aug 13 01:04:44.228857 kubelet[2599]: I0813 01:04:44.228054 2599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0619f9c5-cbd0-435a-928c-03a8fb6c230e-xtables-lock\") pod \"cilium-tp96b\" (UID: \"0619f9c5-cbd0-435a-928c-03a8fb6c230e\") " pod="kube-system/cilium-tp96b" Aug 13 01:04:44.267046 systemd[1]: Created slice kubepods-besteffort-pod48b2e51e_7897_4b6b_9af1_5b6cd55d1227.slice - libcontainer container kubepods-besteffort-pod48b2e51e_7897_4b6b_9af1_5b6cd55d1227.slice. Aug 13 01:04:44.328933 kubelet[2599]: I0813 01:04:44.328905 2599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/48b2e51e-7897-4b6b-9af1-5b6cd55d1227-cilium-config-path\") pod \"cilium-operator-5d85765b45-x724r\" (UID: \"48b2e51e-7897-4b6b-9af1-5b6cd55d1227\") " pod="kube-system/cilium-operator-5d85765b45-x724r" Aug 13 01:04:44.329057 kubelet[2599]: I0813 01:04:44.329043 2599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5zmj\" (UniqueName: \"kubernetes.io/projected/48b2e51e-7897-4b6b-9af1-5b6cd55d1227-kube-api-access-g5zmj\") pod \"cilium-operator-5d85765b45-x724r\" (UID: \"48b2e51e-7897-4b6b-9af1-5b6cd55d1227\") " pod="kube-system/cilium-operator-5d85765b45-x724r" Aug 13 01:04:44.493393 kubelet[2599]: E0813 01:04:44.493300 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:04:44.493836 containerd[1485]: time="2025-08-13T01:04:44.493798947Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qzrh6,Uid:59493087-f0d7-4953-bafd-458c2e8eb23e,Namespace:kube-system,Attempt:0,}" Aug 13 01:04:44.501527 kubelet[2599]: E0813 01:04:44.501510 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:04:44.502701 containerd[1485]: time="2025-08-13T01:04:44.501860243Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-tp96b,Uid:0619f9c5-cbd0-435a-928c-03a8fb6c230e,Namespace:kube-system,Attempt:0,}" Aug 13 01:04:44.521098 containerd[1485]: time="2025-08-13T01:04:44.520999603Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 01:04:44.521098 containerd[1485]: time="2025-08-13T01:04:44.521093883Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 01:04:44.521239 containerd[1485]: time="2025-08-13T01:04:44.521124583Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 01:04:44.521239 containerd[1485]: time="2025-08-13T01:04:44.521189623Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 01:04:44.526380 containerd[1485]: time="2025-08-13T01:04:44.526228511Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 01:04:44.526501 containerd[1485]: time="2025-08-13T01:04:44.526478061Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 01:04:44.526739 containerd[1485]: time="2025-08-13T01:04:44.526675601Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 01:04:44.527604 containerd[1485]: time="2025-08-13T01:04:44.527476340Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 01:04:44.539750 systemd[1]: Started cri-containerd-32dcb35356298c511b72753865a6fc53ac7086075f04e35e875dca3d977dd637.scope - libcontainer container 32dcb35356298c511b72753865a6fc53ac7086075f04e35e875dca3d977dd637. Aug 13 01:04:44.547042 systemd[1]: Started cri-containerd-06e5f99e5f95ddb4e0775fdca722039a7091a7ce279672bdd5236c38ee1abfae.scope - libcontainer container 06e5f99e5f95ddb4e0775fdca722039a7091a7ce279672bdd5236c38ee1abfae. Aug 13 01:04:44.570441 kubelet[2599]: E0813 01:04:44.570365 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:04:44.573582 containerd[1485]: time="2025-08-13T01:04:44.573126117Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-x724r,Uid:48b2e51e-7897-4b6b-9af1-5b6cd55d1227,Namespace:kube-system,Attempt:0,}" Aug 13 01:04:44.578281 containerd[1485]: time="2025-08-13T01:04:44.578219855Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-tp96b,Uid:0619f9c5-cbd0-435a-928c-03a8fb6c230e,Namespace:kube-system,Attempt:0,} returns sandbox id \"06e5f99e5f95ddb4e0775fdca722039a7091a7ce279672bdd5236c38ee1abfae\"" Aug 13 01:04:44.579104 kubelet[2599]: E0813 01:04:44.579052 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:04:44.580669 containerd[1485]: time="2025-08-13T01:04:44.580624554Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Aug 13 01:04:44.580863 containerd[1485]: time="2025-08-13T01:04:44.580844354Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qzrh6,Uid:59493087-f0d7-4953-bafd-458c2e8eb23e,Namespace:kube-system,Attempt:0,} returns sandbox id \"32dcb35356298c511b72753865a6fc53ac7086075f04e35e875dca3d977dd637\"" Aug 13 01:04:44.582169 kubelet[2599]: E0813 01:04:44.582096 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:04:44.585194 containerd[1485]: time="2025-08-13T01:04:44.585166911Z" level=info msg="CreateContainer within sandbox 
\"32dcb35356298c511b72753865a6fc53ac7086075f04e35e875dca3d977dd637\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Aug 13 01:04:44.597575 containerd[1485]: time="2025-08-13T01:04:44.597494755Z" level=info msg="CreateContainer within sandbox \"32dcb35356298c511b72753865a6fc53ac7086075f04e35e875dca3d977dd637\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"6a0029adcfb3181381e43959be0c9f4b9592d806be099442ab5933236db20c45\"" Aug 13 01:04:44.599064 containerd[1485]: time="2025-08-13T01:04:44.598981384Z" level=info msg="StartContainer for \"6a0029adcfb3181381e43959be0c9f4b9592d806be099442ab5933236db20c45\"" Aug 13 01:04:44.606540 containerd[1485]: time="2025-08-13T01:04:44.606331641Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 01:04:44.606540 containerd[1485]: time="2025-08-13T01:04:44.606487151Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 01:04:44.606540 containerd[1485]: time="2025-08-13T01:04:44.606508111Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 01:04:44.606832 containerd[1485]: time="2025-08-13T01:04:44.606630521Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 01:04:44.632736 systemd[1]: Started cri-containerd-5fa073d8279cc1096049090f7058bdaa45c287a594730bb3f57d0f9b50b4d7e7.scope - libcontainer container 5fa073d8279cc1096049090f7058bdaa45c287a594730bb3f57d0f9b50b4d7e7. Aug 13 01:04:44.637833 systemd[1]: Started cri-containerd-6a0029adcfb3181381e43959be0c9f4b9592d806be099442ab5933236db20c45.scope - libcontainer container 6a0029adcfb3181381e43959be0c9f4b9592d806be099442ab5933236db20c45. 
Aug 13 01:04:44.681883 containerd[1485]: time="2025-08-13T01:04:44.681807883Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-x724r,Uid:48b2e51e-7897-4b6b-9af1-5b6cd55d1227,Namespace:kube-system,Attempt:0,} returns sandbox id \"5fa073d8279cc1096049090f7058bdaa45c287a594730bb3f57d0f9b50b4d7e7\"" Aug 13 01:04:44.682848 kubelet[2599]: E0813 01:04:44.682818 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:04:44.696084 containerd[1485]: time="2025-08-13T01:04:44.696061886Z" level=info msg="StartContainer for \"6a0029adcfb3181381e43959be0c9f4b9592d806be099442ab5933236db20c45\" returns successfully" Aug 13 01:04:45.041634 kubelet[2599]: E0813 01:04:45.041248 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:04:45.813992 kubelet[2599]: E0813 01:04:45.813956 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:04:45.827955 kubelet[2599]: I0813 01:04:45.827917 2599 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-qzrh6" podStartSLOduration=1.82790458 podStartE2EDuration="1.82790458s" podCreationTimestamp="2025-08-13 01:04:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 01:04:45.055721486 +0000 UTC m=+8.177201541" watchObservedRunningTime="2025-08-13 01:04:45.82790458 +0000 UTC m=+8.949384635" Aug 13 01:04:46.047442 kubelet[2599]: E0813 01:04:46.047399 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:04:47.962220 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1937950547.mount: Deactivated successfully. Aug 13 01:04:48.833345 systemd-timesyncd[1393]: Contacted time server [2602:81b:9000::c10c]:123 (2.flatcar.pool.ntp.org). Aug 13 01:04:48.833404 systemd-timesyncd[1393]: Initial clock synchronization to Wed 2025-08-13 01:04:48.833133 UTC. Aug 13 01:04:48.833465 systemd-resolved[1392]: Clock change detected. Flushing caches. 
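The m=+ suffix on every kubelet timestamp is Go's monotonic-clock reading (in practice, seconds since the process started). Durations in these entries are derived from it, which matters at this point in the log: systemd-timesyncd has just stepped the wall clock and systemd-resolved flushed its caches in response, yet the pod-startup measurements stay consistent. A small illustration of the mechanism:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	start := time.Now() // carries both a wall-clock and a monotonic reading
	time.Sleep(50 * time.Millisecond)
	now := time.Now()

	// String() appends " m=+<seconds>" when a monotonic reading is
	// present -- the same suffix as the kubelet timestamps above.
	fmt.Println(now)

	// Sub/Since use the monotonic readings, so a wall-clock step (like
	// the timesyncd sync in this log) cannot distort the duration.
	fmt.Println(now.Sub(start))
}
```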
Aug 13 01:04:49.638316 kubelet[2599]: E0813 01:04:49.638286 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:04:50.243128 containerd[1485]: time="2025-08-13T01:04:50.243086717Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:04:50.243942 containerd[1485]: time="2025-08-13T01:04:50.243823397Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Aug 13 01:04:50.244603 containerd[1485]: time="2025-08-13T01:04:50.244414596Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:04:50.245770 containerd[1485]: time="2025-08-13T01:04:50.245669326Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 4.830567444s" Aug 13 01:04:50.245770 containerd[1485]: time="2025-08-13T01:04:50.245695806Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Aug 13 01:04:50.247434 containerd[1485]: time="2025-08-13T01:04:50.247405775Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Aug 13 01:04:50.249166 containerd[1485]: time="2025-08-13T01:04:50.249139164Z" level=info msg="CreateContainer within sandbox \"06e5f99e5f95ddb4e0775fdca722039a7091a7ce279672bdd5236c38ee1abfae\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Aug 13 01:04:50.269886 containerd[1485]: time="2025-08-13T01:04:50.269864644Z" level=info msg="CreateContainer within sandbox \"06e5f99e5f95ddb4e0775fdca722039a7091a7ce279672bdd5236c38ee1abfae\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"1d0f4437895f819cd28aca80281847f6ddd16e4619dff11f15a3f4a2bedf5d7f\"" Aug 13 01:04:50.270346 containerd[1485]: time="2025-08-13T01:04:50.270298373Z" level=info msg="StartContainer for \"1d0f4437895f819cd28aca80281847f6ddd16e4619dff11f15a3f4a2bedf5d7f\"" Aug 13 01:04:50.294954 systemd[1]: run-containerd-runc-k8s.io-1d0f4437895f819cd28aca80281847f6ddd16e4619dff11f15a3f4a2bedf5d7f-runc.fMS70l.mount: Deactivated successfully. Aug 13 01:04:50.303168 systemd[1]: Started cri-containerd-1d0f4437895f819cd28aca80281847f6ddd16e4619dff11f15a3f4a2bedf5d7f.scope - libcontainer container 1d0f4437895f819cd28aca80281847f6ddd16e4619dff11f15a3f4a2bedf5d7f. 
Aug 13 01:04:50.326324 containerd[1485]: time="2025-08-13T01:04:50.326291365Z" level=info msg="StartContainer for \"1d0f4437895f819cd28aca80281847f6ddd16e4619dff11f15a3f4a2bedf5d7f\" returns successfully" Aug 13 01:04:50.338338 systemd[1]: cri-containerd-1d0f4437895f819cd28aca80281847f6ddd16e4619dff11f15a3f4a2bedf5d7f.scope: Deactivated successfully. Aug 13 01:04:50.414280 containerd[1485]: time="2025-08-13T01:04:50.414144771Z" level=info msg="shim disconnected" id=1d0f4437895f819cd28aca80281847f6ddd16e4619dff11f15a3f4a2bedf5d7f namespace=k8s.io Aug 13 01:04:50.414280 containerd[1485]: time="2025-08-13T01:04:50.414206231Z" level=warning msg="cleaning up after shim disconnected" id=1d0f4437895f819cd28aca80281847f6ddd16e4619dff11f15a3f4a2bedf5d7f namespace=k8s.io Aug 13 01:04:50.414280 containerd[1485]: time="2025-08-13T01:04:50.414214221Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 01:04:50.890575 kubelet[2599]: E0813 01:04:50.890526 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:04:50.893980 containerd[1485]: time="2025-08-13T01:04:50.893842722Z" level=info msg="CreateContainer within sandbox \"06e5f99e5f95ddb4e0775fdca722039a7091a7ce279672bdd5236c38ee1abfae\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Aug 13 01:04:50.909157 containerd[1485]: time="2025-08-13T01:04:50.909018904Z" level=info msg="CreateContainer within sandbox \"06e5f99e5f95ddb4e0775fdca722039a7091a7ce279672bdd5236c38ee1abfae\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"198114ca8afe2a85ce8cc9d9821288dbf057f184f6e3ce12e1f113bc14aa4aa5\"" Aug 13 01:04:50.910399 containerd[1485]: time="2025-08-13T01:04:50.909394484Z" level=info msg="StartContainer for \"198114ca8afe2a85ce8cc9d9821288dbf057f184f6e3ce12e1f113bc14aa4aa5\"" Aug 13 01:04:50.939174 systemd[1]: Started cri-containerd-198114ca8afe2a85ce8cc9d9821288dbf057f184f6e3ce12e1f113bc14aa4aa5.scope - libcontainer container 198114ca8afe2a85ce8cc9d9821288dbf057f184f6e3ce12e1f113bc14aa4aa5. Aug 13 01:04:50.963369 containerd[1485]: time="2025-08-13T01:04:50.963335737Z" level=info msg="StartContainer for \"198114ca8afe2a85ce8cc9d9821288dbf057f184f6e3ce12e1f113bc14aa4aa5\" returns successfully" Aug 13 01:04:50.977110 systemd[1]: systemd-sysctl.service: Deactivated successfully. Aug 13 01:04:50.977708 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Aug 13 01:04:50.977923 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Aug 13 01:04:50.982329 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 13 01:04:50.982510 systemd[1]: cri-containerd-198114ca8afe2a85ce8cc9d9821288dbf057f184f6e3ce12e1f113bc14aa4aa5.scope: Deactivated successfully. Aug 13 01:04:51.001566 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
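The mount-cgroup container above is the first of cilium's init containers: it runs to completion, so the scope deactivation and the "shim disconnected" / "cleaning up dead shim" messages are the normal teardown of a short-lived container, not a failure. When reading interleaved systemd and containerd entries like these, the 64-hex container ID is the join key; a small, hypothetical helper (not part of any shipped tool) for extracting it:

```go
package main

import (
	"fmt"
	"regexp"
)

// idRe matches the container ID as it appears both in systemd scope
// names ("cri-containerd-<id>.scope") and in containerd shim events
// ("id=<id>").
var idRe = regexp.MustCompile(`(?:cri-containerd-|id=)([0-9a-f]{64})`)

func containerID(logLine string) (string, bool) {
	m := idRe.FindStringSubmatch(logLine)
	if m == nil {
		return "", false
	}
	return m[1], true
}

func main() {
	lines := []string{
		`systemd[1]: Started cri-containerd-1d0f4437895f819cd28aca80281847f6ddd16e4619dff11f15a3f4a2bedf5d7f.scope - libcontainer container ...`,
		`containerd[1485]: level=info msg="shim disconnected" id=1d0f4437895f819cd28aca80281847f6ddd16e4619dff11f15a3f4a2bedf5d7f namespace=k8s.io`,
	}
	for _, l := range lines {
		if id, ok := containerID(l); ok {
			fmt.Println(id[:12], "->", l[:40]) // short ID plus the line's prefix
		}
	}
}
```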
Aug 13 01:04:51.019466 containerd[1485]: time="2025-08-13T01:04:51.018872159Z" level=info msg="shim disconnected" id=198114ca8afe2a85ce8cc9d9821288dbf057f184f6e3ce12e1f113bc14aa4aa5 namespace=k8s.io Aug 13 01:04:51.019466 containerd[1485]: time="2025-08-13T01:04:51.019177389Z" level=warning msg="cleaning up after shim disconnected" id=198114ca8afe2a85ce8cc9d9821288dbf057f184f6e3ce12e1f113bc14aa4aa5 namespace=k8s.io Aug 13 01:04:51.019466 containerd[1485]: time="2025-08-13T01:04:51.019186839Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 01:04:51.258274 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1d0f4437895f819cd28aca80281847f6ddd16e4619dff11f15a3f4a2bedf5d7f-rootfs.mount: Deactivated successfully. Aug 13 01:04:51.582379 containerd[1485]: time="2025-08-13T01:04:51.582178707Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:04:51.583321 containerd[1485]: time="2025-08-13T01:04:51.583172757Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Aug 13 01:04:51.584101 containerd[1485]: time="2025-08-13T01:04:51.584066666Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:04:51.585363 containerd[1485]: time="2025-08-13T01:04:51.585269496Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 1.337834401s" Aug 13 01:04:51.585363 containerd[1485]: time="2025-08-13T01:04:51.585294676Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Aug 13 01:04:51.588097 containerd[1485]: time="2025-08-13T01:04:51.588072054Z" level=info msg="CreateContainer within sandbox \"5fa073d8279cc1096049090f7058bdaa45c287a594730bb3f57d0f9b50b4d7e7\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Aug 13 01:04:51.604200 containerd[1485]: time="2025-08-13T01:04:51.604178136Z" level=info msg="CreateContainer within sandbox \"5fa073d8279cc1096049090f7058bdaa45c287a594730bb3f57d0f9b50b4d7e7\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"33d181215ef4f16ff62334ecad17b45cdc31317b860babd042c0dec889470dbd\"" Aug 13 01:04:51.605306 containerd[1485]: time="2025-08-13T01:04:51.604531036Z" level=info msg="StartContainer for \"33d181215ef4f16ff62334ecad17b45cdc31317b860babd042c0dec889470dbd\"" Aug 13 01:04:51.633164 systemd[1]: Started cri-containerd-33d181215ef4f16ff62334ecad17b45cdc31317b860babd042c0dec889470dbd.scope - libcontainer container 33d181215ef4f16ff62334ecad17b45cdc31317b860babd042c0dec889470dbd. 
Aug 13 01:04:51.657499 containerd[1485]: time="2025-08-13T01:04:51.657461700Z" level=info msg="StartContainer for \"33d181215ef4f16ff62334ecad17b45cdc31317b860babd042c0dec889470dbd\" returns successfully" Aug 13 01:04:51.892706 kubelet[2599]: E0813 01:04:51.892596 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:04:51.897508 kubelet[2599]: E0813 01:04:51.897484 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:04:51.899892 containerd[1485]: time="2025-08-13T01:04:51.899470899Z" level=info msg="CreateContainer within sandbox \"06e5f99e5f95ddb4e0775fdca722039a7091a7ce279672bdd5236c38ee1abfae\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Aug 13 01:04:51.919000 containerd[1485]: time="2025-08-13T01:04:51.918904929Z" level=info msg="CreateContainer within sandbox \"06e5f99e5f95ddb4e0775fdca722039a7091a7ce279672bdd5236c38ee1abfae\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"d0d9152956c2a133c4f87cb7a8090bce89893205b53e418614150694c86d5d57\"" Aug 13 01:04:51.922061 containerd[1485]: time="2025-08-13T01:04:51.920567628Z" level=info msg="StartContainer for \"d0d9152956c2a133c4f87cb7a8090bce89893205b53e418614150694c86d5d57\"" Aug 13 01:04:51.941343 kubelet[2599]: I0813 01:04:51.941285 2599 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-x724r" podStartSLOduration=1.8727646629999999 podStartE2EDuration="7.941269528s" podCreationTimestamp="2025-08-13 01:04:44 +0000 UTC" firstStartedPulling="2025-08-13 01:04:44.683746192 +0000 UTC m=+7.805226247" lastFinishedPulling="2025-08-13 01:04:51.586702255 +0000 UTC m=+13.873731112" observedRunningTime="2025-08-13 01:04:51.904830456 +0000 UTC m=+14.191859313" watchObservedRunningTime="2025-08-13 01:04:51.941269528 +0000 UTC m=+14.228298385" Aug 13 01:04:51.954197 systemd[1]: Started cri-containerd-d0d9152956c2a133c4f87cb7a8090bce89893205b53e418614150694c86d5d57.scope - libcontainer container d0d9152956c2a133c4f87cb7a8090bce89893205b53e418614150694c86d5d57. Aug 13 01:04:51.992270 containerd[1485]: time="2025-08-13T01:04:51.992213072Z" level=info msg="StartContainer for \"d0d9152956c2a133c4f87cb7a8090bce89893205b53e418614150694c86d5d57\" returns successfully" Aug 13 01:04:52.001396 systemd[1]: cri-containerd-d0d9152956c2a133c4f87cb7a8090bce89893205b53e418614150694c86d5d57.scope: Deactivated successfully. 
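The tracker's numbers for cilium-operator above are internally consistent on the monotonic clock: the pull window is lastFinishedPulling minus firstStartedPulling measured in m=+ values (13.873731112 - 7.805226247, about 6.07 s), and subtracting it from podStartE2EDuration (7.941269528 s) yields exactly the reported podStartSLOduration. The wall-clock pull window disagrees by almost a second because of the clock step logged at 01:04:48. Assuming the relationship SLO duration = E2E duration minus pull time, the arithmetic checks out:

```go
package main

import "fmt"

func main() {
	e2e := 7.941269528                 // podStartE2EDuration, seconds
	pull := 13.873731112 - 7.805226247 // pull window from the m=+ values
	fmt.Printf("SLO = %.9f s\n", e2e-pull) // prints 1.872764663, matching the log
}
```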
Aug 13 01:04:52.068722 containerd[1485]: time="2025-08-13T01:04:52.068671724Z" level=info msg="shim disconnected" id=d0d9152956c2a133c4f87cb7a8090bce89893205b53e418614150694c86d5d57 namespace=k8s.io Aug 13 01:04:52.069135 containerd[1485]: time="2025-08-13T01:04:52.068944354Z" level=warning msg="cleaning up after shim disconnected" id=d0d9152956c2a133c4f87cb7a8090bce89893205b53e418614150694c86d5d57 namespace=k8s.io Aug 13 01:04:52.069135 containerd[1485]: time="2025-08-13T01:04:52.068957974Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 01:04:52.089869 containerd[1485]: time="2025-08-13T01:04:52.089735033Z" level=warning msg="cleanup warnings time=\"2025-08-13T01:04:52Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Aug 13 01:04:52.902462 kubelet[2599]: E0813 01:04:52.902404 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:04:52.903163 kubelet[2599]: E0813 01:04:52.903128 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:04:52.906327 containerd[1485]: time="2025-08-13T01:04:52.905877325Z" level=info msg="CreateContainer within sandbox \"06e5f99e5f95ddb4e0775fdca722039a7091a7ce279672bdd5236c38ee1abfae\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Aug 13 01:04:52.919327 containerd[1485]: time="2025-08-13T01:04:52.918571129Z" level=info msg="CreateContainer within sandbox \"06e5f99e5f95ddb4e0775fdca722039a7091a7ce279672bdd5236c38ee1abfae\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"b379a8487762e02193cb013c65f7bb691568d3c5451a10a3ff374ab63f6ffbd4\"" Aug 13 01:04:52.920988 containerd[1485]: time="2025-08-13T01:04:52.920951668Z" level=info msg="StartContainer for \"b379a8487762e02193cb013c65f7bb691568d3c5451a10a3ff374ab63f6ffbd4\"" Aug 13 01:04:52.961204 systemd[1]: Started cri-containerd-b379a8487762e02193cb013c65f7bb691568d3c5451a10a3ff374ab63f6ffbd4.scope - libcontainer container b379a8487762e02193cb013c65f7bb691568d3c5451a10a3ff374ab63f6ffbd4. Aug 13 01:04:52.986449 systemd[1]: cri-containerd-b379a8487762e02193cb013c65f7bb691568d3c5451a10a3ff374ab63f6ffbd4.scope: Deactivated successfully. Aug 13 01:04:52.987796 containerd[1485]: time="2025-08-13T01:04:52.987763834Z" level=info msg="StartContainer for \"b379a8487762e02193cb013c65f7bb691568d3c5451a10a3ff374ab63f6ffbd4\" returns successfully" Aug 13 01:04:53.005428 containerd[1485]: time="2025-08-13T01:04:53.005379056Z" level=info msg="shim disconnected" id=b379a8487762e02193cb013c65f7bb691568d3c5451a10a3ff374ab63f6ffbd4 namespace=k8s.io Aug 13 01:04:53.005428 containerd[1485]: time="2025-08-13T01:04:53.005426296Z" level=warning msg="cleaning up after shim disconnected" id=b379a8487762e02193cb013c65f7bb691568d3c5451a10a3ff374ab63f6ffbd4 namespace=k8s.io Aug 13 01:04:53.005690 containerd[1485]: time="2025-08-13T01:04:53.005434386Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 01:04:53.257828 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b379a8487762e02193cb013c65f7bb691568d3c5451a10a3ff374ab63f6ffbd4-rootfs.mount: Deactivated successfully. 
Aug 13 01:04:53.281387 kubelet[2599]: E0813 01:04:53.280672 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:04:53.681250 update_engine[1466]: I20250813 01:04:53.681020 1466 update_attempter.cc:509] Updating boot flags... Aug 13 01:04:53.728744 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 45 scanned by (udev-worker) (3296) Aug 13 01:04:53.837161 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 45 scanned by (udev-worker) (3297) Aug 13 01:04:53.932559 kubelet[2599]: E0813 01:04:53.932297 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:04:53.947738 containerd[1485]: time="2025-08-13T01:04:53.943213817Z" level=info msg="CreateContainer within sandbox \"06e5f99e5f95ddb4e0775fdca722039a7091a7ce279672bdd5236c38ee1abfae\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Aug 13 01:04:53.965167 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 45 scanned by (udev-worker) (3297) Aug 13 01:04:53.984085 containerd[1485]: time="2025-08-13T01:04:53.984036356Z" level=info msg="CreateContainer within sandbox \"06e5f99e5f95ddb4e0775fdca722039a7091a7ce279672bdd5236c38ee1abfae\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"6d1a64a45892fcee777fbb8145b23c94527d4ac109a1d4fcbd276bbbe63db264\"" Aug 13 01:04:53.992076 containerd[1485]: time="2025-08-13T01:04:53.991160413Z" level=info msg="StartContainer for \"6d1a64a45892fcee777fbb8145b23c94527d4ac109a1d4fcbd276bbbe63db264\"" Aug 13 01:04:54.082169 systemd[1]: Started cri-containerd-6d1a64a45892fcee777fbb8145b23c94527d4ac109a1d4fcbd276bbbe63db264.scope - libcontainer container 6d1a64a45892fcee777fbb8145b23c94527d4ac109a1d4fcbd276bbbe63db264. Aug 13 01:04:54.127543 containerd[1485]: time="2025-08-13T01:04:54.125491715Z" level=info msg="StartContainer for \"6d1a64a45892fcee777fbb8145b23c94527d4ac109a1d4fcbd276bbbe63db264\" returns successfully" Aug 13 01:04:54.289011 kubelet[2599]: I0813 01:04:54.288577 2599 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Aug 13 01:04:54.312570 kubelet[2599]: W0813 01:04:54.312485 2599 reflector.go:561] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:172-234-214-143" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node '172-234-214-143' and this object Aug 13 01:04:54.312570 kubelet[2599]: E0813 01:04:54.312517 2599 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:172-234-214-143\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node '172-234-214-143' and this object" logger="UnhandledError" Aug 13 01:04:54.323762 systemd[1]: Created slice kubepods-burstable-pod78fa6c3b_212a_4748_94ca_c14248ee6107.slice - libcontainer container kubepods-burstable-pod78fa6c3b_212a_4748_94ca_c14248ee6107.slice. 
Aug 13 01:04:54.328402 kubelet[2599]: I0813 01:04:54.328379 2599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/78fa6c3b-212a-4748-94ca-c14248ee6107-config-volume\") pod \"coredns-7c65d6cfc9-bghtd\" (UID: \"78fa6c3b-212a-4748-94ca-c14248ee6107\") " pod="kube-system/coredns-7c65d6cfc9-bghtd" Aug 13 01:04:54.328480 kubelet[2599]: I0813 01:04:54.328408 2599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79hzm\" (UniqueName: \"kubernetes.io/projected/78fa6c3b-212a-4748-94ca-c14248ee6107-kube-api-access-79hzm\") pod \"coredns-7c65d6cfc9-bghtd\" (UID: \"78fa6c3b-212a-4748-94ca-c14248ee6107\") " pod="kube-system/coredns-7c65d6cfc9-bghtd" Aug 13 01:04:54.329474 systemd[1]: Created slice kubepods-burstable-pod1494ea32_39e4_4b6b_ada8_768f760ddc03.slice - libcontainer container kubepods-burstable-pod1494ea32_39e4_4b6b_ada8_768f760ddc03.slice. Aug 13 01:04:54.429213 kubelet[2599]: I0813 01:04:54.429192 2599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trpnq\" (UniqueName: \"kubernetes.io/projected/1494ea32-39e4-4b6b-ada8-768f760ddc03-kube-api-access-trpnq\") pod \"coredns-7c65d6cfc9-f7p9h\" (UID: \"1494ea32-39e4-4b6b-ada8-768f760ddc03\") " pod="kube-system/coredns-7c65d6cfc9-f7p9h" Aug 13 01:04:54.429301 kubelet[2599]: I0813 01:04:54.429230 2599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1494ea32-39e4-4b6b-ada8-768f760ddc03-config-volume\") pod \"coredns-7c65d6cfc9-f7p9h\" (UID: \"1494ea32-39e4-4b6b-ada8-768f760ddc03\") " pod="kube-system/coredns-7c65d6cfc9-f7p9h" Aug 13 01:04:54.933599 kubelet[2599]: E0813 01:04:54.933416 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:04:54.944744 kubelet[2599]: I0813 01:04:54.944602 2599 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-tp96b" podStartSLOduration=6.112560423 podStartE2EDuration="10.944590116s" podCreationTimestamp="2025-08-13 01:04:44 +0000 UTC" firstStartedPulling="2025-08-13 01:04:44.580254674 +0000 UTC m=+7.701734729" lastFinishedPulling="2025-08-13 01:04:50.246735565 +0000 UTC m=+12.533764422" observedRunningTime="2025-08-13 01:04:54.943906276 +0000 UTC m=+17.230935143" watchObservedRunningTime="2025-08-13 01:04:54.944590116 +0000 UTC m=+17.231618973" Aug 13 01:04:55.429670 kubelet[2599]: E0813 01:04:55.429536 2599 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Aug 13 01:04:55.429670 kubelet[2599]: E0813 01:04:55.429613 2599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/78fa6c3b-212a-4748-94ca-c14248ee6107-config-volume podName:78fa6c3b-212a-4748-94ca-c14248ee6107 nodeName:}" failed. No retries permitted until 2025-08-13 01:04:55.929593073 +0000 UTC m=+18.216621930 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/78fa6c3b-212a-4748-94ca-c14248ee6107-config-volume") pod "coredns-7c65d6cfc9-bghtd" (UID: "78fa6c3b-212a-4748-94ca-c14248ee6107") : failed to sync configmap cache: timed out waiting for the condition Aug 13 01:04:55.530632 kubelet[2599]: E0813 01:04:55.530598 2599 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Aug 13 01:04:55.530720 kubelet[2599]: E0813 01:04:55.530652 2599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1494ea32-39e4-4b6b-ada8-768f760ddc03-config-volume podName:1494ea32-39e4-4b6b-ada8-768f760ddc03 nodeName:}" failed. No retries permitted until 2025-08-13 01:04:56.030638333 +0000 UTC m=+18.317667190 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/1494ea32-39e4-4b6b-ada8-768f760ddc03-config-volume") pod "coredns-7c65d6cfc9-f7p9h" (UID: "1494ea32-39e4-4b6b-ada8-768f760ddc03") : failed to sync configmap cache: timed out waiting for the condition Aug 13 01:04:55.935452 kubelet[2599]: E0813 01:04:55.935017 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:04:56.126099 kubelet[2599]: E0813 01:04:56.126065 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:04:56.126492 containerd[1485]: time="2025-08-13T01:04:56.126443695Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-bghtd,Uid:78fa6c3b-212a-4748-94ca-c14248ee6107,Namespace:kube-system,Attempt:0,}" Aug 13 01:04:56.135213 kubelet[2599]: E0813 01:04:56.135159 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:04:56.136279 containerd[1485]: time="2025-08-13T01:04:56.135869460Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-f7p9h,Uid:1494ea32-39e4-4b6b-ada8-768f760ddc03,Namespace:kube-system,Attempt:0,}" Aug 13 01:04:56.345063 systemd-networkd[1391]: cilium_host: Link UP Aug 13 01:04:56.349170 systemd-networkd[1391]: cilium_net: Link UP Aug 13 01:04:56.350681 systemd-networkd[1391]: cilium_net: Gained carrier Aug 13 01:04:56.351330 systemd-networkd[1391]: cilium_host: Gained carrier Aug 13 01:04:56.462229 systemd-networkd[1391]: cilium_vxlan: Link UP Aug 13 01:04:56.462238 systemd-networkd[1391]: cilium_vxlan: Gained carrier Aug 13 01:04:56.663091 kernel: NET: Registered PF_ALG protocol family Aug 13 01:04:56.891328 systemd-networkd[1391]: cilium_host: Gained IPv6LL Aug 13 01:04:56.938665 kubelet[2599]: E0813 01:04:56.938626 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:04:57.334597 systemd-networkd[1391]: lxc_health: Link UP Aug 13 01:04:57.338190 systemd-networkd[1391]: lxc_health: Gained carrier Aug 13 01:04:57.339774 systemd-networkd[1391]: cilium_net: Gained IPv6LL Aug 13 01:04:57.691654 systemd-networkd[1391]: lxc14cc9edfea6a: Link UP Aug 13 01:04:57.692691 systemd-networkd[1391]: 
lxc916c789fe22d: Link UP Aug 13 01:04:57.695071 kernel: eth0: renamed from tmp732a7 Aug 13 01:04:57.701082 kernel: eth0: renamed from tmp76c22 Aug 13 01:04:57.706556 systemd-networkd[1391]: lxc916c789fe22d: Gained carrier Aug 13 01:04:57.707928 systemd-networkd[1391]: lxc14cc9edfea6a: Gained carrier Aug 13 01:04:57.938875 kubelet[2599]: E0813 01:04:57.938836 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:04:57.990362 kubelet[2599]: I0813 01:04:57.990330 2599 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:04:57.990362 kubelet[2599]: I0813 01:04:57.990373 2599 container_gc.go:88] "Attempting to delete unused containers" Aug 13 01:04:57.996524 kubelet[2599]: I0813 01:04:57.996497 2599 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 01:04:58.017608 kubelet[2599]: I0813 01:04:58.017577 2599 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:04:58.017707 kubelet[2599]: I0813 01:04:58.017674 2599 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["kube-system/coredns-7c65d6cfc9-bghtd","kube-system/coredns-7c65d6cfc9-f7p9h","kube-system/cilium-operator-5d85765b45-x724r","kube-system/kube-controller-manager-172-234-214-143","kube-system/kube-proxy-qzrh6","kube-system/kube-apiserver-172-234-214-143","kube-system/cilium-tp96b","kube-system/kube-scheduler-172-234-214-143"] Aug 13 01:04:58.017744 kubelet[2599]: E0813 01:04:58.017709 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-bghtd" Aug 13 01:04:58.017744 kubelet[2599]: E0813 01:04:58.017719 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-f7p9h" Aug 13 01:04:58.017744 kubelet[2599]: E0813 01:04:58.017729 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-5d85765b45-x724r" Aug 13 01:04:58.017744 kubelet[2599]: E0813 01:04:58.017738 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-214-143" Aug 13 01:04:58.017744 kubelet[2599]: E0813 01:04:58.017746 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-qzrh6" Aug 13 01:04:58.017837 kubelet[2599]: E0813 01:04:58.017754 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-214-143" Aug 13 01:04:58.017837 kubelet[2599]: E0813 01:04:58.017762 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-tp96b" Aug 13 01:04:58.017837 kubelet[2599]: E0813 01:04:58.017769 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-214-143" Aug 13 01:04:58.017837 kubelet[2599]: I0813 01:04:58.017778 2599 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" Aug 13 01:04:58.299253 systemd-networkd[1391]: cilium_vxlan: Gained IPv6LL Aug 13 01:04:58.811189 systemd-networkd[1391]: lxc_health: Gained IPv6LL Aug 13 01:04:58.940437 kubelet[2599]: E0813 01:04:58.940265 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:04:59.003161 systemd-networkd[1391]: lxc916c789fe22d: Gained IPv6LL Aug 13 01:04:59.323218 systemd-networkd[1391]: lxc14cc9edfea6a: Gained IPv6LL Aug 13 01:04:59.941864 kubelet[2599]: E0813 01:04:59.941455 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:05:00.891071 containerd[1485]: time="2025-08-13T01:05:00.890013512Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 01:05:00.891071 containerd[1485]: time="2025-08-13T01:05:00.890273502Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 01:05:00.891071 containerd[1485]: time="2025-08-13T01:05:00.890341082Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 01:05:00.891536 containerd[1485]: time="2025-08-13T01:05:00.891220722Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 01:05:00.894166 containerd[1485]: time="2025-08-13T01:05:00.892951191Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 01:05:00.894166 containerd[1485]: time="2025-08-13T01:05:00.893166641Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 01:05:00.894166 containerd[1485]: time="2025-08-13T01:05:00.894008360Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 01:05:00.895069 containerd[1485]: time="2025-08-13T01:05:00.894205180Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 01:05:00.929568 systemd[1]: Started cri-containerd-732a71a304d9fefce6cc62436255651cdc17cee73d99008d2fadbe6c7ffd6a31.scope - libcontainer container 732a71a304d9fefce6cc62436255651cdc17cee73d99008d2fadbe6c7ffd6a31. Aug 13 01:05:00.942176 systemd[1]: Started cri-containerd-76c227338127f20470ac20494d3bb42afa3b725704400bf48bbaa164f2acc310.scope - libcontainer container 76c227338127f20470ac20494d3bb42afa3b725704400bf48bbaa164f2acc310. 
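The temporary interface names in the kernel messages above line up with the coredns sandbox IDs containerd was creating at the time (requests at 01:04:56, returns just below): each looks like "tmp" plus the first five hex characters of the sandbox ID, before the CNI plugin moves the interface into the pod namespace and renames it, producing "eth0: renamed from tmp732a7". A quick check of that apparent pattern:

```go
package main

import "fmt"

func main() {
	// Sandbox IDs from the coredns RunPodSandbox lines in this log.
	sandboxes := []string{
		"732a71a304d9fefce6cc62436255651cdc17cee73d99008d2fadbe6c7ffd6a31",
		"76c227338127f20470ac20494d3bb42afa3b725704400bf48bbaa164f2acc310",
	}
	for _, id := range sandboxes {
		fmt.Println("tmp" + id[:5]) // tmp732a7, tmp76c22 -- as in the kernel messages
	}
}
```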
Aug 13 01:05:00.989706 containerd[1485]: time="2025-08-13T01:05:00.989669482Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-bghtd,Uid:78fa6c3b-212a-4748-94ca-c14248ee6107,Namespace:kube-system,Attempt:0,} returns sandbox id \"732a71a304d9fefce6cc62436255651cdc17cee73d99008d2fadbe6c7ffd6a31\"" Aug 13 01:05:00.990461 kubelet[2599]: E0813 01:05:00.990435 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:05:00.994016 containerd[1485]: time="2025-08-13T01:05:00.993982980Z" level=info msg="CreateContainer within sandbox \"732a71a304d9fefce6cc62436255651cdc17cee73d99008d2fadbe6c7ffd6a31\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 01:05:01.013463 containerd[1485]: time="2025-08-13T01:05:01.013435471Z" level=info msg="CreateContainer within sandbox \"732a71a304d9fefce6cc62436255651cdc17cee73d99008d2fadbe6c7ffd6a31\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6cfff2a7cf1bd98037486be2aee0d8fc07dd52ef4404f73a5cf09d1c299a18d9\"" Aug 13 01:05:01.015259 containerd[1485]: time="2025-08-13T01:05:01.015150980Z" level=info msg="StartContainer for \"6cfff2a7cf1bd98037486be2aee0d8fc07dd52ef4404f73a5cf09d1c299a18d9\"" Aug 13 01:05:01.025418 containerd[1485]: time="2025-08-13T01:05:01.025383405Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-f7p9h,Uid:1494ea32-39e4-4b6b-ada8-768f760ddc03,Namespace:kube-system,Attempt:0,} returns sandbox id \"76c227338127f20470ac20494d3bb42afa3b725704400bf48bbaa164f2acc310\"" Aug 13 01:05:01.027477 kubelet[2599]: E0813 01:05:01.026505 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:05:01.031968 containerd[1485]: time="2025-08-13T01:05:01.031946981Z" level=info msg="CreateContainer within sandbox \"76c227338127f20470ac20494d3bb42afa3b725704400bf48bbaa164f2acc310\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 01:05:01.052170 systemd[1]: Started cri-containerd-6cfff2a7cf1bd98037486be2aee0d8fc07dd52ef4404f73a5cf09d1c299a18d9.scope - libcontainer container 6cfff2a7cf1bd98037486be2aee0d8fc07dd52ef4404f73a5cf09d1c299a18d9. Aug 13 01:05:01.052516 containerd[1485]: time="2025-08-13T01:05:01.052485821Z" level=info msg="CreateContainer within sandbox \"76c227338127f20470ac20494d3bb42afa3b725704400bf48bbaa164f2acc310\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c32f15f6a2c9a9d1727c83e7fe1e20a529c6fee0ab73e92d582632b41e428d3b\"" Aug 13 01:05:01.054333 containerd[1485]: time="2025-08-13T01:05:01.054306790Z" level=info msg="StartContainer for \"c32f15f6a2c9a9d1727c83e7fe1e20a529c6fee0ab73e92d582632b41e428d3b\"" Aug 13 01:05:01.096383 systemd[1]: Started cri-containerd-c32f15f6a2c9a9d1727c83e7fe1e20a529c6fee0ab73e92d582632b41e428d3b.scope - libcontainer container c32f15f6a2c9a9d1727c83e7fe1e20a529c6fee0ab73e92d582632b41e428d3b. 
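Separately from the coredns startup completing below, the log settles into a roughly ten-second eviction_manager cycle (first at 01:04:57, then repeating in the entries that follow): the kubelet decides it must reclaim ephemeral-storage, ranks every pod on the node, finds each one critical, and gives up with "unable to evict any pods from the node". A hedged sketch of that guard loop, with local stand-in types rather than kubelet's own:

```go
package main

import "fmt"

// pod is a local stand-in; in the kubelet, criticality comes from the
// system-node-critical / system-cluster-critical priority classes.
type pod struct {
	name     string
	critical bool
}

// evictOne mirrors the messages in the log: walk the ranked list, skip
// critical pods, and report failure if nothing was evictable.
func evictOne(ranked []pod) bool {
	for _, p := range ranked {
		if p.critical {
			fmt.Printf("Eviction manager: cannot evict a critical pod pod=%q\n", p.name)
			continue
		}
		fmt.Printf("Eviction manager: evicting pod=%q\n", p.name)
		return true
	}
	fmt.Println("Eviction manager: unable to evict any pods from the node")
	return false
}

func main() {
	// On this node every ranked pod is a kube-system component, so the
	// loop always falls through -- hence the repeating cycle.
	evictOne([]pod{
		{"kube-system/cilium-operator-5d85765b45-x724r", true},
		{"kube-system/coredns-7c65d6cfc9-bghtd", true},
		{"kube-system/kube-apiserver-172-234-214-143", true},
	})
}
```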
Aug 13 01:05:01.099315 containerd[1485]: time="2025-08-13T01:05:01.099275198Z" level=info msg="StartContainer for \"6cfff2a7cf1bd98037486be2aee0d8fc07dd52ef4404f73a5cf09d1c299a18d9\" returns successfully"
Aug 13 01:05:01.131020 containerd[1485]: time="2025-08-13T01:05:01.130398052Z" level=info msg="StartContainer for \"c32f15f6a2c9a9d1727c83e7fe1e20a529c6fee0ab73e92d582632b41e428d3b\" returns successfully"
Aug 13 01:05:01.900163 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1922337193.mount: Deactivated successfully.
Aug 13 01:05:01.949783 kubelet[2599]: E0813 01:05:01.949579 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Aug 13 01:05:01.953418 kubelet[2599]: E0813 01:05:01.953391 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Aug 13 01:05:01.961512 kubelet[2599]: I0813 01:05:01.961476 2599 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-f7p9h" podStartSLOduration=17.961466096 podStartE2EDuration="17.961466096s" podCreationTimestamp="2025-08-13 01:04:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 01:05:01.961067647 +0000 UTC m=+24.248096504" watchObservedRunningTime="2025-08-13 01:05:01.961466096 +0000 UTC m=+24.248494953"
Aug 13 01:05:02.955800 kubelet[2599]: E0813 01:05:02.955672 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Aug 13 01:05:02.955800 kubelet[2599]: E0813 01:05:02.955673 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Aug 13 01:05:03.957292 kubelet[2599]: E0813 01:05:03.957253 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Aug 13 01:05:08.035602 kubelet[2599]: I0813 01:05:08.035563 2599 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
Aug 13 01:05:08.035602 kubelet[2599]: I0813 01:05:08.035603 2599 container_gc.go:88] "Attempting to delete unused containers"
Aug 13 01:05:08.037828 kubelet[2599]: I0813 01:05:08.037811 2599 image_gc_manager.go:431] "Attempting to delete unused images"
Aug 13 01:05:08.047572 kubelet[2599]: I0813 01:05:08.047549 2599 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
Aug 13 01:05:08.047651 kubelet[2599]: I0813 01:05:08.047635 2599 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-5d85765b45-x724r","kube-system/coredns-7c65d6cfc9-bghtd","kube-system/coredns-7c65d6cfc9-f7p9h","kube-system/cilium-tp96b","kube-system/kube-controller-manager-172-234-214-143","kube-system/kube-proxy-qzrh6","kube-system/kube-apiserver-172-234-214-143","kube-system/kube-scheduler-172-234-214-143"]
Aug 13 01:05:08.047677 kubelet[2599]: E0813 01:05:08.047667 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-5d85765b45-x724r"
Aug 13 01:05:08.047701 kubelet[2599]: E0813 01:05:08.047680 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-bghtd"
Aug 13 01:05:08.047701 kubelet[2599]: E0813 01:05:08.047689 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-f7p9h"
Aug 13 01:05:08.047701 kubelet[2599]: E0813 01:05:08.047697 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-tp96b"
Aug 13 01:05:08.047764 kubelet[2599]: E0813 01:05:08.047704 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-214-143"
Aug 13 01:05:08.047764 kubelet[2599]: E0813 01:05:08.047712 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-qzrh6"
Aug 13 01:05:08.047764 kubelet[2599]: E0813 01:05:08.047721 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-214-143"
Aug 13 01:05:08.047764 kubelet[2599]: E0813 01:05:08.047728 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-214-143"
Aug 13 01:05:08.047764 kubelet[2599]: I0813 01:05:08.047736 2599 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node"
Aug 13 01:05:18.062676 kubelet[2599]: I0813 01:05:18.062649 2599 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
Aug 13 01:05:18.062676 kubelet[2599]: I0813 01:05:18.062687 2599 container_gc.go:88] "Attempting to delete unused containers"
Aug 13 01:05:18.066413 kubelet[2599]: I0813 01:05:18.066177 2599 image_gc_manager.go:431] "Attempting to delete unused images"
Aug 13 01:05:18.081949 kubelet[2599]: I0813 01:05:18.081724 2599 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
Aug 13 01:05:18.081949 kubelet[2599]: I0813 01:05:18.081809 2599 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-5d85765b45-x724r","kube-system/coredns-7c65d6cfc9-bghtd","kube-system/coredns-7c65d6cfc9-f7p9h","kube-system/cilium-tp96b","kube-system/kube-controller-manager-172-234-214-143","kube-system/kube-proxy-qzrh6","kube-system/kube-apiserver-172-234-214-143","kube-system/kube-scheduler-172-234-214-143"]
Aug 13 01:05:18.081949 kubelet[2599]: E0813 01:05:18.081839 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-5d85765b45-x724r"
Aug 13 01:05:18.081949 kubelet[2599]: E0813 01:05:18.081849 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-bghtd"
Aug 13 01:05:18.081949 kubelet[2599]: E0813 01:05:18.081857 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-f7p9h"
Aug 13 01:05:18.081949 kubelet[2599]: E0813 01:05:18.081865 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-tp96b"
Aug 13 01:05:18.081949 kubelet[2599]: E0813 01:05:18.081874 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-214-143"
Aug 13 01:05:18.081949 kubelet[2599]: E0813 01:05:18.081884 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-qzrh6"
Aug 13 01:05:18.081949 kubelet[2599]: E0813 01:05:18.081892 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-214-143"
Aug 13 01:05:18.081949 kubelet[2599]: E0813 01:05:18.081900 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-214-143"
Aug 13 01:05:18.081949 kubelet[2599]: I0813 01:05:18.081909 2599 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node"
Aug 13 01:05:28.104509 kubelet[2599]: I0813 01:05:28.104446 2599 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
Aug 13 01:05:28.104509 kubelet[2599]: I0813 01:05:28.104502 2599 container_gc.go:88] "Attempting to delete unused containers"
Aug 13 01:05:28.108388 kubelet[2599]: I0813 01:05:28.107617 2599 image_gc_manager.go:431] "Attempting to delete unused images"
Aug 13 01:05:28.120680 kubelet[2599]: I0813 01:05:28.120650 2599 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
Aug 13 01:05:28.120823 kubelet[2599]: I0813 01:05:28.120796 2599 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-5d85765b45-x724r","kube-system/coredns-7c65d6cfc9-bghtd","kube-system/coredns-7c65d6cfc9-f7p9h","kube-system/cilium-tp96b","kube-system/kube-controller-manager-172-234-214-143","kube-system/kube-proxy-qzrh6","kube-system/kube-apiserver-172-234-214-143","kube-system/kube-scheduler-172-234-214-143"]
Aug 13 01:05:28.120920 kubelet[2599]: E0813 01:05:28.120836 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-5d85765b45-x724r"
Aug 13 01:05:28.120920 kubelet[2599]: E0813 01:05:28.120852 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-bghtd"
Aug 13 01:05:28.120920 kubelet[2599]: E0813 01:05:28.120862 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-f7p9h"
Aug 13 01:05:28.120920 kubelet[2599]: E0813 01:05:28.120871 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-tp96b"
Aug 13 01:05:28.120920 kubelet[2599]: E0813 01:05:28.120881 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-214-143"
Aug 13 01:05:28.120920 kubelet[2599]: E0813 01:05:28.120891 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-qzrh6"
Aug 13 01:05:28.120920 kubelet[2599]: E0813 01:05:28.120901 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-214-143"
Aug 13 01:05:28.120920 kubelet[2599]: E0813 01:05:28.120910 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-214-143"
Aug 13 01:05:28.120920 kubelet[2599]: I0813 01:05:28.120920 2599 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node"
Aug 13 01:05:38.139683 kubelet[2599]: I0813 01:05:38.139580 2599 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
Aug 13 01:05:38.139683 kubelet[2599]: I0813 01:05:38.139628 2599 container_gc.go:88] "Attempting to delete unused containers"
Aug 13 01:05:38.142912 kubelet[2599]: I0813 01:05:38.142508 2599 image_gc_manager.go:431] "Attempting to delete unused images"
Aug 13 01:05:38.153992 kubelet[2599]: I0813 01:05:38.153962 2599 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
Aug 13 01:05:38.154162 kubelet[2599]: I0813 01:05:38.154092 2599 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-5d85765b45-x724r","kube-system/coredns-7c65d6cfc9-bghtd","kube-system/coredns-7c65d6cfc9-f7p9h","kube-system/cilium-tp96b","kube-system/kube-controller-manager-172-234-214-143","kube-system/kube-proxy-qzrh6","kube-system/kube-apiserver-172-234-214-143","kube-system/kube-scheduler-172-234-214-143"]
Aug 13 01:05:38.154162 kubelet[2599]: E0813 01:05:38.154126 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-5d85765b45-x724r"
Aug 13 01:05:38.154162 kubelet[2599]: E0813 01:05:38.154138 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-bghtd"
Aug 13 01:05:38.154162 kubelet[2599]: E0813 01:05:38.154147 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-f7p9h"
Aug 13 01:05:38.154162 kubelet[2599]: E0813 01:05:38.154157 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-tp96b"
Aug 13 01:05:38.154162 kubelet[2599]: E0813 01:05:38.154166 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-214-143"
Aug 13 01:05:38.154439 kubelet[2599]: E0813 01:05:38.154177 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-qzrh6"
Aug 13 01:05:38.154439 kubelet[2599]: E0813 01:05:38.154186 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-214-143"
Aug 13 01:05:38.154439 kubelet[2599]: E0813 01:05:38.154195 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-214-143"
Aug 13 01:05:38.154439 kubelet[2599]: I0813 01:05:38.154204 2599 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node"
Aug 13 01:05:48.177148 kubelet[2599]: I0813 01:05:48.177105 2599 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
Aug 13 01:05:48.177148 kubelet[2599]: I0813 01:05:48.177145 2599 container_gc.go:88] "Attempting to delete unused containers"
Aug 13 01:05:48.179218 kubelet[2599]: I0813 01:05:48.179197 2599 image_gc_manager.go:431] "Attempting to delete unused images"
Aug 13 01:05:48.188768 kubelet[2599]: I0813 01:05:48.188747 2599 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
Aug 13 01:05:48.188850 kubelet[2599]: I0813 01:05:48.188833 2599 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-5d85765b45-x724r","kube-system/coredns-7c65d6cfc9-bghtd","kube-system/coredns-7c65d6cfc9-f7p9h","kube-system/cilium-tp96b","kube-system/kube-proxy-qzrh6","kube-system/kube-controller-manager-172-234-214-143","kube-system/kube-apiserver-172-234-214-143","kube-system/kube-scheduler-172-234-214-143"]
Aug 13 01:05:48.188889 kubelet[2599]: E0813 01:05:48.188865 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-5d85765b45-x724r"
Aug 13 01:05:48.188889 kubelet[2599]: E0813 01:05:48.188877 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-bghtd"
Aug 13 01:05:48.188889 kubelet[2599]: E0813 01:05:48.188885 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-f7p9h"
Aug 13 01:05:48.188955 kubelet[2599]: E0813 01:05:48.188893 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-tp96b"
Aug 13 01:05:48.188955 kubelet[2599]: E0813 01:05:48.188901 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-qzrh6"
Aug 13 01:05:48.188955 kubelet[2599]: E0813 01:05:48.188909 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-214-143"
Aug 13 01:05:48.188955 kubelet[2599]: E0813 01:05:48.188918 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-214-143"
Aug 13 01:05:48.188955 kubelet[2599]: E0813 01:05:48.188926 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-214-143"
Aug 13 01:05:48.188955 kubelet[2599]: I0813 01:05:48.188935 2599 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node"
Aug 13 01:05:50.825185 kubelet[2599]: E0813 01:05:50.822376 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Aug 13 01:05:58.207142 kubelet[2599]: I0813 01:05:58.207038 2599 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
Aug 13 01:05:58.207142 kubelet[2599]: I0813 01:05:58.207136 2599 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
Aug 13 01:05:58.207637 kubelet[2599]: I0813 01:05:58.207235 2599 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-5d85765b45-x724r","kube-system/coredns-7c65d6cfc9-bghtd","kube-system/coredns-7c65d6cfc9-f7p9h","kube-system/cilium-tp96b","kube-system/kube-controller-manager-172-234-214-143","kube-system/kube-proxy-qzrh6","kube-system/kube-apiserver-172-234-214-143","kube-system/kube-scheduler-172-234-214-143"]
Aug 13 01:05:58.207637 kubelet[2599]: E0813 01:05:58.207265 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-5d85765b45-x724r"
Aug 13 01:05:58.207637 kubelet[2599]: E0813 01:05:58.207277 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-bghtd"
Aug 13 01:05:58.207637 kubelet[2599]: E0813 01:05:58.207286 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-f7p9h"
Aug 13 01:05:58.207637 kubelet[2599]: E0813 01:05:58.207293 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-tp96b"
Aug 13 01:05:58.207637 kubelet[2599]: E0813 01:05:58.207301 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-214-143"
Aug 13 01:05:58.207637 kubelet[2599]: E0813 01:05:58.207310 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-qzrh6"
Aug 13 01:05:58.207637 kubelet[2599]: E0813 01:05:58.207317 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-214-143"
Aug 13 01:05:58.207637 kubelet[2599]: E0813 01:05:58.207328 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-214-143"
Aug 13 01:05:58.207637 kubelet[2599]: I0813 01:05:58.207337 2599 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node"
Aug 13 01:06:05.826097 kubelet[2599]: E0813 01:06:05.826019 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Aug 13 01:06:06.822199 kubelet[2599]: E0813 01:06:06.822023 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Aug 13 01:06:06.822199 kubelet[2599]: E0813 01:06:06.822023 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Aug 13 01:06:08.221567 kubelet[2599]: I0813 01:06:08.221533 2599 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
Aug 13 01:06:08.221567 kubelet[2599]: I0813 01:06:08.221567 2599 container_gc.go:88] "Attempting to delete unused containers"
Aug 13 01:06:08.223081 kubelet[2599]: I0813 01:06:08.223056 2599 image_gc_manager.go:431] "Attempting to delete unused images"
Aug 13 01:06:08.232106 kubelet[2599]: I0813 01:06:08.232084 2599 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
Aug 13 01:06:08.232188 kubelet[2599]: I0813 01:06:08.232173 2599 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-5d85765b45-x724r","kube-system/coredns-7c65d6cfc9-bghtd","kube-system/coredns-7c65d6cfc9-f7p9h","kube-system/cilium-tp96b","kube-system/kube-controller-manager-172-234-214-143","kube-system/kube-proxy-qzrh6","kube-system/kube-apiserver-172-234-214-143","kube-system/kube-scheduler-172-234-214-143"]
Aug 13 01:06:08.232222 kubelet[2599]: E0813 01:06:08.232203 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-5d85765b45-x724r"
Aug 13 01:06:08.232222 kubelet[2599]: E0813 01:06:08.232215 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-bghtd"
Aug 13 01:06:08.232222 kubelet[2599]: E0813 01:06:08.232222 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-f7p9h"
Aug 13 01:06:08.232289 kubelet[2599]: E0813 01:06:08.232231 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-tp96b"
Aug 13 01:06:08.232289 kubelet[2599]: E0813 01:06:08.232238 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-214-143"
Aug 13 01:06:08.232289 kubelet[2599]: E0813 01:06:08.232246 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-qzrh6"
Aug 13 01:06:08.232289 kubelet[2599]: E0813 01:06:08.232253 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-214-143"
Aug 13 01:06:08.232289 kubelet[2599]: E0813 01:06:08.232261 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-214-143"
Aug 13 01:06:08.232289 kubelet[2599]: I0813 01:06:08.232269 2599 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node"
Aug 13 01:06:08.822283 kubelet[2599]: E0813 01:06:08.822224 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Aug 13 01:06:13.822178 kubelet[2599]: E0813 01:06:13.821832 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Aug 13 01:06:18.247601 kubelet[2599]: I0813 01:06:18.247574 2599 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
Aug 13 01:06:18.247601 kubelet[2599]: I0813 01:06:18.247607 2599 container_gc.go:88] "Attempting to delete unused containers"
Aug 13 01:06:18.250248 kubelet[2599]: I0813 01:06:18.250215 2599 image_gc_manager.go:431] "Attempting to delete unused images"
Aug 13 01:06:18.259497 kubelet[2599]: I0813 01:06:18.259481 2599 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
Aug 13 01:06:18.259580 kubelet[2599]: I0813 01:06:18.259561 2599 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-5d85765b45-x724r","kube-system/coredns-7c65d6cfc9-bghtd","kube-system/coredns-7c65d6cfc9-f7p9h","kube-system/cilium-tp96b","kube-system/kube-proxy-qzrh6","kube-system/kube-controller-manager-172-234-214-143","kube-system/kube-apiserver-172-234-214-143","kube-system/kube-scheduler-172-234-214-143"]
Aug 13 01:06:18.259629 kubelet[2599]: E0813 01:06:18.259597 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-5d85765b45-x724r"
Aug 13 01:06:18.259629 kubelet[2599]: E0813 01:06:18.259608 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-bghtd"
Aug 13 01:06:18.259629 kubelet[2599]: E0813 01:06:18.259616 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-f7p9h"
Aug 13 01:06:18.259629 kubelet[2599]: E0813 01:06:18.259624 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-tp96b"
Aug 13 01:06:18.259629 kubelet[2599]: E0813 01:06:18.259632 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-qzrh6"
Aug 13 01:06:18.259629 kubelet[2599]: E0813 01:06:18.259642 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-214-143"
Aug 13 01:06:18.259783 kubelet[2599]: E0813 01:06:18.259649 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-214-143"
Aug 13 01:06:18.259783 kubelet[2599]: E0813 01:06:18.259656 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-214-143"
Aug 13 01:06:18.259783 kubelet[2599]: I0813 01:06:18.259665 2599 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node"
Aug 13 01:06:27.822531 kubelet[2599]: E0813 01:06:27.822289 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Aug 13 01:06:28.279160 kubelet[2599]: I0813 01:06:28.279121 2599 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
Aug 13 01:06:28.279160 kubelet[2599]: I0813 01:06:28.279166 2599 container_gc.go:88] "Attempting to delete unused containers"
Aug 13 01:06:28.281255 kubelet[2599]: I0813 01:06:28.281235 2599 image_gc_manager.go:431] "Attempting to delete unused images"
Aug 13 01:06:28.290091 kubelet[2599]: I0813 01:06:28.290074 2599 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
Aug 13 01:06:28.290218 kubelet[2599]: I0813 01:06:28.290200 2599 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-5d85765b45-x724r","kube-system/coredns-7c65d6cfc9-f7p9h","kube-system/coredns-7c65d6cfc9-bghtd","kube-system/cilium-tp96b","kube-system/kube-controller-manager-172-234-214-143","kube-system/kube-proxy-qzrh6","kube-system/kube-apiserver-172-234-214-143","kube-system/kube-scheduler-172-234-214-143"]
Aug 13 01:06:28.290261 kubelet[2599]: E0813 01:06:28.290234 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-5d85765b45-x724r"
Aug 13 01:06:28.290261 kubelet[2599]: E0813 01:06:28.290244 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-f7p9h"
Aug 13 01:06:28.290261 kubelet[2599]: E0813 01:06:28.290252 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-bghtd"
Aug 13 01:06:28.290261 kubelet[2599]: E0813 01:06:28.290261 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-tp96b"
Aug 13 01:06:28.290345 kubelet[2599]: E0813 01:06:28.290269 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-214-143"
Aug 13 01:06:28.290345 kubelet[2599]: E0813 01:06:28.290277 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-qzrh6"
Aug 13 01:06:28.290345 kubelet[2599]: E0813 01:06:28.290285 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-214-143"
Aug 13 01:06:28.290345 kubelet[2599]: E0813 01:06:28.290293 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-214-143"
Aug 13 01:06:28.290345 kubelet[2599]: I0813 01:06:28.290301 2599 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node"
Aug 13 01:06:32.821502 kubelet[2599]: E0813 01:06:32.821469 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Aug 13 01:06:36.557240 systemd[1]: Started sshd@7-172.234.214.143:22-139.178.89.65:45488.service - OpenSSH per-connection server daemon (139.178.89.65:45488).
Aug 13 01:06:36.894036 sshd[3988]: Accepted publickey for core from 139.178.89.65 port 45488 ssh2: RSA SHA256:oX6/jK8RmEU+rk8eVm22B42TdBvg0UX27UDC2BuKWWY
Aug 13 01:06:36.896455 sshd-session[3988]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:06:36.901124 systemd-logind[1460]: New session 8 of user core.
Aug 13 01:06:36.906212 systemd[1]: Started session-8.scope - Session 8 of User core.
Aug 13 01:06:37.216057 sshd[3990]: Connection closed by 139.178.89.65 port 45488
Aug 13 01:06:37.217229 sshd-session[3988]: pam_unix(sshd:session): session closed for user core
Aug 13 01:06:37.220677 systemd-logind[1460]: Session 8 logged out. Waiting for processes to exit.
Aug 13 01:06:37.221615 systemd[1]: sshd@7-172.234.214.143:22-139.178.89.65:45488.service: Deactivated successfully.
Aug 13 01:06:37.223722 systemd[1]: session-8.scope: Deactivated successfully.
Aug 13 01:06:37.224693 systemd-logind[1460]: Removed session 8.
Aug 13 01:06:38.306790 kubelet[2599]: I0813 01:06:38.306761 2599 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
Aug 13 01:06:38.306790 kubelet[2599]: I0813 01:06:38.306795 2599 container_gc.go:88] "Attempting to delete unused containers"
Aug 13 01:06:38.309233 kubelet[2599]: I0813 01:06:38.309132 2599 image_gc_manager.go:431] "Attempting to delete unused images"
Aug 13 01:06:38.312667 kubelet[2599]: I0813 01:06:38.312359 2599 image_gc_manager.go:487] "Removing image to free bytes" imageID="sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136" size=320368 runtimeHandler=""
Aug 13 01:06:38.313152 containerd[1485]: time="2025-08-13T01:06:38.313022517Z" level=info msg="RemoveImage \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Aug 13 01:06:38.315138 containerd[1485]: time="2025-08-13T01:06:38.314901309Z" level=info msg="ImageDelete event name:\"registry.k8s.io/pause:3.10\""
Aug 13 01:06:38.316019 containerd[1485]: time="2025-08-13T01:06:38.315624343Z" level=info msg="ImageDelete event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\""
Aug 13 01:06:38.317136 containerd[1485]: time="2025-08-13T01:06:38.317110380Z" level=info msg="ImageDelete event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Aug 13 01:06:38.320159 containerd[1485]: time="2025-08-13T01:06:38.320138601Z" level=info msg="RemoveImage \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" returns successfully"
Aug 13 01:06:38.320315 kubelet[2599]: I0813 01:06:38.320291 2599 image_gc_manager.go:487] "Removing image to free bytes" imageID="sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" size=56909194 runtimeHandler=""
Aug 13 01:06:38.320435 containerd[1485]: time="2025-08-13T01:06:38.320415639Z" level=info msg="RemoveImage \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\""
Aug 13 01:06:38.321000 containerd[1485]: time="2025-08-13T01:06:38.320977144Z" level=info msg="ImageDelete event name:\"registry.k8s.io/etcd:3.5.15-0\""
Aug 13 01:06:38.321474 containerd[1485]: time="2025-08-13T01:06:38.321456140Z" level=info msg="ImageDelete event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\""
Aug 13 01:06:38.322751 containerd[1485]: time="2025-08-13T01:06:38.321863025Z" level=info msg="ImageDelete event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\""
Aug 13 01:06:38.386873 containerd[1485]: time="2025-08-13T01:06:38.386841783Z" level=info msg="RemoveImage \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" returns successfully"
Aug 13 01:06:38.396081 kubelet[2599]: I0813 01:06:38.396034 2599 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
Aug 13 01:06:38.396220 kubelet[2599]: I0813 01:06:38.396204 2599 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-5d85765b45-x724r","kube-system/coredns-7c65d6cfc9-f7p9h","kube-system/coredns-7c65d6cfc9-bghtd","kube-system/cilium-tp96b","kube-system/kube-controller-manager-172-234-214-143","kube-system/kube-proxy-qzrh6","kube-system/kube-apiserver-172-234-214-143","kube-system/kube-scheduler-172-234-214-143"]
Aug 13 01:06:38.396262 kubelet[2599]: E0813 01:06:38.396238 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-5d85765b45-x724r"
Aug 13 01:06:38.396262 kubelet[2599]: E0813 01:06:38.396249 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-f7p9h"
Aug 13 01:06:38.396262 kubelet[2599]: E0813 01:06:38.396258 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-bghtd"
Aug 13 01:06:38.396321 kubelet[2599]: E0813 01:06:38.396266 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-tp96b"
Aug 13 01:06:38.396321 kubelet[2599]: E0813 01:06:38.396274 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-214-143"
Aug 13 01:06:38.396321 kubelet[2599]: E0813 01:06:38.396282 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-qzrh6"
Aug 13 01:06:38.396321 kubelet[2599]: E0813 01:06:38.396289 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-214-143"
Aug 13 01:06:38.396321 kubelet[2599]: E0813 01:06:38.396296 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-214-143"
Aug 13 01:06:38.396321 kubelet[2599]: I0813 01:06:38.396305 2599 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node"
Aug 13 01:06:42.288669 systemd[1]: Started sshd@8-172.234.214.143:22-139.178.89.65:46366.service - OpenSSH per-connection server daemon (139.178.89.65:46366).
Aug 13 01:06:42.618778 sshd[4006]: Accepted publickey for core from 139.178.89.65 port 46366 ssh2: RSA SHA256:oX6/jK8RmEU+rk8eVm22B42TdBvg0UX27UDC2BuKWWY
Aug 13 01:06:42.620469 sshd-session[4006]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:06:42.625104 systemd-logind[1460]: New session 9 of user core.
Aug 13 01:06:42.633173 systemd[1]: Started session-9.scope - Session 9 of User core.
Aug 13 01:06:42.923514 sshd[4008]: Connection closed by 139.178.89.65 port 46366
Aug 13 01:06:42.924342 sshd-session[4006]: pam_unix(sshd:session): session closed for user core
Aug 13 01:06:42.928160 systemd-logind[1460]: Session 9 logged out. Waiting for processes to exit.
Aug 13 01:06:42.929210 systemd[1]: sshd@8-172.234.214.143:22-139.178.89.65:46366.service: Deactivated successfully.
Aug 13 01:06:42.932183 systemd[1]: session-9.scope: Deactivated successfully.
Aug 13 01:06:42.933415 systemd-logind[1460]: Removed session 9.
Aug 13 01:06:47.987199 systemd[1]: Started sshd@9-172.234.214.143:22-139.178.89.65:46370.service - OpenSSH per-connection server daemon (139.178.89.65:46370).
Aug 13 01:06:48.328157 sshd[4023]: Accepted publickey for core from 139.178.89.65 port 46370 ssh2: RSA SHA256:oX6/jK8RmEU+rk8eVm22B42TdBvg0UX27UDC2BuKWWY
Aug 13 01:06:48.330378 sshd-session[4023]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:06:48.336600 systemd-logind[1460]: New session 10 of user core.
Aug 13 01:06:48.355188 systemd[1]: Started session-10.scope - Session 10 of User core.
Aug 13 01:06:48.412311 kubelet[2599]: I0813 01:06:48.411946 2599 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
Aug 13 01:06:48.412311 kubelet[2599]: I0813 01:06:48.411990 2599 container_gc.go:88] "Attempting to delete unused containers"
Aug 13 01:06:48.415433 kubelet[2599]: I0813 01:06:48.414674 2599 image_gc_manager.go:431] "Attempting to delete unused images"
Aug 13 01:06:48.431570 kubelet[2599]: I0813 01:06:48.431135 2599 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
Aug 13 01:06:48.431570 kubelet[2599]: I0813 01:06:48.431223 2599 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-5d85765b45-x724r","kube-system/coredns-7c65d6cfc9-bghtd","kube-system/coredns-7c65d6cfc9-f7p9h","kube-system/cilium-tp96b","kube-system/kube-controller-manager-172-234-214-143","kube-system/kube-proxy-qzrh6","kube-system/kube-apiserver-172-234-214-143","kube-system/kube-scheduler-172-234-214-143"]
Aug 13 01:06:48.431570 kubelet[2599]: E0813 01:06:48.431250 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-5d85765b45-x724r"
Aug 13 01:06:48.431570 kubelet[2599]: E0813 01:06:48.431260 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-bghtd"
Aug 13 01:06:48.431570 kubelet[2599]: E0813 01:06:48.431268 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-f7p9h"
Aug 13 01:06:48.431570 kubelet[2599]: E0813 01:06:48.431277 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-tp96b"
Aug 13 01:06:48.431570 kubelet[2599]: E0813 01:06:48.431285 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-214-143"
Aug 13 01:06:48.431570 kubelet[2599]: E0813 01:06:48.431511 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-qzrh6"
Aug 13 01:06:48.431570 kubelet[2599]: E0813 01:06:48.431520 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-214-143"
Aug 13 01:06:48.431570 kubelet[2599]: E0813 01:06:48.431527 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-214-143"
Aug 13 01:06:48.431570 kubelet[2599]: I0813 01:06:48.431536 2599 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node"
Aug 13 01:06:48.632248 sshd[4025]: Connection closed by 139.178.89.65 port 46370
Aug 13 01:06:48.633084 sshd-session[4023]: pam_unix(sshd:session): session closed for user core
Aug 13 01:06:48.636871 systemd-logind[1460]: Session 10 logged out. Waiting for processes to exit.
Aug 13 01:06:48.637729 systemd[1]: sshd@9-172.234.214.143:22-139.178.89.65:46370.service: Deactivated successfully.
Aug 13 01:06:48.639692 systemd[1]: session-10.scope: Deactivated successfully.
Aug 13 01:06:48.640946 systemd-logind[1460]: Removed session 10.
Aug 13 01:06:53.708281 systemd[1]: Started sshd@10-172.234.214.143:22-139.178.89.65:40966.service - OpenSSH per-connection server daemon (139.178.89.65:40966).
Aug 13 01:06:54.043774 sshd[4037]: Accepted publickey for core from 139.178.89.65 port 40966 ssh2: RSA SHA256:oX6/jK8RmEU+rk8eVm22B42TdBvg0UX27UDC2BuKWWY
Aug 13 01:06:54.045078 sshd-session[4037]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:06:54.049510 systemd-logind[1460]: New session 11 of user core.
Aug 13 01:06:54.053164 systemd[1]: Started session-11.scope - Session 11 of User core.
Aug 13 01:06:54.342273 sshd[4040]: Connection closed by 139.178.89.65 port 40966
Aug 13 01:06:54.342921 sshd-session[4037]: pam_unix(sshd:session): session closed for user core
Aug 13 01:06:54.346662 systemd-logind[1460]: Session 11 logged out. Waiting for processes to exit.
Aug 13 01:06:54.347604 systemd[1]: sshd@10-172.234.214.143:22-139.178.89.65:40966.service: Deactivated successfully.
Aug 13 01:06:54.349461 systemd[1]: session-11.scope: Deactivated successfully.
Aug 13 01:06:54.350260 systemd-logind[1460]: Removed session 11.
Aug 13 01:06:54.401258 systemd[1]: Started sshd@11-172.234.214.143:22-139.178.89.65:40978.service - OpenSSH per-connection server daemon (139.178.89.65:40978).
Aug 13 01:06:54.724969 sshd[4053]: Accepted publickey for core from 139.178.89.65 port 40978 ssh2: RSA SHA256:oX6/jK8RmEU+rk8eVm22B42TdBvg0UX27UDC2BuKWWY
Aug 13 01:06:54.726983 sshd-session[4053]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:06:54.735405 systemd-logind[1460]: New session 12 of user core.
Aug 13 01:06:54.743190 systemd[1]: Started session-12.scope - Session 12 of User core.
Aug 13 01:06:55.051348 sshd[4055]: Connection closed by 139.178.89.65 port 40978
Aug 13 01:06:55.052933 sshd-session[4053]: pam_unix(sshd:session): session closed for user core
Aug 13 01:06:55.056407 systemd[1]: sshd@11-172.234.214.143:22-139.178.89.65:40978.service: Deactivated successfully.
Aug 13 01:06:55.058847 systemd[1]: session-12.scope: Deactivated successfully.
Aug 13 01:06:55.059668 systemd-logind[1460]: Session 12 logged out. Waiting for processes to exit.
Aug 13 01:06:55.060529 systemd-logind[1460]: Removed session 12.
Aug 13 01:06:55.117247 systemd[1]: Started sshd@12-172.234.214.143:22-139.178.89.65:40990.service - OpenSSH per-connection server daemon (139.178.89.65:40990).
Aug 13 01:06:55.442214 sshd[4065]: Accepted publickey for core from 139.178.89.65 port 40990 ssh2: RSA SHA256:oX6/jK8RmEU+rk8eVm22B42TdBvg0UX27UDC2BuKWWY
Aug 13 01:06:55.443833 sshd-session[4065]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:06:55.448266 systemd-logind[1460]: New session 13 of user core.
Aug 13 01:06:55.452174 systemd[1]: Started session-13.scope - Session 13 of User core.
Aug 13 01:06:55.742992 sshd[4067]: Connection closed by 139.178.89.65 port 40990
Aug 13 01:06:55.743764 sshd-session[4065]: pam_unix(sshd:session): session closed for user core
Aug 13 01:06:55.747241 systemd[1]: sshd@12-172.234.214.143:22-139.178.89.65:40990.service: Deactivated successfully.
Aug 13 01:06:55.750018 systemd[1]: session-13.scope: Deactivated successfully.
Aug 13 01:06:55.752301 systemd-logind[1460]: Session 13 logged out. Waiting for processes to exit.
Aug 13 01:06:55.753735 systemd-logind[1460]: Removed session 13.
Aug 13 01:06:58.447262 kubelet[2599]: I0813 01:06:58.447226 2599 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
Aug 13 01:06:58.447641 kubelet[2599]: I0813 01:06:58.447273 2599 container_gc.go:88] "Attempting to delete unused containers"
Aug 13 01:06:58.449562 kubelet[2599]: I0813 01:06:58.449184 2599 image_gc_manager.go:431] "Attempting to delete unused images"
Aug 13 01:06:58.467187 kubelet[2599]: I0813 01:06:58.466922 2599 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
Aug 13 01:06:58.467187 kubelet[2599]: I0813 01:06:58.467074 2599 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-5d85765b45-x724r","kube-system/coredns-7c65d6cfc9-bghtd","kube-system/coredns-7c65d6cfc9-f7p9h","kube-system/cilium-tp96b","kube-system/kube-controller-manager-172-234-214-143","kube-system/kube-proxy-qzrh6","kube-system/kube-apiserver-172-234-214-143","kube-system/kube-scheduler-172-234-214-143"]
Aug 13 01:06:58.467187 kubelet[2599]: E0813 01:06:58.467101 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-5d85765b45-x724r"
Aug 13 01:06:58.467187 kubelet[2599]: E0813 01:06:58.467112 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-bghtd"
Aug 13 01:06:58.467187 kubelet[2599]: E0813 01:06:58.467119 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-f7p9h"
Aug 13 01:06:58.467187 kubelet[2599]: E0813 01:06:58.467127 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-tp96b"
Aug 13 01:06:58.467187 kubelet[2599]: E0813 01:06:58.467134 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-214-143"
Aug 13 01:06:58.467187 kubelet[2599]: E0813 01:06:58.467141 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-qzrh6"
Aug 13 01:06:58.467187 kubelet[2599]: E0813 01:06:58.467149 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-214-143"
Aug 13 01:06:58.467187 kubelet[2599]: E0813 01:06:58.467156 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-214-143"
Aug 13 01:06:58.467187 kubelet[2599]: I0813 01:06:58.467165 2599 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node"
Aug 13 01:06:59.823212 kubelet[2599]: E0813 01:06:59.822778 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Aug 13 01:07:00.807239 systemd[1]: Started sshd@13-172.234.214.143:22-139.178.89.65:47782.service - OpenSSH per-connection server daemon (139.178.89.65:47782).
Aug 13 01:07:01.132351 sshd[4079]: Accepted publickey for core from 139.178.89.65 port 47782 ssh2: RSA SHA256:oX6/jK8RmEU+rk8eVm22B42TdBvg0UX27UDC2BuKWWY
Aug 13 01:07:01.134110 sshd-session[4079]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:07:01.140982 systemd-logind[1460]: New session 14 of user core.
Aug 13 01:07:01.149236 systemd[1]: Started session-14.scope - Session 14 of User core.
Aug 13 01:07:01.433494 sshd[4081]: Connection closed by 139.178.89.65 port 47782
Aug 13 01:07:01.434460 sshd-session[4079]: pam_unix(sshd:session): session closed for user core
Aug 13 01:07:01.438917 systemd[1]: sshd@13-172.234.214.143:22-139.178.89.65:47782.service: Deactivated successfully.
Aug 13 01:07:01.441472 systemd[1]: session-14.scope: Deactivated successfully.
Aug 13 01:07:01.442398 systemd-logind[1460]: Session 14 logged out. Waiting for processes to exit.
Aug 13 01:07:01.444815 systemd-logind[1460]: Removed session 14.
Aug 13 01:07:01.506266 systemd[1]: Started sshd@14-172.234.214.143:22-139.178.89.65:47784.service - OpenSSH per-connection server daemon (139.178.89.65:47784).
Aug 13 01:07:01.836954 sshd[4093]: Accepted publickey for core from 139.178.89.65 port 47784 ssh2: RSA SHA256:oX6/jK8RmEU+rk8eVm22B42TdBvg0UX27UDC2BuKWWY
Aug 13 01:07:01.837585 sshd-session[4093]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:07:01.843215 systemd-logind[1460]: New session 15 of user core.
Aug 13 01:07:01.853197 systemd[1]: Started session-15.scope - Session 15 of User core.
Aug 13 01:07:02.182792 sshd[4095]: Connection closed by 139.178.89.65 port 47784
Aug 13 01:07:02.183615 sshd-session[4093]: pam_unix(sshd:session): session closed for user core
Aug 13 01:07:02.188224 systemd-logind[1460]: Session 15 logged out. Waiting for processes to exit.
Aug 13 01:07:02.189240 systemd[1]: sshd@14-172.234.214.143:22-139.178.89.65:47784.service: Deactivated successfully.
Aug 13 01:07:02.191687 systemd[1]: session-15.scope: Deactivated successfully.
Aug 13 01:07:02.192645 systemd-logind[1460]: Removed session 15.
Aug 13 01:07:02.249133 systemd[1]: Started sshd@15-172.234.214.143:22-139.178.89.65:47786.service - OpenSSH per-connection server daemon (139.178.89.65:47786).
Aug 13 01:07:02.595765 sshd[4105]: Accepted publickey for core from 139.178.89.65 port 47786 ssh2: RSA SHA256:oX6/jK8RmEU+rk8eVm22B42TdBvg0UX27UDC2BuKWWY
Aug 13 01:07:02.597525 sshd-session[4105]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:07:02.603125 systemd-logind[1460]: New session 16 of user core.
Aug 13 01:07:02.612166 systemd[1]: Started session-16.scope - Session 16 of User core.
Aug 13 01:07:03.815959 sshd[4107]: Connection closed by 139.178.89.65 port 47786
Aug 13 01:07:03.816824 sshd-session[4105]: pam_unix(sshd:session): session closed for user core
Aug 13 01:07:03.820543 systemd[1]: sshd@15-172.234.214.143:22-139.178.89.65:47786.service: Deactivated successfully.
Aug 13 01:07:03.824298 systemd[1]: session-16.scope: Deactivated successfully.
Aug 13 01:07:03.824528 systemd[1]: session-16.scope: Consumed 410ms CPU time, 65.3M memory peak.
Aug 13 01:07:03.826561 systemd-logind[1460]: Session 16 logged out. Waiting for processes to exit.
Aug 13 01:07:03.828207 systemd-logind[1460]: Removed session 16.
Aug 13 01:07:03.883357 systemd[1]: Started sshd@16-172.234.214.143:22-139.178.89.65:47788.service - OpenSSH per-connection server daemon (139.178.89.65:47788).
Aug 13 01:07:04.215584 sshd[4124]: Accepted publickey for core from 139.178.89.65 port 47788 ssh2: RSA SHA256:oX6/jK8RmEU+rk8eVm22B42TdBvg0UX27UDC2BuKWWY
Aug 13 01:07:04.217443 sshd-session[4124]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:07:04.222982 systemd-logind[1460]: New session 17 of user core.
Aug 13 01:07:04.228183 systemd[1]: Started session-17.scope - Session 17 of User core.
Aug 13 01:07:04.635878 sshd[4126]: Connection closed by 139.178.89.65 port 47788
Aug 13 01:07:04.636484 sshd-session[4124]: pam_unix(sshd:session): session closed for user core
Aug 13 01:07:04.640570 systemd-logind[1460]: Session 17 logged out. Waiting for processes to exit.
Aug 13 01:07:04.641402 systemd[1]: sshd@16-172.234.214.143:22-139.178.89.65:47788.service: Deactivated successfully.
Aug 13 01:07:04.643612 systemd[1]: session-17.scope: Deactivated successfully.
Aug 13 01:07:04.644442 systemd-logind[1460]: Removed session 17.
Aug 13 01:07:04.702257 systemd[1]: Started sshd@17-172.234.214.143:22-139.178.89.65:47800.service - OpenSSH per-connection server daemon (139.178.89.65:47800).
Aug 13 01:07:05.022342 sshd[4136]: Accepted publickey for core from 139.178.89.65 port 47800 ssh2: RSA SHA256:oX6/jK8RmEU+rk8eVm22B42TdBvg0UX27UDC2BuKWWY
Aug 13 01:07:05.024697 sshd-session[4136]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:07:05.031572 systemd-logind[1460]: New session 18 of user core.
Aug 13 01:07:05.038183 systemd[1]: Started session-18.scope - Session 18 of User core.
Aug 13 01:07:05.320850 sshd[4138]: Connection closed by 139.178.89.65 port 47800
Aug 13 01:07:05.322139 sshd-session[4136]: pam_unix(sshd:session): session closed for user core
Aug 13 01:07:05.327551 systemd[1]: sshd@17-172.234.214.143:22-139.178.89.65:47800.service: Deactivated successfully.
Aug 13 01:07:05.331195 systemd[1]: session-18.scope: Deactivated successfully.
Aug 13 01:07:05.332434 systemd-logind[1460]: Session 18 logged out. Waiting for processes to exit.
Aug 13 01:07:05.333597 systemd-logind[1460]: Removed session 18.
Aug 13 01:07:08.482504 kubelet[2599]: I0813 01:07:08.482421 2599 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
Aug 13 01:07:08.482504 kubelet[2599]: I0813 01:07:08.482454 2599 container_gc.go:88] "Attempting to delete unused containers"
Aug 13 01:07:08.487749 kubelet[2599]: I0813 01:07:08.486595 2599 image_gc_manager.go:431] "Attempting to delete unused images"
Aug 13 01:07:08.499000 kubelet[2599]: I0813 01:07:08.498975 2599 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
Aug 13 01:07:08.499113 kubelet[2599]: I0813 01:07:08.499097 2599 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-5d85765b45-x724r","kube-system/coredns-7c65d6cfc9-bghtd","kube-system/coredns-7c65d6cfc9-f7p9h","kube-system/cilium-tp96b","kube-system/kube-controller-manager-172-234-214-143","kube-system/kube-proxy-qzrh6","kube-system/kube-apiserver-172-234-214-143","kube-system/kube-scheduler-172-234-214-143"]
Aug 13 01:07:08.499163 kubelet[2599]: E0813 01:07:08.499127 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-5d85765b45-x724r"
Aug 13 01:07:08.499163 kubelet[2599]: E0813 01:07:08.499137 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-bghtd"
Aug 13 01:07:08.499163 kubelet[2599]: E0813 01:07:08.499146 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-f7p9h"
Aug 13 01:07:08.499163 kubelet[2599]: E0813 01:07:08.499153 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-tp96b"
Aug 13 01:07:08.499163 kubelet[2599]: E0813 01:07:08.499161 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-214-143"
Aug 13 01:07:08.499268 kubelet[2599]: E0813 01:07:08.499169 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-qzrh6"
Aug 13 01:07:08.499268 kubelet[2599]: E0813 01:07:08.499177 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-214-143"
Aug 13 01:07:08.499268 kubelet[2599]: E0813 01:07:08.499185 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-214-143"
Aug 13 01:07:08.499268 kubelet[2599]: I0813 01:07:08.499193 2599 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node"
Aug 13 01:07:10.399489 systemd[1]: Started sshd@18-172.234.214.143:22-139.178.89.65:56172.service - OpenSSH per-connection server daemon (139.178.89.65:56172).
Aug 13 01:07:10.732914 sshd[4152]: Accepted publickey for core from 139.178.89.65 port 56172 ssh2: RSA SHA256:oX6/jK8RmEU+rk8eVm22B42TdBvg0UX27UDC2BuKWWY
Aug 13 01:07:10.733520 sshd-session[4152]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:07:10.737563 systemd-logind[1460]: New session 19 of user core.
Aug 13 01:07:10.741154 systemd[1]: Started session-19.scope - Session 19 of User core.
Aug 13 01:07:11.031029 sshd[4154]: Connection closed by 139.178.89.65 port 56172
Aug 13 01:07:11.031846 sshd-session[4152]: pam_unix(sshd:session): session closed for user core
Aug 13 01:07:11.036485 systemd-logind[1460]: Session 19 logged out. Waiting for processes to exit.
Aug 13 01:07:11.037449 systemd[1]: sshd@18-172.234.214.143:22-139.178.89.65:56172.service: Deactivated successfully.
Aug 13 01:07:11.039781 systemd[1]: session-19.scope: Deactivated successfully.
Aug 13 01:07:11.040955 systemd-logind[1460]: Removed session 19.
Aug 13 01:07:16.098298 systemd[1]: Started sshd@19-172.234.214.143:22-139.178.89.65:56182.service - OpenSSH per-connection server daemon (139.178.89.65:56182).
Aug 13 01:07:16.426077 sshd[4169]: Accepted publickey for core from 139.178.89.65 port 56182 ssh2: RSA SHA256:oX6/jK8RmEU+rk8eVm22B42TdBvg0UX27UDC2BuKWWY
Aug 13 01:07:16.427587 sshd-session[4169]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:07:16.432332 systemd-logind[1460]: New session 20 of user core.
Aug 13 01:07:16.436743 systemd[1]: Started session-20.scope - Session 20 of User core.
Aug 13 01:07:16.725901 sshd[4171]: Connection closed by 139.178.89.65 port 56182
Aug 13 01:07:16.726392 sshd-session[4169]: pam_unix(sshd:session): session closed for user core
Aug 13 01:07:16.732274 systemd[1]: sshd@19-172.234.214.143:22-139.178.89.65:56182.service: Deactivated successfully.
Aug 13 01:07:16.736553 systemd[1]: session-20.scope: Deactivated successfully.
Aug 13 01:07:16.737989 systemd-logind[1460]: Session 20 logged out. Waiting for processes to exit.
Aug 13 01:07:16.739562 systemd-logind[1460]: Removed session 20.
Aug 13 01:07:16.821754 kubelet[2599]: E0813 01:07:16.821721 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Aug 13 01:07:17.822255 kubelet[2599]: E0813 01:07:17.822101 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Aug 13 01:07:18.520458 kubelet[2599]: I0813 01:07:18.520421 2599 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
Aug 13 01:07:18.520671 kubelet[2599]: I0813 01:07:18.520650 2599 container_gc.go:88] "Attempting to delete unused containers"
Aug 13 01:07:18.522516 kubelet[2599]: I0813 01:07:18.522495 2599 image_gc_manager.go:431] "Attempting to delete unused images"
Aug 13 01:07:18.531459 kubelet[2599]: I0813 01:07:18.531439 2599 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
Aug 13 01:07:18.531568 kubelet[2599]: I0813 01:07:18.531549 2599 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-5d85765b45-x724r","kube-system/coredns-7c65d6cfc9-bghtd","kube-system/coredns-7c65d6cfc9-f7p9h","kube-system/cilium-tp96b","kube-system/kube-controller-manager-172-234-214-143","kube-system/kube-proxy-qzrh6","kube-system/kube-apiserver-172-234-214-143","kube-system/kube-scheduler-172-234-214-143"]
Aug 13 01:07:18.531603 kubelet[2599]: E0813 01:07:18.531578 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-5d85765b45-x724r"
Aug 13 01:07:18.531603 kubelet[2599]: E0813 01:07:18.531589 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-bghtd"
Aug 13 01:07:18.531603 kubelet[2599]: E0813 01:07:18.531597 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-f7p9h"
Aug 13 01:07:18.531662 kubelet[2599]: E0813 01:07:18.531605 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-tp96b"
Aug 13 01:07:18.531662 kubelet[2599]: E0813 01:07:18.531613 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-214-143"
Aug 13 01:07:18.531662 kubelet[2599]: E0813 01:07:18.531620 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-qzrh6"
Aug 13 01:07:18.531662 kubelet[2599]: E0813 01:07:18.531627 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-214-143"
Aug 13 01:07:18.531662 kubelet[2599]: E0813 01:07:18.531635 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-214-143"
Aug 13 01:07:18.531662 kubelet[2599]: I0813 01:07:18.531644 2599 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node"
Aug 13 01:07:21.793255 systemd[1]: Started sshd@20-172.234.214.143:22-139.178.89.65:54868.service - OpenSSH per-connection server daemon (139.178.89.65:54868).
Aug 13 01:07:22.128770 sshd[4183]: Accepted publickey for core from 139.178.89.65 port 54868 ssh2: RSA SHA256:oX6/jK8RmEU+rk8eVm22B42TdBvg0UX27UDC2BuKWWY
Aug 13 01:07:22.130117 sshd-session[4183]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:07:22.134091 systemd-logind[1460]: New session 21 of user core.
Aug 13 01:07:22.143164 systemd[1]: Started session-21.scope - Session 21 of User core.
Aug 13 01:07:22.427697 sshd[4185]: Connection closed by 139.178.89.65 port 54868
Aug 13 01:07:22.428737 sshd-session[4183]: pam_unix(sshd:session): session closed for user core
Aug 13 01:07:22.432303 systemd[1]: sshd@20-172.234.214.143:22-139.178.89.65:54868.service: Deactivated successfully.
Aug 13 01:07:22.435031 systemd[1]: session-21.scope: Deactivated successfully.
Aug 13 01:07:22.436106 systemd-logind[1460]: Session 21 logged out. Waiting for processes to exit.
Aug 13 01:07:22.436929 systemd-logind[1460]: Removed session 21.
Aug 13 01:07:24.821568 kubelet[2599]: E0813 01:07:24.821532 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Aug 13 01:07:27.495314 systemd[1]: Started sshd@21-172.234.214.143:22-139.178.89.65:54872.service - OpenSSH per-connection server daemon (139.178.89.65:54872).
Aug 13 01:07:27.823109 kubelet[2599]: E0813 01:07:27.821984 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Aug 13 01:07:27.832194 sshd[4205]: Accepted publickey for core from 139.178.89.65 port 54872 ssh2: RSA SHA256:oX6/jK8RmEU+rk8eVm22B42TdBvg0UX27UDC2BuKWWY
Aug 13 01:07:27.834189 sshd-session[4205]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:07:27.838722 systemd-logind[1460]: New session 22 of user core.
Aug 13 01:07:27.846223 systemd[1]: Started session-22.scope - Session 22 of User core.
Aug 13 01:07:28.134738 sshd[4207]: Connection closed by 139.178.89.65 port 54872
Aug 13 01:07:28.135608 sshd-session[4205]: pam_unix(sshd:session): session closed for user core
Aug 13 01:07:28.139437 systemd-logind[1460]: Session 22 logged out. Waiting for processes to exit.
Aug 13 01:07:28.140284 systemd[1]: sshd@21-172.234.214.143:22-139.178.89.65:54872.service: Deactivated successfully.
Aug 13 01:07:28.142555 systemd[1]: session-22.scope: Deactivated successfully.
Aug 13 01:07:28.143686 systemd-logind[1460]: Removed session 22.
Aug 13 01:07:28.557455 kubelet[2599]: I0813 01:07:28.557423 2599 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
Aug 13 01:07:28.557590 kubelet[2599]: I0813 01:07:28.557468 2599 container_gc.go:88] "Attempting to delete unused containers"
Aug 13 01:07:28.559233 kubelet[2599]: I0813 01:07:28.559210 2599 image_gc_manager.go:431] "Attempting to delete unused images"
Aug 13 01:07:28.569833 kubelet[2599]: I0813 01:07:28.569813 2599 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
Aug 13 01:07:28.569961 kubelet[2599]: I0813 01:07:28.569941 2599 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-5d85765b45-x724r","kube-system/coredns-7c65d6cfc9-bghtd","kube-system/coredns-7c65d6cfc9-f7p9h","kube-system/cilium-tp96b","kube-system/kube-controller-manager-172-234-214-143","kube-system/kube-proxy-qzrh6","kube-system/kube-apiserver-172-234-214-143","kube-system/kube-scheduler-172-234-214-143"]
Aug 13 01:07:28.570015 kubelet[2599]: E0813 01:07:28.569977 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-5d85765b45-x724r"
Aug 13 01:07:28.570015 kubelet[2599]: E0813 01:07:28.569991 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-bghtd"
Aug 13 01:07:28.570015 kubelet[2599]: E0813 01:07:28.570000 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-f7p9h"
Aug 13 01:07:28.570015 kubelet[2599]: E0813 01:07:28.570010 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-tp96b"
Aug 13 01:07:28.570131 kubelet[2599]: E0813 01:07:28.570020 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-214-143"
Aug 13 01:07:28.570131 kubelet[2599]: E0813 01:07:28.570029 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-qzrh6"
Aug 13 01:07:28.570131 kubelet[2599]: E0813 01:07:28.570038 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-214-143"
Aug 13 01:07:28.570131 kubelet[2599]: E0813 01:07:28.570063 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-214-143"
Aug 13 01:07:28.570131 kubelet[2599]: I0813 01:07:28.570073 2599 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node"
Aug 13 01:07:33.200232 systemd[1]: Started sshd@22-172.234.214.143:22-139.178.89.65:43414.service - OpenSSH per-connection server daemon (139.178.89.65:43414).
Aug 13 01:07:33.531534 sshd[4219]: Accepted publickey for core from 139.178.89.65 port 43414 ssh2: RSA SHA256:oX6/jK8RmEU+rk8eVm22B42TdBvg0UX27UDC2BuKWWY
Aug 13 01:07:33.533146 sshd-session[4219]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:07:33.537261 systemd-logind[1460]: New session 23 of user core.
Aug 13 01:07:33.550170 systemd[1]: Started session-23.scope - Session 23 of User core.
Aug 13 01:07:33.822323 kubelet[2599]: E0813 01:07:33.821654 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:07:33.836683 sshd[4221]: Connection closed by 139.178.89.65 port 43414 Aug 13 01:07:33.837439 sshd-session[4219]: pam_unix(sshd:session): session closed for user core Aug 13 01:07:33.843773 systemd-logind[1460]: Session 23 logged out. Waiting for processes to exit. Aug 13 01:07:33.844803 systemd[1]: sshd@22-172.234.214.143:22-139.178.89.65:43414.service: Deactivated successfully. Aug 13 01:07:33.851195 systemd[1]: session-23.scope: Deactivated successfully. Aug 13 01:07:33.853638 systemd-logind[1460]: Removed session 23. Aug 13 01:07:38.586788 kubelet[2599]: I0813 01:07:38.586745 2599 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:07:38.586788 kubelet[2599]: I0813 01:07:38.586786 2599 container_gc.go:88] "Attempting to delete unused containers" Aug 13 01:07:38.588875 kubelet[2599]: I0813 01:07:38.588852 2599 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 01:07:38.601545 kubelet[2599]: I0813 01:07:38.601528 2599 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:07:38.601676 kubelet[2599]: I0813 01:07:38.601660 2599 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-5d85765b45-x724r","kube-system/coredns-7c65d6cfc9-bghtd","kube-system/coredns-7c65d6cfc9-f7p9h","kube-system/cilium-tp96b","kube-system/kube-controller-manager-172-234-214-143","kube-system/kube-proxy-qzrh6","kube-system/kube-apiserver-172-234-214-143","kube-system/kube-scheduler-172-234-214-143"] Aug 13 01:07:38.601725 kubelet[2599]: E0813 01:07:38.601692 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-5d85765b45-x724r" Aug 13 01:07:38.601725 kubelet[2599]: E0813 01:07:38.601704 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-bghtd" Aug 13 01:07:38.601725 kubelet[2599]: E0813 01:07:38.601711 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-f7p9h" Aug 13 01:07:38.601725 kubelet[2599]: E0813 01:07:38.601719 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-tp96b" Aug 13 01:07:38.601725 kubelet[2599]: E0813 01:07:38.601727 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-214-143" Aug 13 01:07:38.601829 kubelet[2599]: E0813 01:07:38.601735 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-qzrh6" Aug 13 01:07:38.601829 kubelet[2599]: E0813 01:07:38.601743 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-214-143" Aug 13 01:07:38.601829 kubelet[2599]: E0813 01:07:38.601750 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-214-143" Aug 13 01:07:38.601829 kubelet[2599]: I0813 01:07:38.601759 2599 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" Aug 13 01:07:38.909458 systemd[1]: Started 
sshd@23-172.234.214.143:22-139.178.89.65:43416.service - OpenSSH per-connection server daemon (139.178.89.65:43416). Aug 13 01:07:39.245336 sshd[4235]: Accepted publickey for core from 139.178.89.65 port 43416 ssh2: RSA SHA256:oX6/jK8RmEU+rk8eVm22B42TdBvg0UX27UDC2BuKWWY Aug 13 01:07:39.248958 sshd-session[4235]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:07:39.257085 systemd-logind[1460]: New session 24 of user core. Aug 13 01:07:39.264302 systemd[1]: Started session-24.scope - Session 24 of User core. Aug 13 01:07:39.557080 sshd[4237]: Connection closed by 139.178.89.65 port 43416 Aug 13 01:07:39.558988 sshd-session[4235]: pam_unix(sshd:session): session closed for user core Aug 13 01:07:39.564700 systemd-logind[1460]: Session 24 logged out. Waiting for processes to exit. Aug 13 01:07:39.565928 systemd[1]: sshd@23-172.234.214.143:22-139.178.89.65:43416.service: Deactivated successfully. Aug 13 01:07:39.568871 systemd[1]: session-24.scope: Deactivated successfully. Aug 13 01:07:39.570427 systemd-logind[1460]: Removed session 24. Aug 13 01:07:44.625792 systemd[1]: Started sshd@24-172.234.214.143:22-139.178.89.65:48558.service - OpenSSH per-connection server daemon (139.178.89.65:48558). Aug 13 01:07:44.945502 sshd[4249]: Accepted publickey for core from 139.178.89.65 port 48558 ssh2: RSA SHA256:oX6/jK8RmEU+rk8eVm22B42TdBvg0UX27UDC2BuKWWY Aug 13 01:07:44.948350 sshd-session[4249]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:07:44.955966 systemd-logind[1460]: New session 25 of user core. Aug 13 01:07:44.962160 systemd[1]: Started session-25.scope - Session 25 of User core. Aug 13 01:07:45.239403 sshd[4251]: Connection closed by 139.178.89.65 port 48558 Aug 13 01:07:45.241245 sshd-session[4249]: pam_unix(sshd:session): session closed for user core Aug 13 01:07:45.244984 systemd-logind[1460]: Session 25 logged out. Waiting for processes to exit. Aug 13 01:07:45.245845 systemd[1]: sshd@24-172.234.214.143:22-139.178.89.65:48558.service: Deactivated successfully. Aug 13 01:07:45.247805 systemd[1]: session-25.scope: Deactivated successfully. Aug 13 01:07:45.248906 systemd-logind[1460]: Removed session 25. 
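The ten-second cadence of "attempting to reclaim" / "unable to evict any pods from the node" entries above is kubelet's eviction manager reacting to ephemeral-storage pressure: each pass runs container and image GC, ranks every pod on the node for eviction, then skips all of them because every pod here is critical. A minimal sketch of that skip-and-give-up shape, with hypothetical types and a shortened pod list rather than kubelet's actual eviction_manager.go code:

    // evict_sketch.go: illustrative sketch of the skip-critical-pods
    // behavior logged above; not kubelet's real implementation.
    package main

    import "fmt"

    type Pod struct {
        Name     string
        Critical bool // e.g. static pods or system priority classes
    }

    // reclaim walks the ranked candidates and returns the first pod that
    // may be evicted, or nil when every candidate is critical.
    func reclaim(ranked []Pod) *Pod {
        for i := range ranked {
            if ranked[i].Critical {
                fmt.Printf("cannot evict a critical pod: %s\n", ranked[i].Name)
                continue
            }
            return &ranked[i]
        }
        return nil
    }

    func main() {
        pods := []Pod{
            {"kube-system/cilium-operator-5d85765b45-x724r", true},
            {"kube-system/kube-apiserver-172-234-214-143", true},
        }
        if victim := reclaim(pods); victim == nil {
            fmt.Println("unable to evict any pods from the node")
        }
    }

On a single-node control plane like this one, the ranked list contains only kube-system pods, so every cycle ends in the nil case and the disk-pressure condition never clears.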
Aug 13 01:07:48.619265 kubelet[2599]: I0813 01:07:48.619216 2599 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:07:48.619265 kubelet[2599]: I0813 01:07:48.619271 2599 container_gc.go:88] "Attempting to delete unused containers" Aug 13 01:07:48.627399 kubelet[2599]: I0813 01:07:48.627375 2599 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 01:07:48.641227 kubelet[2599]: I0813 01:07:48.641180 2599 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:07:48.641460 kubelet[2599]: I0813 01:07:48.641371 2599 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-5d85765b45-x724r","kube-system/coredns-7c65d6cfc9-bghtd","kube-system/coredns-7c65d6cfc9-f7p9h","kube-system/cilium-tp96b","kube-system/kube-controller-manager-172-234-214-143","kube-system/kube-proxy-qzrh6","kube-system/kube-apiserver-172-234-214-143","kube-system/kube-scheduler-172-234-214-143"] Aug 13 01:07:48.641460 kubelet[2599]: E0813 01:07:48.641424 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-5d85765b45-x724r" Aug 13 01:07:48.641460 kubelet[2599]: E0813 01:07:48.641438 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-bghtd" Aug 13 01:07:48.641460 kubelet[2599]: E0813 01:07:48.641446 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-f7p9h" Aug 13 01:07:48.641460 kubelet[2599]: E0813 01:07:48.641456 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-tp96b" Aug 13 01:07:48.641460 kubelet[2599]: E0813 01:07:48.641466 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-214-143" Aug 13 01:07:48.641627 kubelet[2599]: E0813 01:07:48.641476 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-qzrh6" Aug 13 01:07:48.641627 kubelet[2599]: E0813 01:07:48.641486 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-214-143" Aug 13 01:07:48.641627 kubelet[2599]: E0813 01:07:48.641495 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-214-143" Aug 13 01:07:48.641627 kubelet[2599]: I0813 01:07:48.641504 2599 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" Aug 13 01:07:50.312316 systemd[1]: Started sshd@25-172.234.214.143:22-139.178.89.65:34772.service - OpenSSH per-connection server daemon (139.178.89.65:34772). Aug 13 01:07:50.646402 sshd[4265]: Accepted publickey for core from 139.178.89.65 port 34772 ssh2: RSA SHA256:oX6/jK8RmEU+rk8eVm22B42TdBvg0UX27UDC2BuKWWY Aug 13 01:07:50.648255 sshd-session[4265]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:07:50.653778 systemd-logind[1460]: New session 26 of user core. Aug 13 01:07:50.661202 systemd[1]: Started session-26.scope - Session 26 of User core. 
Aug 13 01:07:50.822276 kubelet[2599]: E0813 01:07:50.822236 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:07:50.956834 sshd[4267]: Connection closed by 139.178.89.65 port 34772 Aug 13 01:07:50.958252 sshd-session[4265]: pam_unix(sshd:session): session closed for user core Aug 13 01:07:50.962597 systemd[1]: sshd@25-172.234.214.143:22-139.178.89.65:34772.service: Deactivated successfully. Aug 13 01:07:50.965178 systemd[1]: session-26.scope: Deactivated successfully. Aug 13 01:07:50.966009 systemd-logind[1460]: Session 26 logged out. Waiting for processes to exit. Aug 13 01:07:50.967291 systemd-logind[1460]: Removed session 26. Aug 13 01:07:54.821579 kubelet[2599]: E0813 01:07:54.821516 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:07:56.024124 systemd[1]: Started sshd@26-172.234.214.143:22-139.178.89.65:34778.service - OpenSSH per-connection server daemon (139.178.89.65:34778). Aug 13 01:07:56.362872 sshd[4279]: Accepted publickey for core from 139.178.89.65 port 34778 ssh2: RSA SHA256:oX6/jK8RmEU+rk8eVm22B42TdBvg0UX27UDC2BuKWWY Aug 13 01:07:56.364918 sshd-session[4279]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:07:56.370461 systemd-logind[1460]: New session 27 of user core. Aug 13 01:07:56.375182 systemd[1]: Started session-27.scope - Session 27 of User core. Aug 13 01:07:56.668994 sshd[4281]: Connection closed by 139.178.89.65 port 34778 Aug 13 01:07:56.669956 sshd-session[4279]: pam_unix(sshd:session): session closed for user core Aug 13 01:07:56.674317 systemd-logind[1460]: Session 27 logged out. Waiting for processes to exit. Aug 13 01:07:56.675054 systemd[1]: sshd@26-172.234.214.143:22-139.178.89.65:34778.service: Deactivated successfully. Aug 13 01:07:56.677471 systemd[1]: session-27.scope: Deactivated successfully. Aug 13 01:07:56.679145 systemd-logind[1460]: Removed session 27. 
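The recurring dns.go:153 "Nameserver limits exceeded" errors mean the node's /etc/resolv.conf lists more nameservers than the glibc resolver honors (MAXNS, i.e. three), so kubelet drops the extras when building pod DNS configuration; the "applied nameserver line" quoted in the error is the already-truncated result. A standalone check along the same lines, as an illustration rather than kubelet's implementation:

    // resolvcheck.go: warns when /etc/resolv.conf lists more nameservers
    // than the glibc limit of 3, mirroring the kubelet warning above.
    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    const maxNameservers = 3 // glibc MAXNS

    func main() {
        f, err := os.Open("/etc/resolv.conf")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        defer f.Close()

        var servers []string
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            fields := strings.Fields(sc.Text())
            if len(fields) >= 2 && fields[0] == "nameserver" {
                servers = append(servers, fields[1])
            }
        }
        if len(servers) > maxNameservers {
            fmt.Printf("limit exceeded: %d nameservers listed, only %v will be applied\n",
                len(servers), servers[:maxNameservers])
        }
    }

The fix is on the host side (trim the resolv.conf kubelet points at, or point kubelet's resolvConf setting at a shorter file); the warning itself is harmless beyond the omitted servers.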
Aug 13 01:07:58.657293 kubelet[2599]: I0813 01:07:58.657261 2599 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:07:58.657293 kubelet[2599]: I0813 01:07:58.657299 2599 container_gc.go:88] "Attempting to delete unused containers" Aug 13 01:07:58.658457 kubelet[2599]: I0813 01:07:58.658433 2599 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 01:07:58.666960 kubelet[2599]: I0813 01:07:58.666944 2599 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:07:58.667101 kubelet[2599]: I0813 01:07:58.667083 2599 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-5d85765b45-x724r","kube-system/coredns-7c65d6cfc9-bghtd","kube-system/coredns-7c65d6cfc9-f7p9h","kube-system/cilium-tp96b","kube-system/kube-controller-manager-172-234-214-143","kube-system/kube-proxy-qzrh6","kube-system/kube-apiserver-172-234-214-143","kube-system/kube-scheduler-172-234-214-143"] Aug 13 01:07:58.667149 kubelet[2599]: E0813 01:07:58.667115 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-5d85765b45-x724r" Aug 13 01:07:58.667149 kubelet[2599]: E0813 01:07:58.667127 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-bghtd" Aug 13 01:07:58.667149 kubelet[2599]: E0813 01:07:58.667134 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-f7p9h" Aug 13 01:07:58.667149 kubelet[2599]: E0813 01:07:58.667143 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-tp96b" Aug 13 01:07:58.667149 kubelet[2599]: E0813 01:07:58.667150 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-214-143" Aug 13 01:07:58.667273 kubelet[2599]: E0813 01:07:58.667158 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-qzrh6" Aug 13 01:07:58.667273 kubelet[2599]: E0813 01:07:58.667166 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-214-143" Aug 13 01:07:58.667273 kubelet[2599]: E0813 01:07:58.667174 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-214-143" Aug 13 01:07:58.667273 kubelet[2599]: I0813 01:07:58.667182 2599 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" Aug 13 01:08:01.735255 systemd[1]: Started sshd@27-172.234.214.143:22-139.178.89.65:38148.service - OpenSSH per-connection server daemon (139.178.89.65:38148). Aug 13 01:08:02.066984 sshd[4294]: Accepted publickey for core from 139.178.89.65 port 38148 ssh2: RSA SHA256:oX6/jK8RmEU+rk8eVm22B42TdBvg0UX27UDC2BuKWWY Aug 13 01:08:02.068255 sshd-session[4294]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:08:02.072735 systemd-logind[1460]: New session 28 of user core. Aug 13 01:08:02.077154 systemd[1]: Started session-28.scope - Session 28 of User core. Aug 13 01:08:02.367029 sshd[4297]: Connection closed by 139.178.89.65 port 38148 Aug 13 01:08:02.367826 sshd-session[4294]: pam_unix(sshd:session): session closed for user core Aug 13 01:08:02.371774 systemd-logind[1460]: Session 28 logged out. Waiting for processes to exit. 
Aug 13 01:08:02.372770 systemd[1]: sshd@27-172.234.214.143:22-139.178.89.65:38148.service: Deactivated successfully. Aug 13 01:08:02.374668 systemd[1]: session-28.scope: Deactivated successfully. Aug 13 01:08:02.376163 systemd-logind[1460]: Removed session 28. Aug 13 01:08:04.822073 kubelet[2599]: E0813 01:08:04.822014 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:08:07.433390 systemd[1]: Started sshd@28-172.234.214.143:22-139.178.89.65:38162.service - OpenSSH per-connection server daemon (139.178.89.65:38162). Aug 13 01:08:07.770076 sshd[4309]: Accepted publickey for core from 139.178.89.65 port 38162 ssh2: RSA SHA256:oX6/jK8RmEU+rk8eVm22B42TdBvg0UX27UDC2BuKWWY Aug 13 01:08:07.771357 sshd-session[4309]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:08:07.775437 systemd-logind[1460]: New session 29 of user core. Aug 13 01:08:07.779160 systemd[1]: Started session-29.scope - Session 29 of User core. Aug 13 01:08:08.076430 sshd[4311]: Connection closed by 139.178.89.65 port 38162 Aug 13 01:08:08.077358 sshd-session[4309]: pam_unix(sshd:session): session closed for user core Aug 13 01:08:08.081330 systemd-logind[1460]: Session 29 logged out. Waiting for processes to exit. Aug 13 01:08:08.082164 systemd[1]: sshd@28-172.234.214.143:22-139.178.89.65:38162.service: Deactivated successfully. Aug 13 01:08:08.084315 systemd[1]: session-29.scope: Deactivated successfully. Aug 13 01:08:08.085465 systemd-logind[1460]: Removed session 29. Aug 13 01:08:08.684308 kubelet[2599]: I0813 01:08:08.684271 2599 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:08:08.684308 kubelet[2599]: I0813 01:08:08.684307 2599 container_gc.go:88] "Attempting to delete unused containers" Aug 13 01:08:08.685567 kubelet[2599]: I0813 01:08:08.685542 2599 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 01:08:08.695125 kubelet[2599]: I0813 01:08:08.695109 2599 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:08:08.695238 kubelet[2599]: I0813 01:08:08.695208 2599 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-5d85765b45-x724r","kube-system/coredns-7c65d6cfc9-bghtd","kube-system/coredns-7c65d6cfc9-f7p9h","kube-system/cilium-tp96b","kube-system/kube-controller-manager-172-234-214-143","kube-system/kube-proxy-qzrh6","kube-system/kube-apiserver-172-234-214-143","kube-system/kube-scheduler-172-234-214-143"] Aug 13 01:08:08.695272 kubelet[2599]: E0813 01:08:08.695238 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-5d85765b45-x724r" Aug 13 01:08:08.695272 kubelet[2599]: E0813 01:08:08.695249 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-bghtd" Aug 13 01:08:08.695272 kubelet[2599]: E0813 01:08:08.695257 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-f7p9h" Aug 13 01:08:08.695272 kubelet[2599]: E0813 01:08:08.695265 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-tp96b" Aug 13 01:08:08.695272 kubelet[2599]: E0813 01:08:08.695274 2599 eviction_manager.go:598] "Eviction manager: cannot evict a 
critical pod" pod="kube-system/kube-controller-manager-172-234-214-143" Aug 13 01:08:08.695407 kubelet[2599]: E0813 01:08:08.695282 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-qzrh6" Aug 13 01:08:08.695407 kubelet[2599]: E0813 01:08:08.695290 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-214-143" Aug 13 01:08:08.695407 kubelet[2599]: E0813 01:08:08.695297 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-214-143" Aug 13 01:08:08.695407 kubelet[2599]: I0813 01:08:08.695307 2599 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" Aug 13 01:08:13.145247 systemd[1]: Started sshd@29-172.234.214.143:22-139.178.89.65:40420.service - OpenSSH per-connection server daemon (139.178.89.65:40420). Aug 13 01:08:13.469626 sshd[4323]: Accepted publickey for core from 139.178.89.65 port 40420 ssh2: RSA SHA256:oX6/jK8RmEU+rk8eVm22B42TdBvg0UX27UDC2BuKWWY Aug 13 01:08:13.471577 sshd-session[4323]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:08:13.477156 systemd-logind[1460]: New session 30 of user core. Aug 13 01:08:13.479217 systemd[1]: Started session-30.scope - Session 30 of User core. Aug 13 01:08:13.763510 sshd[4327]: Connection closed by 139.178.89.65 port 40420 Aug 13 01:08:13.764272 sshd-session[4323]: pam_unix(sshd:session): session closed for user core Aug 13 01:08:13.767657 systemd[1]: sshd@29-172.234.214.143:22-139.178.89.65:40420.service: Deactivated successfully. Aug 13 01:08:13.769675 systemd[1]: session-30.scope: Deactivated successfully. Aug 13 01:08:13.771349 systemd-logind[1460]: Session 30 logged out. Waiting for processes to exit. Aug 13 01:08:13.772454 systemd-logind[1460]: Removed session 30. 
Aug 13 01:08:18.711112 kubelet[2599]: I0813 01:08:18.711064 2599 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:08:18.711112 kubelet[2599]: I0813 01:08:18.711102 2599 container_gc.go:88] "Attempting to delete unused containers" Aug 13 01:08:18.712600 kubelet[2599]: I0813 01:08:18.712578 2599 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 01:08:18.722160 kubelet[2599]: I0813 01:08:18.722138 2599 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:08:18.722266 kubelet[2599]: I0813 01:08:18.722244 2599 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-5d85765b45-x724r","kube-system/coredns-7c65d6cfc9-bghtd","kube-system/coredns-7c65d6cfc9-f7p9h","kube-system/cilium-tp96b","kube-system/kube-controller-manager-172-234-214-143","kube-system/kube-proxy-qzrh6","kube-system/kube-apiserver-172-234-214-143","kube-system/kube-scheduler-172-234-214-143"] Aug 13 01:08:18.722294 kubelet[2599]: E0813 01:08:18.722276 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-5d85765b45-x724r" Aug 13 01:08:18.722294 kubelet[2599]: E0813 01:08:18.722286 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-bghtd" Aug 13 01:08:18.722347 kubelet[2599]: E0813 01:08:18.722295 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-f7p9h" Aug 13 01:08:18.722347 kubelet[2599]: E0813 01:08:18.722303 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-tp96b" Aug 13 01:08:18.722347 kubelet[2599]: E0813 01:08:18.722311 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-214-143" Aug 13 01:08:18.722347 kubelet[2599]: E0813 01:08:18.722318 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-qzrh6" Aug 13 01:08:18.722347 kubelet[2599]: E0813 01:08:18.722326 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-214-143" Aug 13 01:08:18.722347 kubelet[2599]: E0813 01:08:18.722334 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-214-143" Aug 13 01:08:18.722347 kubelet[2599]: I0813 01:08:18.722346 2599 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" Aug 13 01:08:18.831277 systemd[1]: Started sshd@30-172.234.214.143:22-139.178.89.65:40426.service - OpenSSH per-connection server daemon (139.178.89.65:40426). Aug 13 01:08:19.152746 sshd[4341]: Accepted publickey for core from 139.178.89.65 port 40426 ssh2: RSA SHA256:oX6/jK8RmEU+rk8eVm22B42TdBvg0UX27UDC2BuKWWY Aug 13 01:08:19.154447 sshd-session[4341]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:08:19.158501 systemd-logind[1460]: New session 31 of user core. Aug 13 01:08:19.170153 systemd[1]: Started session-31.scope - Session 31 of User core. 
Aug 13 01:08:19.446951 sshd[4343]: Connection closed by 139.178.89.65 port 40426 Aug 13 01:08:19.447919 sshd-session[4341]: pam_unix(sshd:session): session closed for user core Aug 13 01:08:19.452358 systemd[1]: sshd@30-172.234.214.143:22-139.178.89.65:40426.service: Deactivated successfully. Aug 13 01:08:19.454303 systemd[1]: session-31.scope: Deactivated successfully. Aug 13 01:08:19.454956 systemd-logind[1460]: Session 31 logged out. Waiting for processes to exit. Aug 13 01:08:19.455739 systemd-logind[1460]: Removed session 31. Aug 13 01:08:20.822221 kubelet[2599]: E0813 01:08:20.822185 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:08:23.821887 kubelet[2599]: E0813 01:08:23.821390 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:08:24.514587 systemd[1]: Started sshd@31-172.234.214.143:22-139.178.89.65:49536.service - OpenSSH per-connection server daemon (139.178.89.65:49536). Aug 13 01:08:24.848373 sshd[4355]: Accepted publickey for core from 139.178.89.65 port 49536 ssh2: RSA SHA256:oX6/jK8RmEU+rk8eVm22B42TdBvg0UX27UDC2BuKWWY Aug 13 01:08:24.849805 sshd-session[4355]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:08:24.854684 systemd-logind[1460]: New session 32 of user core. Aug 13 01:08:24.863232 systemd[1]: Started session-32.scope - Session 32 of User core. Aug 13 01:08:25.148609 sshd[4357]: Connection closed by 139.178.89.65 port 49536 Aug 13 01:08:25.149330 sshd-session[4355]: pam_unix(sshd:session): session closed for user core Aug 13 01:08:25.152102 systemd[1]: sshd@31-172.234.214.143:22-139.178.89.65:49536.service: Deactivated successfully. Aug 13 01:08:25.154013 systemd[1]: session-32.scope: Deactivated successfully. Aug 13 01:08:25.155358 systemd-logind[1460]: Session 32 logged out. Waiting for processes to exit. Aug 13 01:08:25.156213 systemd-logind[1460]: Removed session 32. 
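"Cannot evict a critical pod" is kubelet refusing to evict static pods and pods that run with a system priority class. The control-plane pods in the ranked list (kube-apiserver, kube-scheduler, kube-controller-manager) are static pods and critical by construction; add-ons such as Cilium and CoreDNS typically opt in through priorityClassName. An illustrative manifest excerpt showing the relevant field, not the actual manifests on this node:

    # Illustrative only: marking a kube-system pod critical so the
    # eviction manager skips it, as it does for every pod ranked above.
    apiVersion: v1
    kind: Pod
    metadata:
      name: example-addon
      namespace: kube-system
    spec:
      priorityClassName: system-node-critical  # or system-cluster-critical
      containers:
      - name: app
        image: registry.k8s.io/pause:3.9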
Aug 13 01:08:28.743061 kubelet[2599]: I0813 01:08:28.743022 2599 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:08:28.743571 kubelet[2599]: I0813 01:08:28.743077 2599 container_gc.go:88] "Attempting to delete unused containers" Aug 13 01:08:28.744516 kubelet[2599]: I0813 01:08:28.744496 2599 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 01:08:28.757917 kubelet[2599]: I0813 01:08:28.757899 2599 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:08:28.758036 kubelet[2599]: I0813 01:08:28.758020 2599 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-5d85765b45-x724r","kube-system/coredns-7c65d6cfc9-bghtd","kube-system/coredns-7c65d6cfc9-f7p9h","kube-system/cilium-tp96b","kube-system/kube-controller-manager-172-234-214-143","kube-system/kube-proxy-qzrh6","kube-system/kube-apiserver-172-234-214-143","kube-system/kube-scheduler-172-234-214-143"] Aug 13 01:08:28.758114 kubelet[2599]: E0813 01:08:28.758068 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-5d85765b45-x724r" Aug 13 01:08:28.758114 kubelet[2599]: E0813 01:08:28.758080 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-bghtd" Aug 13 01:08:28.758114 kubelet[2599]: E0813 01:08:28.758088 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-f7p9h" Aug 13 01:08:28.758114 kubelet[2599]: E0813 01:08:28.758097 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-tp96b" Aug 13 01:08:28.758114 kubelet[2599]: E0813 01:08:28.758107 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-214-143" Aug 13 01:08:28.758114 kubelet[2599]: E0813 01:08:28.758115 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-qzrh6" Aug 13 01:08:28.758245 kubelet[2599]: E0813 01:08:28.758122 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-214-143" Aug 13 01:08:28.758245 kubelet[2599]: E0813 01:08:28.758130 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-214-143" Aug 13 01:08:28.758245 kubelet[2599]: I0813 01:08:28.758139 2599 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" Aug 13 01:08:30.219299 systemd[1]: Started sshd@32-172.234.214.143:22-139.178.89.65:35380.service - OpenSSH per-connection server daemon (139.178.89.65:35380). Aug 13 01:08:30.549743 sshd[4369]: Accepted publickey for core from 139.178.89.65 port 35380 ssh2: RSA SHA256:oX6/jK8RmEU+rk8eVm22B42TdBvg0UX27UDC2BuKWWY Aug 13 01:08:30.550277 sshd-session[4369]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:08:30.555180 systemd-logind[1460]: New session 33 of user core. Aug 13 01:08:30.564180 systemd[1]: Started session-33.scope - Session 33 of User core. 
Aug 13 01:08:30.845978 sshd[4371]: Connection closed by 139.178.89.65 port 35380 Aug 13 01:08:30.846426 sshd-session[4369]: pam_unix(sshd:session): session closed for user core Aug 13 01:08:30.850327 systemd[1]: sshd@32-172.234.214.143:22-139.178.89.65:35380.service: Deactivated successfully. Aug 13 01:08:30.852207 systemd[1]: session-33.scope: Deactivated successfully. Aug 13 01:08:30.852911 systemd-logind[1460]: Session 33 logged out. Waiting for processes to exit. Aug 13 01:08:30.854638 systemd-logind[1460]: Removed session 33. Aug 13 01:08:35.910438 systemd[1]: Started sshd@33-172.234.214.143:22-139.178.89.65:35396.service - OpenSSH per-connection server daemon (139.178.89.65:35396). Aug 13 01:08:36.235662 sshd[4383]: Accepted publickey for core from 139.178.89.65 port 35396 ssh2: RSA SHA256:oX6/jK8RmEU+rk8eVm22B42TdBvg0UX27UDC2BuKWWY Aug 13 01:08:36.237505 sshd-session[4383]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:08:36.243263 systemd-logind[1460]: New session 34 of user core. Aug 13 01:08:36.248282 systemd[1]: Started session-34.scope - Session 34 of User core. Aug 13 01:08:36.552094 sshd[4385]: Connection closed by 139.178.89.65 port 35396 Aug 13 01:08:36.553348 sshd-session[4383]: pam_unix(sshd:session): session closed for user core Aug 13 01:08:36.557366 systemd-logind[1460]: Session 34 logged out. Waiting for processes to exit. Aug 13 01:08:36.558283 systemd[1]: sshd@33-172.234.214.143:22-139.178.89.65:35396.service: Deactivated successfully. Aug 13 01:08:36.560320 systemd[1]: session-34.scope: Deactivated successfully. Aug 13 01:08:36.561081 systemd-logind[1460]: Removed session 34. Aug 13 01:08:38.774115 kubelet[2599]: I0813 01:08:38.774084 2599 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:08:38.774115 kubelet[2599]: I0813 01:08:38.774118 2599 container_gc.go:88] "Attempting to delete unused containers" Aug 13 01:08:38.775350 kubelet[2599]: I0813 01:08:38.775327 2599 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 01:08:38.785015 kubelet[2599]: I0813 01:08:38.784993 2599 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:08:38.785344 kubelet[2599]: I0813 01:08:38.785321 2599 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-5d85765b45-x724r","kube-system/coredns-7c65d6cfc9-bghtd","kube-system/coredns-7c65d6cfc9-f7p9h","kube-system/cilium-tp96b","kube-system/kube-controller-manager-172-234-214-143","kube-system/kube-proxy-qzrh6","kube-system/kube-apiserver-172-234-214-143","kube-system/kube-scheduler-172-234-214-143"] Aug 13 01:08:38.785385 kubelet[2599]: E0813 01:08:38.785357 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-5d85765b45-x724r" Aug 13 01:08:38.785385 kubelet[2599]: E0813 01:08:38.785368 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-bghtd" Aug 13 01:08:38.785385 kubelet[2599]: E0813 01:08:38.785376 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-f7p9h" Aug 13 01:08:38.785385 kubelet[2599]: E0813 01:08:38.785384 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-tp96b" Aug 13 01:08:38.785467 kubelet[2599]: E0813 01:08:38.785392 2599 eviction_manager.go:598] "Eviction 
manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-214-143" Aug 13 01:08:38.785467 kubelet[2599]: E0813 01:08:38.785400 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-qzrh6" Aug 13 01:08:38.785467 kubelet[2599]: E0813 01:08:38.785407 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-214-143" Aug 13 01:08:38.785467 kubelet[2599]: E0813 01:08:38.785415 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-214-143" Aug 13 01:08:38.785467 kubelet[2599]: I0813 01:08:38.785428 2599 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" Aug 13 01:08:41.620261 systemd[1]: Started sshd@34-172.234.214.143:22-139.178.89.65:57354.service - OpenSSH per-connection server daemon (139.178.89.65:57354). Aug 13 01:08:41.959096 sshd[4399]: Accepted publickey for core from 139.178.89.65 port 57354 ssh2: RSA SHA256:oX6/jK8RmEU+rk8eVm22B42TdBvg0UX27UDC2BuKWWY Aug 13 01:08:41.960761 sshd-session[4399]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:08:41.966134 systemd-logind[1460]: New session 35 of user core. Aug 13 01:08:41.969187 systemd[1]: Started session-35.scope - Session 35 of User core. Aug 13 01:08:42.273071 sshd[4401]: Connection closed by 139.178.89.65 port 57354 Aug 13 01:08:42.272061 sshd-session[4399]: pam_unix(sshd:session): session closed for user core Aug 13 01:08:42.276489 systemd[1]: sshd@34-172.234.214.143:22-139.178.89.65:57354.service: Deactivated successfully. Aug 13 01:08:42.278391 systemd[1]: session-35.scope: Deactivated successfully. Aug 13 01:08:42.279020 systemd-logind[1460]: Session 35 logged out. Waiting for processes to exit. Aug 13 01:08:42.279994 systemd-logind[1460]: Removed session 35. Aug 13 01:08:47.336122 systemd[1]: Started sshd@35-172.234.214.143:22-139.178.89.65:57362.service - OpenSSH per-connection server daemon (139.178.89.65:57362). Aug 13 01:08:47.667760 sshd[4415]: Accepted publickey for core from 139.178.89.65 port 57362 ssh2: RSA SHA256:oX6/jK8RmEU+rk8eVm22B42TdBvg0UX27UDC2BuKWWY Aug 13 01:08:47.670273 sshd-session[4415]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:08:47.675290 systemd-logind[1460]: New session 36 of user core. Aug 13 01:08:47.682229 systemd[1]: Started session-36.scope - Session 36 of User core. Aug 13 01:08:47.822500 kubelet[2599]: E0813 01:08:47.822232 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:08:47.959884 sshd[4417]: Connection closed by 139.178.89.65 port 57362 Aug 13 01:08:47.960486 sshd-session[4415]: pam_unix(sshd:session): session closed for user core Aug 13 01:08:47.964450 systemd[1]: sshd@35-172.234.214.143:22-139.178.89.65:57362.service: Deactivated successfully. Aug 13 01:08:47.964557 systemd-logind[1460]: Session 36 logged out. Waiting for processes to exit. Aug 13 01:08:47.966713 systemd[1]: session-36.scope: Deactivated successfully. Aug 13 01:08:47.968208 systemd-logind[1460]: Removed session 36. 
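The sshd@N-172.234.214.143:22-139.178.89.65:PORT.service units that open and close throughout this log are systemd per-connection socket activation: an sshd.socket with Accept=yes accepts each TCP connection and spawns one instance of an sshd@.service template, which runs sshd in inetd mode; hence the "OpenSSH per-connection server daemon" description and one numbered unit per session. A minimal sketch of that unit pair, assuming the stock Flatcar layout rather than this machine's exact files:

    # sshd.socket (sketch)
    [Socket]
    ListenStream=22
    Accept=yes

    # sshd@.service (sketch): instantiated once per accepted connection
    [Service]
    ExecStart=-/usr/sbin/sshd -i
    StandardInput=socket

This is why each login in the log produces a Started/Deactivated pair for its own service unit instead of activity on a single long-running sshd.service.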
Aug 13 01:08:48.805926 kubelet[2599]: I0813 01:08:48.805898 2599 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:08:48.805926 kubelet[2599]: I0813 01:08:48.805932 2599 container_gc.go:88] "Attempting to delete unused containers" Aug 13 01:08:48.807338 kubelet[2599]: I0813 01:08:48.807317 2599 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 01:08:48.816555 kubelet[2599]: I0813 01:08:48.816529 2599 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:08:48.816635 kubelet[2599]: I0813 01:08:48.816620 2599 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-5d85765b45-x724r","kube-system/coredns-7c65d6cfc9-bghtd","kube-system/coredns-7c65d6cfc9-f7p9h","kube-system/cilium-tp96b","kube-system/kube-controller-manager-172-234-214-143","kube-system/kube-proxy-qzrh6","kube-system/kube-apiserver-172-234-214-143","kube-system/kube-scheduler-172-234-214-143"] Aug 13 01:08:48.816672 kubelet[2599]: E0813 01:08:48.816647 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-5d85765b45-x724r" Aug 13 01:08:48.816672 kubelet[2599]: E0813 01:08:48.816657 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-bghtd" Aug 13 01:08:48.816672 kubelet[2599]: E0813 01:08:48.816665 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-f7p9h" Aug 13 01:08:48.816672 kubelet[2599]: E0813 01:08:48.816673 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-tp96b" Aug 13 01:08:48.816748 kubelet[2599]: E0813 01:08:48.816682 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-214-143" Aug 13 01:08:48.816748 kubelet[2599]: E0813 01:08:48.816689 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-qzrh6" Aug 13 01:08:48.816748 kubelet[2599]: E0813 01:08:48.816696 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-214-143" Aug 13 01:08:48.816748 kubelet[2599]: E0813 01:08:48.816705 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-214-143" Aug 13 01:08:48.816748 kubelet[2599]: I0813 01:08:48.816713 2599 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" Aug 13 01:08:50.821822 kubelet[2599]: E0813 01:08:50.821782 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:08:51.822472 kubelet[2599]: E0813 01:08:51.821704 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:08:53.024251 systemd[1]: Started sshd@36-172.234.214.143:22-139.178.89.65:44840.service - OpenSSH per-connection server daemon (139.178.89.65:44840). 
Aug 13 01:08:53.351930 sshd[4429]: Accepted publickey for core from 139.178.89.65 port 44840 ssh2: RSA SHA256:oX6/jK8RmEU+rk8eVm22B42TdBvg0UX27UDC2BuKWWY Aug 13 01:08:53.353951 sshd-session[4429]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:08:53.358930 systemd-logind[1460]: New session 37 of user core. Aug 13 01:08:53.368198 systemd[1]: Started session-37.scope - Session 37 of User core. Aug 13 01:08:53.654092 sshd[4431]: Connection closed by 139.178.89.65 port 44840 Aug 13 01:08:53.654943 sshd-session[4429]: pam_unix(sshd:session): session closed for user core Aug 13 01:08:53.659842 systemd[1]: sshd@36-172.234.214.143:22-139.178.89.65:44840.service: Deactivated successfully. Aug 13 01:08:53.662838 systemd[1]: session-37.scope: Deactivated successfully. Aug 13 01:08:53.663688 systemd-logind[1460]: Session 37 logged out. Waiting for processes to exit. Aug 13 01:08:53.665280 systemd-logind[1460]: Removed session 37. Aug 13 01:08:58.724307 systemd[1]: Started sshd@37-172.234.214.143:22-139.178.89.65:44856.service - OpenSSH per-connection server daemon (139.178.89.65:44856). Aug 13 01:08:58.836238 kubelet[2599]: I0813 01:08:58.836197 2599 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:08:58.836986 kubelet[2599]: I0813 01:08:58.836253 2599 container_gc.go:88] "Attempting to delete unused containers" Aug 13 01:08:58.838516 kubelet[2599]: I0813 01:08:58.838483 2599 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 01:08:58.850637 kubelet[2599]: I0813 01:08:58.850604 2599 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:08:58.850842 kubelet[2599]: I0813 01:08:58.850810 2599 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-5d85765b45-x724r","kube-system/coredns-7c65d6cfc9-f7p9h","kube-system/coredns-7c65d6cfc9-bghtd","kube-system/cilium-tp96b","kube-system/kube-controller-manager-172-234-214-143","kube-system/kube-proxy-qzrh6","kube-system/kube-apiserver-172-234-214-143","kube-system/kube-scheduler-172-234-214-143"] Aug 13 01:08:58.850881 kubelet[2599]: E0813 01:08:58.850857 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-5d85765b45-x724r" Aug 13 01:08:58.850881 kubelet[2599]: E0813 01:08:58.850873 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-f7p9h" Aug 13 01:08:58.850927 kubelet[2599]: E0813 01:08:58.850885 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-bghtd" Aug 13 01:08:58.850927 kubelet[2599]: E0813 01:08:58.850899 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-tp96b" Aug 13 01:08:58.850927 kubelet[2599]: E0813 01:08:58.850910 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-214-143" Aug 13 01:08:58.850927 kubelet[2599]: E0813 01:08:58.850922 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-qzrh6" Aug 13 01:08:58.851021 kubelet[2599]: E0813 01:08:58.850934 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-214-143" Aug 13 01:08:58.851021 kubelet[2599]: E0813 01:08:58.850947 2599 
eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-214-143" Aug 13 01:08:58.851021 kubelet[2599]: I0813 01:08:58.850958 2599 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" Aug 13 01:08:59.060504 sshd[4443]: Accepted publickey for core from 139.178.89.65 port 44856 ssh2: RSA SHA256:oX6/jK8RmEU+rk8eVm22B42TdBvg0UX27UDC2BuKWWY Aug 13 01:08:59.062334 sshd-session[4443]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:08:59.067212 systemd-logind[1460]: New session 38 of user core. Aug 13 01:08:59.075174 systemd[1]: Started session-38.scope - Session 38 of User core. Aug 13 01:08:59.365794 sshd[4445]: Connection closed by 139.178.89.65 port 44856 Aug 13 01:08:59.367298 sshd-session[4443]: pam_unix(sshd:session): session closed for user core Aug 13 01:08:59.371999 systemd[1]: sshd@37-172.234.214.143:22-139.178.89.65:44856.service: Deactivated successfully. Aug 13 01:08:59.374869 systemd[1]: session-38.scope: Deactivated successfully. Aug 13 01:08:59.375753 systemd-logind[1460]: Session 38 logged out. Waiting for processes to exit. Aug 13 01:08:59.377510 systemd-logind[1460]: Removed session 38. Aug 13 01:09:03.822101 kubelet[2599]: E0813 01:09:03.821426 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:09:04.435830 systemd[1]: Started sshd@38-172.234.214.143:22-139.178.89.65:37094.service - OpenSSH per-connection server daemon (139.178.89.65:37094). Aug 13 01:09:04.768151 sshd[4456]: Accepted publickey for core from 139.178.89.65 port 37094 ssh2: RSA SHA256:oX6/jK8RmEU+rk8eVm22B42TdBvg0UX27UDC2BuKWWY Aug 13 01:09:04.769863 sshd-session[4456]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:09:04.776107 systemd-logind[1460]: New session 39 of user core. Aug 13 01:09:04.782209 systemd[1]: Started session-39.scope - Session 39 of User core. Aug 13 01:09:05.062606 sshd[4458]: Connection closed by 139.178.89.65 port 37094 Aug 13 01:09:05.063499 sshd-session[4456]: pam_unix(sshd:session): session closed for user core Aug 13 01:09:05.067151 systemd-logind[1460]: Session 39 logged out. Waiting for processes to exit. Aug 13 01:09:05.068032 systemd[1]: sshd@38-172.234.214.143:22-139.178.89.65:37094.service: Deactivated successfully. Aug 13 01:09:05.070185 systemd[1]: session-39.scope: Deactivated successfully. Aug 13 01:09:05.071101 systemd-logind[1460]: Removed session 39. 
Aug 13 01:09:08.869410 kubelet[2599]: I0813 01:09:08.869371 2599 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:09:08.869410 kubelet[2599]: I0813 01:09:08.869412 2599 container_gc.go:88] "Attempting to delete unused containers" Aug 13 01:09:08.871109 kubelet[2599]: I0813 01:09:08.871087 2599 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 01:09:08.880316 kubelet[2599]: I0813 01:09:08.880295 2599 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:09:08.880428 kubelet[2599]: I0813 01:09:08.880411 2599 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-5d85765b45-x724r","kube-system/coredns-7c65d6cfc9-bghtd","kube-system/coredns-7c65d6cfc9-f7p9h","kube-system/cilium-tp96b","kube-system/kube-controller-manager-172-234-214-143","kube-system/kube-proxy-qzrh6","kube-system/kube-apiserver-172-234-214-143","kube-system/kube-scheduler-172-234-214-143"] Aug 13 01:09:08.880462 kubelet[2599]: E0813 01:09:08.880444 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-5d85765b45-x724r" Aug 13 01:09:08.880462 kubelet[2599]: E0813 01:09:08.880455 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-bghtd" Aug 13 01:09:08.880462 kubelet[2599]: E0813 01:09:08.880465 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-f7p9h" Aug 13 01:09:08.880462 kubelet[2599]: E0813 01:09:08.880473 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-tp96b" Aug 13 01:09:08.880567 kubelet[2599]: E0813 01:09:08.880483 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-214-143" Aug 13 01:09:08.880567 kubelet[2599]: E0813 01:09:08.880491 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-qzrh6" Aug 13 01:09:08.880567 kubelet[2599]: E0813 01:09:08.880500 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-214-143" Aug 13 01:09:08.880567 kubelet[2599]: E0813 01:09:08.880507 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-214-143" Aug 13 01:09:08.880567 kubelet[2599]: I0813 01:09:08.880516 2599 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" Aug 13 01:09:10.126317 systemd[1]: Started sshd@39-172.234.214.143:22-139.178.89.65:46944.service - OpenSSH per-connection server daemon (139.178.89.65:46944). Aug 13 01:09:10.444792 sshd[4471]: Accepted publickey for core from 139.178.89.65 port 46944 ssh2: RSA SHA256:oX6/jK8RmEU+rk8eVm22B42TdBvg0UX27UDC2BuKWWY Aug 13 01:09:10.446864 sshd-session[4471]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:09:10.452847 systemd-logind[1460]: New session 40 of user core. Aug 13 01:09:10.461189 systemd[1]: Started session-40.scope - Session 40 of User core. 
Aug 13 01:09:10.742934 sshd[4473]: Connection closed by 139.178.89.65 port 46944 Aug 13 01:09:10.743580 sshd-session[4471]: pam_unix(sshd:session): session closed for user core Aug 13 01:09:10.746967 systemd[1]: sshd@39-172.234.214.143:22-139.178.89.65:46944.service: Deactivated successfully. Aug 13 01:09:10.749491 systemd[1]: session-40.scope: Deactivated successfully. Aug 13 01:09:10.750975 systemd-logind[1460]: Session 40 logged out. Waiting for processes to exit. Aug 13 01:09:10.752512 systemd-logind[1460]: Removed session 40. Aug 13 01:09:12.821184 kubelet[2599]: E0813 01:09:12.821140 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:09:13.821525 kubelet[2599]: E0813 01:09:13.821282 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:09:15.805283 systemd[1]: Started sshd@40-172.234.214.143:22-139.178.89.65:46958.service - OpenSSH per-connection server daemon (139.178.89.65:46958). Aug 13 01:09:16.131297 sshd[4487]: Accepted publickey for core from 139.178.89.65 port 46958 ssh2: RSA SHA256:oX6/jK8RmEU+rk8eVm22B42TdBvg0UX27UDC2BuKWWY Aug 13 01:09:16.133261 sshd-session[4487]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:09:16.139302 systemd-logind[1460]: New session 41 of user core. Aug 13 01:09:16.143191 systemd[1]: Started session-41.scope - Session 41 of User core. Aug 13 01:09:16.428871 sshd[4489]: Connection closed by 139.178.89.65 port 46958 Aug 13 01:09:16.429771 sshd-session[4487]: pam_unix(sshd:session): session closed for user core Aug 13 01:09:16.434409 systemd[1]: sshd@40-172.234.214.143:22-139.178.89.65:46958.service: Deactivated successfully. Aug 13 01:09:16.437266 systemd[1]: session-41.scope: Deactivated successfully. Aug 13 01:09:16.438884 systemd-logind[1460]: Session 41 logged out. Waiting for processes to exit. Aug 13 01:09:16.440033 systemd-logind[1460]: Removed session 41. 
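The pressure loop repeating above is driven by kubelet's disk thresholds, which are set in KubeletConfiguration. The excerpt below shows the upstream defaults (the 85/80 pair matches the highThreshold/lowThreshold values reported by the image GC entry later in this log); it is included to locate the fields, not as a recommendation:

    # KubeletConfiguration excerpt (sketch): the GC and eviction knobs
    # behind the "must evict pod(s) to reclaim" and image GC messages.
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    imageGCHighThresholdPercent: 85   # start image GC above this usage
    imageGCLowThresholdPercent: 80    # free space until usage drops to this
    evictionHard:
      nodefs.available: "10%"
      imagefs.available: "15%"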
Aug 13 01:09:18.897658 kubelet[2599]: I0813 01:09:18.897618 2599 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:09:18.897658 kubelet[2599]: I0813 01:09:18.897660 2599 container_gc.go:88] "Attempting to delete unused containers" Aug 13 01:09:18.898817 kubelet[2599]: I0813 01:09:18.898804 2599 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 01:09:18.908150 kubelet[2599]: I0813 01:09:18.908129 2599 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:09:18.908269 kubelet[2599]: I0813 01:09:18.908247 2599 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-5d85765b45-x724r","kube-system/coredns-7c65d6cfc9-f7p9h","kube-system/coredns-7c65d6cfc9-bghtd","kube-system/cilium-tp96b","kube-system/kube-controller-manager-172-234-214-143","kube-system/kube-proxy-qzrh6","kube-system/kube-apiserver-172-234-214-143","kube-system/kube-scheduler-172-234-214-143"] Aug 13 01:09:18.908307 kubelet[2599]: E0813 01:09:18.908280 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-5d85765b45-x724r" Aug 13 01:09:18.908307 kubelet[2599]: E0813 01:09:18.908290 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-f7p9h" Aug 13 01:09:18.908307 kubelet[2599]: E0813 01:09:18.908299 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-bghtd" Aug 13 01:09:18.908307 kubelet[2599]: E0813 01:09:18.908307 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-tp96b" Aug 13 01:09:18.908385 kubelet[2599]: E0813 01:09:18.908315 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-214-143" Aug 13 01:09:18.908385 kubelet[2599]: E0813 01:09:18.908322 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-qzrh6" Aug 13 01:09:18.908385 kubelet[2599]: E0813 01:09:18.908330 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-214-143" Aug 13 01:09:18.908385 kubelet[2599]: E0813 01:09:18.908338 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-214-143" Aug 13 01:09:18.908385 kubelet[2599]: I0813 01:09:18.908348 2599 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" Aug 13 01:09:21.503252 systemd[1]: Started sshd@41-172.234.214.143:22-139.178.89.65:47394.service - OpenSSH per-connection server daemon (139.178.89.65:47394). Aug 13 01:09:21.834177 sshd[4501]: Accepted publickey for core from 139.178.89.65 port 47394 ssh2: RSA SHA256:oX6/jK8RmEU+rk8eVm22B42TdBvg0UX27UDC2BuKWWY Aug 13 01:09:21.836612 sshd-session[4501]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:09:21.842391 systemd-logind[1460]: New session 42 of user core. Aug 13 01:09:21.848163 systemd[1]: Started session-42.scope - Session 42 of User core. 
Aug 13 01:09:22.138434 sshd[4503]: Connection closed by 139.178.89.65 port 47394
Aug 13 01:09:22.139410 sshd-session[4501]: pam_unix(sshd:session): session closed for user core
Aug 13 01:09:22.142864 systemd[1]: sshd@41-172.234.214.143:22-139.178.89.65:47394.service: Deactivated successfully.
Aug 13 01:09:22.145060 systemd[1]: session-42.scope: Deactivated successfully.
Aug 13 01:09:22.147636 systemd-logind[1460]: Session 42 logged out. Waiting for processes to exit.
Aug 13 01:09:22.148926 systemd-logind[1460]: Removed session 42.
Aug 13 01:09:27.198955 systemd[1]: Started sshd@42-172.234.214.143:22-139.178.89.65:47398.service - OpenSSH per-connection server daemon (139.178.89.65:47398).
Aug 13 01:09:27.526788 sshd[4515]: Accepted publickey for core from 139.178.89.65 port 47398 ssh2: RSA SHA256:oX6/jK8RmEU+rk8eVm22B42TdBvg0UX27UDC2BuKWWY
Aug 13 01:09:27.528734 sshd-session[4515]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:09:27.534914 systemd-logind[1460]: New session 43 of user core.
Aug 13 01:09:27.539251 systemd[1]: Started session-43.scope - Session 43 of User core.
Aug 13 01:09:27.830696 sshd[4517]: Connection closed by 139.178.89.65 port 47398
Aug 13 01:09:27.831918 sshd-session[4515]: pam_unix(sshd:session): session closed for user core
Aug 13 01:09:27.836332 systemd[1]: sshd@42-172.234.214.143:22-139.178.89.65:47398.service: Deactivated successfully.
Aug 13 01:09:27.839248 systemd[1]: session-43.scope: Deactivated successfully.
Aug 13 01:09:27.840067 systemd-logind[1460]: Session 43 logged out. Waiting for processes to exit.
Aug 13 01:09:27.841189 systemd-logind[1460]: Removed session 43.
Aug 13 01:09:28.927551 kubelet[2599]: I0813 01:09:28.927486 2599 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
Aug 13 01:09:28.927551 kubelet[2599]: I0813 01:09:28.927531 2599 container_gc.go:88] "Attempting to delete unused containers"
Aug 13 01:09:28.928999 kubelet[2599]: I0813 01:09:28.928978 2599 image_gc_manager.go:431] "Attempting to delete unused images"
Aug 13 01:09:28.941165 kubelet[2599]: I0813 01:09:28.941133 2599 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
Aug 13 01:09:28.941297 kubelet[2599]: I0813 01:09:28.941274 2599 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-5d85765b45-x724r","kube-system/coredns-7c65d6cfc9-bghtd","kube-system/coredns-7c65d6cfc9-f7p9h","kube-system/cilium-tp96b","kube-system/kube-controller-manager-172-234-214-143","kube-system/kube-proxy-qzrh6","kube-system/kube-apiserver-172-234-214-143","kube-system/kube-scheduler-172-234-214-143"]
Aug 13 01:09:28.941341 kubelet[2599]: E0813 01:09:28.941313 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-5d85765b45-x724r"
Aug 13 01:09:28.941341 kubelet[2599]: E0813 01:09:28.941327 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-bghtd"
Aug 13 01:09:28.941341 kubelet[2599]: E0813 01:09:28.941335 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-f7p9h"
Aug 13 01:09:28.941423 kubelet[2599]: E0813 01:09:28.941346 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-tp96b"
Aug 13 01:09:28.941423 kubelet[2599]: E0813 01:09:28.941357 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-214-143"
Aug 13 01:09:28.941423 kubelet[2599]: E0813 01:09:28.941366 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-qzrh6"
Aug 13 01:09:28.941423 kubelet[2599]: E0813 01:09:28.941374 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-214-143"
Aug 13 01:09:28.941423 kubelet[2599]: E0813 01:09:28.941383 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-214-143"
Aug 13 01:09:28.941423 kubelet[2599]: I0813 01:09:28.941393 2599 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node"
Aug 13 01:09:32.894764 systemd[1]: Started sshd@43-172.234.214.143:22-139.178.89.65:55766.service - OpenSSH per-connection server daemon (139.178.89.65:55766).
Aug 13 01:09:33.227256 sshd[4529]: Accepted publickey for core from 139.178.89.65 port 55766 ssh2: RSA SHA256:oX6/jK8RmEU+rk8eVm22B42TdBvg0UX27UDC2BuKWWY
Aug 13 01:09:33.229094 sshd-session[4529]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:09:33.237231 systemd-logind[1460]: New session 44 of user core.
Aug 13 01:09:33.244382 systemd[1]: Started session-44.scope - Session 44 of User core.
Aug 13 01:09:33.532791 sshd[4531]: Connection closed by 139.178.89.65 port 55766
Aug 13 01:09:33.534123 sshd-session[4529]: pam_unix(sshd:session): session closed for user core
Aug 13 01:09:33.538547 systemd-logind[1460]: Session 44 logged out. Waiting for processes to exit.
Aug 13 01:09:33.539759 systemd[1]: sshd@43-172.234.214.143:22-139.178.89.65:55766.service: Deactivated successfully.
Aug 13 01:09:33.543089 systemd[1]: session-44.scope: Deactivated successfully.
Aug 13 01:09:33.544360 systemd-logind[1460]: Removed session 44.
Aug 13 01:09:33.822866 kubelet[2599]: E0813 01:09:33.822174 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Aug 13 01:09:37.833283 kubelet[2599]: I0813 01:09:37.833182 2599 image_gc_manager.go:383] "Disk usage on image filesystem is over the high threshold, trying to free bytes down to the low threshold" usage=87 highThreshold=85 amountToFree=140077465 lowThreshold=80
Aug 13 01:09:37.833283 kubelet[2599]: E0813 01:09:37.833265 2599 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="Failed to garbage collect required amount of images. Attempted to free 140077465 bytes, but only found 0 bytes eligible to free."
Aug 13 01:09:38.602378 systemd[1]: Started sshd@44-172.234.214.143:22-139.178.89.65:55778.service - OpenSSH per-connection server daemon (139.178.89.65:55778).
Aug 13 01:09:38.939575 sshd[4546]: Accepted publickey for core from 139.178.89.65 port 55778 ssh2: RSA SHA256:oX6/jK8RmEU+rk8eVm22B42TdBvg0UX27UDC2BuKWWY
Aug 13 01:09:38.941765 sshd-session[4546]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:09:38.950325 systemd-logind[1460]: New session 45 of user core.
Aug 13 01:09:38.958279 systemd[1]: Started session-45.scope - Session 45 of User core.
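The image_gc_manager.go:383 line above carries enough numbers to size the disk: usage=87, lowThreshold=80, amountToFree=140077465 bytes. If amountToFree is the gap between current usage and the low-threshold share of capacity (which these values are consistent with), the image filesystem is roughly 140077465 / 0.07 ≈ 2.0 GB; GC then fails outright because no image on the node is eligible to delete. A quick back-of-envelope check in Go:

// Back-of-envelope check of the image GC numbers in the log.
// Assumption: amountToFree = (usagePct - lowPct)% of capacity.
package main

import "fmt"

func main() {
	const amountToFree = 140077465.0 // bytes, from the log line
	const usagePct, lowPct = 87.0, 80.0
	capacity := amountToFree / ((usagePct - lowPct) / 100)
	fmt.Printf("implied image filesystem capacity: %.2f GB\n", capacity/1e9) // ~2.00 GB
}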
Aug 13 01:09:38.969017 kubelet[2599]: I0813 01:09:38.968996 2599 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
Aug 13 01:09:38.969376 kubelet[2599]: I0813 01:09:38.969362 2599 container_gc.go:88] "Attempting to delete unused containers"
Aug 13 01:09:38.970505 kubelet[2599]: I0813 01:09:38.970492 2599 image_gc_manager.go:431] "Attempting to delete unused images"
Aug 13 01:09:38.981131 kubelet[2599]: I0813 01:09:38.981115 2599 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
Aug 13 01:09:38.981307 kubelet[2599]: I0813 01:09:38.981292 2599 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-5d85765b45-x724r","kube-system/coredns-7c65d6cfc9-f7p9h","kube-system/coredns-7c65d6cfc9-bghtd","kube-system/cilium-tp96b","kube-system/kube-controller-manager-172-234-214-143","kube-system/kube-proxy-qzrh6","kube-system/kube-apiserver-172-234-214-143","kube-system/kube-scheduler-172-234-214-143"]
Aug 13 01:09:38.981369 kubelet[2599]: E0813 01:09:38.981359 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-5d85765b45-x724r"
Aug 13 01:09:38.981502 kubelet[2599]: E0813 01:09:38.981411 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-f7p9h"
Aug 13 01:09:38.981502 kubelet[2599]: E0813 01:09:38.981423 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-bghtd"
Aug 13 01:09:38.981502 kubelet[2599]: E0813 01:09:38.981433 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-tp96b"
Aug 13 01:09:38.981502 kubelet[2599]: E0813 01:09:38.981459 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-214-143"
Aug 13 01:09:38.981502 kubelet[2599]: E0813 01:09:38.981467 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-qzrh6"
Aug 13 01:09:38.981502 kubelet[2599]: E0813 01:09:38.981475 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-214-143"
Aug 13 01:09:38.981502 kubelet[2599]: E0813 01:09:38.981482 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-214-143"
Aug 13 01:09:38.981502 kubelet[2599]: I0813 01:09:38.981491 2599 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node"
Aug 13 01:09:39.238396 sshd[4548]: Connection closed by 139.178.89.65 port 55778
Aug 13 01:09:39.239067 sshd-session[4546]: pam_unix(sshd:session): session closed for user core
Aug 13 01:09:39.242687 systemd[1]: sshd@44-172.234.214.143:22-139.178.89.65:55778.service: Deactivated successfully.
Aug 13 01:09:39.244758 systemd[1]: session-45.scope: Deactivated successfully.
Aug 13 01:09:39.246815 systemd-logind[1460]: Session 45 logged out. Waiting for processes to exit.
Aug 13 01:09:39.247921 systemd-logind[1460]: Removed session 45.
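Every candidate above is refused at eviction_manager.go:598 because the remaining pods are all critical. In upstream kubelet, a pod counts as critical when it is a static or mirror pod, or when its priority is at least the system-critical threshold used by the system-node-critical and system-cluster-critical priority classes. A simplified sketch of that predicate (an approximation, not the exact kubelet source):

// Approximation of the "critical pod" test behind the repeated
// "cannot evict a critical pod" errors.
package main

import "fmt"

const systemCriticalPriority int32 = 2_000_000_000 // system-critical threshold

type podMeta struct {
	name     string
	isStatic bool // e.g. kube-apiserver run from a static manifest
	isMirror bool
	priority int32
}

func isCriticalPod(p podMeta) bool {
	return p.isStatic || p.isMirror || p.priority >= systemCriticalPriority
}

func main() {
	apiserver := podMeta{name: "kube-system/kube-apiserver-172-234-214-143", isStatic: true}
	fmt.Println(isCriticalPod(apiserver)) // true -> "cannot evict a critical pod"
}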
Aug 13 01:09:40.821633 kubelet[2599]: E0813 01:09:40.821518 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Aug 13 01:09:44.304296 systemd[1]: Started sshd@45-172.234.214.143:22-139.178.89.65:39002.service - OpenSSH per-connection server daemon (139.178.89.65:39002).
Aug 13 01:09:44.628110 sshd[4560]: Accepted publickey for core from 139.178.89.65 port 39002 ssh2: RSA SHA256:oX6/jK8RmEU+rk8eVm22B42TdBvg0UX27UDC2BuKWWY
Aug 13 01:09:44.630071 sshd-session[4560]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:09:44.636094 systemd-logind[1460]: New session 46 of user core.
Aug 13 01:09:44.648198 systemd[1]: Started session-46.scope - Session 46 of User core.
Aug 13 01:09:44.920121 sshd[4562]: Connection closed by 139.178.89.65 port 39002
Aug 13 01:09:44.920931 sshd-session[4560]: pam_unix(sshd:session): session closed for user core
Aug 13 01:09:44.925373 systemd-logind[1460]: Session 46 logged out. Waiting for processes to exit.
Aug 13 01:09:44.926438 systemd[1]: sshd@45-172.234.214.143:22-139.178.89.65:39002.service: Deactivated successfully.
Aug 13 01:09:44.928525 systemd[1]: session-46.scope: Deactivated successfully.
Aug 13 01:09:44.929934 systemd-logind[1460]: Removed session 46.
Aug 13 01:09:48.996148 kubelet[2599]: I0813 01:09:48.996097 2599 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
Aug 13 01:09:48.996148 kubelet[2599]: I0813 01:09:48.996135 2599 container_gc.go:88] "Attempting to delete unused containers"
Aug 13 01:09:48.997203 kubelet[2599]: I0813 01:09:48.997184 2599 image_gc_manager.go:431] "Attempting to delete unused images"
Aug 13 01:09:49.005602 kubelet[2599]: I0813 01:09:49.005583 2599 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
Aug 13 01:09:49.005718 kubelet[2599]: I0813 01:09:49.005699 2599 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-5d85765b45-x724r","kube-system/coredns-7c65d6cfc9-bghtd","kube-system/coredns-7c65d6cfc9-f7p9h","kube-system/cilium-tp96b","kube-system/kube-controller-manager-172-234-214-143","kube-system/kube-proxy-qzrh6","kube-system/kube-apiserver-172-234-214-143","kube-system/kube-scheduler-172-234-214-143"]
Aug 13 01:09:49.005746 kubelet[2599]: E0813 01:09:49.005730 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-5d85765b45-x724r"
Aug 13 01:09:49.005746 kubelet[2599]: E0813 01:09:49.005742 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-bghtd"
Aug 13 01:09:49.005800 kubelet[2599]: E0813 01:09:49.005750 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-f7p9h"
Aug 13 01:09:49.005800 kubelet[2599]: E0813 01:09:49.005759 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-tp96b"
Aug 13 01:09:49.005800 kubelet[2599]: E0813 01:09:49.005767 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-214-143"
Aug 13 01:09:49.005800 kubelet[2599]: E0813 01:09:49.005775 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-qzrh6"
Aug 13 01:09:49.005800 kubelet[2599]: E0813 01:09:49.005782 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-214-143"
Aug 13 01:09:49.005800 kubelet[2599]: E0813 01:09:49.005790 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-214-143"
Aug 13 01:09:49.005800 kubelet[2599]: I0813 01:09:49.005799 2599 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node"
Aug 13 01:09:49.981250 systemd[1]: Started sshd@46-172.234.214.143:22-139.178.89.65:51006.service - OpenSSH per-connection server daemon (139.178.89.65:51006).
Aug 13 01:09:50.313244 sshd[4575]: Accepted publickey for core from 139.178.89.65 port 51006 ssh2: RSA SHA256:oX6/jK8RmEU+rk8eVm22B42TdBvg0UX27UDC2BuKWWY
Aug 13 01:09:50.314897 sshd-session[4575]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:09:50.320193 systemd-logind[1460]: New session 47 of user core.
Aug 13 01:09:50.326227 systemd[1]: Started session-47.scope - Session 47 of User core.
Aug 13 01:09:50.602591 sshd[4577]: Connection closed by 139.178.89.65 port 51006
Aug 13 01:09:50.603520 sshd-session[4575]: pam_unix(sshd:session): session closed for user core
Aug 13 01:09:50.607847 systemd[1]: sshd@46-172.234.214.143:22-139.178.89.65:51006.service: Deactivated successfully.
Aug 13 01:09:50.610616 systemd[1]: session-47.scope: Deactivated successfully.
Aug 13 01:09:50.611883 systemd-logind[1460]: Session 47 logged out. Waiting for processes to exit.
Aug 13 01:09:50.613544 systemd-logind[1460]: Removed session 47.
Aug 13 01:09:55.669286 systemd[1]: Started sshd@47-172.234.214.143:22-139.178.89.65:51012.service - OpenSSH per-connection server daemon (139.178.89.65:51012).
Aug 13 01:09:55.997460 sshd[4589]: Accepted publickey for core from 139.178.89.65 port 51012 ssh2: RSA SHA256:oX6/jK8RmEU+rk8eVm22B42TdBvg0UX27UDC2BuKWWY
Aug 13 01:09:55.999628 sshd-session[4589]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:09:56.005023 systemd-logind[1460]: New session 48 of user core.
Aug 13 01:09:56.015206 systemd[1]: Started session-48.scope - Session 48 of User core.
Aug 13 01:09:56.308422 sshd[4591]: Connection closed by 139.178.89.65 port 51012
Aug 13 01:09:56.309536 sshd-session[4589]: pam_unix(sshd:session): session closed for user core
Aug 13 01:09:56.313360 systemd[1]: sshd@47-172.234.214.143:22-139.178.89.65:51012.service: Deactivated successfully.
Aug 13 01:09:56.315951 systemd[1]: session-48.scope: Deactivated successfully.
Aug 13 01:09:56.317704 systemd-logind[1460]: Session 48 logged out. Waiting for processes to exit.
Aug 13 01:09:56.318810 systemd-logind[1460]: Removed session 48.
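Each of these short-lived connections from 139.178.89.65 appears as a transient systemd unit whose name encodes a connection counter plus both endpoints, e.g. sshd@46-172.234.214.143:22-139.178.89.65:51006.service. A small Go parser for that shape; the format is inferred from the unit names in this log rather than from documented behavior, so treat the pattern as an assumption:

// Parse per-connection sshd unit names seen in this log.
// IPv4-only pattern, inferred from the log, not a documented format.
package main

import (
	"fmt"
	"regexp"
)

var sshdUnit = regexp.MustCompile(`^sshd@(\d+)-([0-9.]+:\d+)-([0-9.]+:\d+)\.service$`)

func main() {
	m := sshdUnit.FindStringSubmatch("sshd@46-172.234.214.143:22-139.178.89.65:51006.service")
	if m == nil {
		panic("unit name did not match")
	}
	fmt.Printf("conn=%s local=%s remote=%s\n", m[1], m[2], m[3])
}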
Aug 13 01:09:59.023553 kubelet[2599]: I0813 01:09:59.023513 2599 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
Aug 13 01:09:59.023553 kubelet[2599]: I0813 01:09:59.023554 2599 container_gc.go:88] "Attempting to delete unused containers"
Aug 13 01:09:59.025601 kubelet[2599]: I0813 01:09:59.025567 2599 image_gc_manager.go:431] "Attempting to delete unused images"
Aug 13 01:09:59.035557 kubelet[2599]: I0813 01:09:59.035536 2599 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
Aug 13 01:09:59.035689 kubelet[2599]: I0813 01:09:59.035665 2599 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-5d85765b45-x724r","kube-system/coredns-7c65d6cfc9-bghtd","kube-system/coredns-7c65d6cfc9-f7p9h","kube-system/cilium-tp96b","kube-system/kube-controller-manager-172-234-214-143","kube-system/kube-proxy-qzrh6","kube-system/kube-apiserver-172-234-214-143","kube-system/kube-scheduler-172-234-214-143"]
Aug 13 01:09:59.035728 kubelet[2599]: E0813 01:09:59.035700 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-5d85765b45-x724r"
Aug 13 01:09:59.035728 kubelet[2599]: E0813 01:09:59.035711 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-bghtd"
Aug 13 01:09:59.035728 kubelet[2599]: E0813 01:09:59.035719 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-f7p9h"
Aug 13 01:09:59.035728 kubelet[2599]: E0813 01:09:59.035728 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-tp96b"
Aug 13 01:09:59.035802 kubelet[2599]: E0813 01:09:59.035735 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-214-143"
Aug 13 01:09:59.035802 kubelet[2599]: E0813 01:09:59.035743 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-qzrh6"
Aug 13 01:09:59.035802 kubelet[2599]: E0813 01:09:59.035751 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-214-143"
Aug 13 01:09:59.035802 kubelet[2599]: E0813 01:09:59.035759 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-214-143"
Aug 13 01:09:59.035802 kubelet[2599]: I0813 01:09:59.035768 2599 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node"
Aug 13 01:10:01.369320 systemd[1]: Started sshd@48-172.234.214.143:22-139.178.89.65:37684.service - OpenSSH per-connection server daemon (139.178.89.65:37684).
Aug 13 01:10:01.702707 sshd[4603]: Accepted publickey for core from 139.178.89.65 port 37684 ssh2: RSA SHA256:oX6/jK8RmEU+rk8eVm22B42TdBvg0UX27UDC2BuKWWY
Aug 13 01:10:01.704571 sshd-session[4603]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:10:01.710138 systemd-logind[1460]: New session 49 of user core.
Aug 13 01:10:01.715184 systemd[1]: Started session-49.scope - Session 49 of User core.
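Note that the two coredns pods swap places in the ranked list across cycles while every other pod keeps its position, which is the signature of a usage-sensitive ordering over otherwise identical pods. A hedged approximation of a disk-pressure ranking (pods over their requests first, then lower priority, then heavier usage; this is not the kubelet's exact comparator):

// Illustrative multi-key ordering for ephemeral-storage eviction.
package main

import (
	"fmt"
	"sort"
)

type candidate struct {
	name         string
	overRequests bool  // using more ephemeral storage than requested
	priority     int32
	usageBytes   int64
}

func rankForEviction(cs []candidate) {
	sort.SliceStable(cs, func(i, j int) bool {
		a, b := cs[i], cs[j]
		if a.overRequests != b.overRequests {
			return a.overRequests // offenders first
		}
		if a.priority != b.priority {
			return a.priority < b.priority // lower priority first
		}
		return a.usageBytes > b.usageBytes // then heaviest users
	})
}

func main() {
	cs := []candidate{
		{"kube-system/coredns-7c65d6cfc9-f7p9h", false, 2_000_000_000, 11 << 20},
		{"kube-system/coredns-7c65d6cfc9-bghtd", false, 2_000_000_000, 12 << 20},
	}
	rankForEviction(cs)
	for _, c := range cs {
		fmt.Println(c.name) // bghtd first: equal priority, higher usage
	}
}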
Aug 13 01:10:01.821862 kubelet[2599]: E0813 01:10:01.821566 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Aug 13 01:10:02.006556 sshd[4605]: Connection closed by 139.178.89.65 port 37684
Aug 13 01:10:02.007208 sshd-session[4603]: pam_unix(sshd:session): session closed for user core
Aug 13 01:10:02.011200 systemd-logind[1460]: Session 49 logged out. Waiting for processes to exit.
Aug 13 01:10:02.012642 systemd[1]: sshd@48-172.234.214.143:22-139.178.89.65:37684.service: Deactivated successfully.
Aug 13 01:10:02.015065 systemd[1]: session-49.scope: Deactivated successfully.
Aug 13 01:10:02.016615 systemd-logind[1460]: Removed session 49.
Aug 13 01:10:07.081268 systemd[1]: Started sshd@49-172.234.214.143:22-139.178.89.65:37686.service - OpenSSH per-connection server daemon (139.178.89.65:37686).
Aug 13 01:10:07.416709 sshd[4617]: Accepted publickey for core from 139.178.89.65 port 37686 ssh2: RSA SHA256:oX6/jK8RmEU+rk8eVm22B42TdBvg0UX27UDC2BuKWWY
Aug 13 01:10:07.418549 sshd-session[4617]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:10:07.423948 systemd-logind[1460]: New session 50 of user core.
Aug 13 01:10:07.427175 systemd[1]: Started session-50.scope - Session 50 of User core.
Aug 13 01:10:07.718753 sshd[4619]: Connection closed by 139.178.89.65 port 37686
Aug 13 01:10:07.719526 sshd-session[4617]: pam_unix(sshd:session): session closed for user core
Aug 13 01:10:07.724324 systemd-logind[1460]: Session 50 logged out. Waiting for processes to exit.
Aug 13 01:10:07.725393 systemd[1]: sshd@49-172.234.214.143:22-139.178.89.65:37686.service: Deactivated successfully.
Aug 13 01:10:07.727250 systemd[1]: session-50.scope: Deactivated successfully.
Aug 13 01:10:07.728281 systemd-logind[1460]: Removed session 50.
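The recurring dns.go:153 error says the node's resolv.conf lists more nameservers than the classic resolver limit of three, so the kubelet applies only the first three (the "applied nameserver line" above) and warns about the rest. A sketch of that trimming; the parsing is deliberately simplified and the fourth server in the example is a hypothetical extra:

// Keep at most three nameservers, mirroring the "Nameserver limits
// exceeded ... applied nameserver line" message. Simplified parsing.
package main

import (
	"fmt"
	"strings"
)

const maxNameservers = 3 // resolver limit enforced by the kubelet

func applyNameserverLimit(resolvConf string) (applied []string, exceeded bool) {
	for _, line := range strings.Split(resolvConf, "\n") {
		f := strings.Fields(line)
		if len(f) >= 2 && f[0] == "nameserver" {
			applied = append(applied, f[1])
		}
	}
	if len(applied) > maxNameservers {
		return applied[:maxNameservers], true
	}
	return applied, false
}

func main() {
	// Last entry is hypothetical; the first three match the log.
	conf := "nameserver 172.232.0.13\nnameserver 172.232.0.22\nnameserver 172.232.0.9\nnameserver 192.0.2.1\n"
	applied, exceeded := applyNameserverLimit(conf)
	fmt.Println(exceeded, strings.Join(applied, " ")) // true 172.232.0.13 172.232.0.22 172.232.0.9
}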
Aug 13 01:10:09.055823 kubelet[2599]: I0813 01:10:09.055782 2599 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
Aug 13 01:10:09.055823 kubelet[2599]: I0813 01:10:09.055824 2599 container_gc.go:88] "Attempting to delete unused containers"
Aug 13 01:10:09.057236 kubelet[2599]: I0813 01:10:09.057207 2599 image_gc_manager.go:431] "Attempting to delete unused images"
Aug 13 01:10:09.067030 kubelet[2599]: I0813 01:10:09.067010 2599 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
Aug 13 01:10:09.067172 kubelet[2599]: I0813 01:10:09.067150 2599 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-5d85765b45-x724r","kube-system/coredns-7c65d6cfc9-bghtd","kube-system/coredns-7c65d6cfc9-f7p9h","kube-system/cilium-tp96b","kube-system/kube-controller-manager-172-234-214-143","kube-system/kube-proxy-qzrh6","kube-system/kube-apiserver-172-234-214-143","kube-system/kube-scheduler-172-234-214-143"]
Aug 13 01:10:09.067207 kubelet[2599]: E0813 01:10:09.067181 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-5d85765b45-x724r"
Aug 13 01:10:09.067207 kubelet[2599]: E0813 01:10:09.067193 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-bghtd"
Aug 13 01:10:09.067207 kubelet[2599]: E0813 01:10:09.067201 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-f7p9h"
Aug 13 01:10:09.067269 kubelet[2599]: E0813 01:10:09.067209 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-tp96b"
Aug 13 01:10:09.067269 kubelet[2599]: E0813 01:10:09.067218 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-214-143"
Aug 13 01:10:09.067269 kubelet[2599]: E0813 01:10:09.067226 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-qzrh6"
Aug 13 01:10:09.067269 kubelet[2599]: E0813 01:10:09.067234 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-214-143"
Aug 13 01:10:09.067269 kubelet[2599]: E0813 01:10:09.067242 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-214-143"
Aug 13 01:10:09.067269 kubelet[2599]: I0813 01:10:09.067251 2599 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node"
Aug 13 01:10:12.791396 systemd[1]: Started sshd@50-172.234.214.143:22-139.178.89.65:39574.service - OpenSSH per-connection server daemon (139.178.89.65:39574).
Aug 13 01:10:12.821977 kubelet[2599]: E0813 01:10:12.821947 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Aug 13 01:10:13.121789 sshd[4631]: Accepted publickey for core from 139.178.89.65 port 39574 ssh2: RSA SHA256:oX6/jK8RmEU+rk8eVm22B42TdBvg0UX27UDC2BuKWWY
Aug 13 01:10:13.123986 sshd-session[4631]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:10:13.129385 systemd-logind[1460]: New session 51 of user core.
Aug 13 01:10:13.135171 systemd[1]: Started session-51.scope - Session 51 of User core.
Aug 13 01:10:13.424398 sshd[4633]: Connection closed by 139.178.89.65 port 39574
Aug 13 01:10:13.425536 sshd-session[4631]: pam_unix(sshd:session): session closed for user core
Aug 13 01:10:13.429201 systemd-logind[1460]: Session 51 logged out. Waiting for processes to exit.
Aug 13 01:10:13.429955 systemd[1]: sshd@50-172.234.214.143:22-139.178.89.65:39574.service: Deactivated successfully.
Aug 13 01:10:13.432854 systemd[1]: session-51.scope: Deactivated successfully.
Aug 13 01:10:13.433801 systemd-logind[1460]: Removed session 51.
Aug 13 01:10:14.821434 kubelet[2599]: E0813 01:10:14.821385 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Aug 13 01:10:18.490249 systemd[1]: Started sshd@51-172.234.214.143:22-139.178.89.65:39582.service - OpenSSH per-connection server daemon (139.178.89.65:39582).
Aug 13 01:10:18.809581 sshd[4648]: Accepted publickey for core from 139.178.89.65 port 39582 ssh2: RSA SHA256:oX6/jK8RmEU+rk8eVm22B42TdBvg0UX27UDC2BuKWWY
Aug 13 01:10:18.811288 sshd-session[4648]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:10:18.816825 systemd-logind[1460]: New session 52 of user core.
Aug 13 01:10:18.820198 systemd[1]: Started session-52.scope - Session 52 of User core.
Aug 13 01:10:19.082125 kubelet[2599]: I0813 01:10:19.082011 2599 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
Aug 13 01:10:19.082481 kubelet[2599]: I0813 01:10:19.082464 2599 container_gc.go:88] "Attempting to delete unused containers"
Aug 13 01:10:19.083563 kubelet[2599]: I0813 01:10:19.083502 2599 image_gc_manager.go:431] "Attempting to delete unused images"
Aug 13 01:10:19.091836 kubelet[2599]: I0813 01:10:19.091816 2599 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
Aug 13 01:10:19.091917 kubelet[2599]: I0813 01:10:19.091902 2599 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-5d85765b45-x724r","kube-system/coredns-7c65d6cfc9-bghtd","kube-system/coredns-7c65d6cfc9-f7p9h","kube-system/cilium-tp96b","kube-system/kube-controller-manager-172-234-214-143","kube-system/kube-proxy-qzrh6","kube-system/kube-apiserver-172-234-214-143","kube-system/kube-scheduler-172-234-214-143"]
Aug 13 01:10:19.091948 kubelet[2599]: E0813 01:10:19.091931 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-5d85765b45-x724r"
Aug 13 01:10:19.091948 kubelet[2599]: E0813 01:10:19.091942 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-bghtd"
Aug 13 01:10:19.091997 kubelet[2599]: E0813 01:10:19.091949 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-f7p9h"
Aug 13 01:10:19.091997 kubelet[2599]: E0813 01:10:19.091957 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-tp96b"
Aug 13 01:10:19.091997 kubelet[2599]: E0813 01:10:19.091965 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-214-143"
Aug 13 01:10:19.091997 kubelet[2599]: E0813 01:10:19.091972 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-qzrh6"
Aug 13 01:10:19.091997 kubelet[2599]: E0813 01:10:19.091979 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-214-143"
Aug 13 01:10:19.091997 kubelet[2599]: E0813 01:10:19.091987 2599 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-214-143"
Aug 13 01:10:19.091997 kubelet[2599]: I0813 01:10:19.091995 2599 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node"
Aug 13 01:10:19.111263 sshd[4650]: Connection closed by 139.178.89.65 port 39582
Aug 13 01:10:19.111765 sshd-session[4648]: pam_unix(sshd:session): session closed for user core
Aug 13 01:10:19.115221 systemd[1]: sshd@51-172.234.214.143:22-139.178.89.65:39582.service: Deactivated successfully.
Aug 13 01:10:19.117271 systemd[1]: session-52.scope: Deactivated successfully.
Aug 13 01:10:19.118026 systemd-logind[1460]: Session 52 logged out. Waiting for processes to exit.
Aug 13 01:10:19.119356 systemd-logind[1460]: Removed session 52.
Aug 13 01:10:24.176246 systemd[1]: Started sshd@52-172.234.214.143:22-139.178.89.65:58410.service - OpenSSH per-connection server daemon (139.178.89.65:58410).
Aug 13 01:10:24.499534 sshd[4662]: Accepted publickey for core from 139.178.89.65 port 58410 ssh2: RSA SHA256:oX6/jK8RmEU+rk8eVm22B42TdBvg0UX27UDC2BuKWWY
Aug 13 01:10:24.501082 sshd-session[4662]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:10:24.506843 systemd-logind[1460]: New session 53 of user core.
Aug 13 01:10:24.514184 systemd[1]: Started session-53.scope - Session 53 of User core.
Aug 13 01:10:24.793797 sshd[4664]: Connection closed by 139.178.89.65 port 58410
Aug 13 01:10:24.794641 sshd-session[4662]: pam_unix(sshd:session): session closed for user core
Aug 13 01:10:24.798478 systemd[1]: sshd@52-172.234.214.143:22-139.178.89.65:58410.service: Deactivated successfully.
Aug 13 01:10:24.801733 systemd[1]: session-53.scope: Deactivated successfully.
Aug 13 01:10:24.803285 systemd-logind[1460]: Session 53 logged out. Waiting for processes to exit.
Aug 13 01:10:24.804961 systemd-logind[1460]: Removed session 53.
Aug 13 01:10:24.821656 kubelet[2599]: E0813 01:10:24.821629 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Aug 13 01:10:24.862271 systemd[1]: Started sshd@53-172.234.214.143:22-139.178.89.65:58420.service - OpenSSH per-connection server daemon (139.178.89.65:58420).
Aug 13 01:10:25.197653 sshd[4676]: Accepted publickey for core from 139.178.89.65 port 58420 ssh2: RSA SHA256:oX6/jK8RmEU+rk8eVm22B42TdBvg0UX27UDC2BuKWWY
Aug 13 01:10:25.199683 sshd-session[4676]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:10:25.205679 systemd-logind[1460]: New session 54 of user core.
Aug 13 01:10:25.213182 systemd[1]: Started session-54.scope - Session 54 of User core.
Aug 13 01:10:26.674957 kubelet[2599]: I0813 01:10:26.673562 2599 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-bghtd" podStartSLOduration=342.673537716 podStartE2EDuration="5m42.673537716s" podCreationTimestamp="2025-08-13 01:04:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 01:05:01.970481502 +0000 UTC m=+24.257510359" watchObservedRunningTime="2025-08-13 01:10:26.673537716 +0000 UTC m=+348.960566573"
Aug 13 01:10:26.689758 containerd[1485]: time="2025-08-13T01:10:26.688035949Z" level=info msg="StopContainer for \"33d181215ef4f16ff62334ecad17b45cdc31317b860babd042c0dec889470dbd\" with timeout 30 (s)"
Aug 13 01:10:26.691000 containerd[1485]: time="2025-08-13T01:10:26.690976938Z" level=info msg="Stop container \"33d181215ef4f16ff62334ecad17b45cdc31317b860babd042c0dec889470dbd\" with signal terminated"
Aug 13 01:10:26.709694 systemd[1]: cri-containerd-33d181215ef4f16ff62334ecad17b45cdc31317b860babd042c0dec889470dbd.scope: Deactivated successfully.
Aug 13 01:10:26.725356 containerd[1485]: time="2025-08-13T01:10:26.725285364Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Aug 13 01:10:26.736878 containerd[1485]: time="2025-08-13T01:10:26.736693419Z" level=info msg="StopContainer for \"6d1a64a45892fcee777fbb8145b23c94527d4ac109a1d4fcbd276bbbe63db264\" with timeout 2 (s)"
Aug 13 01:10:26.737550 containerd[1485]: time="2025-08-13T01:10:26.737460897Z" level=info msg="Stop container \"6d1a64a45892fcee777fbb8145b23c94527d4ac109a1d4fcbd276bbbe63db264\" with signal terminated"
Aug 13 01:10:26.742647 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-33d181215ef4f16ff62334ecad17b45cdc31317b860babd042c0dec889470dbd-rootfs.mount: Deactivated successfully.
Aug 13 01:10:26.747777 systemd-networkd[1391]: lxc_health: Link DOWN
Aug 13 01:10:26.747788 systemd-networkd[1391]: lxc_health: Lost carrier
Aug 13 01:10:26.758899 containerd[1485]: time="2025-08-13T01:10:26.758714644Z" level=info msg="shim disconnected" id=33d181215ef4f16ff62334ecad17b45cdc31317b860babd042c0dec889470dbd namespace=k8s.io
Aug 13 01:10:26.758899 containerd[1485]: time="2025-08-13T01:10:26.758767203Z" level=warning msg="cleaning up after shim disconnected" id=33d181215ef4f16ff62334ecad17b45cdc31317b860babd042c0dec889470dbd namespace=k8s.io
Aug 13 01:10:26.758899 containerd[1485]: time="2025-08-13T01:10:26.758777963Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 13 01:10:26.767433 systemd[1]: cri-containerd-6d1a64a45892fcee777fbb8145b23c94527d4ac109a1d4fcbd276bbbe63db264.scope: Deactivated successfully.
Aug 13 01:10:26.767794 systemd[1]: cri-containerd-6d1a64a45892fcee777fbb8145b23c94527d4ac109a1d4fcbd276bbbe63db264.scope: Consumed 7.128s CPU time, 124.4M memory peak, 136K read from disk, 13.3M written to disk.
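"StopContainer ... with timeout 30 (s)" followed by "Stop container ... with signal terminated" is the standard CRI graceful stop: the runtime delivers SIGTERM and escalates to SIGKILL once the timeout lapses (the cilium-agent container above gets only 2 s). A hedged client-side sketch against the CRI RuntimeService; the socket path is the usual containerd endpoint but is an assumption for any given node:

// Graceful stop through the CRI runtime service, matching the
// "StopContainer ... with timeout 30 (s)" lines in the log.
package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.NewClient("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	// SIGTERM now; SIGKILL after Timeout seconds if the process lingers.
	_, err = rt.StopContainer(context.Background(), &runtimeapi.StopContainerRequest{
		ContainerId: "33d181215ef4f16ff62334ecad17b45cdc31317b860babd042c0dec889470dbd",
		Timeout:     30,
	})
	if err != nil {
		log.Fatal(err)
	}
}

The cri-containerd-*.scope "Deactivated successfully" and "Consumed ... CPU time" lines that follow are systemd accounting for the per-container scope units as the tasks exit.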
Aug 13 01:10:26.786862 containerd[1485]: time="2025-08-13T01:10:26.786500985Z" level=info msg="StopContainer for \"33d181215ef4f16ff62334ecad17b45cdc31317b860babd042c0dec889470dbd\" returns successfully"
Aug 13 01:10:26.787855 containerd[1485]: time="2025-08-13T01:10:26.787655151Z" level=info msg="StopPodSandbox for \"5fa073d8279cc1096049090f7058bdaa45c287a594730bb3f57d0f9b50b4d7e7\""
Aug 13 01:10:26.787855 containerd[1485]: time="2025-08-13T01:10:26.787686451Z" level=info msg="Container to stop \"33d181215ef4f16ff62334ecad17b45cdc31317b860babd042c0dec889470dbd\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 13 01:10:26.797400 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5fa073d8279cc1096049090f7058bdaa45c287a594730bb3f57d0f9b50b4d7e7-shm.mount: Deactivated successfully.
Aug 13 01:10:26.799461 systemd[1]: cri-containerd-5fa073d8279cc1096049090f7058bdaa45c287a594730bb3f57d0f9b50b4d7e7.scope: Deactivated successfully.
Aug 13 01:10:26.810556 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6d1a64a45892fcee777fbb8145b23c94527d4ac109a1d4fcbd276bbbe63db264-rootfs.mount: Deactivated successfully.
Aug 13 01:10:26.822001 containerd[1485]: time="2025-08-13T01:10:26.821797827Z" level=info msg="shim disconnected" id=6d1a64a45892fcee777fbb8145b23c94527d4ac109a1d4fcbd276bbbe63db264 namespace=k8s.io
Aug 13 01:10:26.822001 containerd[1485]: time="2025-08-13T01:10:26.821857227Z" level=warning msg="cleaning up after shim disconnected" id=6d1a64a45892fcee777fbb8145b23c94527d4ac109a1d4fcbd276bbbe63db264 namespace=k8s.io
Aug 13 01:10:26.822001 containerd[1485]: time="2025-08-13T01:10:26.821866367Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 13 01:10:26.848464 containerd[1485]: time="2025-08-13T01:10:26.848346844Z" level=info msg="StopContainer for \"6d1a64a45892fcee777fbb8145b23c94527d4ac109a1d4fcbd276bbbe63db264\" returns successfully"
Aug 13 01:10:26.849257 containerd[1485]: time="2025-08-13T01:10:26.848899022Z" level=info msg="StopPodSandbox for \"06e5f99e5f95ddb4e0775fdca722039a7091a7ce279672bdd5236c38ee1abfae\""
Aug 13 01:10:26.849257 containerd[1485]: time="2025-08-13T01:10:26.848928022Z" level=info msg="Container to stop \"1d0f4437895f819cd28aca80281847f6ddd16e4619dff11f15a3f4a2bedf5d7f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 13 01:10:26.849257 containerd[1485]: time="2025-08-13T01:10:26.848964022Z" level=info msg="Container to stop \"6d1a64a45892fcee777fbb8145b23c94527d4ac109a1d4fcbd276bbbe63db264\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 13 01:10:26.849257 containerd[1485]: time="2025-08-13T01:10:26.848973212Z" level=info msg="Container to stop \"b379a8487762e02193cb013c65f7bb691568d3c5451a10a3ff374ab63f6ffbd4\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 13 01:10:26.849257 containerd[1485]: time="2025-08-13T01:10:26.848983062Z" level=info msg="Container to stop \"198114ca8afe2a85ce8cc9d9821288dbf057f184f6e3ce12e1f113bc14aa4aa5\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 13 01:10:26.849257 containerd[1485]: time="2025-08-13T01:10:26.848991582Z" level=info msg="Container to stop \"d0d9152956c2a133c4f87cb7a8090bce89893205b53e418614150694c86d5d57\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 13 01:10:26.851211 containerd[1485]: time="2025-08-13T01:10:26.851154643Z" level=info msg="shim disconnected" id=5fa073d8279cc1096049090f7058bdaa45c287a594730bb3f57d0f9b50b4d7e7 namespace=k8s.io
Aug 13 01:10:26.851211 containerd[1485]: time="2025-08-13T01:10:26.851202303Z" level=warning msg="cleaning up after shim disconnected" id=5fa073d8279cc1096049090f7058bdaa45c287a594730bb3f57d0f9b50b4d7e7 namespace=k8s.io
Aug 13 01:10:26.851211 containerd[1485]: time="2025-08-13T01:10:26.851212823Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 13 01:10:26.858537 systemd[1]: cri-containerd-06e5f99e5f95ddb4e0775fdca722039a7091a7ce279672bdd5236c38ee1abfae.scope: Deactivated successfully.
Aug 13 01:10:26.870248 containerd[1485]: time="2025-08-13T01:10:26.870213688Z" level=warning msg="cleanup warnings time=\"2025-08-13T01:10:26Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Aug 13 01:10:26.872747 containerd[1485]: time="2025-08-13T01:10:26.872577989Z" level=info msg="TearDown network for sandbox \"5fa073d8279cc1096049090f7058bdaa45c287a594730bb3f57d0f9b50b4d7e7\" successfully"
Aug 13 01:10:26.872747 containerd[1485]: time="2025-08-13T01:10:26.872614429Z" level=info msg="StopPodSandbox for \"5fa073d8279cc1096049090f7058bdaa45c287a594730bb3f57d0f9b50b4d7e7\" returns successfully"
Aug 13 01:10:26.898698 containerd[1485]: time="2025-08-13T01:10:26.898623488Z" level=info msg="shim disconnected" id=06e5f99e5f95ddb4e0775fdca722039a7091a7ce279672bdd5236c38ee1abfae namespace=k8s.io
Aug 13 01:10:26.898698 containerd[1485]: time="2025-08-13T01:10:26.898695208Z" level=warning msg="cleaning up after shim disconnected" id=06e5f99e5f95ddb4e0775fdca722039a7091a7ce279672bdd5236c38ee1abfae namespace=k8s.io
Aug 13 01:10:26.899034 containerd[1485]: time="2025-08-13T01:10:26.898705338Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 13 01:10:26.912659 containerd[1485]: time="2025-08-13T01:10:26.912615093Z" level=warning msg="cleanup warnings time=\"2025-08-13T01:10:26Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Aug 13 01:10:26.914185 containerd[1485]: time="2025-08-13T01:10:26.914073637Z" level=info msg="TearDown network for sandbox \"06e5f99e5f95ddb4e0775fdca722039a7091a7ce279672bdd5236c38ee1abfae\" successfully"
Aug 13 01:10:26.914185 containerd[1485]: time="2025-08-13T01:10:26.914100797Z" level=info msg="StopPodSandbox for \"06e5f99e5f95ddb4e0775fdca722039a7091a7ce279672bdd5236c38ee1abfae\" returns successfully"
Aug 13 01:10:26.929396 kubelet[2599]: I0813 01:10:26.929123 2599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/48b2e51e-7897-4b6b-9af1-5b6cd55d1227-cilium-config-path\") pod \"48b2e51e-7897-4b6b-9af1-5b6cd55d1227\" (UID: \"48b2e51e-7897-4b6b-9af1-5b6cd55d1227\") "
Aug 13 01:10:26.930631 kubelet[2599]: I0813 01:10:26.929755 2599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g5zmj\" (UniqueName: \"kubernetes.io/projected/48b2e51e-7897-4b6b-9af1-5b6cd55d1227-kube-api-access-g5zmj\") pod \"48b2e51e-7897-4b6b-9af1-5b6cd55d1227\" (UID: \"48b2e51e-7897-4b6b-9af1-5b6cd55d1227\") "
Aug 13 01:10:26.937427 kubelet[2599]: I0813 01:10:26.937385 2599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48b2e51e-7897-4b6b-9af1-5b6cd55d1227-kube-api-access-g5zmj" (OuterVolumeSpecName: "kube-api-access-g5zmj") pod "48b2e51e-7897-4b6b-9af1-5b6cd55d1227" (UID: "48b2e51e-7897-4b6b-9af1-5b6cd55d1227"). InnerVolumeSpecName "kube-api-access-g5zmj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Aug 13 01:10:26.937844 kubelet[2599]: I0813 01:10:26.937805 2599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48b2e51e-7897-4b6b-9af1-5b6cd55d1227-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "48b2e51e-7897-4b6b-9af1-5b6cd55d1227" (UID: "48b2e51e-7897-4b6b-9af1-5b6cd55d1227"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Aug 13 01:10:27.030241 kubelet[2599]: I0813 01:10:27.030180 2599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0619f9c5-cbd0-435a-928c-03a8fb6c230e-xtables-lock\") pod \"0619f9c5-cbd0-435a-928c-03a8fb6c230e\" (UID: \"0619f9c5-cbd0-435a-928c-03a8fb6c230e\") "
Aug 13 01:10:27.030318 kubelet[2599]: I0813 01:10:27.030248 2599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rrwmn\" (UniqueName: \"kubernetes.io/projected/0619f9c5-cbd0-435a-928c-03a8fb6c230e-kube-api-access-rrwmn\") pod \"0619f9c5-cbd0-435a-928c-03a8fb6c230e\" (UID: \"0619f9c5-cbd0-435a-928c-03a8fb6c230e\") "
Aug 13 01:10:27.030318 kubelet[2599]: I0813 01:10:27.030272 2599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0619f9c5-cbd0-435a-928c-03a8fb6c230e-hostproc\") pod \"0619f9c5-cbd0-435a-928c-03a8fb6c230e\" (UID: \"0619f9c5-cbd0-435a-928c-03a8fb6c230e\") "
Aug 13 01:10:27.030318 kubelet[2599]: I0813 01:10:27.030287 2599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0619f9c5-cbd0-435a-928c-03a8fb6c230e-bpf-maps\") pod \"0619f9c5-cbd0-435a-928c-03a8fb6c230e\" (UID: \"0619f9c5-cbd0-435a-928c-03a8fb6c230e\") "
Aug 13 01:10:27.030318 kubelet[2599]: I0813 01:10:27.030307 2599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0619f9c5-cbd0-435a-928c-03a8fb6c230e-clustermesh-secrets\") pod \"0619f9c5-cbd0-435a-928c-03a8fb6c230e\" (UID: \"0619f9c5-cbd0-435a-928c-03a8fb6c230e\") "
Aug 13 01:10:27.030430 kubelet[2599]: I0813 01:10:27.030323 2599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0619f9c5-cbd0-435a-928c-03a8fb6c230e-host-proc-sys-kernel\") pod \"0619f9c5-cbd0-435a-928c-03a8fb6c230e\" (UID: \"0619f9c5-cbd0-435a-928c-03a8fb6c230e\") "
Aug 13 01:10:27.030430 kubelet[2599]: I0813 01:10:27.030342 2599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0619f9c5-cbd0-435a-928c-03a8fb6c230e-cni-path\") pod \"0619f9c5-cbd0-435a-928c-03a8fb6c230e\" (UID: \"0619f9c5-cbd0-435a-928c-03a8fb6c230e\") "
Aug 13 01:10:27.030430 kubelet[2599]: I0813 01:10:27.030359 2599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0619f9c5-cbd0-435a-928c-03a8fb6c230e-cilium-config-path\") pod \"0619f9c5-cbd0-435a-928c-03a8fb6c230e\" (UID: \"0619f9c5-cbd0-435a-928c-03a8fb6c230e\") "
Aug 13 01:10:27.030430 kubelet[2599]: I0813 01:10:27.030377 2599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0619f9c5-cbd0-435a-928c-03a8fb6c230e-cilium-run\") pod \"0619f9c5-cbd0-435a-928c-03a8fb6c230e\" (UID: \"0619f9c5-cbd0-435a-928c-03a8fb6c230e\") "
Aug 13 01:10:27.030430 kubelet[2599]: I0813 01:10:27.030390 2599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0619f9c5-cbd0-435a-928c-03a8fb6c230e-etc-cni-netd\") pod \"0619f9c5-cbd0-435a-928c-03a8fb6c230e\" (UID: \"0619f9c5-cbd0-435a-928c-03a8fb6c230e\") "
Aug 13 01:10:27.030430 kubelet[2599]: I0813 01:10:27.030405 2599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0619f9c5-cbd0-435a-928c-03a8fb6c230e-hubble-tls\") pod \"0619f9c5-cbd0-435a-928c-03a8fb6c230e\" (UID: \"0619f9c5-cbd0-435a-928c-03a8fb6c230e\") "
Aug 13 01:10:27.030628 kubelet[2599]: I0813 01:10:27.030421 2599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0619f9c5-cbd0-435a-928c-03a8fb6c230e-lib-modules\") pod \"0619f9c5-cbd0-435a-928c-03a8fb6c230e\" (UID: \"0619f9c5-cbd0-435a-928c-03a8fb6c230e\") "
Aug 13 01:10:27.030628 kubelet[2599]: I0813 01:10:27.030436 2599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0619f9c5-cbd0-435a-928c-03a8fb6c230e-host-proc-sys-net\") pod \"0619f9c5-cbd0-435a-928c-03a8fb6c230e\" (UID: \"0619f9c5-cbd0-435a-928c-03a8fb6c230e\") "
Aug 13 01:10:27.030628 kubelet[2599]: I0813 01:10:27.030457 2599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0619f9c5-cbd0-435a-928c-03a8fb6c230e-cilium-cgroup\") pod \"0619f9c5-cbd0-435a-928c-03a8fb6c230e\" (UID: \"0619f9c5-cbd0-435a-928c-03a8fb6c230e\") "
Aug 13 01:10:27.030628 kubelet[2599]: I0813 01:10:27.030506 2599 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/48b2e51e-7897-4b6b-9af1-5b6cd55d1227-cilium-config-path\") on node \"172-234-214-143\" DevicePath \"\""
Aug 13 01:10:27.030628 kubelet[2599]: I0813 01:10:27.030517 2599 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g5zmj\" (UniqueName: \"kubernetes.io/projected/48b2e51e-7897-4b6b-9af1-5b6cd55d1227-kube-api-access-g5zmj\") on node \"172-234-214-143\" DevicePath \"\""
Aug 13 01:10:27.030628 kubelet[2599]: I0813 01:10:27.030592 2599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0619f9c5-cbd0-435a-928c-03a8fb6c230e-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "0619f9c5-cbd0-435a-928c-03a8fb6c230e" (UID: "0619f9c5-cbd0-435a-928c-03a8fb6c230e"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 01:10:27.030824 kubelet[2599]: I0813 01:10:27.030635 2599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0619f9c5-cbd0-435a-928c-03a8fb6c230e-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "0619f9c5-cbd0-435a-928c-03a8fb6c230e" (UID: "0619f9c5-cbd0-435a-928c-03a8fb6c230e"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 01:10:27.033999 kubelet[2599]: I0813 01:10:27.033452 2599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0619f9c5-cbd0-435a-928c-03a8fb6c230e-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "0619f9c5-cbd0-435a-928c-03a8fb6c230e" (UID: "0619f9c5-cbd0-435a-928c-03a8fb6c230e"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 01:10:27.033999 kubelet[2599]: I0813 01:10:27.033519 2599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0619f9c5-cbd0-435a-928c-03a8fb6c230e-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "0619f9c5-cbd0-435a-928c-03a8fb6c230e" (UID: "0619f9c5-cbd0-435a-928c-03a8fb6c230e"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 01:10:27.033999 kubelet[2599]: I0813 01:10:27.033788 2599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0619f9c5-cbd0-435a-928c-03a8fb6c230e-hostproc" (OuterVolumeSpecName: "hostproc") pod "0619f9c5-cbd0-435a-928c-03a8fb6c230e" (UID: "0619f9c5-cbd0-435a-928c-03a8fb6c230e"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 01:10:27.033999 kubelet[2599]: I0813 01:10:27.033810 2599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0619f9c5-cbd0-435a-928c-03a8fb6c230e-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "0619f9c5-cbd0-435a-928c-03a8fb6c230e" (UID: "0619f9c5-cbd0-435a-928c-03a8fb6c230e"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 01:10:27.036162 kubelet[2599]: I0813 01:10:27.035846 2599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0619f9c5-cbd0-435a-928c-03a8fb6c230e-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "0619f9c5-cbd0-435a-928c-03a8fb6c230e" (UID: "0619f9c5-cbd0-435a-928c-03a8fb6c230e"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 01:10:27.036162 kubelet[2599]: I0813 01:10:27.035883 2599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0619f9c5-cbd0-435a-928c-03a8fb6c230e-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "0619f9c5-cbd0-435a-928c-03a8fb6c230e" (UID: "0619f9c5-cbd0-435a-928c-03a8fb6c230e"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 01:10:27.036762 kubelet[2599]: I0813 01:10:27.036721 2599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0619f9c5-cbd0-435a-928c-03a8fb6c230e-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "0619f9c5-cbd0-435a-928c-03a8fb6c230e" (UID: "0619f9c5-cbd0-435a-928c-03a8fb6c230e"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 01:10:27.037372 kubelet[2599]: I0813 01:10:27.036880 2599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0619f9c5-cbd0-435a-928c-03a8fb6c230e-cni-path" (OuterVolumeSpecName: "cni-path") pod "0619f9c5-cbd0-435a-928c-03a8fb6c230e" (UID: "0619f9c5-cbd0-435a-928c-03a8fb6c230e"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 01:10:27.038551 kubelet[2599]: I0813 01:10:27.038533 2599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0619f9c5-cbd0-435a-928c-03a8fb6c230e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "0619f9c5-cbd0-435a-928c-03a8fb6c230e" (UID: "0619f9c5-cbd0-435a-928c-03a8fb6c230e"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Aug 13 01:10:27.038719 kubelet[2599]: I0813 01:10:27.038666 2599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0619f9c5-cbd0-435a-928c-03a8fb6c230e-kube-api-access-rrwmn" (OuterVolumeSpecName: "kube-api-access-rrwmn") pod "0619f9c5-cbd0-435a-928c-03a8fb6c230e" (UID: "0619f9c5-cbd0-435a-928c-03a8fb6c230e"). InnerVolumeSpecName "kube-api-access-rrwmn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Aug 13 01:10:27.040023 kubelet[2599]: I0813 01:10:27.039984 2599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0619f9c5-cbd0-435a-928c-03a8fb6c230e-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "0619f9c5-cbd0-435a-928c-03a8fb6c230e" (UID: "0619f9c5-cbd0-435a-928c-03a8fb6c230e"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Aug 13 01:10:27.040151 kubelet[2599]: I0813 01:10:27.040121 2599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0619f9c5-cbd0-435a-928c-03a8fb6c230e-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "0619f9c5-cbd0-435a-928c-03a8fb6c230e" (UID: "0619f9c5-cbd0-435a-928c-03a8fb6c230e"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Aug 13 01:10:27.135071 kubelet[2599]: I0813 01:10:27.135016 2599 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0619f9c5-cbd0-435a-928c-03a8fb6c230e-etc-cni-netd\") on node \"172-234-214-143\" DevicePath \"\""
Aug 13 01:10:27.135071 kubelet[2599]: I0813 01:10:27.135054 2599 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0619f9c5-cbd0-435a-928c-03a8fb6c230e-hubble-tls\") on node \"172-234-214-143\" DevicePath \"\""
Aug 13 01:10:27.135071 kubelet[2599]: I0813 01:10:27.135065 2599 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0619f9c5-cbd0-435a-928c-03a8fb6c230e-lib-modules\") on node \"172-234-214-143\" DevicePath \"\""
Aug 13 01:10:27.135071 kubelet[2599]: I0813 01:10:27.135074 2599 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0619f9c5-cbd0-435a-928c-03a8fb6c230e-host-proc-sys-net\") on node \"172-234-214-143\" DevicePath \"\""
Aug 13 01:10:27.135288 kubelet[2599]: I0813 01:10:27.135090 2599 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0619f9c5-cbd0-435a-928c-03a8fb6c230e-cilium-cgroup\") on node \"172-234-214-143\" DevicePath \"\""
Aug 13 01:10:27.135288 kubelet[2599]: I0813 01:10:27.135099 2599 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0619f9c5-cbd0-435a-928c-03a8fb6c230e-xtables-lock\") on node \"172-234-214-143\" DevicePath \"\""
Aug 13 01:10:27.135288 kubelet[2599]: I0813 01:10:27.135109 2599 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rrwmn\" (UniqueName: \"kubernetes.io/projected/0619f9c5-cbd0-435a-928c-03a8fb6c230e-kube-api-access-rrwmn\") on node \"172-234-214-143\" DevicePath \"\""
Aug 13 01:10:27.135288 kubelet[2599]: I0813 01:10:27.135118 2599 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0619f9c5-cbd0-435a-928c-03a8fb6c230e-hostproc\") on node \"172-234-214-143\" DevicePath \"\""
Aug 13 01:10:27.135288 kubelet[2599]: I0813 01:10:27.135132 2599 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0619f9c5-cbd0-435a-928c-03a8fb6c230e-host-proc-sys-kernel\") on node \"172-234-214-143\" DevicePath \"\""
Aug 13 01:10:27.135288 kubelet[2599]: I0813 01:10:27.135141 2599 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0619f9c5-cbd0-435a-928c-03a8fb6c230e-cni-path\") on node \"172-234-214-143\" DevicePath \"\""
Aug 13 01:10:27.135288 kubelet[2599]: I0813 01:10:27.135148 2599 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0619f9c5-cbd0-435a-928c-03a8fb6c230e-cilium-config-path\") on node \"172-234-214-143\" DevicePath \"\""
Aug 13 01:10:27.135288 kubelet[2599]: I0813 01:10:27.135157 2599 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0619f9c5-cbd0-435a-928c-03a8fb6c230e-cilium-run\") on node \"172-234-214-143\" DevicePath \"\""
Aug 13 01:10:27.135507 kubelet[2599]: I0813 01:10:27.135165 2599 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0619f9c5-cbd0-435a-928c-03a8fb6c230e-bpf-maps\") on node \"172-234-214-143\" DevicePath \"\""
Aug 13 01:10:27.135507 kubelet[2599]: I0813 01:10:27.135172 2599 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0619f9c5-cbd0-435a-928c-03a8fb6c230e-clustermesh-secrets\") on node \"172-234-214-143\" DevicePath \"\""
Aug 13 01:10:27.576843 kubelet[2599]: I0813 01:10:27.576611 2599 scope.go:117] "RemoveContainer" containerID="33d181215ef4f16ff62334ecad17b45cdc31317b860babd042c0dec889470dbd"
Aug 13 01:10:27.580147 containerd[1485]: time="2025-08-13T01:10:27.580014771Z" level=info msg="RemoveContainer for \"33d181215ef4f16ff62334ecad17b45cdc31317b860babd042c0dec889470dbd\""
Aug 13 01:10:27.585230 containerd[1485]: time="2025-08-13T01:10:27.585168822Z" level=info msg="RemoveContainer for \"33d181215ef4f16ff62334ecad17b45cdc31317b860babd042c0dec889470dbd\" returns successfully"
Aug 13 01:10:27.586777 kubelet[2599]: I0813 01:10:27.586756 2599 scope.go:117] "RemoveContainer" containerID="33d181215ef4f16ff62334ecad17b45cdc31317b860babd042c0dec889470dbd"
Aug 13 01:10:27.587126 containerd[1485]: time="2025-08-13T01:10:27.586956355Z" level=error msg="ContainerStatus for \"33d181215ef4f16ff62334ecad17b45cdc31317b860babd042c0dec889470dbd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"33d181215ef4f16ff62334ecad17b45cdc31317b860babd042c0dec889470dbd\": not found"
Aug 13 01:10:27.587475 kubelet[2599]: E0813 01:10:27.587452 2599 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"33d181215ef4f16ff62334ecad17b45cdc31317b860babd042c0dec889470dbd\": not found" containerID="33d181215ef4f16ff62334ecad17b45cdc31317b860babd042c0dec889470dbd"
Aug 13 01:10:27.587624 kubelet[2599]: I0813 01:10:27.587481 2599 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"33d181215ef4f16ff62334ecad17b45cdc31317b860babd042c0dec889470dbd"} err="failed to get container status \"33d181215ef4f16ff62334ecad17b45cdc31317b860babd042c0dec889470dbd\": rpc error: code = NotFound desc = an error occurred when try to find container \"33d181215ef4f16ff62334ecad17b45cdc31317b860babd042c0dec889470dbd\": not found"
Aug 13 01:10:27.587624 kubelet[2599]: I0813 01:10:27.587551 2599 scope.go:117] "RemoveContainer" containerID="6d1a64a45892fcee777fbb8145b23c94527d4ac109a1d4fcbd276bbbe63db264"
Aug 13 01:10:27.588999 systemd[1]: Removed slice kubepods-besteffort-pod48b2e51e_7897_4b6b_9af1_5b6cd55d1227.slice - libcontainer container kubepods-besteffort-pod48b2e51e_7897_4b6b_9af1_5b6cd55d1227.slice.
Aug 13 01:10:27.590667 containerd[1485]: time="2025-08-13T01:10:27.590387901Z" level=info msg="RemoveContainer for \"6d1a64a45892fcee777fbb8145b23c94527d4ac109a1d4fcbd276bbbe63db264\""
Aug 13 01:10:27.591420 systemd[1]: Removed slice kubepods-burstable-pod0619f9c5_cbd0_435a_928c_03a8fb6c230e.slice - libcontainer container kubepods-burstable-pod0619f9c5_cbd0_435a_928c_03a8fb6c230e.slice.
Aug 13 01:10:27.591603 systemd[1]: kubepods-burstable-pod0619f9c5_cbd0_435a_928c_03a8fb6c230e.slice: Consumed 7.211s CPU time, 124.8M memory peak, 136K read from disk, 13.3M written to disk.
Aug 13 01:10:27.594263 containerd[1485]: time="2025-08-13T01:10:27.594205376Z" level=info msg="RemoveContainer for \"6d1a64a45892fcee777fbb8145b23c94527d4ac109a1d4fcbd276bbbe63db264\" returns successfully"
Aug 13 01:10:27.595081 kubelet[2599]: I0813 01:10:27.595065 2599 scope.go:117] "RemoveContainer" containerID="b379a8487762e02193cb013c65f7bb691568d3c5451a10a3ff374ab63f6ffbd4"
Aug 13 01:10:27.596366 containerd[1485]: time="2025-08-13T01:10:27.596074229Z" level=info msg="RemoveContainer for \"b379a8487762e02193cb013c65f7bb691568d3c5451a10a3ff374ab63f6ffbd4\""
Aug 13 01:10:27.600088 containerd[1485]: time="2025-08-13T01:10:27.599530666Z" level=info msg="RemoveContainer for \"b379a8487762e02193cb013c65f7bb691568d3c5451a10a3ff374ab63f6ffbd4\" returns successfully"
Aug 13 01:10:27.600538 kubelet[2599]: I0813 01:10:27.600522 2599 scope.go:117] "RemoveContainer" containerID="d0d9152956c2a133c4f87cb7a8090bce89893205b53e418614150694c86d5d57"
Aug 13 01:10:27.601915 containerd[1485]: time="2025-08-13T01:10:27.601712577Z" level=info msg="RemoveContainer for \"d0d9152956c2a133c4f87cb7a8090bce89893205b53e418614150694c86d5d57\""
Aug 13 01:10:27.605088 containerd[1485]: time="2025-08-13T01:10:27.604906495Z" level=info msg="RemoveContainer for \"d0d9152956c2a133c4f87cb7a8090bce89893205b53e418614150694c86d5d57\" returns successfully"
Aug 13 01:10:27.605129 kubelet[2599]: I0813 01:10:27.605032 2599 scope.go:117] "RemoveContainer" containerID="198114ca8afe2a85ce8cc9d9821288dbf057f184f6e3ce12e1f113bc14aa4aa5"
Aug 13 01:10:27.606521 containerd[1485]: time="2025-08-13T01:10:27.606502108Z" level=info msg="RemoveContainer for \"198114ca8afe2a85ce8cc9d9821288dbf057f184f6e3ce12e1f113bc14aa4aa5\""
Aug 13 01:10:27.608960 containerd[1485]: time="2025-08-13T01:10:27.608933469Z" level=info msg="RemoveContainer for \"198114ca8afe2a85ce8cc9d9821288dbf057f184f6e3ce12e1f113bc14aa4aa5\" returns successfully"
Aug 13 01:10:27.609136 kubelet[2599]: I0813 01:10:27.609080 2599 scope.go:117] "RemoveContainer" containerID="1d0f4437895f819cd28aca80281847f6ddd16e4619dff11f15a3f4a2bedf5d7f"
Aug 13 01:10:27.610448 containerd[1485]: time="2025-08-13T01:10:27.610409033Z" level=info msg="RemoveContainer for \"1d0f4437895f819cd28aca80281847f6ddd16e4619dff11f15a3f4a2bedf5d7f\""
Aug 13 01:10:27.613442 containerd[1485]: time="2025-08-13T01:10:27.613418192Z" level=info msg="RemoveContainer for \"1d0f4437895f819cd28aca80281847f6ddd16e4619dff11f15a3f4a2bedf5d7f\" returns successfully"
Aug 13 01:10:27.613638 kubelet[2599]: I0813 01:10:27.613624 2599 scope.go:117] "RemoveContainer" containerID="6d1a64a45892fcee777fbb8145b23c94527d4ac109a1d4fcbd276bbbe63db264"
Aug 13 01:10:27.613921 containerd[1485]: time="2025-08-13T01:10:27.613886930Z" level=error msg="ContainerStatus for \"6d1a64a45892fcee777fbb8145b23c94527d4ac109a1d4fcbd276bbbe63db264\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6d1a64a45892fcee777fbb8145b23c94527d4ac109a1d4fcbd276bbbe63db264\": not found"
Aug 13 01:10:27.614036 kubelet[2599]: E0813 01:10:27.614010 2599 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6d1a64a45892fcee777fbb8145b23c94527d4ac109a1d4fcbd276bbbe63db264\": not found" containerID="6d1a64a45892fcee777fbb8145b23c94527d4ac109a1d4fcbd276bbbe63db264"
Aug 13 01:10:27.614106 kubelet[2599]: I0813 01:10:27.614034 2599 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6d1a64a45892fcee777fbb8145b23c94527d4ac109a1d4fcbd276bbbe63db264"} err="failed to get container status \"6d1a64a45892fcee777fbb8145b23c94527d4ac109a1d4fcbd276bbbe63db264\": rpc error: code = NotFound desc = an error occurred when try to find container \"6d1a64a45892fcee777fbb8145b23c94527d4ac109a1d4fcbd276bbbe63db264\": not found"
Aug 13 01:10:27.614106 kubelet[2599]: I0813 01:10:27.614069 2599 scope.go:117] "RemoveContainer" containerID="b379a8487762e02193cb013c65f7bb691568d3c5451a10a3ff374ab63f6ffbd4"
Aug 13 01:10:27.614333 containerd[1485]: time="2025-08-13T01:10:27.614212428Z" level=error msg="ContainerStatus for \"b379a8487762e02193cb013c65f7bb691568d3c5451a10a3ff374ab63f6ffbd4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b379a8487762e02193cb013c65f7bb691568d3c5451a10a3ff374ab63f6ffbd4\": not found"
Aug 13 01:10:27.614435 kubelet[2599]: E0813 01:10:27.614416 2599 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b379a8487762e02193cb013c65f7bb691568d3c5451a10a3ff374ab63f6ffbd4\": not found" containerID="b379a8487762e02193cb013c65f7bb691568d3c5451a10a3ff374ab63f6ffbd4"
Aug 13 01:10:27.614435 kubelet[2599]: I0813 01:10:27.614437 2599 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b379a8487762e02193cb013c65f7bb691568d3c5451a10a3ff374ab63f6ffbd4"} err="failed to get container status \"b379a8487762e02193cb013c65f7bb691568d3c5451a10a3ff374ab63f6ffbd4\": rpc error: code = NotFound desc = an error occurred when try to find container \"b379a8487762e02193cb013c65f7bb691568d3c5451a10a3ff374ab63f6ffbd4\": not found"
Aug 13 01:10:27.614435 kubelet[2599]: I0813 01:10:27.614452 2599 scope.go:117] "RemoveContainer" containerID="d0d9152956c2a133c4f87cb7a8090bce89893205b53e418614150694c86d5d57"
Aug 13 01:10:27.614615 containerd[1485]: time="2025-08-13T01:10:27.614574077Z" level=error msg="ContainerStatus for \"d0d9152956c2a133c4f87cb7a8090bce89893205b53e418614150694c86d5d57\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d0d9152956c2a133c4f87cb7a8090bce89893205b53e418614150694c86d5d57\": not found"
Aug 13 01:10:27.614828 kubelet[2599]: E0813 01:10:27.614691 2599 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d0d9152956c2a133c4f87cb7a8090bce89893205b53e418614150694c86d5d57\": not found" containerID="d0d9152956c2a133c4f87cb7a8090bce89893205b53e418614150694c86d5d57"
Aug 13 01:10:27.614828 kubelet[2599]: I0813 01:10:27.614713 2599 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d0d9152956c2a133c4f87cb7a8090bce89893205b53e418614150694c86d5d57"} err="failed to get container status \"d0d9152956c2a133c4f87cb7a8090bce89893205b53e418614150694c86d5d57\": rpc error: code = NotFound desc = an error occurred when try to find container \"d0d9152956c2a133c4f87cb7a8090bce89893205b53e418614150694c86d5d57\": not found"
Aug 13 01:10:27.614828 kubelet[2599]: I0813 01:10:27.614732 2599 scope.go:117] "RemoveContainer" containerID="198114ca8afe2a85ce8cc9d9821288dbf057f184f6e3ce12e1f113bc14aa4aa5"
Aug 13 01:10:27.614922 containerd[1485]: time="2025-08-13T01:10:27.614848566Z" level=error msg="ContainerStatus for \"198114ca8afe2a85ce8cc9d9821288dbf057f184f6e3ce12e1f113bc14aa4aa5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"198114ca8afe2a85ce8cc9d9821288dbf057f184f6e3ce12e1f113bc14aa4aa5\": not found"
Aug 13 01:10:27.615054 kubelet[2599]: E0813 01:10:27.615026 2599 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"198114ca8afe2a85ce8cc9d9821288dbf057f184f6e3ce12e1f113bc14aa4aa5\": not found" containerID="198114ca8afe2a85ce8cc9d9821288dbf057f184f6e3ce12e1f113bc14aa4aa5"
Aug 13 01:10:27.615120 kubelet[2599]: I0813 01:10:27.615096 2599 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"198114ca8afe2a85ce8cc9d9821288dbf057f184f6e3ce12e1f113bc14aa4aa5"} err="failed to get container status \"198114ca8afe2a85ce8cc9d9821288dbf057f184f6e3ce12e1f113bc14aa4aa5\": rpc error: code = NotFound desc = an error occurred when try to find container \"198114ca8afe2a85ce8cc9d9821288dbf057f184f6e3ce12e1f113bc14aa4aa5\": not found"
Aug 13 01:10:27.615120 kubelet[2599]: I0813 01:10:27.615112 2599 scope.go:117] "RemoveContainer" containerID="1d0f4437895f819cd28aca80281847f6ddd16e4619dff11f15a3f4a2bedf5d7f"
Aug 13 01:10:27.615356 containerd[1485]: time="2025-08-13T01:10:27.615270595Z" level=error msg="ContainerStatus for \"1d0f4437895f819cd28aca80281847f6ddd16e4619dff11f15a3f4a2bedf5d7f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1d0f4437895f819cd28aca80281847f6ddd16e4619dff11f15a3f4a2bedf5d7f\": not found"
Aug 13 01:10:27.615385 kubelet[2599]: E0813 01:10:27.615365 2599 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1d0f4437895f819cd28aca80281847f6ddd16e4619dff11f15a3f4a2bedf5d7f\": not found" containerID="1d0f4437895f819cd28aca80281847f6ddd16e4619dff11f15a3f4a2bedf5d7f"
Aug 13 01:10:27.615407 kubelet[2599]: I0813 01:10:27.615380 2599 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1d0f4437895f819cd28aca80281847f6ddd16e4619dff11f15a3f4a2bedf5d7f"} err="failed to get container status \"1d0f4437895f819cd28aca80281847f6ddd16e4619dff11f15a3f4a2bedf5d7f\": rpc error: code = NotFound desc = an error occurred when try to find container \"1d0f4437895f819cd28aca80281847f6ddd16e4619dff11f15a3f4a2bedf5d7f\": not found"
Aug 13 01:10:27.701900 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5fa073d8279cc1096049090f7058bdaa45c287a594730bb3f57d0f9b50b4d7e7-rootfs.mount: Deactivated successfully.
Aug 13 01:10:27.702030 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-06e5f99e5f95ddb4e0775fdca722039a7091a7ce279672bdd5236c38ee1abfae-rootfs.mount: Deactivated successfully.
Aug 13 01:10:27.702130 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-06e5f99e5f95ddb4e0775fdca722039a7091a7ce279672bdd5236c38ee1abfae-shm.mount: Deactivated successfully.
Aug 13 01:10:27.702210 systemd[1]: var-lib-kubelet-pods-48b2e51e\x2d7897\x2d4b6b\x2d9af1\x2d5b6cd55d1227-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dg5zmj.mount: Deactivated successfully.
Aug 13 01:10:27.702293 systemd[1]: var-lib-kubelet-pods-0619f9c5\x2dcbd0\x2d435a\x2d928c\x2d03a8fb6c230e-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Aug 13 01:10:27.702374 systemd[1]: var-lib-kubelet-pods-0619f9c5\x2dcbd0\x2d435a\x2d928c\x2d03a8fb6c230e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drrwmn.mount: Deactivated successfully.
Aug 13 01:10:27.702453 systemd[1]: var-lib-kubelet-pods-0619f9c5\x2dcbd0\x2d435a\x2d928c\x2d03a8fb6c230e-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Aug 13 01:10:27.823786 kubelet[2599]: I0813 01:10:27.823740 2599 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0619f9c5-cbd0-435a-928c-03a8fb6c230e" path="/var/lib/kubelet/pods/0619f9c5-cbd0-435a-928c-03a8fb6c230e/volumes"
Aug 13 01:10:27.824591 kubelet[2599]: I0813 01:10:27.824564 2599 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48b2e51e-7897-4b6b-9af1-5b6cd55d1227" path="/var/lib/kubelet/pods/48b2e51e-7897-4b6b-9af1-5b6cd55d1227/volumes"
Aug 13 01:10:27.987987 kubelet[2599]: E0813 01:10:27.987947 2599 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Aug 13 01:10:28.691280 sshd[4678]: Connection closed by 139.178.89.65 port 58420
Aug 13 01:10:28.692134 sshd-session[4676]: pam_unix(sshd:session): session closed for user core
Aug 13 01:10:28.697266 systemd-logind[1460]: Session 54 logged out. Waiting for processes to exit.
Aug 13 01:10:28.697971 systemd[1]: sshd@53-172.234.214.143:22-139.178.89.65:58420.service: Deactivated successfully.
Aug 13 01:10:28.700314 systemd[1]: session-54.scope: Deactivated successfully.
Aug 13 01:10:28.701276 systemd-logind[1460]: Removed session 54.
Aug 13 01:10:28.753277 systemd[1]: Started sshd@54-172.234.214.143:22-139.178.89.65:58428.service - OpenSSH per-connection server daemon (139.178.89.65:58428).
Aug 13 01:10:29.083224 sshd[4845]: Accepted publickey for core from 139.178.89.65 port 58428 ssh2: RSA SHA256:oX6/jK8RmEU+rk8eVm22B42TdBvg0UX27UDC2BuKWWY
Aug 13 01:10:29.085767 sshd-session[4845]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:10:29.092114 systemd-logind[1460]: New session 55 of user core.
Aug 13 01:10:29.099223 systemd[1]: Started session-55.scope - Session 55 of User core.
Aug 13 01:10:29.110791 kubelet[2599]: I0813 01:10:29.110758 2599 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
Aug 13 01:10:29.111133 kubelet[2599]: I0813 01:10:29.110820 2599 container_gc.go:88] "Attempting to delete unused containers"
Aug 13 01:10:29.112170 containerd[1485]: time="2025-08-13T01:10:29.112137687Z" level=info msg="StopPodSandbox for \"06e5f99e5f95ddb4e0775fdca722039a7091a7ce279672bdd5236c38ee1abfae\""
Aug 13 01:10:29.112440 containerd[1485]: time="2025-08-13T01:10:29.112233337Z" level=info msg="TearDown network for sandbox \"06e5f99e5f95ddb4e0775fdca722039a7091a7ce279672bdd5236c38ee1abfae\" successfully"
Aug 13 01:10:29.112440 containerd[1485]: time="2025-08-13T01:10:29.112285727Z" level=info msg="StopPodSandbox for \"06e5f99e5f95ddb4e0775fdca722039a7091a7ce279672bdd5236c38ee1abfae\" returns successfully"
Aug 13 01:10:29.112717 containerd[1485]: time="2025-08-13T01:10:29.112646886Z" level=info msg="RemovePodSandbox for \"06e5f99e5f95ddb4e0775fdca722039a7091a7ce279672bdd5236c38ee1abfae\""
Aug 13 01:10:29.112717 containerd[1485]: time="2025-08-13T01:10:29.112683225Z" level=info msg="Forcibly stopping sandbox \"06e5f99e5f95ddb4e0775fdca722039a7091a7ce279672bdd5236c38ee1abfae\""
Aug 13 01:10:29.112799 containerd[1485]: time="2025-08-13T01:10:29.112740385Z" level=info msg="TearDown network for sandbox \"06e5f99e5f95ddb4e0775fdca722039a7091a7ce279672bdd5236c38ee1abfae\" successfully"
Aug 13 01:10:29.115843 containerd[1485]: time="2025-08-13T01:10:29.115816043Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"06e5f99e5f95ddb4e0775fdca722039a7091a7ce279672bdd5236c38ee1abfae\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Aug 13 01:10:29.115877 containerd[1485]: time="2025-08-13T01:10:29.115857883Z" level=info msg="RemovePodSandbox \"06e5f99e5f95ddb4e0775fdca722039a7091a7ce279672bdd5236c38ee1abfae\" returns successfully"
Aug 13 01:10:29.116196 containerd[1485]: time="2025-08-13T01:10:29.116174312Z" level=info msg="StopPodSandbox for \"5fa073d8279cc1096049090f7058bdaa45c287a594730bb3f57d0f9b50b4d7e7\""
Aug 13 01:10:29.116251 containerd[1485]: time="2025-08-13T01:10:29.116237272Z" level=info msg="TearDown network for sandbox \"5fa073d8279cc1096049090f7058bdaa45c287a594730bb3f57d0f9b50b4d7e7\" successfully"
Aug 13 01:10:29.116278 containerd[1485]: time="2025-08-13T01:10:29.116248381Z" level=info msg="StopPodSandbox for \"5fa073d8279cc1096049090f7058bdaa45c287a594730bb3f57d0f9b50b4d7e7\" returns successfully"
Aug 13 01:10:29.117885 containerd[1485]: time="2025-08-13T01:10:29.116547970Z" level=info msg="RemovePodSandbox for \"5fa073d8279cc1096049090f7058bdaa45c287a594730bb3f57d0f9b50b4d7e7\""
Aug 13 01:10:29.117885 containerd[1485]: time="2025-08-13T01:10:29.116572580Z" level=info msg="Forcibly stopping sandbox \"5fa073d8279cc1096049090f7058bdaa45c287a594730bb3f57d0f9b50b4d7e7\""
Aug 13 01:10:29.117885 containerd[1485]: time="2025-08-13T01:10:29.116620680Z" level=info msg="TearDown network for sandbox \"5fa073d8279cc1096049090f7058bdaa45c287a594730bb3f57d0f9b50b4d7e7\" successfully"
Aug 13 01:10:29.119160 containerd[1485]: time="2025-08-13T01:10:29.119138241Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5fa073d8279cc1096049090f7058bdaa45c287a594730bb3f57d0f9b50b4d7e7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Aug 13 01:10:29.119254 containerd[1485]: time="2025-08-13T01:10:29.119167741Z" level=info msg="RemovePodSandbox \"5fa073d8279cc1096049090f7058bdaa45c287a594730bb3f57d0f9b50b4d7e7\" returns successfully"
Aug 13 01:10:29.119804 kubelet[2599]: I0813 01:10:29.119789 2599 image_gc_manager.go:431] "Attempting to delete unused images"
Aug 13 01:10:29.120944 kubelet[2599]: I0813 01:10:29.120920 2599 image_gc_manager.go:487] "Removing image to free bytes" imageID="sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b" size=166719855 runtimeHandler=""
Aug 13 01:10:29.121265 containerd[1485]: time="2025-08-13T01:10:29.121118503Z" level=info msg="RemoveImage \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Aug 13 01:10:29.121935 containerd[1485]: time="2025-08-13T01:10:29.121909310Z" level=info msg="ImageDelete event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Aug 13 01:10:29.123183 containerd[1485]: time="2025-08-13T01:10:29.123121456Z" level=info msg="ImageDelete event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Aug 13 01:10:29.358772 containerd[1485]: time="2025-08-13T01:10:29.357980602Z" level=info msg="RemoveImage \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" returns successfully"
Aug 13 01:10:29.359573 kubelet[2599]: I0813 01:10:29.359352 2599 image_gc_manager.go:487] "Removing image to free bytes" imageID="sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c" size=18897442 runtimeHandler=""
Aug 13 01:10:29.360126 containerd[1485]: time="2025-08-13T01:10:29.359890115Z" level=info msg="RemoveImage \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Aug 13 01:10:29.362274 containerd[1485]: time="2025-08-13T01:10:29.362251746Z" level=info msg="ImageDelete event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Aug 13 01:10:29.363840 containerd[1485]: time="2025-08-13T01:10:29.363821080Z" level=info msg="ImageDelete event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Aug 13 01:10:29.379014 containerd[1485]: time="2025-08-13T01:10:29.378987622Z" level=info msg="RemoveImage \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" returns successfully"
Aug 13 01:10:29.394151 kubelet[2599]: I0813 01:10:29.394129 2599 eviction_manager.go:376] "Eviction manager: able to reduce resource pressure without evicting pods." resourceName="ephemeral-storage"
Aug 13 01:10:29.670161 kubelet[2599]: E0813 01:10:29.668251 2599 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="48b2e51e-7897-4b6b-9af1-5b6cd55d1227" containerName="cilium-operator"
Aug 13 01:10:29.670161 kubelet[2599]: E0813 01:10:29.668285 2599 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0619f9c5-cbd0-435a-928c-03a8fb6c230e" containerName="clean-cilium-state"
Aug 13 01:10:29.670161 kubelet[2599]: E0813 01:10:29.668292 2599 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0619f9c5-cbd0-435a-928c-03a8fb6c230e" containerName="cilium-agent"
Aug 13 01:10:29.670161 kubelet[2599]: E0813 01:10:29.668299 2599 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0619f9c5-cbd0-435a-928c-03a8fb6c230e" containerName="mount-cgroup"
Aug 13 01:10:29.670161 kubelet[2599]: E0813 01:10:29.668306 2599 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0619f9c5-cbd0-435a-928c-03a8fb6c230e" containerName="apply-sysctl-overwrites"
Aug 13 01:10:29.670161 kubelet[2599]: E0813 01:10:29.668311 2599 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0619f9c5-cbd0-435a-928c-03a8fb6c230e" containerName="mount-bpf-fs"
Aug 13 01:10:29.670161 kubelet[2599]: I0813 01:10:29.668337 2599 memory_manager.go:354] "RemoveStaleState removing state" podUID="48b2e51e-7897-4b6b-9af1-5b6cd55d1227" containerName="cilium-operator"
Aug 13 01:10:29.670161 kubelet[2599]: I0813 01:10:29.668343 2599 memory_manager.go:354] "RemoveStaleState removing state" podUID="0619f9c5-cbd0-435a-928c-03a8fb6c230e" containerName="cilium-agent"
Aug 13 01:10:29.677129 systemd[1]: Created slice kubepods-burstable-pode669c37f_5466_44d7_93d5_40032c869e48.slice - libcontainer container kubepods-burstable-pode669c37f_5466_44d7_93d5_40032c869e48.slice.
Aug 13 01:10:29.697211 sshd[4847]: Connection closed by 139.178.89.65 port 58428
Aug 13 01:10:29.696565 sshd-session[4845]: pam_unix(sshd:session): session closed for user core
Aug 13 01:10:29.702240 systemd-logind[1460]: Session 55 logged out. Waiting for processes to exit.
Aug 13 01:10:29.702903 systemd[1]: sshd@54-172.234.214.143:22-139.178.89.65:58428.service: Deactivated successfully.
Aug 13 01:10:29.709732 systemd[1]: session-55.scope: Deactivated successfully.
Aug 13 01:10:29.713234 systemd-logind[1460]: Removed session 55.
Aug 13 01:10:29.749473 kubelet[2599]: I0813 01:10:29.749093 2599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e669c37f-5466-44d7-93d5-40032c869e48-clustermesh-secrets\") pod \"cilium-gxn6m\" (UID: \"e669c37f-5466-44d7-93d5-40032c869e48\") " pod="kube-system/cilium-gxn6m"
Aug 13 01:10:29.749473 kubelet[2599]: I0813 01:10:29.749136 2599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/e669c37f-5466-44d7-93d5-40032c869e48-cilium-ipsec-secrets\") pod \"cilium-gxn6m\" (UID: \"e669c37f-5466-44d7-93d5-40032c869e48\") " pod="kube-system/cilium-gxn6m"
Aug 13 01:10:29.749473 kubelet[2599]: I0813 01:10:29.749159 2599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e669c37f-5466-44d7-93d5-40032c869e48-host-proc-sys-net\") pod \"cilium-gxn6m\" (UID: \"e669c37f-5466-44d7-93d5-40032c869e48\") " pod="kube-system/cilium-gxn6m"
Aug 13 01:10:29.749473 kubelet[2599]: I0813 01:10:29.749177 2599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhwkq\" (UniqueName: \"kubernetes.io/projected/e669c37f-5466-44d7-93d5-40032c869e48-kube-api-access-qhwkq\") pod \"cilium-gxn6m\" (UID: \"e669c37f-5466-44d7-93d5-40032c869e48\") " pod="kube-system/cilium-gxn6m"
Aug 13 01:10:29.749473 kubelet[2599]: I0813 01:10:29.749198 2599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e669c37f-5466-44d7-93d5-40032c869e48-hostproc\") pod \"cilium-gxn6m\" (UID: \"e669c37f-5466-44d7-93d5-40032c869e48\") " pod="kube-system/cilium-gxn6m"
Aug 13 01:10:29.749716 kubelet[2599]: I0813 01:10:29.749213 2599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e669c37f-5466-44d7-93d5-40032c869e48-cilium-cgroup\") pod \"cilium-gxn6m\" (UID: \"e669c37f-5466-44d7-93d5-40032c869e48\") " pod="kube-system/cilium-gxn6m"
Aug 13 01:10:29.749716 kubelet[2599]: I0813 01:10:29.749227 2599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e669c37f-5466-44d7-93d5-40032c869e48-xtables-lock\") pod \"cilium-gxn6m\" (UID: \"e669c37f-5466-44d7-93d5-40032c869e48\") " pod="kube-system/cilium-gxn6m"
Aug 13 01:10:29.749716 kubelet[2599]: I0813 01:10:29.749243 2599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e669c37f-5466-44d7-93d5-40032c869e48-bpf-maps\") pod \"cilium-gxn6m\" (UID: \"e669c37f-5466-44d7-93d5-40032c869e48\") " pod="kube-system/cilium-gxn6m"
Aug 13 01:10:29.749716 kubelet[2599]: I0813 01:10:29.749259 2599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e669c37f-5466-44d7-93d5-40032c869e48-hubble-tls\") pod \"cilium-gxn6m\" (UID: \"e669c37f-5466-44d7-93d5-40032c869e48\") " pod="kube-system/cilium-gxn6m"
Aug 13 01:10:29.749716 kubelet[2599]: I0813 01:10:29.749273 2599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e669c37f-5466-44d7-93d5-40032c869e48-cni-path\") pod \"cilium-gxn6m\" (UID: \"e669c37f-5466-44d7-93d5-40032c869e48\") " pod="kube-system/cilium-gxn6m"
Aug 13 01:10:29.749716 kubelet[2599]: I0813 01:10:29.749288 2599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e669c37f-5466-44d7-93d5-40032c869e48-cilium-config-path\") pod \"cilium-gxn6m\" (UID: \"e669c37f-5466-44d7-93d5-40032c869e48\") " pod="kube-system/cilium-gxn6m"
Aug 13 01:10:29.749850 kubelet[2599]: I0813 01:10:29.749304 2599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e669c37f-5466-44d7-93d5-40032c869e48-host-proc-sys-kernel\") pod \"cilium-gxn6m\" (UID: \"e669c37f-5466-44d7-93d5-40032c869e48\") " pod="kube-system/cilium-gxn6m"
Aug 13 01:10:29.749850 kubelet[2599]: I0813 01:10:29.749320 2599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e669c37f-5466-44d7-93d5-40032c869e48-cilium-run\") pod \"cilium-gxn6m\" (UID: \"e669c37f-5466-44d7-93d5-40032c869e48\") " pod="kube-system/cilium-gxn6m"
Aug 13 01:10:29.749850 kubelet[2599]: I0813 01:10:29.749338 2599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e669c37f-5466-44d7-93d5-40032c869e48-etc-cni-netd\") pod \"cilium-gxn6m\" (UID: \"e669c37f-5466-44d7-93d5-40032c869e48\") " pod="kube-system/cilium-gxn6m"
Aug 13 01:10:29.749850 kubelet[2599]: I0813 01:10:29.749357 2599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e669c37f-5466-44d7-93d5-40032c869e48-lib-modules\") pod \"cilium-gxn6m\" (UID: \"e669c37f-5466-44d7-93d5-40032c869e48\") " pod="kube-system/cilium-gxn6m"
Aug 13 01:10:29.760283 systemd[1]: Started sshd@55-172.234.214.143:22-139.178.89.65:53030.service - OpenSSH per-connection server daemon (139.178.89.65:53030).
Aug 13 01:10:29.993355 kubelet[2599]: E0813 01:10:29.993319 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Aug 13 01:10:29.994686 containerd[1485]: time="2025-08-13T01:10:29.994422116Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gxn6m,Uid:e669c37f-5466-44d7-93d5-40032c869e48,Namespace:kube-system,Attempt:0,}"
Aug 13 01:10:30.016745 containerd[1485]: time="2025-08-13T01:10:30.016634421Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 13 01:10:30.016745 containerd[1485]: time="2025-08-13T01:10:30.016680941Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 13 01:10:30.016745 containerd[1485]: time="2025-08-13T01:10:30.016695811Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 01:10:30.017247 containerd[1485]: time="2025-08-13T01:10:30.016769251Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 01:10:30.038197 systemd[1]: Started cri-containerd-84c9d93f9081d1af66439bbf0e2c78d9798c664c9a6fffc313d741919be3320a.scope - libcontainer container 84c9d93f9081d1af66439bbf0e2c78d9798c664c9a6fffc313d741919be3320a.
Aug 13 01:10:30.068925 containerd[1485]: time="2025-08-13T01:10:30.068821041Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gxn6m,Uid:e669c37f-5466-44d7-93d5-40032c869e48,Namespace:kube-system,Attempt:0,} returns sandbox id \"84c9d93f9081d1af66439bbf0e2c78d9798c664c9a6fffc313d741919be3320a\""
Aug 13 01:10:30.071835 kubelet[2599]: E0813 01:10:30.071255 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Aug 13 01:10:30.072678 containerd[1485]: time="2025-08-13T01:10:30.072653797Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Aug 13 01:10:30.086604 sshd[4858]: Accepted publickey for core from 139.178.89.65 port 53030 ssh2: RSA SHA256:oX6/jK8RmEU+rk8eVm22B42TdBvg0UX27UDC2BuKWWY
Aug 13 01:10:30.089171 sshd-session[4858]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:10:30.095218 systemd-logind[1460]: New session 56 of user core.
Aug 13 01:10:30.106207 systemd[1]: Started session-56.scope - Session 56 of User core.
Aug 13 01:10:30.331167 sshd[4905]: Connection closed by 139.178.89.65 port 53030
Aug 13 01:10:30.333011 sshd-session[4858]: pam_unix(sshd:session): session closed for user core
Aug 13 01:10:30.340424 systemd[1]: sshd@55-172.234.214.143:22-139.178.89.65:53030.service: Deactivated successfully.
Aug 13 01:10:30.341644 systemd-logind[1460]: Session 56 logged out. Waiting for processes to exit.
Aug 13 01:10:30.345019 systemd[1]: session-56.scope: Deactivated successfully.
Aug 13 01:10:30.346874 systemd-logind[1460]: Removed session 56.
Aug 13 01:10:30.397285 systemd[1]: Started sshd@56-172.234.214.143:22-139.178.89.65:53036.service - OpenSSH per-connection server daemon (139.178.89.65:53036).
Aug 13 01:10:30.725966 sshd[4912]: Accepted publickey for core from 139.178.89.65 port 53036 ssh2: RSA SHA256:oX6/jK8RmEU+rk8eVm22B42TdBvg0UX27UDC2BuKWWY
Aug 13 01:10:30.727917 sshd-session[4912]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:10:30.734264 systemd-logind[1460]: New session 57 of user core.
Aug 13 01:10:30.740216 systemd[1]: Started session-57.scope - Session 57 of User core.
Aug 13 01:10:32.989836 kubelet[2599]: E0813 01:10:32.989783 2599 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Aug 13 01:10:34.218795 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1434985783.mount: Deactivated successfully.
Aug 13 01:10:34.924547 kubelet[2599]: I0813 01:10:34.924472 2599 setters.go:600] "Node became not ready" node="172-234-214-143" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T01:10:34Z","lastTransitionTime":"2025-08-13T01:10:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Aug 13 01:10:35.595792 containerd[1485]: time="2025-08-13T01:10:35.595726083Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 01:10:35.596584 containerd[1485]: time="2025-08-13T01:10:35.596514510Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503"
Aug 13 01:10:35.597857 containerd[1485]: time="2025-08-13T01:10:35.596990029Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 01:10:35.598686 containerd[1485]: time="2025-08-13T01:10:35.598256374Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 5.525391738s"
Aug 13 01:10:35.598686 containerd[1485]: time="2025-08-13T01:10:35.598280993Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Aug 13 01:10:35.600917 containerd[1485]: time="2025-08-13T01:10:35.600893824Z" level=info msg="CreateContainer within sandbox \"84c9d93f9081d1af66439bbf0e2c78d9798c664c9a6fffc313d741919be3320a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Aug 13 01:10:35.625609 containerd[1485]: time="2025-08-13T01:10:35.625570042Z" level=info msg="CreateContainer within sandbox \"84c9d93f9081d1af66439bbf0e2c78d9798c664c9a6fffc313d741919be3320a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"43764c7c69dbed50de1651cb49543fdd6e31213af590d4d40595a54b184e8f41\""
Aug 13 01:10:35.626068 containerd[1485]: time="2025-08-13T01:10:35.626014040Z" level=info msg="StartContainer for \"43764c7c69dbed50de1651cb49543fdd6e31213af590d4d40595a54b184e8f41\""
Aug 13 01:10:35.660168 systemd[1]: Started cri-containerd-43764c7c69dbed50de1651cb49543fdd6e31213af590d4d40595a54b184e8f41.scope - libcontainer container 43764c7c69dbed50de1651cb49543fdd6e31213af590d4d40595a54b184e8f41.
Aug 13 01:10:35.685918 containerd[1485]: time="2025-08-13T01:10:35.685885345Z" level=info msg="StartContainer for \"43764c7c69dbed50de1651cb49543fdd6e31213af590d4d40595a54b184e8f41\" returns successfully"
Aug 13 01:10:35.694031 systemd[1]: cri-containerd-43764c7c69dbed50de1651cb49543fdd6e31213af590d4d40595a54b184e8f41.scope: Deactivated successfully.
Aug 13 01:10:35.714646 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-43764c7c69dbed50de1651cb49543fdd6e31213af590d4d40595a54b184e8f41-rootfs.mount: Deactivated successfully.
Aug 13 01:10:35.773178 containerd[1485]: time="2025-08-13T01:10:35.773098710Z" level=info msg="shim disconnected" id=43764c7c69dbed50de1651cb49543fdd6e31213af590d4d40595a54b184e8f41 namespace=k8s.io
Aug 13 01:10:35.773178 containerd[1485]: time="2025-08-13T01:10:35.773174960Z" level=warning msg="cleaning up after shim disconnected" id=43764c7c69dbed50de1651cb49543fdd6e31213af590d4d40595a54b184e8f41 namespace=k8s.io
Aug 13 01:10:35.773178 containerd[1485]: time="2025-08-13T01:10:35.773183410Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 13 01:10:36.624515 kubelet[2599]: E0813 01:10:36.624474 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Aug 13 01:10:36.628837 containerd[1485]: time="2025-08-13T01:10:36.628592639Z" level=info msg="CreateContainer within sandbox \"84c9d93f9081d1af66439bbf0e2c78d9798c664c9a6fffc313d741919be3320a\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Aug 13 01:10:36.637770 containerd[1485]: time="2025-08-13T01:10:36.637742394Z" level=info msg="CreateContainer within sandbox \"84c9d93f9081d1af66439bbf0e2c78d9798c664c9a6fffc313d741919be3320a\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"da1d81123b791c85e99ce85b691ddebc98caf83c38c063c84986e9e1884cecfc\""
Aug 13 01:10:36.645285 containerd[1485]: time="2025-08-13T01:10:36.641185752Z" level=info msg="StartContainer for \"da1d81123b791c85e99ce85b691ddebc98caf83c38c063c84986e9e1884cecfc\""
Aug 13 01:10:36.673161 systemd[1]: Started cri-containerd-da1d81123b791c85e99ce85b691ddebc98caf83c38c063c84986e9e1884cecfc.scope - libcontainer container da1d81123b791c85e99ce85b691ddebc98caf83c38c063c84986e9e1884cecfc.
Aug 13 01:10:36.696523 containerd[1485]: time="2025-08-13T01:10:36.696489046Z" level=info msg="StartContainer for \"da1d81123b791c85e99ce85b691ddebc98caf83c38c063c84986e9e1884cecfc\" returns successfully"
Aug 13 01:10:36.705222 systemd[1]: cri-containerd-da1d81123b791c85e99ce85b691ddebc98caf83c38c063c84986e9e1884cecfc.scope: Deactivated successfully.
Aug 13 01:10:36.722698 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-da1d81123b791c85e99ce85b691ddebc98caf83c38c063c84986e9e1884cecfc-rootfs.mount: Deactivated successfully.
Aug 13 01:10:36.725492 containerd[1485]: time="2025-08-13T01:10:36.725434737Z" level=info msg="shim disconnected" id=da1d81123b791c85e99ce85b691ddebc98caf83c38c063c84986e9e1884cecfc namespace=k8s.io
Aug 13 01:10:36.725492 containerd[1485]: time="2025-08-13T01:10:36.725487037Z" level=warning msg="cleaning up after shim disconnected" id=da1d81123b791c85e99ce85b691ddebc98caf83c38c063c84986e9e1884cecfc namespace=k8s.io
Aug 13 01:10:36.725781 containerd[1485]: time="2025-08-13T01:10:36.725495217Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 13 01:10:37.628041 kubelet[2599]: E0813 01:10:37.628002 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Aug 13 01:10:37.631216 containerd[1485]: time="2025-08-13T01:10:37.631175533Z" level=info msg="CreateContainer within sandbox \"84c9d93f9081d1af66439bbf0e2c78d9798c664c9a6fffc313d741919be3320a\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Aug 13 01:10:37.646489 containerd[1485]: time="2025-08-13T01:10:37.646389657Z" level=info msg="CreateContainer within sandbox \"84c9d93f9081d1af66439bbf0e2c78d9798c664c9a6fffc313d741919be3320a\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"158f5cd119441640de5c9a7ac2cb94cb27abfb6f5eb801670d7d720a0379410d\""
Aug 13 01:10:37.647660 containerd[1485]: time="2025-08-13T01:10:37.646936155Z" level=info msg="StartContainer for \"158f5cd119441640de5c9a7ac2cb94cb27abfb6f5eb801670d7d720a0379410d\""
Aug 13 01:10:37.683470 systemd[1]: Started cri-containerd-158f5cd119441640de5c9a7ac2cb94cb27abfb6f5eb801670d7d720a0379410d.scope - libcontainer container 158f5cd119441640de5c9a7ac2cb94cb27abfb6f5eb801670d7d720a0379410d.
Aug 13 01:10:37.724820 containerd[1485]: time="2025-08-13T01:10:37.724769757Z" level=info msg="StartContainer for \"158f5cd119441640de5c9a7ac2cb94cb27abfb6f5eb801670d7d720a0379410d\" returns successfully"
Aug 13 01:10:37.727833 systemd[1]: cri-containerd-158f5cd119441640de5c9a7ac2cb94cb27abfb6f5eb801670d7d720a0379410d.scope: Deactivated successfully.
Aug 13 01:10:37.749840 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-158f5cd119441640de5c9a7ac2cb94cb27abfb6f5eb801670d7d720a0379410d-rootfs.mount: Deactivated successfully.
Aug 13 01:10:37.751865 containerd[1485]: time="2025-08-13T01:10:37.751816746Z" level=info msg="shim disconnected" id=158f5cd119441640de5c9a7ac2cb94cb27abfb6f5eb801670d7d720a0379410d namespace=k8s.io
Aug 13 01:10:37.752154 containerd[1485]: time="2025-08-13T01:10:37.751865146Z" level=warning msg="cleaning up after shim disconnected" id=158f5cd119441640de5c9a7ac2cb94cb27abfb6f5eb801670d7d720a0379410d namespace=k8s.io
Aug 13 01:10:37.752154 containerd[1485]: time="2025-08-13T01:10:37.751875146Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 13 01:10:37.822169 kubelet[2599]: E0813 01:10:37.821676 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Aug 13 01:10:37.991176 kubelet[2599]: E0813 01:10:37.991143 2599 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Aug 13 01:10:38.631902 kubelet[2599]: E0813 01:10:38.631760 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Aug 13 01:10:38.634356 containerd[1485]: time="2025-08-13T01:10:38.634323233Z" level=info msg="CreateContainer within sandbox \"84c9d93f9081d1af66439bbf0e2c78d9798c664c9a6fffc313d741919be3320a\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Aug 13 01:10:38.651699 containerd[1485]: time="2025-08-13T01:10:38.651658838Z" level=info msg="CreateContainer within sandbox \"84c9d93f9081d1af66439bbf0e2c78d9798c664c9a6fffc313d741919be3320a\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"534b7e8b1c1999cd8615f9b1ea5cdf2008a20549cbabc34f69b2bfd23db5dc4c\""
Aug 13 01:10:38.653448 containerd[1485]: time="2025-08-13T01:10:38.653423942Z" level=info msg="StartContainer for \"534b7e8b1c1999cd8615f9b1ea5cdf2008a20549cbabc34f69b2bfd23db5dc4c\""
Aug 13 01:10:38.704228 systemd[1]: Started cri-containerd-534b7e8b1c1999cd8615f9b1ea5cdf2008a20549cbabc34f69b2bfd23db5dc4c.scope - libcontainer container 534b7e8b1c1999cd8615f9b1ea5cdf2008a20549cbabc34f69b2bfd23db5dc4c.
Aug 13 01:10:38.735820 systemd[1]: cri-containerd-534b7e8b1c1999cd8615f9b1ea5cdf2008a20549cbabc34f69b2bfd23db5dc4c.scope: Deactivated successfully.
Aug 13 01:10:38.737289 containerd[1485]: time="2025-08-13T01:10:38.737147132Z" level=info msg="StartContainer for \"534b7e8b1c1999cd8615f9b1ea5cdf2008a20549cbabc34f69b2bfd23db5dc4c\" returns successfully"
Aug 13 01:10:38.755091 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-534b7e8b1c1999cd8615f9b1ea5cdf2008a20549cbabc34f69b2bfd23db5dc4c-rootfs.mount: Deactivated successfully.
Aug 13 01:10:38.757605 containerd[1485]: time="2025-08-13T01:10:38.757522177Z" level=info msg="shim disconnected" id=534b7e8b1c1999cd8615f9b1ea5cdf2008a20549cbabc34f69b2bfd23db5dc4c namespace=k8s.io
Aug 13 01:10:38.757605 containerd[1485]: time="2025-08-13T01:10:38.757588577Z" level=warning msg="cleaning up after shim disconnected" id=534b7e8b1c1999cd8615f9b1ea5cdf2008a20549cbabc34f69b2bfd23db5dc4c namespace=k8s.io
Aug 13 01:10:38.757605 containerd[1485]: time="2025-08-13T01:10:38.757596887Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 13 01:10:38.770113 containerd[1485]: time="2025-08-13T01:10:38.770086181Z" level=warning msg="cleanup warnings time=\"2025-08-13T01:10:38Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Aug 13 01:10:39.635127 kubelet[2599]: E0813 01:10:39.635095 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Aug 13 01:10:39.638790 containerd[1485]: time="2025-08-13T01:10:39.638628843Z" level=info msg="CreateContainer within sandbox \"84c9d93f9081d1af66439bbf0e2c78d9798c664c9a6fffc313d741919be3320a\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Aug 13 01:10:39.654489 containerd[1485]: time="2025-08-13T01:10:39.654461855Z" level=info msg="CreateContainer within sandbox \"84c9d93f9081d1af66439bbf0e2c78d9798c664c9a6fffc313d741919be3320a\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"9ecd7d694c017e2ceec51902806d2226be61651f7eebae59db4c32af78e4f0a2\""
Aug 13 01:10:39.655267 containerd[1485]: time="2025-08-13T01:10:39.655240971Z" level=info msg="StartContainer for \"9ecd7d694c017e2ceec51902806d2226be61651f7eebae59db4c32af78e4f0a2\""
Aug 13 01:10:39.693174 systemd[1]: Started cri-containerd-9ecd7d694c017e2ceec51902806d2226be61651f7eebae59db4c32af78e4f0a2.scope - libcontainer container 9ecd7d694c017e2ceec51902806d2226be61651f7eebae59db4c32af78e4f0a2.
Aug 13 01:10:39.723394 containerd[1485]: time="2025-08-13T01:10:39.723363661Z" level=info msg="StartContainer for \"9ecd7d694c017e2ceec51902806d2226be61651f7eebae59db4c32af78e4f0a2\" returns successfully"
Aug 13 01:10:40.152208 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Aug 13 01:10:40.640591 kubelet[2599]: E0813 01:10:40.640214 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Aug 13 01:10:40.652337 systemd[1]: run-containerd-runc-k8s.io-9ecd7d694c017e2ceec51902806d2226be61651f7eebae59db4c32af78e4f0a2-runc.Lc5bSD.mount: Deactivated successfully.
Aug 13 01:10:40.655487 kubelet[2599]: I0813 01:10:40.655363 2599 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-gxn6m" podStartSLOduration=6.128489749 podStartE2EDuration="11.655348102s" podCreationTimestamp="2025-08-13 01:10:29 +0000 UTC" firstStartedPulling="2025-08-13 01:10:30.072309388 +0000 UTC m=+352.359338245" lastFinishedPulling="2025-08-13 01:10:35.599167741 +0000 UTC m=+357.886196598" observedRunningTime="2025-08-13 01:10:40.655262743 +0000 UTC m=+362.942291610" watchObservedRunningTime="2025-08-13 01:10:40.655348102 +0000 UTC m=+362.942376969"
Aug 13 01:10:41.352230 systemd[1]: run-containerd-runc-k8s.io-9ecd7d694c017e2ceec51902806d2226be61651f7eebae59db4c32af78e4f0a2-runc.3ydqOT.mount: Deactivated successfully.
Aug 13 01:10:41.644328 kubelet[2599]: E0813 01:10:41.642888 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Aug 13 01:10:42.643622 kubelet[2599]: E0813 01:10:42.643593 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Aug 13 01:10:42.965695 systemd-networkd[1391]: lxc_health: Link UP
Aug 13 01:10:42.983409 systemd-networkd[1391]: lxc_health: Gained carrier
Aug 13 01:10:43.482598 systemd[1]: run-containerd-runc-k8s.io-9ecd7d694c017e2ceec51902806d2226be61651f7eebae59db4c32af78e4f0a2-runc.Im8MRP.mount: Deactivated successfully.
Aug 13 01:10:43.996718 kubelet[2599]: E0813 01:10:43.995465 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Aug 13 01:10:44.608121 systemd-networkd[1391]: lxc_health: Gained IPv6LL
Aug 13 01:10:44.647089 kubelet[2599]: E0813 01:10:44.647062 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Aug 13 01:10:45.614786 systemd[1]: run-containerd-runc-k8s.io-9ecd7d694c017e2ceec51902806d2226be61651f7eebae59db4c32af78e4f0a2-runc.JREres.mount: Deactivated successfully.
Aug 13 01:10:45.649931 kubelet[2599]: E0813 01:10:45.649892 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Aug 13 01:10:46.821693 kubelet[2599]: E0813 01:10:46.821632 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"
Aug 13 01:10:47.733798 systemd[1]: run-containerd-runc-k8s.io-9ecd7d694c017e2ceec51902806d2226be61651f7eebae59db4c32af78e4f0a2-runc.bArevb.mount: Deactivated successfully.
Aug 13 01:10:49.950358 sshd[4914]: Connection closed by 139.178.89.65 port 53036
Aug 13 01:10:49.951273 sshd-session[4912]: pam_unix(sshd:session): session closed for user core
Aug 13 01:10:49.955939 systemd[1]: sshd@56-172.234.214.143:22-139.178.89.65:53036.service: Deactivated successfully.
Aug 13 01:10:49.960789 systemd[1]: session-57.scope: Deactivated successfully.
Aug 13 01:10:49.963004 systemd-logind[1460]: Session 57 logged out. Waiting for processes to exit.
Aug 13 01:10:49.965011 systemd-logind[1460]: Removed session 57.
Aug 13 01:10:52.822397 kubelet[2599]: E0813 01:10:52.822215 2599 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"