May 27 03:29:26.888222 kernel: Linux version 6.12.30-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Tue May 27 01:09:43 -00 2025
May 27 03:29:26.888246 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=f6c186658a19d5a08471ef76df75f82494b37b46908f9237b2c3cf497da860c6
May 27 03:29:26.888255 kernel: BIOS-provided physical RAM map:
May 27 03:29:26.888264 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009f7ff] usable
May 27 03:29:26.888270 kernel: BIOS-e820: [mem 0x000000000009f800-0x000000000009ffff] reserved
May 27 03:29:26.888276 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
May 27 03:29:26.888282 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable
May 27 03:29:26.888288 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved
May 27 03:29:26.888294 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
May 27 03:29:26.888300 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
May 27 03:29:26.888306 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
May 27 03:29:26.888312 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
May 27 03:29:26.888510 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000017fffffff] usable
May 27 03:29:26.888516 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
May 27 03:29:26.888524 kernel: NX (Execute Disable) protection: active
May 27 03:29:26.888530 kernel: APIC: Static calls initialized
May 27 03:29:26.888537 kernel: SMBIOS 2.8 present.
May 27 03:29:26.888545 kernel: DMI: Linode Compute Instance, BIOS Not Specified
May 27 03:29:26.888552 kernel: DMI: Memory slots populated: 1/1
May 27 03:29:26.888558 kernel: Hypervisor detected: KVM
May 27 03:29:26.888564 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
May 27 03:29:26.888571 kernel: kvm-clock: using sched offset of 5969939240 cycles
May 27 03:29:26.888577 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
May 27 03:29:26.888584 kernel: tsc: Detected 2000.000 MHz processor
May 27 03:29:26.888591 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 27 03:29:26.888598 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 27 03:29:26.888605 kernel: last_pfn = 0x180000 max_arch_pfn = 0x400000000
May 27 03:29:26.888613 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
May 27 03:29:26.888620 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 27 03:29:26.888627 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
May 27 03:29:26.888633 kernel: Using GB pages for direct mapping
May 27 03:29:26.888640 kernel: ACPI: Early table checksum verification disabled
May 27 03:29:26.888646 kernel: ACPI: RSDP 0x00000000000F51B0 000014 (v00 BOCHS )
May 27 03:29:26.888653 kernel: ACPI: RSDT 0x000000007FFE2307 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 27 03:29:26.888659 kernel: ACPI: FACP 0x000000007FFE20F7 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 27 03:29:26.888666 kernel: ACPI: DSDT 0x000000007FFE0040 0020B7 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 27 03:29:26.888675 kernel: ACPI: FACS 0x000000007FFE0000 000040
May 27 03:29:26.888681 kernel: ACPI: APIC 0x000000007FFE21EB 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 27 03:29:26.888688 kernel: ACPI: HPET 0x000000007FFE226B 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 27 03:29:26.888695 kernel: ACPI: MCFG 0x000000007FFE22A3 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 27 03:29:26.888704 kernel: ACPI: WAET 0x000000007FFE22DF 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 27 03:29:26.888711 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe20f7-0x7ffe21ea]
May 27 03:29:26.888720 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe20f6]
May 27 03:29:26.888728 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
May 27 03:29:26.888734 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe21eb-0x7ffe226a]
May 27 03:29:26.888741 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe226b-0x7ffe22a2]
May 27 03:29:26.888748 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe22a3-0x7ffe22de]
May 27 03:29:26.888755 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe22df-0x7ffe2306]
May 27 03:29:26.888762 kernel: No NUMA configuration found
May 27 03:29:26.888769 kernel: Faking a node at [mem 0x0000000000000000-0x000000017fffffff]
May 27 03:29:26.888778 kernel: NODE_DATA(0) allocated [mem 0x17fff8dc0-0x17fffffff]
May 27 03:29:26.888785 kernel: Zone ranges:
May 27 03:29:26.888792 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
May 27 03:29:26.888799 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
May 27 03:29:26.888806 kernel: Normal [mem 0x0000000100000000-0x000000017fffffff]
May 27 03:29:26.888812 kernel: Device empty
May 27 03:29:26.888820 kernel: Movable zone start for each node
May 27 03:29:26.888826 kernel: Early memory node ranges
May 27 03:29:26.888833 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
May 27 03:29:26.888840 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff]
May 27 03:29:26.888849 kernel: node 0: [mem 0x0000000100000000-0x000000017fffffff]
May 27 03:29:26.888856 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000017fffffff]
May 27 03:29:26.888863 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 27 03:29:26.888870 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
May 27 03:29:26.888877 kernel: On node 0, zone Normal: 35 pages in unavailable ranges
May 27 03:29:26.888884 kernel: ACPI: PM-Timer IO Port: 0x608
May 27 03:29:26.888891 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
May 27 03:29:26.888898 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
May 27 03:29:26.888905 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
May 27 03:29:26.888914 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
May 27 03:29:26.888921 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
May 27 03:29:26.888928 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
May 27 03:29:26.888934 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
May 27 03:29:26.888941 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 27 03:29:26.888948 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
May 27 03:29:26.888955 kernel: TSC deadline timer available
May 27 03:29:26.888962 kernel: CPU topo: Max. logical packages: 1
May 27 03:29:26.888969 kernel: CPU topo: Max. logical dies: 1
May 27 03:29:26.888978 kernel: CPU topo: Max. dies per package: 1
May 27 03:29:26.888985 kernel: CPU topo: Max. threads per core: 1
May 27 03:29:26.888991 kernel: CPU topo: Num. cores per package: 2
May 27 03:29:26.888998 kernel: CPU topo: Num. threads per package: 2
May 27 03:29:26.889005 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
May 27 03:29:26.889012 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
May 27 03:29:26.889019 kernel: kvm-guest: KVM setup pv remote TLB flush
May 27 03:29:26.889026 kernel: kvm-guest: setup PV sched yield
May 27 03:29:26.889033 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
May 27 03:29:26.889041 kernel: Booting paravirtualized kernel on KVM
May 27 03:29:26.889048 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 27 03:29:26.889055 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
May 27 03:29:26.889062 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
May 27 03:29:26.889069 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
May 27 03:29:26.889097 kernel: pcpu-alloc: [0] 0 1
May 27 03:29:26.889104 kernel: kvm-guest: PV spinlocks enabled
May 27 03:29:26.889111 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
May 27 03:29:26.889119 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=f6c186658a19d5a08471ef76df75f82494b37b46908f9237b2c3cf497da860c6
May 27 03:29:26.889129 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 27 03:29:26.889136 kernel: random: crng init done
May 27 03:29:26.889142 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 27 03:29:26.889149 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 27 03:29:26.889156 kernel: Fallback order for Node 0: 0
May 27 03:29:26.889163 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048443
May 27 03:29:26.889169 kernel: Policy zone: Normal
May 27 03:29:26.889176 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 27 03:29:26.889353 kernel: software IO TLB: area num 2.
May 27 03:29:26.889360 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
May 27 03:29:26.889367 kernel: ftrace: allocating 40081 entries in 157 pages
May 27 03:29:26.889373 kernel: ftrace: allocated 157 pages with 5 groups
May 27 03:29:26.889380 kernel: Dynamic Preempt: voluntary
May 27 03:29:26.889387 kernel: rcu: Preemptible hierarchical RCU implementation.
May 27 03:29:26.889394 kernel: rcu: RCU event tracing is enabled.
May 27 03:29:26.889401 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
May 27 03:29:26.889409 kernel: Trampoline variant of Tasks RCU enabled.
May 27 03:29:26.889416 kernel: Rude variant of Tasks RCU enabled.
May 27 03:29:26.889425 kernel: Tracing variant of Tasks RCU enabled.
May 27 03:29:26.889431 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 27 03:29:26.889438 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
May 27 03:29:26.889445 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 27 03:29:26.889459 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 27 03:29:26.889468 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 27 03:29:26.889475 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
May 27 03:29:26.889482 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 27 03:29:26.889489 kernel: Console: colour VGA+ 80x25
May 27 03:29:26.889496 kernel: printk: legacy console [tty0] enabled
May 27 03:29:26.889503 kernel: printk: legacy console [ttyS0] enabled
May 27 03:29:26.889512 kernel: ACPI: Core revision 20240827
May 27 03:29:26.889520 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
May 27 03:29:26.889527 kernel: APIC: Switch to symmetric I/O mode setup
May 27 03:29:26.889534 kernel: x2apic enabled
May 27 03:29:26.889541 kernel: APIC: Switched APIC routing to: physical x2apic
May 27 03:29:26.889550 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
May 27 03:29:26.889558 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
May 27 03:29:26.889565 kernel: kvm-guest: setup PV IPIs
May 27 03:29:26.889572 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
May 27 03:29:26.889579 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x39a85c9bff6, max_idle_ns: 881590591483 ns
May 27 03:29:26.889586 kernel: Calibrating delay loop (skipped) preset value.. 4000.00 BogoMIPS (lpj=2000000)
May 27 03:29:26.889594 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
May 27 03:29:26.889601 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
May 27 03:29:26.889608 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
May 27 03:29:26.889617 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 27 03:29:26.889624 kernel: Spectre V2 : Mitigation: Retpolines
May 27 03:29:26.889631 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
May 27 03:29:26.889639 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
May 27 03:29:26.889646 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
May 27 03:29:26.889653 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
May 27 03:29:26.889660 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
May 27 03:29:26.889668 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
May 27 03:29:26.889677 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
May 27 03:29:26.889684 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
May 27 03:29:26.889692 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
May 27 03:29:26.889699 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
May 27 03:29:26.889706 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
May 27 03:29:26.889713 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
May 27 03:29:26.889720 kernel: x86/fpu: xstate_offset[9]: 832, xstate_sizes[9]: 8
May 27 03:29:26.889727 kernel: x86/fpu: Enabled xstate features 0x207, context size is 840 bytes, using 'compacted' format.
May 27 03:29:26.889734 kernel: Freeing SMP alternatives memory: 32K
May 27 03:29:26.889743 kernel: pid_max: default: 32768 minimum: 301
May 27 03:29:26.889750 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
May 27 03:29:26.889757 kernel: landlock: Up and running.
May 27 03:29:26.889764 kernel: SELinux: Initializing.
May 27 03:29:26.889772 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 27 03:29:26.889779 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 27 03:29:26.889786 kernel: smpboot: CPU0: AMD EPYC 7713 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
May 27 03:29:26.889793 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
May 27 03:29:26.889800 kernel: ... version: 0
May 27 03:29:26.889809 kernel: ... bit width: 48
May 27 03:29:26.889816 kernel: ... generic registers: 6
May 27 03:29:26.889823 kernel: ... value mask: 0000ffffffffffff
May 27 03:29:26.889830 kernel: ... max period: 00007fffffffffff
May 27 03:29:26.889837 kernel: ... fixed-purpose events: 0
May 27 03:29:26.889844 kernel: ... event mask: 000000000000003f
May 27 03:29:26.889851 kernel: signal: max sigframe size: 3376
May 27 03:29:26.889858 kernel: rcu: Hierarchical SRCU implementation.
May 27 03:29:26.889866 kernel: rcu: Max phase no-delay instances is 400.
May 27 03:29:26.889875 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
May 27 03:29:26.889882 kernel: smp: Bringing up secondary CPUs ...
May 27 03:29:26.889889 kernel: smpboot: x86: Booting SMP configuration:
May 27 03:29:26.889896 kernel: .... node #0, CPUs: #1
May 27 03:29:26.889903 kernel: smp: Brought up 1 node, 2 CPUs
May 27 03:29:26.889910 kernel: smpboot: Total of 2 processors activated (8000.00 BogoMIPS)
May 27 03:29:26.889917 kernel: Memory: 3961808K/4193772K available (14336K kernel code, 2430K rwdata, 9952K rodata, 54416K init, 2552K bss, 227288K reserved, 0K cma-reserved)
May 27 03:29:26.889925 kernel: devtmpfs: initialized
May 27 03:29:26.889932 kernel: x86/mm: Memory block size: 128MB
May 27 03:29:26.889941 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 27 03:29:26.889948 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
May 27 03:29:26.889955 kernel: pinctrl core: initialized pinctrl subsystem
May 27 03:29:26.889962 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 27 03:29:26.889969 kernel: audit: initializing netlink subsys (disabled)
May 27 03:29:26.889976 kernel: audit: type=2000 audit(1748316564.829:1): state=initialized audit_enabled=0 res=1
May 27 03:29:26.889983 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 27 03:29:26.889990 kernel: thermal_sys: Registered thermal governor 'user_space'
May 27 03:29:26.889997 kernel: cpuidle: using governor menu
May 27 03:29:26.890007 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 27 03:29:26.890014 kernel: dca service started, version 1.12.1
May 27 03:29:26.890022 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
May 27 03:29:26.890029 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
May 27 03:29:26.890036 kernel: PCI: Using configuration type 1 for base access
May 27 03:29:26.890043 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
May 27 03:29:26.890050 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 27 03:29:26.890057 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
May 27 03:29:26.890064 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 27 03:29:26.890073 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
May 27 03:29:26.891041 kernel: ACPI: Added _OSI(Module Device)
May 27 03:29:26.891050 kernel: ACPI: Added _OSI(Processor Device)
May 27 03:29:26.891057 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 27 03:29:26.891063 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 27 03:29:26.891070 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 27 03:29:26.891091 kernel: ACPI: Interpreter enabled
May 27 03:29:26.891098 kernel: ACPI: PM: (supports S0 S3 S5)
May 27 03:29:26.891104 kernel: ACPI: Using IOAPIC for interrupt routing
May 27 03:29:26.891115 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 27 03:29:26.891122 kernel: PCI: Using E820 reservations for host bridge windows
May 27 03:29:26.891129 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
May 27 03:29:26.891135 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 27 03:29:26.891321 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 27 03:29:26.891435 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
May 27 03:29:26.891542 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
May 27 03:29:26.891551 kernel: PCI host bridge to bus 0000:00
May 27 03:29:26.891692 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
May 27 03:29:26.891793 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
May 27 03:29:26.891889 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
May 27 03:29:26.891985 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
May 27 03:29:26.892107 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
May 27 03:29:26.892212 kernel: pci_bus 0000:00: root bus resource [mem 0x180000000-0x97fffffff window]
May 27 03:29:26.892314 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 27 03:29:26.892442 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
May 27 03:29:26.892572 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
May 27 03:29:26.892681 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
May 27 03:29:26.892786 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
May 27 03:29:26.892890 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
May 27 03:29:26.892995 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
May 27 03:29:26.895029 kernel: pci 0000:00:02.0: [1af4:1004] type 00 class 0x010000 conventional PCI endpoint
May 27 03:29:26.895383 kernel: pci 0000:00:02.0: BAR 0 [io 0xc000-0xc03f]
May 27 03:29:26.895497 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
May 27 03:29:26.895605 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
May 27 03:29:26.895724 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
May 27 03:29:26.895831 kernel: pci 0000:00:03.0: BAR 0 [io 0xc040-0xc07f]
May 27 03:29:26.895944 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
May 27 03:29:26.896055 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
May 27 03:29:26.896186 kernel: pci 0000:00:03.0: ROM [mem 0xfeb80000-0xfebbffff pref]
May 27 03:29:26.896599 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
May 27 03:29:26.896712 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
May 27 03:29:26.896831 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
May 27 03:29:26.896936 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc0c0-0xc0df]
May 27 03:29:26.897048 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd3000-0xfebd3fff]
May 27 03:29:26.898481 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
May 27 03:29:26.899225 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
May 27 03:29:26.899240 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
May 27 03:29:26.899248 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
May 27 03:29:26.899256 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
May 27 03:29:26.899263 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
May 27 03:29:26.899270 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
May 27 03:29:26.899281 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
May 27 03:29:26.899289 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
May 27 03:29:26.899296 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
May 27 03:29:26.899304 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
May 27 03:29:26.899311 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
May 27 03:29:26.899318 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
May 27 03:29:26.899526 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
May 27 03:29:26.899534 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
May 27 03:29:26.899541 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
May 27 03:29:26.899551 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
May 27 03:29:26.899558 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
May 27 03:29:26.899565 kernel: iommu: Default domain type: Translated
May 27 03:29:26.899572 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
May 27 03:29:26.899579 kernel: PCI: Using ACPI for IRQ routing
May 27 03:29:26.899586 kernel: PCI: pci_cache_line_size set to 64 bytes
May 27 03:29:26.899594 kernel: e820: reserve RAM buffer [mem 0x0009f800-0x0009ffff]
May 27 03:29:26.899601 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff]
May 27 03:29:26.899721 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
May 27 03:29:26.899834 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
May 27 03:29:26.899940 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
May 27 03:29:26.899950 kernel: vgaarb: loaded
May 27 03:29:26.899957 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
May 27 03:29:26.899964 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
May 27 03:29:26.899971 kernel: clocksource: Switched to clocksource kvm-clock
May 27 03:29:26.899979 kernel: VFS: Disk quotas dquot_6.6.0
May 27 03:29:26.899986 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 27 03:29:26.899997 kernel: pnp: PnP ACPI init
May 27 03:29:26.900137 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
May 27 03:29:26.900350 kernel: pnp: PnP ACPI: found 5 devices
May 27 03:29:26.900358 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 27 03:29:26.900365 kernel: NET: Registered PF_INET protocol family
May 27 03:29:26.900372 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 27 03:29:26.900379 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 27 03:29:26.900386 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 27 03:29:26.900397 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 27 03:29:26.900404 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 27 03:29:26.900411 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 27 03:29:26.900418 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 27 03:29:26.900425 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 27 03:29:26.900432 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 27 03:29:26.900439 kernel: NET: Registered PF_XDP protocol family
May 27 03:29:26.900541 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
May 27 03:29:26.900638 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
May 27 03:29:26.900738 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
May 27 03:29:26.900835 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
May 27 03:29:26.900931 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
May 27 03:29:26.901026 kernel: pci_bus 0000:00: resource 9 [mem 0x180000000-0x97fffffff window]
May 27 03:29:26.901043 kernel: PCI: CLS 0 bytes, default 64
May 27 03:29:26.901050 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
May 27 03:29:26.901057 kernel: software IO TLB: mapped [mem 0x000000007bfdd000-0x000000007ffdd000] (64MB)
May 27 03:29:26.901068 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x39a85c9bff6, max_idle_ns: 881590591483 ns
May 27 03:29:26.901075 kernel: Initialise system trusted keyrings
May 27 03:29:26.903142 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 27 03:29:26.903150 kernel: Key type asymmetric registered
May 27 03:29:26.903157 kernel: Asymmetric key parser 'x509' registered
May 27 03:29:26.903165 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
May 27 03:29:26.903171 kernel: io scheduler mq-deadline registered
May 27 03:29:26.903178 kernel: io scheduler kyber registered
May 27 03:29:26.903185 kernel: io scheduler bfq registered
May 27 03:29:26.903192 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
May 27 03:29:26.903203 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
May 27 03:29:26.903211 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
May 27 03:29:26.903218 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 27 03:29:26.903225 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
May 27 03:29:26.903232 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
May 27 03:29:26.903239 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
May 27 03:29:26.903239 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
May 27 03:29:26.903384 kernel: rtc_cmos 00:03: RTC can wake from S4
May 27 03:29:26.903396 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
May 27 03:29:26.903502 kernel: rtc_cmos 00:03: registered as rtc0
May 27 03:29:26.903602 kernel: rtc_cmos 00:03: setting system clock to 2025-05-27T03:29:26 UTC (1748316566)
May 27 03:29:26.903707 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
May 27 03:29:26.903717 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
May 27 03:29:26.903724 kernel: NET: Registered PF_INET6 protocol family
May 27 03:29:26.903731 kernel: Segment Routing with IPv6
May 27 03:29:26.903737 kernel: In-situ OAM (IOAM) with IPv6
May 27 03:29:26.903744 kernel: NET: Registered PF_PACKET protocol family
May 27 03:29:26.903754 kernel: Key type dns_resolver registered
May 27 03:29:26.903761 kernel: IPI shorthand broadcast: enabled
May 27 03:29:26.903767 kernel: sched_clock: Marking stable (2831003180, 237293530)->(3119301510, -51004800)
May 27 03:29:26.903774 kernel: registered taskstats version 1
May 27 03:29:26.903781 kernel: Loading compiled-in X.509 certificates
May 27 03:29:26.903788 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.30-flatcar: ba9eddccb334a70147f3ddfe4fbde029feaa991d'
May 27 03:29:26.903795 kernel: Demotion targets for Node 0: null
May 27 03:29:26.903801 kernel: Key type .fscrypt registered
May 27 03:29:26.903808 kernel: Key type fscrypt-provisioning registered
May 27 03:29:26.903817 kernel: ima: No TPM chip found, activating TPM-bypass!
May 27 03:29:26.903824 kernel: ima: Allocated hash algorithm: sha1
May 27 03:29:26.903831 kernel: ima: No architecture policies found
May 27 03:29:26.903837 kernel: clk: Disabling unused clocks
May 27 03:29:26.903844 kernel: Warning: unable to open an initial console.
May 27 03:29:26.903852 kernel: Freeing unused kernel image (initmem) memory: 54416K
May 27 03:29:26.903859 kernel: Write protecting the kernel read-only data: 24576k
May 27 03:29:26.903865 kernel: Freeing unused kernel image (rodata/data gap) memory: 288K
May 27 03:29:26.903872 kernel: Run /init as init process
May 27 03:29:26.903881 kernel: with arguments:
May 27 03:29:26.903887 kernel: /init
May 27 03:29:26.903894 kernel: with environment:
May 27 03:29:26.903901 kernel: HOME=/
May 27 03:29:26.903907 kernel: TERM=linux
May 27 03:29:26.903929 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 27 03:29:26.903940 systemd[1]: Successfully made /usr/ read-only.
May 27 03:29:26.903950 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 27 03:29:26.903960 systemd[1]: Detected virtualization kvm.
May 27 03:29:26.903968 systemd[1]: Detected architecture x86-64.
May 27 03:29:26.903975 systemd[1]: Running in initrd.
May 27 03:29:26.903982 systemd[1]: No hostname configured, using default hostname.
May 27 03:29:26.903990 systemd[1]: Hostname set to .
May 27 03:29:26.903997 systemd[1]: Initializing machine ID from random generator.
May 27 03:29:26.904005 systemd[1]: Queued start job for default target initrd.target.
May 27 03:29:26.904012 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 27 03:29:26.904022 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 27 03:29:26.904030 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 27 03:29:26.904038 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 27 03:29:26.904046 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 27 03:29:26.904054 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 27 03:29:26.904062 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 27 03:29:26.904073 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 27 03:29:26.904100 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 27 03:29:26.904146 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 27 03:29:26.904154 systemd[1]: Reached target paths.target - Path Units.
May 27 03:29:26.904161 systemd[1]: Reached target slices.target - Slice Units.
May 27 03:29:26.904168 systemd[1]: Reached target swap.target - Swaps.
May 27 03:29:26.904176 systemd[1]: Reached target timers.target - Timer Units.
May 27 03:29:26.904187 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 27 03:29:26.904194 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 27 03:29:26.904204 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 27 03:29:26.904212 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
May 27 03:29:26.904219 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 27 03:29:26.904227 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 27 03:29:26.904234 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 27 03:29:26.904242 systemd[1]: Reached target sockets.target - Socket Units.
May 27 03:29:26.904251 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 27 03:29:26.904259 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 27 03:29:26.904266 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 27 03:29:26.904274 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
May 27 03:29:26.904282 systemd[1]: Starting systemd-fsck-usr.service...
May 27 03:29:26.904289 systemd[1]: Starting systemd-journald.service - Journal Service...
May 27 03:29:26.904297 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 27 03:29:26.904304 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 27 03:29:26.904314 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 27 03:29:26.904322 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 27 03:29:26.904352 systemd-journald[206]: Collecting audit messages is disabled.
May 27 03:29:26.904375 systemd[1]: Finished systemd-fsck-usr.service.
May 27 03:29:26.904383 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 27 03:29:26.904392 systemd-journald[206]: Journal started
May 27 03:29:26.904412 systemd-journald[206]: Runtime Journal (/run/log/journal/28a732a4d6e546dba5e0d7efc6c7009e) is 8M, max 78.5M, 70.5M free.
May 27 03:29:26.900112 systemd-modules-load[207]: Inserted module 'overlay'
May 27 03:29:26.910138 systemd[1]: Started systemd-journald.service - Journal Service.
May 27 03:29:26.930098 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 27 03:29:26.931755 kernel: Bridge firewalling registered
May 27 03:29:26.930969 systemd-modules-load[207]: Inserted module 'br_netfilter'
May 27 03:29:26.971789 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 27 03:29:26.972746 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 27 03:29:26.974175 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 27 03:29:26.979743 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 27 03:29:26.983379 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 27 03:29:26.986378 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 27 03:29:26.996067 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 27 03:29:27.010916 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 27 03:29:27.012545 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 27 03:29:27.017801 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 27 03:29:27.018329 systemd-tmpfiles[226]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
May 27 03:29:27.022260 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 27 03:29:27.023797 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 27 03:29:27.031210 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 27 03:29:27.048015 dracut-cmdline[242]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=f6c186658a19d5a08471ef76df75f82494b37b46908f9237b2c3cf497da860c6
May 27 03:29:27.077370 systemd-resolved[244]: Positive Trust Anchors:
May 27 03:29:27.078100 systemd-resolved[244]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 27 03:29:27.078911 systemd-resolved[244]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 27 03:29:27.084058 systemd-resolved[244]: Defaulting to hostname 'linux'.
May 27 03:29:27.088127 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 27 03:29:27.088691 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 27 03:29:27.156135 kernel: SCSI subsystem initialized
May 27 03:29:27.166108 kernel: Loading iSCSI transport class v2.0-870.
May 27 03:29:27.177112 kernel: iscsi: registered transport (tcp)
May 27 03:29:27.199745 kernel: iscsi: registered transport (qla4xxx)
May 27 03:29:27.199807 kernel: QLogic iSCSI HBA Driver
May 27 03:29:27.223942 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 27 03:29:27.239134 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 27 03:29:27.242886 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 27 03:29:27.310815 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 27 03:29:27.313062 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 27 03:29:27.372129 kernel: raid6: avx2x4 gen() 27137 MB/s
May 27 03:29:27.390112 kernel: raid6: avx2x2 gen() 24950 MB/s
May 27 03:29:27.408593 kernel: raid6: avx2x1 gen() 17633 MB/s
May 27 03:29:27.408631 kernel: raid6: using algorithm avx2x4 gen() 27137 MB/s
May 27 03:29:27.427442 kernel: raid6: .... xor() 3005 MB/s, rmw enabled
May 27 03:29:27.427492 kernel: raid6: using avx2x2 recovery algorithm
May 27 03:29:27.447121 kernel: xor: automatically using best checksumming function avx
May 27 03:29:27.584138 kernel: Btrfs loaded, zoned=no, fsverity=no
May 27 03:29:27.593969 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 27 03:29:27.596380 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 27 03:29:27.623650 systemd-udevd[454]: Using default interface naming scheme 'v255'.
May 27 03:29:27.629106 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 27 03:29:27.632381 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 27 03:29:27.659718 dracut-pre-trigger[460]: rd.md=0: removing MD RAID activation
May 27 03:29:27.693048 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 27 03:29:27.696012 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 27 03:29:27.760982 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 27 03:29:27.766206 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 27 03:29:27.833108 kernel: virtio_scsi virtio0: 2/0/0 default/read/poll queues
May 27 03:29:27.833513 kernel: cryptd: max_cpu_qlen set to 1000
May 27 03:29:27.849107 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
May 27 03:29:27.866209 kernel: AES CTR mode by8 optimization enabled
May 27 03:29:27.866233 kernel: scsi host0: Virtio SCSI HBA
May 27 03:29:27.873110 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
May 27 03:29:27.883108 kernel: libata version 3.00 loaded.
May 27 03:29:27.894330 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 27 03:29:27.894459 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 27 03:29:27.897451 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 27 03:29:27.906633 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 27 03:29:27.908238 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
May 27 03:29:27.941109 kernel: ahci 0000:00:1f.2: version 3.0
May 27 03:29:28.010176 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
May 27 03:29:28.015101 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
May 27 03:29:28.015319 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
May 27 03:29:28.015452 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
May 27 03:29:28.023154 kernel: scsi host1: ahci
May 27 03:29:28.023347 kernel: scsi host2: ahci
May 27 03:29:28.040173 kernel: sd 0:0:0:0: Power-on or device reset occurred
May 27 03:29:28.040654 kernel: sd 0:0:0:0: [sda] 167739392 512-byte logical blocks: (85.9 GB/80.0 GiB)
May 27 03:29:28.041238 kernel: sd 0:0:0:0: [sda] Write Protect is off
May 27 03:29:28.041381 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08
May 27 03:29:28.043983 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
May 27 03:29:28.044661 kernel: scsi host3: ahci
May 27 03:29:28.044919 kernel: scsi host4: ahci
May 27 03:29:28.045499 kernel: scsi host5: ahci
May 27 03:29:28.047593 kernel: scsi host6: ahci
May 27 03:29:28.049099 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3100 irq 46 lpm-pol 0
May 27 03:29:28.049128 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3180 irq 46 lpm-pol 0
May 27 03:29:28.049139 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3200 irq 46 lpm-pol 0
May 27 03:29:28.049149 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3280 irq 46 lpm-pol 0
May 27 03:29:28.049173 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3300 irq 46 lpm-pol 0
May 27 03:29:28.050020 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3380 irq 46 lpm-pol 0
May 27 03:29:28.051157 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 27 03:29:28.051184 kernel: GPT:9289727 != 167739391
May 27 03:29:28.051195 kernel: GPT:Alternate GPT header not at the end of the disk.
May 27 03:29:28.051203 kernel: GPT:9289727 != 167739391
May 27 03:29:28.051212 kernel: GPT: Use GNU Parted to correct GPT errors.
May 27 03:29:28.051221 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 27 03:29:28.051230 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
May 27 03:29:28.118241 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 27 03:29:28.368595 kernel: ata2: SATA link down (SStatus 0 SControl 300)
May 27 03:29:28.368689 kernel: ata5: SATA link down (SStatus 0 SControl 300)
May 27 03:29:28.368700 kernel: ata3: SATA link down (SStatus 0 SControl 300)
May 27 03:29:28.368710 kernel: ata1: SATA link down (SStatus 0 SControl 300)
May 27 03:29:28.368719 kernel: ata6: SATA link down (SStatus 0 SControl 300)
May 27 03:29:28.368728 kernel: ata4: SATA link down (SStatus 0 SControl 300)
May 27 03:29:28.428208 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
May 27 03:29:28.438558 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
May 27 03:29:28.457257 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
May 27 03:29:28.457865 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
May 27 03:29:28.459459 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 27 03:29:28.469293 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
May 27 03:29:28.471452 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 27 03:29:28.472045 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 27 03:29:28.473335 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 27 03:29:28.475423 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 27 03:29:28.478991 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 27 03:29:28.487155 disk-uuid[630]: Primary Header is updated.
May 27 03:29:28.487155 disk-uuid[630]: Secondary Entries is updated.
May 27 03:29:28.487155 disk-uuid[630]: Secondary Header is updated.
May 27 03:29:28.495187 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 27 03:29:28.496196 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 27 03:29:29.511969 disk-uuid[631]: The operation has completed successfully.
May 27 03:29:29.512817 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 27 03:29:29.564641 systemd[1]: disk-uuid.service: Deactivated successfully.
May 27 03:29:29.564761 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 27 03:29:29.593199 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 27 03:29:29.605579 sh[652]: Success
May 27 03:29:29.624836 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 27 03:29:29.624869 kernel: device-mapper: uevent: version 1.0.3
May 27 03:29:29.625442 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
May 27 03:29:29.636105 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
May 27 03:29:29.680596 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 27 03:29:29.684143 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 27 03:29:29.695219 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 27 03:29:29.707281 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay'
May 27 03:29:29.707306 kernel: BTRFS: device fsid f0f66fe8-3990-49eb-980e-559a3dfd3522 devid 1 transid 40 /dev/mapper/usr (254:0) scanned by mount (664)
May 27 03:29:29.713128 kernel: BTRFS info (device dm-0): first mount of filesystem f0f66fe8-3990-49eb-980e-559a3dfd3522
May 27 03:29:29.713151 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
May 27 03:29:29.715823 kernel: BTRFS info (device dm-0): using free-space-tree
May 27 03:29:29.723788 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 27 03:29:29.724774 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
May 27 03:29:29.725692 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 27 03:29:29.726507 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 27 03:29:29.729229 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 27 03:29:29.755119 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 (8:6) scanned by mount (697)
May 27 03:29:29.759393 kernel: BTRFS info (device sda6): first mount of filesystem fd7bb961-7a0f-4c90-a609-3bffeb956d05
May 27 03:29:29.759419 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
May 27 03:29:29.761867 kernel: BTRFS info (device sda6): using free-space-tree
May 27 03:29:29.774104 kernel: BTRFS info (device sda6): last unmount of filesystem fd7bb961-7a0f-4c90-a609-3bffeb956d05
May 27 03:29:29.775401 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 27 03:29:29.778249 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 27 03:29:29.845550 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 27 03:29:29.849992 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 27 03:29:29.881377 ignition[769]: Ignition 2.21.0
May 27 03:29:29.882068 ignition[769]: Stage: fetch-offline
May 27 03:29:29.882117 ignition[769]: no configs at "/usr/lib/ignition/base.d"
May 27 03:29:29.882126 ignition[769]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
May 27 03:29:29.882196 ignition[769]: parsed url from cmdline: ""
May 27 03:29:29.882200 ignition[769]: no config URL provided
May 27 03:29:29.882204 ignition[769]: reading system config file "/usr/lib/ignition/user.ign"
May 27 03:29:29.882212 ignition[769]: no config at "/usr/lib/ignition/user.ign"
May 27 03:29:29.886927 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 27 03:29:29.882216 ignition[769]: failed to fetch config: resource requires networking
May 27 03:29:29.882868 ignition[769]: Ignition finished successfully
May 27 03:29:29.893701 systemd-networkd[835]: lo: Link UP
May 27 03:29:29.893713 systemd-networkd[835]: lo: Gained carrier
May 27 03:29:29.895069 systemd-networkd[835]: Enumeration completed
May 27 03:29:29.895732 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 27 03:29:29.895996 systemd-networkd[835]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 27 03:29:29.896000 systemd-networkd[835]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 27 03:29:29.896497 systemd[1]: Reached target network.target - Network.
May 27 03:29:29.898417 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
May 27 03:29:29.899117 systemd-networkd[835]: eth0: Link UP
May 27 03:29:29.899121 systemd-networkd[835]: eth0: Gained carrier
May 27 03:29:29.899129 systemd-networkd[835]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 27 03:29:29.922004 ignition[843]: Ignition 2.21.0
May 27 03:29:29.922022 ignition[843]: Stage: fetch
May 27 03:29:29.922218 ignition[843]: no configs at "/usr/lib/ignition/base.d"
May 27 03:29:29.922232 ignition[843]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
May 27 03:29:29.922559 ignition[843]: parsed url from cmdline: ""
May 27 03:29:29.922564 ignition[843]: no config URL provided
May 27 03:29:29.922571 ignition[843]: reading system config file "/usr/lib/ignition/user.ign"
May 27 03:29:29.922583 ignition[843]: no config at "/usr/lib/ignition/user.ign"
May 27 03:29:29.922638 ignition[843]: PUT http://169.254.169.254/v1/token: attempt #1
May 27 03:29:29.922851 ignition[843]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
May 27 03:29:30.123623 ignition[843]: PUT http://169.254.169.254/v1/token: attempt #2
May 27 03:29:30.123838 ignition[843]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
May 27 03:29:30.367196 systemd-networkd[835]: eth0: DHCPv4 address 172.234.197.247/24, gateway 172.234.197.1 acquired from 23.213.14.182
May 27 03:29:30.524055 ignition[843]: PUT http://169.254.169.254/v1/token: attempt #3
May 27 03:29:30.616691 ignition[843]: PUT result: OK
May 27 03:29:30.616755 ignition[843]: GET http://169.254.169.254/v1/user-data: attempt #1
May 27 03:29:30.731196 ignition[843]: GET result: OK
May 27 03:29:30.731527 ignition[843]: parsing config with SHA512: 3c45a9ce0228c59574ded145bc7834f4768ae15bc35f4f176fd7f210ecbb5214a076b4ef456b69de57cd6cc8564d6d5288103b6058ce6bdf90e6f642515c7b33
May 27 03:29:30.738024 unknown[843]: fetched base config from "system"
May 27 03:29:30.738724 unknown[843]: fetched base config from "system"
May 27 03:29:30.738974 ignition[843]: fetch: fetch complete
May 27 03:29:30.738730 unknown[843]: fetched user config from "akamai"
May 27 03:29:30.738979 ignition[843]: fetch: fetch passed
May 27 03:29:30.739020 ignition[843]: Ignition finished successfully
May 27 03:29:30.742171 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
May 27 03:29:30.744582 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 27 03:29:30.773918 ignition[850]: Ignition 2.21.0
May 27 03:29:30.773932 ignition[850]: Stage: kargs
May 27 03:29:30.774061 ignition[850]: no configs at "/usr/lib/ignition/base.d"
May 27 03:29:30.774070 ignition[850]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
May 27 03:29:30.775608 ignition[850]: kargs: kargs passed
May 27 03:29:30.775647 ignition[850]: Ignition finished successfully
May 27 03:29:30.778394 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 27 03:29:30.780590 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 27 03:29:30.804296 ignition[857]: Ignition 2.21.0
May 27 03:29:30.804310 ignition[857]: Stage: disks
May 27 03:29:30.804600 ignition[857]: no configs at "/usr/lib/ignition/base.d"
May 27 03:29:30.804610 ignition[857]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
May 27 03:29:30.805811 ignition[857]: disks: disks passed
May 27 03:29:30.807605 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 27 03:29:30.805859 ignition[857]: Ignition finished successfully
May 27 03:29:30.808306 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 27 03:29:30.809310 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 27 03:29:30.810289 systemd[1]: Reached target local-fs.target - Local File Systems.
May 27 03:29:30.811507 systemd[1]: Reached target sysinit.target - System Initialization.
May 27 03:29:30.812494 systemd[1]: Reached target basic.target - Basic System.
May 27 03:29:30.814468 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 27 03:29:30.842798 systemd-fsck[866]: ROOT: clean, 15/553520 files, 52789/553472 blocks
May 27 03:29:30.847277 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 27 03:29:30.849177 systemd[1]: Mounting sysroot.mount - /sysroot...
May 27 03:29:30.966098 kernel: EXT4-fs (sda9): mounted filesystem 18301365-b380-45d7-9677-e42472a122bc r/w with ordered data mode. Quota mode: none.
May 27 03:29:30.967415 systemd[1]: Mounted sysroot.mount - /sysroot.
May 27 03:29:30.968478 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 27 03:29:30.970442 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 27 03:29:30.973148 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 27 03:29:30.974586 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
May 27 03:29:30.975469 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 27 03:29:30.975492 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 27 03:29:30.983775 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 27 03:29:30.985959 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 27 03:29:30.996099 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 (8:6) scanned by mount (874)
May 27 03:29:30.999489 kernel: BTRFS info (device sda6): first mount of filesystem fd7bb961-7a0f-4c90-a609-3bffeb956d05
May 27 03:29:30.999519 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
May 27 03:29:31.003107 kernel: BTRFS info (device sda6): using free-space-tree
May 27 03:29:31.014783 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 27 03:29:31.042294 initrd-setup-root[898]: cut: /sysroot/etc/passwd: No such file or directory
May 27 03:29:31.047386 initrd-setup-root[905]: cut: /sysroot/etc/group: No such file or directory
May 27 03:29:31.053653 initrd-setup-root[912]: cut: /sysroot/etc/shadow: No such file or directory
May 27 03:29:31.057837 initrd-setup-root[919]: cut: /sysroot/etc/gshadow: No such file or directory
May 27 03:29:31.158699 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 27 03:29:31.161895 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 27 03:29:31.164100 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 27 03:29:31.179063 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 27 03:29:31.182626 kernel: BTRFS info (device sda6): last unmount of filesystem fd7bb961-7a0f-4c90-a609-3bffeb956d05
May 27 03:29:31.199425 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 27 03:29:31.212178 ignition[990]: INFO : Ignition 2.21.0 May 27 03:29:31.212178 ignition[990]: INFO : Stage: mount May 27 03:29:31.214608 ignition[990]: INFO : no configs at "/usr/lib/ignition/base.d" May 27 03:29:31.214608 ignition[990]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" May 27 03:29:31.214608 ignition[990]: INFO : mount: mount passed May 27 03:29:31.214608 ignition[990]: INFO : Ignition finished successfully May 27 03:29:31.215861 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 27 03:29:31.217556 systemd[1]: Starting ignition-files.service - Ignition (files)... May 27 03:29:31.706280 systemd-networkd[835]: eth0: Gained IPv6LL May 27 03:29:31.969098 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 27 03:29:31.995103 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 (8:6) scanned by mount (1002) May 27 03:29:31.998575 kernel: BTRFS info (device sda6): first mount of filesystem fd7bb961-7a0f-4c90-a609-3bffeb956d05 May 27 03:29:31.998598 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm May 27 03:29:32.001847 kernel: BTRFS info (device sda6): using free-space-tree May 27 03:29:32.006525 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 27 03:29:32.033952 ignition[1018]: INFO : Ignition 2.21.0 May 27 03:29:32.033952 ignition[1018]: INFO : Stage: files May 27 03:29:32.033952 ignition[1018]: INFO : no configs at "/usr/lib/ignition/base.d" May 27 03:29:32.033952 ignition[1018]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" May 27 03:29:32.033952 ignition[1018]: DEBUG : files: compiled without relabeling support, skipping May 27 03:29:32.037813 ignition[1018]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 27 03:29:32.037813 ignition[1018]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 27 03:29:32.040004 ignition[1018]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 27 03:29:32.040845 ignition[1018]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 27 03:29:32.040845 ignition[1018]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 27 03:29:32.040536 unknown[1018]: wrote ssh authorized keys file for user: core May 27 03:29:32.043744 ignition[1018]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" May 27 03:29:32.043744 ignition[1018]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 May 27 03:29:32.291342 ignition[1018]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 27 03:29:32.510050 ignition[1018]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" May 27 03:29:32.510050 ignition[1018]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" May 27 03:29:32.512861 ignition[1018]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 May 27 03:29:32.819048 ignition[1018]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK May 27 03:29:32.882004 ignition[1018]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file 
"/sysroot/opt/bin/cilium.tar.gz" May 27 03:29:32.882004 ignition[1018]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" May 27 03:29:32.884049 ignition[1018]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" May 27 03:29:32.884049 ignition[1018]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" May 27 03:29:32.884049 ignition[1018]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" May 27 03:29:32.884049 ignition[1018]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 27 03:29:32.884049 ignition[1018]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 27 03:29:32.884049 ignition[1018]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 27 03:29:32.884049 ignition[1018]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 27 03:29:32.890819 ignition[1018]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" May 27 03:29:32.890819 ignition[1018]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 27 03:29:32.890819 ignition[1018]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" May 27 03:29:32.890819 ignition[1018]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" May 27 03:29:32.890819 ignition[1018]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" May 27 03:29:32.890819 ignition[1018]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 May 27 03:29:33.400626 ignition[1018]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK May 27 03:29:33.658173 ignition[1018]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" May 27 03:29:33.658173 ignition[1018]: INFO : files: op(c): [started] processing unit "prepare-helm.service" May 27 03:29:33.660917 ignition[1018]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 27 03:29:33.661995 ignition[1018]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 27 03:29:33.661995 ignition[1018]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" May 27 03:29:33.661995 ignition[1018]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" May 27 03:29:33.661995 ignition[1018]: INFO : files: op(e): op(f): [started] writing systemd drop-in "00-custom-metadata.conf" at 
"/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" May 27 03:29:33.661995 ignition[1018]: INFO : files: op(e): op(f): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" May 27 03:29:33.661995 ignition[1018]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" May 27 03:29:33.661995 ignition[1018]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" May 27 03:29:33.661995 ignition[1018]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" May 27 03:29:33.661995 ignition[1018]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" May 27 03:29:33.661995 ignition[1018]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" May 27 03:29:33.661995 ignition[1018]: INFO : files: files passed May 27 03:29:33.679310 ignition[1018]: INFO : Ignition finished successfully May 27 03:29:33.668982 systemd[1]: Finished ignition-files.service - Ignition (files). May 27 03:29:33.673241 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 27 03:29:33.676196 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 27 03:29:33.685166 systemd[1]: ignition-quench.service: Deactivated successfully. May 27 03:29:33.686110 systemd[1]: Finished ignition-quench.service - Ignition (record completion). May 27 03:29:33.694289 initrd-setup-root-after-ignition[1048]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 27 03:29:33.694289 initrd-setup-root-after-ignition[1048]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 27 03:29:33.696192 initrd-setup-root-after-ignition[1052]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 27 03:29:33.697487 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 27 03:29:33.699224 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 27 03:29:33.700528 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 27 03:29:33.755670 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 27 03:29:33.755787 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 27 03:29:33.756946 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 27 03:29:33.757814 systemd[1]: Reached target initrd.target - Initrd Default Target. May 27 03:29:33.759170 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 27 03:29:33.760018 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 27 03:29:33.792797 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 27 03:29:33.795534 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 27 03:29:33.812417 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 27 03:29:33.813020 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 27 03:29:33.813710 systemd[1]: Stopped target timers.target - Timer Units. May 27 03:29:33.814916 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. 
May 27 03:29:33.815010 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 27 03:29:33.816388 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 27 03:29:33.817261 systemd[1]: Stopped target basic.target - Basic System. May 27 03:29:33.818182 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 27 03:29:33.819682 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 27 03:29:33.820819 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 27 03:29:33.821910 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. May 27 03:29:33.823223 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 27 03:29:33.824681 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 27 03:29:33.826062 systemd[1]: Stopped target sysinit.target - System Initialization. May 27 03:29:33.827608 systemd[1]: Stopped target local-fs.target - Local File Systems. May 27 03:29:33.828921 systemd[1]: Stopped target swap.target - Swaps. May 27 03:29:33.830208 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 27 03:29:33.830338 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 27 03:29:33.831944 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 27 03:29:33.833062 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 27 03:29:33.834129 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 27 03:29:33.834693 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 27 03:29:33.835983 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 27 03:29:33.836138 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 27 03:29:33.837732 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 27 03:29:33.837871 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 27 03:29:33.838780 systemd[1]: ignition-files.service: Deactivated successfully. May 27 03:29:33.838906 systemd[1]: Stopped ignition-files.service - Ignition (files). May 27 03:29:33.840740 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 27 03:29:33.844353 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 27 03:29:33.847868 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 27 03:29:33.847980 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 27 03:29:33.848604 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 27 03:29:33.848699 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 27 03:29:33.855641 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 27 03:29:33.855758 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
May 27 03:29:33.870489 ignition[1072]: INFO : Ignition 2.21.0 May 27 03:29:33.871598 ignition[1072]: INFO : Stage: umount May 27 03:29:33.871598 ignition[1072]: INFO : no configs at "/usr/lib/ignition/base.d" May 27 03:29:33.871598 ignition[1072]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" May 27 03:29:33.879781 ignition[1072]: INFO : umount: umount passed May 27 03:29:33.879781 ignition[1072]: INFO : Ignition finished successfully May 27 03:29:33.874233 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 27 03:29:33.877325 systemd[1]: sysroot-boot.service: Deactivated successfully. May 27 03:29:33.877426 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 27 03:29:33.879872 systemd[1]: ignition-mount.service: Deactivated successfully. May 27 03:29:33.880362 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 27 03:29:33.881575 systemd[1]: ignition-disks.service: Deactivated successfully. May 27 03:29:33.881624 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 27 03:29:33.882412 systemd[1]: ignition-kargs.service: Deactivated successfully. May 27 03:29:33.882460 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 27 03:29:33.883610 systemd[1]: ignition-fetch.service: Deactivated successfully. May 27 03:29:33.883658 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). May 27 03:29:33.884695 systemd[1]: Stopped target network.target - Network. May 27 03:29:33.885858 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 27 03:29:33.885922 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 27 03:29:33.886907 systemd[1]: Stopped target paths.target - Path Units. May 27 03:29:33.887880 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 27 03:29:33.891118 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 27 03:29:33.891693 systemd[1]: Stopped target slices.target - Slice Units. May 27 03:29:33.892970 systemd[1]: Stopped target sockets.target - Socket Units. May 27 03:29:33.894555 systemd[1]: iscsid.socket: Deactivated successfully. May 27 03:29:33.894595 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 27 03:29:33.895700 systemd[1]: iscsiuio.socket: Deactivated successfully. May 27 03:29:33.895737 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 27 03:29:33.896865 systemd[1]: ignition-setup.service: Deactivated successfully. May 27 03:29:33.896926 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 27 03:29:33.898133 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 27 03:29:33.898176 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 27 03:29:33.899170 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 27 03:29:33.899217 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 27 03:29:33.900341 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 27 03:29:33.901355 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 27 03:29:33.909215 systemd[1]: systemd-resolved.service: Deactivated successfully. May 27 03:29:33.909349 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 27 03:29:33.913680 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. 
May 27 03:29:33.913899 systemd[1]: systemd-networkd.service: Deactivated successfully. May 27 03:29:33.914008 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 27 03:29:33.916049 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. May 27 03:29:33.916963 systemd[1]: Stopped target network-pre.target - Preparation for Network. May 27 03:29:33.918063 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 27 03:29:33.918155 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 27 03:29:33.920181 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 27 03:29:33.922496 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 27 03:29:33.922547 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 27 03:29:33.924620 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 27 03:29:33.924667 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 27 03:29:33.927216 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 27 03:29:33.927436 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 27 03:29:33.928017 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 27 03:29:33.928059 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 27 03:29:33.930742 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 27 03:29:33.933690 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 27 03:29:33.933751 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. May 27 03:29:33.948307 systemd[1]: network-cleanup.service: Deactivated successfully. May 27 03:29:33.948615 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 27 03:29:33.950029 systemd[1]: systemd-udevd.service: Deactivated successfully. May 27 03:29:33.950201 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 27 03:29:33.951537 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 27 03:29:33.951596 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 27 03:29:33.952394 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 27 03:29:33.952614 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 27 03:29:33.953702 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 27 03:29:33.953748 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 27 03:29:33.955651 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 27 03:29:33.955695 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 27 03:29:33.956973 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 27 03:29:33.957022 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 27 03:29:33.958917 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 27 03:29:33.960885 systemd[1]: systemd-network-generator.service: Deactivated successfully. May 27 03:29:33.960937 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. May 27 03:29:33.963899 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. 
May 27 03:29:33.963950 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 27 03:29:33.965907 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. May 27 03:29:33.965951 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 27 03:29:33.967159 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 27 03:29:33.967205 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 27 03:29:33.969976 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 27 03:29:33.970024 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 27 03:29:33.972258 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. May 27 03:29:33.972312 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully. May 27 03:29:33.972352 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. May 27 03:29:33.972395 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. May 27 03:29:33.973402 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 27 03:29:33.973505 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 27 03:29:33.975013 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 27 03:29:33.978190 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 27 03:29:33.992471 systemd[1]: Switching root. May 27 03:29:34.039827 systemd-journald[206]: Journal stopped May 27 03:29:35.143361 systemd-journald[206]: Received SIGTERM from PID 1 (systemd). May 27 03:29:35.143576 kernel: SELinux: policy capability network_peer_controls=1 May 27 03:29:35.143588 kernel: SELinux: policy capability open_perms=1 May 27 03:29:35.143600 kernel: SELinux: policy capability extended_socket_class=1 May 27 03:29:35.143609 kernel: SELinux: policy capability always_check_network=0 May 27 03:29:35.143617 kernel: SELinux: policy capability cgroup_seclabel=1 May 27 03:29:35.143626 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 27 03:29:35.143635 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 27 03:29:35.143643 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 27 03:29:35.143652 kernel: SELinux: policy capability userspace_initial_context=0 May 27 03:29:35.143663 kernel: audit: type=1403 audit(1748316574.201:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 27 03:29:35.143672 systemd[1]: Successfully loaded SELinux policy in 60.238ms. May 27 03:29:35.143683 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 12.796ms. May 27 03:29:35.143693 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 27 03:29:35.143703 systemd[1]: Detected virtualization kvm. May 27 03:29:35.143714 systemd[1]: Detected architecture x86-64. May 27 03:29:35.143723 systemd[1]: Detected first boot. May 27 03:29:35.143733 systemd[1]: Initializing machine ID from random generator. 
May 27 03:29:35.143742 zram_generator::config[1117]: No configuration found. May 27 03:29:35.143752 kernel: Guest personality initialized and is inactive May 27 03:29:35.143761 kernel: VMCI host device registered (name=vmci, major=10, minor=125) May 27 03:29:35.143770 kernel: Initialized host personality May 27 03:29:35.143781 kernel: NET: Registered PF_VSOCK protocol family May 27 03:29:35.143790 systemd[1]: Populated /etc with preset unit settings. May 27 03:29:35.143800 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. May 27 03:29:35.143809 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 27 03:29:35.143819 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 27 03:29:35.143828 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 27 03:29:35.143838 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 27 03:29:35.143849 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 27 03:29:35.143859 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 27 03:29:35.143868 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 27 03:29:35.143878 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 27 03:29:35.143887 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 27 03:29:35.143898 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 27 03:29:35.143907 systemd[1]: Created slice user.slice - User and Session Slice. May 27 03:29:35.143919 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 27 03:29:35.143928 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 27 03:29:35.143938 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 27 03:29:35.143948 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 27 03:29:35.143960 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 27 03:29:35.143970 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 27 03:29:35.143980 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... May 27 03:29:35.143990 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 27 03:29:35.144001 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 27 03:29:35.144011 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 27 03:29:35.144021 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 27 03:29:35.144031 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 27 03:29:35.144040 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 27 03:29:35.144050 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 27 03:29:35.144060 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 27 03:29:35.144070 systemd[1]: Reached target slices.target - Slice Units. May 27 03:29:35.145882 systemd[1]: Reached target swap.target - Swaps. May 27 03:29:35.145900 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. 
May 27 03:29:35.145911 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 27 03:29:35.145921 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. May 27 03:29:35.145931 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 27 03:29:35.145945 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 27 03:29:35.145955 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 27 03:29:35.145965 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 27 03:29:35.145974 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 27 03:29:35.145984 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 27 03:29:35.145994 systemd[1]: Mounting media.mount - External Media Directory... May 27 03:29:35.146004 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 27 03:29:35.146013 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 27 03:29:35.146025 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 27 03:29:35.146035 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 27 03:29:35.146045 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 27 03:29:35.146055 systemd[1]: Reached target machines.target - Containers. May 27 03:29:35.146066 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 27 03:29:35.146133 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 27 03:29:35.146148 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 27 03:29:35.146158 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 27 03:29:35.146171 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 27 03:29:35.146181 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 27 03:29:35.146190 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 27 03:29:35.146200 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 27 03:29:35.146210 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 27 03:29:35.146220 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 27 03:29:35.146230 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 27 03:29:35.146239 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 27 03:29:35.146249 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 27 03:29:35.146261 systemd[1]: Stopped systemd-fsck-usr.service. May 27 03:29:35.146272 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 27 03:29:35.146282 systemd[1]: Starting systemd-journald.service - Journal Service... May 27 03:29:35.146291 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... 
May 27 03:29:35.146301 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 27 03:29:35.146311 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 27 03:29:35.146321 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... May 27 03:29:35.146330 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 27 03:29:35.146342 systemd[1]: verity-setup.service: Deactivated successfully. May 27 03:29:35.146352 systemd[1]: Stopped verity-setup.service. May 27 03:29:35.146362 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 27 03:29:35.146371 kernel: loop: module loaded May 27 03:29:35.146381 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 27 03:29:35.146569 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 27 03:29:35.146578 systemd[1]: Mounted media.mount - External Media Directory. May 27 03:29:35.146588 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 27 03:29:35.146599 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 27 03:29:35.146609 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 27 03:29:35.146618 kernel: fuse: init (API version 7.41) May 27 03:29:35.146628 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 27 03:29:35.146637 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 27 03:29:35.146647 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 27 03:29:35.146657 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 27 03:29:35.146666 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 27 03:29:35.146676 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 27 03:29:35.146687 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 27 03:29:35.146697 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 27 03:29:35.146707 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 27 03:29:35.146716 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 27 03:29:35.146726 systemd[1]: modprobe@loop.service: Deactivated successfully. May 27 03:29:35.146735 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 27 03:29:35.146767 systemd-journald[1198]: Collecting audit messages is disabled. May 27 03:29:35.146790 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 27 03:29:35.146800 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 27 03:29:35.146810 kernel: ACPI: bus type drm_connector registered May 27 03:29:35.146820 systemd[1]: modprobe@drm.service: Deactivated successfully. May 27 03:29:35.146835 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 27 03:29:35.146845 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 27 03:29:35.146856 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. May 27 03:29:35.146868 systemd[1]: Reached target network-pre.target - Preparation for Network. 
May 27 03:29:35.146878 systemd-journald[1198]: Journal started May 27 03:29:35.146897 systemd-journald[1198]: Runtime Journal (/run/log/journal/433a3bf5aaf642fdbf5121eb1b120632) is 8M, max 78.5M, 70.5M free. May 27 03:29:34.769470 systemd[1]: Queued start job for default target multi-user.target. May 27 03:29:35.152135 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 27 03:29:34.790070 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. May 27 03:29:34.790741 systemd[1]: systemd-journald.service: Deactivated successfully. May 27 03:29:35.159015 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 27 03:29:35.164450 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 27 03:29:35.164481 systemd[1]: Reached target local-fs.target - Local File Systems. May 27 03:29:35.169228 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. May 27 03:29:35.176985 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 27 03:29:35.177014 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 27 03:29:35.185256 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 27 03:29:35.185286 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 27 03:29:35.190097 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 27 03:29:35.195938 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 27 03:29:35.201123 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 27 03:29:35.210145 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 27 03:29:35.219106 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 27 03:29:35.226223 systemd[1]: Started systemd-journald.service - Journal Service. May 27 03:29:35.230824 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 27 03:29:35.234593 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 27 03:29:35.235321 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 27 03:29:35.236675 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 27 03:29:35.252029 kernel: loop0: detected capacity change from 0 to 146240 May 27 03:29:35.271864 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 27 03:29:35.277044 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 27 03:29:35.281196 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... May 27 03:29:35.282432 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 27 03:29:35.308096 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 27 03:29:35.311310 systemd-tmpfiles[1224]: ACLs are not supported, ignoring. May 27 03:29:35.311328 systemd-tmpfiles[1224]: ACLs are not supported, ignoring. 
May 27 03:29:35.312672 systemd-journald[1198]: Time spent on flushing to /var/log/journal/433a3bf5aaf642fdbf5121eb1b120632 is 27.775ms for 1012 entries. May 27 03:29:35.312672 systemd-journald[1198]: System Journal (/var/log/journal/433a3bf5aaf642fdbf5121eb1b120632) is 8M, max 195.6M, 187.6M free. May 27 03:29:35.348636 systemd-journald[1198]: Received client request to flush runtime journal. May 27 03:29:35.348707 kernel: loop1: detected capacity change from 0 to 229808 May 27 03:29:35.323616 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 27 03:29:35.329995 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 27 03:29:35.332782 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. May 27 03:29:35.352260 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 27 03:29:35.376129 kernel: loop2: detected capacity change from 0 to 113872 May 27 03:29:35.391558 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 27 03:29:35.396201 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 27 03:29:35.422872 systemd-tmpfiles[1265]: ACLs are not supported, ignoring. May 27 03:29:35.422891 systemd-tmpfiles[1265]: ACLs are not supported, ignoring. May 27 03:29:35.428313 kernel: loop3: detected capacity change from 0 to 8 May 27 03:29:35.428075 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 27 03:29:35.447105 kernel: loop4: detected capacity change from 0 to 146240 May 27 03:29:35.465095 kernel: loop5: detected capacity change from 0 to 229808 May 27 03:29:35.492108 kernel: loop6: detected capacity change from 0 to 113872 May 27 03:29:35.506127 kernel: loop7: detected capacity change from 0 to 8 May 27 03:29:35.507891 (sd-merge)[1269]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-akamai'. May 27 03:29:35.508879 (sd-merge)[1269]: Merged extensions into '/usr'. May 27 03:29:35.515186 systemd[1]: Reload requested from client PID 1223 ('systemd-sysext') (unit systemd-sysext.service)... May 27 03:29:35.515255 systemd[1]: Reloading... May 27 03:29:35.621296 zram_generator::config[1295]: No configuration found. May 27 03:29:35.721370 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 27 03:29:35.808335 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 27 03:29:35.808648 systemd[1]: Reloading finished in 292 ms. May 27 03:29:35.814949 ldconfig[1219]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 27 03:29:35.824017 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 27 03:29:35.825188 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 27 03:29:35.838200 systemd[1]: Starting ensure-sysext.service... May 27 03:29:35.841190 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 27 03:29:35.869950 systemd-tmpfiles[1339]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. May 27 03:29:35.869987 systemd-tmpfiles[1339]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. 
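The (sd-merge) lines above show systemd-sysext discovering four extension images and merging them into /usr; each .raw image is attached through a loop device, which is what the preceding "loopN: detected capacity change" kernel messages reflect. The sketch below reproduces only the discovery step, assuming the standard sysext search directories (the exact set Flatcar uses is not shown in the log).

    import glob
    import os

    SYSEXT_DIRS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

    def list_sysexts():
        names = []
        for d in SYSEXT_DIRS:
            for img in sorted(glob.glob(os.path.join(d, "*.raw"))):
                # each .raw is a disk image that systemd-sysext attaches via
                # a loop device before overlaying it onto /usr and /opt
                names.append(os.path.basename(img).removesuffix(".raw"))
        return names

    print("Using extensions:", ", ".join(repr(n) for n in list_sysexts()))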
May 27 03:29:35.870728 systemd-tmpfiles[1339]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 27 03:29:35.871013 systemd-tmpfiles[1339]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 27 03:29:35.871919 systemd-tmpfiles[1339]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 27 03:29:35.872228 systemd-tmpfiles[1339]: ACLs are not supported, ignoring. May 27 03:29:35.872342 systemd-tmpfiles[1339]: ACLs are not supported, ignoring. May 27 03:29:35.875057 systemd[1]: Reload requested from client PID 1338 ('systemctl') (unit ensure-sysext.service)... May 27 03:29:35.875164 systemd[1]: Reloading... May 27 03:29:35.879420 systemd-tmpfiles[1339]: Detected autofs mount point /boot during canonicalization of boot. May 27 03:29:35.879434 systemd-tmpfiles[1339]: Skipping /boot May 27 03:29:35.894542 systemd-tmpfiles[1339]: Detected autofs mount point /boot during canonicalization of boot. May 27 03:29:35.894555 systemd-tmpfiles[1339]: Skipping /boot May 27 03:29:35.962116 zram_generator::config[1363]: No configuration found. May 27 03:29:36.067648 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 27 03:29:36.138218 systemd[1]: Reloading finished in 262 ms. May 27 03:29:36.153918 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 27 03:29:36.169663 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 27 03:29:36.177841 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 27 03:29:36.194076 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 27 03:29:36.198257 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 27 03:29:36.201669 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 27 03:29:36.208221 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 27 03:29:36.211138 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 27 03:29:36.215022 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 27 03:29:36.216248 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 27 03:29:36.217390 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 27 03:29:36.223011 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 27 03:29:36.232368 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 27 03:29:36.232995 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 27 03:29:36.233129 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 27 03:29:36.233217 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
May 27 03:29:36.239161 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 27 03:29:36.256433 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 27 03:29:36.256629 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 27 03:29:36.256828 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 27 03:29:36.256944 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 27 03:29:36.257060 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 27 03:29:36.259134 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 27 03:29:36.260122 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 27 03:29:36.262398 systemd[1]: modprobe@loop.service: Deactivated successfully. May 27 03:29:36.262617 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 27 03:29:36.266160 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 27 03:29:36.267451 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 27 03:29:36.268577 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 27 03:29:36.268784 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 27 03:29:36.277282 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 27 03:29:36.277495 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 27 03:29:36.279276 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 27 03:29:36.283198 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 27 03:29:36.284791 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 27 03:29:36.291304 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 27 03:29:36.291993 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 27 03:29:36.292150 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 27 03:29:36.294683 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 27 03:29:36.295287 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 27 03:29:36.303836 systemd[1]: Finished ensure-sysext.service. May 27 03:29:36.307768 systemd-udevd[1418]: Using default interface naming scheme 'v255'. May 27 03:29:36.311941 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... May 27 03:29:36.339799 systemd[1]: Started systemd-userdbd.service - User Database Manager. 
May 27 03:29:36.353707 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 27 03:29:36.355424 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 27 03:29:36.364270 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 27 03:29:36.365325 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 27 03:29:36.365730 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 27 03:29:36.366620 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 27 03:29:36.366838 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 27 03:29:36.370727 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 27 03:29:36.375274 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 27 03:29:36.376188 systemd[1]: modprobe@loop.service: Deactivated successfully. May 27 03:29:36.377414 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 27 03:29:36.383625 augenrules[1462]: No rules May 27 03:29:36.384671 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 27 03:29:36.385607 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 27 03:29:36.390839 systemd[1]: audit-rules.service: Deactivated successfully. May 27 03:29:36.391567 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 27 03:29:36.396420 systemd[1]: modprobe@drm.service: Deactivated successfully. May 27 03:29:36.396670 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 27 03:29:36.516352 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. May 27 03:29:36.539618 kernel: mousedev: PS/2 mouse device common for all mice May 27 03:29:36.554306 systemd-resolved[1414]: Positive Trust Anchors: May 27 03:29:36.554323 systemd-resolved[1414]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 27 03:29:36.554351 systemd-resolved[1414]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 27 03:29:36.558773 systemd-resolved[1414]: Defaulting to hostname 'linux'. May 27 03:29:36.560496 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 27 03:29:36.562189 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
May 27 03:29:36.601133 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 May 27 03:29:36.606172 systemd-networkd[1468]: lo: Link UP May 27 03:29:36.606180 systemd-networkd[1468]: lo: Gained carrier May 27 03:29:36.608937 systemd-networkd[1468]: Enumeration completed May 27 03:29:36.609114 systemd[1]: Started systemd-networkd.service - Network Configuration. May 27 03:29:36.610035 systemd[1]: Reached target network.target - Network. May 27 03:29:36.613333 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... May 27 03:29:36.620149 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt May 27 03:29:36.620358 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD May 27 03:29:36.620519 kernel: ACPI: button: Power Button [PWRF] May 27 03:29:36.621386 systemd-networkd[1468]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 27 03:29:36.621391 systemd-networkd[1468]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 27 03:29:36.622694 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 27 03:29:36.626554 systemd-networkd[1468]: eth0: Link UP May 27 03:29:36.626736 systemd-networkd[1468]: eth0: Gained carrier May 27 03:29:36.626750 systemd-networkd[1468]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 27 03:29:36.651327 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 27 03:29:36.652663 systemd[1]: Reached target sysinit.target - System Initialization. May 27 03:29:36.654233 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 27 03:29:36.654958 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 27 03:29:36.655540 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. May 27 03:29:36.656113 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 27 03:29:36.656686 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 27 03:29:36.656718 systemd[1]: Reached target paths.target - Path Units. May 27 03:29:36.659197 systemd[1]: Reached target time-set.target - System Time Set. May 27 03:29:36.659891 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 27 03:29:36.660557 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 27 03:29:36.661137 systemd[1]: Reached target timers.target - Timer Units. May 27 03:29:36.662626 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 27 03:29:36.664884 systemd[1]: Starting docker.socket - Docker Socket for the API... May 27 03:29:36.670550 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). May 27 03:29:36.672307 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). May 27 03:29:36.672885 systemd[1]: Reached target ssh-access.target - SSH Access Available. May 27 03:29:36.682992 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. 
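eth0 is configured above because it matched the catch-all /usr/lib/systemd/network/zz-default.network shipped with the image, whose observable effect here is DHCP on an otherwise unconfigured interface. The sketch below writes an illustrative unit with the same behavior (the real file's contents are not shown in the log, so this is a minimal guess at its shape) and asks networkd to re-evaluate its matches.

    import pathlib
    import subprocess
    import textwrap

    # Minimal .network unit: match ethernet-style names, enable DHCP.
    NETWORK_UNIT = textwrap.dedent("""\
        [Match]
        Name=eth*

        [Network]
        DHCP=yes
        """)

    path = pathlib.Path("/run/systemd/network/50-dhcp-example.network")
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(NETWORK_UNIT)
    # Re-evaluate .network matches without a reboot (systemd 244+).
    subprocess.run(["networkctl", "reload"], check=True)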
May 27 03:29:36.684505 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. May 27 03:29:36.686208 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. May 27 03:29:36.687283 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 27 03:29:36.689914 systemd[1]: Reached target sockets.target - Socket Units. May 27 03:29:36.690592 systemd[1]: Reached target basic.target - Basic System. May 27 03:29:36.691223 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 27 03:29:36.691253 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 27 03:29:36.693648 systemd[1]: Starting containerd.service - containerd container runtime... May 27 03:29:36.698219 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... May 27 03:29:36.702289 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 27 03:29:36.704534 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 27 03:29:36.711190 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 27 03:29:36.713431 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 27 03:29:36.715131 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 27 03:29:36.721838 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... May 27 03:29:36.726528 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 27 03:29:36.730043 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 27 03:29:36.733280 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 27 03:29:36.736182 jq[1519]: false May 27 03:29:36.736874 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 27 03:29:36.759791 systemd[1]: Starting systemd-logind.service - User Login Management... May 27 03:29:36.762897 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 27 03:29:36.763685 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 27 03:29:36.767298 systemd[1]: Starting update-engine.service - Update Engine... May 27 03:29:36.774176 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 27 03:29:36.780721 oslogin_cache_refresh[1521]: Refreshing passwd entry cache May 27 03:29:36.781385 google_oslogin_nss_cache[1521]: oslogin_cache_refresh[1521]: Refreshing passwd entry cache May 27 03:29:36.792445 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 27 03:29:36.795182 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 27 03:29:36.795418 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. 
May 27 03:29:36.802145 google_oslogin_nss_cache[1521]: oslogin_cache_refresh[1521]: Failure getting users, quitting May 27 03:29:36.809815 extend-filesystems[1520]: Found loop4 May 27 03:29:36.809815 extend-filesystems[1520]: Found loop5 May 27 03:29:36.809815 extend-filesystems[1520]: Found loop6 May 27 03:29:36.809815 extend-filesystems[1520]: Found loop7 May 27 03:29:36.809815 extend-filesystems[1520]: Found sda May 27 03:29:36.809815 extend-filesystems[1520]: Found sda1 May 27 03:29:36.809815 extend-filesystems[1520]: Found sda2 May 27 03:29:36.809815 extend-filesystems[1520]: Found sda3 May 27 03:29:36.809815 extend-filesystems[1520]: Found usr May 27 03:29:36.809815 extend-filesystems[1520]: Found sda4 May 27 03:29:36.809815 extend-filesystems[1520]: Found sda6 May 27 03:29:36.809815 extend-filesystems[1520]: Found sda7 May 27 03:29:36.809815 extend-filesystems[1520]: Found sda9 May 27 03:29:36.805543 systemd[1]: extend-filesystems.service: Deactivated successfully. May 27 03:29:36.819355 oslogin_cache_refresh[1521]: Failure getting users, quitting May 27 03:29:36.871099 google_oslogin_nss_cache[1521]: oslogin_cache_refresh[1521]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. May 27 03:29:36.871099 google_oslogin_nss_cache[1521]: oslogin_cache_refresh[1521]: Refreshing group entry cache May 27 03:29:36.871099 google_oslogin_nss_cache[1521]: oslogin_cache_refresh[1521]: Failure getting groups, quitting May 27 03:29:36.871099 google_oslogin_nss_cache[1521]: oslogin_cache_refresh[1521]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. May 27 03:29:36.805768 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 27 03:29:36.871241 update_engine[1530]: I20250527 03:29:36.844393 1530 main.cc:92] Flatcar Update Engine starting May 27 03:29:36.819378 oslogin_cache_refresh[1521]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. May 27 03:29:36.811427 systemd[1]: motdgen.service: Deactivated successfully. May 27 03:29:36.819420 oslogin_cache_refresh[1521]: Refreshing group entry cache May 27 03:29:36.811675 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 27 03:29:36.819904 oslogin_cache_refresh[1521]: Failure getting groups, quitting May 27 03:29:36.822346 systemd[1]: google-oslogin-cache.service: Deactivated successfully. May 27 03:29:36.819912 oslogin_cache_refresh[1521]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. May 27 03:29:36.823687 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. May 27 03:29:36.872238 jq[1532]: true May 27 03:29:36.827408 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 27 03:29:36.829303 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 27 03:29:36.874874 (ntainerd)[1552]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 27 03:29:36.897250 tar[1542]: linux-amd64/LICENSE May 27 03:29:36.899238 tar[1542]: linux-amd64/helm May 27 03:29:36.923947 jq[1553]: true May 27 03:29:36.931723 dbus-daemon[1517]: [system] SELinux support is enabled May 27 03:29:36.931865 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
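Note: the extend-filesystems lines above are simply an inventory of the block devices visible at that point (loop4..loop7 plus the sda partitions). The same listing is available from /proc/partitions; a small sketch:

    def list_partitions() -> list[str]:
        """Return device names from /proc/partitions, skipping the header rows."""
        names = []
        with open("/proc/partitions") as f:
            for line in f.readlines()[2:]:      # row 0 is the header, row 1 is blank
                fields = line.split()
                if fields:
                    names.append(fields[3])     # columns: major, minor, #blocks, name
        return names

    print(list_partitions())  # e.g. ['loop4', ..., 'sda', 'sda1', 'sda2', ...]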
May 27 03:29:36.936607 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 27 03:29:36.936632 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 27 03:29:36.940158 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 27 03:29:36.940180 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 27 03:29:36.968772 systemd[1]: Started update-engine.service - Update Engine. May 27 03:29:36.972684 update_engine[1530]: I20250527 03:29:36.972352 1530 update_check_scheduler.cc:74] Next update check in 11m19s May 27 03:29:36.973502 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 27 03:29:37.005240 systemd-logind[1527]: New seat seat0. May 27 03:29:37.007344 coreos-metadata[1516]: May 27 03:29:37.007 INFO Putting http://169.254.169.254/v1/token: Attempt #1 May 27 03:29:37.008978 systemd[1]: Started systemd-logind.service - User Login Management. May 27 03:29:37.056369 bash[1583]: Updated "/home/core/.ssh/authorized_keys" May 27 03:29:37.059664 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 27 03:29:37.065151 systemd[1]: Starting sshkeys.service... May 27 03:29:37.093678 systemd-networkd[1468]: eth0: DHCPv4 address 172.234.197.247/24, gateway 172.234.197.1 acquired from 23.213.14.182 May 27 03:29:37.094781 systemd-timesyncd[1445]: Network configuration changed, trying to establish connection. May 27 03:29:37.095963 dbus-daemon[1517]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1468 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") May 27 03:29:37.104545 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... May 27 03:29:37.124249 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. May 27 03:29:37.129233 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... May 27 03:29:37.206239 containerd[1552]: time="2025-05-27T03:29:37Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 May 27 03:29:38.493946 systemd-timesyncd[1445]: Contacted time server 72.30.35.88:123 (0.flatcar.pool.ntp.org). May 27 03:29:38.493990 systemd-timesyncd[1445]: Initial clock synchronization to Tue 2025-05-27 03:29:38.493600 UTC. May 27 03:29:38.495266 systemd-resolved[1414]: Clock change detected. Flushing caches. May 27 03:29:38.523798 coreos-metadata[1594]: May 27 03:29:38.523 INFO Putting http://169.254.169.254/v1/token: Attempt #1 May 27 03:29:38.536169 containerd[1552]: time="2025-05-27T03:29:38.536073354Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 May 27 03:29:38.547822 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 27 03:29:38.571837 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. 
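Note: coreos-metadata above begins with "Putting http://169.254.169.254/v1/token", i.e. the Linode metadata service hands out a short-lived token before instance data can be fetched. A stdlib-only sketch of that handshake; the two header names follow the Linode Metadata Service documentation as best recalled here and should be treated as assumptions rather than something confirmed by this log:

    import urllib.request

    BASE = "http://169.254.169.254/v1"

    def get_token(ttl_seconds: int = 3600) -> str:
        # Assumed header name per Linode's metadata docs.
        req = urllib.request.Request(
            f"{BASE}/token", method="PUT",
            headers={"Metadata-Token-Expiry-Seconds": str(ttl_seconds)},
        )
        with urllib.request.urlopen(req, timeout=5) as resp:
            return resp.read().decode()

    def fetch(path: str, token: str) -> str:
        # Assumed header name; 'instance' is the endpoint the agent fetches later.
        req = urllib.request.Request(
            f"{BASE}/{path}",
            headers={"X-Metadata-Token": token, "Accept": "application/json"},
        )
        with urllib.request.urlopen(req, timeout=5) as resp:
            return resp.read().decode()

    token = get_token()
    print(fetch("instance", token))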
May 27 03:29:38.576498 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 27 03:29:38.585583 containerd[1552]: time="2025-05-27T03:29:38.583628234Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.6µs" May 27 03:29:38.585583 containerd[1552]: time="2025-05-27T03:29:38.584606394Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 May 27 03:29:38.585583 containerd[1552]: time="2025-05-27T03:29:38.584627874Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 May 27 03:29:38.585583 containerd[1552]: time="2025-05-27T03:29:38.585030954Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 May 27 03:29:38.585583 containerd[1552]: time="2025-05-27T03:29:38.585047364Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 May 27 03:29:38.585583 containerd[1552]: time="2025-05-27T03:29:38.585071344Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 27 03:29:38.585583 containerd[1552]: time="2025-05-27T03:29:38.585159714Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 27 03:29:38.585583 containerd[1552]: time="2025-05-27T03:29:38.585171494Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 27 03:29:38.608295 containerd[1552]: time="2025-05-27T03:29:38.608067134Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 27 03:29:38.608295 containerd[1552]: time="2025-05-27T03:29:38.608100434Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 27 03:29:38.608295 containerd[1552]: time="2025-05-27T03:29:38.608114424Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 27 03:29:38.608295 containerd[1552]: time="2025-05-27T03:29:38.608122124Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 May 27 03:29:38.608295 containerd[1552]: time="2025-05-27T03:29:38.608228504Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 May 27 03:29:38.608521 containerd[1552]: time="2025-05-27T03:29:38.608449124Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 27 03:29:38.608521 containerd[1552]: time="2025-05-27T03:29:38.608486454Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 27 03:29:38.608521 containerd[1552]: time="2025-05-27T03:29:38.608496584Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 May 27 03:29:38.608615 containerd[1552]: time="2025-05-27T03:29:38.608528904Z" level=info 
msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 May 27 03:29:38.608777 containerd[1552]: time="2025-05-27T03:29:38.608751434Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 May 27 03:29:38.608845 containerd[1552]: time="2025-05-27T03:29:38.608822344Z" level=info msg="metadata content store policy set" policy=shared May 27 03:29:38.614606 containerd[1552]: time="2025-05-27T03:29:38.614325874Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 May 27 03:29:38.614606 containerd[1552]: time="2025-05-27T03:29:38.614377834Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 May 27 03:29:38.614606 containerd[1552]: time="2025-05-27T03:29:38.614392444Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 May 27 03:29:38.614606 containerd[1552]: time="2025-05-27T03:29:38.614409444Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 May 27 03:29:38.614606 containerd[1552]: time="2025-05-27T03:29:38.614421154Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 May 27 03:29:38.614606 containerd[1552]: time="2025-05-27T03:29:38.614432264Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 May 27 03:29:38.614606 containerd[1552]: time="2025-05-27T03:29:38.614443284Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 May 27 03:29:38.614606 containerd[1552]: time="2025-05-27T03:29:38.614461264Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 May 27 03:29:38.614606 containerd[1552]: time="2025-05-27T03:29:38.614473434Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 May 27 03:29:38.614606 containerd[1552]: time="2025-05-27T03:29:38.614482224Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 May 27 03:29:38.614606 containerd[1552]: time="2025-05-27T03:29:38.614490854Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 May 27 03:29:38.614606 containerd[1552]: time="2025-05-27T03:29:38.614501204Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 May 27 03:29:38.614819 containerd[1552]: time="2025-05-27T03:29:38.614697574Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 May 27 03:29:38.614819 containerd[1552]: time="2025-05-27T03:29:38.614719584Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 May 27 03:29:38.614819 containerd[1552]: time="2025-05-27T03:29:38.614758524Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 May 27 03:29:38.614819 containerd[1552]: time="2025-05-27T03:29:38.614770384Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 May 27 03:29:38.614819 containerd[1552]: time="2025-05-27T03:29:38.614780174Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 May 27 03:29:38.614819 containerd[1552]: time="2025-05-27T03:29:38.614789134Z" level=info msg="loading 
plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 May 27 03:29:38.614819 containerd[1552]: time="2025-05-27T03:29:38.614798554Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 May 27 03:29:38.614819 containerd[1552]: time="2025-05-27T03:29:38.614807424Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 May 27 03:29:38.615133 containerd[1552]: time="2025-05-27T03:29:38.614834844Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 May 27 03:29:38.615133 containerd[1552]: time="2025-05-27T03:29:38.614852734Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 May 27 03:29:38.615133 containerd[1552]: time="2025-05-27T03:29:38.614863574Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 May 27 03:29:38.615133 containerd[1552]: time="2025-05-27T03:29:38.615125034Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" May 27 03:29:38.615198 containerd[1552]: time="2025-05-27T03:29:38.615138994Z" level=info msg="Start snapshots syncer" May 27 03:29:38.615198 containerd[1552]: time="2025-05-27T03:29:38.615182374Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 May 27 03:29:38.619066 containerd[1552]: time="2025-05-27T03:29:38.616691554Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" May 27 03:29:38.619208 containerd[1552]: time="2025-05-27T03:29:38.619084384Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 May 27 03:29:38.620101 systemd-logind[1527]: Watching system buttons on /dev/input/event0 
(AT Translated Set 2 keyboard) May 27 03:29:38.621509 containerd[1552]: time="2025-05-27T03:29:38.621482244Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 May 27 03:29:38.629537 containerd[1552]: time="2025-05-27T03:29:38.629503064Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 May 27 03:29:38.629624 containerd[1552]: time="2025-05-27T03:29:38.629539264Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 May 27 03:29:38.629649 containerd[1552]: time="2025-05-27T03:29:38.629552184Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 May 27 03:29:38.629942 containerd[1552]: time="2025-05-27T03:29:38.629916244Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 May 27 03:29:38.629942 containerd[1552]: time="2025-05-27T03:29:38.629940294Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 May 27 03:29:38.629989 containerd[1552]: time="2025-05-27T03:29:38.629951674Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 May 27 03:29:38.629989 containerd[1552]: time="2025-05-27T03:29:38.629962024Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 May 27 03:29:38.630022 containerd[1552]: time="2025-05-27T03:29:38.630003444Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 May 27 03:29:38.630022 containerd[1552]: time="2025-05-27T03:29:38.630015274Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 May 27 03:29:38.630064 containerd[1552]: time="2025-05-27T03:29:38.630025084Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 May 27 03:29:38.630667 containerd[1552]: time="2025-05-27T03:29:38.630639324Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 27 03:29:38.630713 containerd[1552]: time="2025-05-27T03:29:38.630683864Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 27 03:29:38.630713 containerd[1552]: time="2025-05-27T03:29:38.630694504Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 27 03:29:38.630713 containerd[1552]: time="2025-05-27T03:29:38.630703414Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 27 03:29:38.630713 containerd[1552]: time="2025-05-27T03:29:38.630710914Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 May 27 03:29:38.630787 containerd[1552]: time="2025-05-27T03:29:38.630719844Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 May 27 03:29:38.630787 containerd[1552]: time="2025-05-27T03:29:38.630729774Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 May 27 03:29:38.631642 containerd[1552]: time="2025-05-27T03:29:38.631615374Z" level=info msg="runtime interface created" May 27 03:29:38.631642 containerd[1552]: 
time="2025-05-27T03:29:38.631634804Z" level=info msg="created NRI interface" May 27 03:29:38.631694 containerd[1552]: time="2025-05-27T03:29:38.631651264Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 May 27 03:29:38.631694 containerd[1552]: time="2025-05-27T03:29:38.631667514Z" level=info msg="Connect containerd service" May 27 03:29:38.631738 containerd[1552]: time="2025-05-27T03:29:38.631715404Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 27 03:29:38.639428 containerd[1552]: time="2025-05-27T03:29:38.637406604Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 27 03:29:38.638254 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 27 03:29:38.661778 coreos-metadata[1594]: May 27 03:29:38.661 INFO Fetching http://169.254.169.254/v1/ssh-keys: Attempt #1 May 27 03:29:38.797995 coreos-metadata[1594]: May 27 03:29:38.797 INFO Fetch successful May 27 03:29:38.807269 systemd-logind[1527]: Watching system buttons on /dev/input/event2 (Power Button) May 27 03:29:38.888063 locksmithd[1561]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 27 03:29:38.891099 systemd[1]: Started systemd-hostnamed.service - Hostname Service. May 27 03:29:38.893835 dbus-daemon[1517]: [system] Successfully activated service 'org.freedesktop.hostname1' May 27 03:29:38.897032 dbus-daemon[1517]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.8' (uid=0 pid=1591 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") May 27 03:29:38.905584 kernel: EDAC MC: Ver: 3.0.0 May 27 03:29:38.933325 systemd[1]: Starting polkit.service - Authorization Manager... May 27 03:29:38.943476 containerd[1552]: time="2025-05-27T03:29:38.943037304Z" level=info msg="Start subscribing containerd event" May 27 03:29:38.943476 containerd[1552]: time="2025-05-27T03:29:38.943083414Z" level=info msg="Start recovering state" May 27 03:29:38.943476 containerd[1552]: time="2025-05-27T03:29:38.943172294Z" level=info msg="Start event monitor" May 27 03:29:38.943476 containerd[1552]: time="2025-05-27T03:29:38.943184614Z" level=info msg="Start cni network conf syncer for default" May 27 03:29:38.943476 containerd[1552]: time="2025-05-27T03:29:38.943191774Z" level=info msg="Start streaming server" May 27 03:29:38.943476 containerd[1552]: time="2025-05-27T03:29:38.943199224Z" level=info msg="Registered namespace \"k8s.io\" with NRI" May 27 03:29:38.943476 containerd[1552]: time="2025-05-27T03:29:38.943205694Z" level=info msg="runtime interface starting up..." May 27 03:29:38.943476 containerd[1552]: time="2025-05-27T03:29:38.943211324Z" level=info msg="starting plugins..." May 27 03:29:38.943476 containerd[1552]: time="2025-05-27T03:29:38.943224104Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" May 27 03:29:38.948681 containerd[1552]: time="2025-05-27T03:29:38.946247344Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 27 03:29:38.948681 containerd[1552]: time="2025-05-27T03:29:38.946298774Z" level=info msg=serving... 
address=/run/containerd/containerd.sock May 27 03:29:38.953111 update-ssh-keys[1624]: Updated "/home/core/.ssh/authorized_keys" May 27 03:29:38.958430 containerd[1552]: time="2025-05-27T03:29:38.957617544Z" level=info msg="containerd successfully booted in 0.475705s" May 27 03:29:38.966029 systemd[1]: Started containerd.service - containerd container runtime. May 27 03:29:39.039133 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). May 27 03:29:39.047734 systemd[1]: Finished sshkeys.service. May 27 03:29:39.148805 sshd_keygen[1531]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 27 03:29:39.175981 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 27 03:29:39.189028 systemd[1]: Starting issuegen.service - Generate /run/issue... May 27 03:29:39.215990 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 27 03:29:39.220426 polkitd[1632]: Started polkitd version 126 May 27 03:29:39.227356 polkitd[1632]: Loading rules from directory /etc/polkit-1/rules.d May 27 03:29:39.227421 systemd[1]: issuegen.service: Deactivated successfully. May 27 03:29:39.227888 polkitd[1632]: Loading rules from directory /run/polkit-1/rules.d May 27 03:29:39.228106 polkitd[1632]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) May 27 03:29:39.228320 polkitd[1632]: Loading rules from directory /usr/local/share/polkit-1/rules.d May 27 03:29:39.228339 polkitd[1632]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) May 27 03:29:39.228371 polkitd[1632]: Loading rules from directory /usr/share/polkit-1/rules.d May 27 03:29:39.228997 systemd[1]: Finished issuegen.service - Generate /run/issue. May 27 03:29:39.231009 polkitd[1632]: Finished loading, compiling and executing 2 rules May 27 03:29:39.232543 dbus-daemon[1517]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' May 27 03:29:39.232742 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 27 03:29:39.233861 polkitd[1632]: Acquired the name org.freedesktop.PolicyKit1 on the system bus May 27 03:29:39.234736 systemd[1]: Started polkit.service - Authorization Manager. May 27 03:29:39.251246 systemd-hostnamed[1591]: Hostname set to <172-234-197-247> (transient) May 27 03:29:39.251555 systemd-resolved[1414]: System hostname changed to '172-234-197-247'. May 27 03:29:39.255413 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 27 03:29:39.258362 systemd[1]: Started getty@tty1.service - Getty on tty1. May 27 03:29:39.261380 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. May 27 03:29:39.262143 systemd[1]: Reached target getty.target - Login Prompts. May 27 03:29:39.293180 coreos-metadata[1516]: May 27 03:29:39.293 INFO Putting http://169.254.169.254/v1/token: Attempt #2 May 27 03:29:39.385022 coreos-metadata[1516]: May 27 03:29:39.384 INFO Fetching http://169.254.169.254/v1/instance: Attempt #1 May 27 03:29:39.386794 tar[1542]: linux-amd64/README.md May 27 03:29:39.401649 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
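Note: a few entries back containerd logged "failed to load cni during init ... no network config found in /etc/cni/net.d". That error is expected this early in boot — no CNI configuration has been installed yet, and pod networking is set up much later by the cluster's own add-on. Purely as an illustration of what the loader expects to find, this sketch writes a minimal conflist; the network name and subnet are made-up placeholders, while bridge, host-local, and portmap are the standard CNI reference plugins:

    import json
    from pathlib import Path

    # Placeholder values; a real cluster's CNI add-on installs its own config.
    conflist = {
        "cniVersion": "1.0.0",
        "name": "examplenet",
        "plugins": [
            {
                "type": "bridge",
                "bridge": "cni0",
                "isGateway": True,
                "ipMasq": True,
                "ipam": {"type": "host-local", "subnet": "10.88.0.0/16"},
            },
            {"type": "portmap", "capabilities": {"portMappings": True}},
        ],
    }

    path = Path("/etc/cni/net.d/10-examplenet.conflist")
    path.parent.mkdir(parents=True, exist_ok=True)   # requires root
    path.write_text(json.dumps(conflist, indent=2))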
May 27 03:29:39.577020 coreos-metadata[1516]: May 27 03:29:39.576 INFO Fetch successful May 27 03:29:39.577150 coreos-metadata[1516]: May 27 03:29:39.577 INFO Fetching http://169.254.169.254/v1/network: Attempt #1 May 27 03:29:39.830787 systemd-networkd[1468]: eth0: Gained IPv6LL May 27 03:29:39.834293 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 27 03:29:39.835074 coreos-metadata[1516]: May 27 03:29:39.833 INFO Fetch successful May 27 03:29:39.836376 systemd[1]: Reached target network-online.target - Network is Online. May 27 03:29:39.839298 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 03:29:39.842731 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 27 03:29:39.873941 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 27 03:29:39.951554 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. May 27 03:29:39.953323 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 27 03:29:40.795176 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 27 03:29:40.796399 systemd[1]: Reached target multi-user.target - Multi-User System. May 27 03:29:40.797798 systemd[1]: Startup finished in 2.916s (kernel) + 7.540s (initrd) + 5.378s (userspace) = 15.835s. May 27 03:29:40.834248 (kubelet)[1703]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 27 03:29:41.388778 kubelet[1703]: E0527 03:29:41.388688 1703 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 27 03:29:41.393052 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 27 03:29:41.393235 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 27 03:29:41.393968 systemd[1]: kubelet.service: Consumed 893ms CPU time, 267.4M memory peak. May 27 03:29:42.382252 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 27 03:29:42.383827 systemd[1]: Started sshd@0-172.234.197.247:22-139.178.68.195:45432.service - OpenSSH per-connection server daemon (139.178.68.195:45432). May 27 03:29:42.720853 sshd[1715]: Accepted publickey for core from 139.178.68.195 port 45432 ssh2: RSA SHA256:jxXZ1xczrG8cnpkwXQgX0Kgw4UJGn7xFWFd7bDU9ewY May 27 03:29:42.722578 sshd-session[1715]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:29:42.729114 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 27 03:29:42.730517 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 27 03:29:42.738646 systemd-logind[1527]: New session 1 of user core. May 27 03:29:42.750095 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 27 03:29:42.753865 systemd[1]: Starting user@500.service - User Manager for UID 500... May 27 03:29:42.764351 (systemd)[1719]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 27 03:29:42.766709 systemd-logind[1527]: New session c1 of user core. May 27 03:29:42.889906 systemd[1719]: Queued start job for default target default.target. 
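Note: the kubelet failure logged just above ("open /var/lib/kubelet/config.yaml: no such file or directory") is the normal state of a node that has not yet been joined to a cluster; kubeadm init/join generates that file. For reference only, a hand-written minimal KubeletConfiguration that would at least load — the two field values are illustrative guesses consistent with this host (systemd cgroup driver, containerd socket), not what kubeadm would actually emit:

    from pathlib import Path

    # Illustrative minimal KubeletConfiguration (v1beta1); kubeadm normally
    # writes this file during 'kubeadm init' / 'kubeadm join'.
    lines = [
        "apiVersion: kubelet.config.k8s.io/v1beta1",
        "kind: KubeletConfiguration",
        "cgroupDriver: systemd",
        "containerRuntimeEndpoint: unix:///run/containerd/containerd.sock",
    ]

    path = Path("/var/lib/kubelet/config.yaml")
    path.parent.mkdir(parents=True, exist_ok=True)   # requires root
    path.write_text("\n".join(lines) + "\n")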
May 27 03:29:42.901779 systemd[1719]: Created slice app.slice - User Application Slice. May 27 03:29:42.901808 systemd[1719]: Reached target paths.target - Paths. May 27 03:29:42.901852 systemd[1719]: Reached target timers.target - Timers. May 27 03:29:42.903427 systemd[1719]: Starting dbus.socket - D-Bus User Message Bus Socket... May 27 03:29:42.915209 systemd[1719]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 27 03:29:42.915342 systemd[1719]: Reached target sockets.target - Sockets. May 27 03:29:42.915494 systemd[1719]: Reached target basic.target - Basic System. May 27 03:29:42.915550 systemd[1719]: Reached target default.target - Main User Target. May 27 03:29:42.915598 systemd[1719]: Startup finished in 142ms. May 27 03:29:42.915681 systemd[1]: Started user@500.service - User Manager for UID 500. May 27 03:29:42.922735 systemd[1]: Started session-1.scope - Session 1 of User core. May 27 03:29:43.173225 systemd[1]: Started sshd@1-172.234.197.247:22-139.178.68.195:45444.service - OpenSSH per-connection server daemon (139.178.68.195:45444). May 27 03:29:43.508870 sshd[1730]: Accepted publickey for core from 139.178.68.195 port 45444 ssh2: RSA SHA256:jxXZ1xczrG8cnpkwXQgX0Kgw4UJGn7xFWFd7bDU9ewY May 27 03:29:43.510343 sshd-session[1730]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:29:43.515740 systemd-logind[1527]: New session 2 of user core. May 27 03:29:43.520705 systemd[1]: Started session-2.scope - Session 2 of User core. May 27 03:29:43.752777 sshd[1732]: Connection closed by 139.178.68.195 port 45444 May 27 03:29:43.753578 sshd-session[1730]: pam_unix(sshd:session): session closed for user core May 27 03:29:43.757172 systemd[1]: sshd@1-172.234.197.247:22-139.178.68.195:45444.service: Deactivated successfully. May 27 03:29:43.758884 systemd[1]: session-2.scope: Deactivated successfully. May 27 03:29:43.760653 systemd-logind[1527]: Session 2 logged out. Waiting for processes to exit. May 27 03:29:43.762008 systemd-logind[1527]: Removed session 2. May 27 03:29:43.819387 systemd[1]: Started sshd@2-172.234.197.247:22-139.178.68.195:57072.service - OpenSSH per-connection server daemon (139.178.68.195:57072). May 27 03:29:44.172038 sshd[1738]: Accepted publickey for core from 139.178.68.195 port 57072 ssh2: RSA SHA256:jxXZ1xczrG8cnpkwXQgX0Kgw4UJGn7xFWFd7bDU9ewY May 27 03:29:44.173323 sshd-session[1738]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:29:44.179226 systemd-logind[1527]: New session 3 of user core. May 27 03:29:44.184751 systemd[1]: Started session-3.scope - Session 3 of User core. May 27 03:29:44.417993 sshd[1740]: Connection closed by 139.178.68.195 port 57072 May 27 03:29:44.418634 sshd-session[1738]: pam_unix(sshd:session): session closed for user core May 27 03:29:44.423350 systemd[1]: sshd@2-172.234.197.247:22-139.178.68.195:57072.service: Deactivated successfully. May 27 03:29:44.423781 systemd-logind[1527]: Session 3 logged out. Waiting for processes to exit. May 27 03:29:44.425838 systemd[1]: session-3.scope: Deactivated successfully. May 27 03:29:44.427381 systemd-logind[1527]: Removed session 3. May 27 03:29:44.487677 systemd[1]: Started sshd@3-172.234.197.247:22-139.178.68.195:57088.service - OpenSSH per-connection server daemon (139.178.68.195:57088). 
May 27 03:29:44.845886 sshd[1746]: Accepted publickey for core from 139.178.68.195 port 57088 ssh2: RSA SHA256:jxXZ1xczrG8cnpkwXQgX0Kgw4UJGn7xFWFd7bDU9ewY May 27 03:29:44.847811 sshd-session[1746]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:29:44.853138 systemd-logind[1527]: New session 4 of user core. May 27 03:29:44.863721 systemd[1]: Started session-4.scope - Session 4 of User core. May 27 03:29:45.097546 sshd[1748]: Connection closed by 139.178.68.195 port 57088 May 27 03:29:45.098620 sshd-session[1746]: pam_unix(sshd:session): session closed for user core May 27 03:29:45.106024 systemd[1]: sshd@3-172.234.197.247:22-139.178.68.195:57088.service: Deactivated successfully. May 27 03:29:45.110150 systemd[1]: session-4.scope: Deactivated successfully. May 27 03:29:45.111531 systemd-logind[1527]: Session 4 logged out. Waiting for processes to exit. May 27 03:29:45.114029 systemd-logind[1527]: Removed session 4. May 27 03:29:45.157121 systemd[1]: Started sshd@4-172.234.197.247:22-139.178.68.195:57096.service - OpenSSH per-connection server daemon (139.178.68.195:57096). May 27 03:29:45.516490 sshd[1754]: Accepted publickey for core from 139.178.68.195 port 57096 ssh2: RSA SHA256:jxXZ1xczrG8cnpkwXQgX0Kgw4UJGn7xFWFd7bDU9ewY May 27 03:29:45.518321 sshd-session[1754]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:29:45.523125 systemd-logind[1527]: New session 5 of user core. May 27 03:29:45.529700 systemd[1]: Started session-5.scope - Session 5 of User core. May 27 03:29:45.721756 sudo[1757]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 27 03:29:45.722246 sudo[1757]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 27 03:29:45.739702 sudo[1757]: pam_unix(sudo:session): session closed for user root May 27 03:29:45.790819 sshd[1756]: Connection closed by 139.178.68.195 port 57096 May 27 03:29:45.791694 sshd-session[1754]: pam_unix(sshd:session): session closed for user core May 27 03:29:45.796635 systemd[1]: sshd@4-172.234.197.247:22-139.178.68.195:57096.service: Deactivated successfully. May 27 03:29:45.798639 systemd[1]: session-5.scope: Deactivated successfully. May 27 03:29:45.799298 systemd-logind[1527]: Session 5 logged out. Waiting for processes to exit. May 27 03:29:45.800790 systemd-logind[1527]: Removed session 5. May 27 03:29:45.860438 systemd[1]: Started sshd@5-172.234.197.247:22-139.178.68.195:57110.service - OpenSSH per-connection server daemon (139.178.68.195:57110). May 27 03:29:46.232467 sshd[1763]: Accepted publickey for core from 139.178.68.195 port 57110 ssh2: RSA SHA256:jxXZ1xczrG8cnpkwXQgX0Kgw4UJGn7xFWFd7bDU9ewY May 27 03:29:46.233924 sshd-session[1763]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:29:46.239082 systemd-logind[1527]: New session 6 of user core. May 27 03:29:46.252700 systemd[1]: Started session-6.scope - Session 6 of User core. 
May 27 03:29:46.434379 sudo[1767]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 27 03:29:46.434695 sudo[1767]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 27 03:29:46.440337 sudo[1767]: pam_unix(sudo:session): session closed for user root May 27 03:29:46.446377 sudo[1766]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules May 27 03:29:46.446709 sudo[1766]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 27 03:29:46.456752 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 27 03:29:46.493869 augenrules[1789]: No rules May 27 03:29:46.494298 systemd[1]: audit-rules.service: Deactivated successfully. May 27 03:29:46.494549 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 27 03:29:46.496066 sudo[1766]: pam_unix(sudo:session): session closed for user root May 27 03:29:46.548429 sshd[1765]: Connection closed by 139.178.68.195 port 57110 May 27 03:29:46.548948 sshd-session[1763]: pam_unix(sshd:session): session closed for user core May 27 03:29:46.552932 systemd[1]: sshd@5-172.234.197.247:22-139.178.68.195:57110.service: Deactivated successfully. May 27 03:29:46.554750 systemd[1]: session-6.scope: Deactivated successfully. May 27 03:29:46.555387 systemd-logind[1527]: Session 6 logged out. Waiting for processes to exit. May 27 03:29:46.556839 systemd-logind[1527]: Removed session 6. May 27 03:29:46.613764 systemd[1]: Started sshd@6-172.234.197.247:22-139.178.68.195:57124.service - OpenSSH per-connection server daemon (139.178.68.195:57124). May 27 03:29:46.946101 sshd[1798]: Accepted publickey for core from 139.178.68.195 port 57124 ssh2: RSA SHA256:jxXZ1xczrG8cnpkwXQgX0Kgw4UJGn7xFWFd7bDU9ewY May 27 03:29:46.947580 sshd-session[1798]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:29:46.952687 systemd-logind[1527]: New session 7 of user core. May 27 03:29:46.962696 systemd[1]: Started session-7.scope - Session 7 of User core. May 27 03:29:47.141014 sudo[1801]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 27 03:29:47.141316 sudo[1801]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 27 03:29:47.435200 systemd[1]: Starting docker.service - Docker Application Container Engine... May 27 03:29:47.456883 (dockerd)[1820]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 27 03:29:47.660584 dockerd[1820]: time="2025-05-27T03:29:47.660382964Z" level=info msg="Starting up" May 27 03:29:47.661956 dockerd[1820]: time="2025-05-27T03:29:47.661938304Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" May 27 03:29:47.718754 dockerd[1820]: time="2025-05-27T03:29:47.718717134Z" level=info msg="Loading containers: start." May 27 03:29:47.728590 kernel: Initializing XFRM netlink socket May 27 03:29:47.958748 systemd-networkd[1468]: docker0: Link UP May 27 03:29:47.961547 dockerd[1820]: time="2025-05-27T03:29:47.961521794Z" level=info msg="Loading containers: done." 
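Note: with "Loading containers: done." above and "API listen on /run/docker.sock" just below, dockerd is reachable over its Unix socket (the systemd message later in this log notes /var/run/docker.sock is updated to /run/docker.sock). A quick liveness check, assuming the Python docker SDK (pip install docker) is available — it is not part of this image:

    import docker

    client = docker.from_env()            # talks to the default Docker socket
    print(client.ping())                  # True when the daemon answers
    print(client.version()["Version"])    # e.g. '28.0.1' as logged below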
May 27 03:29:47.976444 dockerd[1820]: time="2025-05-27T03:29:47.976384064Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 27 03:29:47.976548 dockerd[1820]: time="2025-05-27T03:29:47.976440594Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 May 27 03:29:47.976548 dockerd[1820]: time="2025-05-27T03:29:47.976530824Z" level=info msg="Initializing buildkit" May 27 03:29:47.995262 dockerd[1820]: time="2025-05-27T03:29:47.995232314Z" level=info msg="Completed buildkit initialization" May 27 03:29:48.002326 dockerd[1820]: time="2025-05-27T03:29:48.002303344Z" level=info msg="Daemon has completed initialization" May 27 03:29:48.002440 dockerd[1820]: time="2025-05-27T03:29:48.002401314Z" level=info msg="API listen on /run/docker.sock" May 27 03:29:48.002492 systemd[1]: Started docker.service - Docker Application Container Engine. May 27 03:29:48.895333 containerd[1552]: time="2025-05-27T03:29:48.895296844Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.1\"" May 27 03:29:49.706298 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1659856684.mount: Deactivated successfully. May 27 03:29:50.873351 containerd[1552]: time="2025-05-27T03:29:50.873286424Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:29:50.874290 containerd[1552]: time="2025-05-27T03:29:50.874130144Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.1: active requests=0, bytes read=30075403" May 27 03:29:50.874826 containerd[1552]: time="2025-05-27T03:29:50.874794394Z" level=info msg="ImageCreate event name:\"sha256:c6ab243b29f82a6ce269a5342bfd9ea3d0d4ef0f2bb7e98c6ac0bde1aeafab66\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:29:50.877150 containerd[1552]: time="2025-05-27T03:29:50.877093484Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:d8ae2fb01c39aa1c7add84f3d54425cf081c24c11e3946830292a8cfa4293548\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:29:50.878243 containerd[1552]: time="2025-05-27T03:29:50.877809604Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.1\" with image id \"sha256:c6ab243b29f82a6ce269a5342bfd9ea3d0d4ef0f2bb7e98c6ac0bde1aeafab66\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.1\", repo digest \"registry.k8s.io/kube-apiserver@sha256:d8ae2fb01c39aa1c7add84f3d54425cf081c24c11e3946830292a8cfa4293548\", size \"30072203\" in 1.9824767s" May 27 03:29:50.878243 containerd[1552]: time="2025-05-27T03:29:50.877851064Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.1\" returns image reference \"sha256:c6ab243b29f82a6ce269a5342bfd9ea3d0d4ef0f2bb7e98c6ac0bde1aeafab66\"" May 27 03:29:50.878415 containerd[1552]: time="2025-05-27T03:29:50.878389244Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.1\"" May 27 03:29:51.467756 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 27 03:29:51.470674 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 03:29:51.688966 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
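Note: the kube-apiserver pull a few entries above reports bytes read=30075403 completed in 1.9824767s. Using only numbers quoted from the log, that works out to roughly 15 MB/s from registry.k8s.io:

    bytes_read = 30_075_403   # from "stop pulling image ... bytes read=30075403"
    seconds = 1.9824767       # from '... in 1.9824767s'

    print(f"{bytes_read / seconds / 1e6:.1f} MB/s")   # ~15.2 MB/s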
May 27 03:29:51.694822 (kubelet)[2085]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 27 03:29:51.745029 kubelet[2085]: E0527 03:29:51.744858 2085 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 27 03:29:51.751171 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 27 03:29:51.751395 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 27 03:29:51.751901 systemd[1]: kubelet.service: Consumed 227ms CPU time, 108.6M memory peak. May 27 03:29:52.274099 containerd[1552]: time="2025-05-27T03:29:52.274018554Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:29:52.275192 containerd[1552]: time="2025-05-27T03:29:52.275150214Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.1: active requests=0, bytes read=26011390" May 27 03:29:52.276196 containerd[1552]: time="2025-05-27T03:29:52.276161734Z" level=info msg="ImageCreate event name:\"sha256:ef43894fa110c389f7286f4d5a3ea176072c95280efeca60d6a79617cdbbf3e4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:29:52.278601 containerd[1552]: time="2025-05-27T03:29:52.278542514Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7c9bea694e3a3c01ed6a5ee02d55a6124cc08e0b2eec6caa33f2c396b8cbc3f8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:29:52.279590 containerd[1552]: time="2025-05-27T03:29:52.279538974Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.1\" with image id \"sha256:ef43894fa110c389f7286f4d5a3ea176072c95280efeca60d6a79617cdbbf3e4\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.1\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7c9bea694e3a3c01ed6a5ee02d55a6124cc08e0b2eec6caa33f2c396b8cbc3f8\", size \"27638910\" in 1.40112138s" May 27 03:29:52.279630 containerd[1552]: time="2025-05-27T03:29:52.279596924Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.1\" returns image reference \"sha256:ef43894fa110c389f7286f4d5a3ea176072c95280efeca60d6a79617cdbbf3e4\"" May 27 03:29:52.286125 containerd[1552]: time="2025-05-27T03:29:52.286069874Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.1\"" May 27 03:29:53.502406 containerd[1552]: time="2025-05-27T03:29:53.502338124Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:29:53.503583 containerd[1552]: time="2025-05-27T03:29:53.503348154Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.1: active requests=0, bytes read=20148960" May 27 03:29:53.504548 containerd[1552]: time="2025-05-27T03:29:53.504511204Z" level=info msg="ImageCreate event name:\"sha256:398c985c0d950becc8dcdab5877a8a517ffeafca0792b3fe5f1acff218aeac49\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:29:53.507346 containerd[1552]: time="2025-05-27T03:29:53.507308684Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-scheduler@sha256:395b7de7cdbdcc3c3a3db270844a3f71d757e2447a1e4db76b4cce46fba7fd55\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:29:53.508052 containerd[1552]: time="2025-05-27T03:29:53.508012114Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.1\" with image id \"sha256:398c985c0d950becc8dcdab5877a8a517ffeafca0792b3fe5f1acff218aeac49\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.1\", repo digest \"registry.k8s.io/kube-scheduler@sha256:395b7de7cdbdcc3c3a3db270844a3f71d757e2447a1e4db76b4cce46fba7fd55\", size \"21776498\" in 1.22190288s" May 27 03:29:53.508100 containerd[1552]: time="2025-05-27T03:29:53.508053044Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.1\" returns image reference \"sha256:398c985c0d950becc8dcdab5877a8a517ffeafca0792b3fe5f1acff218aeac49\"" May 27 03:29:53.508691 containerd[1552]: time="2025-05-27T03:29:53.508657474Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.1\"" May 27 03:29:54.748404 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1974136363.mount: Deactivated successfully. May 27 03:29:55.127319 containerd[1552]: time="2025-05-27T03:29:55.127138074Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:29:55.129075 containerd[1552]: time="2025-05-27T03:29:55.129031024Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.1: active requests=0, bytes read=31889075" May 27 03:29:55.129766 containerd[1552]: time="2025-05-27T03:29:55.129693914Z" level=info msg="ImageCreate event name:\"sha256:b79c189b052cdbe0e837d0caa6faf1d9fd696d8664fcc462f67d9ea51f26fef2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:29:55.131287 containerd[1552]: time="2025-05-27T03:29:55.131229254Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7ddf379897139ae8ade8b33cb9373b70c632a4d5491da6e234f5d830e0a50807\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:29:55.132705 containerd[1552]: time="2025-05-27T03:29:55.132674444Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.1\" with image id \"sha256:b79c189b052cdbe0e837d0caa6faf1d9fd696d8664fcc462f67d9ea51f26fef2\", repo tag \"registry.k8s.io/kube-proxy:v1.33.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:7ddf379897139ae8ade8b33cb9373b70c632a4d5491da6e234f5d830e0a50807\", size \"31888094\" in 1.62398306s" May 27 03:29:55.132754 containerd[1552]: time="2025-05-27T03:29:55.132708384Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.1\" returns image reference \"sha256:b79c189b052cdbe0e837d0caa6faf1d9fd696d8664fcc462f67d9ea51f26fef2\"" May 27 03:29:55.133293 containerd[1552]: time="2025-05-27T03:29:55.133261394Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" May 27 03:29:55.764499 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount52842779.mount: Deactivated successfully. 
May 27 03:29:56.557299 containerd[1552]: time="2025-05-27T03:29:56.557229414Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:29:56.558669 containerd[1552]: time="2025-05-27T03:29:56.558364404Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" May 27 03:29:56.559482 containerd[1552]: time="2025-05-27T03:29:56.559446144Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:29:56.562834 containerd[1552]: time="2025-05-27T03:29:56.562791924Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:29:56.563616 containerd[1552]: time="2025-05-27T03:29:56.563592164Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.43030009s" May 27 03:29:56.563701 containerd[1552]: time="2025-05-27T03:29:56.563685084Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" May 27 03:29:56.564453 containerd[1552]: time="2025-05-27T03:29:56.564417004Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 27 03:29:57.172926 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2602662098.mount: Deactivated successfully. 
May 27 03:29:57.179521 containerd[1552]: time="2025-05-27T03:29:57.179444564Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 27 03:29:57.180274 containerd[1552]: time="2025-05-27T03:29:57.180237904Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" May 27 03:29:57.181172 containerd[1552]: time="2025-05-27T03:29:57.181134644Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 27 03:29:57.182826 containerd[1552]: time="2025-05-27T03:29:57.182769744Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 27 03:29:57.184267 containerd[1552]: time="2025-05-27T03:29:57.183809814Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 619.35746ms" May 27 03:29:57.184267 containerd[1552]: time="2025-05-27T03:29:57.183846294Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" May 27 03:29:57.184652 containerd[1552]: time="2025-05-27T03:29:57.184615934Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" May 27 03:29:59.266688 containerd[1552]: time="2025-05-27T03:29:59.266633184Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:29:59.267771 containerd[1552]: time="2025-05-27T03:29:59.267726694Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58142739" May 27 03:29:59.268656 containerd[1552]: time="2025-05-27T03:29:59.268602804Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:29:59.271657 containerd[1552]: time="2025-05-27T03:29:59.271615464Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:29:59.274541 containerd[1552]: time="2025-05-27T03:29:59.272733624Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 2.08808857s" May 27 03:29:59.274541 containerd[1552]: time="2025-05-27T03:29:59.272760484Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\""
May 27 03:30:01.589208 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 27 03:30:01.589359 systemd[1]: kubelet.service: Consumed 227ms CPU time, 108.6M memory peak. May 27 03:30:01.593784 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 03:30:01.631857 systemd[1]: Reload requested from client PID 2204 ('systemctl') (unit session-7.scope)... May 27 03:30:01.631874 systemd[1]: Reloading... May 27 03:30:01.788605 zram_generator::config[2251]: No configuration found. May 27 03:30:01.887542 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 27 03:30:01.995965 systemd[1]: Reloading finished in 363 ms. May 27 03:30:02.063145 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 27 03:30:02.063399 systemd[1]: kubelet.service: Failed with result 'signal'. May 27 03:30:02.063770 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 27 03:30:02.063823 systemd[1]: kubelet.service: Consumed 152ms CPU time, 98.3M memory peak. May 27 03:30:02.067367 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 03:30:02.267385 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 27 03:30:02.277998 (kubelet)[2302]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 27 03:30:02.320375 kubelet[2302]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 27 03:30:02.320375 kubelet[2302]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 27 03:30:02.320375 kubelet[2302]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
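Note: the freshly restarted kubelet warns that --container-runtime-endpoint and --volume-plugin-dir should move into the file passed via --config. A partial mapping of those flags to their KubeletConfiguration (v1beta1) field names — stated here from the upstream API types and worth double-checking against the linked docs; --pod-infra-container-image has no config counterpart and is simply slated for removal, with the sandbox image coming from CRI instead:

    # Deprecated kubelet flags from the warnings above -> v1beta1 config fields.
    FLAG_TO_CONFIG_FIELD = {
        "--container-runtime-endpoint": "containerRuntimeEndpoint",
        "--volume-plugin-dir": "volumePluginDir",
    }

    for flag, field in FLAG_TO_CONFIG_FIELD.items():
        print(f"{flag:32} -> {field}")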
May 27 03:30:02.320782 kubelet[2302]: I0527 03:30:02.320438 2302 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 27 03:30:02.637842 kubelet[2302]: I0527 03:30:02.637711 2302 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
May 27 03:30:02.638614 kubelet[2302]: I0527 03:30:02.637988 2302 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 27 03:30:02.638614 kubelet[2302]: I0527 03:30:02.638254 2302 server.go:956] "Client rotation is on, will bootstrap in background"
May 27 03:30:02.671554 kubelet[2302]: E0527 03:30:02.671497 2302 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.234.197.247:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.234.197.247:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
May 27 03:30:02.672370 kubelet[2302]: I0527 03:30:02.671858 2302 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 27 03:30:02.687753 kubelet[2302]: I0527 03:30:02.687710 2302 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
May 27 03:30:02.693623 kubelet[2302]: I0527 03:30:02.693592 2302 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 27 03:30:02.693895 kubelet[2302]: I0527 03:30:02.693851 2302 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 27 03:30:02.694069 kubelet[2302]: I0527 03:30:02.693881 2302 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-234-197-247","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
May 27 03:30:02.694069 kubelet[2302]: I0527 03:30:02.694058 2302 topology_manager.go:138] "Creating topology manager with none policy"
May 27 03:30:02.694069 kubelet[2302]: I0527 03:30:02.694067 2302 container_manager_linux.go:303] "Creating device plugin manager"
May 27 03:30:02.695068 kubelet[2302]: I0527 03:30:02.695030 2302 state_mem.go:36] "Initialized new in-memory state store"
May 27 03:30:02.698341 kubelet[2302]: I0527 03:30:02.698170 2302 kubelet.go:480] "Attempting to sync node with API server"
May 27 03:30:02.698341 kubelet[2302]: I0527 03:30:02.698195 2302 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
May 27 03:30:02.699918 kubelet[2302]: I0527 03:30:02.699813 2302 kubelet.go:386] "Adding apiserver pod source"
May 27 03:30:02.702368 kubelet[2302]: I0527 03:30:02.702083 2302 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 27 03:30:02.705631 kubelet[2302]: E0527 03:30:02.705500 2302 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.234.197.247:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-234-197-247&limit=500&resourceVersion=0\": dial tcp 172.234.197.247:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
May 27 03:30:02.706482 kubelet[2302]: E0527 03:30:02.706454 2302 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.234.197.247:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.234.197.247:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
May 27 03:30:02.706880 kubelet[2302]: I0527 03:30:02.706864 2302 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
May 27 03:30:02.707426 kubelet[2302]: I0527 03:30:02.707410 2302 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
May 27 03:30:02.709874 kubelet[2302]: W0527 03:30:02.709420 2302 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
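The nodeConfig above carries the kubelet's HardEvictionThresholds (memory.available < 100Mi, nodefs.available < 10%, and so on). A self-contained sketch of how such a signal/threshold pair can be evaluated, using generic arithmetic rather than the kubelet's actual eviction code:

    package main

    import "fmt"

    // threshold mirrors the shape of one HardEvictionThresholds entry from the
    // nodeConfig record above: either an absolute quantity or a percentage.
    type threshold struct {
        signal   string
        quantity int64   // bytes; zero when the threshold is percentage-based
        percent  float64 // fraction of capacity; zero when quantity-based
    }

    // exceeded reports whether the available amount has fallen below the threshold.
    func exceeded(t threshold, available, capacity int64) bool {
        limit := t.quantity
        if t.percent > 0 {
            limit = int64(t.percent * float64(capacity))
        }
        return available < limit
    }

    func main() {
        memAvailable := threshold{signal: "memory.available", quantity: 100 << 20} // "100Mi"
        nodefsAvailable := threshold{signal: "nodefs.available", percent: 0.1}     // "Percentage":0.1

        fmt.Println(exceeded(memAvailable, 64<<20, 4<<30))     // true: 64Mi available < 100Mi
        fmt.Println(exceeded(nodefsAvailable, 10<<30, 50<<30)) // false: 10Gi available > 5Gi
    }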
May 27 03:30:02.713486 kubelet[2302]: I0527 03:30:02.713457 2302 watchdog_linux.go:99] "Systemd watchdog is not enabled"
May 27 03:30:02.713537 kubelet[2302]: I0527 03:30:02.713513 2302 server.go:1289] "Started kubelet"
May 27 03:30:02.717327 kubelet[2302]: I0527 03:30:02.716282 2302 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
May 27 03:30:02.717327 kubelet[2302]: I0527 03:30:02.717187 2302 server.go:317] "Adding debug handlers to kubelet server"
May 27 03:30:02.718613 kubelet[2302]: I0527 03:30:02.718317 2302 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 27 03:30:02.718799 kubelet[2302]: I0527 03:30:02.718786 2302 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 27 03:30:02.720398 kubelet[2302]: E0527 03:30:02.718943 2302 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.234.197.247:6443/api/v1/namespaces/default/events\": dial tcp 172.234.197.247:6443: connect: connection refused" event="&Event{ObjectMeta:{172-234-197-247.184344afeba7a74c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172-234-197-247,UID:172-234-197-247,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172-234-197-247,},FirstTimestamp:2025-05-27 03:30:02.713483084 +0000 UTC m=+0.429724151,LastTimestamp:2025-05-27 03:30:02.713483084 +0000 UTC m=+0.429724151,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172-234-197-247,}"
May 27 03:30:02.722034 kubelet[2302]: I0527 03:30:02.722012 2302 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 27 03:30:02.722167 kubelet[2302]: I0527 03:30:02.722136 2302 volume_manager.go:297] "Starting Kubelet Volume Manager"
May 27 03:30:02.722263 kubelet[2302]: I0527 03:30:02.722237 2302 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
May 27 03:30:02.724612 kubelet[2302]: I0527 03:30:02.724594 2302 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
May 27 03:30:02.725614 kubelet[2302]: I0527 03:30:02.724736 2302 reconciler.go:26] "Reconciler: start to sync state"
May 27 03:30:02.725755 kubelet[2302]: E0527 03:30:02.725736 2302 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.234.197.247:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.234.197.247:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
May 27 03:30:02.726066 kubelet[2302]: E0527 03:30:02.726046 2302 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-234-197-247\" not found"
May 27 03:30:02.726464 kubelet[2302]: E0527 03:30:02.726423 2302 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.234.197.247:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-234-197-247?timeout=10s\": dial tcp 172.234.197.247:6443: connect: connection refused" interval="200ms"
May 27 03:30:02.728422 kubelet[2302]: E0527 03:30:02.728394 2302 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 27 03:30:02.728716 kubelet[2302]: I0527 03:30:02.728702 2302 factory.go:223] Registration of the containerd container factory successfully
May 27 03:30:02.728765 kubelet[2302]: I0527 03:30:02.728757 2302 factory.go:223] Registration of the systemd container factory successfully
May 27 03:30:02.728872 kubelet[2302]: I0527 03:30:02.728857 2302 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 27 03:30:02.755542 kubelet[2302]: I0527 03:30:02.755462 2302 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
May 27 03:30:02.756870 kubelet[2302]: I0527 03:30:02.756839 2302 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
May 27 03:30:02.756870 kubelet[2302]: I0527 03:30:02.756863 2302 status_manager.go:230] "Starting to sync pod status with apiserver"
May 27 03:30:02.756945 kubelet[2302]: I0527 03:30:02.756891 2302 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
May 27 03:30:02.756945 kubelet[2302]: I0527 03:30:02.756900 2302 kubelet.go:2436] "Starting kubelet main sync loop"
May 27 03:30:02.756996 kubelet[2302]: E0527 03:30:02.756948 2302 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 27 03:30:02.763016 kubelet[2302]: I0527 03:30:02.762993 2302 cpu_manager.go:221] "Starting CPU manager" policy="none"
May 27 03:30:02.763106 kubelet[2302]: I0527 03:30:02.763091 2302 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
May 27 03:30:02.763189 kubelet[2302]: I0527 03:30:02.763175 2302 state_mem.go:36] "Initialized new in-memory state store"
May 27 03:30:02.765682 kubelet[2302]: I0527 03:30:02.765666 2302 policy_none.go:49] "None policy: Start"
May 27 03:30:02.765757 kubelet[2302]: I0527 03:30:02.765746 2302 memory_manager.go:186] "Starting memorymanager" policy="None"
May 27 03:30:02.765835 kubelet[2302]: I0527 03:30:02.765822 2302 state_mem.go:35] "Initializing new in-memory state store"
May 27 03:30:02.766214 kubelet[2302]: E0527 03:30:02.765788 2302 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.234.197.247:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.234.197.247:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
May 27 03:30:02.772913 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
May 27 03:30:02.785897 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
May 27 03:30:02.790064 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
May 27 03:30:02.805931 kubelet[2302]: E0527 03:30:02.804760 2302 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
May 27 03:30:02.805931 kubelet[2302]: I0527 03:30:02.805721 2302 eviction_manager.go:189] "Eviction manager: starting control loop"
May 27 03:30:02.805931 kubelet[2302]: I0527 03:30:02.805731 2302 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 27 03:30:02.806394 kubelet[2302]: I0527 03:30:02.806363 2302 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 27 03:30:02.808643 kubelet[2302]: E0527 03:30:02.808612 2302 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
May 27 03:30:02.808690 kubelet[2302]: E0527 03:30:02.808655 2302 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172-234-197-247\" not found"
May 27 03:30:02.872878 systemd[1]: Created slice kubepods-burstable-pod0aac9761083350af99b88c7cf7c1f58f.slice - libcontainer container kubepods-burstable-pod0aac9761083350af99b88c7cf7c1f58f.slice.
May 27 03:30:02.890411 kubelet[2302]: E0527 03:30:02.888919 2302 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-234-197-247\" not found" node="172-234-197-247"
May 27 03:30:02.893605 systemd[1]: Created slice kubepods-burstable-pod7b100ea37234713535f0096491f92fdb.slice - libcontainer container kubepods-burstable-pod7b100ea37234713535f0096491f92fdb.slice.
May 27 03:30:02.905903 kubelet[2302]: E0527 03:30:02.905860 2302 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-234-197-247\" not found" node="172-234-197-247"
May 27 03:30:02.908134 kubelet[2302]: I0527 03:30:02.908097 2302 kubelet_node_status.go:75] "Attempting to register node" node="172-234-197-247"
May 27 03:30:02.908887 kubelet[2302]: E0527 03:30:02.908851 2302 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.234.197.247:6443/api/v1/nodes\": dial tcp 172.234.197.247:6443: connect: connection refused" node="172-234-197-247"
May 27 03:30:02.909991 systemd[1]: Created slice kubepods-burstable-pod6534a8c8bd8629702bc4fa76ad16b2f9.slice - libcontainer container kubepods-burstable-pod6534a8c8bd8629702bc4fa76ad16b2f9.slice.
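The slice names above and below follow a visible pattern: kubepods-<qos>-pod<uid>.slice, with dashes in the pod UID escaped to underscores (compare pod0aac9761083350af99b88c7cf7c1f58f here with pod23eb420a_a614_4711_97aa_1fb6a5ca2dc4 later in this log). A hypothetical helper that reproduces the mapping as observed, inferred from these names rather than taken from kubelet source:

    package main

    import (
        "fmt"
        "strings"
    )

    // podSlice reproduces the naming pattern visible in the "Created slice"
    // records: kubepods-<qos>-pod<uid>.slice, with "-" in the UID escaped to "_".
    func podSlice(qos, uid string) string {
        return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
    }

    func main() {
        fmt.Println(podSlice("burstable", "0aac9761083350af99b88c7cf7c1f58f"))
        // Matches the kube-proxy pod's slice created later in this log.
        fmt.Println(podSlice("besteffort", "23eb420a-a614-4711-97aa-1fb6a5ca2dc4"))
    }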
May 27 03:30:02.912553 kubelet[2302]: E0527 03:30:02.912518 2302 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-234-197-247\" not found" node="172-234-197-247"
May 27 03:30:02.927415 kubelet[2302]: E0527 03:30:02.927367 2302 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.234.197.247:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-234-197-247?timeout=10s\": dial tcp 172.234.197.247:6443: connect: connection refused" interval="400ms"
May 27 03:30:03.026161 kubelet[2302]: I0527 03:30:03.026088 2302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0aac9761083350af99b88c7cf7c1f58f-k8s-certs\") pod \"kube-apiserver-172-234-197-247\" (UID: \"0aac9761083350af99b88c7cf7c1f58f\") " pod="kube-system/kube-apiserver-172-234-197-247"
May 27 03:30:03.026375 kubelet[2302]: I0527 03:30:03.026160 2302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7b100ea37234713535f0096491f92fdb-ca-certs\") pod \"kube-controller-manager-172-234-197-247\" (UID: \"7b100ea37234713535f0096491f92fdb\") " pod="kube-system/kube-controller-manager-172-234-197-247"
May 27 03:30:03.026375 kubelet[2302]: I0527 03:30:03.026217 2302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6534a8c8bd8629702bc4fa76ad16b2f9-kubeconfig\") pod \"kube-scheduler-172-234-197-247\" (UID: \"6534a8c8bd8629702bc4fa76ad16b2f9\") " pod="kube-system/kube-scheduler-172-234-197-247"
May 27 03:30:03.026375 kubelet[2302]: I0527 03:30:03.026252 2302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0aac9761083350af99b88c7cf7c1f58f-usr-share-ca-certificates\") pod \"kube-apiserver-172-234-197-247\" (UID: \"0aac9761083350af99b88c7cf7c1f58f\") " pod="kube-system/kube-apiserver-172-234-197-247"
May 27 03:30:03.026375 kubelet[2302]: I0527 03:30:03.026285 2302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7b100ea37234713535f0096491f92fdb-flexvolume-dir\") pod \"kube-controller-manager-172-234-197-247\" (UID: \"7b100ea37234713535f0096491f92fdb\") " pod="kube-system/kube-controller-manager-172-234-197-247"
May 27 03:30:03.026375 kubelet[2302]: I0527 03:30:03.026308 2302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7b100ea37234713535f0096491f92fdb-k8s-certs\") pod \"kube-controller-manager-172-234-197-247\" (UID: \"7b100ea37234713535f0096491f92fdb\") " pod="kube-system/kube-controller-manager-172-234-197-247"
May 27 03:30:03.026508 kubelet[2302]: I0527 03:30:03.026333 2302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7b100ea37234713535f0096491f92fdb-kubeconfig\") pod \"kube-controller-manager-172-234-197-247\" (UID: \"7b100ea37234713535f0096491f92fdb\") " pod="kube-system/kube-controller-manager-172-234-197-247"
May 27 03:30:03.026508 kubelet[2302]: I0527 03:30:03.026360 2302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7b100ea37234713535f0096491f92fdb-usr-share-ca-certificates\") pod \"kube-controller-manager-172-234-197-247\" (UID: \"7b100ea37234713535f0096491f92fdb\") " pod="kube-system/kube-controller-manager-172-234-197-247"
May 27 03:30:03.026508 kubelet[2302]: I0527 03:30:03.026384 2302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0aac9761083350af99b88c7cf7c1f58f-ca-certs\") pod \"kube-apiserver-172-234-197-247\" (UID: \"0aac9761083350af99b88c7cf7c1f58f\") " pod="kube-system/kube-apiserver-172-234-197-247"
May 27 03:30:03.111139 kubelet[2302]: I0527 03:30:03.111091 2302 kubelet_node_status.go:75] "Attempting to register node" node="172-234-197-247"
May 27 03:30:03.111471 kubelet[2302]: E0527 03:30:03.111444 2302 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.234.197.247:6443/api/v1/nodes\": dial tcp 172.234.197.247:6443: connect: connection refused" node="172-234-197-247"
May 27 03:30:03.190506 kubelet[2302]: E0527 03:30:03.190405 2302 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
May 27 03:30:03.191121 containerd[1552]: time="2025-05-27T03:30:03.191061624Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-234-197-247,Uid:0aac9761083350af99b88c7cf7c1f58f,Namespace:kube-system,Attempt:0,}"
May 27 03:30:03.207804 kubelet[2302]: E0527 03:30:03.207774 2302 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
May 27 03:30:03.208511 containerd[1552]: time="2025-05-27T03:30:03.208469484Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-234-197-247,Uid:7b100ea37234713535f0096491f92fdb,Namespace:kube-system,Attempt:0,}"
May 27 03:30:03.213686 kubelet[2302]: E0527 03:30:03.213546 2302 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
May 27 03:30:03.215533 containerd[1552]: time="2025-05-27T03:30:03.215356144Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-234-197-247,Uid:6534a8c8bd8629702bc4fa76ad16b2f9,Namespace:kube-system,Attempt:0,}"
May 27 03:30:03.217885 containerd[1552]: time="2025-05-27T03:30:03.217849774Z" level=info msg="connecting to shim 55328234b7622166c4a2769dc8b92cfabc1363e6a109f3e2fb3cde1d121296f0" address="unix:///run/containerd/s/d79520f659fd31d947bb0ec135d89ee8209e800abdf473ab2524e2d547d269f9" namespace=k8s.io protocol=ttrpc version=3
May 27 03:30:03.253411 containerd[1552]: time="2025-05-27T03:30:03.253343034Z" level=info msg="connecting to shim ec028910a37ea6b9de6377321bc7ec78854d3547e9eefdb3ef200dbecb9f883f" address="unix:///run/containerd/s/01cb4231f283b0643efb231ffb3c92fe4e7aeeb3207ff1c6dbb254e5385af404" namespace=k8s.io protocol=ttrpc version=3
May 27 03:30:03.260862 systemd[1]: Started cri-containerd-55328234b7622166c4a2769dc8b92cfabc1363e6a109f3e2fb3cde1d121296f0.scope - libcontainer container 55328234b7622166c4a2769dc8b92cfabc1363e6a109f3e2fb3cde1d121296f0.
May 27 03:30:03.270989 containerd[1552]: time="2025-05-27T03:30:03.270959534Z" level=info msg="connecting to shim bab721f734841af8a7ff824e054d13bbe69d96a3fa7d68d3e741a68fb43f9623" address="unix:///run/containerd/s/bf504cc5bc6afdc8c93dafba250a6265e296fd257a53e5eda3f18c1e6749dbf7" namespace=k8s.io protocol=ttrpc version=3
May 27 03:30:03.302710 systemd[1]: Started cri-containerd-ec028910a37ea6b9de6377321bc7ec78854d3547e9eefdb3ef200dbecb9f883f.scope - libcontainer container ec028910a37ea6b9de6377321bc7ec78854d3547e9eefdb3ef200dbecb9f883f.
May 27 03:30:03.315938 systemd[1]: Started cri-containerd-bab721f734841af8a7ff824e054d13bbe69d96a3fa7d68d3e741a68fb43f9623.scope - libcontainer container bab721f734841af8a7ff824e054d13bbe69d96a3fa7d68d3e741a68fb43f9623.
May 27 03:30:03.329590 kubelet[2302]: E0527 03:30:03.328402 2302 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.234.197.247:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-234-197-247?timeout=10s\": dial tcp 172.234.197.247:6443: connect: connection refused" interval="800ms"
May 27 03:30:03.398203 containerd[1552]: time="2025-05-27T03:30:03.394103024Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-234-197-247,Uid:7b100ea37234713535f0096491f92fdb,Namespace:kube-system,Attempt:0,} returns sandbox id \"ec028910a37ea6b9de6377321bc7ec78854d3547e9eefdb3ef200dbecb9f883f\""
May 27 03:30:03.398357 kubelet[2302]: E0527 03:30:03.397905 2302 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
May 27 03:30:03.405274 containerd[1552]: time="2025-05-27T03:30:03.405134994Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-234-197-247,Uid:0aac9761083350af99b88c7cf7c1f58f,Namespace:kube-system,Attempt:0,} returns sandbox id \"55328234b7622166c4a2769dc8b92cfabc1363e6a109f3e2fb3cde1d121296f0\""
May 27 03:30:03.407016 kubelet[2302]: E0527 03:30:03.406637 2302 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
May 27 03:30:03.411209 containerd[1552]: time="2025-05-27T03:30:03.411186034Z" level=info msg="CreateContainer within sandbox \"ec028910a37ea6b9de6377321bc7ec78854d3547e9eefdb3ef200dbecb9f883f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
May 27 03:30:03.414160 containerd[1552]: time="2025-05-27T03:30:03.414140244Z" level=info msg="CreateContainer within sandbox \"55328234b7622166c4a2769dc8b92cfabc1363e6a109f3e2fb3cde1d121296f0\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
May 27 03:30:03.426201 containerd[1552]: time="2025-05-27T03:30:03.426158164Z" level=info msg="Container a67227ac520f6bb7d2a1177a71fd3e00494d77b5373852a7de7633fce311f277: CDI devices from CRI Config.CDIDevices: []"
May 27 03:30:03.430959 containerd[1552]: time="2025-05-27T03:30:03.430920804Z" level=info msg="Container 36f4177595b59d70fb46b42b0c6df6a55fe9bf5b51a6ca221870a3f66509a58d: CDI devices from CRI Config.CDIDevices: []"
May 27 03:30:03.433094 containerd[1552]: time="2025-05-27T03:30:03.433057754Z" level=info msg="CreateContainer within sandbox \"ec028910a37ea6b9de6377321bc7ec78854d3547e9eefdb3ef200dbecb9f883f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"a67227ac520f6bb7d2a1177a71fd3e00494d77b5373852a7de7633fce311f277\""
May 27 03:30:03.434617 containerd[1552]: time="2025-05-27T03:30:03.434533654Z" level=info msg="StartContainer for \"a67227ac520f6bb7d2a1177a71fd3e00494d77b5373852a7de7633fce311f277\""
May 27 03:30:03.435891 containerd[1552]: time="2025-05-27T03:30:03.435852394Z" level=info msg="connecting to shim a67227ac520f6bb7d2a1177a71fd3e00494d77b5373852a7de7633fce311f277" address="unix:///run/containerd/s/01cb4231f283b0643efb231ffb3c92fe4e7aeeb3207ff1c6dbb254e5385af404" protocol=ttrpc version=3
May 27 03:30:03.437953 containerd[1552]: time="2025-05-27T03:30:03.437920154Z" level=info msg="CreateContainer within sandbox \"55328234b7622166c4a2769dc8b92cfabc1363e6a109f3e2fb3cde1d121296f0\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"36f4177595b59d70fb46b42b0c6df6a55fe9bf5b51a6ca221870a3f66509a58d\""
May 27 03:30:03.438293 containerd[1552]: time="2025-05-27T03:30:03.438263984Z" level=info msg="StartContainer for \"36f4177595b59d70fb46b42b0c6df6a55fe9bf5b51a6ca221870a3f66509a58d\""
May 27 03:30:03.439385 containerd[1552]: time="2025-05-27T03:30:03.439346464Z" level=info msg="connecting to shim 36f4177595b59d70fb46b42b0c6df6a55fe9bf5b51a6ca221870a3f66509a58d" address="unix:///run/containerd/s/d79520f659fd31d947bb0ec135d89ee8209e800abdf473ab2524e2d547d269f9" protocol=ttrpc version=3
May 27 03:30:03.445014 containerd[1552]: time="2025-05-27T03:30:03.444904244Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-234-197-247,Uid:6534a8c8bd8629702bc4fa76ad16b2f9,Namespace:kube-system,Attempt:0,} returns sandbox id \"bab721f734841af8a7ff824e054d13bbe69d96a3fa7d68d3e741a68fb43f9623\""
May 27 03:30:03.445883 kubelet[2302]: E0527 03:30:03.445861 2302 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
May 27 03:30:03.450221 containerd[1552]: time="2025-05-27T03:30:03.450184634Z" level=info msg="CreateContainer within sandbox \"bab721f734841af8a7ff824e054d13bbe69d96a3fa7d68d3e741a68fb43f9623\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
May 27 03:30:03.470846 containerd[1552]: time="2025-05-27T03:30:03.468469824Z" level=info msg="Container 0e58a870caa7a548187c63d0e8c96d5441428303e359eb7e9e6423377bec4726: CDI devices from CRI Config.CDIDevices: []"
May 27 03:30:03.477508 containerd[1552]: time="2025-05-27T03:30:03.477463954Z" level=info msg="CreateContainer within sandbox \"bab721f734841af8a7ff824e054d13bbe69d96a3fa7d68d3e741a68fb43f9623\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"0e58a870caa7a548187c63d0e8c96d5441428303e359eb7e9e6423377bec4726\""
May 27 03:30:03.478167 containerd[1552]: time="2025-05-27T03:30:03.478141544Z" level=info msg="StartContainer for \"0e58a870caa7a548187c63d0e8c96d5441428303e359eb7e9e6423377bec4726\""
May 27 03:30:03.479161 containerd[1552]: time="2025-05-27T03:30:03.479004644Z" level=info msg="connecting to shim 0e58a870caa7a548187c63d0e8c96d5441428303e359eb7e9e6423377bec4726" address="unix:///run/containerd/s/bf504cc5bc6afdc8c93dafba250a6265e296fd257a53e5eda3f18c1e6749dbf7" protocol=ttrpc version=3
May 27 03:30:03.479731 systemd[1]: Started cri-containerd-36f4177595b59d70fb46b42b0c6df6a55fe9bf5b51a6ca221870a3f66509a58d.scope - libcontainer container 36f4177595b59d70fb46b42b0c6df6a55fe9bf5b51a6ca221870a3f66509a58d.
May 27 03:30:03.481039 systemd[1]: Started cri-containerd-a67227ac520f6bb7d2a1177a71fd3e00494d77b5373852a7de7633fce311f277.scope - libcontainer container a67227ac520f6bb7d2a1177a71fd3e00494d77b5373852a7de7633fce311f277.
May 27 03:30:03.508844 systemd[1]: Started cri-containerd-0e58a870caa7a548187c63d0e8c96d5441428303e359eb7e9e6423377bec4726.scope - libcontainer container 0e58a870caa7a548187c63d0e8c96d5441428303e359eb7e9e6423377bec4726.
May 27 03:30:03.517599 kubelet[2302]: I0527 03:30:03.516935 2302 kubelet_node_status.go:75] "Attempting to register node" node="172-234-197-247"
May 27 03:30:03.518609 kubelet[2302]: E0527 03:30:03.518550 2302 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.234.197.247:6443/api/v1/nodes\": dial tcp 172.234.197.247:6443: connect: connection refused" node="172-234-197-247"
May 27 03:30:03.599194 containerd[1552]: time="2025-05-27T03:30:03.598940124Z" level=info msg="StartContainer for \"a67227ac520f6bb7d2a1177a71fd3e00494d77b5373852a7de7633fce311f277\" returns successfully"
May 27 03:30:03.618633 containerd[1552]: time="2025-05-27T03:30:03.618524504Z" level=info msg="StartContainer for \"36f4177595b59d70fb46b42b0c6df6a55fe9bf5b51a6ca221870a3f66509a58d\" returns successfully"
May 27 03:30:03.621579 containerd[1552]: time="2025-05-27T03:30:03.621489694Z" level=info msg="StartContainer for \"0e58a870caa7a548187c63d0e8c96d5441428303e359eb7e9e6423377bec4726\" returns successfully"
May 27 03:30:03.784333 kubelet[2302]: E0527 03:30:03.784285 2302 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-234-197-247\" not found" node="172-234-197-247"
May 27 03:30:03.784468 kubelet[2302]: E0527 03:30:03.784446 2302 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
May 27 03:30:03.786967 kubelet[2302]: E0527 03:30:03.786934 2302 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-234-197-247\" not found" node="172-234-197-247"
May 27 03:30:03.787087 kubelet[2302]: E0527 03:30:03.787060 2302 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
May 27 03:30:03.793916 kubelet[2302]: E0527 03:30:03.793888 2302 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-234-197-247\" not found" node="172-234-197-247"
May 27 03:30:03.794200 kubelet[2302]: E0527 03:30:03.794171 2302 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
May 27 03:30:04.321866 kubelet[2302]: I0527 03:30:04.321830 2302 kubelet_node_status.go:75] "Attempting to register node" node="172-234-197-247"
May 27 03:30:04.804703 kubelet[2302]: E0527 03:30:04.804305 2302 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-234-197-247\" not found" node="172-234-197-247"
May 27 03:30:04.808235 kubelet[2302]: E0527 03:30:04.806152 2302 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-234-197-247\" not found" node="172-234-197-247"
May 27 03:30:04.808235 kubelet[2302]: E0527 03:30:04.806238 2302 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
May 27 03:30:04.808235 kubelet[2302]: E0527 03:30:04.806669 2302 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
May 27 03:30:05.415890 kubelet[2302]: E0527 03:30:05.415840 2302 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172-234-197-247\" not found" node="172-234-197-247"
May 27 03:30:05.429653 kubelet[2302]: I0527 03:30:05.429609 2302 kubelet_node_status.go:78] "Successfully registered node" node="172-234-197-247"
May 27 03:30:05.528820 kubelet[2302]: I0527 03:30:05.528786 2302 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-234-197-247"
May 27 03:30:05.540396 kubelet[2302]: E0527 03:30:05.540293 2302 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-234-197-247\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-172-234-197-247"
May 27 03:30:05.540396 kubelet[2302]: I0527 03:30:05.540331 2302 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-234-197-247"
May 27 03:30:05.542281 kubelet[2302]: E0527 03:30:05.541598 2302 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-172-234-197-247\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-172-234-197-247"
May 27 03:30:05.542281 kubelet[2302]: I0527 03:30:05.541614 2302 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-234-197-247"
May 27 03:30:05.545846 kubelet[2302]: E0527 03:30:05.545803 2302 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-234-197-247\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-172-234-197-247"
May 27 03:30:05.708584 kubelet[2302]: I0527 03:30:05.707242 2302 apiserver.go:52] "Watching apiserver"
May 27 03:30:05.725184 kubelet[2302]: I0527 03:30:05.725151 2302 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
May 27 03:30:07.243092 systemd[1]: Reload requested from client PID 2576 ('systemctl') (unit session-7.scope)...
May 27 03:30:07.243115 systemd[1]: Reloading...
May 27 03:30:07.376649 zram_generator::config[2626]: No configuration found.
May 27 03:30:07.480514 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 27 03:30:07.602569 systemd[1]: Reloading finished in 359 ms.
May 27 03:30:07.631320 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
May 27 03:30:07.653199 systemd[1]: kubelet.service: Deactivated successfully.
May 27 03:30:07.653749 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 27 03:30:07.653801 systemd[1]: kubelet.service: Consumed 907ms CPU time, 130M memory peak.
May 27 03:30:07.658265 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
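The "Failed to ensure lease exists, will retry" records above double their retry interval on each failure: 200ms, then 400ms, then 800ms. A minimal sketch of that doubling backoff (a generic illustration with an assumed cap, not the kubelet's controller code):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Matches the retry intervals observed above: 200ms -> 400ms -> 800ms.
        // The 7s cap is an assumption for the sketch, not a value from this log.
        const maxInterval = 7 * time.Second
        interval := 200 * time.Millisecond
        for attempt := 1; attempt <= 3; attempt++ {
            fmt.Printf("attempt %d failed, retrying in %v\n", attempt, interval)
            interval *= 2
            if interval > maxInterval {
                interval = maxInterval
            }
        }
    }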
May 27 03:30:07.849039 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 27 03:30:07.859847 (kubelet)[2671]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 27 03:30:07.913591 kubelet[2671]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 27 03:30:07.913591 kubelet[2671]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
May 27 03:30:07.913591 kubelet[2671]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 27 03:30:07.914096 kubelet[2671]: I0527 03:30:07.913618 2671 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 27 03:30:07.922152 kubelet[2671]: I0527 03:30:07.922062 2671 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
May 27 03:30:07.922152 kubelet[2671]: I0527 03:30:07.922093 2671 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 27 03:30:07.922668 kubelet[2671]: I0527 03:30:07.922632 2671 server.go:956] "Client rotation is on, will bootstrap in background"
May 27 03:30:07.927504 kubelet[2671]: I0527 03:30:07.927040 2671 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
May 27 03:30:07.930401 kubelet[2671]: I0527 03:30:07.930176 2671 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 27 03:30:07.934687 kubelet[2671]: I0527 03:30:07.934671 2671 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
May 27 03:30:07.938912 kubelet[2671]: I0527 03:30:07.938840 2671 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 27 03:30:07.939306 kubelet[2671]: I0527 03:30:07.939268 2671 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 27 03:30:07.939464 kubelet[2671]: I0527 03:30:07.939300 2671 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-234-197-247","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
May 27 03:30:07.939464 kubelet[2671]: I0527 03:30:07.939457 2671 topology_manager.go:138] "Creating topology manager with none policy"
May 27 03:30:07.939464 kubelet[2671]: I0527 03:30:07.939466 2671 container_manager_linux.go:303] "Creating device plugin manager"
May 27 03:30:07.939659 kubelet[2671]: I0527 03:30:07.939515 2671 state_mem.go:36] "Initialized new in-memory state store"
May 27 03:30:07.939718 kubelet[2671]: I0527 03:30:07.939699 2671 kubelet.go:480] "Attempting to sync node with API server"
May 27 03:30:07.939747 kubelet[2671]: I0527 03:30:07.939721 2671 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
May 27 03:30:07.940420 kubelet[2671]: I0527 03:30:07.940187 2671 kubelet.go:386] "Adding apiserver pod source"
May 27 03:30:07.940462 kubelet[2671]: I0527 03:30:07.940433 2671 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 27 03:30:07.948583 kubelet[2671]: I0527 03:30:07.947927 2671 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
May 27 03:30:07.949439 kubelet[2671]: I0527 03:30:07.949286 2671 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
May 27 03:30:07.955601 kubelet[2671]: I0527 03:30:07.954877 2671 watchdog_linux.go:99] "Systemd watchdog is not enabled"
May 27 03:30:07.955601 kubelet[2671]: I0527 03:30:07.954917 2671 server.go:1289] "Started kubelet"
May 27 03:30:07.957044 kubelet[2671]: I0527 03:30:07.956990 2671 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
May 27 03:30:07.958012 kubelet[2671]: I0527 03:30:07.957982 2671 server.go:317] "Adding debug handlers to kubelet server"
May 27 03:30:07.959684 kubelet[2671]: I0527 03:30:07.959663 2671 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 27 03:30:07.966458 kubelet[2671]: I0527 03:30:07.966375 2671 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 27 03:30:07.966547 kubelet[2671]: I0527 03:30:07.966514 2671 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
May 27 03:30:07.967408 kubelet[2671]: I0527 03:30:07.967388 2671 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 27 03:30:07.969133 kubelet[2671]: I0527 03:30:07.968528 2671 volume_manager.go:297] "Starting Kubelet Volume Manager"
May 27 03:30:07.970519 kubelet[2671]: I0527 03:30:07.970479 2671 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
May 27 03:30:07.970635 kubelet[2671]: I0527 03:30:07.970613 2671 reconciler.go:26] "Reconciler: start to sync state"
May 27 03:30:07.976966 kubelet[2671]: I0527 03:30:07.976545 2671 factory.go:223] Registration of the systemd container factory successfully
May 27 03:30:07.976966 kubelet[2671]: I0527 03:30:07.976667 2671 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 27 03:30:07.981071 kubelet[2671]: I0527 03:30:07.981038 2671 factory.go:223] Registration of the containerd container factory successfully
May 27 03:30:07.989167 kubelet[2671]: I0527 03:30:07.989129 2671 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
May 27 03:30:07.991128 kubelet[2671]: I0527 03:30:07.991112 2671 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
May 27 03:30:07.991186 kubelet[2671]: I0527 03:30:07.991178 2671 status_manager.go:230] "Starting to sync pod status with apiserver"
May 27 03:30:07.991298 kubelet[2671]: I0527 03:30:07.991286 2671 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
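The restarted kubelet above again serves the podresources API on unix:/var/lib/kubelet/pod-resources/kubelet.sock. A minimal connectivity probe for that endpoint (plain socket dial only; the gRPC PodResources service layered on top is not shown):

    package main

    import (
        "fmt"
        "log"
        "net"
        "time"
    )

    func main() {
        // Socket path taken from the "Starting to serve the podresources API"
        // record above; dialing it normally requires root on the node.
        conn, err := net.DialTimeout("unix", "/var/lib/kubelet/pod-resources/kubelet.sock", 2*time.Second)
        if err != nil {
            log.Fatalf("podresources socket not reachable: %v", err)
        }
        defer conn.Close()
        fmt.Println("connected to", conn.RemoteAddr())
    }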
May 27 03:30:07.991375 kubelet[2671]: I0527 03:30:07.991367 2671 kubelet.go:2436] "Starting kubelet main sync loop"
May 27 03:30:07.991511 kubelet[2671]: E0527 03:30:07.991474 2671 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 27 03:30:08.054623 kubelet[2671]: I0527 03:30:08.054354 2671 cpu_manager.go:221] "Starting CPU manager" policy="none"
May 27 03:30:08.054623 kubelet[2671]: I0527 03:30:08.054376 2671 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
May 27 03:30:08.054623 kubelet[2671]: I0527 03:30:08.054397 2671 state_mem.go:36] "Initialized new in-memory state store"
May 27 03:30:08.054623 kubelet[2671]: I0527 03:30:08.054539 2671 state_mem.go:88] "Updated default CPUSet" cpuSet=""
May 27 03:30:08.055026 kubelet[2671]: I0527 03:30:08.054548 2671 state_mem.go:96] "Updated CPUSet assignments" assignments={}
May 27 03:30:08.055092 kubelet[2671]: I0527 03:30:08.055084 2671 policy_none.go:49] "None policy: Start"
May 27 03:30:08.055151 kubelet[2671]: I0527 03:30:08.055142 2671 memory_manager.go:186] "Starting memorymanager" policy="None"
May 27 03:30:08.055207 kubelet[2671]: I0527 03:30:08.055198 2671 state_mem.go:35] "Initializing new in-memory state store"
May 27 03:30:08.055378 kubelet[2671]: I0527 03:30:08.055365 2671 state_mem.go:75] "Updated machine memory state"
May 27 03:30:08.061337 kubelet[2671]: E0527 03:30:08.061316 2671 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
May 27 03:30:08.062586 kubelet[2671]: I0527 03:30:08.061994 2671 eviction_manager.go:189] "Eviction manager: starting control loop"
May 27 03:30:08.062586 kubelet[2671]: I0527 03:30:08.062014 2671 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 27 03:30:08.062586 kubelet[2671]: I0527 03:30:08.062327 2671 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 27 03:30:08.064271 kubelet[2671]: E0527 03:30:08.064172 2671 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
May 27 03:30:08.092448 kubelet[2671]: I0527 03:30:08.092407 2671 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-234-197-247"
May 27 03:30:08.095220 kubelet[2671]: I0527 03:30:08.094989 2671 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-234-197-247"
May 27 03:30:08.095510 kubelet[2671]: I0527 03:30:08.095481 2671 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-234-197-247"
May 27 03:30:08.173155 kubelet[2671]: I0527 03:30:08.172004 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7b100ea37234713535f0096491f92fdb-kubeconfig\") pod \"kube-controller-manager-172-234-197-247\" (UID: \"7b100ea37234713535f0096491f92fdb\") " pod="kube-system/kube-controller-manager-172-234-197-247"
May 27 03:30:08.174611 kubelet[2671]: I0527 03:30:08.174163 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7b100ea37234713535f0096491f92fdb-usr-share-ca-certificates\") pod \"kube-controller-manager-172-234-197-247\" (UID: \"7b100ea37234713535f0096491f92fdb\") " pod="kube-system/kube-controller-manager-172-234-197-247"
May 27 03:30:08.174611 kubelet[2671]: I0527 03:30:08.174408 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0aac9761083350af99b88c7cf7c1f58f-k8s-certs\") pod \"kube-apiserver-172-234-197-247\" (UID: \"0aac9761083350af99b88c7cf7c1f58f\") " pod="kube-system/kube-apiserver-172-234-197-247"
May 27 03:30:08.174611 kubelet[2671]: I0527 03:30:08.174426 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0aac9761083350af99b88c7cf7c1f58f-usr-share-ca-certificates\") pod \"kube-apiserver-172-234-197-247\" (UID: \"0aac9761083350af99b88c7cf7c1f58f\") " pod="kube-system/kube-apiserver-172-234-197-247"
May 27 03:30:08.174611 kubelet[2671]: I0527 03:30:08.174444 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7b100ea37234713535f0096491f92fdb-flexvolume-dir\") pod \"kube-controller-manager-172-234-197-247\" (UID: \"7b100ea37234713535f0096491f92fdb\") " pod="kube-system/kube-controller-manager-172-234-197-247"
May 27 03:30:08.174611 kubelet[2671]: I0527 03:30:08.174459 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7b100ea37234713535f0096491f92fdb-k8s-certs\") pod \"kube-controller-manager-172-234-197-247\" (UID: \"7b100ea37234713535f0096491f92fdb\") " pod="kube-system/kube-controller-manager-172-234-197-247"
May 27 03:30:08.174611 kubelet[2671]: I0527 03:30:08.174476 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6534a8c8bd8629702bc4fa76ad16b2f9-kubeconfig\") pod \"kube-scheduler-172-234-197-247\" (UID: \"6534a8c8bd8629702bc4fa76ad16b2f9\") " pod="kube-system/kube-scheduler-172-234-197-247"
May 27 03:30:08.174767 kubelet[2671]: I0527 03:30:08.174495 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0aac9761083350af99b88c7cf7c1f58f-ca-certs\") pod \"kube-apiserver-172-234-197-247\" (UID: \"0aac9761083350af99b88c7cf7c1f58f\") " pod="kube-system/kube-apiserver-172-234-197-247"
May 27 03:30:08.174767 kubelet[2671]: I0527 03:30:08.174520 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7b100ea37234713535f0096491f92fdb-ca-certs\") pod \"kube-controller-manager-172-234-197-247\" (UID: \"7b100ea37234713535f0096491f92fdb\") " pod="kube-system/kube-controller-manager-172-234-197-247"
May 27 03:30:08.175475 kubelet[2671]: I0527 03:30:08.175315 2671 kubelet_node_status.go:75] "Attempting to register node" node="172-234-197-247"
May 27 03:30:08.183270 kubelet[2671]: I0527 03:30:08.183199 2671 kubelet_node_status.go:124] "Node was previously registered" node="172-234-197-247"
May 27 03:30:08.183270 kubelet[2671]: I0527 03:30:08.183282 2671 kubelet_node_status.go:78] "Successfully registered node" node="172-234-197-247"
May 27 03:30:08.242689 sudo[2710]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
May 27 03:30:08.243035 sudo[2710]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
May 27 03:30:08.400594 kubelet[2671]: E0527 03:30:08.400461 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
May 27 03:30:08.401233 kubelet[2671]: E0527 03:30:08.401131 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
May 27 03:30:08.401888 kubelet[2671]: E0527 03:30:08.401664 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
May 27 03:30:08.801109 sudo[2710]: pam_unix(sudo:session): session closed for user root
May 27 03:30:08.941504 kubelet[2671]: I0527 03:30:08.940993 2671 apiserver.go:52] "Watching apiserver"
May 27 03:30:08.971068 kubelet[2671]: I0527 03:30:08.971019 2671 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
May 27 03:30:09.023432 kubelet[2671]: E0527 03:30:09.023378 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
May 27 03:30:09.024993 kubelet[2671]: E0527 03:30:09.023968 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
May 27 03:30:09.024993 kubelet[2671]: E0527 03:30:09.024376 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
May 27 03:30:09.065830 kubelet[2671]: I0527 03:30:09.065710 2671 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-172-234-197-247" podStartSLOduration=1.065697574 podStartE2EDuration="1.065697574s" podCreationTimestamp="2025-05-27 03:30:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 03:30:09.053722494 +0000 UTC m=+1.186691351" watchObservedRunningTime="2025-05-27 03:30:09.065697574 +0000 UTC m=+1.198666431"
May 27 03:30:09.066068 kubelet[2671]: I0527 03:30:09.066024 2671 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-172-234-197-247" podStartSLOduration=1.066018324 podStartE2EDuration="1.066018324s" podCreationTimestamp="2025-05-27 03:30:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 03:30:09.062108084 +0000 UTC m=+1.195076941" watchObservedRunningTime="2025-05-27 03:30:09.066018324 +0000 UTC m=+1.198987201"
May 27 03:30:09.072463 kubelet[2671]: I0527 03:30:09.072173 2671 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-172-234-197-247" podStartSLOduration=1.072162824 podStartE2EDuration="1.072162824s" podCreationTimestamp="2025-05-27 03:30:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 03:30:09.072036684 +0000 UTC m=+1.205005541" watchObservedRunningTime="2025-05-27 03:30:09.072162824 +0000 UTC m=+1.205131681"
May 27 03:30:09.287007 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
May 27 03:30:10.025825 kubelet[2671]: E0527 03:30:10.025406 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
May 27 03:30:10.026502 kubelet[2671]: E0527 03:30:10.026038 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
May 27 03:30:10.361231 sudo[1801]: pam_unix(sudo:session): session closed for user root
May 27 03:30:10.411600 sshd[1800]: Connection closed by 139.178.68.195 port 57124
May 27 03:30:10.412220 sshd-session[1798]: pam_unix(sshd:session): session closed for user core
May 27 03:30:10.416353 systemd[1]: sshd@6-172.234.197.247:22-139.178.68.195:57124.service: Deactivated successfully.
May 27 03:30:10.418848 systemd[1]: session-7.scope: Deactivated successfully.
May 27 03:30:10.419127 systemd[1]: session-7.scope: Consumed 4.438s CPU time, 274.8M memory peak.
May 27 03:30:10.423895 systemd-logind[1527]: Session 7 logged out. Waiting for processes to exit.
May 27 03:30:10.425509 systemd-logind[1527]: Removed session 7.
May 27 03:30:12.144924 kubelet[2671]: E0527 03:30:12.144890 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
May 27 03:30:13.499349 kubelet[2671]: I0527 03:30:13.499293 2671 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
May 27 03:30:13.499833 containerd[1552]: time="2025-05-27T03:30:13.499791016Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
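The pod_startup_latency_tracker records above derive podStartSLOduration from podCreationTimestamp and observedRunningTime. Reproducing the first record's arithmetic with values copied from the log:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Values copied from the kube-apiserver pod's record above.
        const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
        created, err := time.Parse(layout, "2025-05-27 03:30:08 +0000 UTC")
        if err != nil {
            panic(err)
        }
        running, err := time.Parse(layout, "2025-05-27 03:30:09.065697574 +0000 UTC")
        if err != nil {
            panic(err)
        }
        // Prints 1.065697574s, the logged podStartSLOduration.
        fmt.Println(running.Sub(created))
    }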
May 27 03:30:13.500071 kubelet[2671]: I0527 03:30:13.500003 2671 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 27 03:30:14.267883 systemd[1]: Created slice kubepods-besteffort-pod23eb420a_a614_4711_97aa_1fb6a5ca2dc4.slice - libcontainer container kubepods-besteffort-pod23eb420a_a614_4711_97aa_1fb6a5ca2dc4.slice. May 27 03:30:14.290599 systemd[1]: Created slice kubepods-burstable-pod7c0dd807_9699_488d_ac58_f1562c7b3dd2.slice - libcontainer container kubepods-burstable-pod7c0dd807_9699_488d_ac58_f1562c7b3dd2.slice. May 27 03:30:14.318417 kubelet[2671]: I0527 03:30:14.318190 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7c0dd807-9699-488d-ac58-f1562c7b3dd2-cilium-run\") pod \"cilium-lpk5f\" (UID: \"7c0dd807-9699-488d-ac58-f1562c7b3dd2\") " pod="kube-system/cilium-lpk5f" May 27 03:30:14.318417 kubelet[2671]: I0527 03:30:14.318245 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7c0dd807-9699-488d-ac58-f1562c7b3dd2-bpf-maps\") pod \"cilium-lpk5f\" (UID: \"7c0dd807-9699-488d-ac58-f1562c7b3dd2\") " pod="kube-system/cilium-lpk5f" May 27 03:30:14.318417 kubelet[2671]: I0527 03:30:14.318263 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7c0dd807-9699-488d-ac58-f1562c7b3dd2-cni-path\") pod \"cilium-lpk5f\" (UID: \"7c0dd807-9699-488d-ac58-f1562c7b3dd2\") " pod="kube-system/cilium-lpk5f" May 27 03:30:14.318417 kubelet[2671]: I0527 03:30:14.318276 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7c0dd807-9699-488d-ac58-f1562c7b3dd2-etc-cni-netd\") pod \"cilium-lpk5f\" (UID: \"7c0dd807-9699-488d-ac58-f1562c7b3dd2\") " pod="kube-system/cilium-lpk5f" May 27 03:30:14.318417 kubelet[2671]: I0527 03:30:14.318289 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7c0dd807-9699-488d-ac58-f1562c7b3dd2-host-proc-sys-net\") pod \"cilium-lpk5f\" (UID: \"7c0dd807-9699-488d-ac58-f1562c7b3dd2\") " pod="kube-system/cilium-lpk5f" May 27 03:30:14.318417 kubelet[2671]: I0527 03:30:14.318303 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/23eb420a-a614-4711-97aa-1fb6a5ca2dc4-xtables-lock\") pod \"kube-proxy-qn9jr\" (UID: \"23eb420a-a614-4711-97aa-1fb6a5ca2dc4\") " pod="kube-system/kube-proxy-qn9jr" May 27 03:30:14.318767 kubelet[2671]: I0527 03:30:14.318316 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7c0dd807-9699-488d-ac58-f1562c7b3dd2-hubble-tls\") pod \"cilium-lpk5f\" (UID: \"7c0dd807-9699-488d-ac58-f1562c7b3dd2\") " pod="kube-system/cilium-lpk5f" May 27 03:30:14.318767 kubelet[2671]: I0527 03:30:14.318329 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfvvs\" (UniqueName: \"kubernetes.io/projected/7c0dd807-9699-488d-ac58-f1562c7b3dd2-kube-api-access-lfvvs\") pod \"cilium-lpk5f\" (UID: \"7c0dd807-9699-488d-ac58-f1562c7b3dd2\") " pod="kube-system/cilium-lpk5f" May 27 
03:30:14.318767 kubelet[2671]: I0527 03:30:14.318342 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/23eb420a-a614-4711-97aa-1fb6a5ca2dc4-lib-modules\") pod \"kube-proxy-qn9jr\" (UID: \"23eb420a-a614-4711-97aa-1fb6a5ca2dc4\") " pod="kube-system/kube-proxy-qn9jr" May 27 03:30:14.318767 kubelet[2671]: I0527 03:30:14.318354 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7c0dd807-9699-488d-ac58-f1562c7b3dd2-cilium-cgroup\") pod \"cilium-lpk5f\" (UID: \"7c0dd807-9699-488d-ac58-f1562c7b3dd2\") " pod="kube-system/cilium-lpk5f" May 27 03:30:14.319051 kubelet[2671]: I0527 03:30:14.318870 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7c0dd807-9699-488d-ac58-f1562c7b3dd2-xtables-lock\") pod \"cilium-lpk5f\" (UID: \"7c0dd807-9699-488d-ac58-f1562c7b3dd2\") " pod="kube-system/cilium-lpk5f" May 27 03:30:14.319051 kubelet[2671]: I0527 03:30:14.318905 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7c0dd807-9699-488d-ac58-f1562c7b3dd2-host-proc-sys-kernel\") pod \"cilium-lpk5f\" (UID: \"7c0dd807-9699-488d-ac58-f1562c7b3dd2\") " pod="kube-system/cilium-lpk5f" May 27 03:30:14.319051 kubelet[2671]: I0527 03:30:14.318921 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/23eb420a-a614-4711-97aa-1fb6a5ca2dc4-kube-proxy\") pod \"kube-proxy-qn9jr\" (UID: \"23eb420a-a614-4711-97aa-1fb6a5ca2dc4\") " pod="kube-system/kube-proxy-qn9jr" May 27 03:30:14.319051 kubelet[2671]: I0527 03:30:14.318933 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdvgk\" (UniqueName: \"kubernetes.io/projected/23eb420a-a614-4711-97aa-1fb6a5ca2dc4-kube-api-access-cdvgk\") pod \"kube-proxy-qn9jr\" (UID: \"23eb420a-a614-4711-97aa-1fb6a5ca2dc4\") " pod="kube-system/kube-proxy-qn9jr" May 27 03:30:14.319051 kubelet[2671]: I0527 03:30:14.318946 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7c0dd807-9699-488d-ac58-f1562c7b3dd2-hostproc\") pod \"cilium-lpk5f\" (UID: \"7c0dd807-9699-488d-ac58-f1562c7b3dd2\") " pod="kube-system/cilium-lpk5f" May 27 03:30:14.319051 kubelet[2671]: I0527 03:30:14.318959 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7c0dd807-9699-488d-ac58-f1562c7b3dd2-lib-modules\") pod \"cilium-lpk5f\" (UID: \"7c0dd807-9699-488d-ac58-f1562c7b3dd2\") " pod="kube-system/cilium-lpk5f" May 27 03:30:14.319189 kubelet[2671]: I0527 03:30:14.318971 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7c0dd807-9699-488d-ac58-f1562c7b3dd2-clustermesh-secrets\") pod \"cilium-lpk5f\" (UID: \"7c0dd807-9699-488d-ac58-f1562c7b3dd2\") " pod="kube-system/cilium-lpk5f" May 27 03:30:14.319189 kubelet[2671]: I0527 03:30:14.318984 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7c0dd807-9699-488d-ac58-f1562c7b3dd2-cilium-config-path\") pod \"cilium-lpk5f\" (UID: \"7c0dd807-9699-488d-ac58-f1562c7b3dd2\") " pod="kube-system/cilium-lpk5f" May 27 03:30:14.575797 kubelet[2671]: E0527 03:30:14.575048 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 27 03:30:14.578081 containerd[1552]: time="2025-05-27T03:30:14.578003389Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qn9jr,Uid:23eb420a-a614-4711-97aa-1fb6a5ca2dc4,Namespace:kube-system,Attempt:0,}" May 27 03:30:14.582606 systemd[1]: Created slice kubepods-besteffort-podbde459c4_e5ba_482b_8176_05f5dc1c00d9.slice - libcontainer container kubepods-besteffort-podbde459c4_e5ba_482b_8176_05f5dc1c00d9.slice. May 27 03:30:14.595546 kubelet[2671]: E0527 03:30:14.595432 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 27 03:30:14.596030 containerd[1552]: time="2025-05-27T03:30:14.596004677Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lpk5f,Uid:7c0dd807-9699-488d-ac58-f1562c7b3dd2,Namespace:kube-system,Attempt:0,}" May 27 03:30:14.611248 containerd[1552]: time="2025-05-27T03:30:14.611213777Z" level=info msg="connecting to shim 61ef31179506edcab85d3b74ee1fc4eebaf22f086ea6b0e7357a28f3fdb4b561" address="unix:///run/containerd/s/9429916e6e86004eb6fc8a2d14ce98f83a5e03f07402de6867569d589b368ea5" namespace=k8s.io protocol=ttrpc version=3 May 27 03:30:14.619545 containerd[1552]: time="2025-05-27T03:30:14.619517839Z" level=info msg="connecting to shim 26b718a377804a299e363fd037e7874422226c5ab52dcb8487eadc88d41e333d" address="unix:///run/containerd/s/79f770f831849de07f901385479c78ed44eaa8cfc9b01d62fe64c7c6fa70959c" namespace=k8s.io protocol=ttrpc version=3 May 27 03:30:14.622273 kubelet[2671]: I0527 03:30:14.622168 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bde459c4-e5ba-482b-8176-05f5dc1c00d9-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-pqbr5\" (UID: \"bde459c4-e5ba-482b-8176-05f5dc1c00d9\") " pod="kube-system/cilium-operator-6c4d7847fc-pqbr5" May 27 03:30:14.622348 kubelet[2671]: I0527 03:30:14.622324 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-829w4\" (UniqueName: \"kubernetes.io/projected/bde459c4-e5ba-482b-8176-05f5dc1c00d9-kube-api-access-829w4\") pod \"cilium-operator-6c4d7847fc-pqbr5\" (UID: \"bde459c4-e5ba-482b-8176-05f5dc1c00d9\") " pod="kube-system/cilium-operator-6c4d7847fc-pqbr5" May 27 03:30:14.653695 systemd[1]: Started cri-containerd-26b718a377804a299e363fd037e7874422226c5ab52dcb8487eadc88d41e333d.scope - libcontainer container 26b718a377804a299e363fd037e7874422226c5ab52dcb8487eadc88d41e333d. May 27 03:30:14.655258 systemd[1]: Started cri-containerd-61ef31179506edcab85d3b74ee1fc4eebaf22f086ea6b0e7357a28f3fdb4b561.scope - libcontainer container 61ef31179506edcab85d3b74ee1fc4eebaf22f086ea6b0e7357a28f3fdb4b561. 
May 27 03:30:14.690042 containerd[1552]: time="2025-05-27T03:30:14.689994242Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lpk5f,Uid:7c0dd807-9699-488d-ac58-f1562c7b3dd2,Namespace:kube-system,Attempt:0,} returns sandbox id \"26b718a377804a299e363fd037e7874422226c5ab52dcb8487eadc88d41e333d\"" May 27 03:30:14.691138 kubelet[2671]: E0527 03:30:14.691000 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 27 03:30:14.693894 containerd[1552]: time="2025-05-27T03:30:14.693835363Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 27 03:30:14.701260 containerd[1552]: time="2025-05-27T03:30:14.701222026Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qn9jr,Uid:23eb420a-a614-4711-97aa-1fb6a5ca2dc4,Namespace:kube-system,Attempt:0,} returns sandbox id \"61ef31179506edcab85d3b74ee1fc4eebaf22f086ea6b0e7357a28f3fdb4b561\"" May 27 03:30:14.702327 kubelet[2671]: E0527 03:30:14.702309 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 27 03:30:14.706991 containerd[1552]: time="2025-05-27T03:30:14.706958977Z" level=info msg="CreateContainer within sandbox \"61ef31179506edcab85d3b74ee1fc4eebaf22f086ea6b0e7357a28f3fdb4b561\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 27 03:30:14.716107 containerd[1552]: time="2025-05-27T03:30:14.716086825Z" level=info msg="Container a0538b603446ba29523a87f4841acfc0b1a54b70db0a2d441bc00aa702cf60bb: CDI devices from CRI Config.CDIDevices: []" May 27 03:30:14.728536 containerd[1552]: time="2025-05-27T03:30:14.728376782Z" level=info msg="CreateContainer within sandbox \"61ef31179506edcab85d3b74ee1fc4eebaf22f086ea6b0e7357a28f3fdb4b561\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"a0538b603446ba29523a87f4841acfc0b1a54b70db0a2d441bc00aa702cf60bb\"" May 27 03:30:14.730638 containerd[1552]: time="2025-05-27T03:30:14.730584222Z" level=info msg="StartContainer for \"a0538b603446ba29523a87f4841acfc0b1a54b70db0a2d441bc00aa702cf60bb\"" May 27 03:30:14.736022 containerd[1552]: time="2025-05-27T03:30:14.735767195Z" level=info msg="connecting to shim a0538b603446ba29523a87f4841acfc0b1a54b70db0a2d441bc00aa702cf60bb" address="unix:///run/containerd/s/9429916e6e86004eb6fc8a2d14ce98f83a5e03f07402de6867569d589b368ea5" protocol=ttrpc version=3 May 27 03:30:14.761845 systemd[1]: Started cri-containerd-a0538b603446ba29523a87f4841acfc0b1a54b70db0a2d441bc00aa702cf60bb.scope - libcontainer container a0538b603446ba29523a87f4841acfc0b1a54b70db0a2d441bc00aa702cf60bb. 
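[Editor's note] The cilium image above is referenced by tag and digest at once (name:v1.12.5@sha256:...). When both are present, the pull is pinned to the digest and the tag is informational only, which is why the later "Pulled image" record reports repo tag "" next to the repo digest. A rough stdlib-only split of that reference shape; real tooling uses the full distribution reference grammar:

    package main

    import (
        "fmt"
        "strings"
    )

    // splitRef breaks "repo:tag@digest" into its parts. Tag and digest are
    // both optional; a digest, when present, is what the pull is pinned to.
    func splitRef(ref string) (repo, tag, digest string) {
        if i := strings.Index(ref, "@"); i >= 0 {
            ref, digest = ref[:i], ref[i+1:]
        }
        // The last ':' after the final '/' separates the tag from the repo
        // (so a registry port like localhost:5000 is not mistaken for a tag).
        if i := strings.LastIndex(ref, ":"); i > strings.LastIndex(ref, "/") {
            ref, tag = ref[:i], ref[i+1:]
        }
        return ref, tag, digest
    }

    func main() {
        repo, tag, digest := splitRef(
            "quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5")
        fmt.Println("repo:  ", repo)
        fmt.Println("tag:   ", tag)    // informational once a digest is given
        fmt.Println("digest:", digest) // what containerd actually resolves
    }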
May 27 03:30:14.803528 containerd[1552]: time="2025-05-27T03:30:14.803464021Z" level=info msg="StartContainer for \"a0538b603446ba29523a87f4841acfc0b1a54b70db0a2d441bc00aa702cf60bb\" returns successfully" May 27 03:30:14.887035 kubelet[2671]: E0527 03:30:14.886595 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 27 03:30:14.888276 containerd[1552]: time="2025-05-27T03:30:14.888179263Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-pqbr5,Uid:bde459c4-e5ba-482b-8176-05f5dc1c00d9,Namespace:kube-system,Attempt:0,}" May 27 03:30:14.909505 containerd[1552]: time="2025-05-27T03:30:14.909454794Z" level=info msg="connecting to shim bc78d8a79f707bc1360a6cbb94617506cd0dfee2574477024a26955315d593a0" address="unix:///run/containerd/s/b6b2ee838ba9352d15ed4bdf292e971af04aa4aa4c49e6051681a56f449955d7" namespace=k8s.io protocol=ttrpc version=3 May 27 03:30:14.948815 systemd[1]: Started cri-containerd-bc78d8a79f707bc1360a6cbb94617506cd0dfee2574477024a26955315d593a0.scope - libcontainer container bc78d8a79f707bc1360a6cbb94617506cd0dfee2574477024a26955315d593a0. May 27 03:30:15.018439 containerd[1552]: time="2025-05-27T03:30:15.018183321Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-pqbr5,Uid:bde459c4-e5ba-482b-8176-05f5dc1c00d9,Namespace:kube-system,Attempt:0,} returns sandbox id \"bc78d8a79f707bc1360a6cbb94617506cd0dfee2574477024a26955315d593a0\"" May 27 03:30:15.020043 kubelet[2671]: E0527 03:30:15.019657 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 27 03:30:15.036400 kubelet[2671]: E0527 03:30:15.036380 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 27 03:30:15.045639 kubelet[2671]: I0527 03:30:15.045594 2671 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-qn9jr" podStartSLOduration=1.045581481 podStartE2EDuration="1.045581481s" podCreationTimestamp="2025-05-27 03:30:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 03:30:15.045300193 +0000 UTC m=+7.178269060" watchObservedRunningTime="2025-05-27 03:30:15.045581481 +0000 UTC m=+7.178550338" May 27 03:30:15.316627 kubelet[2671]: E0527 03:30:15.315054 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 27 03:30:16.041718 kubelet[2671]: E0527 03:30:16.041155 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 27 03:30:17.046362 kubelet[2671]: E0527 03:30:17.046300 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 27 03:30:17.745988 kubelet[2671]: E0527 03:30:17.745095 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers 
have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 27 03:30:18.051048 kubelet[2671]: E0527 03:30:18.050882 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 27 03:30:18.917053 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1558403490.mount: Deactivated successfully. May 27 03:30:20.628362 containerd[1552]: time="2025-05-27T03:30:20.628320671Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:30:20.629428 containerd[1552]: time="2025-05-27T03:30:20.629226071Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" May 27 03:30:20.629932 containerd[1552]: time="2025-05-27T03:30:20.629902755Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:30:20.631284 containerd[1552]: time="2025-05-27T03:30:20.631244284Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 5.937344869s" May 27 03:30:20.631284 containerd[1552]: time="2025-05-27T03:30:20.631281775Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" May 27 03:30:20.632865 containerd[1552]: time="2025-05-27T03:30:20.632841718Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 27 03:30:20.637240 containerd[1552]: time="2025-05-27T03:30:20.637201982Z" level=info msg="CreateContainer within sandbox \"26b718a377804a299e363fd037e7874422226c5ab52dcb8487eadc88d41e333d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 27 03:30:20.644055 containerd[1552]: time="2025-05-27T03:30:20.642466474Z" level=info msg="Container 52aefc3b11349c15de6ca37bf52929d31bca81fe1d658326ac29ff38885d0c13: CDI devices from CRI Config.CDIDevices: []" May 27 03:30:20.656006 containerd[1552]: time="2025-05-27T03:30:20.655960773Z" level=info msg="CreateContainer within sandbox \"26b718a377804a299e363fd037e7874422226c5ab52dcb8487eadc88d41e333d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"52aefc3b11349c15de6ca37bf52929d31bca81fe1d658326ac29ff38885d0c13\"" May 27 03:30:20.657588 containerd[1552]: time="2025-05-27T03:30:20.656588157Z" level=info msg="StartContainer for \"52aefc3b11349c15de6ca37bf52929d31bca81fe1d658326ac29ff38885d0c13\"" May 27 03:30:20.658217 containerd[1552]: time="2025-05-27T03:30:20.658188921Z" level=info msg="connecting to shim 52aefc3b11349c15de6ca37bf52929d31bca81fe1d658326ac29ff38885d0c13" address="unix:///run/containerd/s/79f770f831849de07f901385479c78ed44eaa8cfc9b01d62fe64c7c6fa70959c" protocol=ttrpc version=3 
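[Editor's note] The pull metrics just logged allow a quick sanity check: 166,730,503 bytes read in 5.937344869s is roughly 28 MB/s (about 27 MiB/s). A one-off calculation using the two figures from the records above:

    package main

    import "fmt"

    func main() {
        const bytesRead = 166730503 // "bytes read" from the stop-pulling record
        const seconds = 5.937344869 // duration from the "Pulled image" record

        bps := bytesRead / seconds
        fmt.Printf("%.1f MB/s (%.1f MiB/s)\n", bps/1e6, bps/(1<<20))
    }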
May 27 03:30:20.686790 systemd[1]: Started cri-containerd-52aefc3b11349c15de6ca37bf52929d31bca81fe1d658326ac29ff38885d0c13.scope - libcontainer container 52aefc3b11349c15de6ca37bf52929d31bca81fe1d658326ac29ff38885d0c13. May 27 03:30:20.719600 containerd[1552]: time="2025-05-27T03:30:20.719538015Z" level=info msg="StartContainer for \"52aefc3b11349c15de6ca37bf52929d31bca81fe1d658326ac29ff38885d0c13\" returns successfully" May 27 03:30:20.735718 containerd[1552]: time="2025-05-27T03:30:20.735682241Z" level=info msg="received exit event container_id:\"52aefc3b11349c15de6ca37bf52929d31bca81fe1d658326ac29ff38885d0c13\" id:\"52aefc3b11349c15de6ca37bf52929d31bca81fe1d658326ac29ff38885d0c13\" pid:3098 exited_at:{seconds:1748316620 nanos:735413665}" May 27 03:30:20.736193 containerd[1552]: time="2025-05-27T03:30:20.736163001Z" level=info msg="TaskExit event in podsandbox handler container_id:\"52aefc3b11349c15de6ca37bf52929d31bca81fe1d658326ac29ff38885d0c13\" id:\"52aefc3b11349c15de6ca37bf52929d31bca81fe1d658326ac29ff38885d0c13\" pid:3098 exited_at:{seconds:1748316620 nanos:735413665}" May 27 03:30:20.738781 systemd[1]: cri-containerd-52aefc3b11349c15de6ca37bf52929d31bca81fe1d658326ac29ff38885d0c13.scope: Deactivated successfully. May 27 03:30:20.760947 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-52aefc3b11349c15de6ca37bf52929d31bca81fe1d658326ac29ff38885d0c13-rootfs.mount: Deactivated successfully. May 27 03:30:21.056201 kubelet[2671]: E0527 03:30:21.056140 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 27 03:30:21.063619 containerd[1552]: time="2025-05-27T03:30:21.062175861Z" level=info msg="CreateContainer within sandbox \"26b718a377804a299e363fd037e7874422226c5ab52dcb8487eadc88d41e333d\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 27 03:30:21.080687 containerd[1552]: time="2025-05-27T03:30:21.080652362Z" level=info msg="Container b52265c1e36b3525fb9ed11f474036adff79465cb217b5f5304543193f375492: CDI devices from CRI Config.CDIDevices: []" May 27 03:30:21.087310 containerd[1552]: time="2025-05-27T03:30:21.087279045Z" level=info msg="CreateContainer within sandbox \"26b718a377804a299e363fd037e7874422226c5ab52dcb8487eadc88d41e333d\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"b52265c1e36b3525fb9ed11f474036adff79465cb217b5f5304543193f375492\"" May 27 03:30:21.088290 containerd[1552]: time="2025-05-27T03:30:21.088254334Z" level=info msg="StartContainer for \"b52265c1e36b3525fb9ed11f474036adff79465cb217b5f5304543193f375492\"" May 27 03:30:21.093103 containerd[1552]: time="2025-05-27T03:30:21.089694083Z" level=info msg="connecting to shim b52265c1e36b3525fb9ed11f474036adff79465cb217b5f5304543193f375492" address="unix:///run/containerd/s/79f770f831849de07f901385479c78ed44eaa8cfc9b01d62fe64c7c6fa70959c" protocol=ttrpc version=3 May 27 03:30:21.115740 systemd[1]: Started cri-containerd-b52265c1e36b3525fb9ed11f474036adff79465cb217b5f5304543193f375492.scope - libcontainer container b52265c1e36b3525fb9ed11f474036adff79465cb217b5f5304543193f375492. May 27 03:30:21.152515 containerd[1552]: time="2025-05-27T03:30:21.152465354Z" level=info msg="StartContainer for \"b52265c1e36b3525fb9ed11f474036adff79465cb217b5f5304543193f375492\" returns successfully" May 27 03:30:21.168359 systemd[1]: systemd-sysctl.service: Deactivated successfully. 
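[Editor's note] mount-cgroup, which started and exited above, is the first of the cilium DaemonSet's init containers: it mounts the cgroup2 filesystem where the agent expects it and then exits, hence the immediate scope deactivation and TaskExit. A hedged approximation of the mount it performs; the real init container is a script in the cilium image, and the target path below is the Helm chart default, assumed here:

    //go:build linux

    package main

    import (
        "log"
        "os"
        "syscall"
    )

    func main() {
        // Assumed default mountpoint from the cilium Helm chart.
        const target = "/run/cilium/cgroupv2"

        if err := os.MkdirAll(target, 0o755); err != nil {
            log.Fatal(err)
        }
        // Equivalent to: mount -t cgroup2 none /run/cilium/cgroupv2
        if err := syscall.Mount("none", target, "cgroup2", 0, ""); err != nil {
            log.Fatalf("mounting cgroup2 at %s: %v", target, err)
        }
        log.Printf("cgroup2 mounted at %s", target)
    }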
May 27 03:30:21.168589 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 27 03:30:21.169968 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... May 27 03:30:21.172335 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 27 03:30:21.177919 systemd[1]: cri-containerd-b52265c1e36b3525fb9ed11f474036adff79465cb217b5f5304543193f375492.scope: Deactivated successfully. May 27 03:30:21.182953 containerd[1552]: time="2025-05-27T03:30:21.182912305Z" level=info msg="received exit event container_id:\"b52265c1e36b3525fb9ed11f474036adff79465cb217b5f5304543193f375492\" id:\"b52265c1e36b3525fb9ed11f474036adff79465cb217b5f5304543193f375492\" pid:3142 exited_at:{seconds:1748316621 nanos:180407324}" May 27 03:30:21.184279 containerd[1552]: time="2025-05-27T03:30:21.184260232Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b52265c1e36b3525fb9ed11f474036adff79465cb217b5f5304543193f375492\" id:\"b52265c1e36b3525fb9ed11f474036adff79465cb217b5f5304543193f375492\" pid:3142 exited_at:{seconds:1748316621 nanos:180407324}" May 27 03:30:21.199423 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 27 03:30:21.852421 containerd[1552]: time="2025-05-27T03:30:21.852359236Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:30:21.853959 containerd[1552]: time="2025-05-27T03:30:21.853914187Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" May 27 03:30:21.854752 containerd[1552]: time="2025-05-27T03:30:21.854706673Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 03:30:21.855911 containerd[1552]: time="2025-05-27T03:30:21.855851096Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 1.222923796s" May 27 03:30:21.855911 containerd[1552]: time="2025-05-27T03:30:21.855884116Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" May 27 03:30:21.859421 containerd[1552]: time="2025-05-27T03:30:21.859366376Z" level=info msg="CreateContainer within sandbox \"bc78d8a79f707bc1360a6cbb94617506cd0dfee2574477024a26955315d593a0\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 27 03:30:21.871958 containerd[1552]: time="2025-05-27T03:30:21.870787746Z" level=info msg="Container 159a82f3f0eed660f1a2c5036c30bc0628439e2c961372a554b3a063796d5a42: CDI devices from CRI Config.CDIDevices: []" May 27 03:30:21.872396 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3570882708.mount: Deactivated successfully. 
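[Editor's note] The systemd-sysctl stop/start pair above brackets cilium's apply-sysctl-overwrites init container, which writes kernel settings directly under /proc/sys. A minimal sketch; the exact sysctl set varies by cilium version, and disabling rp_filter (so cilium-routed traffic is not dropped by reverse-path filtering) is shown as a representative, assumed example:

    //go:build linux

    package main

    import (
        "log"
        "os"
        "path/filepath"
        "strings"
    )

    func writeSysctl(name, value string) error {
        // net.ipv4.conf.all.rp_filter -> /proc/sys/net/ipv4/conf/all/rp_filter
        path := filepath.Join("/proc/sys", strings.ReplaceAll(name, ".", "/"))
        return os.WriteFile(path, []byte(value), 0o644)
    }

    func main() {
        // Assumed subset of the overwrites.
        overrides := map[string]string{
            "net.ipv4.conf.all.rp_filter": "0",
        }
        for name, value := range overrides {
            if err := writeSysctl(name, value); err != nil {
                log.Fatalf("sysctl %s=%s: %v", name, value, err)
            }
            log.Printf("applied %s=%s", name, value)
        }
    }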
May 27 03:30:21.879736 containerd[1552]: time="2025-05-27T03:30:21.879696424Z" level=info msg="CreateContainer within sandbox \"bc78d8a79f707bc1360a6cbb94617506cd0dfee2574477024a26955315d593a0\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"159a82f3f0eed660f1a2c5036c30bc0628439e2c961372a554b3a063796d5a42\"" May 27 03:30:21.880410 containerd[1552]: time="2025-05-27T03:30:21.880381008Z" level=info msg="StartContainer for \"159a82f3f0eed660f1a2c5036c30bc0628439e2c961372a554b3a063796d5a42\"" May 27 03:30:21.881277 containerd[1552]: time="2025-05-27T03:30:21.881232715Z" level=info msg="connecting to shim 159a82f3f0eed660f1a2c5036c30bc0628439e2c961372a554b3a063796d5a42" address="unix:///run/containerd/s/b6b2ee838ba9352d15ed4bdf292e971af04aa4aa4c49e6051681a56f449955d7" protocol=ttrpc version=3 May 27 03:30:21.909743 systemd[1]: Started cri-containerd-159a82f3f0eed660f1a2c5036c30bc0628439e2c961372a554b3a063796d5a42.scope - libcontainer container 159a82f3f0eed660f1a2c5036c30bc0628439e2c961372a554b3a063796d5a42. May 27 03:30:21.946007 containerd[1552]: time="2025-05-27T03:30:21.945924624Z" level=info msg="StartContainer for \"159a82f3f0eed660f1a2c5036c30bc0628439e2c961372a554b3a063796d5a42\" returns successfully" May 27 03:30:22.065594 kubelet[2671]: E0527 03:30:22.065540 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 27 03:30:22.071706 containerd[1552]: time="2025-05-27T03:30:22.071455037Z" level=info msg="CreateContainer within sandbox \"26b718a377804a299e363fd037e7874422226c5ab52dcb8487eadc88d41e333d\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 27 03:30:22.073380 kubelet[2671]: E0527 03:30:22.073351 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 27 03:30:22.091969 containerd[1552]: time="2025-05-27T03:30:22.090964444Z" level=info msg="Container df77542d9b5f680f684ed68afa97800d49edaa0f23f9039c14a2598dc00edd7a: CDI devices from CRI Config.CDIDevices: []" May 27 03:30:22.098718 containerd[1552]: time="2025-05-27T03:30:22.098083458Z" level=info msg="CreateContainer within sandbox \"26b718a377804a299e363fd037e7874422226c5ab52dcb8487eadc88d41e333d\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"df77542d9b5f680f684ed68afa97800d49edaa0f23f9039c14a2598dc00edd7a\"" May 27 03:30:22.099428 containerd[1552]: time="2025-05-27T03:30:22.099389392Z" level=info msg="StartContainer for \"df77542d9b5f680f684ed68afa97800d49edaa0f23f9039c14a2598dc00edd7a\"" May 27 03:30:22.101417 containerd[1552]: time="2025-05-27T03:30:22.101383680Z" level=info msg="connecting to shim df77542d9b5f680f684ed68afa97800d49edaa0f23f9039c14a2598dc00edd7a" address="unix:///run/containerd/s/79f770f831849de07f901385479c78ed44eaa8cfc9b01d62fe64c7c6fa70959c" protocol=ttrpc version=3 May 27 03:30:22.133804 systemd[1]: Started cri-containerd-df77542d9b5f680f684ed68afa97800d49edaa0f23f9039c14a2598dc00edd7a.scope - libcontainer container df77542d9b5f680f684ed68afa97800d49edaa0f23f9039c14a2598dc00edd7a. 
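[Editor's note] mount-bpf-fs, created above, ensures the BPF filesystem is mounted at /sys/fs/bpf so pinned maps survive agent restarts; the real init script first checks whether bpffs is already mounted and skips if so. A sketch of the mount itself:

    //go:build linux

    package main

    import (
        "log"
        "syscall"
    )

    func main() {
        // Equivalent to: mount -t bpf bpffs /sys/fs/bpf
        // Returns EBUSY if bpffs is already mounted there, which the real
        // init script avoids by checking /proc/mounts first.
        if err := syscall.Mount("bpffs", "/sys/fs/bpf", "bpf", 0, ""); err != nil {
            log.Fatalf("mounting bpffs: %v", err)
        }
        log.Println("bpffs mounted at /sys/fs/bpf")
    }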
May 27 03:30:22.154041 kubelet[2671]: E0527 03:30:22.154004 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 27 03:30:22.224907 kubelet[2671]: I0527 03:30:22.224832 2671 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-pqbr5" podStartSLOduration=1.389251838 podStartE2EDuration="8.224819073s" podCreationTimestamp="2025-05-27 03:30:14 +0000 UTC" firstStartedPulling="2025-05-27 03:30:15.021063876 +0000 UTC m=+7.154032733" lastFinishedPulling="2025-05-27 03:30:21.856631111 +0000 UTC m=+13.989599968" observedRunningTime="2025-05-27 03:30:22.224789593 +0000 UTC m=+14.357758470" watchObservedRunningTime="2025-05-27 03:30:22.224819073 +0000 UTC m=+14.357787930" May 27 03:30:22.259530 containerd[1552]: time="2025-05-27T03:30:22.259477455Z" level=info msg="StartContainer for \"df77542d9b5f680f684ed68afa97800d49edaa0f23f9039c14a2598dc00edd7a\" returns successfully" May 27 03:30:22.265013 systemd[1]: cri-containerd-df77542d9b5f680f684ed68afa97800d49edaa0f23f9039c14a2598dc00edd7a.scope: Deactivated successfully. May 27 03:30:22.266428 containerd[1552]: time="2025-05-27T03:30:22.266360985Z" level=info msg="received exit event container_id:\"df77542d9b5f680f684ed68afa97800d49edaa0f23f9039c14a2598dc00edd7a\" id:\"df77542d9b5f680f684ed68afa97800d49edaa0f23f9039c14a2598dc00edd7a\" pid:3238 exited_at:{seconds:1748316622 nanos:266126031}" May 27 03:30:22.267100 containerd[1552]: time="2025-05-27T03:30:22.267077279Z" level=info msg="TaskExit event in podsandbox handler container_id:\"df77542d9b5f680f684ed68afa97800d49edaa0f23f9039c14a2598dc00edd7a\" id:\"df77542d9b5f680f684ed68afa97800d49edaa0f23f9039c14a2598dc00edd7a\" pid:3238 exited_at:{seconds:1748316622 nanos:266126031}" May 27 03:30:23.081021 kubelet[2671]: E0527 03:30:23.080523 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 27 03:30:23.084286 kubelet[2671]: E0527 03:30:23.082053 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 27 03:30:23.086706 kubelet[2671]: E0527 03:30:23.086632 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 27 03:30:23.098900 containerd[1552]: time="2025-05-27T03:30:23.098850489Z" level=info msg="CreateContainer within sandbox \"26b718a377804a299e363fd037e7874422226c5ab52dcb8487eadc88d41e333d\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 27 03:30:23.114662 containerd[1552]: time="2025-05-27T03:30:23.114614787Z" level=info msg="Container eccda6ff93c3f3bd9baace361531e8c77ea92ada06ee7522fbfd5bb5f931d932: CDI devices from CRI Config.CDIDevices: []" May 27 03:30:23.130975 containerd[1552]: time="2025-05-27T03:30:23.130899035Z" level=info msg="CreateContainer within sandbox \"26b718a377804a299e363fd037e7874422226c5ab52dcb8487eadc88d41e333d\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"eccda6ff93c3f3bd9baace361531e8c77ea92ada06ee7522fbfd5bb5f931d932\"" May 27 03:30:23.131659 containerd[1552]: time="2025-05-27T03:30:23.131541776Z" 
level=info msg="StartContainer for \"eccda6ff93c3f3bd9baace361531e8c77ea92ada06ee7522fbfd5bb5f931d932\"" May 27 03:30:23.134410 containerd[1552]: time="2025-05-27T03:30:23.134369796Z" level=info msg="connecting to shim eccda6ff93c3f3bd9baace361531e8c77ea92ada06ee7522fbfd5bb5f931d932" address="unix:///run/containerd/s/79f770f831849de07f901385479c78ed44eaa8cfc9b01d62fe64c7c6fa70959c" protocol=ttrpc version=3 May 27 03:30:23.179776 systemd[1]: Started cri-containerd-eccda6ff93c3f3bd9baace361531e8c77ea92ada06ee7522fbfd5bb5f931d932.scope - libcontainer container eccda6ff93c3f3bd9baace361531e8c77ea92ada06ee7522fbfd5bb5f931d932. May 27 03:30:23.221872 systemd[1]: cri-containerd-eccda6ff93c3f3bd9baace361531e8c77ea92ada06ee7522fbfd5bb5f931d932.scope: Deactivated successfully. May 27 03:30:23.224813 containerd[1552]: time="2025-05-27T03:30:23.222807026Z" level=info msg="TaskExit event in podsandbox handler container_id:\"eccda6ff93c3f3bd9baace361531e8c77ea92ada06ee7522fbfd5bb5f931d932\" id:\"eccda6ff93c3f3bd9baace361531e8c77ea92ada06ee7522fbfd5bb5f931d932\" pid:3277 exited_at:{seconds:1748316623 nanos:222219676}" May 27 03:30:23.224813 containerd[1552]: time="2025-05-27T03:30:23.222898098Z" level=info msg="received exit event container_id:\"eccda6ff93c3f3bd9baace361531e8c77ea92ada06ee7522fbfd5bb5f931d932\" id:\"eccda6ff93c3f3bd9baace361531e8c77ea92ada06ee7522fbfd5bb5f931d932\" pid:3277 exited_at:{seconds:1748316623 nanos:222219676}" May 27 03:30:23.232313 containerd[1552]: time="2025-05-27T03:30:23.232288644Z" level=info msg="StartContainer for \"eccda6ff93c3f3bd9baace361531e8c77ea92ada06ee7522fbfd5bb5f931d932\" returns successfully" May 27 03:30:23.247017 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-eccda6ff93c3f3bd9baace361531e8c77ea92ada06ee7522fbfd5bb5f931d932-rootfs.mount: Deactivated successfully. May 27 03:30:23.962987 update_engine[1530]: I20250527 03:30:23.962934 1530 update_attempter.cc:509] Updating boot flags... 
May 27 03:30:24.093586 kubelet[2671]: E0527 03:30:24.093062 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 27 03:30:24.101943 containerd[1552]: time="2025-05-27T03:30:24.101901388Z" level=info msg="CreateContainer within sandbox \"26b718a377804a299e363fd037e7874422226c5ab52dcb8487eadc88d41e333d\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 27 03:30:24.132243 containerd[1552]: time="2025-05-27T03:30:24.132216499Z" level=info msg="Container cf7fd43398f859546340f103174980c08cbd1b25abe22f6794abb02a3ca42e22: CDI devices from CRI Config.CDIDevices: []" May 27 03:30:24.153132 containerd[1552]: time="2025-05-27T03:30:24.153081784Z" level=info msg="CreateContainer within sandbox \"26b718a377804a299e363fd037e7874422226c5ab52dcb8487eadc88d41e333d\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"cf7fd43398f859546340f103174980c08cbd1b25abe22f6794abb02a3ca42e22\"" May 27 03:30:24.159754 containerd[1552]: time="2025-05-27T03:30:24.158645266Z" level=info msg="StartContainer for \"cf7fd43398f859546340f103174980c08cbd1b25abe22f6794abb02a3ca42e22\"" May 27 03:30:24.162221 containerd[1552]: time="2025-05-27T03:30:24.162200095Z" level=info msg="connecting to shim cf7fd43398f859546340f103174980c08cbd1b25abe22f6794abb02a3ca42e22" address="unix:///run/containerd/s/79f770f831849de07f901385479c78ed44eaa8cfc9b01d62fe64c7c6fa70959c" protocol=ttrpc version=3 May 27 03:30:24.207385 systemd[1]: Started cri-containerd-cf7fd43398f859546340f103174980c08cbd1b25abe22f6794abb02a3ca42e22.scope - libcontainer container cf7fd43398f859546340f103174980c08cbd1b25abe22f6794abb02a3ca42e22. May 27 03:30:24.341596 containerd[1552]: time="2025-05-27T03:30:24.341533952Z" level=info msg="StartContainer for \"cf7fd43398f859546340f103174980c08cbd1b25abe22f6794abb02a3ca42e22\" returns successfully" May 27 03:30:24.425077 containerd[1552]: time="2025-05-27T03:30:24.425036353Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cf7fd43398f859546340f103174980c08cbd1b25abe22f6794abb02a3ca42e22\" id:\"6fa76113068e4b2985cca0cba2670cbfed36c91dce9f99ae841dada5d264d7fa\" pid:3367 exited_at:{seconds:1748316624 nanos:424544165}" May 27 03:30:24.505596 kubelet[2671]: I0527 03:30:24.505140 2671 kubelet_node_status.go:501] "Fast updating node status as it just became ready" May 27 03:30:24.537625 systemd[1]: Created slice kubepods-burstable-pode2057878_6268_49cc_8eb3_9e74302b1152.slice - libcontainer container kubepods-burstable-pode2057878_6268_49cc_8eb3_9e74302b1152.slice. May 27 03:30:24.549311 systemd[1]: Created slice kubepods-burstable-pod887469ca_4b4c_4cfa_a507_2ddda10a3e6b.slice - libcontainer container kubepods-burstable-pod887469ca_4b4c_4cfa_a507_2ddda10a3e6b.slice. 
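[Editor's note] "Fast updating node status as it just became ready" marks the point where the now-running cilium-agent has written its CNI configuration and the kubelet's network-ready check passes; the pending coredns pods are scheduled immediately afterwards, which is why their cgroup slices appear above. A sketch of installing such a conflist; the JSON shape is the stock cilium one, and the filename is an assumption:

    package main

    import (
        "log"
        "os"
    )

    func main() {
        // Assumed shape of the conflist cilium writes; the kubelet's
        // network-ready check passes once a file like this exists
        // under /etc/cni/net.d.
        conflist := `{
      "cniVersion": "0.3.1",
      "name": "cilium",
      "plugins": [
        { "type": "cilium-cni" }
      ]
    }`
        if err := os.WriteFile("/etc/cni/net.d/05-cilium.conflist", []byte(conflist), 0o644); err != nil {
            log.Fatal(err)
        }
        log.Println("CNI configuration installed")
    }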
May 27 03:30:24.593893 kubelet[2671]: I0527 03:30:24.593702 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e2057878-6268-49cc-8eb3-9e74302b1152-config-volume\") pod \"coredns-674b8bbfcf-ps8nf\" (UID: \"e2057878-6268-49cc-8eb3-9e74302b1152\") " pod="kube-system/coredns-674b8bbfcf-ps8nf" May 27 03:30:24.594098 kubelet[2671]: I0527 03:30:24.593982 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bghjf\" (UniqueName: \"kubernetes.io/projected/887469ca-4b4c-4cfa-a507-2ddda10a3e6b-kube-api-access-bghjf\") pod \"coredns-674b8bbfcf-4zfgh\" (UID: \"887469ca-4b4c-4cfa-a507-2ddda10a3e6b\") " pod="kube-system/coredns-674b8bbfcf-4zfgh" May 27 03:30:24.594206 kubelet[2671]: I0527 03:30:24.594193 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/887469ca-4b4c-4cfa-a507-2ddda10a3e6b-config-volume\") pod \"coredns-674b8bbfcf-4zfgh\" (UID: \"887469ca-4b4c-4cfa-a507-2ddda10a3e6b\") " pod="kube-system/coredns-674b8bbfcf-4zfgh" May 27 03:30:24.594318 kubelet[2671]: I0527 03:30:24.594306 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6qcv\" (UniqueName: \"kubernetes.io/projected/e2057878-6268-49cc-8eb3-9e74302b1152-kube-api-access-q6qcv\") pod \"coredns-674b8bbfcf-ps8nf\" (UID: \"e2057878-6268-49cc-8eb3-9e74302b1152\") " pod="kube-system/coredns-674b8bbfcf-ps8nf" May 27 03:30:24.846664 kubelet[2671]: E0527 03:30:24.845897 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 27 03:30:24.847898 containerd[1552]: time="2025-05-27T03:30:24.847848798Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-ps8nf,Uid:e2057878-6268-49cc-8eb3-9e74302b1152,Namespace:kube-system,Attempt:0,}" May 27 03:30:24.853207 kubelet[2671]: E0527 03:30:24.853183 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 27 03:30:24.855485 containerd[1552]: time="2025-05-27T03:30:24.854610569Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-4zfgh,Uid:887469ca-4b4c-4cfa-a507-2ddda10a3e6b,Namespace:kube-system,Attempt:0,}" May 27 03:30:25.098857 kubelet[2671]: E0527 03:30:25.098434 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 27 03:30:25.114458 kubelet[2671]: I0527 03:30:25.114379 2671 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-lpk5f" podStartSLOduration=5.174928287 podStartE2EDuration="11.11436548s" podCreationTimestamp="2025-05-27 03:30:14 +0000 UTC" firstStartedPulling="2025-05-27 03:30:14.693055028 +0000 UTC m=+6.826023885" lastFinishedPulling="2025-05-27 03:30:20.632492221 +0000 UTC m=+12.765461078" observedRunningTime="2025-05-27 03:30:25.11240919 +0000 UTC m=+17.245378047" watchObservedRunningTime="2025-05-27 03:30:25.11436548 +0000 UTC m=+17.247334337" May 27 03:30:26.100517 kubelet[2671]: E0527 03:30:26.100476 2671 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 27 03:30:26.590502 systemd-networkd[1468]: cilium_host: Link UP May 27 03:30:26.591023 systemd-networkd[1468]: cilium_net: Link UP May 27 03:30:26.591210 systemd-networkd[1468]: cilium_net: Gained carrier May 27 03:30:26.591383 systemd-networkd[1468]: cilium_host: Gained carrier May 27 03:30:26.628467 systemd-networkd[1468]: cilium_host: Gained IPv6LL May 27 03:30:26.702995 systemd-networkd[1468]: cilium_vxlan: Link UP May 27 03:30:26.703007 systemd-networkd[1468]: cilium_vxlan: Gained carrier May 27 03:30:26.917803 kernel: NET: Registered PF_ALG protocol family May 27 03:30:27.102859 kubelet[2671]: E0527 03:30:27.102815 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 27 03:30:27.192975 systemd-networkd[1468]: cilium_net: Gained IPv6LL May 27 03:30:27.614494 systemd-networkd[1468]: lxc_health: Link UP May 27 03:30:27.623346 systemd-networkd[1468]: lxc_health: Gained carrier May 27 03:30:27.898715 kernel: eth0: renamed from tmpb4169 May 27 03:30:27.901608 systemd-networkd[1468]: lxc0e0f3606aa99: Link UP May 27 03:30:27.902165 systemd-networkd[1468]: lxc0e0f3606aa99: Gained carrier May 27 03:30:27.913896 systemd-networkd[1468]: lxc055248a3c606: Link UP May 27 03:30:27.925587 kernel: eth0: renamed from tmpea123 May 27 03:30:27.927189 systemd-networkd[1468]: lxc055248a3c606: Gained carrier May 27 03:30:28.169829 kubelet[2671]: I0527 03:30:28.169395 2671 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 27 03:30:28.169829 kubelet[2671]: I0527 03:30:28.169780 2671 container_gc.go:86] "Attempting to delete unused containers" May 27 03:30:28.173528 kubelet[2671]: I0527 03:30:28.173493 2671 image_gc_manager.go:447] "Attempting to delete unused images" May 27 03:30:28.196585 kubelet[2671]: I0527 03:30:28.194992 2671 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 27 03:30:28.196936 kubelet[2671]: I0527 03:30:28.196835 2671 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/coredns-674b8bbfcf-ps8nf","kube-system/coredns-674b8bbfcf-4zfgh","kube-system/cilium-operator-6c4d7847fc-pqbr5","kube-system/kube-controller-manager-172-234-197-247","kube-system/kube-proxy-qn9jr","kube-system/kube-apiserver-172-234-197-247","kube-system/cilium-lpk5f","kube-system/kube-scheduler-172-234-197-247"] May 27 03:30:28.196936 kubelet[2671]: E0527 03:30:28.196891 2671 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-ps8nf" May 27 03:30:28.196936 kubelet[2671]: E0527 03:30:28.196901 2671 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-4zfgh" May 27 03:30:28.196936 kubelet[2671]: E0527 03:30:28.196911 2671 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-pqbr5" May 27 03:30:28.197105 kubelet[2671]: E0527 03:30:28.196920 2671 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-197-247" May 27 03:30:28.197105 kubelet[2671]: E0527 03:30:28.197060 2671 eviction_manager.go:610] "Eviction manager: cannot evict a critical 
pod" pod="kube-system/kube-proxy-qn9jr" May 27 03:30:28.197105 kubelet[2671]: E0527 03:30:28.197069 2671 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-197-247" May 27 03:30:28.197105 kubelet[2671]: E0527 03:30:28.197077 2671 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-lpk5f" May 27 03:30:28.197105 kubelet[2671]: E0527 03:30:28.197085 2671 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-197-247" May 27 03:30:28.197105 kubelet[2671]: I0527 03:30:28.197095 2671 eviction_manager.go:439] "Eviction manager: unable to evict any pods from the node" May 27 03:30:28.598558 kubelet[2671]: E0527 03:30:28.597594 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 27 03:30:28.726756 systemd-networkd[1468]: cilium_vxlan: Gained IPv6LL May 27 03:30:28.790677 systemd-networkd[1468]: lxc_health: Gained IPv6LL May 27 03:30:29.048804 systemd-networkd[1468]: lxc0e0f3606aa99: Gained IPv6LL May 27 03:30:29.496974 systemd-networkd[1468]: lxc055248a3c606: Gained IPv6LL May 27 03:30:31.325617 containerd[1552]: time="2025-05-27T03:30:31.325374206Z" level=info msg="connecting to shim ea12376bc8e91001b99daa3def1acbf0b67b7e4d56183bb4a0104b958690f641" address="unix:///run/containerd/s/3d45157fd60e2308385e3ec8d9f6fd95824c6f8ef062e2f9010cb08d01d50d91" namespace=k8s.io protocol=ttrpc version=3 May 27 03:30:31.335236 containerd[1552]: time="2025-05-27T03:30:31.335178649Z" level=info msg="connecting to shim b4169446f79773b9325586ecef9ebf4b041187927671bf045c70523eceb77725" address="unix:///run/containerd/s/061cd94ce60161eb49a3f253ae5fea27e71729f1d9b90993821d4f658d638050" namespace=k8s.io protocol=ttrpc version=3 May 27 03:30:31.378689 systemd[1]: Started cri-containerd-ea12376bc8e91001b99daa3def1acbf0b67b7e4d56183bb4a0104b958690f641.scope - libcontainer container ea12376bc8e91001b99daa3def1acbf0b67b7e4d56183bb4a0104b958690f641. May 27 03:30:31.387343 systemd[1]: Started cri-containerd-b4169446f79773b9325586ecef9ebf4b041187927671bf045c70523eceb77725.scope - libcontainer container b4169446f79773b9325586ecef9ebf4b041187927671bf045c70523eceb77725. 
May 27 03:30:31.475035 containerd[1552]: time="2025-05-27T03:30:31.474970381Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-4zfgh,Uid:887469ca-4b4c-4cfa-a507-2ddda10a3e6b,Namespace:kube-system,Attempt:0,} returns sandbox id \"ea12376bc8e91001b99daa3def1acbf0b67b7e4d56183bb4a0104b958690f641\"" May 27 03:30:31.475578 kubelet[2671]: E0527 03:30:31.475524 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 27 03:30:31.484637 containerd[1552]: time="2025-05-27T03:30:31.483691923Z" level=info msg="CreateContainer within sandbox \"ea12376bc8e91001b99daa3def1acbf0b67b7e4d56183bb4a0104b958690f641\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 27 03:30:31.502828 containerd[1552]: time="2025-05-27T03:30:31.502541191Z" level=info msg="Container 97e98b04d9898f0c63a0b09e53d84a9b473c486265b7e4a89f437fda04982008: CDI devices from CRI Config.CDIDevices: []" May 27 03:30:31.507211 containerd[1552]: time="2025-05-27T03:30:31.507165060Z" level=info msg="CreateContainer within sandbox \"ea12376bc8e91001b99daa3def1acbf0b67b7e4d56183bb4a0104b958690f641\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"97e98b04d9898f0c63a0b09e53d84a9b473c486265b7e4a89f437fda04982008\"" May 27 03:30:31.508196 containerd[1552]: time="2025-05-27T03:30:31.507994349Z" level=info msg="StartContainer for \"97e98b04d9898f0c63a0b09e53d84a9b473c486265b7e4a89f437fda04982008\"" May 27 03:30:31.509911 containerd[1552]: time="2025-05-27T03:30:31.509874348Z" level=info msg="connecting to shim 97e98b04d9898f0c63a0b09e53d84a9b473c486265b7e4a89f437fda04982008" address="unix:///run/containerd/s/3d45157fd60e2308385e3ec8d9f6fd95824c6f8ef062e2f9010cb08d01d50d91" protocol=ttrpc version=3 May 27 03:30:31.512500 containerd[1552]: time="2025-05-27T03:30:31.512463136Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-ps8nf,Uid:e2057878-6268-49cc-8eb3-9e74302b1152,Namespace:kube-system,Attempt:0,} returns sandbox id \"b4169446f79773b9325586ecef9ebf4b041187927671bf045c70523eceb77725\"" May 27 03:30:31.513653 kubelet[2671]: E0527 03:30:31.513603 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 27 03:30:31.523254 containerd[1552]: time="2025-05-27T03:30:31.523141478Z" level=info msg="CreateContainer within sandbox \"b4169446f79773b9325586ecef9ebf4b041187927671bf045c70523eceb77725\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 27 03:30:31.530684 systemd[1]: Started cri-containerd-97e98b04d9898f0c63a0b09e53d84a9b473c486265b7e4a89f437fda04982008.scope - libcontainer container 97e98b04d9898f0c63a0b09e53d84a9b473c486265b7e4a89f437fda04982008. 
May 27 03:30:31.532696 containerd[1552]: time="2025-05-27T03:30:31.532586788Z" level=info msg="Container 9e6e6fe8de989113d445e769f9a9d477d492a2ff6a14f229c60f5e1e1446b9e6: CDI devices from CRI Config.CDIDevices: []" May 27 03:30:31.545196 containerd[1552]: time="2025-05-27T03:30:31.545125080Z" level=info msg="CreateContainer within sandbox \"b4169446f79773b9325586ecef9ebf4b041187927671bf045c70523eceb77725\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9e6e6fe8de989113d445e769f9a9d477d492a2ff6a14f229c60f5e1e1446b9e6\"" May 27 03:30:31.546424 containerd[1552]: time="2025-05-27T03:30:31.545768526Z" level=info msg="StartContainer for \"9e6e6fe8de989113d445e769f9a9d477d492a2ff6a14f229c60f5e1e1446b9e6\"" May 27 03:30:31.547698 containerd[1552]: time="2025-05-27T03:30:31.547651776Z" level=info msg="connecting to shim 9e6e6fe8de989113d445e769f9a9d477d492a2ff6a14f229c60f5e1e1446b9e6" address="unix:///run/containerd/s/061cd94ce60161eb49a3f253ae5fea27e71729f1d9b90993821d4f658d638050" protocol=ttrpc version=3 May 27 03:30:31.572966 systemd[1]: Started cri-containerd-9e6e6fe8de989113d445e769f9a9d477d492a2ff6a14f229c60f5e1e1446b9e6.scope - libcontainer container 9e6e6fe8de989113d445e769f9a9d477d492a2ff6a14f229c60f5e1e1446b9e6. May 27 03:30:31.579610 containerd[1552]: time="2025-05-27T03:30:31.579523002Z" level=info msg="StartContainer for \"97e98b04d9898f0c63a0b09e53d84a9b473c486265b7e4a89f437fda04982008\" returns successfully" May 27 03:30:31.621017 containerd[1552]: time="2025-05-27T03:30:31.620981658Z" level=info msg="StartContainer for \"9e6e6fe8de989113d445e769f9a9d477d492a2ff6a14f229c60f5e1e1446b9e6\" returns successfully" May 27 03:30:32.115087 kubelet[2671]: E0527 03:30:32.114805 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 27 03:30:32.117296 kubelet[2671]: E0527 03:30:32.117277 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 27 03:30:32.128597 kubelet[2671]: I0527 03:30:32.127668 2671 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-ps8nf" podStartSLOduration=18.12765893 podStartE2EDuration="18.12765893s" podCreationTimestamp="2025-05-27 03:30:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 03:30:32.127293216 +0000 UTC m=+24.260262073" watchObservedRunningTime="2025-05-27 03:30:32.12765893 +0000 UTC m=+24.260627797" May 27 03:30:32.140746 kubelet[2671]: I0527 03:30:32.140482 2671 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-4zfgh" podStartSLOduration=18.140471166 podStartE2EDuration="18.140471166s" podCreationTimestamp="2025-05-27 03:30:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 03:30:32.139414236 +0000 UTC m=+24.272383123" watchObservedRunningTime="2025-05-27 03:30:32.140471166 +0000 UTC m=+24.273440023" May 27 03:30:32.999198 kubelet[2671]: I0527 03:30:32.999097 2671 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 27 03:30:33.000407 kubelet[2671]: E0527 03:30:33.000390 2671 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 27 03:30:33.119361 kubelet[2671]: E0527 03:30:33.119318 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 27 03:30:33.119728 kubelet[2671]: E0527 03:30:33.119543 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 27 03:30:33.120498 kubelet[2671]: E0527 03:30:33.120433 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 27 03:30:34.121596 kubelet[2671]: E0527 03:30:34.121244 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 27 03:30:34.121596 kubelet[2671]: E0527 03:30:34.121278 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 27 03:30:38.223323 kubelet[2671]: I0527 03:30:38.223266 2671 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 27 03:30:38.223763 kubelet[2671]: I0527 03:30:38.223348 2671 container_gc.go:86] "Attempting to delete unused containers" May 27 03:30:38.229294 kubelet[2671]: I0527 03:30:38.229250 2671 image_gc_manager.go:447] "Attempting to delete unused images" May 27 03:30:38.250603 kubelet[2671]: I0527 03:30:38.250543 2671 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 27 03:30:38.250800 kubelet[2671]: I0527 03:30:38.250706 2671 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-pqbr5","kube-system/coredns-674b8bbfcf-4zfgh","kube-system/coredns-674b8bbfcf-ps8nf","kube-system/cilium-lpk5f","kube-system/kube-controller-manager-172-234-197-247","kube-system/kube-proxy-qn9jr","kube-system/kube-apiserver-172-234-197-247","kube-system/kube-scheduler-172-234-197-247"] May 27 03:30:38.250800 kubelet[2671]: E0527 03:30:38.250742 2671 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-pqbr5" May 27 03:30:38.250800 kubelet[2671]: E0527 03:30:38.250755 2671 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-4zfgh" May 27 03:30:38.250800 kubelet[2671]: E0527 03:30:38.250763 2671 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-ps8nf" May 27 03:30:38.250800 kubelet[2671]: E0527 03:30:38.250771 2671 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-lpk5f" May 27 03:30:38.250800 kubelet[2671]: E0527 03:30:38.250779 2671 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-197-247" May 27 03:30:38.250800 kubelet[2671]: E0527 03:30:38.250788 2671 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-qn9jr" 
May 27 03:30:38.250800 kubelet[2671]: E0527 03:30:38.250796 2671 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-197-247" May 27 03:30:38.250800 kubelet[2671]: E0527 03:30:38.250803 2671 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-197-247" May 27 03:30:38.250800 kubelet[2671]: I0527 03:30:38.250813 2671 eviction_manager.go:439] "Eviction manager: unable to evict any pods from the node" May 27 03:30:48.270094 kubelet[2671]: I0527 03:30:48.270057 2671 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 27 03:30:48.270094 kubelet[2671]: I0527 03:30:48.270109 2671 container_gc.go:86] "Attempting to delete unused containers" May 27 03:30:48.273251 kubelet[2671]: I0527 03:30:48.273235 2671 image_gc_manager.go:447] "Attempting to delete unused images" May 27 03:30:48.287287 kubelet[2671]: I0527 03:30:48.287248 2671 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 27 03:30:48.287371 kubelet[2671]: I0527 03:30:48.287355 2671 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-pqbr5","kube-system/coredns-674b8bbfcf-ps8nf","kube-system/coredns-674b8bbfcf-4zfgh","kube-system/cilium-lpk5f","kube-system/kube-controller-manager-172-234-197-247","kube-system/kube-proxy-qn9jr","kube-system/kube-apiserver-172-234-197-247","kube-system/kube-scheduler-172-234-197-247"] May 27 03:30:48.287419 kubelet[2671]: E0527 03:30:48.287394 2671 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-pqbr5" May 27 03:30:48.287419 kubelet[2671]: E0527 03:30:48.287413 2671 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-ps8nf" May 27 03:30:48.287467 kubelet[2671]: E0527 03:30:48.287421 2671 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-4zfgh" May 27 03:30:48.287467 kubelet[2671]: E0527 03:30:48.287429 2671 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-lpk5f" May 27 03:30:48.287467 kubelet[2671]: E0527 03:30:48.287438 2671 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-197-247" May 27 03:30:48.287467 kubelet[2671]: E0527 03:30:48.287445 2671 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-qn9jr" May 27 03:30:48.287467 kubelet[2671]: E0527 03:30:48.287452 2671 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-197-247" May 27 03:30:48.287467 kubelet[2671]: E0527 03:30:48.287459 2671 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-197-247" May 27 03:30:48.287467 kubelet[2671]: I0527 03:30:48.287469 2671 eviction_manager.go:439] "Eviction manager: unable to evict any pods from the node" May 27 03:30:58.310225 kubelet[2671]: I0527 03:30:58.309748 2671 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 27 03:30:58.310225 kubelet[2671]: I0527 03:30:58.309810 2671 container_gc.go:86] "Attempting to delete unused containers" May 27 03:30:58.313212 kubelet[2671]: I0527 03:30:58.313193 2671 
image_gc_manager.go:447] "Attempting to delete unused images" May 27 03:30:58.330494 kubelet[2671]: I0527 03:30:58.330048 2671 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 27 03:30:58.330494 kubelet[2671]: I0527 03:30:58.330169 2671 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-pqbr5","kube-system/coredns-674b8bbfcf-4zfgh","kube-system/coredns-674b8bbfcf-ps8nf","kube-system/cilium-lpk5f","kube-system/kube-controller-manager-172-234-197-247","kube-system/kube-proxy-qn9jr","kube-system/kube-apiserver-172-234-197-247","kube-system/kube-scheduler-172-234-197-247"] May 27 03:30:58.330494 kubelet[2671]: E0527 03:30:58.330204 2671 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-pqbr5" May 27 03:30:58.330494 kubelet[2671]: E0527 03:30:58.330220 2671 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-4zfgh" May 27 03:30:58.330494 kubelet[2671]: E0527 03:30:58.330228 2671 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-ps8nf" May 27 03:30:58.330494 kubelet[2671]: E0527 03:30:58.330238 2671 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-lpk5f" May 27 03:30:58.330494 kubelet[2671]: E0527 03:30:58.330249 2671 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-197-247" May 27 03:30:58.330494 kubelet[2671]: E0527 03:30:58.330257 2671 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-qn9jr" May 27 03:30:58.330494 kubelet[2671]: E0527 03:30:58.330266 2671 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-197-247" May 27 03:30:58.330494 kubelet[2671]: E0527 03:30:58.330275 2671 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-197-247" May 27 03:30:58.330494 kubelet[2671]: I0527 03:30:58.330284 2671 eviction_manager.go:439] "Eviction manager: unable to evict any pods from the node" May 27 03:31:08.356670 kubelet[2671]: I0527 03:31:08.356001 2671 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 27 03:31:08.356670 kubelet[2671]: I0527 03:31:08.356048 2671 container_gc.go:86] "Attempting to delete unused containers" May 27 03:31:08.358465 kubelet[2671]: I0527 03:31:08.358423 2671 image_gc_manager.go:447] "Attempting to delete unused images" May 27 03:31:08.371472 kubelet[2671]: I0527 03:31:08.371428 2671 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 27 03:31:08.371764 kubelet[2671]: I0527 03:31:08.371595 2671 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-pqbr5","kube-system/coredns-674b8bbfcf-4zfgh","kube-system/coredns-674b8bbfcf-ps8nf","kube-system/cilium-lpk5f","kube-system/kube-proxy-qn9jr","kube-system/kube-controller-manager-172-234-197-247","kube-system/kube-apiserver-172-234-197-247","kube-system/kube-scheduler-172-234-197-247"] May 27 03:31:08.371764 kubelet[2671]: E0527 03:31:08.371627 2671 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" 
pod="kube-system/cilium-operator-6c4d7847fc-pqbr5" May 27 03:31:08.371764 kubelet[2671]: E0527 03:31:08.371639 2671 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-4zfgh" May 27 03:31:08.371764 kubelet[2671]: E0527 03:31:08.371646 2671 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-ps8nf" May 27 03:31:08.371764 kubelet[2671]: E0527 03:31:08.371655 2671 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-lpk5f" May 27 03:31:08.371764 kubelet[2671]: E0527 03:31:08.371662 2671 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-qn9jr" May 27 03:31:08.371764 kubelet[2671]: E0527 03:31:08.371670 2671 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-197-247" May 27 03:31:08.371764 kubelet[2671]: E0527 03:31:08.371678 2671 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-197-247" May 27 03:31:08.371764 kubelet[2671]: E0527 03:31:08.371686 2671 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-197-247" May 27 03:31:08.371764 kubelet[2671]: I0527 03:31:08.371695 2671 eviction_manager.go:439] "Eviction manager: unable to evict any pods from the node" May 27 03:31:18.389426 kubelet[2671]: I0527 03:31:18.389372 2671 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 27 03:31:18.389426 kubelet[2671]: I0527 03:31:18.389428 2671 container_gc.go:86] "Attempting to delete unused containers" May 27 03:31:18.391874 kubelet[2671]: I0527 03:31:18.391857 2671 image_gc_manager.go:447] "Attempting to delete unused images" May 27 03:31:18.402156 kubelet[2671]: I0527 03:31:18.402124 2671 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 27 03:31:18.402268 kubelet[2671]: I0527 03:31:18.402248 2671 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-pqbr5","kube-system/coredns-674b8bbfcf-4zfgh","kube-system/coredns-674b8bbfcf-ps8nf","kube-system/cilium-lpk5f","kube-system/kube-proxy-qn9jr","kube-system/kube-controller-manager-172-234-197-247","kube-system/kube-apiserver-172-234-197-247","kube-system/kube-scheduler-172-234-197-247"] May 27 03:31:18.402319 kubelet[2671]: E0527 03:31:18.402278 2671 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-pqbr5" May 27 03:31:18.402319 kubelet[2671]: E0527 03:31:18.402289 2671 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-4zfgh" May 27 03:31:18.402319 kubelet[2671]: E0527 03:31:18.402296 2671 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-ps8nf" May 27 03:31:18.402319 kubelet[2671]: E0527 03:31:18.402304 2671 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-lpk5f" May 27 03:31:18.402319 kubelet[2671]: E0527 03:31:18.402311 2671 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-qn9jr" May 27 03:31:18.402319 kubelet[2671]: E0527 03:31:18.402317 2671 eviction_manager.go:610] "Eviction manager: cannot evict a 
critical pod" pod="kube-system/kube-controller-manager-172-234-197-247" May 27 03:31:18.402446 kubelet[2671]: E0527 03:31:18.402324 2671 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-197-247" May 27 03:31:18.402446 kubelet[2671]: E0527 03:31:18.402330 2671 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-197-247" May 27 03:31:18.402446 kubelet[2671]: I0527 03:31:18.402342 2671 eviction_manager.go:439] "Eviction manager: unable to evict any pods from the node" May 27 03:31:19.993839 kubelet[2671]: E0527 03:31:19.993158 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 27 03:31:28.425642 kubelet[2671]: I0527 03:31:28.425599 2671 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 27 03:31:28.425642 kubelet[2671]: I0527 03:31:28.425645 2671 container_gc.go:86] "Attempting to delete unused containers" May 27 03:31:28.428376 kubelet[2671]: I0527 03:31:28.428351 2671 image_gc_manager.go:447] "Attempting to delete unused images" May 27 03:31:28.443490 kubelet[2671]: I0527 03:31:28.443447 2671 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 27 03:31:28.443643 kubelet[2671]: I0527 03:31:28.443609 2671 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-pqbr5","kube-system/coredns-674b8bbfcf-ps8nf","kube-system/coredns-674b8bbfcf-4zfgh","kube-system/cilium-lpk5f","kube-system/kube-proxy-qn9jr","kube-system/kube-controller-manager-172-234-197-247","kube-system/kube-apiserver-172-234-197-247","kube-system/kube-scheduler-172-234-197-247"] May 27 03:31:28.443643 kubelet[2671]: E0527 03:31:28.443641 2671 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-pqbr5" May 27 03:31:28.443700 kubelet[2671]: E0527 03:31:28.443652 2671 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-ps8nf" May 27 03:31:28.443700 kubelet[2671]: E0527 03:31:28.443661 2671 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-4zfgh" May 27 03:31:28.443700 kubelet[2671]: E0527 03:31:28.443669 2671 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-lpk5f" May 27 03:31:28.443700 kubelet[2671]: E0527 03:31:28.443675 2671 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-qn9jr" May 27 03:31:28.443700 kubelet[2671]: E0527 03:31:28.443683 2671 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-197-247" May 27 03:31:28.443700 kubelet[2671]: E0527 03:31:28.443689 2671 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-197-247" May 27 03:31:28.443700 kubelet[2671]: E0527 03:31:28.443695 2671 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-197-247" May 27 03:31:28.443700 kubelet[2671]: I0527 03:31:28.443705 2671 eviction_manager.go:439] "Eviction manager: unable to evict any pods from the node" May 27 03:31:32.992836 
kubelet[2671]: E0527 03:31:32.992801 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 27 03:31:33.993342 kubelet[2671]: E0527 03:31:33.992752 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 27 03:31:36.992760 kubelet[2671]: E0527 03:31:36.992727 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 27 03:31:38.466724 kubelet[2671]: I0527 03:31:38.466689 2671 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 27 03:31:38.466724 kubelet[2671]: I0527 03:31:38.466736 2671 container_gc.go:86] "Attempting to delete unused containers" May 27 03:31:38.469646 kubelet[2671]: I0527 03:31:38.469251 2671 image_gc_manager.go:447] "Attempting to delete unused images" May 27 03:31:38.482885 kubelet[2671]: I0527 03:31:38.482855 2671 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 27 03:31:38.483035 kubelet[2671]: I0527 03:31:38.482971 2671 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-pqbr5","kube-system/coredns-674b8bbfcf-ps8nf","kube-system/coredns-674b8bbfcf-4zfgh","kube-system/cilium-lpk5f","kube-system/kube-proxy-qn9jr","kube-system/kube-controller-manager-172-234-197-247","kube-system/kube-apiserver-172-234-197-247","kube-system/kube-scheduler-172-234-197-247"] May 27 03:31:38.483035 kubelet[2671]: E0527 03:31:38.483002 2671 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-pqbr5" May 27 03:31:38.483035 kubelet[2671]: E0527 03:31:38.483014 2671 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-ps8nf" May 27 03:31:38.483035 kubelet[2671]: E0527 03:31:38.483021 2671 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-4zfgh" May 27 03:31:38.483035 kubelet[2671]: E0527 03:31:38.483030 2671 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-lpk5f" May 27 03:31:38.483035 kubelet[2671]: E0527 03:31:38.483038 2671 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-qn9jr" May 27 03:31:38.483196 kubelet[2671]: E0527 03:31:38.483047 2671 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-197-247" May 27 03:31:38.483196 kubelet[2671]: E0527 03:31:38.483055 2671 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-197-247" May 27 03:31:38.483196 kubelet[2671]: E0527 03:31:38.483062 2671 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-197-247" May 27 03:31:38.483196 kubelet[2671]: I0527 03:31:38.483070 2671 eviction_manager.go:439] "Eviction manager: unable to evict any pods from the node" May 27 03:31:39.993061 kubelet[2671]: E0527 03:31:39.992394 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 27 03:31:47.993161 kubelet[2671]: E0527 03:31:47.992633 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 27 03:31:48.500386 kubelet[2671]: I0527 03:31:48.500352 2671 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 27 03:31:48.500512 kubelet[2671]: I0527 03:31:48.500411 2671 container_gc.go:86] "Attempting to delete unused containers" May 27 03:31:48.503583 kubelet[2671]: I0527 03:31:48.503535 2671 image_gc_manager.go:447] "Attempting to delete unused images" May 27 03:31:48.516720 kubelet[2671]: I0527 03:31:48.516690 2671 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 27 03:31:48.517049 kubelet[2671]: I0527 03:31:48.517025 2671 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-pqbr5","kube-system/coredns-674b8bbfcf-4zfgh","kube-system/coredns-674b8bbfcf-ps8nf","kube-system/cilium-lpk5f","kube-system/kube-proxy-qn9jr","kube-system/kube-controller-manager-172-234-197-247","kube-system/kube-apiserver-172-234-197-247","kube-system/kube-scheduler-172-234-197-247"] May 27 03:31:48.517084 kubelet[2671]: E0527 03:31:48.517073 2671 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-pqbr5" May 27 03:31:48.517135 kubelet[2671]: E0527 03:31:48.517091 2671 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-4zfgh" May 27 03:31:48.517135 kubelet[2671]: E0527 03:31:48.517105 2671 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-ps8nf" May 27 03:31:48.517135 kubelet[2671]: E0527 03:31:48.517117 2671 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-lpk5f" May 27 03:31:48.517135 kubelet[2671]: E0527 03:31:48.517128 2671 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-qn9jr" May 27 03:31:48.517135 kubelet[2671]: E0527 03:31:48.517138 2671 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-197-247" May 27 03:31:48.517264 kubelet[2671]: E0527 03:31:48.517146 2671 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-197-247" May 27 03:31:48.517264 kubelet[2671]: E0527 03:31:48.517154 2671 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-197-247" May 27 03:31:48.517264 kubelet[2671]: I0527 03:31:48.517162 2671 eviction_manager.go:439] "Eviction manager: unable to evict any pods from the node" May 27 03:31:58.535302 kubelet[2671]: I0527 03:31:58.535234 2671 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 27 03:31:58.535302 kubelet[2671]: I0527 03:31:58.535292 2671 container_gc.go:86] "Attempting to delete unused containers" May 27 03:31:58.538494 kubelet[2671]: I0527 03:31:58.538453 2671 image_gc_manager.go:447] "Attempting to delete unused images" May 27 03:31:58.552819 kubelet[2671]: I0527 03:31:58.552775 2671 eviction_manager.go:387] "Eviction manager: 
must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 27 03:31:58.552945 kubelet[2671]: I0527 03:31:58.552882 2671 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-pqbr5","kube-system/coredns-674b8bbfcf-4zfgh","kube-system/coredns-674b8bbfcf-ps8nf","kube-system/cilium-lpk5f","kube-system/kube-proxy-qn9jr","kube-system/kube-controller-manager-172-234-197-247","kube-system/kube-apiserver-172-234-197-247","kube-system/kube-scheduler-172-234-197-247"] May 27 03:31:58.552945 kubelet[2671]: E0527 03:31:58.552913 2671 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-pqbr5" May 27 03:31:58.552945 kubelet[2671]: E0527 03:31:58.552924 2671 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-4zfgh" May 27 03:31:58.552945 kubelet[2671]: E0527 03:31:58.552932 2671 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-ps8nf" May 27 03:31:58.552945 kubelet[2671]: E0527 03:31:58.552943 2671 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-lpk5f" May 27 03:31:58.552945 kubelet[2671]: E0527 03:31:58.552951 2671 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-qn9jr" May 27 03:31:58.553101 kubelet[2671]: E0527 03:31:58.552958 2671 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-197-247" May 27 03:31:58.553101 kubelet[2671]: E0527 03:31:58.552965 2671 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-197-247" May 27 03:31:58.553101 kubelet[2671]: E0527 03:31:58.552972 2671 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-197-247" May 27 03:31:58.553101 kubelet[2671]: I0527 03:31:58.552981 2671 eviction_manager.go:439] "Eviction manager: unable to evict any pods from the node" May 27 03:31:58.992985 kubelet[2671]: E0527 03:31:58.992955 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 27 03:32:02.992644 kubelet[2671]: E0527 03:32:02.992609 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 27 03:32:08.582582 kubelet[2671]: I0527 03:32:08.581436 2671 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 27 03:32:08.583135 kubelet[2671]: I0527 03:32:08.583034 2671 container_gc.go:86] "Attempting to delete unused containers" May 27 03:32:08.585988 kubelet[2671]: I0527 03:32:08.585974 2671 image_gc_manager.go:447] "Attempting to delete unused images" May 27 03:32:08.589197 kubelet[2671]: I0527 03:32:08.589170 2671 image_gc_manager.go:514] "Removing image to free bytes" imageID="sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1" size=58938593 runtimeHandler="" May 27 03:32:08.589441 containerd[1552]: time="2025-05-27T03:32:08.589410804Z" level=info msg="RemoveImage \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" May 27 03:32:08.591686 containerd[1552]: 
time="2025-05-27T03:32:08.590717303Z" level=info msg="ImageDelete event name:\"registry.k8s.io/etcd:3.5.21-0\"" May 27 03:32:08.591686 containerd[1552]: time="2025-05-27T03:32:08.591527844Z" level=info msg="ImageDelete event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\"" May 27 03:32:08.592837 containerd[1552]: time="2025-05-27T03:32:08.592811432Z" level=info msg="RemoveImage \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" returns successfully" May 27 03:32:08.592938 containerd[1552]: time="2025-05-27T03:32:08.592915954Z" level=info msg="ImageDelete event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" May 27 03:32:08.607449 kubelet[2671]: I0527 03:32:08.607420 2671 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 27 03:32:08.607571 kubelet[2671]: I0527 03:32:08.607534 2671 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-pqbr5","kube-system/coredns-674b8bbfcf-4zfgh","kube-system/coredns-674b8bbfcf-ps8nf","kube-system/cilium-lpk5f","kube-system/kube-proxy-qn9jr","kube-system/kube-controller-manager-172-234-197-247","kube-system/kube-apiserver-172-234-197-247","kube-system/kube-scheduler-172-234-197-247"] May 27 03:32:08.607676 kubelet[2671]: E0527 03:32:08.607627 2671 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-pqbr5" May 27 03:32:08.607676 kubelet[2671]: E0527 03:32:08.607673 2671 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-4zfgh" May 27 03:32:08.607728 kubelet[2671]: E0527 03:32:08.607682 2671 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-ps8nf" May 27 03:32:08.607728 kubelet[2671]: E0527 03:32:08.607696 2671 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-lpk5f" May 27 03:32:08.607728 kubelet[2671]: E0527 03:32:08.607703 2671 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-qn9jr" May 27 03:32:08.607728 kubelet[2671]: E0527 03:32:08.607710 2671 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-197-247" May 27 03:32:08.607728 kubelet[2671]: E0527 03:32:08.607716 2671 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-197-247" May 27 03:32:08.607728 kubelet[2671]: E0527 03:32:08.607722 2671 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-197-247" May 27 03:32:08.607728 kubelet[2671]: I0527 03:32:08.607731 2671 eviction_manager.go:439] "Eviction manager: unable to evict any pods from the node" May 27 03:32:12.441199 systemd[1]: Started sshd@7-172.234.197.247:22-139.178.68.195:40456.service - OpenSSH per-connection server daemon (139.178.68.195:40456). May 27 03:32:12.792864 sshd[4004]: Accepted publickey for core from 139.178.68.195 port 40456 ssh2: RSA SHA256:jxXZ1xczrG8cnpkwXQgX0Kgw4UJGn7xFWFd7bDU9ewY May 27 03:32:12.794809 sshd-session[4004]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:32:12.800626 systemd-logind[1527]: New session 8 of user core. 
May 27 03:32:12.806783 systemd[1]: Started session-8.scope - Session 8 of User core. May 27 03:32:13.141719 sshd[4006]: Connection closed by 139.178.68.195 port 40456 May 27 03:32:13.142487 sshd-session[4004]: pam_unix(sshd:session): session closed for user core May 27 03:32:13.148013 systemd[1]: sshd@7-172.234.197.247:22-139.178.68.195:40456.service: Deactivated successfully. May 27 03:32:13.148063 systemd-logind[1527]: Session 8 logged out. Waiting for processes to exit. May 27 03:32:13.152067 systemd[1]: session-8.scope: Deactivated successfully. May 27 03:32:13.155616 systemd-logind[1527]: Removed session 8. May 27 03:32:18.214146 systemd[1]: Started sshd@8-172.234.197.247:22-139.178.68.195:58104.service - OpenSSH per-connection server daemon (139.178.68.195:58104). May 27 03:32:18.555296 sshd[4021]: Accepted publickey for core from 139.178.68.195 port 58104 ssh2: RSA SHA256:jxXZ1xczrG8cnpkwXQgX0Kgw4UJGn7xFWFd7bDU9ewY May 27 03:32:18.556799 sshd-session[4021]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:32:18.561728 systemd-logind[1527]: New session 9 of user core. May 27 03:32:18.564677 systemd[1]: Started session-9.scope - Session 9 of User core. May 27 03:32:18.630325 kubelet[2671]: I0527 03:32:18.630291 2671 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 27 03:32:18.630325 kubelet[2671]: I0527 03:32:18.630333 2671 container_gc.go:86] "Attempting to delete unused containers" May 27 03:32:18.632928 kubelet[2671]: I0527 03:32:18.632898 2671 image_gc_manager.go:447] "Attempting to delete unused images" May 27 03:32:18.646968 kubelet[2671]: I0527 03:32:18.646944 2671 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 27 03:32:18.647072 kubelet[2671]: I0527 03:32:18.647035 2671 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-pqbr5","kube-system/coredns-674b8bbfcf-4zfgh","kube-system/coredns-674b8bbfcf-ps8nf","kube-system/cilium-lpk5f","kube-system/kube-proxy-qn9jr","kube-system/kube-controller-manager-172-234-197-247","kube-system/kube-apiserver-172-234-197-247","kube-system/kube-scheduler-172-234-197-247"] May 27 03:32:18.647072 kubelet[2671]: E0527 03:32:18.647062 2671 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-pqbr5" May 27 03:32:18.647072 kubelet[2671]: E0527 03:32:18.647072 2671 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-4zfgh" May 27 03:32:18.647170 kubelet[2671]: E0527 03:32:18.647081 2671 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-ps8nf" May 27 03:32:18.647170 kubelet[2671]: E0527 03:32:18.647089 2671 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-lpk5f" May 27 03:32:18.647170 kubelet[2671]: E0527 03:32:18.647096 2671 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-qn9jr" May 27 03:32:18.647170 kubelet[2671]: E0527 03:32:18.647103 2671 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-197-247" May 27 03:32:18.647170 kubelet[2671]: E0527 03:32:18.647109 2671 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-197-247" May 
27 03:32:18.647170 kubelet[2671]: E0527 03:32:18.647116 2671 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-197-247" May 27 03:32:18.647170 kubelet[2671]: I0527 03:32:18.647125 2671 eviction_manager.go:439] "Eviction manager: unable to evict any pods from the node" May 27 03:32:18.856037 sshd[4023]: Connection closed by 139.178.68.195 port 58104 May 27 03:32:18.856954 sshd-session[4021]: pam_unix(sshd:session): session closed for user core May 27 03:32:18.861486 systemd[1]: sshd@8-172.234.197.247:22-139.178.68.195:58104.service: Deactivated successfully. May 27 03:32:18.863486 systemd[1]: session-9.scope: Deactivated successfully. May 27 03:32:18.864629 systemd-logind[1527]: Session 9 logged out. Waiting for processes to exit. May 27 03:32:18.865967 systemd-logind[1527]: Removed session 9. May 27 03:32:23.924341 systemd[1]: Started sshd@9-172.234.197.247:22-139.178.68.195:35008.service - OpenSSH per-connection server daemon (139.178.68.195:35008). May 27 03:32:23.993434 kubelet[2671]: E0527 03:32:23.992747 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 27 03:32:24.271855 sshd[4036]: Accepted publickey for core from 139.178.68.195 port 35008 ssh2: RSA SHA256:jxXZ1xczrG8cnpkwXQgX0Kgw4UJGn7xFWFd7bDU9ewY May 27 03:32:24.273375 sshd-session[4036]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:32:24.279383 systemd-logind[1527]: New session 10 of user core. May 27 03:32:24.286879 systemd[1]: Started session-10.scope - Session 10 of User core. May 27 03:32:24.580772 sshd[4038]: Connection closed by 139.178.68.195 port 35008 May 27 03:32:24.581639 sshd-session[4036]: pam_unix(sshd:session): session closed for user core May 27 03:32:24.586748 systemd[1]: sshd@9-172.234.197.247:22-139.178.68.195:35008.service: Deactivated successfully. May 27 03:32:24.589339 systemd[1]: session-10.scope: Deactivated successfully. May 27 03:32:24.591593 systemd-logind[1527]: Session 10 logged out. Waiting for processes to exit. May 27 03:32:24.593213 systemd-logind[1527]: Removed session 10. May 27 03:32:24.640490 systemd[1]: Started sshd@10-172.234.197.247:22-139.178.68.195:35018.service - OpenSSH per-connection server daemon (139.178.68.195:35018). May 27 03:32:24.983172 sshd[4051]: Accepted publickey for core from 139.178.68.195 port 35018 ssh2: RSA SHA256:jxXZ1xczrG8cnpkwXQgX0Kgw4UJGn7xFWFd7bDU9ewY May 27 03:32:24.984720 sshd-session[4051]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:32:24.990688 systemd-logind[1527]: New session 11 of user core. May 27 03:32:24.997669 systemd[1]: Started session-11.scope - Session 11 of User core. May 27 03:32:25.321225 sshd[4053]: Connection closed by 139.178.68.195 port 35018 May 27 03:32:25.321958 sshd-session[4051]: pam_unix(sshd:session): session closed for user core May 27 03:32:25.326410 systemd-logind[1527]: Session 11 logged out. Waiting for processes to exit. May 27 03:32:25.327361 systemd[1]: sshd@10-172.234.197.247:22-139.178.68.195:35018.service: Deactivated successfully. May 27 03:32:25.330262 systemd[1]: session-11.scope: Deactivated successfully. May 27 03:32:25.332155 systemd-logind[1527]: Removed session 11. May 27 03:32:25.381334 systemd[1]: Started sshd@11-172.234.197.247:22-139.178.68.195:35026.service - OpenSSH per-connection server daemon (139.178.68.195:35026). 
May 27 03:32:25.719811 sshd[4063]: Accepted publickey for core from 139.178.68.195 port 35026 ssh2: RSA SHA256:jxXZ1xczrG8cnpkwXQgX0Kgw4UJGn7xFWFd7bDU9ewY May 27 03:32:25.721267 sshd-session[4063]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:32:25.726284 systemd-logind[1527]: New session 12 of user core. May 27 03:32:25.734703 systemd[1]: Started session-12.scope - Session 12 of User core. May 27 03:32:26.035964 sshd[4065]: Connection closed by 139.178.68.195 port 35026 May 27 03:32:26.036742 sshd-session[4063]: pam_unix(sshd:session): session closed for user core May 27 03:32:26.040188 systemd[1]: sshd@11-172.234.197.247:22-139.178.68.195:35026.service: Deactivated successfully. May 27 03:32:26.043878 systemd[1]: session-12.scope: Deactivated successfully. May 27 03:32:26.047272 systemd-logind[1527]: Session 12 logged out. Waiting for processes to exit. May 27 03:32:26.049311 systemd-logind[1527]: Removed session 12. May 27 03:32:28.665977 kubelet[2671]: I0527 03:32:28.665932 2671 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 27 03:32:28.665977 kubelet[2671]: I0527 03:32:28.665988 2671 container_gc.go:86] "Attempting to delete unused containers" May 27 03:32:28.667915 kubelet[2671]: I0527 03:32:28.667890 2671 image_gc_manager.go:447] "Attempting to delete unused images" May 27 03:32:28.681355 kubelet[2671]: I0527 03:32:28.681328 2671 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 27 03:32:28.681491 kubelet[2671]: I0527 03:32:28.681451 2671 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-pqbr5","kube-system/coredns-674b8bbfcf-4zfgh","kube-system/coredns-674b8bbfcf-ps8nf","kube-system/cilium-lpk5f","kube-system/kube-controller-manager-172-234-197-247","kube-system/kube-proxy-qn9jr","kube-system/kube-apiserver-172-234-197-247","kube-system/kube-scheduler-172-234-197-247"] May 27 03:32:28.681491 kubelet[2671]: E0527 03:32:28.681477 2671 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-pqbr5" May 27 03:32:28.681491 kubelet[2671]: E0527 03:32:28.681487 2671 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-4zfgh" May 27 03:32:28.681491 kubelet[2671]: E0527 03:32:28.681494 2671 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-ps8nf" May 27 03:32:28.681596 kubelet[2671]: E0527 03:32:28.681503 2671 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-lpk5f" May 27 03:32:28.681596 kubelet[2671]: E0527 03:32:28.681512 2671 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-197-247" May 27 03:32:28.681596 kubelet[2671]: E0527 03:32:28.681518 2671 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-qn9jr" May 27 03:32:28.681596 kubelet[2671]: E0527 03:32:28.681524 2671 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-197-247" May 27 03:32:28.681596 kubelet[2671]: E0527 03:32:28.681531 2671 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-197-247" May 27 03:32:28.681596 kubelet[2671]: I0527 
03:32:28.681540 2671 eviction_manager.go:439] "Eviction manager: unable to evict any pods from the node" May 27 03:32:31.101833 systemd[1]: Started sshd@12-172.234.197.247:22-139.178.68.195:35028.service - OpenSSH per-connection server daemon (139.178.68.195:35028). May 27 03:32:31.444486 sshd[4077]: Accepted publickey for core from 139.178.68.195 port 35028 ssh2: RSA SHA256:jxXZ1xczrG8cnpkwXQgX0Kgw4UJGn7xFWFd7bDU9ewY May 27 03:32:31.446310 sshd-session[4077]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:32:31.452138 systemd-logind[1527]: New session 13 of user core. May 27 03:32:31.467707 systemd[1]: Started session-13.scope - Session 13 of User core. May 27 03:32:31.760444 sshd[4079]: Connection closed by 139.178.68.195 port 35028 May 27 03:32:31.761091 sshd-session[4077]: pam_unix(sshd:session): session closed for user core May 27 03:32:31.765815 systemd-logind[1527]: Session 13 logged out. Waiting for processes to exit. May 27 03:32:31.766893 systemd[1]: sshd@12-172.234.197.247:22-139.178.68.195:35028.service: Deactivated successfully. May 27 03:32:31.769493 systemd[1]: session-13.scope: Deactivated successfully. May 27 03:32:31.772265 systemd-logind[1527]: Removed session 13. May 27 03:32:36.823862 systemd[1]: Started sshd@13-172.234.197.247:22-139.178.68.195:53558.service - OpenSSH per-connection server daemon (139.178.68.195:53558). May 27 03:32:37.163801 sshd[4091]: Accepted publickey for core from 139.178.68.195 port 53558 ssh2: RSA SHA256:jxXZ1xczrG8cnpkwXQgX0Kgw4UJGn7xFWFd7bDU9ewY May 27 03:32:37.165430 sshd-session[4091]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:32:37.171148 systemd-logind[1527]: New session 14 of user core. May 27 03:32:37.176712 systemd[1]: Started session-14.scope - Session 14 of User core. May 27 03:32:37.466974 sshd[4093]: Connection closed by 139.178.68.195 port 53558 May 27 03:32:37.467773 sshd-session[4091]: pam_unix(sshd:session): session closed for user core May 27 03:32:37.472125 systemd[1]: sshd@13-172.234.197.247:22-139.178.68.195:53558.service: Deactivated successfully. May 27 03:32:37.474761 systemd[1]: session-14.scope: Deactivated successfully. May 27 03:32:37.475890 systemd-logind[1527]: Session 14 logged out. Waiting for processes to exit. May 27 03:32:37.478537 systemd-logind[1527]: Removed session 14. May 27 03:32:37.530786 systemd[1]: Started sshd@14-172.234.197.247:22-139.178.68.195:53572.service - OpenSSH per-connection server daemon (139.178.68.195:53572). May 27 03:32:37.868744 sshd[4105]: Accepted publickey for core from 139.178.68.195 port 53572 ssh2: RSA SHA256:jxXZ1xczrG8cnpkwXQgX0Kgw4UJGn7xFWFd7bDU9ewY May 27 03:32:37.870277 sshd-session[4105]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:32:37.875519 systemd-logind[1527]: New session 15 of user core. May 27 03:32:37.880698 systemd[1]: Started session-15.scope - Session 15 of User core. May 27 03:32:37.993586 kubelet[2671]: E0527 03:32:37.993329 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 27 03:32:38.210518 sshd[4107]: Connection closed by 139.178.68.195 port 53572 May 27 03:32:38.211425 sshd-session[4105]: pam_unix(sshd:session): session closed for user core May 27 03:32:38.216203 systemd-logind[1527]: Session 15 logged out. Waiting for processes to exit. 
May 27 03:32:38.217053 systemd[1]: sshd@14-172.234.197.247:22-139.178.68.195:53572.service: Deactivated successfully. May 27 03:32:38.219676 systemd[1]: session-15.scope: Deactivated successfully. May 27 03:32:38.221612 systemd-logind[1527]: Removed session 15. May 27 03:32:38.282445 systemd[1]: Started sshd@15-172.234.197.247:22-139.178.68.195:53582.service - OpenSSH per-connection server daemon (139.178.68.195:53582). May 27 03:32:38.627552 sshd[4117]: Accepted publickey for core from 139.178.68.195 port 53582 ssh2: RSA SHA256:jxXZ1xczrG8cnpkwXQgX0Kgw4UJGn7xFWFd7bDU9ewY May 27 03:32:38.629449 sshd-session[4117]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:32:38.634878 systemd-logind[1527]: New session 16 of user core. May 27 03:32:38.637685 systemd[1]: Started session-16.scope - Session 16 of User core. May 27 03:32:38.698378 kubelet[2671]: I0527 03:32:38.698342 2671 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 27 03:32:38.698378 kubelet[2671]: I0527 03:32:38.698382 2671 container_gc.go:86] "Attempting to delete unused containers" May 27 03:32:38.700472 kubelet[2671]: I0527 03:32:38.700395 2671 image_gc_manager.go:447] "Attempting to delete unused images" May 27 03:32:38.712535 kubelet[2671]: I0527 03:32:38.712508 2671 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 27 03:32:38.712678 kubelet[2671]: I0527 03:32:38.712658 2671 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-pqbr5","kube-system/coredns-674b8bbfcf-4zfgh","kube-system/coredns-674b8bbfcf-ps8nf","kube-system/cilium-lpk5f","kube-system/kube-proxy-qn9jr","kube-system/kube-controller-manager-172-234-197-247","kube-system/kube-apiserver-172-234-197-247","kube-system/kube-scheduler-172-234-197-247"] May 27 03:32:38.712719 kubelet[2671]: E0527 03:32:38.712689 2671 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-pqbr5" May 27 03:32:38.712719 kubelet[2671]: E0527 03:32:38.712699 2671 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-4zfgh" May 27 03:32:38.712719 kubelet[2671]: E0527 03:32:38.712706 2671 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-ps8nf" May 27 03:32:38.712719 kubelet[2671]: E0527 03:32:38.712713 2671 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-lpk5f" May 27 03:32:38.712719 kubelet[2671]: E0527 03:32:38.712720 2671 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-qn9jr" May 27 03:32:38.712827 kubelet[2671]: E0527 03:32:38.712727 2671 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-197-247" May 27 03:32:38.712827 kubelet[2671]: E0527 03:32:38.712733 2671 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-197-247" May 27 03:32:38.712827 kubelet[2671]: E0527 03:32:38.712741 2671 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-197-247" May 27 03:32:38.712827 kubelet[2671]: I0527 03:32:38.712751 2671 eviction_manager.go:439] "Eviction manager: unable to evict any pods from the node" May 27 03:32:38.992997 
kubelet[2671]: E0527 03:32:38.992954 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 27 03:32:39.530418 sshd[4119]: Connection closed by 139.178.68.195 port 53582 May 27 03:32:39.531121 sshd-session[4117]: pam_unix(sshd:session): session closed for user core May 27 03:32:39.536179 systemd[1]: sshd@15-172.234.197.247:22-139.178.68.195:53582.service: Deactivated successfully. May 27 03:32:39.539311 systemd[1]: session-16.scope: Deactivated successfully. May 27 03:32:39.540349 systemd-logind[1527]: Session 16 logged out. Waiting for processes to exit. May 27 03:32:39.542076 systemd-logind[1527]: Removed session 16. May 27 03:32:39.599439 systemd[1]: Started sshd@16-172.234.197.247:22-139.178.68.195:53598.service - OpenSSH per-connection server daemon (139.178.68.195:53598). May 27 03:32:39.932976 sshd[4136]: Accepted publickey for core from 139.178.68.195 port 53598 ssh2: RSA SHA256:jxXZ1xczrG8cnpkwXQgX0Kgw4UJGn7xFWFd7bDU9ewY May 27 03:32:39.934930 sshd-session[4136]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:32:39.941291 systemd-logind[1527]: New session 17 of user core. May 27 03:32:39.943687 systemd[1]: Started session-17.scope - Session 17 of User core. May 27 03:32:40.338206 sshd[4138]: Connection closed by 139.178.68.195 port 53598 May 27 03:32:40.338892 sshd-session[4136]: pam_unix(sshd:session): session closed for user core May 27 03:32:40.343072 systemd[1]: sshd@16-172.234.197.247:22-139.178.68.195:53598.service: Deactivated successfully. May 27 03:32:40.345202 systemd[1]: session-17.scope: Deactivated successfully. May 27 03:32:40.346676 systemd-logind[1527]: Session 17 logged out. Waiting for processes to exit. May 27 03:32:40.348651 systemd-logind[1527]: Removed session 17. May 27 03:32:40.404558 systemd[1]: Started sshd@17-172.234.197.247:22-139.178.68.195:53614.service - OpenSSH per-connection server daemon (139.178.68.195:53614). May 27 03:32:40.742824 sshd[4147]: Accepted publickey for core from 139.178.68.195 port 53614 ssh2: RSA SHA256:jxXZ1xczrG8cnpkwXQgX0Kgw4UJGn7xFWFd7bDU9ewY May 27 03:32:40.744648 sshd-session[4147]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:32:40.749171 systemd-logind[1527]: New session 18 of user core. May 27 03:32:40.758727 systemd[1]: Started session-18.scope - Session 18 of User core. May 27 03:32:41.046587 sshd[4149]: Connection closed by 139.178.68.195 port 53614 May 27 03:32:41.047789 sshd-session[4147]: pam_unix(sshd:session): session closed for user core May 27 03:32:41.052471 systemd[1]: sshd@17-172.234.197.247:22-139.178.68.195:53614.service: Deactivated successfully. May 27 03:32:41.054992 systemd[1]: session-18.scope: Deactivated successfully. May 27 03:32:41.056405 systemd-logind[1527]: Session 18 logged out. Waiting for processes to exit. May 27 03:32:41.058383 systemd-logind[1527]: Removed session 18. May 27 03:32:43.993736 kubelet[2671]: E0527 03:32:43.993022 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 27 03:32:46.112637 systemd[1]: Started sshd@18-172.234.197.247:22-139.178.68.195:40842.service - OpenSSH per-connection server daemon (139.178.68.195:40842). 
May 27 03:32:46.456440 sshd[4165]: Accepted publickey for core from 139.178.68.195 port 40842 ssh2: RSA SHA256:jxXZ1xczrG8cnpkwXQgX0Kgw4UJGn7xFWFd7bDU9ewY May 27 03:32:46.458025 sshd-session[4165]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:32:46.463447 systemd-logind[1527]: New session 19 of user core. May 27 03:32:46.467668 systemd[1]: Started session-19.scope - Session 19 of User core. May 27 03:32:46.761070 sshd[4167]: Connection closed by 139.178.68.195 port 40842 May 27 03:32:46.761759 sshd-session[4165]: pam_unix(sshd:session): session closed for user core May 27 03:32:46.766317 systemd[1]: sshd@18-172.234.197.247:22-139.178.68.195:40842.service: Deactivated successfully. May 27 03:32:46.769032 systemd[1]: session-19.scope: Deactivated successfully. May 27 03:32:46.770470 systemd-logind[1527]: Session 19 logged out. Waiting for processes to exit. May 27 03:32:46.771690 systemd-logind[1527]: Removed session 19. May 27 03:32:48.731100 kubelet[2671]: I0527 03:32:48.731057 2671 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 27 03:32:48.731100 kubelet[2671]: I0527 03:32:48.731095 2671 container_gc.go:86] "Attempting to delete unused containers" May 27 03:32:48.734856 kubelet[2671]: I0527 03:32:48.734835 2671 image_gc_manager.go:447] "Attempting to delete unused images" May 27 03:32:48.748626 kubelet[2671]: I0527 03:32:48.748599 2671 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 27 03:32:48.748774 kubelet[2671]: I0527 03:32:48.748723 2671 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-pqbr5","kube-system/coredns-674b8bbfcf-4zfgh","kube-system/coredns-674b8bbfcf-ps8nf","kube-system/cilium-lpk5f","kube-system/kube-controller-manager-172-234-197-247","kube-system/kube-proxy-qn9jr","kube-system/kube-apiserver-172-234-197-247","kube-system/kube-scheduler-172-234-197-247"] May 27 03:32:48.748774 kubelet[2671]: E0527 03:32:48.748762 2671 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-pqbr5" May 27 03:32:48.748774 kubelet[2671]: E0527 03:32:48.748773 2671 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-4zfgh" May 27 03:32:48.748861 kubelet[2671]: E0527 03:32:48.748781 2671 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-ps8nf" May 27 03:32:48.748861 kubelet[2671]: E0527 03:32:48.748790 2671 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-lpk5f" May 27 03:32:48.748861 kubelet[2671]: E0527 03:32:48.748797 2671 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-197-247" May 27 03:32:48.748861 kubelet[2671]: E0527 03:32:48.748803 2671 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-qn9jr" May 27 03:32:48.748861 kubelet[2671]: E0527 03:32:48.748810 2671 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-197-247" May 27 03:32:48.748861 kubelet[2671]: E0527 03:32:48.748818 2671 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-197-247" May 27 03:32:48.748861 kubelet[2671]: I0527 
03:32:48.748827 2671 eviction_manager.go:439] "Eviction manager: unable to evict any pods from the node" May 27 03:32:51.830525 systemd[1]: Started sshd@19-172.234.197.247:22-139.178.68.195:40846.service - OpenSSH per-connection server daemon (139.178.68.195:40846). May 27 03:32:52.179966 sshd[4179]: Accepted publickey for core from 139.178.68.195 port 40846 ssh2: RSA SHA256:jxXZ1xczrG8cnpkwXQgX0Kgw4UJGn7xFWFd7bDU9ewY May 27 03:32:52.181654 sshd-session[4179]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:32:52.187116 systemd-logind[1527]: New session 20 of user core. May 27 03:32:52.194700 systemd[1]: Started session-20.scope - Session 20 of User core. May 27 03:32:52.498034 sshd[4181]: Connection closed by 139.178.68.195 port 40846 May 27 03:32:52.498805 sshd-session[4179]: pam_unix(sshd:session): session closed for user core May 27 03:32:52.503518 systemd[1]: sshd@19-172.234.197.247:22-139.178.68.195:40846.service: Deactivated successfully. May 27 03:32:52.506670 systemd[1]: session-20.scope: Deactivated successfully. May 27 03:32:52.507894 systemd-logind[1527]: Session 20 logged out. Waiting for processes to exit. May 27 03:32:52.510980 systemd-logind[1527]: Removed session 20. May 27 03:32:52.560534 systemd[1]: Started sshd@20-172.234.197.247:22-139.178.68.195:40860.service - OpenSSH per-connection server daemon (139.178.68.195:40860). May 27 03:32:52.912380 sshd[4193]: Accepted publickey for core from 139.178.68.195 port 40860 ssh2: RSA SHA256:jxXZ1xczrG8cnpkwXQgX0Kgw4UJGn7xFWFd7bDU9ewY May 27 03:32:52.913892 sshd-session[4193]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:32:52.919427 systemd-logind[1527]: New session 21 of user core. May 27 03:32:52.926044 systemd[1]: Started session-21.scope - Session 21 of User core. May 27 03:32:54.450757 containerd[1552]: time="2025-05-27T03:32:54.449830140Z" level=info msg="StopContainer for \"159a82f3f0eed660f1a2c5036c30bc0628439e2c961372a554b3a063796d5a42\" with timeout 30 (s)" May 27 03:32:54.452108 containerd[1552]: time="2025-05-27T03:32:54.452066207Z" level=info msg="Stop container \"159a82f3f0eed660f1a2c5036c30bc0628439e2c961372a554b3a063796d5a42\" with signal terminated" May 27 03:32:54.466666 systemd[1]: cri-containerd-159a82f3f0eed660f1a2c5036c30bc0628439e2c961372a554b3a063796d5a42.scope: Deactivated successfully. 
May 27 03:32:54.470012 containerd[1552]: time="2025-05-27T03:32:54.469927253Z" level=info msg="received exit event container_id:\"159a82f3f0eed660f1a2c5036c30bc0628439e2c961372a554b3a063796d5a42\" id:\"159a82f3f0eed660f1a2c5036c30bc0628439e2c961372a554b3a063796d5a42\" pid:3207 exited_at:{seconds:1748316774 nanos:468826075}" May 27 03:32:54.470270 containerd[1552]: time="2025-05-27T03:32:54.469999584Z" level=info msg="TaskExit event in podsandbox handler container_id:\"159a82f3f0eed660f1a2c5036c30bc0628439e2c961372a554b3a063796d5a42\" id:\"159a82f3f0eed660f1a2c5036c30bc0628439e2c961372a554b3a063796d5a42\" pid:3207 exited_at:{seconds:1748316774 nanos:468826075}" May 27 03:32:54.482091 containerd[1552]: time="2025-05-27T03:32:54.482038916Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 27 03:32:54.492264 containerd[1552]: time="2025-05-27T03:32:54.492109082Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cf7fd43398f859546340f103174980c08cbd1b25abe22f6794abb02a3ca42e22\" id:\"aa21fa6bd7741b451f7b791f4c835837805d1cf3882bdb124ed9e9fdab2ee7f6\" pid:4221 exited_at:{seconds:1748316774 nanos:491487638}" May 27 03:32:54.495662 containerd[1552]: time="2025-05-27T03:32:54.495632319Z" level=info msg="StopContainer for \"cf7fd43398f859546340f103174980c08cbd1b25abe22f6794abb02a3ca42e22\" with timeout 2 (s)" May 27 03:32:54.496028 containerd[1552]: time="2025-05-27T03:32:54.495974042Z" level=info msg="Stop container \"cf7fd43398f859546340f103174980c08cbd1b25abe22f6794abb02a3ca42e22\" with signal terminated" May 27 03:32:54.502293 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-159a82f3f0eed660f1a2c5036c30bc0628439e2c961372a554b3a063796d5a42-rootfs.mount: Deactivated successfully. May 27 03:32:54.508499 systemd-networkd[1468]: lxc_health: Link DOWN May 27 03:32:54.508525 systemd-networkd[1468]: lxc_health: Lost carrier May 27 03:32:54.520024 containerd[1552]: time="2025-05-27T03:32:54.519983135Z" level=info msg="StopContainer for \"159a82f3f0eed660f1a2c5036c30bc0628439e2c961372a554b3a063796d5a42\" returns successfully" May 27 03:32:54.520770 containerd[1552]: time="2025-05-27T03:32:54.520679840Z" level=info msg="StopPodSandbox for \"bc78d8a79f707bc1360a6cbb94617506cd0dfee2574477024a26955315d593a0\"" May 27 03:32:54.520770 containerd[1552]: time="2025-05-27T03:32:54.520734950Z" level=info msg="Container to stop \"159a82f3f0eed660f1a2c5036c30bc0628439e2c961372a554b3a063796d5a42\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 27 03:32:54.530019 systemd[1]: cri-containerd-cf7fd43398f859546340f103174980c08cbd1b25abe22f6794abb02a3ca42e22.scope: Deactivated successfully. May 27 03:32:54.531106 systemd[1]: cri-containerd-cf7fd43398f859546340f103174980c08cbd1b25abe22f6794abb02a3ca42e22.scope: Consumed 6.558s CPU time, 125M memory peak, 144K read from disk, 13.3M written to disk. May 27 03:32:54.536897 systemd[1]: cri-containerd-bc78d8a79f707bc1360a6cbb94617506cd0dfee2574477024a26955315d593a0.scope: Deactivated successfully. 
May 27 03:32:54.540159 containerd[1552]: time="2025-05-27T03:32:54.539050600Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cf7fd43398f859546340f103174980c08cbd1b25abe22f6794abb02a3ca42e22\" id:\"cf7fd43398f859546340f103174980c08cbd1b25abe22f6794abb02a3ca42e22\" pid:3334 exited_at:{seconds:1748316774 nanos:538721067}" May 27 03:32:54.540159 containerd[1552]: time="2025-05-27T03:32:54.539393432Z" level=info msg="received exit event container_id:\"cf7fd43398f859546340f103174980c08cbd1b25abe22f6794abb02a3ca42e22\" id:\"cf7fd43398f859546340f103174980c08cbd1b25abe22f6794abb02a3ca42e22\" pid:3334 exited_at:{seconds:1748316774 nanos:538721067}" May 27 03:32:54.547989 containerd[1552]: time="2025-05-27T03:32:54.547951818Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bc78d8a79f707bc1360a6cbb94617506cd0dfee2574477024a26955315d593a0\" id:\"bc78d8a79f707bc1360a6cbb94617506cd0dfee2574477024a26955315d593a0\" pid:2943 exit_status:137 exited_at:{seconds:1748316774 nanos:545891852}" May 27 03:32:54.571388 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cf7fd43398f859546340f103174980c08cbd1b25abe22f6794abb02a3ca42e22-rootfs.mount: Deactivated successfully. May 27 03:32:54.582226 containerd[1552]: time="2025-05-27T03:32:54.582164008Z" level=info msg="StopContainer for \"cf7fd43398f859546340f103174980c08cbd1b25abe22f6794abb02a3ca42e22\" returns successfully" May 27 03:32:54.582973 containerd[1552]: time="2025-05-27T03:32:54.582922184Z" level=info msg="StopPodSandbox for \"26b718a377804a299e363fd037e7874422226c5ab52dcb8487eadc88d41e333d\"" May 27 03:32:54.583821 containerd[1552]: time="2025-05-27T03:32:54.582987864Z" level=info msg="Container to stop \"b52265c1e36b3525fb9ed11f474036adff79465cb217b5f5304543193f375492\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 27 03:32:54.583821 containerd[1552]: time="2025-05-27T03:32:54.582999085Z" level=info msg="Container to stop \"eccda6ff93c3f3bd9baace361531e8c77ea92ada06ee7522fbfd5bb5f931d932\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 27 03:32:54.583821 containerd[1552]: time="2025-05-27T03:32:54.583007125Z" level=info msg="Container to stop \"52aefc3b11349c15de6ca37bf52929d31bca81fe1d658326ac29ff38885d0c13\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 27 03:32:54.583821 containerd[1552]: time="2025-05-27T03:32:54.583014595Z" level=info msg="Container to stop \"df77542d9b5f680f684ed68afa97800d49edaa0f23f9039c14a2598dc00edd7a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 27 03:32:54.583821 containerd[1552]: time="2025-05-27T03:32:54.583022395Z" level=info msg="Container to stop \"cf7fd43398f859546340f103174980c08cbd1b25abe22f6794abb02a3ca42e22\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 27 03:32:54.599447 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bc78d8a79f707bc1360a6cbb94617506cd0dfee2574477024a26955315d593a0-rootfs.mount: Deactivated successfully. May 27 03:32:54.601924 systemd[1]: cri-containerd-26b718a377804a299e363fd037e7874422226c5ab52dcb8487eadc88d41e333d.scope: Deactivated successfully. 
May 27 03:32:54.604877 containerd[1552]: time="2025-05-27T03:32:54.604793211Z" level=info msg="shim disconnected" id=bc78d8a79f707bc1360a6cbb94617506cd0dfee2574477024a26955315d593a0 namespace=k8s.io May 27 03:32:54.604877 containerd[1552]: time="2025-05-27T03:32:54.604822301Z" level=warning msg="cleaning up after shim disconnected" id=bc78d8a79f707bc1360a6cbb94617506cd0dfee2574477024a26955315d593a0 namespace=k8s.io May 27 03:32:54.605206 containerd[1552]: time="2025-05-27T03:32:54.604830491Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 27 03:32:54.627528 containerd[1552]: time="2025-05-27T03:32:54.627496283Z" level=info msg="received exit event sandbox_id:\"bc78d8a79f707bc1360a6cbb94617506cd0dfee2574477024a26955315d593a0\" exit_status:137 exited_at:{seconds:1748316774 nanos:545891852}" May 27 03:32:54.632688 containerd[1552]: time="2025-05-27T03:32:54.628478411Z" level=info msg="TaskExit event in podsandbox handler container_id:\"26b718a377804a299e363fd037e7874422226c5ab52dcb8487eadc88d41e333d\" id:\"26b718a377804a299e363fd037e7874422226c5ab52dcb8487eadc88d41e333d\" pid:2818 exit_status:137 exited_at:{seconds:1748316774 nanos:604740120}" May 27 03:32:54.632332 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-bc78d8a79f707bc1360a6cbb94617506cd0dfee2574477024a26955315d593a0-shm.mount: Deactivated successfully. May 27 03:32:54.633266 containerd[1552]: time="2025-05-27T03:32:54.633151876Z" level=info msg="TearDown network for sandbox \"bc78d8a79f707bc1360a6cbb94617506cd0dfee2574477024a26955315d593a0\" successfully" May 27 03:32:54.633266 containerd[1552]: time="2025-05-27T03:32:54.633171197Z" level=info msg="StopPodSandbox for \"bc78d8a79f707bc1360a6cbb94617506cd0dfee2574477024a26955315d593a0\" returns successfully" May 27 03:32:54.644246 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-26b718a377804a299e363fd037e7874422226c5ab52dcb8487eadc88d41e333d-rootfs.mount: Deactivated successfully. 
May 27 03:32:54.649817 containerd[1552]: time="2025-05-27T03:32:54.649763593Z" level=info msg="shim disconnected" id=26b718a377804a299e363fd037e7874422226c5ab52dcb8487eadc88d41e333d namespace=k8s.io
May 27 03:32:54.649817 containerd[1552]: time="2025-05-27T03:32:54.649787033Z" level=warning msg="cleaning up after shim disconnected" id=26b718a377804a299e363fd037e7874422226c5ab52dcb8487eadc88d41e333d namespace=k8s.io
May 27 03:32:54.650195 containerd[1552]: time="2025-05-27T03:32:54.649903484Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 27 03:32:54.652773 containerd[1552]: time="2025-05-27T03:32:54.652753876Z" level=info msg="received exit event sandbox_id:\"26b718a377804a299e363fd037e7874422226c5ab52dcb8487eadc88d41e333d\" exit_status:137 exited_at:{seconds:1748316774 nanos:604740120}"
May 27 03:32:54.654117 containerd[1552]: time="2025-05-27T03:32:54.654068026Z" level=info msg="TearDown network for sandbox \"26b718a377804a299e363fd037e7874422226c5ab52dcb8487eadc88d41e333d\" successfully"
May 27 03:32:54.655282 containerd[1552]: time="2025-05-27T03:32:54.655141904Z" level=info msg="StopPodSandbox for \"26b718a377804a299e363fd037e7874422226c5ab52dcb8487eadc88d41e333d\" returns successfully"
May 27 03:32:54.716651 kubelet[2671]: I0527 03:32:54.715889 2671 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bde459c4-e5ba-482b-8176-05f5dc1c00d9-cilium-config-path\") pod \"bde459c4-e5ba-482b-8176-05f5dc1c00d9\" (UID: \"bde459c4-e5ba-482b-8176-05f5dc1c00d9\") "
May 27 03:32:54.716651 kubelet[2671]: I0527 03:32:54.715936 2671 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-829w4\" (UniqueName: \"kubernetes.io/projected/bde459c4-e5ba-482b-8176-05f5dc1c00d9-kube-api-access-829w4\") pod \"bde459c4-e5ba-482b-8176-05f5dc1c00d9\" (UID: \"bde459c4-e5ba-482b-8176-05f5dc1c00d9\") "
May 27 03:32:54.720965 kubelet[2671]: I0527 03:32:54.720937 2671 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bde459c4-e5ba-482b-8176-05f5dc1c00d9-kube-api-access-829w4" (OuterVolumeSpecName: "kube-api-access-829w4") pod "bde459c4-e5ba-482b-8176-05f5dc1c00d9" (UID: "bde459c4-e5ba-482b-8176-05f5dc1c00d9"). InnerVolumeSpecName "kube-api-access-829w4". PluginName "kubernetes.io/projected", VolumeGIDValue ""
May 27 03:32:54.721372 kubelet[2671]: I0527 03:32:54.721350 2671 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bde459c4-e5ba-482b-8176-05f5dc1c00d9-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "bde459c4-e5ba-482b-8176-05f5dc1c00d9" (UID: "bde459c4-e5ba-482b-8176-05f5dc1c00d9"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
May 27 03:32:54.816198 kubelet[2671]: I0527 03:32:54.816152 2671 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7c0dd807-9699-488d-ac58-f1562c7b3dd2-xtables-lock\") pod \"7c0dd807-9699-488d-ac58-f1562c7b3dd2\" (UID: \"7c0dd807-9699-488d-ac58-f1562c7b3dd2\") "
May 27 03:32:54.816198 kubelet[2671]: I0527 03:32:54.816201 2671 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7c0dd807-9699-488d-ac58-f1562c7b3dd2-bpf-maps\") pod \"7c0dd807-9699-488d-ac58-f1562c7b3dd2\" (UID: \"7c0dd807-9699-488d-ac58-f1562c7b3dd2\") "
May 27 03:32:54.816379 kubelet[2671]: I0527 03:32:54.816220 2671 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7c0dd807-9699-488d-ac58-f1562c7b3dd2-cilium-cgroup\") pod \"7c0dd807-9699-488d-ac58-f1562c7b3dd2\" (UID: \"7c0dd807-9699-488d-ac58-f1562c7b3dd2\") "
May 27 03:32:54.816379 kubelet[2671]: I0527 03:32:54.816238 2671 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7c0dd807-9699-488d-ac58-f1562c7b3dd2-hostproc\") pod \"7c0dd807-9699-488d-ac58-f1562c7b3dd2\" (UID: \"7c0dd807-9699-488d-ac58-f1562c7b3dd2\") "
May 27 03:32:54.816379 kubelet[2671]: I0527 03:32:54.816261 2671 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lfvvs\" (UniqueName: \"kubernetes.io/projected/7c0dd807-9699-488d-ac58-f1562c7b3dd2-kube-api-access-lfvvs\") pod \"7c0dd807-9699-488d-ac58-f1562c7b3dd2\" (UID: \"7c0dd807-9699-488d-ac58-f1562c7b3dd2\") "
May 27 03:32:54.816379 kubelet[2671]: I0527 03:32:54.816279 2671 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7c0dd807-9699-488d-ac58-f1562c7b3dd2-hubble-tls\") pod \"7c0dd807-9699-488d-ac58-f1562c7b3dd2\" (UID: \"7c0dd807-9699-488d-ac58-f1562c7b3dd2\") "
May 27 03:32:54.816379 kubelet[2671]: I0527 03:32:54.816296 2671 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7c0dd807-9699-488d-ac58-f1562c7b3dd2-host-proc-sys-net\") pod \"7c0dd807-9699-488d-ac58-f1562c7b3dd2\" (UID: \"7c0dd807-9699-488d-ac58-f1562c7b3dd2\") "
May 27 03:32:54.816379 kubelet[2671]: I0527 03:32:54.816311 2671 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7c0dd807-9699-488d-ac58-f1562c7b3dd2-cni-path\") pod \"7c0dd807-9699-488d-ac58-f1562c7b3dd2\" (UID: \"7c0dd807-9699-488d-ac58-f1562c7b3dd2\") "
May 27 03:32:54.816551 kubelet[2671]: I0527 03:32:54.816326 2671 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7c0dd807-9699-488d-ac58-f1562c7b3dd2-host-proc-sys-kernel\") pod \"7c0dd807-9699-488d-ac58-f1562c7b3dd2\" (UID: \"7c0dd807-9699-488d-ac58-f1562c7b3dd2\") "
May 27 03:32:54.816551 kubelet[2671]: I0527 03:32:54.816339 2671 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7c0dd807-9699-488d-ac58-f1562c7b3dd2-lib-modules\") pod \"7c0dd807-9699-488d-ac58-f1562c7b3dd2\" (UID: \"7c0dd807-9699-488d-ac58-f1562c7b3dd2\") "
May 27 03:32:54.816551 kubelet[2671]: I0527 03:32:54.816359 2671 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7c0dd807-9699-488d-ac58-f1562c7b3dd2-clustermesh-secrets\") pod \"7c0dd807-9699-488d-ac58-f1562c7b3dd2\" (UID: \"7c0dd807-9699-488d-ac58-f1562c7b3dd2\") "
May 27 03:32:54.816551 kubelet[2671]: I0527 03:32:54.816376 2671 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7c0dd807-9699-488d-ac58-f1562c7b3dd2-cilium-config-path\") pod \"7c0dd807-9699-488d-ac58-f1562c7b3dd2\" (UID: \"7c0dd807-9699-488d-ac58-f1562c7b3dd2\") "
May 27 03:32:54.816551 kubelet[2671]: I0527 03:32:54.816392 2671 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7c0dd807-9699-488d-ac58-f1562c7b3dd2-cilium-run\") pod \"7c0dd807-9699-488d-ac58-f1562c7b3dd2\" (UID: \"7c0dd807-9699-488d-ac58-f1562c7b3dd2\") "
May 27 03:32:54.816551 kubelet[2671]: I0527 03:32:54.816406 2671 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7c0dd807-9699-488d-ac58-f1562c7b3dd2-etc-cni-netd\") pod \"7c0dd807-9699-488d-ac58-f1562c7b3dd2\" (UID: \"7c0dd807-9699-488d-ac58-f1562c7b3dd2\") "
May 27 03:32:54.816735 kubelet[2671]: I0527 03:32:54.816442 2671 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-829w4\" (UniqueName: \"kubernetes.io/projected/bde459c4-e5ba-482b-8176-05f5dc1c00d9-kube-api-access-829w4\") on node \"172-234-197-247\" DevicePath \"\""
May 27 03:32:54.816735 kubelet[2671]: I0527 03:32:54.816453 2671 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bde459c4-e5ba-482b-8176-05f5dc1c00d9-cilium-config-path\") on node \"172-234-197-247\" DevicePath \"\""
May 27 03:32:54.816735 kubelet[2671]: I0527 03:32:54.816503 2671 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7c0dd807-9699-488d-ac58-f1562c7b3dd2-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "7c0dd807-9699-488d-ac58-f1562c7b3dd2" (UID: "7c0dd807-9699-488d-ac58-f1562c7b3dd2"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 27 03:32:54.816735 kubelet[2671]: I0527 03:32:54.816535 2671 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7c0dd807-9699-488d-ac58-f1562c7b3dd2-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "7c0dd807-9699-488d-ac58-f1562c7b3dd2" (UID: "7c0dd807-9699-488d-ac58-f1562c7b3dd2"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 27 03:32:54.816735 kubelet[2671]: I0527 03:32:54.816551 2671 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7c0dd807-9699-488d-ac58-f1562c7b3dd2-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "7c0dd807-9699-488d-ac58-f1562c7b3dd2" (UID: "7c0dd807-9699-488d-ac58-f1562c7b3dd2"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 27 03:32:54.817630 kubelet[2671]: I0527 03:32:54.816585 2671 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7c0dd807-9699-488d-ac58-f1562c7b3dd2-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "7c0dd807-9699-488d-ac58-f1562c7b3dd2" (UID: "7c0dd807-9699-488d-ac58-f1562c7b3dd2"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 27 03:32:54.817630 kubelet[2671]: I0527 03:32:54.816600 2671 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7c0dd807-9699-488d-ac58-f1562c7b3dd2-hostproc" (OuterVolumeSpecName: "hostproc") pod "7c0dd807-9699-488d-ac58-f1562c7b3dd2" (UID: "7c0dd807-9699-488d-ac58-f1562c7b3dd2"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 27 03:32:54.817630 kubelet[2671]: I0527 03:32:54.816947 2671 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7c0dd807-9699-488d-ac58-f1562c7b3dd2-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "7c0dd807-9699-488d-ac58-f1562c7b3dd2" (UID: "7c0dd807-9699-488d-ac58-f1562c7b3dd2"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 27 03:32:54.817630 kubelet[2671]: I0527 03:32:54.817329 2671 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7c0dd807-9699-488d-ac58-f1562c7b3dd2-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "7c0dd807-9699-488d-ac58-f1562c7b3dd2" (UID: "7c0dd807-9699-488d-ac58-f1562c7b3dd2"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 27 03:32:54.821261 kubelet[2671]: I0527 03:32:54.821125 2671 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7c0dd807-9699-488d-ac58-f1562c7b3dd2-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "7c0dd807-9699-488d-ac58-f1562c7b3dd2" (UID: "7c0dd807-9699-488d-ac58-f1562c7b3dd2"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 27 03:32:54.821261 kubelet[2671]: I0527 03:32:54.821153 2671 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7c0dd807-9699-488d-ac58-f1562c7b3dd2-cni-path" (OuterVolumeSpecName: "cni-path") pod "7c0dd807-9699-488d-ac58-f1562c7b3dd2" (UID: "7c0dd807-9699-488d-ac58-f1562c7b3dd2"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 27 03:32:54.821261 kubelet[2671]: I0527 03:32:54.821223 2671 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c0dd807-9699-488d-ac58-f1562c7b3dd2-kube-api-access-lfvvs" (OuterVolumeSpecName: "kube-api-access-lfvvs") pod "7c0dd807-9699-488d-ac58-f1562c7b3dd2" (UID: "7c0dd807-9699-488d-ac58-f1562c7b3dd2"). InnerVolumeSpecName "kube-api-access-lfvvs". PluginName "kubernetes.io/projected", VolumeGIDValue ""
May 27 03:32:54.821261 kubelet[2671]: I0527 03:32:54.821244 2671 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7c0dd807-9699-488d-ac58-f1562c7b3dd2-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "7c0dd807-9699-488d-ac58-f1562c7b3dd2" (UID: "7c0dd807-9699-488d-ac58-f1562c7b3dd2"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 27 03:32:54.823083 kubelet[2671]: I0527 03:32:54.823064 2671 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c0dd807-9699-488d-ac58-f1562c7b3dd2-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "7c0dd807-9699-488d-ac58-f1562c7b3dd2" (UID: "7c0dd807-9699-488d-ac58-f1562c7b3dd2"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
May 27 03:32:54.823243 kubelet[2671]: I0527 03:32:54.823228 2671 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c0dd807-9699-488d-ac58-f1562c7b3dd2-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "7c0dd807-9699-488d-ac58-f1562c7b3dd2" (UID: "7c0dd807-9699-488d-ac58-f1562c7b3dd2"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
May 27 03:32:54.823957 kubelet[2671]: I0527 03:32:54.823920 2671 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c0dd807-9699-488d-ac58-f1562c7b3dd2-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "7c0dd807-9699-488d-ac58-f1562c7b3dd2" (UID: "7c0dd807-9699-488d-ac58-f1562c7b3dd2"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
May 27 03:32:54.917700 kubelet[2671]: I0527 03:32:54.917647 2671 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7c0dd807-9699-488d-ac58-f1562c7b3dd2-hostproc\") on node \"172-234-197-247\" DevicePath \"\""
May 27 03:32:54.917700 kubelet[2671]: I0527 03:32:54.917677 2671 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lfvvs\" (UniqueName: \"kubernetes.io/projected/7c0dd807-9699-488d-ac58-f1562c7b3dd2-kube-api-access-lfvvs\") on node \"172-234-197-247\" DevicePath \"\""
May 27 03:32:54.917700 kubelet[2671]: I0527 03:32:54.917687 2671 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7c0dd807-9699-488d-ac58-f1562c7b3dd2-hubble-tls\") on node \"172-234-197-247\" DevicePath \"\""
May 27 03:32:54.917700 kubelet[2671]: I0527 03:32:54.917697 2671 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7c0dd807-9699-488d-ac58-f1562c7b3dd2-host-proc-sys-net\") on node \"172-234-197-247\" DevicePath \"\""
May 27 03:32:54.917700 kubelet[2671]: I0527 03:32:54.917708 2671 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7c0dd807-9699-488d-ac58-f1562c7b3dd2-cni-path\") on node \"172-234-197-247\" DevicePath \"\""
May 27 03:32:54.917700 kubelet[2671]: I0527 03:32:54.917716 2671 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7c0dd807-9699-488d-ac58-f1562c7b3dd2-host-proc-sys-kernel\") on node \"172-234-197-247\" DevicePath \"\""
May 27 03:32:54.917973 kubelet[2671]: I0527 03:32:54.917725 2671 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7c0dd807-9699-488d-ac58-f1562c7b3dd2-lib-modules\") on node \"172-234-197-247\" DevicePath \"\""
May 27 03:32:54.917973 kubelet[2671]: I0527 03:32:54.917736 2671 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7c0dd807-9699-488d-ac58-f1562c7b3dd2-clustermesh-secrets\") on node \"172-234-197-247\" DevicePath \"\""
May 27 03:32:54.917973 kubelet[2671]: I0527 03:32:54.917745 2671 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7c0dd807-9699-488d-ac58-f1562c7b3dd2-cilium-config-path\") on node \"172-234-197-247\" DevicePath \"\""
May 27 03:32:54.917973 kubelet[2671]: I0527 03:32:54.917753 2671 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7c0dd807-9699-488d-ac58-f1562c7b3dd2-cilium-run\") on node \"172-234-197-247\" DevicePath \"\""
May 27 03:32:54.917973 kubelet[2671]: I0527 03:32:54.917761 2671 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7c0dd807-9699-488d-ac58-f1562c7b3dd2-etc-cni-netd\") on node \"172-234-197-247\" DevicePath \"\""
May 27 03:32:54.917973 kubelet[2671]: I0527 03:32:54.917769 2671 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7c0dd807-9699-488d-ac58-f1562c7b3dd2-xtables-lock\") on node \"172-234-197-247\" DevicePath \"\""
May 27 03:32:54.917973 kubelet[2671]: I0527 03:32:54.917777 2671 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7c0dd807-9699-488d-ac58-f1562c7b3dd2-bpf-maps\") on node \"172-234-197-247\" DevicePath \"\""
May 27 03:32:54.917973 kubelet[2671]: I0527 03:32:54.917785 2671 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7c0dd807-9699-488d-ac58-f1562c7b3dd2-cilium-cgroup\") on node \"172-234-197-247\" DevicePath \"\""
May 27 03:32:55.407776 kubelet[2671]: I0527 03:32:55.407374 2671 scope.go:117] "RemoveContainer" containerID="159a82f3f0eed660f1a2c5036c30bc0628439e2c961372a554b3a063796d5a42"
May 27 03:32:55.411255 containerd[1552]: time="2025-05-27T03:32:55.411222198Z" level=info msg="RemoveContainer for \"159a82f3f0eed660f1a2c5036c30bc0628439e2c961372a554b3a063796d5a42\""
May 27 03:32:55.416874 containerd[1552]: time="2025-05-27T03:32:55.416730120Z" level=info msg="RemoveContainer for \"159a82f3f0eed660f1a2c5036c30bc0628439e2c961372a554b3a063796d5a42\" returns successfully"
May 27 03:32:55.417725 kubelet[2671]: I0527 03:32:55.417106 2671 scope.go:117] "RemoveContainer" containerID="159a82f3f0eed660f1a2c5036c30bc0628439e2c961372a554b3a063796d5a42"
May 27 03:32:55.417781 containerd[1552]: time="2025-05-27T03:32:55.417662417Z" level=error msg="ContainerStatus for \"159a82f3f0eed660f1a2c5036c30bc0628439e2c961372a554b3a063796d5a42\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"159a82f3f0eed660f1a2c5036c30bc0628439e2c961372a554b3a063796d5a42\": not found"
May 27 03:32:55.419086 kubelet[2671]: E0527 03:32:55.419046 2671 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"159a82f3f0eed660f1a2c5036c30bc0628439e2c961372a554b3a063796d5a42\": not found" containerID="159a82f3f0eed660f1a2c5036c30bc0628439e2c961372a554b3a063796d5a42"
May 27 03:32:55.419162 kubelet[2671]: I0527 03:32:55.419082 2671 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"159a82f3f0eed660f1a2c5036c30bc0628439e2c961372a554b3a063796d5a42"} err="failed to get container status \"159a82f3f0eed660f1a2c5036c30bc0628439e2c961372a554b3a063796d5a42\": rpc error: code = NotFound desc = an error occurred when try to find container \"159a82f3f0eed660f1a2c5036c30bc0628439e2c961372a554b3a063796d5a42\": not found"
May 27 03:32:55.419774 systemd[1]: Removed slice kubepods-besteffort-podbde459c4_e5ba_482b_8176_05f5dc1c00d9.slice - libcontainer container kubepods-besteffort-podbde459c4_e5ba_482b_8176_05f5dc1c00d9.slice.
May 27 03:32:55.423515 kubelet[2671]: I0527 03:32:55.423482 2671 scope.go:117] "RemoveContainer" containerID="cf7fd43398f859546340f103174980c08cbd1b25abe22f6794abb02a3ca42e22"
May 27 03:32:55.431467 systemd[1]: Removed slice kubepods-burstable-pod7c0dd807_9699_488d_ac58_f1562c7b3dd2.slice - libcontainer container kubepods-burstable-pod7c0dd807_9699_488d_ac58_f1562c7b3dd2.slice.
May 27 03:32:55.431689 systemd[1]: kubepods-burstable-pod7c0dd807_9699_488d_ac58_f1562c7b3dd2.slice: Consumed 6.671s CPU time, 125.5M memory peak, 144K read from disk, 13.3M written to disk.
May 27 03:32:55.440201 containerd[1552]: time="2025-05-27T03:32:55.439691783Z" level=info msg="RemoveContainer for \"cf7fd43398f859546340f103174980c08cbd1b25abe22f6794abb02a3ca42e22\""
May 27 03:32:55.443555 containerd[1552]: time="2025-05-27T03:32:55.443427642Z" level=info msg="RemoveContainer for \"cf7fd43398f859546340f103174980c08cbd1b25abe22f6794abb02a3ca42e22\" returns successfully"
May 27 03:32:55.443638 kubelet[2671]: I0527 03:32:55.443607 2671 scope.go:117] "RemoveContainer" containerID="eccda6ff93c3f3bd9baace361531e8c77ea92ada06ee7522fbfd5bb5f931d932"
May 27 03:32:55.445865 containerd[1552]: time="2025-05-27T03:32:55.445824430Z" level=info msg="RemoveContainer for \"eccda6ff93c3f3bd9baace361531e8c77ea92ada06ee7522fbfd5bb5f931d932\""
May 27 03:32:55.450987 containerd[1552]: time="2025-05-27T03:32:55.450941698Z" level=info msg="RemoveContainer for \"eccda6ff93c3f3bd9baace361531e8c77ea92ada06ee7522fbfd5bb5f931d932\" returns successfully"
May 27 03:32:55.451969 kubelet[2671]: I0527 03:32:55.451154 2671 scope.go:117] "RemoveContainer" containerID="df77542d9b5f680f684ed68afa97800d49edaa0f23f9039c14a2598dc00edd7a"
May 27 03:32:55.457699 containerd[1552]: time="2025-05-27T03:32:55.457670349Z" level=info msg="RemoveContainer for \"df77542d9b5f680f684ed68afa97800d49edaa0f23f9039c14a2598dc00edd7a\""
May 27 03:32:55.462224 containerd[1552]: time="2025-05-27T03:32:55.462185633Z" level=info msg="RemoveContainer for \"df77542d9b5f680f684ed68afa97800d49edaa0f23f9039c14a2598dc00edd7a\" returns successfully"
May 27 03:32:55.462586 kubelet[2671]: I0527 03:32:55.462382 2671 scope.go:117] "RemoveContainer" containerID="b52265c1e36b3525fb9ed11f474036adff79465cb217b5f5304543193f375492"
May 27 03:32:55.466013 containerd[1552]: time="2025-05-27T03:32:55.465887261Z" level=info msg="RemoveContainer for \"b52265c1e36b3525fb9ed11f474036adff79465cb217b5f5304543193f375492\""
May 27 03:32:55.470591 containerd[1552]: time="2025-05-27T03:32:55.470528096Z" level=info msg="RemoveContainer for \"b52265c1e36b3525fb9ed11f474036adff79465cb217b5f5304543193f375492\" returns successfully"
May 27 03:32:55.470767 kubelet[2671]: I0527 03:32:55.470746 2671 scope.go:117] "RemoveContainer" containerID="52aefc3b11349c15de6ca37bf52929d31bca81fe1d658326ac29ff38885d0c13"
May 27 03:32:55.471910 containerd[1552]: time="2025-05-27T03:32:55.471874546Z" level=info msg="RemoveContainer for \"52aefc3b11349c15de6ca37bf52929d31bca81fe1d658326ac29ff38885d0c13\""
May 27 03:32:55.474317 containerd[1552]: time="2025-05-27T03:32:55.474284935Z" level=info msg="RemoveContainer for \"52aefc3b11349c15de6ca37bf52929d31bca81fe1d658326ac29ff38885d0c13\" returns successfully"
May 27 03:32:55.474474 kubelet[2671]: I0527 03:32:55.474430 2671 scope.go:117] "RemoveContainer" containerID="cf7fd43398f859546340f103174980c08cbd1b25abe22f6794abb02a3ca42e22"
May 27 03:32:55.474691 containerd[1552]: time="2025-05-27T03:32:55.474630997Z" level=error msg="ContainerStatus for \"cf7fd43398f859546340f103174980c08cbd1b25abe22f6794abb02a3ca42e22\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cf7fd43398f859546340f103174980c08cbd1b25abe22f6794abb02a3ca42e22\": not found"
May 27 03:32:55.474850 kubelet[2671]: E0527 03:32:55.474800 2671 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cf7fd43398f859546340f103174980c08cbd1b25abe22f6794abb02a3ca42e22\": not found" containerID="cf7fd43398f859546340f103174980c08cbd1b25abe22f6794abb02a3ca42e22"
May 27 03:32:55.474850 kubelet[2671]: I0527 03:32:55.474825 2671 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"cf7fd43398f859546340f103174980c08cbd1b25abe22f6794abb02a3ca42e22"} err="failed to get container status \"cf7fd43398f859546340f103174980c08cbd1b25abe22f6794abb02a3ca42e22\": rpc error: code = NotFound desc = an error occurred when try to find container \"cf7fd43398f859546340f103174980c08cbd1b25abe22f6794abb02a3ca42e22\": not found"
May 27 03:32:55.474850 kubelet[2671]: I0527 03:32:55.474843 2671 scope.go:117] "RemoveContainer" containerID="eccda6ff93c3f3bd9baace361531e8c77ea92ada06ee7522fbfd5bb5f931d932"
May 27 03:32:55.474973 containerd[1552]: time="2025-05-27T03:32:55.474944950Z" level=error msg="ContainerStatus for \"eccda6ff93c3f3bd9baace361531e8c77ea92ada06ee7522fbfd5bb5f931d932\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"eccda6ff93c3f3bd9baace361531e8c77ea92ada06ee7522fbfd5bb5f931d932\": not found"
May 27 03:32:55.475064 kubelet[2671]: E0527 03:32:55.475044 2671 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"eccda6ff93c3f3bd9baace361531e8c77ea92ada06ee7522fbfd5bb5f931d932\": not found" containerID="eccda6ff93c3f3bd9baace361531e8c77ea92ada06ee7522fbfd5bb5f931d932"
May 27 03:32:55.475097 kubelet[2671]: I0527 03:32:55.475065 2671 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"eccda6ff93c3f3bd9baace361531e8c77ea92ada06ee7522fbfd5bb5f931d932"} err="failed to get container status \"eccda6ff93c3f3bd9baace361531e8c77ea92ada06ee7522fbfd5bb5f931d932\": rpc error: code = NotFound desc = an error occurred when try to find container \"eccda6ff93c3f3bd9baace361531e8c77ea92ada06ee7522fbfd5bb5f931d932\": not found"
May 27 03:32:55.475097 kubelet[2671]: I0527 03:32:55.475078 2671 scope.go:117] "RemoveContainer" containerID="df77542d9b5f680f684ed68afa97800d49edaa0f23f9039c14a2598dc00edd7a"
May 27 03:32:55.475460 containerd[1552]: time="2025-05-27T03:32:55.475214992Z" level=error msg="ContainerStatus for \"df77542d9b5f680f684ed68afa97800d49edaa0f23f9039c14a2598dc00edd7a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"df77542d9b5f680f684ed68afa97800d49edaa0f23f9039c14a2598dc00edd7a\": not found"
May 27 03:32:55.475664 kubelet[2671]: E0527 03:32:55.475606 2671 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"df77542d9b5f680f684ed68afa97800d49edaa0f23f9039c14a2598dc00edd7a\": not found" containerID="df77542d9b5f680f684ed68afa97800d49edaa0f23f9039c14a2598dc00edd7a"
May 27 03:32:55.475752 kubelet[2671]: I0527 03:32:55.475729 2671 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"df77542d9b5f680f684ed68afa97800d49edaa0f23f9039c14a2598dc00edd7a"} err="failed to get container status \"df77542d9b5f680f684ed68afa97800d49edaa0f23f9039c14a2598dc00edd7a\": rpc error: code = NotFound desc = an error occurred when try to find container \"df77542d9b5f680f684ed68afa97800d49edaa0f23f9039c14a2598dc00edd7a\": not found"
May 27 03:32:55.475880 kubelet[2671]: I0527 03:32:55.475811 2671 scope.go:117] "RemoveContainer" containerID="b52265c1e36b3525fb9ed11f474036adff79465cb217b5f5304543193f375492"
May 27 03:32:55.476174 containerd[1552]: time="2025-05-27T03:32:55.476132039Z" level=error msg="ContainerStatus for \"b52265c1e36b3525fb9ed11f474036adff79465cb217b5f5304543193f375492\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b52265c1e36b3525fb9ed11f474036adff79465cb217b5f5304543193f375492\": not found"
May 27 03:32:55.476373 kubelet[2671]: E0527 03:32:55.476348 2671 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b52265c1e36b3525fb9ed11f474036adff79465cb217b5f5304543193f375492\": not found" containerID="b52265c1e36b3525fb9ed11f474036adff79465cb217b5f5304543193f375492"
May 27 03:32:55.476471 kubelet[2671]: I0527 03:32:55.476451 2671 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b52265c1e36b3525fb9ed11f474036adff79465cb217b5f5304543193f375492"} err="failed to get container status \"b52265c1e36b3525fb9ed11f474036adff79465cb217b5f5304543193f375492\": rpc error: code = NotFound desc = an error occurred when try to find container \"b52265c1e36b3525fb9ed11f474036adff79465cb217b5f5304543193f375492\": not found"
May 27 03:32:55.476597 kubelet[2671]: I0527 03:32:55.476515 2671 scope.go:117] "RemoveContainer" containerID="52aefc3b11349c15de6ca37bf52929d31bca81fe1d658326ac29ff38885d0c13"
May 27 03:32:55.476800 containerd[1552]: time="2025-05-27T03:32:55.476763873Z" level=error msg="ContainerStatus for \"52aefc3b11349c15de6ca37bf52929d31bca81fe1d658326ac29ff38885d0c13\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"52aefc3b11349c15de6ca37bf52929d31bca81fe1d658326ac29ff38885d0c13\": not found"
May 27 03:32:55.477003 kubelet[2671]: E0527 03:32:55.476978 2671 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"52aefc3b11349c15de6ca37bf52929d31bca81fe1d658326ac29ff38885d0c13\": not found" containerID="52aefc3b11349c15de6ca37bf52929d31bca81fe1d658326ac29ff38885d0c13"
May 27 03:32:55.477095 kubelet[2671]: I0527 03:32:55.477078 2671 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"52aefc3b11349c15de6ca37bf52929d31bca81fe1d658326ac29ff38885d0c13"} err="failed to get container status \"52aefc3b11349c15de6ca37bf52929d31bca81fe1d658326ac29ff38885d0c13\": rpc error: code = NotFound desc = an error occurred when try to find container \"52aefc3b11349c15de6ca37bf52929d31bca81fe1d658326ac29ff38885d0c13\": not found"
May 27 03:32:55.500963 systemd[1]: var-lib-kubelet-pods-bde459c4\x2de5ba\x2d482b\x2d8176\x2d05f5dc1c00d9-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d829w4.mount: Deactivated successfully.
May 27 03:32:55.501083 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-26b718a377804a299e363fd037e7874422226c5ab52dcb8487eadc88d41e333d-shm.mount: Deactivated successfully.
May 27 03:32:55.501154 systemd[1]: var-lib-kubelet-pods-7c0dd807\x2d9699\x2d488d\x2dac58\x2df1562c7b3dd2-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dlfvvs.mount: Deactivated successfully.
May 27 03:32:55.501233 systemd[1]: var-lib-kubelet-pods-7c0dd807\x2d9699\x2d488d\x2dac58\x2df1562c7b3dd2-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
May 27 03:32:55.501301 systemd[1]: var-lib-kubelet-pods-7c0dd807\x2d9699\x2d488d\x2dac58\x2df1562c7b3dd2-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
May 27 03:32:55.994953 kubelet[2671]: I0527 03:32:55.994902 2671 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c0dd807-9699-488d-ac58-f1562c7b3dd2" path="/var/lib/kubelet/pods/7c0dd807-9699-488d-ac58-f1562c7b3dd2/volumes"
May 27 03:32:55.995839 kubelet[2671]: I0527 03:32:55.995812 2671 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bde459c4-e5ba-482b-8176-05f5dc1c00d9" path="/var/lib/kubelet/pods/bde459c4-e5ba-482b-8176-05f5dc1c00d9/volumes"
May 27 03:32:56.442804 sshd[4195]: Connection closed by 139.178.68.195 port 40860
May 27 03:32:56.443722 sshd-session[4193]: pam_unix(sshd:session): session closed for user core
May 27 03:32:56.449800 systemd[1]: sshd@20-172.234.197.247:22-139.178.68.195:40860.service: Deactivated successfully.
May 27 03:32:56.452485 systemd[1]: session-21.scope: Deactivated successfully.
May 27 03:32:56.453859 systemd-logind[1527]: Session 21 logged out. Waiting for processes to exit.
May 27 03:32:56.455796 systemd-logind[1527]: Removed session 21.
May 27 03:32:56.504612 systemd[1]: Started sshd@21-172.234.197.247:22-139.178.68.195:35286.service - OpenSSH per-connection server daemon (139.178.68.195:35286).
May 27 03:32:56.862600 sshd[4344]: Accepted publickey for core from 139.178.68.195 port 35286 ssh2: RSA SHA256:jxXZ1xczrG8cnpkwXQgX0Kgw4UJGn7xFWFd7bDU9ewY
May 27 03:32:56.863703 sshd-session[4344]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:32:56.872151 systemd-logind[1527]: New session 22 of user core.
May 27 03:32:56.879764 systemd[1]: Started session-22.scope - Session 22 of User core.
May 27 03:32:57.516267 systemd[1]: Created slice kubepods-burstable-pod0ebe774e_5dc1_4518_9998_8ac9250b4b36.slice - libcontainer container kubepods-burstable-pod0ebe774e_5dc1_4518_9998_8ac9250b4b36.slice.
May 27 03:32:57.520049 sshd[4346]: Connection closed by 139.178.68.195 port 35286
May 27 03:32:57.523438 sshd-session[4344]: pam_unix(sshd:session): session closed for user core
May 27 03:32:57.530465 systemd-logind[1527]: Session 22 logged out. Waiting for processes to exit.
May 27 03:32:57.533309 systemd[1]: sshd@21-172.234.197.247:22-139.178.68.195:35286.service: Deactivated successfully.
May 27 03:32:57.536376 systemd[1]: session-22.scope: Deactivated successfully.
May 27 03:32:57.540317 systemd-logind[1527]: Removed session 22.
May 27 03:32:57.580971 systemd[1]: Started sshd@22-172.234.197.247:22-139.178.68.195:35288.service - OpenSSH per-connection server daemon (139.178.68.195:35288).
May 27 03:32:57.653350 kubelet[2671]: I0527 03:32:57.653240 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0ebe774e-5dc1-4518-9998-8ac9250b4b36-clustermesh-secrets\") pod \"cilium-vldkd\" (UID: \"0ebe774e-5dc1-4518-9998-8ac9250b4b36\") " pod="kube-system/cilium-vldkd"
May 27 03:32:57.653350 kubelet[2671]: I0527 03:32:57.653293 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7f4q7\" (UniqueName: \"kubernetes.io/projected/0ebe774e-5dc1-4518-9998-8ac9250b4b36-kube-api-access-7f4q7\") pod \"cilium-vldkd\" (UID: \"0ebe774e-5dc1-4518-9998-8ac9250b4b36\") " pod="kube-system/cilium-vldkd"
May 27 03:32:57.653350 kubelet[2671]: I0527 03:32:57.653319 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0ebe774e-5dc1-4518-9998-8ac9250b4b36-cilium-run\") pod \"cilium-vldkd\" (UID: \"0ebe774e-5dc1-4518-9998-8ac9250b4b36\") " pod="kube-system/cilium-vldkd"
May 27 03:32:57.653350 kubelet[2671]: I0527 03:32:57.653337 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0ebe774e-5dc1-4518-9998-8ac9250b4b36-cni-path\") pod \"cilium-vldkd\" (UID: \"0ebe774e-5dc1-4518-9998-8ac9250b4b36\") " pod="kube-system/cilium-vldkd"
May 27 03:32:57.653350 kubelet[2671]: I0527 03:32:57.653356 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0ebe774e-5dc1-4518-9998-8ac9250b4b36-xtables-lock\") pod \"cilium-vldkd\" (UID: \"0ebe774e-5dc1-4518-9998-8ac9250b4b36\") " pod="kube-system/cilium-vldkd"
May 27 03:32:57.654148 kubelet[2671]: I0527 03:32:57.653377 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0ebe774e-5dc1-4518-9998-8ac9250b4b36-cilium-config-path\") pod \"cilium-vldkd\" (UID: \"0ebe774e-5dc1-4518-9998-8ac9250b4b36\") " pod="kube-system/cilium-vldkd"
May 27 03:32:57.654148 kubelet[2671]: I0527 03:32:57.653409 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0ebe774e-5dc1-4518-9998-8ac9250b4b36-host-proc-sys-net\") pod \"cilium-vldkd\" (UID: \"0ebe774e-5dc1-4518-9998-8ac9250b4b36\") " pod="kube-system/cilium-vldkd"
May 27 03:32:57.654148 kubelet[2671]: I0527 03:32:57.653427 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0ebe774e-5dc1-4518-9998-8ac9250b4b36-etc-cni-netd\") pod \"cilium-vldkd\" (UID: \"0ebe774e-5dc1-4518-9998-8ac9250b4b36\") " pod="kube-system/cilium-vldkd"
May 27 03:32:57.654148 kubelet[2671]: I0527 03:32:57.653498 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0ebe774e-5dc1-4518-9998-8ac9250b4b36-bpf-maps\") pod \"cilium-vldkd\" (UID: \"0ebe774e-5dc1-4518-9998-8ac9250b4b36\") " pod="kube-system/cilium-vldkd"
May 27 03:32:57.654148 kubelet[2671]: I0527 03:32:57.653540 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0ebe774e-5dc1-4518-9998-8ac9250b4b36-lib-modules\") pod \"cilium-vldkd\" (UID: \"0ebe774e-5dc1-4518-9998-8ac9250b4b36\") " pod="kube-system/cilium-vldkd"
May 27 03:32:57.654148 kubelet[2671]: I0527 03:32:57.653620 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0ebe774e-5dc1-4518-9998-8ac9250b4b36-host-proc-sys-kernel\") pod \"cilium-vldkd\" (UID: \"0ebe774e-5dc1-4518-9998-8ac9250b4b36\") " pod="kube-system/cilium-vldkd"
May 27 03:32:57.654357 kubelet[2671]: I0527 03:32:57.653644 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0ebe774e-5dc1-4518-9998-8ac9250b4b36-cilium-cgroup\") pod \"cilium-vldkd\" (UID: \"0ebe774e-5dc1-4518-9998-8ac9250b4b36\") " pod="kube-system/cilium-vldkd"
May 27 03:32:57.654357 kubelet[2671]: I0527 03:32:57.653675 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/0ebe774e-5dc1-4518-9998-8ac9250b4b36-cilium-ipsec-secrets\") pod \"cilium-vldkd\" (UID: \"0ebe774e-5dc1-4518-9998-8ac9250b4b36\") " pod="kube-system/cilium-vldkd"
May 27 03:32:57.654357 kubelet[2671]: I0527 03:32:57.653705 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0ebe774e-5dc1-4518-9998-8ac9250b4b36-hostproc\") pod \"cilium-vldkd\" (UID: \"0ebe774e-5dc1-4518-9998-8ac9250b4b36\") " pod="kube-system/cilium-vldkd"
May 27 03:32:57.654357 kubelet[2671]: I0527 03:32:57.653734 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0ebe774e-5dc1-4518-9998-8ac9250b4b36-hubble-tls\") pod \"cilium-vldkd\" (UID: \"0ebe774e-5dc1-4518-9998-8ac9250b4b36\") " pod="kube-system/cilium-vldkd"
May 27 03:32:57.820766 kubelet[2671]: E0527 03:32:57.819840 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
May 27 03:32:57.821837 containerd[1552]: time="2025-05-27T03:32:57.821752717Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vldkd,Uid:0ebe774e-5dc1-4518-9998-8ac9250b4b36,Namespace:kube-system,Attempt:0,}"
May 27 03:32:57.845840 containerd[1552]: time="2025-05-27T03:32:57.845274462Z" level=info msg="connecting to shim 57bc2bccc45c08794f891a797bf9a833f00ab22aefe4c02ce9a2ff3fed854fb6" address="unix:///run/containerd/s/cf211da92fbdc06cc8963e48d5cbd8f5b10cf81e8c042358b90bb16b2ddef970" namespace=k8s.io protocol=ttrpc version=3
May 27 03:32:57.878727 systemd[1]: Started cri-containerd-57bc2bccc45c08794f891a797bf9a833f00ab22aefe4c02ce9a2ff3fed854fb6.scope - libcontainer container 57bc2bccc45c08794f891a797bf9a833f00ab22aefe4c02ce9a2ff3fed854fb6.
May 27 03:32:57.914293 containerd[1552]: time="2025-05-27T03:32:57.914259676Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vldkd,Uid:0ebe774e-5dc1-4518-9998-8ac9250b4b36,Namespace:kube-system,Attempt:0,} returns sandbox id \"57bc2bccc45c08794f891a797bf9a833f00ab22aefe4c02ce9a2ff3fed854fb6\"" May 27 03:32:57.915850 kubelet[2671]: E0527 03:32:57.915826 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 27 03:32:57.922369 containerd[1552]: time="2025-05-27T03:32:57.922314726Z" level=info msg="CreateContainer within sandbox \"57bc2bccc45c08794f891a797bf9a833f00ab22aefe4c02ce9a2ff3fed854fb6\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 27 03:32:57.924832 sshd[4356]: Accepted publickey for core from 139.178.68.195 port 35288 ssh2: RSA SHA256:jxXZ1xczrG8cnpkwXQgX0Kgw4UJGn7xFWFd7bDU9ewY May 27 03:32:57.927338 sshd-session[4356]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 03:32:57.935064 systemd-logind[1527]: New session 23 of user core. May 27 03:32:57.941091 systemd[1]: Started session-23.scope - Session 23 of User core. May 27 03:32:57.945961 containerd[1552]: time="2025-05-27T03:32:57.945852011Z" level=info msg="Container fc38d3ee42688b3af8a7d6d399dedbb48d6df1eca85eb02a6a2270568b9dc41f: CDI devices from CRI Config.CDIDevices: []" May 27 03:32:57.951799 containerd[1552]: time="2025-05-27T03:32:57.951776885Z" level=info msg="CreateContainer within sandbox \"57bc2bccc45c08794f891a797bf9a833f00ab22aefe4c02ce9a2ff3fed854fb6\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"fc38d3ee42688b3af8a7d6d399dedbb48d6df1eca85eb02a6a2270568b9dc41f\"" May 27 03:32:57.953067 containerd[1552]: time="2025-05-27T03:32:57.953019714Z" level=info msg="StartContainer for \"fc38d3ee42688b3af8a7d6d399dedbb48d6df1eca85eb02a6a2270568b9dc41f\"" May 27 03:32:57.954206 containerd[1552]: time="2025-05-27T03:32:57.954122412Z" level=info msg="connecting to shim fc38d3ee42688b3af8a7d6d399dedbb48d6df1eca85eb02a6a2270568b9dc41f" address="unix:///run/containerd/s/cf211da92fbdc06cc8963e48d5cbd8f5b10cf81e8c042358b90bb16b2ddef970" protocol=ttrpc version=3 May 27 03:32:57.975739 systemd[1]: Started cri-containerd-fc38d3ee42688b3af8a7d6d399dedbb48d6df1eca85eb02a6a2270568b9dc41f.scope - libcontainer container fc38d3ee42688b3af8a7d6d399dedbb48d6df1eca85eb02a6a2270568b9dc41f. May 27 03:32:58.014767 containerd[1552]: time="2025-05-27T03:32:58.014699562Z" level=info msg="StartContainer for \"fc38d3ee42688b3af8a7d6d399dedbb48d6df1eca85eb02a6a2270568b9dc41f\" returns successfully" May 27 03:32:58.031185 systemd[1]: cri-containerd-fc38d3ee42688b3af8a7d6d399dedbb48d6df1eca85eb02a6a2270568b9dc41f.scope: Deactivated successfully. 
May 27 03:32:58.033711 containerd[1552]: time="2025-05-27T03:32:58.033392200Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fc38d3ee42688b3af8a7d6d399dedbb48d6df1eca85eb02a6a2270568b9dc41f\" id:\"fc38d3ee42688b3af8a7d6d399dedbb48d6df1eca85eb02a6a2270568b9dc41f\" pid:4422 exited_at:{seconds:1748316778 nanos:32811856}" May 27 03:32:58.034101 containerd[1552]: time="2025-05-27T03:32:58.033890834Z" level=info msg="received exit event container_id:\"fc38d3ee42688b3af8a7d6d399dedbb48d6df1eca85eb02a6a2270568b9dc41f\" id:\"fc38d3ee42688b3af8a7d6d399dedbb48d6df1eca85eb02a6a2270568b9dc41f\" pid:4422 exited_at:{seconds:1748316778 nanos:32811856}" May 27 03:32:58.110137 kubelet[2671]: E0527 03:32:58.110046 2671 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 27 03:32:58.164967 sshd[4408]: Connection closed by 139.178.68.195 port 35288 May 27 03:32:58.165770 sshd-session[4356]: pam_unix(sshd:session): session closed for user core May 27 03:32:58.171896 systemd[1]: sshd@22-172.234.197.247:22-139.178.68.195:35288.service: Deactivated successfully. May 27 03:32:58.174935 systemd[1]: session-23.scope: Deactivated successfully. May 27 03:32:58.176063 systemd-logind[1527]: Session 23 logged out. Waiting for processes to exit. May 27 03:32:58.177879 systemd-logind[1527]: Removed session 23. May 27 03:32:58.229471 systemd[1]: Started sshd@23-172.234.197.247:22-139.178.68.195:35294.service - OpenSSH per-connection server daemon (139.178.68.195:35294). May 27 03:32:58.435951 kubelet[2671]: E0527 03:32:58.435724 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" May 27 03:32:58.441440 containerd[1552]: time="2025-05-27T03:32:58.441267321Z" level=info msg="CreateContainer within sandbox \"57bc2bccc45c08794f891a797bf9a833f00ab22aefe4c02ce9a2ff3fed854fb6\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 27 03:32:58.447404 containerd[1552]: time="2025-05-27T03:32:58.447336016Z" level=info msg="Container 564d6291080aefff852a258c9ca983447ba78b8831577b80f97ed894a00463bb: CDI devices from CRI Config.CDIDevices: []" May 27 03:32:58.453849 containerd[1552]: time="2025-05-27T03:32:58.453741233Z" level=info msg="CreateContainer within sandbox \"57bc2bccc45c08794f891a797bf9a833f00ab22aefe4c02ce9a2ff3fed854fb6\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"564d6291080aefff852a258c9ca983447ba78b8831577b80f97ed894a00463bb\"" May 27 03:32:58.456602 containerd[1552]: time="2025-05-27T03:32:58.455790998Z" level=info msg="StartContainer for \"564d6291080aefff852a258c9ca983447ba78b8831577b80f97ed894a00463bb\"" May 27 03:32:58.457740 containerd[1552]: time="2025-05-27T03:32:58.457706972Z" level=info msg="connecting to shim 564d6291080aefff852a258c9ca983447ba78b8831577b80f97ed894a00463bb" address="unix:///run/containerd/s/cf211da92fbdc06cc8963e48d5cbd8f5b10cf81e8c042358b90bb16b2ddef970" protocol=ttrpc version=3 May 27 03:32:58.494789 systemd[1]: Started cri-containerd-564d6291080aefff852a258c9ca983447ba78b8831577b80f97ed894a00463bb.scope - libcontainer container 564d6291080aefff852a258c9ca983447ba78b8831577b80f97ed894a00463bb. 
May 27 03:32:58.565324 sshd[4461]: Accepted publickey for core from 139.178.68.195 port 35294 ssh2: RSA SHA256:jxXZ1xczrG8cnpkwXQgX0Kgw4UJGn7xFWFd7bDU9ewY
May 27 03:32:58.568229 sshd-session[4461]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 03:32:58.571618 containerd[1552]: time="2025-05-27T03:32:58.571487813Z" level=info msg="StartContainer for \"564d6291080aefff852a258c9ca983447ba78b8831577b80f97ed894a00463bb\" returns successfully"
May 27 03:32:58.577785 systemd-logind[1527]: New session 24 of user core.
May 27 03:32:58.582808 systemd[1]: Started session-24.scope - Session 24 of User core.
May 27 03:32:58.586426 systemd[1]: cri-containerd-564d6291080aefff852a258c9ca983447ba78b8831577b80f97ed894a00463bb.scope: Deactivated successfully.
May 27 03:32:58.588625 containerd[1552]: time="2025-05-27T03:32:58.588596199Z" level=info msg="TaskExit event in podsandbox handler container_id:\"564d6291080aefff852a258c9ca983447ba78b8831577b80f97ed894a00463bb\" id:\"564d6291080aefff852a258c9ca983447ba78b8831577b80f97ed894a00463bb\" pid:4476 exited_at:{seconds:1748316778 nanos:587830634}"
May 27 03:32:58.588926 containerd[1552]: time="2025-05-27T03:32:58.588220317Z" level=info msg="received exit event container_id:\"564d6291080aefff852a258c9ca983447ba78b8831577b80f97ed894a00463bb\" id:\"564d6291080aefff852a258c9ca983447ba78b8831577b80f97ed894a00463bb\" pid:4476 exited_at:{seconds:1748316778 nanos:587830634}"
May 27 03:32:58.776166 kubelet[2671]: I0527 03:32:58.776131 2671 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
May 27 03:32:58.776166 kubelet[2671]: I0527 03:32:58.776179 2671 container_gc.go:86] "Attempting to delete unused containers"
May 27 03:32:58.779308 containerd[1552]: time="2025-05-27T03:32:58.778575612Z" level=info msg="StopPodSandbox for \"26b718a377804a299e363fd037e7874422226c5ab52dcb8487eadc88d41e333d\""
May 27 03:32:58.779308 containerd[1552]: time="2025-05-27T03:32:58.778722023Z" level=info msg="TearDown network for sandbox \"26b718a377804a299e363fd037e7874422226c5ab52dcb8487eadc88d41e333d\" successfully"
May 27 03:32:58.779308 containerd[1552]: time="2025-05-27T03:32:58.778738823Z" level=info msg="StopPodSandbox for \"26b718a377804a299e363fd037e7874422226c5ab52dcb8487eadc88d41e333d\" returns successfully"
May 27 03:32:58.780518 containerd[1552]: time="2025-05-27T03:32:58.780483086Z" level=info msg="RemovePodSandbox for \"26b718a377804a299e363fd037e7874422226c5ab52dcb8487eadc88d41e333d\""
May 27 03:32:58.780518 containerd[1552]: time="2025-05-27T03:32:58.780513277Z" level=info msg="Forcibly stopping sandbox \"26b718a377804a299e363fd037e7874422226c5ab52dcb8487eadc88d41e333d\""
May 27 03:32:58.780624 containerd[1552]: time="2025-05-27T03:32:58.780615267Z" level=info msg="TearDown network for sandbox \"26b718a377804a299e363fd037e7874422226c5ab52dcb8487eadc88d41e333d\" successfully"
May 27 03:32:58.782310 containerd[1552]: time="2025-05-27T03:32:58.782280000Z" level=info msg="Ensure that sandbox 26b718a377804a299e363fd037e7874422226c5ab52dcb8487eadc88d41e333d in task-service has been cleanup successfully"
May 27 03:32:58.785065 containerd[1552]: time="2025-05-27T03:32:58.785025020Z" level=info msg="RemovePodSandbox \"26b718a377804a299e363fd037e7874422226c5ab52dcb8487eadc88d41e333d\" returns successfully"
May 27 03:32:58.785889 containerd[1552]: time="2025-05-27T03:32:58.785810196Z" level=info msg="StopPodSandbox for \"bc78d8a79f707bc1360a6cbb94617506cd0dfee2574477024a26955315d593a0\""
May 27 03:32:58.786041 containerd[1552]: time="2025-05-27T03:32:58.785993057Z" level=info msg="TearDown network for sandbox \"bc78d8a79f707bc1360a6cbb94617506cd0dfee2574477024a26955315d593a0\" successfully"
May 27 03:32:58.786041 containerd[1552]: time="2025-05-27T03:32:58.786005277Z" level=info msg="StopPodSandbox for \"bc78d8a79f707bc1360a6cbb94617506cd0dfee2574477024a26955315d593a0\" returns successfully"
May 27 03:32:58.786462 containerd[1552]: time="2025-05-27T03:32:58.786435990Z" level=info msg="RemovePodSandbox for \"bc78d8a79f707bc1360a6cbb94617506cd0dfee2574477024a26955315d593a0\""
May 27 03:32:58.786462 containerd[1552]: time="2025-05-27T03:32:58.786459721Z" level=info msg="Forcibly stopping sandbox \"bc78d8a79f707bc1360a6cbb94617506cd0dfee2574477024a26955315d593a0\""
May 27 03:32:58.786627 containerd[1552]: time="2025-05-27T03:32:58.786603862Z" level=info msg="TearDown network for sandbox \"bc78d8a79f707bc1360a6cbb94617506cd0dfee2574477024a26955315d593a0\" successfully"
May 27 03:32:58.791912 containerd[1552]: time="2025-05-27T03:32:58.788719887Z" level=info msg="Ensure that sandbox bc78d8a79f707bc1360a6cbb94617506cd0dfee2574477024a26955315d593a0 in task-service has been cleanup successfully"
May 27 03:32:58.793138 containerd[1552]: time="2025-05-27T03:32:58.793087119Z" level=info msg="RemovePodSandbox \"bc78d8a79f707bc1360a6cbb94617506cd0dfee2574477024a26955315d593a0\" returns successfully"
May 27 03:32:58.794596 kubelet[2671]: I0527 03:32:58.794529 2671 image_gc_manager.go:447] "Attempting to delete unused images"
May 27 03:32:58.796880 kubelet[2671]: I0527 03:32:58.796850 2671 image_gc_manager.go:514] "Removing image to free bytes" imageID="sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c" size=18897442 runtimeHandler=""
May 27 03:32:58.797051 containerd[1552]: time="2025-05-27T03:32:58.796984908Z" level=info msg="RemoveImage \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
May 27 03:32:58.797919 containerd[1552]: time="2025-05-27T03:32:58.797855785Z" level=info msg="ImageDelete event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
May 27 03:32:58.798407 containerd[1552]: time="2025-05-27T03:32:58.798380688Z" level=info msg="RemoveImage \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" returns successfully"
May 27 03:32:58.798479 containerd[1552]: time="2025-05-27T03:32:58.798451419Z" level=info msg="ImageDelete event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
May 27 03:32:58.818130 kubelet[2671]: I0527 03:32:58.818095 2671 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
May 27 03:32:58.818222 kubelet[2671]: I0527 03:32:58.818190 2671 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-vldkd","kube-system/coredns-674b8bbfcf-4zfgh","kube-system/coredns-674b8bbfcf-ps8nf","kube-system/kube-proxy-qn9jr","kube-system/kube-controller-manager-172-234-197-247","kube-system/kube-apiserver-172-234-197-247","kube-system/kube-scheduler-172-234-197-247"]
May 27 03:32:58.818264 kubelet[2671]: E0527 03:32:58.818246 2671 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-vldkd"
May 27 03:32:58.818264 kubelet[2671]: E0527 03:32:58.818261 2671 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-4zfgh"
May 27 03:32:58.818327 kubelet[2671]: E0527 03:32:58.818270 2671 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-ps8nf"
May 27 03:32:58.818327 kubelet[2671]: E0527 03:32:58.818278 2671 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-qn9jr"
May 27 03:32:58.818327 kubelet[2671]: E0527 03:32:58.818285 2671 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-197-247"
May 27 03:32:58.818327 kubelet[2671]: E0527 03:32:58.818323 2671 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-197-247"
May 27 03:32:58.818423 kubelet[2671]: E0527 03:32:58.818332 2671 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-197-247"
May 27 03:32:58.818423 kubelet[2671]: I0527 03:32:58.818342 2671 eviction_manager.go:439] "Eviction manager: unable to evict any pods from the node"
May 27 03:32:59.444035 kubelet[2671]: E0527 03:32:59.443905 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
May 27 03:32:59.448055 containerd[1552]: time="2025-05-27T03:32:59.447990112Z" level=info msg="CreateContainer within sandbox \"57bc2bccc45c08794f891a797bf9a833f00ab22aefe4c02ce9a2ff3fed854fb6\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 27 03:32:59.460621 containerd[1552]: time="2025-05-27T03:32:59.460593394Z" level=info msg="Container 21fe4a7670420c135ab6a2f87c5971f659a7f1e13d32b2efdf2c475db5a80195: CDI devices from CRI Config.CDIDevices: []"
May 27 03:32:59.467107 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3817528858.mount: Deactivated successfully.
May 27 03:32:59.472253 containerd[1552]: time="2025-05-27T03:32:59.472212869Z" level=info msg="CreateContainer within sandbox \"57bc2bccc45c08794f891a797bf9a833f00ab22aefe4c02ce9a2ff3fed854fb6\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"21fe4a7670420c135ab6a2f87c5971f659a7f1e13d32b2efdf2c475db5a80195\""
May 27 03:32:59.473608 containerd[1552]: time="2025-05-27T03:32:59.472693233Z" level=info msg="StartContainer for \"21fe4a7670420c135ab6a2f87c5971f659a7f1e13d32b2efdf2c475db5a80195\""
May 27 03:32:59.474879 containerd[1552]: time="2025-05-27T03:32:59.474845988Z" level=info msg="connecting to shim 21fe4a7670420c135ab6a2f87c5971f659a7f1e13d32b2efdf2c475db5a80195" address="unix:///run/containerd/s/cf211da92fbdc06cc8963e48d5cbd8f5b10cf81e8c042358b90bb16b2ddef970" protocol=ttrpc version=3
May 27 03:32:59.503687 systemd[1]: Started cri-containerd-21fe4a7670420c135ab6a2f87c5971f659a7f1e13d32b2efdf2c475db5a80195.scope - libcontainer container 21fe4a7670420c135ab6a2f87c5971f659a7f1e13d32b2efdf2c475db5a80195.
May 27 03:32:59.552857 containerd[1552]: time="2025-05-27T03:32:59.552809060Z" level=info msg="StartContainer for \"21fe4a7670420c135ab6a2f87c5971f659a7f1e13d32b2efdf2c475db5a80195\" returns successfully"
May 27 03:32:59.556294 systemd[1]: cri-containerd-21fe4a7670420c135ab6a2f87c5971f659a7f1e13d32b2efdf2c475db5a80195.scope: Deactivated successfully.
May 27 03:32:59.558504 containerd[1552]: time="2025-05-27T03:32:59.558472072Z" level=info msg="received exit event container_id:\"21fe4a7670420c135ab6a2f87c5971f659a7f1e13d32b2efdf2c475db5a80195\" id:\"21fe4a7670420c135ab6a2f87c5971f659a7f1e13d32b2efdf2c475db5a80195\" pid:4527 exited_at:{seconds:1748316779 nanos:558240550}"
May 27 03:32:59.558949 containerd[1552]: time="2025-05-27T03:32:59.558908005Z" level=info msg="TaskExit event in podsandbox handler container_id:\"21fe4a7670420c135ab6a2f87c5971f659a7f1e13d32b2efdf2c475db5a80195\" id:\"21fe4a7670420c135ab6a2f87c5971f659a7f1e13d32b2efdf2c475db5a80195\" pid:4527 exited_at:{seconds:1748316779 nanos:558240550}"
May 27 03:32:59.592542 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-21fe4a7670420c135ab6a2f87c5971f659a7f1e13d32b2efdf2c475db5a80195-rootfs.mount: Deactivated successfully.
May 27 03:33:00.447667 kubelet[2671]: E0527 03:33:00.447608 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
May 27 03:33:00.450434 containerd[1552]: time="2025-05-27T03:33:00.450384466Z" level=info msg="CreateContainer within sandbox \"57bc2bccc45c08794f891a797bf9a833f00ab22aefe4c02ce9a2ff3fed854fb6\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 27 03:33:00.462691 containerd[1552]: time="2025-05-27T03:33:00.459010169Z" level=info msg="Container c663e6cffe13acc39d2a9edf0ae12b4aae75e8f721b8c37605132feab2558672: CDI devices from CRI Config.CDIDevices: []"
May 27 03:33:00.467531 containerd[1552]: time="2025-05-27T03:33:00.467499260Z" level=info msg="CreateContainer within sandbox \"57bc2bccc45c08794f891a797bf9a833f00ab22aefe4c02ce9a2ff3fed854fb6\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"c663e6cffe13acc39d2a9edf0ae12b4aae75e8f721b8c37605132feab2558672\""
May 27 03:33:00.468596 containerd[1552]: time="2025-05-27T03:33:00.468541188Z" level=info msg="StartContainer for \"c663e6cffe13acc39d2a9edf0ae12b4aae75e8f721b8c37605132feab2558672\""
May 27 03:33:00.470727 containerd[1552]: time="2025-05-27T03:33:00.470351151Z" level=info msg="connecting to shim c663e6cffe13acc39d2a9edf0ae12b4aae75e8f721b8c37605132feab2558672" address="unix:///run/containerd/s/cf211da92fbdc06cc8963e48d5cbd8f5b10cf81e8c042358b90bb16b2ddef970" protocol=ttrpc version=3
May 27 03:33:00.490700 systemd[1]: Started cri-containerd-c663e6cffe13acc39d2a9edf0ae12b4aae75e8f721b8c37605132feab2558672.scope - libcontainer container c663e6cffe13acc39d2a9edf0ae12b4aae75e8f721b8c37605132feab2558672.
May 27 03:33:00.516300 systemd[1]: cri-containerd-c663e6cffe13acc39d2a9edf0ae12b4aae75e8f721b8c37605132feab2558672.scope: Deactivated successfully.
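The recurring dns.go:153 error is the kubelet warning that the host's resolv.conf lists more nameservers than a resolver will honor: at most three are used and the rest are dropped, which is why the applied line shows exactly 172.232.0.20, 172.232.0.15 and 172.232.0.18. A small illustrative Go sketch of that trimming, not kubelet's actual dns.go; the fourth nameserver here is a hypothetical stand-in for whatever extra entry triggered the warning:

    // Why kubelet logs "Nameserver limits exceeded": only the first three
    // nameserver entries from resolv.conf are applied.
    package main

    import (
    	"bufio"
    	"fmt"
    	"strings"
    )

    const maxNameservers = 3 // the limit kubelet enforces, matching resolver conventions

    func effectiveNameservers(resolvConf string) []string {
    	var servers []string
    	sc := bufio.NewScanner(strings.NewReader(resolvConf))
    	for sc.Scan() {
    		fields := strings.Fields(sc.Text())
    		if len(fields) >= 2 && fields[0] == "nameserver" {
    			servers = append(servers, fields[1])
    		}
    	}
    	if len(servers) > maxNameservers {
    		servers = servers[:maxNameservers] // extras are omitted; kubelet warns
    	}
    	return servers
    }

    func main() {
    	// Four configured, three applied, as in the log line above.
    	conf := "nameserver 172.232.0.20\nnameserver 172.232.0.15\nnameserver 172.232.0.18\nnameserver 8.8.8.8\n"
    	fmt.Println(effectiveNameservers(conf))
    }

Pods that inherit the node's resolv.conf get the same trimmed list, so the warning is usually cosmetic unless the omitted server was the one that mattered.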
May 27 03:33:00.520473 containerd[1552]: time="2025-05-27T03:33:00.520425775Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c663e6cffe13acc39d2a9edf0ae12b4aae75e8f721b8c37605132feab2558672\" id:\"c663e6cffe13acc39d2a9edf0ae12b4aae75e8f721b8c37605132feab2558672\" pid:4565 exited_at:{seconds:1748316780 nanos:518245259}"
May 27 03:33:00.520694 containerd[1552]: time="2025-05-27T03:33:00.520664377Z" level=info msg="received exit event container_id:\"c663e6cffe13acc39d2a9edf0ae12b4aae75e8f721b8c37605132feab2558672\" id:\"c663e6cffe13acc39d2a9edf0ae12b4aae75e8f721b8c37605132feab2558672\" pid:4565 exited_at:{seconds:1748316780 nanos:518245259}"
May 27 03:33:00.528961 containerd[1552]: time="2025-05-27T03:33:00.528932887Z" level=info msg="StartContainer for \"c663e6cffe13acc39d2a9edf0ae12b4aae75e8f721b8c37605132feab2558672\" returns successfully"
May 27 03:33:00.542738 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c663e6cffe13acc39d2a9edf0ae12b4aae75e8f721b8c37605132feab2558672-rootfs.mount: Deactivated successfully.
May 27 03:33:01.453458 kubelet[2671]: E0527 03:33:01.453255 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
May 27 03:33:01.457000 containerd[1552]: time="2025-05-27T03:33:01.456883057Z" level=info msg="CreateContainer within sandbox \"57bc2bccc45c08794f891a797bf9a833f00ab22aefe4c02ce9a2ff3fed854fb6\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 27 03:33:01.476240 containerd[1552]: time="2025-05-27T03:33:01.476203507Z" level=info msg="Container 7a5ff40b771435613961e5ec49cbbae35cc3e96f59b17611b9cdb6204f22906b: CDI devices from CRI Config.CDIDevices: []"
May 27 03:33:01.480479 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount301570359.mount: Deactivated successfully.
May 27 03:33:01.486026 containerd[1552]: time="2025-05-27T03:33:01.485981268Z" level=info msg="CreateContainer within sandbox \"57bc2bccc45c08794f891a797bf9a833f00ab22aefe4c02ce9a2ff3fed854fb6\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"7a5ff40b771435613961e5ec49cbbae35cc3e96f59b17611b9cdb6204f22906b\""
May 27 03:33:01.486529 containerd[1552]: time="2025-05-27T03:33:01.486491061Z" level=info msg="StartContainer for \"7a5ff40b771435613961e5ec49cbbae35cc3e96f59b17611b9cdb6204f22906b\""
May 27 03:33:01.487401 containerd[1552]: time="2025-05-27T03:33:01.487373478Z" level=info msg="connecting to shim 7a5ff40b771435613961e5ec49cbbae35cc3e96f59b17611b9cdb6204f22906b" address="unix:///run/containerd/s/cf211da92fbdc06cc8963e48d5cbd8f5b10cf81e8c042358b90bb16b2ddef970" protocol=ttrpc version=3
May 27 03:33:01.511707 systemd[1]: Started cri-containerd-7a5ff40b771435613961e5ec49cbbae35cc3e96f59b17611b9cdb6204f22906b.scope - libcontainer container 7a5ff40b771435613961e5ec49cbbae35cc3e96f59b17611b9cdb6204f22906b.
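Each "connecting to shim ... protocol=ttrpc version=3" line is containerd's CRI plugin attaching to a per-container shim over a unix socket under /run/containerd/s/. The same containers can be inspected out-of-band with the containerd Go client; a sketch, assuming the pre-2.0 module path github.com/containerd/containerd and the "k8s.io" namespace the CRI plugin uses for Kubernetes workloads:

    // Hypothetical inspection of the containers created above, via the
    // containerd Go client against the same socket the kubelet uses.
    package main

    import (
    	"context"
    	"fmt"
    	"log"

    	containerd "github.com/containerd/containerd"
    	"github.com/containerd/containerd/namespaces"
    )

    func main() {
    	client, err := containerd.New("/run/containerd/containerd.sock")
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer client.Close()

    	// The CRI plugin keeps Kubernetes containers in the "k8s.io" namespace.
    	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
    	containers, err := client.Containers(ctx)
    	if err != nil {
    		log.Fatal(err)
    	}
    	for _, c := range containers {
    		info, err := c.Info(ctx)
    		if err != nil {
    			continue
    		}
    		// The io.kubernetes.* labels carry the container names seen in the log.
    		fmt.Println(c.ID(), info.Labels["io.kubernetes.container.name"])
    	}
    }

Run on the node itself, this would list the Cilium init containers (mount-bpf-fs, clean-cilium-state) and cilium-agent alongside the long-running pod containers.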
May 27 03:33:01.555739 containerd[1552]: time="2025-05-27T03:33:01.555689002Z" level=info msg="StartContainer for \"7a5ff40b771435613961e5ec49cbbae35cc3e96f59b17611b9cdb6204f22906b\" returns successfully"
May 27 03:33:01.633997 containerd[1552]: time="2025-05-27T03:33:01.633906827Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7a5ff40b771435613961e5ec49cbbae35cc3e96f59b17611b9cdb6204f22906b\" id:\"befa6515d5b60d4410672cf0e26c4c1594b81e473ec0bd99b1487145049d3807\" pid:4632 exited_at:{seconds:1748316781 nanos:633132311}"
May 27 03:33:01.702841 kubelet[2671]: I0527 03:33:01.702758 2671 setters.go:618] "Node became not ready" node="172-234-197-247" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-27T03:33:01Z","lastTransitionTime":"2025-05-27T03:33:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
May 27 03:33:02.037039 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx))
May 27 03:33:02.459655 kubelet[2671]: E0527 03:33:02.459528 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
May 27 03:33:02.474399 kubelet[2671]: I0527 03:33:02.474267 2671 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-vldkd" podStartSLOduration=5.474254558 podStartE2EDuration="5.474254558s" podCreationTimestamp="2025-05-27 03:32:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 03:33:02.473455672 +0000 UTC m=+174.606424529" watchObservedRunningTime="2025-05-27 03:33:02.474254558 +0000 UTC m=+174.607223415"
May 27 03:33:02.958025 containerd[1552]: time="2025-05-27T03:33:02.957942870Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7a5ff40b771435613961e5ec49cbbae35cc3e96f59b17611b9cdb6204f22906b\" id:\"e3b0a00094fa5dbfb53d89df3f829c4ff694c9d17d0d142fed90ca1eb12de6af\" pid:4707 exit_status:1 exited_at:{seconds:1748316782 nanos:957362575}"
May 27 03:33:02.993059 kubelet[2671]: E0527 03:33:02.993027 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
May 27 03:33:03.821039 kubelet[2671]: E0527 03:33:03.820800 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
May 27 03:33:04.995712 systemd-networkd[1468]: lxc_health: Link UP
May 27 03:33:04.996756 kubelet[2671]: E0527 03:33:04.996710 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
May 27 03:33:05.014777 systemd-networkd[1468]: lxc_health: Gained carrier
May 27 03:33:05.173491 containerd[1552]: time="2025-05-27T03:33:05.173456069Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7a5ff40b771435613961e5ec49cbbae35cc3e96f59b17611b9cdb6204f22906b\" id:\"da84baca954a0d8c24b795a4c9cc2cf090615f16093723cfa5b4564a5d7fcf2b\" pid:5142 exit_status:1 exited_at:{seconds:1748316785 nanos:173035306}"
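The setters.go:618 entry flips the node to NotReady because the CNI plugin (Cilium here) has not initialized yet; the lxc_health link coming up shortly afterwards is Cilium's health interface appearing as the agent becomes functional. Reading that same Ready condition back from the API server is straightforward with client-go; a sketch, assuming in-cluster credentials and using the node name from the log:

    // Check the Ready condition that setters.go flipped above. Illustrative,
    // not part of the node's software; requires client-go and in-cluster config.
    package main

    import (
    	"context"
    	"fmt"
    	"log"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/rest"
    )

    func main() {
    	cfg, err := rest.InClusterConfig()
    	if err != nil {
    		log.Fatal(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		log.Fatal(err)
    	}
    	node, err := cs.CoreV1().Nodes().Get(context.Background(), "172-234-197-247", metav1.GetOptions{})
    	if err != nil {
    		log.Fatal(err)
    	}
    	for _, cond := range node.Status.Conditions {
    		if cond.Type == corev1.NodeReady {
    			// During the window logged above this prints
    			// "Ready: False (KubeletNotReady)" until the CNI initializes.
    			fmt.Printf("%s: %s (%s)\n", cond.Type, cond.Status, cond.Reason)
    		}
    	}
    }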
May 27 03:33:05.822612 kubelet[2671]: E0527 03:33:05.822554 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
May 27 03:33:06.167715 systemd-networkd[1468]: lxc_health: Gained IPv6LL
May 27 03:33:06.468856 kubelet[2671]: E0527 03:33:06.468804 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
May 27 03:33:07.297925 containerd[1552]: time="2025-05-27T03:33:07.297846608Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7a5ff40b771435613961e5ec49cbbae35cc3e96f59b17611b9cdb6204f22906b\" id:\"67d1cd510bffde7694f8cc775cc246097aab83ba88119c9e3a7d76cfc11e6cef\" pid:5180 exited_at:{seconds:1748316787 nanos:297450845}"
May 27 03:33:07.472092 kubelet[2671]: E0527 03:33:07.472043 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
May 27 03:33:08.838496 kubelet[2671]: I0527 03:33:08.838452 2671 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
May 27 03:33:08.838496 kubelet[2671]: I0527 03:33:08.838505 2671 container_gc.go:86] "Attempting to delete unused containers"
May 27 03:33:08.840991 kubelet[2671]: I0527 03:33:08.840964 2671 image_gc_manager.go:447] "Attempting to delete unused images"
May 27 03:33:08.854911 kubelet[2671]: I0527 03:33:08.854867 2671 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
May 27 03:33:08.855034 kubelet[2671]: I0527 03:33:08.854966 2671 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/coredns-674b8bbfcf-4zfgh","kube-system/coredns-674b8bbfcf-ps8nf","kube-system/cilium-vldkd","kube-system/kube-proxy-qn9jr","kube-system/kube-controller-manager-172-234-197-247","kube-system/kube-apiserver-172-234-197-247","kube-system/kube-scheduler-172-234-197-247"]
May 27 03:33:08.855034 kubelet[2671]: E0527 03:33:08.854996 2671 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-4zfgh"
May 27 03:33:08.855034 kubelet[2671]: E0527 03:33:08.855008 2671 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-ps8nf"
May 27 03:33:08.855034 kubelet[2671]: E0527 03:33:08.855018 2671 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-vldkd"
May 27 03:33:08.855034 kubelet[2671]: E0527 03:33:08.855026 2671 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-qn9jr"
May 27 03:33:08.855034 kubelet[2671]: E0527 03:33:08.855034 2671 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-197-247"
May 27 03:33:08.855194 kubelet[2671]: E0527 03:33:08.855043 2671 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-197-247"
May 27 03:33:08.855194 kubelet[2671]: E0527 03:33:08.855050 2671 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-197-247"
May 27 03:33:08.855194 kubelet[2671]: I0527 03:33:08.855060 2671 eviction_manager.go:439] "Eviction manager: unable to evict any pods from the node"
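This second eviction pass fails for the same reason as the first: every ranked pod is critical, either a static control-plane pod or one running at system-critical priority, and the eviction manager refuses to touch those. A simplified Go sketch of that guard, mirroring the intent of kubelet's types.IsCriticalPod rather than its exact code:

    // Simplified form of the "cannot evict a critical pod" check seen above.
    // Kubelet treats mirror/static pods and pods at or above the
    // system-cluster-critical priority (2000000000) as non-evictable.
    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    )

    const systemCriticalPriority int32 = 2000000000

    func isCriticalPod(pod *corev1.Pod) bool {
    	// Mirror pods (static pods surfaced through the API) carry this annotation.
    	if _, ok := pod.Annotations["kubernetes.io/config.mirror"]; ok {
    		return true
    	}
    	return pod.Spec.Priority != nil && *pod.Spec.Priority >= systemCriticalPriority
    }

    func main() {
    	prio := systemCriticalPriority
    	pod := &corev1.Pod{}
    	pod.Name = "cilium-vldkd"
    	pod.Spec.Priority = &prio
    	fmt.Println(pod.Name, "critical:", isCriticalPod(pod)) // true: skipped by eviction
    }

Since every candidate on this node is critical, the manager logs "unable to evict any pods from the node" and the ephemeral-storage pressure persists.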
May 27 03:33:09.410886 containerd[1552]: time="2025-05-27T03:33:09.410843083Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7a5ff40b771435613961e5ec49cbbae35cc3e96f59b17611b9cdb6204f22906b\" id:\"9703e510b1d781b112bcaefeb1201b0895250f59f602885c1b0aca32221fc20e\" pid:5213 exited_at:{seconds:1748316789 nanos:410211589}"
May 27 03:33:10.992973 kubelet[2671]: E0527 03:33:10.992941 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18"
May 27 03:33:11.527165 containerd[1552]: time="2025-05-27T03:33:11.527120519Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7a5ff40b771435613961e5ec49cbbae35cc3e96f59b17611b9cdb6204f22906b\" id:\"8ad6e8088e7a140d98b151cbec9f8c9d61667a0d6586d1bd5291ccc01f4598ef\" pid:5237 exited_at:{seconds:1748316791 nanos:526147352}"
May 27 03:33:13.618882 containerd[1552]: time="2025-05-27T03:33:13.618745946Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7a5ff40b771435613961e5ec49cbbae35cc3e96f59b17611b9cdb6204f22906b\" id:\"8c95ae55ec417e9b0c29bb55a9b887e2842c25dc776f7c679754778d6ee04ea6\" pid:5261 exited_at:{seconds:1748316793 nanos:618363054}"
May 27 03:33:13.622481 kubelet[2671]: E0527 03:33:13.622445 2671 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:36834->127.0.0.1:39549: write tcp 127.0.0.1:36834->127.0.0.1:39549: write: broken pipe
May 27 03:33:13.673589 sshd[4494]: Connection closed by 139.178.68.195 port 35294
May 27 03:33:13.674611 sshd-session[4461]: pam_unix(sshd:session): session closed for user core
May 27 03:33:13.679434 systemd[1]: sshd@23-172.234.197.247:22-139.178.68.195:35294.service: Deactivated successfully.
May 27 03:33:13.681183 systemd[1]: session-24.scope: Deactivated successfully.
May 27 03:33:13.682926 systemd-logind[1527]: Session 24 logged out. Waiting for processes to exit.
May 27 03:33:13.685304 systemd-logind[1527]: Removed session 24.