May 9 00:36:26.913115 kernel: Linux version 6.6.89-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu May 8 22:52:37 -00 2025 May 9 00:36:26.913144 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=56b660b06ded103a15fe25ebfbdecb898a20f374e429fec465c69b1a75d59c4b May 9 00:36:26.913159 kernel: BIOS-provided physical RAM map: May 9 00:36:26.913168 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable May 9 00:36:26.913177 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable May 9 00:36:26.913197 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS May 9 00:36:26.913208 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable May 9 00:36:26.913217 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS May 9 00:36:26.913226 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable May 9 00:36:26.913234 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS May 9 00:36:26.913248 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable May 9 00:36:26.913257 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved May 9 00:36:26.913270 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20 May 9 00:36:26.913287 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved May 9 00:36:26.913307 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data May 9 00:36:26.913343 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS May 9 00:36:26.913357 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable May 9 00:36:26.913366 
kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved May 9 00:36:26.913376 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS May 9 00:36:26.913385 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved May 9 00:36:26.913394 kernel: NX (Execute Disable) protection: active May 9 00:36:26.913404 kernel: APIC: Static calls initialized May 9 00:36:26.913413 kernel: efi: EFI v2.7 by EDK II May 9 00:36:26.913423 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b675198 May 9 00:36:26.913432 kernel: SMBIOS 2.8 present. May 9 00:36:26.913450 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015 May 9 00:36:26.913460 kernel: Hypervisor detected: KVM May 9 00:36:26.913474 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 May 9 00:36:26.913483 kernel: kvm-clock: using sched offset of 5120109512 cycles May 9 00:36:26.913493 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns May 9 00:36:26.913503 kernel: tsc: Detected 2794.748 MHz processor May 9 00:36:26.913513 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved May 9 00:36:26.913523 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable May 9 00:36:26.913533 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000 May 9 00:36:26.913542 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs May 9 00:36:26.913552 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT May 9 00:36:26.913565 kernel: Using GB pages for direct mapping May 9 00:36:26.913575 kernel: Secure boot disabled May 9 00:36:26.913584 kernel: ACPI: Early table checksum verification disabled May 9 00:36:26.913594 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) May 9 00:36:26.913609 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) May 9 00:36:26.913618 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 
BOCHS BXPC 00000001 BXPC 00000001) May 9 00:36:26.913627 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 9 00:36:26.913639 kernel: ACPI: FACS 0x000000009CBDD000 000040 May 9 00:36:26.913648 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 9 00:36:26.913662 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 9 00:36:26.913671 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) May 9 00:36:26.913680 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 9 00:36:26.913688 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) May 9 00:36:26.913697 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] May 9 00:36:26.913723 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7] May 9 00:36:26.913732 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] May 9 00:36:26.913741 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] May 9 00:36:26.913750 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] May 9 00:36:26.913759 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] May 9 00:36:26.913767 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] May 9 00:36:26.913776 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] May 9 00:36:26.913785 kernel: No NUMA configuration found May 9 00:36:26.913797 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff] May 9 00:36:26.913811 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff] May 9 00:36:26.913820 kernel: Zone ranges: May 9 00:36:26.913829 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] May 9 00:36:26.913838 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff] May 9 00:36:26.913847 kernel: Normal empty May 9 00:36:26.913856 
kernel: Movable zone start for each node May 9 00:36:26.913864 kernel: Early memory node ranges May 9 00:36:26.913873 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] May 9 00:36:26.913882 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] May 9 00:36:26.913891 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] May 9 00:36:26.913903 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff] May 9 00:36:26.913912 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff] May 9 00:36:26.913921 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff] May 9 00:36:26.913930 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff] May 9 00:36:26.913938 kernel: On node 0, zone DMA: 1 pages in unavailable ranges May 9 00:36:26.913947 kernel: On node 0, zone DMA: 96 pages in unavailable ranges May 9 00:36:26.913956 kernel: On node 0, zone DMA: 8 pages in unavailable ranges May 9 00:36:26.913965 kernel: On node 0, zone DMA: 1 pages in unavailable ranges May 9 00:36:26.913974 kernel: On node 0, zone DMA: 240 pages in unavailable ranges May 9 00:36:26.913986 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges May 9 00:36:26.913996 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges May 9 00:36:26.914006 kernel: ACPI: PM-Timer IO Port: 0x608 May 9 00:36:26.914016 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) May 9 00:36:26.914026 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 May 9 00:36:26.914033 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) May 9 00:36:26.914040 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) May 9 00:36:26.914047 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) May 9 00:36:26.914055 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) May 9 00:36:26.914064 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) May 9 00:36:26.914072 kernel: ACPI: Using ACPI 
(MADT) for SMP configuration information May 9 00:36:26.914079 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 May 9 00:36:26.914086 kernel: TSC deadline timer available May 9 00:36:26.914093 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs May 9 00:36:26.914100 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() May 9 00:36:26.914108 kernel: kvm-guest: KVM setup pv remote TLB flush May 9 00:36:26.914115 kernel: kvm-guest: setup PV sched yield May 9 00:36:26.914122 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices May 9 00:36:26.914131 kernel: Booting paravirtualized kernel on KVM May 9 00:36:26.914139 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns May 9 00:36:26.914147 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 May 9 00:36:26.914154 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288 May 9 00:36:26.914162 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152 May 9 00:36:26.914171 kernel: pcpu-alloc: [0] 0 1 2 3 May 9 00:36:26.914181 kernel: kvm-guest: PV spinlocks enabled May 9 00:36:26.914190 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) May 9 00:36:26.914200 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=56b660b06ded103a15fe25ebfbdecb898a20f374e429fec465c69b1a75d59c4b May 9 00:36:26.914216 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
May 9 00:36:26.914225 kernel: random: crng init done May 9 00:36:26.914234 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) May 9 00:36:26.914243 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 9 00:36:26.914252 kernel: Fallback order for Node 0: 0 May 9 00:36:26.914261 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759 May 9 00:36:26.914270 kernel: Policy zone: DMA32 May 9 00:36:26.914279 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 9 00:36:26.914291 kernel: Memory: 2400600K/2567000K available (12288K kernel code, 2295K rwdata, 22740K rodata, 42864K init, 2328K bss, 166140K reserved, 0K cma-reserved) May 9 00:36:26.914300 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 May 9 00:36:26.914309 kernel: ftrace: allocating 37944 entries in 149 pages May 9 00:36:26.914318 kernel: ftrace: allocated 149 pages with 4 groups May 9 00:36:26.914327 kernel: Dynamic Preempt: voluntary May 9 00:36:26.914355 kernel: rcu: Preemptible hierarchical RCU implementation. May 9 00:36:26.914370 kernel: rcu: RCU event tracing is enabled. May 9 00:36:26.914380 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. May 9 00:36:26.914390 kernel: Trampoline variant of Tasks RCU enabled. May 9 00:36:26.914401 kernel: Rude variant of Tasks RCU enabled. May 9 00:36:26.914410 kernel: Tracing variant of Tasks RCU enabled. May 9 00:36:26.914418 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. May 9 00:36:26.914428 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 May 9 00:36:26.914436 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 May 9 00:36:26.914444 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
May 9 00:36:26.914451 kernel: Console: colour dummy device 80x25 May 9 00:36:26.914459 kernel: printk: console [ttyS0] enabled May 9 00:36:26.914469 kernel: ACPI: Core revision 20230628 May 9 00:36:26.914476 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns May 9 00:36:26.914484 kernel: APIC: Switch to symmetric I/O mode setup May 9 00:36:26.914492 kernel: x2apic enabled May 9 00:36:26.914499 kernel: APIC: Switched APIC routing to: physical x2apic May 9 00:36:26.914507 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() May 9 00:36:26.914515 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() May 9 00:36:26.914522 kernel: kvm-guest: setup PV IPIs May 9 00:36:26.914532 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 May 9 00:36:26.914544 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized May 9 00:36:26.914555 kernel: Calibrating delay loop (skipped) preset value.. 
5589.49 BogoMIPS (lpj=2794748) May 9 00:36:26.914566 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated May 9 00:36:26.914577 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 May 9 00:36:26.914588 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 May 9 00:36:26.914597 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization May 9 00:36:26.914606 kernel: Spectre V2 : Mitigation: Retpolines May 9 00:36:26.914617 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT May 9 00:36:26.914633 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls May 9 00:36:26.914648 kernel: RETBleed: Mitigation: untrained return thunk May 9 00:36:26.914659 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier May 9 00:36:26.914669 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl May 9 00:36:26.914679 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! May 9 00:36:26.914695 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. May 9 00:36:26.914706 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode May 9 00:36:26.914730 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' May 9 00:36:26.914741 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' May 9 00:36:26.914756 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' May 9 00:36:26.914767 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 May 9 00:36:26.914778 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. 
May 9 00:36:26.914788 kernel: Freeing SMP alternatives memory: 32K May 9 00:36:26.914799 kernel: pid_max: default: 32768 minimum: 301 May 9 00:36:26.914809 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity May 9 00:36:26.914820 kernel: landlock: Up and running. May 9 00:36:26.914831 kernel: SELinux: Initializing. May 9 00:36:26.914842 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 9 00:36:26.914858 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 9 00:36:26.914869 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) May 9 00:36:26.914880 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. May 9 00:36:26.914891 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. May 9 00:36:26.914903 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. May 9 00:36:26.914914 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. May 9 00:36:26.914924 kernel: ... version: 0 May 9 00:36:26.914935 kernel: ... bit width: 48 May 9 00:36:26.914946 kernel: ... generic registers: 6 May 9 00:36:26.914961 kernel: ... value mask: 0000ffffffffffff May 9 00:36:26.914972 kernel: ... max period: 00007fffffffffff May 9 00:36:26.914983 kernel: ... fixed-purpose events: 0 May 9 00:36:26.914993 kernel: ... event mask: 000000000000003f May 9 00:36:26.915004 kernel: signal: max sigframe size: 1776 May 9 00:36:26.915014 kernel: rcu: Hierarchical SRCU implementation. May 9 00:36:26.915026 kernel: rcu: Max phase no-delay instances is 400. May 9 00:36:26.915037 kernel: smp: Bringing up secondary CPUs ... May 9 00:36:26.915047 kernel: smpboot: x86: Booting SMP configuration: May 9 00:36:26.915062 kernel: .... 
node #0, CPUs: #1 #2 #3 May 9 00:36:26.915072 kernel: smp: Brought up 1 node, 4 CPUs May 9 00:36:26.915083 kernel: smpboot: Max logical packages: 1 May 9 00:36:26.915093 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) May 9 00:36:26.915104 kernel: devtmpfs: initialized May 9 00:36:26.915114 kernel: x86/mm: Memory block size: 128MB May 9 00:36:26.915125 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) May 9 00:36:26.915136 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) May 9 00:36:26.915147 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes) May 9 00:36:26.915162 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) May 9 00:36:26.915173 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) May 9 00:36:26.915183 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 9 00:36:26.915194 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) May 9 00:36:26.915205 kernel: pinctrl core: initialized pinctrl subsystem May 9 00:36:26.915216 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 9 00:36:26.915227 kernel: audit: initializing netlink subsys (disabled) May 9 00:36:26.915237 kernel: audit: type=2000 audit(1746750986.477:1): state=initialized audit_enabled=0 res=1 May 9 00:36:26.915248 kernel: thermal_sys: Registered thermal governor 'step_wise' May 9 00:36:26.915264 kernel: thermal_sys: Registered thermal governor 'user_space' May 9 00:36:26.915275 kernel: cpuidle: using governor menu May 9 00:36:26.915285 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 9 00:36:26.915296 kernel: dca service started, version 1.12.1 May 9 00:36:26.915307 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) May 9 00:36:26.915318 kernel: 
PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry May 9 00:36:26.915329 kernel: PCI: Using configuration type 1 for base access May 9 00:36:26.915350 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. May 9 00:36:26.915361 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages May 9 00:36:26.915376 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page May 9 00:36:26.915386 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages May 9 00:36:26.915397 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page May 9 00:36:26.915408 kernel: ACPI: Added _OSI(Module Device) May 9 00:36:26.915419 kernel: ACPI: Added _OSI(Processor Device) May 9 00:36:26.915429 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 9 00:36:26.915440 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 9 00:36:26.915451 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded May 9 00:36:26.915461 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC May 9 00:36:26.915476 kernel: ACPI: Interpreter enabled May 9 00:36:26.915487 kernel: ACPI: PM: (supports S0 S3 S5) May 9 00:36:26.915497 kernel: ACPI: Using IOAPIC for interrupt routing May 9 00:36:26.915508 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug May 9 00:36:26.915518 kernel: PCI: Using E820 reservations for host bridge windows May 9 00:36:26.915529 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F May 9 00:36:26.915539 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) May 9 00:36:26.915874 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 9 00:36:26.916066 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] May 9 00:36:26.916239 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] May 9 00:36:26.916256 kernel: PCI host bridge to bus 0000:00 May 9 00:36:26.916451 
kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] May 9 00:36:26.916613 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] May 9 00:36:26.916791 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] May 9 00:36:26.916952 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] May 9 00:36:26.917116 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] May 9 00:36:26.917275 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window] May 9 00:36:26.917446 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] May 9 00:36:26.917660 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 May 9 00:36:26.917874 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 May 9 00:36:26.918045 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref] May 9 00:36:26.918215 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff] May 9 00:36:26.918387 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] May 9 00:36:26.918551 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb May 9 00:36:26.918735 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] May 9 00:36:26.918921 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 May 9 00:36:26.919082 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f] May 9 00:36:26.919242 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff] May 9 00:36:26.919424 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref] May 9 00:36:26.919607 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 May 9 00:36:26.919791 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f] May 9 00:36:26.919956 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff] May 9 00:36:26.920124 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref] May 9 00:36:26.920313 kernel: pci 0000:00:04.0: 
[1af4:1000] type 00 class 0x020000 May 9 00:36:26.920500 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff] May 9 00:36:26.920650 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff] May 9 00:36:26.920810 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref] May 9 00:36:26.920940 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref] May 9 00:36:26.921083 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 May 9 00:36:26.921210 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO May 9 00:36:26.921358 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 May 9 00:36:26.921485 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df] May 9 00:36:26.921624 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff] May 9 00:36:26.921862 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 May 9 00:36:26.922034 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf] May 9 00:36:26.922051 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 May 9 00:36:26.922062 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 May 9 00:36:26.922074 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 May 9 00:36:26.922085 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 May 9 00:36:26.922102 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 May 9 00:36:26.922113 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 May 9 00:36:26.922124 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 May 9 00:36:26.922134 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 May 9 00:36:26.922145 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 May 9 00:36:26.922156 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 May 9 00:36:26.922167 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 May 9 00:36:26.922177 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 May 9 00:36:26.922188 
kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 May 9 00:36:26.922202 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 May 9 00:36:26.922213 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 May 9 00:36:26.922224 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 May 9 00:36:26.922235 kernel: iommu: Default domain type: Translated May 9 00:36:26.922245 kernel: iommu: DMA domain TLB invalidation policy: lazy mode May 9 00:36:26.922256 kernel: efivars: Registered efivars operations May 9 00:36:26.922266 kernel: PCI: Using ACPI for IRQ routing May 9 00:36:26.922277 kernel: PCI: pci_cache_line_size set to 64 bytes May 9 00:36:26.922287 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] May 9 00:36:26.922298 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff] May 9 00:36:26.922313 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff] May 9 00:36:26.922323 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff] May 9 00:36:26.922487 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device May 9 00:36:26.922704 kernel: pci 0000:00:01.0: vgaarb: bridge control possible May 9 00:36:26.922893 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none May 9 00:36:26.922910 kernel: vgaarb: loaded May 9 00:36:26.922920 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 May 9 00:36:26.922930 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter May 9 00:36:26.922945 kernel: clocksource: Switched to clocksource kvm-clock May 9 00:36:26.922954 kernel: VFS: Disk quotas dquot_6.6.0 May 9 00:36:26.922964 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 9 00:36:26.922974 kernel: pnp: PnP ACPI init May 9 00:36:26.923165 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved May 9 00:36:26.923182 kernel: pnp: PnP ACPI: found 6 devices May 9 00:36:26.923190 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, 
max_idle_ns: 2085701024 ns May 9 00:36:26.923198 kernel: NET: Registered PF_INET protocol family May 9 00:36:26.923210 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) May 9 00:36:26.923218 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) May 9 00:36:26.923226 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 9 00:36:26.923234 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) May 9 00:36:26.923242 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) May 9 00:36:26.923249 kernel: TCP: Hash tables configured (established 32768 bind 32768) May 9 00:36:26.923257 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) May 9 00:36:26.923265 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) May 9 00:36:26.923272 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 9 00:36:26.923282 kernel: NET: Registered PF_XDP protocol family May 9 00:36:26.923446 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window May 9 00:36:26.923597 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref] May 9 00:36:26.923743 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] May 9 00:36:26.923874 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] May 9 00:36:26.923989 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] May 9 00:36:26.924123 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] May 9 00:36:26.924278 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] May 9 00:36:26.924452 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window] May 9 00:36:26.924469 kernel: PCI: CLS 0 bytes, default 64 May 9 00:36:26.924480 kernel: Initialise system trusted keyrings May 9 00:36:26.924491 kernel: workingset: timestamp_bits=39 
max_order=20 bucket_order=0 May 9 00:36:26.924502 kernel: Key type asymmetric registered May 9 00:36:26.924513 kernel: Asymmetric key parser 'x509' registered May 9 00:36:26.924523 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) May 9 00:36:26.924534 kernel: io scheduler mq-deadline registered May 9 00:36:26.924544 kernel: io scheduler kyber registered May 9 00:36:26.924560 kernel: io scheduler bfq registered May 9 00:36:26.924570 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 May 9 00:36:26.924582 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 May 9 00:36:26.924593 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 May 9 00:36:26.924604 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 May 9 00:36:26.924614 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 9 00:36:26.924625 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A May 9 00:36:26.924636 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 May 9 00:36:26.924646 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 May 9 00:36:26.924661 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 May 9 00:36:26.924871 kernel: rtc_cmos 00:04: RTC can wake from S4 May 9 00:36:26.925037 kernel: rtc_cmos 00:04: registered as rtc0 May 9 00:36:26.925054 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1 May 9 00:36:26.925214 kernel: rtc_cmos 00:04: setting system clock to 2025-05-09T00:36:26 UTC (1746750986) May 9 00:36:26.925389 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs May 9 00:36:26.925406 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled May 9 00:36:26.925417 kernel: efifb: probing for efifb May 9 00:36:26.925434 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k May 9 00:36:26.925445 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1 May 9 00:36:26.925456 kernel: efifb: scrolling: redraw May 9 
00:36:26.925466 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0 May 9 00:36:26.925477 kernel: Console: switching to colour frame buffer device 100x37 May 9 00:36:26.925488 kernel: fb0: EFI VGA frame buffer device May 9 00:36:26.925522 kernel: pstore: Using crash dump compression: deflate May 9 00:36:26.925537 kernel: pstore: Registered efi_pstore as persistent store backend May 9 00:36:26.925548 kernel: NET: Registered PF_INET6 protocol family May 9 00:36:26.925563 kernel: Segment Routing with IPv6 May 9 00:36:26.925574 kernel: In-situ OAM (IOAM) with IPv6 May 9 00:36:26.925585 kernel: NET: Registered PF_PACKET protocol family May 9 00:36:26.925596 kernel: Key type dns_resolver registered May 9 00:36:26.925607 kernel: IPI shorthand broadcast: enabled May 9 00:36:26.925618 kernel: sched_clock: Marking stable (686002995, 116353947)->(824626433, -22269491) May 9 00:36:26.925629 kernel: registered taskstats version 1 May 9 00:36:26.925640 kernel: Loading compiled-in X.509 certificates May 9 00:36:26.925651 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.89-flatcar: fe5c896a3ca06bb89ebdfb7ed85f611806e4c1cc' May 9 00:36:26.925667 kernel: Key type .fscrypt registered May 9 00:36:26.925678 kernel: Key type fscrypt-provisioning registered May 9 00:36:26.925689 kernel: ima: No TPM chip found, activating TPM-bypass! 
May 9 00:36:26.925700 kernel: ima: Allocated hash algorithm: sha1 May 9 00:36:26.925784 kernel: ima: No architecture policies found May 9 00:36:26.925797 kernel: clk: Disabling unused clocks May 9 00:36:26.925808 kernel: Freeing unused kernel image (initmem) memory: 42864K May 9 00:36:26.925819 kernel: Write protecting the kernel read-only data: 36864k May 9 00:36:26.925830 kernel: Freeing unused kernel image (rodata/data gap) memory: 1836K May 9 00:36:26.925846 kernel: Run /init as init process May 9 00:36:26.925857 kernel: with arguments: May 9 00:36:26.925868 kernel: /init May 9 00:36:26.925878 kernel: with environment: May 9 00:36:26.925889 kernel: HOME=/ May 9 00:36:26.925901 kernel: TERM=linux May 9 00:36:26.925912 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 9 00:36:26.925926 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) May 9 00:36:26.925945 systemd[1]: Detected virtualization kvm. May 9 00:36:26.925957 systemd[1]: Detected architecture x86-64. May 9 00:36:26.925968 systemd[1]: Running in initrd. May 9 00:36:26.925980 systemd[1]: No hostname configured, using default hostname. May 9 00:36:26.925998 systemd[1]: Hostname set to . May 9 00:36:26.926011 systemd[1]: Initializing machine ID from VM UUID. May 9 00:36:26.926022 systemd[1]: Queued start job for default target initrd.target. May 9 00:36:26.926034 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 9 00:36:26.926046 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 9 00:36:26.926059 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... 
May 9 00:36:26.926071 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 9 00:36:26.926083 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 9 00:36:26.926099 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 9 00:36:26.926114 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 9 00:36:26.926126 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 9 00:36:26.926137 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 9 00:36:26.926150 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 9 00:36:26.926162 systemd[1]: Reached target paths.target - Path Units.
May 9 00:36:26.926173 systemd[1]: Reached target slices.target - Slice Units.
May 9 00:36:26.926189 systemd[1]: Reached target swap.target - Swaps.
May 9 00:36:26.926200 systemd[1]: Reached target timers.target - Timer Units.
May 9 00:36:26.926212 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 9 00:36:26.926224 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 9 00:36:26.926236 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 9 00:36:26.926247 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
May 9 00:36:26.926259 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 9 00:36:26.926271 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 9 00:36:26.926283 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 9 00:36:26.926299 systemd[1]: Reached target sockets.target - Socket Units.
May 9 00:36:26.926311 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 9 00:36:26.926323 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 9 00:36:26.926344 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 9 00:36:26.926356 systemd[1]: Starting systemd-fsck-usr.service...
May 9 00:36:26.926368 systemd[1]: Starting systemd-journald.service - Journal Service...
May 9 00:36:26.926380 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 9 00:36:26.926391 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 9 00:36:26.926407 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 9 00:36:26.926419 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 9 00:36:26.926431 systemd[1]: Finished systemd-fsck-usr.service.
May 9 00:36:26.926470 systemd-journald[193]: Collecting audit messages is disabled.
May 9 00:36:26.926501 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 9 00:36:26.926514 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 9 00:36:26.926526 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 9 00:36:26.926538 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 9 00:36:26.926550 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 9 00:36:26.926566 systemd-journald[193]: Journal started
May 9 00:36:26.926590 systemd-journald[193]: Runtime Journal (/run/log/journal/c775cfd9ec614f3185f0ce8668bd5e4b) is 6.0M, max 48.3M, 42.2M free.
May 9 00:36:26.911693 systemd-modules-load[194]: Inserted module 'overlay'
May 9 00:36:26.930157 systemd[1]: Started systemd-journald.service - Journal Service.
May 9 00:36:26.933909 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 9 00:36:26.938602 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 9 00:36:26.949745 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 9 00:36:26.951158 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 9 00:36:26.954892 systemd-modules-load[194]: Inserted module 'br_netfilter'
May 9 00:36:26.955997 kernel: Bridge firewalling registered
May 9 00:36:26.961931 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 9 00:36:26.962744 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 9 00:36:26.964593 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 9 00:36:26.967848 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 9 00:36:26.978164 dracut-cmdline[221]: dracut-dracut-053
May 9 00:36:26.981345 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 9 00:36:26.984208 dracut-cmdline[221]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=56b660b06ded103a15fe25ebfbdecb898a20f374e429fec465c69b1a75d59c4b
May 9 00:36:26.993032 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 9 00:36:27.022951 systemd-resolved[239]: Positive Trust Anchors:
May 9 00:36:27.022970 systemd-resolved[239]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 9 00:36:27.023002 systemd-resolved[239]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 9 00:36:27.025527 systemd-resolved[239]: Defaulting to hostname 'linux'.
May 9 00:36:27.026640 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 9 00:36:27.032274 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 9 00:36:27.092753 kernel: SCSI subsystem initialized
May 9 00:36:27.102740 kernel: Loading iSCSI transport class v2.0-870.
May 9 00:36:27.112737 kernel: iscsi: registered transport (tcp)
May 9 00:36:27.133924 kernel: iscsi: registered transport (qla4xxx)
May 9 00:36:27.133960 kernel: QLogic iSCSI HBA Driver
May 9 00:36:27.181459 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 9 00:36:27.192888 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 9 00:36:27.217758 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 9 00:36:27.217802 kernel: device-mapper: uevent: version 1.0.3
May 9 00:36:27.219364 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
May 9 00:36:27.260756 kernel: raid6: avx2x4 gen() 30287 MB/s
May 9 00:36:27.277749 kernel: raid6: avx2x2 gen() 30682 MB/s
May 9 00:36:27.294860 kernel: raid6: avx2x1 gen() 25426 MB/s
May 9 00:36:27.294885 kernel: raid6: using algorithm avx2x2 gen() 30682 MB/s
May 9 00:36:27.312854 kernel: raid6: .... xor() 19668 MB/s, rmw enabled
May 9 00:36:27.312877 kernel: raid6: using avx2x2 recovery algorithm
May 9 00:36:27.333744 kernel: xor: automatically using best checksumming function avx
May 9 00:36:27.489751 kernel: Btrfs loaded, zoned=no, fsverity=no
May 9 00:36:27.503467 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 9 00:36:27.516029 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 9 00:36:27.530369 systemd-udevd[413]: Using default interface naming scheme 'v255'.
May 9 00:36:27.535755 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 9 00:36:27.540848 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 9 00:36:27.557067 dracut-pre-trigger[415]: rd.md=0: removing MD RAID activation
May 9 00:36:27.592851 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 9 00:36:27.606004 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 9 00:36:27.673548 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 9 00:36:27.684909 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 9 00:36:27.698847 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 9 00:36:27.701549 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 9 00:36:27.706526 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 9 00:36:27.710869 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
May 9 00:36:27.709525 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 9 00:36:27.716686 kernel: cryptd: max_cpu_qlen set to 1000
May 9 00:36:27.718734 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
May 9 00:36:27.719870 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 9 00:36:27.732135 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 9 00:36:27.732167 kernel: GPT:9289727 != 19775487
May 9 00:36:27.732178 kernel: GPT:Alternate GPT header not at the end of the disk.
May 9 00:36:27.732188 kernel: GPT:9289727 != 19775487
May 9 00:36:27.734746 kernel: GPT: Use GNU Parted to correct GPT errors.
May 9 00:36:27.734774 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 9 00:36:27.737196 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 9 00:36:27.738333 kernel: libata version 3.00 loaded.
May 9 00:36:27.737386 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 9 00:36:27.740101 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 9 00:36:27.742293 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 9 00:36:27.742688 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 9 00:36:27.753914 kernel: ahci 0000:00:1f.2: version 3.0
May 9 00:36:27.754110 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
May 9 00:36:27.754123 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
May 9 00:36:27.754308 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
May 9 00:36:27.749365 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 9 00:36:27.759573 kernel: AVX2 version of gcm_enc/dec engaged.
May 9 00:36:27.759593 kernel: AES CTR mode by8 optimization enabled
May 9 00:36:27.763758 kernel: scsi host0: ahci
May 9 00:36:27.764730 kernel: scsi host1: ahci
May 9 00:36:27.766604 kernel: scsi host2: ahci
May 9 00:36:27.766847 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (471)
May 9 00:36:27.765925 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 9 00:36:27.780448 kernel: scsi host3: ahci
May 9 00:36:27.780631 kernel: scsi host4: ahci
May 9 00:36:27.781103 kernel: scsi host5: ahci
May 9 00:36:27.781263 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34
May 9 00:36:27.781275 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34
May 9 00:36:27.781286 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34
May 9 00:36:27.781297 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34
May 9 00:36:27.781322 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34
May 9 00:36:27.781334 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34
May 9 00:36:27.781344 kernel: BTRFS: device fsid 8d57db23-a0fc-4362-9769-38fbda5747c1 devid 1 transid 40 /dev/vda3 scanned by (udev-worker) (472)
May 9 00:36:27.769998 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 9 00:36:27.792340 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 9 00:36:27.799280 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
May 9 00:36:27.808975 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 9 00:36:27.818487 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
May 9 00:36:27.823389 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
May 9 00:36:27.824039 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
May 9 00:36:27.841857 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 9 00:36:27.845103 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 9 00:36:27.854370 disk-uuid[556]: Primary Header is updated.
May 9 00:36:27.854370 disk-uuid[556]: Secondary Entries is updated.
May 9 00:36:27.854370 disk-uuid[556]: Secondary Header is updated.
May 9 00:36:27.859732 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 9 00:36:27.863020 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 9 00:36:27.867730 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 9 00:36:28.082864 kernel: ata5: SATA link down (SStatus 0 SControl 300)
May 9 00:36:28.082937 kernel: ata1: SATA link down (SStatus 0 SControl 300)
May 9 00:36:28.083736 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
May 9 00:36:28.084862 kernel: ata4: SATA link down (SStatus 0 SControl 300)
May 9 00:36:28.084942 kernel: ata2: SATA link down (SStatus 0 SControl 300)
May 9 00:36:28.085735 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
May 9 00:36:28.087073 kernel: ata3.00: applying bridge limits
May 9 00:36:28.087832 kernel: ata3.00: configured for UDMA/100
May 9 00:36:28.088743 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
May 9 00:36:28.093751 kernel: ata6: SATA link down (SStatus 0 SControl 300)
May 9 00:36:28.136740 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
May 9 00:36:28.137055 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
May 9 00:36:28.152735 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
May 9 00:36:28.866326 disk-uuid[561]: The operation has completed successfully.
May 9 00:36:28.867730 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 9 00:36:28.896761 systemd[1]: disk-uuid.service: Deactivated successfully.
May 9 00:36:28.896889 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 9 00:36:28.920854 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 9 00:36:28.927857 sh[593]: Success
May 9 00:36:28.943740 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
May 9 00:36:28.977385 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 9 00:36:28.999527 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 9 00:36:29.003609 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 9 00:36:29.017472 kernel: BTRFS info (device dm-0): first mount of filesystem 8d57db23-a0fc-4362-9769-38fbda5747c1
May 9 00:36:29.017503 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
May 9 00:36:29.017515 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
May 9 00:36:29.019298 kernel: BTRFS info (device dm-0): disabling log replay at mount time
May 9 00:36:29.019320 kernel: BTRFS info (device dm-0): using free space tree
May 9 00:36:29.024152 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 9 00:36:29.025893 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 9 00:36:29.039884 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 9 00:36:29.041796 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 9 00:36:29.050759 kernel: BTRFS info (device vda6): first mount of filesystem f16ac009-18be-48d6-89c7-f7afe3ecb605
May 9 00:36:29.050808 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 9 00:36:29.052437 kernel: BTRFS info (device vda6): using free space tree
May 9 00:36:29.054765 kernel: BTRFS info (device vda6): auto enabling async discard
May 9 00:36:29.064503 systemd[1]: mnt-oem.mount: Deactivated successfully.
May 9 00:36:29.066398 kernel: BTRFS info (device vda6): last unmount of filesystem f16ac009-18be-48d6-89c7-f7afe3ecb605
May 9 00:36:29.075278 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 9 00:36:29.080905 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 9 00:36:29.143666 ignition[683]: Ignition 2.19.0
May 9 00:36:29.143680 ignition[683]: Stage: fetch-offline
May 9 00:36:29.143736 ignition[683]: no configs at "/usr/lib/ignition/base.d"
May 9 00:36:29.143747 ignition[683]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 9 00:36:29.143849 ignition[683]: parsed url from cmdline: ""
May 9 00:36:29.143853 ignition[683]: no config URL provided
May 9 00:36:29.143859 ignition[683]: reading system config file "/usr/lib/ignition/user.ign"
May 9 00:36:29.143869 ignition[683]: no config at "/usr/lib/ignition/user.ign"
May 9 00:36:29.143901 ignition[683]: op(1): [started] loading QEMU firmware config module
May 9 00:36:29.143907 ignition[683]: op(1): executing: "modprobe" "qemu_fw_cfg"
May 9 00:36:29.160510 ignition[683]: op(1): [finished] loading QEMU firmware config module
May 9 00:36:29.165771 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 9 00:36:29.182843 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 9 00:36:29.203855 ignition[683]: parsing config with SHA512: c3e0880b07c7d8bbad5841641ef2a259a79b34c4e3bc9b7caaf184c886752969e6ecd4835eb02cebe613f4796aefa6b2a7cf8ecb0b9690a7971da5a778ce424d
May 9 00:36:29.206487 systemd-networkd[781]: lo: Link UP
May 9 00:36:29.206497 systemd-networkd[781]: lo: Gained carrier
May 9 00:36:29.208741 unknown[683]: fetched base config from "system"
May 9 00:36:29.209338 unknown[683]: fetched user config from "qemu"
May 9 00:36:29.210473 ignition[683]: fetch-offline: fetch-offline passed
May 9 00:36:29.210581 ignition[683]: Ignition finished successfully
May 9 00:36:29.212638 systemd-networkd[781]: Enumeration completed
May 9 00:36:29.212795 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 9 00:36:29.213162 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 9 00:36:29.215491 systemd[1]: Reached target network.target - Network.
May 9 00:36:29.218245 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
May 9 00:36:29.221895 systemd-networkd[781]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 9 00:36:29.222823 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 9 00:36:29.224500 systemd-networkd[781]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 9 00:36:29.225944 systemd-networkd[781]: eth0: Link UP
May 9 00:36:29.225948 systemd-networkd[781]: eth0: Gained carrier
May 9 00:36:29.225958 systemd-networkd[781]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 9 00:36:29.240558 ignition[784]: Ignition 2.19.0
May 9 00:36:29.240568 ignition[784]: Stage: kargs
May 9 00:36:29.240781 ignition[784]: no configs at "/usr/lib/ignition/base.d"
May 9 00:36:29.240793 ignition[784]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 9 00:36:29.241754 ignition[784]: kargs: kargs passed
May 9 00:36:29.244763 systemd-networkd[781]: eth0: DHCPv4 address 10.0.0.109/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 9 00:36:29.241803 ignition[784]: Ignition finished successfully
May 9 00:36:29.245845 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 9 00:36:29.270893 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 9 00:36:29.283251 ignition[792]: Ignition 2.19.0
May 9 00:36:29.283262 ignition[792]: Stage: disks
May 9 00:36:29.283435 ignition[792]: no configs at "/usr/lib/ignition/base.d"
May 9 00:36:29.283446 ignition[792]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 9 00:36:29.286015 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 9 00:36:29.284383 ignition[792]: disks: disks passed
May 9 00:36:29.288481 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 9 00:36:29.284428 ignition[792]: Ignition finished successfully
May 9 00:36:29.290782 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 9 00:36:29.293823 systemd[1]: Reached target local-fs.target - Local File Systems.
May 9 00:36:29.295127 systemd[1]: Reached target sysinit.target - System Initialization.
May 9 00:36:29.296471 systemd[1]: Reached target basic.target - Basic System.
May 9 00:36:29.307864 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 9 00:36:29.320254 systemd-fsck[801]: ROOT: clean, 14/553520 files, 52654/553472 blocks
May 9 00:36:29.326632 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 9 00:36:29.337825 systemd[1]: Mounting sysroot.mount - /sysroot...
May 9 00:36:29.420731 kernel: EXT4-fs (vda9): mounted filesystem 4cb03022-f5a4-4664-b5b4-bc39fcc2f946 r/w with ordered data mode. Quota mode: none.
May 9 00:36:29.421481 systemd[1]: Mounted sysroot.mount - /sysroot.
May 9 00:36:29.424054 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 9 00:36:29.432816 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 9 00:36:29.434802 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 9 00:36:29.437364 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
May 9 00:36:29.441751 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (810)
May 9 00:36:29.437426 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 9 00:36:29.448721 kernel: BTRFS info (device vda6): first mount of filesystem f16ac009-18be-48d6-89c7-f7afe3ecb605
May 9 00:36:29.448738 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 9 00:36:29.448749 kernel: BTRFS info (device vda6): using free space tree
May 9 00:36:29.448760 kernel: BTRFS info (device vda6): auto enabling async discard
May 9 00:36:29.437456 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 9 00:36:29.443904 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 9 00:36:29.449781 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 9 00:36:29.452492 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 9 00:36:29.487498 initrd-setup-root[836]: cut: /sysroot/etc/passwd: No such file or directory
May 9 00:36:29.492331 initrd-setup-root[843]: cut: /sysroot/etc/group: No such file or directory
May 9 00:36:29.496307 initrd-setup-root[850]: cut: /sysroot/etc/shadow: No such file or directory
May 9 00:36:29.499895 initrd-setup-root[857]: cut: /sysroot/etc/gshadow: No such file or directory
May 9 00:36:29.587529 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 9 00:36:29.596858 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 9 00:36:29.598721 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 9 00:36:29.605739 kernel: BTRFS info (device vda6): last unmount of filesystem f16ac009-18be-48d6-89c7-f7afe3ecb605
May 9 00:36:29.623091 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 9 00:36:29.636337 ignition[928]: INFO : Ignition 2.19.0
May 9 00:36:29.636337 ignition[928]: INFO : Stage: mount
May 9 00:36:29.638092 ignition[928]: INFO : no configs at "/usr/lib/ignition/base.d"
May 9 00:36:29.638092 ignition[928]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 9 00:36:29.640739 ignition[928]: INFO : mount: mount passed
May 9 00:36:29.641525 ignition[928]: INFO : Ignition finished successfully
May 9 00:36:29.643869 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 9 00:36:29.656926 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 9 00:36:30.016064 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 9 00:36:30.027980 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 9 00:36:30.035664 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (941) May 9 00:36:30.035698 kernel: BTRFS info (device vda6): first mount of filesystem f16ac009-18be-48d6-89c7-f7afe3ecb605 May 9 00:36:30.035710 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 9 00:36:30.037184 kernel: BTRFS info (device vda6): using free space tree May 9 00:36:30.040049 kernel: BTRFS info (device vda6): auto enabling async discard May 9 00:36:30.041327 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 9 00:36:30.059785 ignition[958]: INFO : Ignition 2.19.0 May 9 00:36:30.059785 ignition[958]: INFO : Stage: files May 9 00:36:30.061595 ignition[958]: INFO : no configs at "/usr/lib/ignition/base.d" May 9 00:36:30.061595 ignition[958]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 9 00:36:30.061595 ignition[958]: DEBUG : files: compiled without relabeling support, skipping May 9 00:36:30.065828 ignition[958]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 9 00:36:30.065828 ignition[958]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 9 00:36:30.065828 ignition[958]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 9 00:36:30.065828 ignition[958]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 9 00:36:30.065828 ignition[958]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 9 00:36:30.065145 unknown[958]: wrote ssh authorized keys file for user: core May 9 00:36:30.074757 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" May 9 00:36:30.074757 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" May 9 00:36:30.074757 ignition[958]: INFO : files: 
createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" May 9 00:36:30.074757 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 May 9 00:36:30.109192 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK May 9 00:36:30.297563 systemd-networkd[781]: eth0: Gained IPv6LL May 9 00:36:30.302949 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" May 9 00:36:30.305063 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" May 9 00:36:30.305063 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 May 9 00:36:30.611896 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK May 9 00:36:30.726124 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" May 9 00:36:30.728210 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh" May 9 00:36:30.728210 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh" May 9 00:36:30.728210 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml" May 9 00:36:30.728210 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml" May 9 00:36:30.728210 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 9 
May 9 00:36:30.728210 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 9 00:36:30.728210 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 9 00:36:30.728210 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 9 00:36:30.728210 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 9 00:36:30.728210 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 9 00:36:30.728210 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
May 9 00:36:30.728210 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
May 9 00:36:30.728210 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
May 9 00:36:30.728210 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
May 9 00:36:31.023815 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK
May 9 00:36:31.755760 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
May 9 00:36:31.758442 ignition[958]: INFO : files: op(d): [started] processing unit "containerd.service"
May 9 00:36:31.759918 ignition[958]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
May 9 00:36:31.762681 ignition[958]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
May 9 00:36:31.762681 ignition[958]: INFO : files: op(d): [finished] processing unit "containerd.service"
May 9 00:36:31.762681 ignition[958]: INFO : files: op(f): [started] processing unit "prepare-helm.service"
May 9 00:36:31.767355 ignition[958]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 9 00:36:31.769278 ignition[958]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 9 00:36:31.769278 ignition[958]: INFO : files: op(f): [finished] processing unit "prepare-helm.service"
May 9 00:36:31.772356 ignition[958]: INFO : files: op(11): [started] processing unit "coreos-metadata.service"
May 9 00:36:31.772356 ignition[958]: INFO : files: op(11): op(12): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 9 00:36:31.775676 ignition[958]: INFO : files: op(11): op(12): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 9 00:36:31.775676 ignition[958]: INFO : files: op(11): [finished] processing unit "coreos-metadata.service"
May 9 00:36:31.778898 ignition[958]: INFO : files: op(13): [started] setting preset to disabled for "coreos-metadata.service"
May 9 00:36:31.806616 ignition[958]: INFO : files: op(13): op(14): [started] removing enablement symlink(s) for "coreos-metadata.service"
May 9 00:36:31.862615 ignition[958]: INFO : files: op(13): op(14): [finished] removing enablement symlink(s) for "coreos-metadata.service"
May 9 00:36:31.864701 ignition[958]: INFO : files: op(13): [finished] setting preset to disabled for "coreos-metadata.service"
May 9 00:36:31.866427 ignition[958]: INFO : files: op(15): [started] setting preset to enabled for "prepare-helm.service"
May 9 00:36:31.868055 ignition[958]: INFO : files: op(15): [finished] setting preset to enabled for "prepare-helm.service"
May 9 00:36:31.869739 ignition[958]: INFO : files: createResultFile: createFiles: op(16): [started] writing file "/sysroot/etc/.ignition-result.json"
May 9 00:36:31.871577 ignition[958]: INFO : files: createResultFile: createFiles: op(16): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 9 00:36:31.873279 ignition[958]: INFO : files: files passed
May 9 00:36:31.874025 ignition[958]: INFO : Ignition finished successfully
May 9 00:36:31.876996 systemd[1]: Finished ignition-files.service - Ignition (files).
May 9 00:36:31.887899 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 9 00:36:31.890325 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 9 00:36:31.891992 systemd[1]: ignition-quench.service: Deactivated successfully.
May 9 00:36:31.892125 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 9 00:36:31.905021 initrd-setup-root-after-ignition[987]: grep: /sysroot/oem/oem-release: No such file or directory
May 9 00:36:31.908900 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 9 00:36:31.908900 initrd-setup-root-after-ignition[989]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 9 00:36:31.912065 initrd-setup-root-after-ignition[993]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 9 00:36:31.915395 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
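The Ignition entries above use nested `op(N)` identifiers with `[started]`/`[finished]` markers, and a clean run pairs every start with a finish before "files passed" is logged. A minimal sketch (a hypothetical checker, not part of Ignition) that verifies this pairing for the innermost op on each message, using sample messages in the shape seen above with timestamps stripped:

```python
import re
from collections import Counter

# Sample Ignition messages in the shape logged above (timestamps stripped).
LINES = [
    'files: op(13): [started] setting preset to disabled for "coreos-metadata.service"',
    'files: op(13): op(14): [started] removing enablement symlink(s) for "coreos-metadata.service"',
    'files: op(13): op(14): [finished] removing enablement symlink(s) for "coreos-metadata.service"',
    'files: op(13): [finished] setting preset to disabled for "coreos-metadata.service"',
]

def unbalanced_ops(lines):
    """Return op ids whose [started] count differs from their [finished] count."""
    counts = Counter()
    for line in lines:
        ops = re.findall(r"op\(([0-9a-f]+)\)", line)
        if not ops:
            continue
        innermost = ops[-1]  # the marker applies to the innermost (last) op id
        if "[started]" in line:
            counts[innermost] += 1
        elif "[finished]" in line:
            counts[innermost] -= 1
    return [op for op, n in counts.items() if n != 0]

print(unbalanced_ops(LINES))  # → [] : every started op also finished
```

Running the checker over only a prefix of the log (e.g. before the `[finished]` lines arrive) reports the still-open ops, which is how a stalled write would show up.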
May 9 00:36:31.918005 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 9 00:36:31.934956 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 9 00:36:31.962656 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 9 00:36:31.962822 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 9 00:36:31.965429 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 9 00:36:31.967626 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 9 00:36:31.969844 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 9 00:36:31.971177 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 9 00:36:31.991789 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 9 00:36:31.994600 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 9 00:36:32.009057 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 9 00:36:32.010397 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 9 00:36:32.012755 systemd[1]: Stopped target timers.target - Timer Units.
May 9 00:36:32.014918 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 9 00:36:32.015047 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 9 00:36:32.017338 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 9 00:36:32.019053 systemd[1]: Stopped target basic.target - Basic System.
May 9 00:36:32.021126 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 9 00:36:32.023199 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 9 00:36:32.025235 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 9 00:36:32.027402 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 9 00:36:32.029535 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 9 00:36:32.031842 systemd[1]: Stopped target sysinit.target - System Initialization.
May 9 00:36:32.033857 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 9 00:36:32.036140 systemd[1]: Stopped target swap.target - Swaps.
May 9 00:36:32.038080 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 9 00:36:32.038257 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 9 00:36:32.040488 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 9 00:36:32.042113 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 9 00:36:32.044209 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 9 00:36:32.044349 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 9 00:36:32.046498 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 9 00:36:32.046637 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 9 00:36:32.048841 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 9 00:36:32.048973 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 9 00:36:32.050984 systemd[1]: Stopped target paths.target - Path Units.
May 9 00:36:32.052726 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 9 00:36:32.057785 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 9 00:36:32.059791 systemd[1]: Stopped target slices.target - Slice Units.
May 9 00:36:32.061665 systemd[1]: Stopped target sockets.target - Socket Units.
May 9 00:36:32.063672 systemd[1]: iscsid.socket: Deactivated successfully.
May 9 00:36:32.063788 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 9 00:36:32.066143 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 9 00:36:32.066244 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 9 00:36:32.068051 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 9 00:36:32.068161 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 9 00:36:32.070236 systemd[1]: ignition-files.service: Deactivated successfully.
May 9 00:36:32.070342 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 9 00:36:32.080889 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 9 00:36:32.082689 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 9 00:36:32.083951 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 9 00:36:32.084119 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 9 00:36:32.086681 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 9 00:36:32.086854 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 9 00:36:32.096650 ignition[1013]: INFO : Ignition 2.19.0
May 9 00:36:32.096650 ignition[1013]: INFO : Stage: umount
May 9 00:36:32.099451 ignition[1013]: INFO : no configs at "/usr/lib/ignition/base.d"
May 9 00:36:32.099451 ignition[1013]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 9 00:36:32.099451 ignition[1013]: INFO : umount: umount passed
May 9 00:36:32.099451 ignition[1013]: INFO : Ignition finished successfully
May 9 00:36:32.100376 systemd[1]: ignition-mount.service: Deactivated successfully.
May 9 00:36:32.100495 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 9 00:36:32.106164 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 9 00:36:32.108110 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 9 00:36:32.111616 systemd[1]: Stopped target network.target - Network.
May 9 00:36:32.113695 systemd[1]: ignition-disks.service: Deactivated successfully.
May 9 00:36:32.114742 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 9 00:36:32.117559 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 9 00:36:32.118809 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 9 00:36:32.122693 systemd[1]: ignition-setup.service: Deactivated successfully.
May 9 00:36:32.124197 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 9 00:36:32.126864 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 9 00:36:32.126965 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 9 00:36:32.131561 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 9 00:36:32.134543 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 9 00:36:32.139100 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 9 00:36:32.143770 systemd-networkd[781]: eth0: DHCPv6 lease lost
May 9 00:36:32.146994 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 9 00:36:32.147163 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 9 00:36:32.150392 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 9 00:36:32.150572 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 9 00:36:32.153676 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 9 00:36:32.153753 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 9 00:36:32.159804 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 9 00:36:32.160812 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 9 00:36:32.161949 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 9 00:36:32.165766 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 9 00:36:32.165840 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 9 00:36:32.169147 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 9 00:36:32.170156 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 9 00:36:32.172282 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 9 00:36:32.173305 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 9 00:36:32.175845 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 9 00:36:32.187200 systemd[1]: network-cleanup.service: Deactivated successfully.
May 9 00:36:32.187360 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 9 00:36:32.189543 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 9 00:36:32.189758 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 9 00:36:32.192146 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 9 00:36:32.192233 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 9 00:36:32.193467 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 9 00:36:32.193510 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 9 00:36:32.195731 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 9 00:36:32.195782 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 9 00:36:32.198019 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 9 00:36:32.198071 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 9 00:36:32.200023 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 9 00:36:32.200075 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 9 00:36:32.218055 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 9 00:36:32.220450 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 9 00:36:32.220546 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 9 00:36:32.223055 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
May 9 00:36:32.223129 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 9 00:36:32.225928 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 9 00:36:32.226019 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 9 00:36:32.228539 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 9 00:36:32.228616 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 9 00:36:32.231877 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 9 00:36:32.232023 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 9 00:36:32.318219 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 9 00:36:32.318389 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 9 00:36:32.320672 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 9 00:36:32.322634 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 9 00:36:32.322704 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 9 00:36:32.334851 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 9 00:36:32.341586 systemd[1]: Switching root.
May 9 00:36:32.372478 systemd-journald[193]: Journal stopped
May 9 00:36:33.617671 systemd-journald[193]: Received SIGTERM from PID 1 (systemd).
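The entries above all follow one rendered shape: `MON DAY HH:MM:SS.micro source[pid]: message`, with the `[pid]` suffix absent for kernel lines. A minimal parsing sketch under that layout assumption (the pattern and field names are illustrative, not a journald API):

```python
import re

# Assumed rendered layout: "MON DAY HH:MM:SS.micro source[pid]: message",
# where "[pid]" is optional (kernel messages carry no pid).
PATTERN = re.compile(
    r"^(?P<month>\w{3}) +(?P<day>\d{1,2}) "
    r"(?P<time>\d{2}:\d{2}:\d{2}\.\d{6}) "
    r"(?P<source>[\w@.-]+)(?:\[(?P<pid>\d+)\])?: "
    r"(?P<message>.*)$"
)

line = "May 9 00:36:32.372478 systemd-journald[193]: Journal stopped"
m = PATTERN.match(line)
assert m is not None
print(m.group("source"), m.group("pid"), m.group("message"))
# → systemd-journald 193 Journal stopped
```

The same pattern splits kernel entries such as the SELinux lines below, yielding `source == "kernel"` and `pid is None`.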
May 9 00:36:33.617779 kernel: SELinux: policy capability network_peer_controls=1
May 9 00:36:33.617805 kernel: SELinux: policy capability open_perms=1
May 9 00:36:33.617830 kernel: SELinux: policy capability extended_socket_class=1
May 9 00:36:33.617852 kernel: SELinux: policy capability always_check_network=0
May 9 00:36:33.617868 kernel: SELinux: policy capability cgroup_seclabel=1
May 9 00:36:33.617883 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 9 00:36:33.617905 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 9 00:36:33.617923 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 9 00:36:33.617940 kernel: audit: type=1403 audit(1746750992.856:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 9 00:36:33.617956 systemd[1]: Successfully loaded SELinux policy in 40.039ms.
May 9 00:36:33.617996 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.702ms.
May 9 00:36:33.618026 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
May 9 00:36:33.618044 systemd[1]: Detected virtualization kvm.
May 9 00:36:33.618062 systemd[1]: Detected architecture x86-64.
May 9 00:36:33.618079 systemd[1]: Detected first boot.
May 9 00:36:33.618095 systemd[1]: Initializing machine ID from VM UUID.
May 9 00:36:33.618112 zram_generator::config[1079]: No configuration found.
May 9 00:36:33.618130 systemd[1]: Populated /etc with preset unit settings.
May 9 00:36:33.618146 systemd[1]: Queued start job for default target multi-user.target.
May 9 00:36:33.618170 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
May 9 00:36:33.618198 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 9 00:36:33.618215 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 9 00:36:33.618232 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 9 00:36:33.618248 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 9 00:36:33.618264 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 9 00:36:33.618282 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 9 00:36:33.618298 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 9 00:36:33.618323 systemd[1]: Created slice user.slice - User and Session Slice.
May 9 00:36:33.618343 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 9 00:36:33.618361 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 9 00:36:33.618378 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 9 00:36:33.618394 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 9 00:36:33.618411 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 9 00:36:33.618427 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 9 00:36:33.618444 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
May 9 00:36:33.618460 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 9 00:36:33.618484 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 9 00:36:33.618501 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 9 00:36:33.618518 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 9 00:36:33.618534 systemd[1]: Reached target slices.target - Slice Units.
May 9 00:36:33.618550 systemd[1]: Reached target swap.target - Swaps.
May 9 00:36:33.618566 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 9 00:36:33.618582 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 9 00:36:33.618598 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 9 00:36:33.618622 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
May 9 00:36:33.618638 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 9 00:36:33.618654 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 9 00:36:33.618670 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 9 00:36:33.618687 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 9 00:36:33.618707 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 9 00:36:33.618947 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 9 00:36:33.618964 systemd[1]: Mounting media.mount - External Media Directory...
May 9 00:36:33.618981 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 9 00:36:33.619008 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 9 00:36:33.619025 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 9 00:36:33.619042 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 9 00:36:33.619058 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 9 00:36:33.619075 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 9 00:36:33.619092 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 9 00:36:33.619111 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 9 00:36:33.619127 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 9 00:36:33.619147 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 9 00:36:33.619184 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 9 00:36:33.619202 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 9 00:36:33.619219 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 9 00:36:33.619236 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 9 00:36:33.619255 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
May 9 00:36:33.619272 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
May 9 00:36:33.619288 kernel: fuse: init (API version 7.39)
May 9 00:36:33.619305 systemd[1]: Starting systemd-journald.service - Journal Service...
May 9 00:36:33.619330 kernel: loop: module loaded
May 9 00:36:33.619347 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 9 00:36:33.619389 systemd-journald[1171]: Collecting audit messages is disabled.
May 9 00:36:33.619419 kernel: ACPI: bus type drm_connector registered
May 9 00:36:33.619437 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 9 00:36:33.619453 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 9 00:36:33.619470 systemd-journald[1171]: Journal started
May 9 00:36:33.619508 systemd-journald[1171]: Runtime Journal (/run/log/journal/c775cfd9ec614f3185f0ce8668bd5e4b) is 6.0M, max 48.3M, 42.2M free.
May 9 00:36:33.628251 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 9 00:36:33.628296 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 9 00:36:33.631928 systemd[1]: Started systemd-journald.service - Journal Service.
May 9 00:36:33.633745 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 9 00:36:33.635189 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 9 00:36:33.637456 systemd[1]: Mounted media.mount - External Media Directory.
May 9 00:36:33.638745 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 9 00:36:33.640211 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 9 00:36:33.641636 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 9 00:36:33.643295 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 9 00:36:33.645109 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 9 00:36:33.646960 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 9 00:36:33.647266 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 9 00:36:33.649065 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 9 00:36:33.649371 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 9 00:36:33.651054 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 9 00:36:33.651353 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 9 00:36:33.653382 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 9 00:36:33.653664 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 9 00:36:33.655496 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 9 00:36:33.655795 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 9 00:36:33.657475 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 9 00:36:33.657771 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 9 00:36:33.659943 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 9 00:36:33.661651 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 9 00:36:33.663908 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 9 00:36:33.680241 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 9 00:36:33.688808 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 9 00:36:33.691238 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 9 00:36:33.692437 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 9 00:36:33.695883 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 9 00:36:33.699834 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 9 00:36:33.701085 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 9 00:36:33.702366 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 9 00:36:33.704488 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 9 00:36:33.705732 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 9 00:36:33.708007 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 9 00:36:33.710759 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 9 00:36:33.712134 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 9 00:36:33.733092 systemd-journald[1171]: Time spent on flushing to /var/log/journal/c775cfd9ec614f3185f0ce8668bd5e4b is 14.332ms for 986 entries.
May 9 00:36:33.733092 systemd-journald[1171]: System Journal (/var/log/journal/c775cfd9ec614f3185f0ce8668bd5e4b) is 8.0M, max 195.6M, 187.6M free.
May 9 00:36:33.824790 systemd-journald[1171]: Received client request to flush runtime journal.
May 9 00:36:33.736210 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 9 00:36:33.740344 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 9 00:36:33.773510 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 9 00:36:33.777399 systemd-tmpfiles[1212]: ACLs are not supported, ignoring.
May 9 00:36:33.777412 systemd-tmpfiles[1212]: ACLs are not supported, ignoring.
May 9 00:36:33.814365 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
May 9 00:36:33.816206 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 9 00:36:33.817761 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 9 00:36:33.823294 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 9 00:36:33.827926 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 9 00:36:33.831550 udevadm[1222]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
May 9 00:36:33.861563 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 9 00:36:33.875067 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 9 00:36:33.897661 systemd-tmpfiles[1234]: ACLs are not supported, ignoring.
May 9 00:36:33.897689 systemd-tmpfiles[1234]: ACLs are not supported, ignoring.
May 9 00:36:33.935272 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 9 00:36:34.673143 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 9 00:36:34.686883 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 9 00:36:34.712872 systemd-udevd[1240]: Using default interface naming scheme 'v255'.
May 9 00:36:34.732081 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 9 00:36:34.742944 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 9 00:36:34.754873 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 9 00:36:34.776519 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0.
May 9 00:36:34.846029 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1243)
May 9 00:36:34.855655 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 9 00:36:34.884743 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
May 9 00:36:34.895949 kernel: ACPI: button: Power Button [PWRF]
May 9 00:36:34.906920 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
May 9 00:36:34.907220 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
May 9 00:36:34.911912 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
May 9 00:36:34.912436 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
May 9 00:36:34.918548 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 9 00:36:34.929170 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
May 9 00:36:34.941593 systemd-networkd[1246]: lo: Link UP
May 9 00:36:34.941606 systemd-networkd[1246]: lo: Gained carrier
May 9 00:36:34.945410 systemd-networkd[1246]: Enumeration completed
May 9 00:36:34.945548 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 9 00:36:34.947214 systemd-networkd[1246]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 9 00:36:34.947227 systemd-networkd[1246]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 9 00:36:34.950353 systemd-networkd[1246]: eth0: Link UP
May 9 00:36:34.950368 systemd-networkd[1246]: eth0: Gained carrier
May 9 00:36:34.950384 systemd-networkd[1246]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 9 00:36:34.961900 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 9 00:36:34.967957 systemd-networkd[1246]: eth0: DHCPv4 address 10.0.0.109/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 9 00:36:34.979076 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 9 00:36:34.983515 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 9 00:36:34.984029 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 9 00:36:34.987546 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 9 00:36:35.055063 kernel: mousedev: PS/2 mouse device common for all mice
May 9 00:36:35.066949 kernel: kvm_amd: TSC scaling supported
May 9 00:36:35.067010 kernel: kvm_amd: Nested Virtualization enabled
May 9 00:36:35.067030 kernel: kvm_amd: Nested Paging enabled
May 9 00:36:35.068155 kernel: kvm_amd: LBR virtualization supported
May 9 00:36:35.068192 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
May 9 00:36:35.068913 kernel: kvm_amd: Virtual GIF supported
May 9 00:36:35.090751 kernel: EDAC MC: Ver: 3.0.0
May 9 00:36:35.100420 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 9 00:36:35.132058 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
May 9 00:36:35.143978 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
May 9 00:36:35.153057 lvm[1289]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 9 00:36:35.186383 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
May 9 00:36:35.188014 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 9 00:36:35.199878 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
May 9 00:36:35.206492 lvm[1292]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 9 00:36:35.238227 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
May 9 00:36:35.239783 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 9 00:36:35.241063 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 9 00:36:35.241091 systemd[1]: Reached target local-fs.target - Local File Systems.
May 9 00:36:35.242157 systemd[1]: Reached target machines.target - Containers.
May 9 00:36:35.244315 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
May 9 00:36:35.254910 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 9 00:36:35.257838 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 9 00:36:35.259229 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 9 00:36:35.260326 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 9 00:36:35.263026 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
May 9 00:36:35.266884 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 9 00:36:35.269318 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 9 00:36:35.285822 kernel: loop0: detected capacity change from 0 to 210664
May 9 00:36:35.290432 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 9 00:36:35.298114 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 9 00:36:35.299149 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
May 9 00:36:35.305750 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 9 00:36:35.334760 kernel: loop1: detected capacity change from 0 to 142488
May 9 00:36:35.370753 kernel: loop2: detected capacity change from 0 to 140768
May 9 00:36:35.406751 kernel: loop3: detected capacity change from 0 to 210664
May 9 00:36:35.415748 kernel: loop4: detected capacity change from 0 to 142488
May 9 00:36:35.424763 kernel: loop5: detected capacity change from 0 to 140768
May 9 00:36:35.434010 (sd-merge)[1314]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
May 9 00:36:35.434641 (sd-merge)[1314]: Merged extensions into '/usr'.
May 9 00:36:35.439252 systemd[1]: Reloading requested from client PID 1300 ('systemd-sysext') (unit systemd-sysext.service)...
May 9 00:36:35.439270 systemd[1]: Reloading...
May 9 00:36:35.486827 zram_generator::config[1343]: No configuration found.
May 9 00:36:35.551275 ldconfig[1296]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 9 00:36:35.625603 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 9 00:36:35.690694 systemd[1]: Reloading finished in 250 ms.
May 9 00:36:35.711217 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 9 00:36:35.712968 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 9 00:36:35.735873 systemd[1]: Starting ensure-sysext.service...
May 9 00:36:35.743641 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 9 00:36:35.765644 systemd[1]: Reloading requested from client PID 1386 ('systemctl') (unit ensure-sysext.service)...
May 9 00:36:35.765659 systemd[1]: Reloading...
May 9 00:36:35.780572 systemd-tmpfiles[1387]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 9 00:36:35.780975 systemd-tmpfiles[1387]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 9 00:36:35.782140 systemd-tmpfiles[1387]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 9 00:36:35.782511 systemd-tmpfiles[1387]: ACLs are not supported, ignoring.
May 9 00:36:35.782607 systemd-tmpfiles[1387]: ACLs are not supported, ignoring.
May 9 00:36:35.786226 systemd-tmpfiles[1387]: Detected autofs mount point /boot during canonicalization of boot.
May 9 00:36:35.786238 systemd-tmpfiles[1387]: Skipping /boot
May 9 00:36:35.805564 systemd-tmpfiles[1387]: Detected autofs mount point /boot during canonicalization of boot.
May 9 00:36:35.805584 systemd-tmpfiles[1387]: Skipping /boot
May 9 00:36:35.812800 zram_generator::config[1416]: No configuration found.
May 9 00:36:35.931655 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 9 00:36:36.000639 systemd[1]: Reloading finished in 234 ms.
May 9 00:36:36.021997 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 9 00:36:36.041593 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
May 9 00:36:36.044250 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 9 00:36:36.046965 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 9 00:36:36.051212 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 9 00:36:36.056155 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 9 00:36:36.062879 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 9 00:36:36.063112 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 9 00:36:36.066894 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 9 00:36:36.073310 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 9 00:36:36.081020 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 9 00:36:36.084638 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 9 00:36:36.084868 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 9 00:36:36.086104 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 9 00:36:36.088474 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 9 00:36:36.088705 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 9 00:36:36.092025 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 9 00:36:36.092275 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 9 00:36:36.103052 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 9 00:36:36.103302 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 9 00:36:36.106739 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 9 00:36:36.111018 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 9 00:36:36.112147 augenrules[1491]: No rules
May 9 00:36:36.112516 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 9 00:36:36.117225 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 9 00:36:36.118310 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 9 00:36:36.120076 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
May 9 00:36:36.123056 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 9 00:36:36.123394 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 9 00:36:36.125442 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 9 00:36:36.125895 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 9 00:36:36.128166 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 9 00:36:36.128471 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 9 00:36:36.132787 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 9 00:36:36.134833 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 9 00:36:36.142431 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 9 00:36:36.147147 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 9 00:36:36.147472 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 9 00:36:36.159966 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 9 00:36:36.162501 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 9 00:36:36.164478 systemd-resolved[1465]: Positive Trust Anchors:
May 9 00:36:36.164488 systemd-resolved[1465]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 9 00:36:36.164532 systemd-resolved[1465]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 9 00:36:36.165092 systemd-networkd[1246]: eth0: Gained IPv6LL
May 9 00:36:36.168537 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 9 00:36:36.168792 systemd-resolved[1465]: Defaulting to hostname 'linux'.
May 9 00:36:36.179976 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 9 00:36:36.181209 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 9 00:36:36.181345 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 9 00:36:36.181435 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 9 00:36:36.182425 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 9 00:36:36.184293 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
May 9 00:36:36.186253 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 9 00:36:36.186477 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 9 00:36:36.188132 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 9 00:36:36.188343 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 9 00:36:36.189936 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 9 00:36:36.190160 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 9 00:36:36.191884 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 9 00:36:36.192129 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 9 00:36:36.196594 systemd[1]: Finished ensure-sysext.service.
May 9 00:36:36.201021 systemd[1]: Reached target network.target - Network.
May 9 00:36:36.201990 systemd[1]: Reached target network-online.target - Network is Online.
May 9 00:36:36.203201 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 9 00:36:36.204443 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 9 00:36:36.204508 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 9 00:36:36.215877 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
May 9 00:36:36.279773 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
May 9 00:36:36.281359 systemd[1]: Reached target sysinit.target - System Initialization.
May 9 00:36:36.926628 systemd-resolved[1465]: Clock change detected. Flushing caches.
May 9 00:36:36.926696 systemd-timesyncd[1531]: Contacted time server 10.0.0.1:123 (10.0.0.1).
May 9 00:36:36.926753 systemd-timesyncd[1531]: Initial clock synchronization to Fri 2025-05-09 00:36:36.926563 UTC.
May 9 00:36:36.927413 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
May 9 00:36:36.928730 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
May 9 00:36:36.930005 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
May 9 00:36:36.931300 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 9 00:36:36.931332 systemd[1]: Reached target paths.target - Path Units.
May 9 00:36:36.932276 systemd[1]: Reached target time-set.target - System Time Set.
May 9 00:36:36.933528 systemd[1]: Started logrotate.timer - Daily rotation of log files.
May 9 00:36:36.934777 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
May 9 00:36:36.936040 systemd[1]: Reached target timers.target - Timer Units.
May 9 00:36:36.937787 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
May 9 00:36:36.940883 systemd[1]: Starting docker.socket - Docker Socket for the API...
May 9 00:36:36.943613 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
May 9 00:36:36.948556 systemd[1]: Listening on docker.socket - Docker Socket for the API.
May 9 00:36:36.949984 systemd[1]: Reached target sockets.target - Socket Units.
May 9 00:36:36.951080 systemd[1]: Reached target basic.target - Basic System.
May 9 00:36:36.952376 systemd[1]: System is tainted: cgroupsv1
May 9 00:36:36.952425 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
May 9 00:36:36.952459 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
May 9 00:36:36.953970 systemd[1]: Starting containerd.service - containerd container runtime...
May 9 00:36:36.956706 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
May 9 00:36:36.959453 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
May 9 00:36:36.964387 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
May 9 00:36:36.970047 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
May 9 00:36:36.971929 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
May 9 00:36:36.974897 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 9 00:36:36.978649 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
May 9 00:36:36.981390 jq[1538]: false
May 9 00:36:36.984377 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
May 9 00:36:36.989000 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
May 9 00:36:36.995924 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
May 9 00:36:36.998933 extend-filesystems[1540]: Found loop3
May 9 00:36:37.006471 extend-filesystems[1540]: Found loop4
May 9 00:36:37.006471 extend-filesystems[1540]: Found loop5
May 9 00:36:37.006471 extend-filesystems[1540]: Found sr0
May 9 00:36:37.006471 extend-filesystems[1540]: Found vda
May 9 00:36:37.006471 extend-filesystems[1540]: Found vda1
May 9 00:36:37.006471 extend-filesystems[1540]: Found vda2
May 9 00:36:37.006471 extend-filesystems[1540]: Found vda3
May 9 00:36:37.006471 extend-filesystems[1540]: Found usr
May 9 00:36:37.006471 extend-filesystems[1540]: Found vda4
May 9 00:36:37.006471 extend-filesystems[1540]: Found vda6
May 9 00:36:37.006471 extend-filesystems[1540]: Found vda7
May 9 00:36:37.006471 extend-filesystems[1540]: Found vda9
May 9 00:36:37.006471 extend-filesystems[1540]: Checking size of /dev/vda9
May 9 00:36:37.002024 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
May 9 00:36:36.999885 dbus-daemon[1537]: [system] SELinux support is enabled
May 9 00:36:37.028585 systemd[1]: Starting systemd-logind.service - User Login Management...
May 9 00:36:37.029814 extend-filesystems[1540]: Resized partition /dev/vda9
May 9 00:36:37.042462 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
May 9 00:36:37.030857 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
May 9 00:36:37.042864 extend-filesystems[1569]: resize2fs 1.47.1 (20-May-2024)
May 9 00:36:37.039470 systemd[1]: Starting update-engine.service - Update Engine...
May 9 00:36:37.047408 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
May 9 00:36:37.051593 systemd[1]: Started dbus.service - D-Bus System Message Bus.
May 9 00:36:37.058434 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1248)
May 9 00:36:37.072406 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 9 00:36:37.072891 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
May 9 00:36:37.090586 update_engine[1570]: I20250509 00:36:37.090469 1570 main.cc:92] Flatcar Update Engine starting
May 9 00:36:37.093023 update_engine[1570]: I20250509 00:36:37.092983 1570 update_check_scheduler.cc:74] Next update check in 2m12s
May 9 00:36:37.093806 jq[1573]: true
May 9 00:36:37.095695 systemd[1]: motdgen.service: Deactivated successfully.
May 9 00:36:37.097612 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
May 9 00:36:37.103599 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
May 9 00:36:37.120716 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
May 9 00:36:37.121238 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
May 9 00:36:37.168976 (ntainerd)[1585]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
May 9 00:36:37.185443 jq[1584]: true
May 9 00:36:37.183572 systemd[1]: coreos-metadata.service: Deactivated successfully.
May 9 00:36:37.184105 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
May 9 00:36:37.211825 kernel: EXT4-fs (vda9): resized filesystem to 1864699
May 9 00:36:37.235310 systemd[1]: Started update-engine.service - Update Engine.
May 9 00:36:37.237913 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
May 9 00:36:37.238079 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
May 9 00:36:37.238112 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
May 9 00:36:37.240106 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
May 9 00:36:37.240137 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
May 9 00:36:37.243688 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
May 9 00:36:37.252641 systemd-logind[1564]: Watching system buttons on /dev/input/event1 (Power Button)
May 9 00:36:37.252761 systemd-logind[1564]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
May 9 00:36:37.267674 extend-filesystems[1569]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
May 9 00:36:37.267674 extend-filesystems[1569]: old_desc_blocks = 1, new_desc_blocks = 1
May 9 00:36:37.267674 extend-filesystems[1569]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
May 9 00:36:37.259893 systemd-logind[1564]: New seat seat0.
May 9 00:36:37.278282 extend-filesystems[1540]: Resized filesystem in /dev/vda9
May 9 00:36:37.266948 systemd[1]: Started locksmithd.service - Cluster reboot manager.
May 9 00:36:37.269164 systemd[1]: Started systemd-logind.service - User Login Management.
May 9 00:36:37.272566 systemd[1]: extend-filesystems.service: Deactivated successfully.
May 9 00:36:37.278300 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
May 9 00:36:37.292202 tar[1583]: linux-amd64/helm
May 9 00:36:37.340016 sshd_keygen[1568]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
May 9 00:36:37.357753 bash[1621]: Updated "/home/core/.ssh/authorized_keys"
May 9 00:36:37.358960 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
May 9 00:36:37.363132 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
May 9 00:36:37.376887 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
May 9 00:36:37.381897 locksmithd[1613]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
May 9 00:36:37.391950 systemd[1]: Starting issuegen.service - Generate /run/issue...
May 9 00:36:37.404205 systemd[1]: issuegen.service: Deactivated successfully.
May 9 00:36:37.404826 systemd[1]: Finished issuegen.service - Generate /run/issue.
May 9 00:36:37.418418 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
May 9 00:36:37.552812 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
May 9 00:36:37.565025 systemd[1]: Started getty@tty1.service - Getty on tty1.
May 9 00:36:37.575286 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
May 9 00:36:37.577106 systemd[1]: Reached target getty.target - Login Prompts.
May 9 00:36:37.824886 containerd[1585]: time="2025-05-09T00:36:37.824718659Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
May 9 00:36:37.917617 containerd[1585]: time="2025-05-09T00:36:37.917548290Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
May 9 00:36:37.919996 containerd[1585]: time="2025-05-09T00:36:37.919938544Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.89-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
May 9 00:36:37.919996 containerd[1585]: time="2025-05-09T00:36:37.919994439Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
May 9 00:36:37.920063 containerd[1585]: time="2025-05-09T00:36:37.920018273Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
May 9 00:36:37.920323 containerd[1585]: time="2025-05-09T00:36:37.920305813Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
May 9 00:36:37.921680 containerd[1585]: time="2025-05-09T00:36:37.920329407Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
May 9 00:36:37.921680 containerd[1585]: time="2025-05-09T00:36:37.920413054Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
May 9 00:36:37.921680 containerd[1585]: time="2025-05-09T00:36:37.920428303Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
May 9 00:36:37.921680 containerd[1585]: time="2025-05-09T00:36:37.920840436Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 9 00:36:37.921680 containerd[1585]: time="2025-05-09T00:36:37.920858860Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
May 9 00:36:37.921680 containerd[1585]: time="2025-05-09T00:36:37.920872746Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
May 9 00:36:37.921680 containerd[1585]: time="2025-05-09T00:36:37.920882364Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
May 9 00:36:37.921680 containerd[1585]: time="2025-05-09T00:36:37.921060989Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
May 9 00:36:37.921680 containerd[1585]: time="2025-05-09T00:36:37.921385909Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
May 9 00:36:37.921866 containerd[1585]: time="2025-05-09T00:36:37.921740243Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 9 00:36:37.921866 containerd[1585]: time="2025-05-09T00:36:37.921757476Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
May 9 00:36:37.922446 containerd[1585]: time="2025-05-09T00:36:37.921975465Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
May 9 00:36:37.922446 containerd[1585]: time="2025-05-09T00:36:37.922057158Z" level=info msg="metadata content store policy set" policy=shared
May 9 00:36:37.972518 containerd[1585]: time="2025-05-09T00:36:37.972429362Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
May 9 00:36:37.972518 containerd[1585]: time="2025-05-09T00:36:37.972535861Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
May 9 00:36:37.972727 containerd[1585]: time="2025-05-09T00:36:37.972560257Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
May 9 00:36:37.972727 containerd[1585]: time="2025-05-09T00:36:37.972581968Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
May 9 00:36:37.972727 containerd[1585]: time="2025-05-09T00:36:37.972610682Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
May 9 00:36:37.973043 containerd[1585]: time="2025-05-09T00:36:37.972920984Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
May 9 00:36:37.975644 containerd[1585]: time="2025-05-09T00:36:37.973665370Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
May 9 00:36:37.975644 containerd[1585]: time="2025-05-09T00:36:37.973918104Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
May 9 00:36:37.975644 containerd[1585]: time="2025-05-09T00:36:37.973935887Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
May 9 00:36:37.975644 containerd[1585]: time="2025-05-09T00:36:37.973949443Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
May 9 00:36:37.975644 containerd[1585]: time="2025-05-09T00:36:37.973969951Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
May 9 00:36:37.975644 containerd[1585]: time="2025-05-09T00:36:37.973986733Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
May 9 00:36:37.975644 containerd[1585]: time="2025-05-09T00:36:37.973999196Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
May 9 00:36:37.975644 containerd[1585]: time="2025-05-09T00:36:37.974013864Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
May 9 00:36:37.975644 containerd[1585]: time="2025-05-09T00:36:37.974032749Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
May 9 00:36:37.975644 containerd[1585]: time="2025-05-09T00:36:37.974053057Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
May 9 00:36:37.975644 containerd[1585]: time="2025-05-09T00:36:37.974079958Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
May 9 00:36:37.975644 containerd[1585]: time="2025-05-09T00:36:37.974098663Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
May 9 00:36:37.975644 containerd[1585]: time="2025-05-09T00:36:37.974135772Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
May 9 00:36:37.975644 containerd[1585]: time="2025-05-09T00:36:37.974157643Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
May 9 00:36:37.975998 containerd[1585]: time="2025-05-09T00:36:37.974171519Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
May 9 00:36:37.975998 containerd[1585]: time="2025-05-09T00:36:37.974184073Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
May 9 00:36:37.975998 containerd[1585]: time="2025-05-09T00:36:37.974200754Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
May 9 00:36:37.975998 containerd[1585]: time="2025-05-09T00:36:37.974217035Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
May 9 00:36:37.975998 containerd[1585]: time="2025-05-09T00:36:37.974228637Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
May 9 00:36:37.975998 containerd[1585]: time="2025-05-09T00:36:37.974243504Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
May 9 00:36:37.975998 containerd[1585]: time="2025-05-09T00:36:37.974279332Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
May 9 00:36:37.975998 containerd[1585]: time="2025-05-09T00:36:37.974303938Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
May 9 00:36:37.975998 containerd[1585]: time="2025-05-09T00:36:37.974322382Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
May 9 00:36:37.975998 containerd[1585]: time="2025-05-09T00:36:37.974344614Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
May 9 00:36:37.975998 containerd[1585]: time="2025-05-09T00:36:37.974369070Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
May 9 00:36:37.975998 containerd[1585]: time="2025-05-09T00:36:37.974399888Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
May 9 00:36:37.975998 containerd[1585]: time="2025-05-09T00:36:37.974430325Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
May 9 00:36:37.975998 containerd[1585]: time="2025-05-09T00:36:37.974446014Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
May 9 00:36:37.975998 containerd[1585]: time="2025-05-09T00:36:37.974463257Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
May 9 00:36:37.976480 containerd[1585]: time="2025-05-09T00:36:37.974523780Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
May 9 00:36:37.976480 containerd[1585]: time="2025-05-09T00:36:37.974549438Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
May 9 00:36:37.976480 containerd[1585]: time="2025-05-09T00:36:37.974565528Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
May 9 00:36:37.976480 containerd[1585]: time="2025-05-09T00:36:37.974587039Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
May 9 00:36:37.976480 containerd[1585]: time="2025-05-09T00:36:37.974601175Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
May 9 00:36:37.976480 containerd[1585]: time="2025-05-09T00:36:37.974646430Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
May 9 00:36:37.976480 containerd[1585]: time="2025-05-09T00:36:37.974667730Z" level=info msg="NRI interface is disabled by configuration."
May 9 00:36:37.976480 containerd[1585]: time="2025-05-09T00:36:37.974696344Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
May 9 00:36:37.976639 containerd[1585]: time="2025-05-09T00:36:37.975120680Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
May 9 00:36:37.976639 containerd[1585]: time="2025-05-09T00:36:37.975214696Z" level=info msg="Connect containerd service"
May 9 00:36:37.976639 containerd[1585]: time="2025-05-09T00:36:37.975278285Z" level=info msg="using legacy CRI server"
May 9 00:36:37.976639 containerd[1585]: time="2025-05-09T00:36:37.975286982Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
May 9 00:36:37.976639 containerd[1585]: time="2025-05-09T00:36:37.975418639Z"
level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 9 00:36:37.979673 containerd[1585]: time="2025-05-09T00:36:37.979600703Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 9 00:36:37.979830 containerd[1585]: time="2025-05-09T00:36:37.979738391Z" level=info msg="Start subscribing containerd event" May 9 00:36:37.979830 containerd[1585]: time="2025-05-09T00:36:37.979788405Z" level=info msg="Start recovering state" May 9 00:36:37.979922 containerd[1585]: time="2025-05-09T00:36:37.979880468Z" level=info msg="Start event monitor" May 9 00:36:37.979922 containerd[1585]: time="2025-05-09T00:36:37.979904753Z" level=info msg="Start snapshots syncer" May 9 00:36:37.979922 containerd[1585]: time="2025-05-09T00:36:37.979918890Z" level=info msg="Start cni network conf syncer for default" May 9 00:36:37.980094 containerd[1585]: time="2025-05-09T00:36:37.979930942Z" level=info msg="Start streaming server" May 9 00:36:37.980759 containerd[1585]: time="2025-05-09T00:36:37.980712969Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 9 00:36:37.980811 containerd[1585]: time="2025-05-09T00:36:37.980795294Z" level=info msg=serving... address=/run/containerd/containerd.sock May 9 00:36:37.981365 containerd[1585]: time="2025-05-09T00:36:37.981332341Z" level=info msg="containerd successfully booted in 0.158259s" May 9 00:36:37.982230 systemd[1]: Started containerd.service - containerd container runtime. May 9 00:36:38.165321 tar[1583]: linux-amd64/LICENSE May 9 00:36:38.165469 tar[1583]: linux-amd64/README.md May 9 00:36:38.182951 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 9 00:36:38.821538 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 9 00:36:38.823555 systemd[1]: Reached target multi-user.target - Multi-User System. May 9 00:36:38.826499 systemd[1]: Startup finished in 6.967s (kernel) + 5.364s (userspace) = 12.332s. May 9 00:36:38.830815 (kubelet)[1671]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 9 00:36:40.222657 kubelet[1671]: E0509 00:36:40.222561 1671 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 9 00:36:40.227765 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 9 00:36:40.228272 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 9 00:36:40.433086 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 9 00:36:40.444489 systemd[1]: Started sshd@0-10.0.0.109:22-10.0.0.1:42906.service - OpenSSH per-connection server daemon (10.0.0.1:42906). May 9 00:36:40.483560 sshd[1685]: Accepted publickey for core from 10.0.0.1 port 42906 ssh2: RSA SHA256:YkFjw59PeYd0iJo8o6yRNOqCW4DsIah6oVydwFHJQdU May 9 00:36:40.485828 sshd[1685]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:36:40.496132 systemd-logind[1564]: New session 1 of user core. May 9 00:36:40.497435 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 9 00:36:40.509549 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 9 00:36:40.527658 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 9 00:36:40.541606 systemd[1]: Starting user@500.service - User Manager for UID 500... 
May 9 00:36:40.544715 (systemd)[1691]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 9 00:36:40.714155 systemd[1691]: Queued start job for default target default.target. May 9 00:36:40.714613 systemd[1691]: Created slice app.slice - User Application Slice. May 9 00:36:40.714632 systemd[1691]: Reached target paths.target - Paths. May 9 00:36:40.714644 systemd[1691]: Reached target timers.target - Timers. May 9 00:36:40.725359 systemd[1691]: Starting dbus.socket - D-Bus User Message Bus Socket... May 9 00:36:40.733732 systemd[1691]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 9 00:36:40.733838 systemd[1691]: Reached target sockets.target - Sockets. May 9 00:36:40.733859 systemd[1691]: Reached target basic.target - Basic System. May 9 00:36:40.733917 systemd[1691]: Reached target default.target - Main User Target. May 9 00:36:40.733966 systemd[1691]: Startup finished in 182ms. May 9 00:36:40.734493 systemd[1]: Started user@500.service - User Manager for UID 500. May 9 00:36:40.736146 systemd[1]: Started session-1.scope - Session 1 of User core. May 9 00:36:40.800826 systemd[1]: Started sshd@1-10.0.0.109:22-10.0.0.1:42920.service - OpenSSH per-connection server daemon (10.0.0.1:42920). May 9 00:36:40.845944 sshd[1703]: Accepted publickey for core from 10.0.0.1 port 42920 ssh2: RSA SHA256:YkFjw59PeYd0iJo8o6yRNOqCW4DsIah6oVydwFHJQdU May 9 00:36:40.849650 sshd[1703]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:36:40.864633 systemd-logind[1564]: New session 2 of user core. May 9 00:36:40.876918 systemd[1]: Started session-2.scope - Session 2 of User core. May 9 00:36:40.960433 sshd[1703]: pam_unix(sshd:session): session closed for user core May 9 00:36:40.981233 systemd[1]: Started sshd@2-10.0.0.109:22-10.0.0.1:42928.service - OpenSSH per-connection server daemon (10.0.0.1:42928). 
May 9 00:36:40.981952 systemd[1]: sshd@1-10.0.0.109:22-10.0.0.1:42920.service: Deactivated successfully. May 9 00:36:40.993150 systemd-logind[1564]: Session 2 logged out. Waiting for processes to exit. May 9 00:36:40.996309 systemd[1]: session-2.scope: Deactivated successfully. May 9 00:36:40.997890 systemd-logind[1564]: Removed session 2. May 9 00:36:41.036293 sshd[1708]: Accepted publickey for core from 10.0.0.1 port 42928 ssh2: RSA SHA256:YkFjw59PeYd0iJo8o6yRNOqCW4DsIah6oVydwFHJQdU May 9 00:36:41.038848 sshd[1708]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:36:41.048623 systemd-logind[1564]: New session 3 of user core. May 9 00:36:41.062899 systemd[1]: Started session-3.scope - Session 3 of User core. May 9 00:36:41.124130 sshd[1708]: pam_unix(sshd:session): session closed for user core May 9 00:36:41.135816 systemd[1]: Started sshd@3-10.0.0.109:22-10.0.0.1:42932.service - OpenSSH per-connection server daemon (10.0.0.1:42932). May 9 00:36:41.136663 systemd[1]: sshd@2-10.0.0.109:22-10.0.0.1:42928.service: Deactivated successfully. May 9 00:36:41.140709 systemd-logind[1564]: Session 3 logged out. Waiting for processes to exit. May 9 00:36:41.141847 systemd[1]: session-3.scope: Deactivated successfully. May 9 00:36:41.147934 systemd-logind[1564]: Removed session 3. May 9 00:36:41.221343 sshd[1716]: Accepted publickey for core from 10.0.0.1 port 42932 ssh2: RSA SHA256:YkFjw59PeYd0iJo8o6yRNOqCW4DsIah6oVydwFHJQdU May 9 00:36:41.222228 sshd[1716]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:36:41.231787 systemd-logind[1564]: New session 4 of user core. May 9 00:36:41.243948 systemd[1]: Started session-4.scope - Session 4 of User core. May 9 00:36:41.347968 sshd[1716]: pam_unix(sshd:session): session closed for user core May 9 00:36:41.355862 systemd[1]: Started sshd@4-10.0.0.109:22-10.0.0.1:42934.service - OpenSSH per-connection server daemon (10.0.0.1:42934). 
May 9 00:36:41.357351 systemd[1]: sshd@3-10.0.0.109:22-10.0.0.1:42932.service: Deactivated successfully. May 9 00:36:41.366477 systemd-logind[1564]: Session 4 logged out. Waiting for processes to exit. May 9 00:36:41.372314 systemd[1]: session-4.scope: Deactivated successfully. May 9 00:36:41.377646 systemd-logind[1564]: Removed session 4. May 9 00:36:41.401645 sshd[1724]: Accepted publickey for core from 10.0.0.1 port 42934 ssh2: RSA SHA256:YkFjw59PeYd0iJo8o6yRNOqCW4DsIah6oVydwFHJQdU May 9 00:36:41.404986 sshd[1724]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:36:41.415857 systemd-logind[1564]: New session 5 of user core. May 9 00:36:41.425009 systemd[1]: Started session-5.scope - Session 5 of User core. May 9 00:36:41.521884 sudo[1731]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 9 00:36:41.522456 sudo[1731]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 9 00:36:41.561459 sudo[1731]: pam_unix(sudo:session): session closed for user root May 9 00:36:41.566344 sshd[1724]: pam_unix(sshd:session): session closed for user core May 9 00:36:41.576169 systemd[1]: sshd@4-10.0.0.109:22-10.0.0.1:42934.service: Deactivated successfully. May 9 00:36:41.582498 systemd[1]: session-5.scope: Deactivated successfully. May 9 00:36:41.586855 systemd-logind[1564]: Session 5 logged out. Waiting for processes to exit. May 9 00:36:41.597813 systemd[1]: Started sshd@5-10.0.0.109:22-10.0.0.1:42938.service - OpenSSH per-connection server daemon (10.0.0.1:42938). May 9 00:36:41.605982 systemd-logind[1564]: Removed session 5. May 9 00:36:41.639566 sshd[1736]: Accepted publickey for core from 10.0.0.1 port 42938 ssh2: RSA SHA256:YkFjw59PeYd0iJo8o6yRNOqCW4DsIah6oVydwFHJQdU May 9 00:36:41.642357 sshd[1736]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:36:41.657063 systemd-logind[1564]: New session 6 of user core. 
May 9 00:36:41.670048 systemd[1]: Started session-6.scope - Session 6 of User core. May 9 00:36:41.739587 sudo[1741]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 9 00:36:41.740122 sudo[1741]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 9 00:36:41.753343 sudo[1741]: pam_unix(sudo:session): session closed for user root May 9 00:36:41.771691 sudo[1740]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules May 9 00:36:41.773443 sudo[1740]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 9 00:36:41.806739 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... May 9 00:36:41.810648 auditctl[1744]: No rules May 9 00:36:41.811620 systemd[1]: audit-rules.service: Deactivated successfully. May 9 00:36:41.812073 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. May 9 00:36:41.821366 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... May 9 00:36:41.867244 augenrules[1763]: No rules May 9 00:36:41.868589 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. May 9 00:36:41.871125 sudo[1740]: pam_unix(sudo:session): session closed for user root May 9 00:36:41.874282 sshd[1736]: pam_unix(sshd:session): session closed for user core May 9 00:36:41.883737 systemd[1]: Started sshd@6-10.0.0.109:22-10.0.0.1:42950.service - OpenSSH per-connection server daemon (10.0.0.1:42950). May 9 00:36:41.884551 systemd[1]: sshd@5-10.0.0.109:22-10.0.0.1:42938.service: Deactivated successfully. May 9 00:36:41.889508 systemd-logind[1564]: Session 6 logged out. Waiting for processes to exit. May 9 00:36:41.890738 systemd[1]: session-6.scope: Deactivated successfully. May 9 00:36:41.892294 systemd-logind[1564]: Removed session 6. 
May 9 00:36:41.920148 sshd[1769]: Accepted publickey for core from 10.0.0.1 port 42950 ssh2: RSA SHA256:YkFjw59PeYd0iJo8o6yRNOqCW4DsIah6oVydwFHJQdU May 9 00:36:41.922496 sshd[1769]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:36:41.928778 systemd-logind[1564]: New session 7 of user core. May 9 00:36:41.938744 systemd[1]: Started session-7.scope - Session 7 of User core. May 9 00:36:41.996038 sudo[1777]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 9 00:36:41.996416 sudo[1777]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 9 00:36:42.617493 systemd[1]: Starting docker.service - Docker Application Container Engine... May 9 00:36:42.617761 (dockerd)[1795]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 9 00:36:43.397219 dockerd[1795]: time="2025-05-09T00:36:43.397107257Z" level=info msg="Starting up" May 9 00:36:44.118293 dockerd[1795]: time="2025-05-09T00:36:44.118225358Z" level=info msg="Loading containers: start." May 9 00:36:44.249293 kernel: Initializing XFRM netlink socket May 9 00:36:44.325850 systemd-networkd[1246]: docker0: Link UP May 9 00:36:44.348005 dockerd[1795]: time="2025-05-09T00:36:44.347969743Z" level=info msg="Loading containers: done." May 9 00:36:44.364354 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2970361904-merged.mount: Deactivated successfully. 
May 9 00:36:44.364876 dockerd[1795]: time="2025-05-09T00:36:44.364438514Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 9 00:36:44.364876 dockerd[1795]: time="2025-05-09T00:36:44.364561535Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 May 9 00:36:44.364876 dockerd[1795]: time="2025-05-09T00:36:44.364690977Z" level=info msg="Daemon has completed initialization" May 9 00:36:44.407681 dockerd[1795]: time="2025-05-09T00:36:44.407287606Z" level=info msg="API listen on /run/docker.sock" May 9 00:36:44.407554 systemd[1]: Started docker.service - Docker Application Container Engine. May 9 00:36:45.515153 containerd[1585]: time="2025-05-09T00:36:45.515104013Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\"" May 9 00:36:46.637009 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount491079003.mount: Deactivated successfully. 
May 9 00:36:48.490003 containerd[1585]: time="2025-05-09T00:36:48.489944173Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:36:48.490728 containerd[1585]: time="2025-05-09T00:36:48.490666929Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.12: active requests=0, bytes read=32674873" May 9 00:36:48.491799 containerd[1585]: time="2025-05-09T00:36:48.491762384Z" level=info msg="ImageCreate event name:\"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:36:48.494834 containerd[1585]: time="2025-05-09T00:36:48.494797887Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:36:48.495963 containerd[1585]: time="2025-05-09T00:36:48.495905685Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.12\" with image id \"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\", size \"32671673\" in 2.980746018s" May 9 00:36:48.495963 containerd[1585]: time="2025-05-09T00:36:48.495971188Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\" returns image reference \"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\"" May 9 00:36:48.522002 containerd[1585]: time="2025-05-09T00:36:48.521914991Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\"" May 9 00:36:50.163999 containerd[1585]: time="2025-05-09T00:36:50.163921687Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.12\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:36:50.165140 containerd[1585]: time="2025-05-09T00:36:50.165088435Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.12: active requests=0, bytes read=29617534" May 9 00:36:50.166418 containerd[1585]: time="2025-05-09T00:36:50.166329894Z" level=info msg="ImageCreate event name:\"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:36:50.168955 containerd[1585]: time="2025-05-09T00:36:50.168913961Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:36:50.169798 containerd[1585]: time="2025-05-09T00:36:50.169765889Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.12\" with image id \"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\", size \"31105907\" in 1.647811152s" May 9 00:36:50.169856 containerd[1585]: time="2025-05-09T00:36:50.169797498Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\" returns image reference \"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\"" May 9 00:36:50.194830 containerd[1585]: time="2025-05-09T00:36:50.194771141Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\"" May 9 00:36:50.478161 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 9 00:36:50.492448 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 9 00:36:50.700109 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 9 00:36:50.706269 (kubelet)[2032]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 9 00:36:50.874775 kubelet[2032]: E0509 00:36:50.874461 2032 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 9 00:36:50.881312 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 9 00:36:50.881675 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 9 00:36:51.874411 containerd[1585]: time="2025-05-09T00:36:51.874329483Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:36:51.875865 containerd[1585]: time="2025-05-09T00:36:51.875811523Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.12: active requests=0, bytes read=17903682" May 9 00:36:51.877390 containerd[1585]: time="2025-05-09T00:36:51.877359145Z" level=info msg="ImageCreate event name:\"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:36:51.882346 containerd[1585]: time="2025-05-09T00:36:51.882270237Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:36:51.883289 containerd[1585]: time="2025-05-09T00:36:51.883234917Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.12\" with image id \"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.12\", repo digest 
\"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\", size \"19392073\" in 1.688421336s" May 9 00:36:51.883367 containerd[1585]: time="2025-05-09T00:36:51.883305168Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\" returns image reference \"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\"" May 9 00:36:51.906709 containerd[1585]: time="2025-05-09T00:36:51.906665406Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\"" May 9 00:36:52.902965 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount801594468.mount: Deactivated successfully. May 9 00:36:53.725110 containerd[1585]: time="2025-05-09T00:36:53.725015962Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:36:53.751282 containerd[1585]: time="2025-05-09T00:36:53.751159810Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.12: active requests=0, bytes read=29185817" May 9 00:36:53.756959 containerd[1585]: time="2025-05-09T00:36:53.756890539Z" level=info msg="ImageCreate event name:\"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:36:53.759767 containerd[1585]: time="2025-05-09T00:36:53.759687736Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:36:53.760373 containerd[1585]: time="2025-05-09T00:36:53.760334038Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.12\" with image id \"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\", repo tag \"registry.k8s.io/kube-proxy:v1.30.12\", repo digest 
\"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\", size \"29184836\" in 1.853627235s" May 9 00:36:53.760415 containerd[1585]: time="2025-05-09T00:36:53.760374003Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\"" May 9 00:36:53.785973 containerd[1585]: time="2025-05-09T00:36:53.785924569Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" May 9 00:36:54.390205 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2837704568.mount: Deactivated successfully. May 9 00:36:55.602293 containerd[1585]: time="2025-05-09T00:36:55.602200934Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:36:55.603069 containerd[1585]: time="2025-05-09T00:36:55.602922618Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" May 9 00:36:55.604374 containerd[1585]: time="2025-05-09T00:36:55.604329436Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:36:55.607841 containerd[1585]: time="2025-05-09T00:36:55.607766253Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:36:55.609416 containerd[1585]: time="2025-05-09T00:36:55.609351607Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.82338571s" May 9 00:36:55.609416 containerd[1585]: time="2025-05-09T00:36:55.609394798Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" May 9 00:36:55.641069 containerd[1585]: time="2025-05-09T00:36:55.640997084Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" May 9 00:36:56.232230 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3445876683.mount: Deactivated successfully. May 9 00:36:56.238888 containerd[1585]: time="2025-05-09T00:36:56.238816332Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:36:56.239674 containerd[1585]: time="2025-05-09T00:36:56.239599301Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" May 9 00:36:56.240939 containerd[1585]: time="2025-05-09T00:36:56.240896344Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:36:56.243212 containerd[1585]: time="2025-05-09T00:36:56.243147997Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:36:56.244177 containerd[1585]: time="2025-05-09T00:36:56.244126192Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 603.087961ms" May 9 00:36:56.244177 
containerd[1585]: time="2025-05-09T00:36:56.244168321Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" May 9 00:36:56.273470 containerd[1585]: time="2025-05-09T00:36:56.273411573Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" May 9 00:36:57.083434 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1921993199.mount: Deactivated successfully. May 9 00:37:01.070088 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 9 00:37:01.080928 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 9 00:37:01.504518 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 9 00:37:01.524166 (kubelet)[2184]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 9 00:37:01.842287 kubelet[2184]: E0509 00:37:01.837094 2184 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 9 00:37:01.843615 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 9 00:37:01.843969 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
May 9 00:37:03.862051 containerd[1585]: time="2025-05-09T00:37:03.861930977Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:37:03.863683 containerd[1585]: time="2025-05-09T00:37:03.863604926Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571" May 9 00:37:03.866647 containerd[1585]: time="2025-05-09T00:37:03.866575228Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:37:03.880909 containerd[1585]: time="2025-05-09T00:37:03.877590409Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:37:03.880909 containerd[1585]: time="2025-05-09T00:37:03.879395435Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 7.605938216s" May 9 00:37:03.880909 containerd[1585]: time="2025-05-09T00:37:03.879440560Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" May 9 00:37:08.959607 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 9 00:37:08.975774 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 9 00:37:09.021803 systemd[1]: Reloading requested from client PID 2279 ('systemctl') (unit session-7.scope)... May 9 00:37:09.022024 systemd[1]: Reloading... 
May 9 00:37:09.131773 zram_generator::config[2318]: No configuration found. May 9 00:37:09.537526 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 9 00:37:09.664883 systemd[1]: Reloading finished in 642 ms. May 9 00:37:09.786354 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 9 00:37:09.786555 systemd[1]: kubelet.service: Failed with result 'signal'. May 9 00:37:09.787159 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 9 00:37:09.807821 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 9 00:37:10.067833 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 9 00:37:10.075894 (kubelet)[2377]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 9 00:37:10.371442 kubelet[2377]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 9 00:37:10.371442 kubelet[2377]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 9 00:37:10.371442 kubelet[2377]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 9 00:37:10.371442 kubelet[2377]: I0509 00:37:10.368774 2377 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 9 00:37:10.762591 kubelet[2377]: I0509 00:37:10.762370 2377 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 9 00:37:10.762591 kubelet[2377]: I0509 00:37:10.762430 2377 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 9 00:37:10.762767 kubelet[2377]: I0509 00:37:10.762714 2377 server.go:927] "Client rotation is on, will bootstrap in background" May 9 00:37:10.822760 kubelet[2377]: I0509 00:37:10.822485 2377 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 9 00:37:10.828187 kubelet[2377]: E0509 00:37:10.828133 2377 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.109:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.109:6443: connect: connection refused May 9 00:37:10.864895 kubelet[2377]: I0509 00:37:10.864821 2377 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 9 00:37:10.868133 kubelet[2377]: I0509 00:37:10.867973 2377 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 9 00:37:10.869127 kubelet[2377]: I0509 00:37:10.868115 2377 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 9 00:37:10.869127 kubelet[2377]: I0509 00:37:10.868499 2377 topology_manager.go:138] "Creating topology manager with none policy" May 9 
00:37:10.869127 kubelet[2377]: I0509 00:37:10.868516 2377 container_manager_linux.go:301] "Creating device plugin manager" May 9 00:37:10.869127 kubelet[2377]: I0509 00:37:10.868782 2377 state_mem.go:36] "Initialized new in-memory state store" May 9 00:37:10.873971 kubelet[2377]: I0509 00:37:10.873715 2377 kubelet.go:400] "Attempting to sync node with API server" May 9 00:37:10.873971 kubelet[2377]: I0509 00:37:10.873774 2377 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 9 00:37:10.873971 kubelet[2377]: I0509 00:37:10.873832 2377 kubelet.go:312] "Adding apiserver pod source" May 9 00:37:10.873971 kubelet[2377]: I0509 00:37:10.873871 2377 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 9 00:37:10.879733 kubelet[2377]: W0509 00:37:10.878766 2377 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.109:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.109:6443: connect: connection refused May 9 00:37:10.879733 kubelet[2377]: W0509 00:37:10.878818 2377 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.109:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.109:6443: connect: connection refused May 9 00:37:10.879733 kubelet[2377]: E0509 00:37:10.878886 2377 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.109:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.109:6443: connect: connection refused May 9 00:37:10.879733 kubelet[2377]: E0509 00:37:10.878918 2377 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.109:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.109:6443: connect: connection 
refused May 9 00:37:10.889935 kubelet[2377]: I0509 00:37:10.887849 2377 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 9 00:37:10.894031 kubelet[2377]: I0509 00:37:10.892527 2377 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 9 00:37:10.894031 kubelet[2377]: W0509 00:37:10.892678 2377 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 9 00:37:10.894031 kubelet[2377]: I0509 00:37:10.893814 2377 server.go:1264] "Started kubelet" May 9 00:37:10.896677 kubelet[2377]: I0509 00:37:10.894323 2377 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 9 00:37:10.896677 kubelet[2377]: I0509 00:37:10.895844 2377 server.go:455] "Adding debug handlers to kubelet server" May 9 00:37:10.900080 kubelet[2377]: I0509 00:37:10.897145 2377 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 9 00:37:10.900080 kubelet[2377]: I0509 00:37:10.897589 2377 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 9 00:37:10.905988 kubelet[2377]: I0509 00:37:10.905149 2377 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 9 00:37:10.906824 kubelet[2377]: I0509 00:37:10.906179 2377 volume_manager.go:291] "Starting Kubelet Volume Manager" May 9 00:37:10.906824 kubelet[2377]: I0509 00:37:10.906345 2377 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 9 00:37:10.906824 kubelet[2377]: I0509 00:37:10.906455 2377 reconciler.go:26] "Reconciler: start to sync state" May 9 00:37:10.910006 kubelet[2377]: E0509 00:37:10.909918 2377 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.109:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial 
tcp 10.0.0.109:6443: connect: connection refused" interval="200ms" May 9 00:37:10.910193 kubelet[2377]: E0509 00:37:10.907877 2377 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.109:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.109:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183db4ced3788ce6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-09 00:37:10.893776102 +0000 UTC m=+0.736212824,LastTimestamp:2025-05-09 00:37:10.893776102 +0000 UTC m=+0.736212824,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 9 00:37:10.910374 kubelet[2377]: W0509 00:37:10.910133 2377 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.109:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.109:6443: connect: connection refused May 9 00:37:10.910844 kubelet[2377]: E0509 00:37:10.910580 2377 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.109:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.109:6443: connect: connection refused May 9 00:37:10.912862 kubelet[2377]: I0509 00:37:10.912813 2377 factory.go:221] Registration of the systemd container factory successfully May 9 00:37:10.913036 kubelet[2377]: I0509 00:37:10.913005 2377 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 9 00:37:10.917003 
kubelet[2377]: E0509 00:37:10.914312 2377 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 9 00:37:10.927037 kubelet[2377]: I0509 00:37:10.924809 2377 factory.go:221] Registration of the containerd container factory successfully May 9 00:37:10.968971 kubelet[2377]: I0509 00:37:10.968890 2377 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 9 00:37:10.979242 kubelet[2377]: I0509 00:37:10.978859 2377 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 9 00:37:10.979242 kubelet[2377]: I0509 00:37:10.978937 2377 status_manager.go:217] "Starting to sync pod status with apiserver" May 9 00:37:10.979242 kubelet[2377]: I0509 00:37:10.978980 2377 kubelet.go:2337] "Starting kubelet main sync loop" May 9 00:37:10.979242 kubelet[2377]: E0509 00:37:10.979071 2377 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 9 00:37:10.987854 kubelet[2377]: W0509 00:37:10.979766 2377 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.109:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.109:6443: connect: connection refused May 9 00:37:10.987854 kubelet[2377]: E0509 00:37:10.979819 2377 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.109:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.109:6443: connect: connection refused May 9 00:37:11.003300 kubelet[2377]: I0509 00:37:11.002835 2377 cpu_manager.go:214] "Starting CPU manager" policy="none" May 9 00:37:11.003300 kubelet[2377]: I0509 00:37:11.002860 2377 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 9 00:37:11.003300 
kubelet[2377]: I0509 00:37:11.002894 2377 state_mem.go:36] "Initialized new in-memory state store" May 9 00:37:11.012107 kubelet[2377]: I0509 00:37:11.012038 2377 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 9 00:37:11.012686 kubelet[2377]: E0509 00:37:11.012562 2377 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.109:6443/api/v1/nodes\": dial tcp 10.0.0.109:6443: connect: connection refused" node="localhost" May 9 00:37:11.079338 kubelet[2377]: E0509 00:37:11.079173 2377 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 9 00:37:11.113872 kubelet[2377]: E0509 00:37:11.113658 2377 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.109:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.109:6443: connect: connection refused" interval="400ms" May 9 00:37:11.227362 kubelet[2377]: I0509 00:37:11.226634 2377 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 9 00:37:11.227362 kubelet[2377]: E0509 00:37:11.227286 2377 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.109:6443/api/v1/nodes\": dial tcp 10.0.0.109:6443: connect: connection refused" node="localhost" May 9 00:37:11.281223 kubelet[2377]: E0509 00:37:11.280363 2377 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 9 00:37:11.518785 kubelet[2377]: E0509 00:37:11.518656 2377 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.109:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.109:6443: connect: connection refused" interval="800ms" May 9 00:37:11.636815 kubelet[2377]: I0509 00:37:11.636285 2377 kubelet_node_status.go:73] "Attempting to register 
node" node="localhost" May 9 00:37:11.636815 kubelet[2377]: E0509 00:37:11.636740 2377 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.109:6443/api/v1/nodes\": dial tcp 10.0.0.109:6443: connect: connection refused" node="localhost" May 9 00:37:11.666952 kubelet[2377]: I0509 00:37:11.656205 2377 policy_none.go:49] "None policy: Start" May 9 00:37:11.666952 kubelet[2377]: I0509 00:37:11.664853 2377 memory_manager.go:170] "Starting memorymanager" policy="None" May 9 00:37:11.675721 kubelet[2377]: I0509 00:37:11.673681 2377 state_mem.go:35] "Initializing new in-memory state store" May 9 00:37:11.682349 kubelet[2377]: E0509 00:37:11.682189 2377 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 9 00:37:11.717974 kubelet[2377]: W0509 00:37:11.711792 2377 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.109:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.109:6443: connect: connection refused May 9 00:37:11.717974 kubelet[2377]: E0509 00:37:11.711894 2377 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.109:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.109:6443: connect: connection refused May 9 00:37:11.752025 kubelet[2377]: I0509 00:37:11.748006 2377 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 9 00:37:11.752025 kubelet[2377]: I0509 00:37:11.748378 2377 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 9 00:37:11.752025 kubelet[2377]: I0509 00:37:11.748558 2377 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 9 00:37:11.766313 kubelet[2377]: E0509 00:37:11.763550 2377 eviction_manager.go:282] "Eviction manager: failed to 
get summary stats" err="failed to get node info: node \"localhost\" not found" May 9 00:37:12.088366 kubelet[2377]: W0509 00:37:12.087991 2377 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.109:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.109:6443: connect: connection refused May 9 00:37:12.088366 kubelet[2377]: E0509 00:37:12.088084 2377 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.109:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.109:6443: connect: connection refused May 9 00:37:12.168592 kubelet[2377]: W0509 00:37:12.168457 2377 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.109:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.109:6443: connect: connection refused May 9 00:37:12.168592 kubelet[2377]: E0509 00:37:12.168576 2377 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.109:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.109:6443: connect: connection refused May 9 00:37:12.294308 kubelet[2377]: W0509 00:37:12.293391 2377 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.109:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.109:6443: connect: connection refused May 9 00:37:12.294308 kubelet[2377]: E0509 00:37:12.293514 2377 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.109:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.109:6443: connect: 
connection refused May 9 00:37:12.320289 kubelet[2377]: E0509 00:37:12.320150 2377 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.109:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.109:6443: connect: connection refused" interval="1.6s" May 9 00:37:12.446052 kubelet[2377]: I0509 00:37:12.445495 2377 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 9 00:37:12.446365 kubelet[2377]: E0509 00:37:12.446332 2377 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.109:6443/api/v1/nodes\": dial tcp 10.0.0.109:6443: connect: connection refused" node="localhost" May 9 00:37:12.484967 kubelet[2377]: I0509 00:37:12.482930 2377 topology_manager.go:215] "Topology Admit Handler" podUID="7b1f4da2ee2f6926f0183eb89ff22816" podNamespace="kube-system" podName="kube-apiserver-localhost" May 9 00:37:12.488763 kubelet[2377]: I0509 00:37:12.487935 2377 topology_manager.go:215] "Topology Admit Handler" podUID="b20b39a8540dba87b5883a6f0f602dba" podNamespace="kube-system" podName="kube-controller-manager-localhost" May 9 00:37:12.496280 kubelet[2377]: I0509 00:37:12.494055 2377 topology_manager.go:215] "Topology Admit Handler" podUID="6ece95f10dbffa04b25ec3439a115512" podNamespace="kube-system" podName="kube-scheduler-localhost" May 9 00:37:12.527697 kubelet[2377]: I0509 00:37:12.527283 2377 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7b1f4da2ee2f6926f0183eb89ff22816-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"7b1f4da2ee2f6926f0183eb89ff22816\") " pod="kube-system/kube-apiserver-localhost" May 9 00:37:12.527697 kubelet[2377]: I0509 00:37:12.527346 2377 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 9 00:37:12.527697 kubelet[2377]: I0509 00:37:12.527380 2377 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 9 00:37:12.527697 kubelet[2377]: I0509 00:37:12.527400 2377 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 9 00:37:12.527697 kubelet[2377]: I0509 00:37:12.527422 2377 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 9 00:37:12.528489 kubelet[2377]: I0509 00:37:12.527443 2377 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 9 00:37:12.528489 kubelet[2377]: I0509 00:37:12.527468 2377 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ece95f10dbffa04b25ec3439a115512-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6ece95f10dbffa04b25ec3439a115512\") " pod="kube-system/kube-scheduler-localhost" May 9 00:37:12.528489 kubelet[2377]: I0509 00:37:12.527491 2377 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7b1f4da2ee2f6926f0183eb89ff22816-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"7b1f4da2ee2f6926f0183eb89ff22816\") " pod="kube-system/kube-apiserver-localhost" May 9 00:37:12.528489 kubelet[2377]: I0509 00:37:12.527523 2377 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7b1f4da2ee2f6926f0183eb89ff22816-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"7b1f4da2ee2f6926f0183eb89ff22816\") " pod="kube-system/kube-apiserver-localhost" May 9 00:37:12.812699 kubelet[2377]: E0509 00:37:12.809004 2377 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:37:12.812826 containerd[1585]: time="2025-05-09T00:37:12.810148280Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:7b1f4da2ee2f6926f0183eb89ff22816,Namespace:kube-system,Attempt:0,}" May 9 00:37:12.817442 kubelet[2377]: E0509 00:37:12.816432 2377 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:37:12.817442 kubelet[2377]: E0509 00:37:12.816670 2377 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:37:12.817615 containerd[1585]: 
time="2025-05-09T00:37:12.817145506Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6ece95f10dbffa04b25ec3439a115512,Namespace:kube-system,Attempt:0,}" May 9 00:37:12.817615 containerd[1585]: time="2025-05-09T00:37:12.817517253Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b20b39a8540dba87b5883a6f0f602dba,Namespace:kube-system,Attempt:0,}" May 9 00:37:12.955735 kubelet[2377]: E0509 00:37:12.954610 2377 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.109:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.109:6443: connect: connection refused May 9 00:37:13.396125 kubelet[2377]: E0509 00:37:13.395920 2377 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.109:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.109:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183db4ced3788ce6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-09 00:37:10.893776102 +0000 UTC m=+0.736212824,LastTimestamp:2025-05-09 00:37:10.893776102 +0000 UTC m=+0.736212824,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 9 00:37:13.711025 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount600865905.mount: Deactivated successfully. 
May 9 00:37:13.889407 kubelet[2377]: W0509 00:37:13.889316 2377 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.109:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.109:6443: connect: connection refused May 9 00:37:13.889407 kubelet[2377]: E0509 00:37:13.889376 2377 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.109:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.109:6443: connect: connection refused May 9 00:37:13.921564 kubelet[2377]: E0509 00:37:13.921409 2377 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.109:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.109:6443: connect: connection refused" interval="3.2s" May 9 00:37:13.933057 containerd[1585]: time="2025-05-09T00:37:13.932961422Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 9 00:37:13.973768 containerd[1585]: time="2025-05-09T00:37:13.959496518Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 9 00:37:13.993685 containerd[1585]: time="2025-05-09T00:37:13.991927605Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 9 00:37:14.017545 containerd[1585]: time="2025-05-09T00:37:14.017369421Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" May 9 00:37:14.026143 containerd[1585]: time="2025-05-09T00:37:14.025579916Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 9 00:37:14.051545 kubelet[2377]: I0509 00:37:14.050923 2377 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 9 00:37:14.051545 kubelet[2377]: E0509 00:37:14.051357 2377 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.109:6443/api/v1/nodes\": dial tcp 10.0.0.109:6443: connect: connection refused" node="localhost" May 9 00:37:14.071734 containerd[1585]: time="2025-05-09T00:37:14.071557730Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 9 00:37:14.120773 containerd[1585]: time="2025-05-09T00:37:14.120660214Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 9 00:37:14.142301 containerd[1585]: time="2025-05-09T00:37:14.142164655Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 9 00:37:14.143559 containerd[1585]: time="2025-05-09T00:37:14.143452664Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.326180577s" May 9 00:37:14.201320 containerd[1585]: time="2025-05-09T00:37:14.197691683Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.380059111s" May 9 00:37:14.201320 containerd[1585]: time="2025-05-09T00:37:14.198425599Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.388138714s" May 9 00:37:14.244204 kubelet[2377]: W0509 00:37:14.244007 2377 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.109:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.109:6443: connect: connection refused May 9 00:37:14.244204 kubelet[2377]: E0509 00:37:14.244071 2377 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.109:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.109:6443: connect: connection refused May 9 00:37:14.893126 containerd[1585]: time="2025-05-09T00:37:14.892961111Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 00:37:14.893126 containerd[1585]: time="2025-05-09T00:37:14.893036399Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 00:37:14.893126 containerd[1585]: time="2025-05-09T00:37:14.893068342Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:37:14.893612 containerd[1585]: time="2025-05-09T00:37:14.893490023Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:37:14.894006 containerd[1585]: time="2025-05-09T00:37:14.893889229Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 00:37:14.894006 containerd[1585]: time="2025-05-09T00:37:14.893966502Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 00:37:14.895293 containerd[1585]: time="2025-05-09T00:37:14.893987283Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:37:14.895293 containerd[1585]: time="2025-05-09T00:37:14.894121087Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:37:14.895293 containerd[1585]: time="2025-05-09T00:37:14.894972265Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 00:37:14.895293 containerd[1585]: time="2025-05-09T00:37:14.895147220Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 00:37:14.895293 containerd[1585]: time="2025-05-09T00:37:14.895188060Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:37:14.896249 containerd[1585]: time="2025-05-09T00:37:14.896095209Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:37:14.968521 systemd[1]: run-containerd-runc-k8s.io-7ecd42f91a555862aafd991e8c651fd7807fb164b23f9806a7967d630b036131-runc.tcFoIN.mount: Deactivated successfully. 
May 9 00:37:15.066557 containerd[1585]: time="2025-05-09T00:37:15.066144166Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:7b1f4da2ee2f6926f0183eb89ff22816,Namespace:kube-system,Attempt:0,} returns sandbox id \"7f6684f778cc6cdba9e117dfe633d79f6b8bc3360e915efd65be25e798bb6ce4\"" May 9 00:37:15.067465 containerd[1585]: time="2025-05-09T00:37:15.067428009Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6ece95f10dbffa04b25ec3439a115512,Namespace:kube-system,Attempt:0,} returns sandbox id \"39fdbb2d495d6ca0b4fa155d613289bbff728eeef95bf82884f13d0f1a473017\"" May 9 00:37:15.068013 kubelet[2377]: E0509 00:37:15.067987 2377 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:37:15.068342 kubelet[2377]: E0509 00:37:15.068177 2377 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:37:15.070687 containerd[1585]: time="2025-05-09T00:37:15.070643668Z" level=info msg="CreateContainer within sandbox \"7f6684f778cc6cdba9e117dfe633d79f6b8bc3360e915efd65be25e798bb6ce4\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 9 00:37:15.070941 containerd[1585]: time="2025-05-09T00:37:15.070912476Z" level=info msg="CreateContainer within sandbox \"39fdbb2d495d6ca0b4fa155d613289bbff728eeef95bf82884f13d0f1a473017\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 9 00:37:15.082992 containerd[1585]: time="2025-05-09T00:37:15.082936268Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b20b39a8540dba87b5883a6f0f602dba,Namespace:kube-system,Attempt:0,} returns sandbox id \"7ecd42f91a555862aafd991e8c651fd7807fb164b23f9806a7967d630b036131\"" May 9 00:37:15.084013 
kubelet[2377]: E0509 00:37:15.083980 2377 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:37:15.086768 containerd[1585]: time="2025-05-09T00:37:15.086726897Z" level=info msg="CreateContainer within sandbox \"7ecd42f91a555862aafd991e8c651fd7807fb164b23f9806a7967d630b036131\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 9 00:37:15.101103 containerd[1585]: time="2025-05-09T00:37:15.101034896Z" level=info msg="CreateContainer within sandbox \"7f6684f778cc6cdba9e117dfe633d79f6b8bc3360e915efd65be25e798bb6ce4\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"ced9fee2e499ebaa4ad89d6e2f8b093ef1e26ede948099ac52fde9d82b5d0e7a\"" May 9 00:37:15.102047 containerd[1585]: time="2025-05-09T00:37:15.101997067Z" level=info msg="StartContainer for \"ced9fee2e499ebaa4ad89d6e2f8b093ef1e26ede948099ac52fde9d82b5d0e7a\"" May 9 00:37:15.114598 containerd[1585]: time="2025-05-09T00:37:15.114537453Z" level=info msg="CreateContainer within sandbox \"39fdbb2d495d6ca0b4fa155d613289bbff728eeef95bf82884f13d0f1a473017\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"4ea23439231d5c10cb512ae98f324313f1242fb125334dcee4accd1ec2a15e73\"" May 9 00:37:15.115199 containerd[1585]: time="2025-05-09T00:37:15.115142772Z" level=info msg="StartContainer for \"4ea23439231d5c10cb512ae98f324313f1242fb125334dcee4accd1ec2a15e73\"" May 9 00:37:15.121350 containerd[1585]: time="2025-05-09T00:37:15.121289433Z" level=info msg="CreateContainer within sandbox \"7ecd42f91a555862aafd991e8c651fd7807fb164b23f9806a7967d630b036131\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"862ba0eb7e9db725e83e5a9e4c4f95145a7f2a4ff0516f33928b9b82f904eee2\"" May 9 00:37:15.123279 containerd[1585]: time="2025-05-09T00:37:15.121782171Z" level=info msg="StartContainer for 
\"862ba0eb7e9db725e83e5a9e4c4f95145a7f2a4ff0516f33928b9b82f904eee2\"" May 9 00:37:15.336542 kubelet[2377]: W0509 00:37:15.336462 2377 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.109:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.109:6443: connect: connection refused May 9 00:37:15.336542 kubelet[2377]: E0509 00:37:15.336541 2377 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.109:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.109:6443: connect: connection refused May 9 00:37:15.363019 kubelet[2377]: W0509 00:37:15.362971 2377 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.109:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.109:6443: connect: connection refused May 9 00:37:15.363019 kubelet[2377]: E0509 00:37:15.363021 2377 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.109:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.109:6443: connect: connection refused May 9 00:37:15.489001 containerd[1585]: time="2025-05-09T00:37:15.488912027Z" level=info msg="StartContainer for \"ced9fee2e499ebaa4ad89d6e2f8b093ef1e26ede948099ac52fde9d82b5d0e7a\" returns successfully" May 9 00:37:15.489156 containerd[1585]: time="2025-05-09T00:37:15.489130066Z" level=info msg="StartContainer for \"862ba0eb7e9db725e83e5a9e4c4f95145a7f2a4ff0516f33928b9b82f904eee2\" returns successfully" May 9 00:37:15.489720 containerd[1585]: time="2025-05-09T00:37:15.489653404Z" level=info msg="StartContainer for \"4ea23439231d5c10cb512ae98f324313f1242fb125334dcee4accd1ec2a15e73\" returns successfully" May 9 00:37:16.059015 
kubelet[2377]: E0509 00:37:16.058912 2377 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:37:16.062116 kubelet[2377]: E0509 00:37:16.062068 2377 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:37:16.063657 kubelet[2377]: E0509 00:37:16.063567 2377 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:37:17.066674 kubelet[2377]: E0509 00:37:17.066626 2377 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:37:17.127650 kubelet[2377]: E0509 00:37:17.127592 2377 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" May 9 00:37:17.128732 kubelet[2377]: E0509 00:37:17.128690 2377 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found May 9 00:37:17.253640 kubelet[2377]: I0509 00:37:17.253581 2377 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 9 00:37:17.264729 kubelet[2377]: I0509 00:37:17.264680 2377 kubelet_node_status.go:76] "Successfully registered node" node="localhost" May 9 00:37:17.272583 kubelet[2377]: E0509 00:37:17.272551 2377 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 9 00:37:17.373468 kubelet[2377]: E0509 00:37:17.373393 2377 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 9 00:37:17.474371 
kubelet[2377]: E0509 00:37:17.474320 2377 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 9 00:37:17.574791 kubelet[2377]: E0509 00:37:17.574739 2377 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 9 00:37:17.675424 kubelet[2377]: E0509 00:37:17.675295 2377 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 9 00:37:17.776192 kubelet[2377]: E0509 00:37:17.776146 2377 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 9 00:37:17.877191 kubelet[2377]: E0509 00:37:17.877134 2377 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 9 00:37:17.978341 kubelet[2377]: E0509 00:37:17.978175 2377 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 9 00:37:18.078544 kubelet[2377]: E0509 00:37:18.078485 2377 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 9 00:37:18.179233 kubelet[2377]: E0509 00:37:18.179150 2377 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 9 00:37:18.279927 kubelet[2377]: E0509 00:37:18.279767 2377 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 9 00:37:18.380388 kubelet[2377]: E0509 00:37:18.380318 2377 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 9 00:37:18.481143 kubelet[2377]: E0509 00:37:18.481081 2377 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 9 00:37:18.581725 kubelet[2377]: E0509 00:37:18.581679 2377 kubelet_node_status.go:462] "Error getting the current node from 
lister" err="node \"localhost\" not found" May 9 00:37:18.626283 systemd[1]: Reloading requested from client PID 2655 ('systemctl') (unit session-7.scope)... May 9 00:37:18.626302 systemd[1]: Reloading... May 9 00:37:18.695312 zram_generator::config[2697]: No configuration found. May 9 00:37:18.823239 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 9 00:37:18.890815 kubelet[2377]: I0509 00:37:18.890678 2377 apiserver.go:52] "Watching apiserver" May 9 00:37:18.907100 kubelet[2377]: I0509 00:37:18.907062 2377 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 9 00:37:18.913877 systemd[1]: Reloading finished in 287 ms. May 9 00:37:18.949004 kubelet[2377]: I0509 00:37:18.948955 2377 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 9 00:37:18.949052 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 9 00:37:18.972683 systemd[1]: kubelet.service: Deactivated successfully. May 9 00:37:18.973109 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 9 00:37:18.979796 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 9 00:37:19.172096 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 9 00:37:19.177694 (kubelet)[2749]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 9 00:37:19.231693 kubelet[2749]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 9 00:37:19.231693 kubelet[2749]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 9 00:37:19.231693 kubelet[2749]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 9 00:37:19.232785 kubelet[2749]: I0509 00:37:19.232701 2749 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 9 00:37:19.239541 kubelet[2749]: I0509 00:37:19.239484 2749 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 9 00:37:19.239541 kubelet[2749]: I0509 00:37:19.239518 2749 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 9 00:37:19.239759 kubelet[2749]: I0509 00:37:19.239738 2749 server.go:927] "Client rotation is on, will bootstrap in background" May 9 00:37:19.240991 kubelet[2749]: I0509 00:37:19.240966 2749 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 9 00:37:19.253934 kubelet[2749]: I0509 00:37:19.253899 2749 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 9 00:37:19.262281 kubelet[2749]: I0509 00:37:19.262227 2749 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 9 00:37:19.262861 kubelet[2749]: I0509 00:37:19.262808 2749 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 9 00:37:19.263014 kubelet[2749]: I0509 00:37:19.262839 2749 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 9 00:37:19.263089 kubelet[2749]: I0509 00:37:19.263037 2749 topology_manager.go:138] "Creating topology manager with none policy" May 9 
00:37:19.263089 kubelet[2749]: I0509 00:37:19.263048 2749 container_manager_linux.go:301] "Creating device plugin manager" May 9 00:37:19.263137 kubelet[2749]: I0509 00:37:19.263093 2749 state_mem.go:36] "Initialized new in-memory state store" May 9 00:37:19.263235 kubelet[2749]: I0509 00:37:19.263211 2749 kubelet.go:400] "Attempting to sync node with API server" May 9 00:37:19.263235 kubelet[2749]: I0509 00:37:19.263228 2749 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 9 00:37:19.264500 kubelet[2749]: I0509 00:37:19.263267 2749 kubelet.go:312] "Adding apiserver pod source" May 9 00:37:19.264500 kubelet[2749]: I0509 00:37:19.263280 2749 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 9 00:37:19.265469 kubelet[2749]: I0509 00:37:19.265413 2749 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 9 00:37:19.265775 kubelet[2749]: I0509 00:37:19.265621 2749 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 9 00:37:19.266068 kubelet[2749]: I0509 00:37:19.266043 2749 server.go:1264] "Started kubelet" May 9 00:37:19.268666 kubelet[2749]: I0509 00:37:19.268503 2749 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 9 00:37:19.275941 kubelet[2749]: I0509 00:37:19.275869 2749 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 9 00:37:19.279876 kubelet[2749]: I0509 00:37:19.279617 2749 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" May 9 00:37:19.280925 kubelet[2749]: I0509 00:37:19.280874 2749 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 9 00:37:19.281375 kubelet[2749]: I0509 00:37:19.281357 2749 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 9 00:37:19.281617 kubelet[2749]: E0509 00:37:19.281597 2749 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 9 00:37:19.281887 kubelet[2749]: I0509 00:37:19.281857 2749 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 9 00:37:19.281929 kubelet[2749]: I0509 00:37:19.281903 2749 status_manager.go:217] "Starting to sync pod status with apiserver" May 9 00:37:19.281929 kubelet[2749]: I0509 00:37:19.281925 2749 kubelet.go:2337] "Starting kubelet main sync loop" May 9 00:37:19.282003 kubelet[2749]: E0509 00:37:19.281982 2749 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 9 00:37:19.286321 kubelet[2749]: I0509 00:37:19.285970 2749 volume_manager.go:291] "Starting Kubelet Volume Manager" May 9 00:37:19.286441 kubelet[2749]: I0509 00:37:19.286413 2749 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 9 00:37:19.286765 kubelet[2749]: I0509 00:37:19.286738 2749 reconciler.go:26] "Reconciler: start to sync state" May 9 00:37:19.289028 kubelet[2749]: I0509 00:37:19.288992 2749 server.go:455] "Adding debug handlers to kubelet server" May 9 00:37:19.289499 kubelet[2749]: I0509 00:37:19.289464 2749 factory.go:221] Registration of the systemd container factory successfully May 9 00:37:19.289787 kubelet[2749]: I0509 00:37:19.289617 2749 factory.go:219] Registration of the crio container factory failed: Get 
"http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 9 00:37:19.291624 kubelet[2749]: I0509 00:37:19.291604 2749 factory.go:221] Registration of the containerd container factory successfully May 9 00:37:19.342988 kubelet[2749]: I0509 00:37:19.342938 2749 cpu_manager.go:214] "Starting CPU manager" policy="none" May 9 00:37:19.342988 kubelet[2749]: I0509 00:37:19.342962 2749 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 9 00:37:19.342988 kubelet[2749]: I0509 00:37:19.342989 2749 state_mem.go:36] "Initialized new in-memory state store" May 9 00:37:19.343396 kubelet[2749]: I0509 00:37:19.343177 2749 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 9 00:37:19.344335 kubelet[2749]: I0509 00:37:19.343190 2749 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 9 00:37:19.345423 kubelet[2749]: I0509 00:37:19.344343 2749 policy_none.go:49] "None policy: Start" May 9 00:37:19.345530 kubelet[2749]: I0509 00:37:19.345516 2749 memory_manager.go:170] "Starting memorymanager" policy="None" May 9 00:37:19.345608 kubelet[2749]: I0509 00:37:19.345555 2749 state_mem.go:35] "Initializing new in-memory state store" May 9 00:37:19.345712 kubelet[2749]: I0509 00:37:19.345695 2749 state_mem.go:75] "Updated machine memory state" May 9 00:37:19.347296 kubelet[2749]: I0509 00:37:19.347276 2749 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 9 00:37:19.347660 kubelet[2749]: I0509 00:37:19.347465 2749 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 9 00:37:19.347660 kubelet[2749]: I0509 00:37:19.347588 2749 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 9 00:37:19.382448 kubelet[2749]: I0509 00:37:19.382374 2749 topology_manager.go:215] "Topology Admit Handler" podUID="7b1f4da2ee2f6926f0183eb89ff22816" podNamespace="kube-system" 
podName="kube-apiserver-localhost" May 9 00:37:19.382618 kubelet[2749]: I0509 00:37:19.382503 2749 topology_manager.go:215] "Topology Admit Handler" podUID="b20b39a8540dba87b5883a6f0f602dba" podNamespace="kube-system" podName="kube-controller-manager-localhost" May 9 00:37:19.382618 kubelet[2749]: I0509 00:37:19.382579 2749 topology_manager.go:215] "Topology Admit Handler" podUID="6ece95f10dbffa04b25ec3439a115512" podNamespace="kube-system" podName="kube-scheduler-localhost" May 9 00:37:19.387009 kubelet[2749]: I0509 00:37:19.386935 2749 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 9 00:37:19.387009 kubelet[2749]: I0509 00:37:19.386979 2749 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 9 00:37:19.387009 kubelet[2749]: I0509 00:37:19.387007 2749 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ece95f10dbffa04b25ec3439a115512-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6ece95f10dbffa04b25ec3439a115512\") " pod="kube-system/kube-scheduler-localhost" May 9 00:37:19.387009 kubelet[2749]: I0509 00:37:19.387029 2749 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7b1f4da2ee2f6926f0183eb89ff22816-ca-certs\") pod \"kube-apiserver-localhost\" (UID: 
\"7b1f4da2ee2f6926f0183eb89ff22816\") " pod="kube-system/kube-apiserver-localhost" May 9 00:37:19.387009 kubelet[2749]: I0509 00:37:19.387048 2749 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7b1f4da2ee2f6926f0183eb89ff22816-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"7b1f4da2ee2f6926f0183eb89ff22816\") " pod="kube-system/kube-apiserver-localhost" May 9 00:37:19.387423 kubelet[2749]: I0509 00:37:19.387067 2749 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 9 00:37:19.387423 kubelet[2749]: I0509 00:37:19.387088 2749 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 9 00:37:19.387423 kubelet[2749]: I0509 00:37:19.387109 2749 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7b1f4da2ee2f6926f0183eb89ff22816-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"7b1f4da2ee2f6926f0183eb89ff22816\") " pod="kube-system/kube-apiserver-localhost" May 9 00:37:19.387423 kubelet[2749]: I0509 00:37:19.387131 2749 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: 
\"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 9 00:37:19.393334 kubelet[2749]: I0509 00:37:19.393303 2749 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 9 00:37:19.623271 kubelet[2749]: E0509 00:37:19.623217 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:37:19.727280 kubelet[2749]: E0509 00:37:19.727196 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:37:19.727753 kubelet[2749]: I0509 00:37:19.727694 2749 kubelet_node_status.go:112] "Node was previously registered" node="localhost" May 9 00:37:19.728295 kubelet[2749]: I0509 00:37:19.727886 2749 kubelet_node_status.go:76] "Successfully registered node" node="localhost" May 9 00:37:19.728295 kubelet[2749]: E0509 00:37:19.728217 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:37:19.804475 sudo[2782]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 9 00:37:19.804942 sudo[2782]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) May 9 00:37:20.264544 kubelet[2749]: I0509 00:37:20.264478 2749 apiserver.go:52] "Watching apiserver" May 9 00:37:20.287441 kubelet[2749]: I0509 00:37:20.287386 2749 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 9 00:37:20.298779 kubelet[2749]: E0509 00:37:20.298474 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:37:20.299228 kubelet[2749]: E0509 
00:37:20.299191 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:37:20.309281 kubelet[2749]: E0509 00:37:20.308378 2749 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 9 00:37:20.309281 kubelet[2749]: E0509 00:37:20.308829 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:37:20.338529 sudo[2782]: pam_unix(sudo:session): session closed for user root May 9 00:37:20.355319 kubelet[2749]: I0509 00:37:20.353684 2749 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.353638338 podStartE2EDuration="1.353638338s" podCreationTimestamp="2025-05-09 00:37:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-09 00:37:20.351019646 +0000 UTC m=+1.168797776" watchObservedRunningTime="2025-05-09 00:37:20.353638338 +0000 UTC m=+1.171416457" May 9 00:37:20.363167 kubelet[2749]: I0509 00:37:20.363095 2749 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.363076633 podStartE2EDuration="1.363076633s" podCreationTimestamp="2025-05-09 00:37:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-09 00:37:20.362656058 +0000 UTC m=+1.180434197" watchObservedRunningTime="2025-05-09 00:37:20.363076633 +0000 UTC m=+1.180854743" May 9 00:37:20.381141 kubelet[2749]: I0509 00:37:20.381073 2749 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.381046686 podStartE2EDuration="1.381046686s" podCreationTimestamp="2025-05-09 00:37:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-09 00:37:20.37130893 +0000 UTC m=+1.189087049" watchObservedRunningTime="2025-05-09 00:37:20.381046686 +0000 UTC m=+1.198824795" May 9 00:37:21.299759 kubelet[2749]: E0509 00:37:21.299720 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:37:21.616787 sudo[1777]: pam_unix(sudo:session): session closed for user root May 9 00:37:21.619691 sshd[1769]: pam_unix(sshd:session): session closed for user core May 9 00:37:21.624561 systemd[1]: sshd@6-10.0.0.109:22-10.0.0.1:42950.service: Deactivated successfully. May 9 00:37:21.627166 systemd-logind[1564]: Session 7 logged out. Waiting for processes to exit. May 9 00:37:21.627292 systemd[1]: session-7.scope: Deactivated successfully. May 9 00:37:21.629013 systemd-logind[1564]: Removed session 7. May 9 00:37:22.301359 kubelet[2749]: E0509 00:37:22.301324 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:37:22.495802 update_engine[1570]: I20250509 00:37:22.495687 1570 update_attempter.cc:509] Updating boot flags... 
May 9 00:37:22.528291 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2830)
May 9 00:37:22.571297 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2830)
May 9 00:37:22.615732 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2830)
May 9 00:37:26.191779 kubelet[2749]: E0509 00:37:26.191730 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 00:37:26.307637 kubelet[2749]: E0509 00:37:26.307599 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 00:37:28.042278 kubelet[2749]: E0509 00:37:28.042202 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 00:37:28.311690 kubelet[2749]: E0509 00:37:28.311242 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 00:37:30.832191 kubelet[2749]: E0509 00:37:30.832147 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 00:37:32.646198 kubelet[2749]: I0509 00:37:32.646154 2749 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
May 9 00:37:32.646751 containerd[1585]: time="2025-05-09T00:37:32.646627568Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
May 9 00:37:32.647138 kubelet[2749]: I0509 00:37:32.646811 2749 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
May 9 00:37:33.651342 kubelet[2749]: I0509 00:37:33.651238 2749 topology_manager.go:215] "Topology Admit Handler" podUID="be77c021-9b53-438a-8c55-0fcc67f4b1e0" podNamespace="kube-system" podName="kube-proxy-8t94r"
May 9 00:37:33.663419 kubelet[2749]: I0509 00:37:33.663342 2749 topology_manager.go:215] "Topology Admit Handler" podUID="28ef146f-3a25-47ef-9256-f5347ee08fcd" podNamespace="kube-system" podName="cilium-qm7jb"
May 9 00:37:33.669050 kubelet[2749]: I0509 00:37:33.668997 2749 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/28ef146f-3a25-47ef-9256-f5347ee08fcd-etc-cni-netd\") pod \"cilium-qm7jb\" (UID: \"28ef146f-3a25-47ef-9256-f5347ee08fcd\") " pod="kube-system/cilium-qm7jb"
May 9 00:37:33.669050 kubelet[2749]: I0509 00:37:33.669043 2749 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/28ef146f-3a25-47ef-9256-f5347ee08fcd-cni-path\") pod \"cilium-qm7jb\" (UID: \"28ef146f-3a25-47ef-9256-f5347ee08fcd\") " pod="kube-system/cilium-qm7jb"
May 9 00:37:33.669225 kubelet[2749]: I0509 00:37:33.669068 2749 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/28ef146f-3a25-47ef-9256-f5347ee08fcd-xtables-lock\") pod \"cilium-qm7jb\" (UID: \"28ef146f-3a25-47ef-9256-f5347ee08fcd\") " pod="kube-system/cilium-qm7jb"
May 9 00:37:33.669225 kubelet[2749]: I0509 00:37:33.669092 2749 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/28ef146f-3a25-47ef-9256-f5347ee08fcd-host-proc-sys-kernel\") pod \"cilium-qm7jb\" (UID: \"28ef146f-3a25-47ef-9256-f5347ee08fcd\") " pod="kube-system/cilium-qm7jb"
May 9 00:37:33.669225 kubelet[2749]: I0509 00:37:33.669115 2749 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/28ef146f-3a25-47ef-9256-f5347ee08fcd-lib-modules\") pod \"cilium-qm7jb\" (UID: \"28ef146f-3a25-47ef-9256-f5347ee08fcd\") " pod="kube-system/cilium-qm7jb"
May 9 00:37:33.669225 kubelet[2749]: I0509 00:37:33.669133 2749 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/28ef146f-3a25-47ef-9256-f5347ee08fcd-hubble-tls\") pod \"cilium-qm7jb\" (UID: \"28ef146f-3a25-47ef-9256-f5347ee08fcd\") " pod="kube-system/cilium-qm7jb"
May 9 00:37:33.669225 kubelet[2749]: I0509 00:37:33.669150 2749 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/28ef146f-3a25-47ef-9256-f5347ee08fcd-hostproc\") pod \"cilium-qm7jb\" (UID: \"28ef146f-3a25-47ef-9256-f5347ee08fcd\") " pod="kube-system/cilium-qm7jb"
May 9 00:37:33.669225 kubelet[2749]: I0509 00:37:33.669168 2749 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzv96\" (UniqueName: \"kubernetes.io/projected/28ef146f-3a25-47ef-9256-f5347ee08fcd-kube-api-access-nzv96\") pod \"cilium-qm7jb\" (UID: \"28ef146f-3a25-47ef-9256-f5347ee08fcd\") " pod="kube-system/cilium-qm7jb"
May 9 00:37:33.669487 kubelet[2749]: I0509 00:37:33.669185 2749 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/be77c021-9b53-438a-8c55-0fcc67f4b1e0-lib-modules\") pod \"kube-proxy-8t94r\" (UID: \"be77c021-9b53-438a-8c55-0fcc67f4b1e0\") " pod="kube-system/kube-proxy-8t94r"
May 9 00:37:33.669487 kubelet[2749]: I0509 00:37:33.669203 2749 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/28ef146f-3a25-47ef-9256-f5347ee08fcd-cilium-run\") pod \"cilium-qm7jb\" (UID: \"28ef146f-3a25-47ef-9256-f5347ee08fcd\") " pod="kube-system/cilium-qm7jb"
May 9 00:37:33.669487 kubelet[2749]: I0509 00:37:33.669222 2749 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/28ef146f-3a25-47ef-9256-f5347ee08fcd-bpf-maps\") pod \"cilium-qm7jb\" (UID: \"28ef146f-3a25-47ef-9256-f5347ee08fcd\") " pod="kube-system/cilium-qm7jb"
May 9 00:37:33.669487 kubelet[2749]: I0509 00:37:33.669242 2749 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/28ef146f-3a25-47ef-9256-f5347ee08fcd-host-proc-sys-net\") pod \"cilium-qm7jb\" (UID: \"28ef146f-3a25-47ef-9256-f5347ee08fcd\") " pod="kube-system/cilium-qm7jb"
May 9 00:37:33.670529 kubelet[2749]: I0509 00:37:33.670487 2749 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4c9ss\" (UniqueName: \"kubernetes.io/projected/be77c021-9b53-438a-8c55-0fcc67f4b1e0-kube-api-access-4c9ss\") pod \"kube-proxy-8t94r\" (UID: \"be77c021-9b53-438a-8c55-0fcc67f4b1e0\") " pod="kube-system/kube-proxy-8t94r"
May 9 00:37:33.671384 kubelet[2749]: I0509 00:37:33.671344 2749 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/28ef146f-3a25-47ef-9256-f5347ee08fcd-cilium-cgroup\") pod \"cilium-qm7jb\" (UID: \"28ef146f-3a25-47ef-9256-f5347ee08fcd\") " pod="kube-system/cilium-qm7jb"
May 9 00:37:33.671450 kubelet[2749]: I0509 00:37:33.671394 2749 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/be77c021-9b53-438a-8c55-0fcc67f4b1e0-kube-proxy\") pod \"kube-proxy-8t94r\" (UID: \"be77c021-9b53-438a-8c55-0fcc67f4b1e0\") " pod="kube-system/kube-proxy-8t94r"
May 9 00:37:33.671450 kubelet[2749]: I0509 00:37:33.671421 2749 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/be77c021-9b53-438a-8c55-0fcc67f4b1e0-xtables-lock\") pod \"kube-proxy-8t94r\" (UID: \"be77c021-9b53-438a-8c55-0fcc67f4b1e0\") " pod="kube-system/kube-proxy-8t94r"
May 9 00:37:33.671532 kubelet[2749]: I0509 00:37:33.671448 2749 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/28ef146f-3a25-47ef-9256-f5347ee08fcd-clustermesh-secrets\") pod \"cilium-qm7jb\" (UID: \"28ef146f-3a25-47ef-9256-f5347ee08fcd\") " pod="kube-system/cilium-qm7jb"
May 9 00:37:33.671532 kubelet[2749]: I0509 00:37:33.671475 2749 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/28ef146f-3a25-47ef-9256-f5347ee08fcd-cilium-config-path\") pod \"cilium-qm7jb\" (UID: \"28ef146f-3a25-47ef-9256-f5347ee08fcd\") " pod="kube-system/cilium-qm7jb"
May 9 00:37:34.265062 kubelet[2749]: E0509 00:37:34.265025 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 00:37:34.265764 containerd[1585]: time="2025-05-09T00:37:34.265722956Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8t94r,Uid:be77c021-9b53-438a-8c55-0fcc67f4b1e0,Namespace:kube-system,Attempt:0,}"
May 9 00:37:34.272036 kubelet[2749]: E0509 00:37:34.272001 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 00:37:34.272459 containerd[1585]: time="2025-05-09T00:37:34.272414384Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qm7jb,Uid:28ef146f-3a25-47ef-9256-f5347ee08fcd,Namespace:kube-system,Attempt:0,}"
May 9 00:37:34.394100 kubelet[2749]: I0509 00:37:34.393824 2749 topology_manager.go:215] "Topology Admit Handler" podUID="09701c7f-5a6d-4bde-8e10-19799e14d3ab" podNamespace="kube-system" podName="cilium-operator-599987898-b42hm"
May 9 00:37:34.478497 kubelet[2749]: I0509 00:37:34.478438 2749 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/09701c7f-5a6d-4bde-8e10-19799e14d3ab-cilium-config-path\") pod \"cilium-operator-599987898-b42hm\" (UID: \"09701c7f-5a6d-4bde-8e10-19799e14d3ab\") " pod="kube-system/cilium-operator-599987898-b42hm"
May 9 00:37:34.478497 kubelet[2749]: I0509 00:37:34.478495 2749 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hbjs\" (UniqueName: \"kubernetes.io/projected/09701c7f-5a6d-4bde-8e10-19799e14d3ab-kube-api-access-8hbjs\") pod \"cilium-operator-599987898-b42hm\" (UID: \"09701c7f-5a6d-4bde-8e10-19799e14d3ab\") " pod="kube-system/cilium-operator-599987898-b42hm"
May 9 00:37:34.998235 kubelet[2749]: E0509 00:37:34.998176 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 00:37:34.999284 containerd[1585]: time="2025-05-09T00:37:34.999016468Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-b42hm,Uid:09701c7f-5a6d-4bde-8e10-19799e14d3ab,Namespace:kube-system,Attempt:0,}"
May 9 00:37:35.448387 containerd[1585]: time="2025-05-09T00:37:35.448190330Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 9 00:37:35.448387 containerd[1585]: time="2025-05-09T00:37:35.448335735Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 9 00:37:35.448387 containerd[1585]: time="2025-05-09T00:37:35.448359130Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 9 00:37:35.449054 containerd[1585]: time="2025-05-09T00:37:35.448484759Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 9 00:37:35.499106 containerd[1585]: time="2025-05-09T00:37:35.499036315Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8t94r,Uid:be77c021-9b53-438a-8c55-0fcc67f4b1e0,Namespace:kube-system,Attempt:0,} returns sandbox id \"98f4864fed7fc74b2fcf7a5e934a6eaccb39f41005a9d194680bcce0e337e467\""
May 9 00:37:35.500337 kubelet[2749]: E0509 00:37:35.500299 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 00:37:35.503332 containerd[1585]: time="2025-05-09T00:37:35.503158729Z" level=info msg="CreateContainer within sandbox \"98f4864fed7fc74b2fcf7a5e934a6eaccb39f41005a9d194680bcce0e337e467\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
May 9 00:37:35.596896 containerd[1585]: time="2025-05-09T00:37:35.596646189Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 9 00:37:35.596896 containerd[1585]: time="2025-05-09T00:37:35.596711392Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 9 00:37:35.596896 containerd[1585]: time="2025-05-09T00:37:35.596723234Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 9 00:37:35.596896 containerd[1585]: time="2025-05-09T00:37:35.596831370Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 9 00:37:35.644738 containerd[1585]: time="2025-05-09T00:37:35.644682494Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qm7jb,Uid:28ef146f-3a25-47ef-9256-f5347ee08fcd,Namespace:kube-system,Attempt:0,} returns sandbox id \"0f64b1f39683b05955681ef41230ea8a59ccd36aeab215128302c4fb038d9e20\""
May 9 00:37:35.645647 kubelet[2749]: E0509 00:37:35.645621 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 00:37:35.646776 containerd[1585]: time="2025-05-09T00:37:35.646748420Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
May 9 00:37:36.159668 containerd[1585]: time="2025-05-09T00:37:36.159598191Z" level=info msg="CreateContainer within sandbox \"98f4864fed7fc74b2fcf7a5e934a6eaccb39f41005a9d194680bcce0e337e467\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"70e9189b50628ebe77332029e13a014267a6824aadef60fe9411c1730525701b\""
May 9 00:37:36.160908 containerd[1585]: time="2025-05-09T00:37:36.160844216Z" level=info msg="StartContainer for \"70e9189b50628ebe77332029e13a014267a6824aadef60fe9411c1730525701b\""
May 9 00:37:36.167431 containerd[1585]: time="2025-05-09T00:37:36.167025715Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 9 00:37:36.167431 containerd[1585]: time="2025-05-09T00:37:36.167121887Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 9 00:37:36.167431 containerd[1585]: time="2025-05-09T00:37:36.167143217Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 9 00:37:36.167431 containerd[1585]: time="2025-05-09T00:37:36.167318991Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 9 00:37:36.254965 containerd[1585]: time="2025-05-09T00:37:36.254196706Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-b42hm,Uid:09701c7f-5a6d-4bde-8e10-19799e14d3ab,Namespace:kube-system,Attempt:0,} returns sandbox id \"8b78f058a043de76b2896cc92aa6d644849d9e5e932637a1936ff09860bbfe14\""
May 9 00:37:36.255249 kubelet[2749]: E0509 00:37:36.255206 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 00:37:36.266922 containerd[1585]: time="2025-05-09T00:37:36.266831176Z" level=info msg="StartContainer for \"70e9189b50628ebe77332029e13a014267a6824aadef60fe9411c1730525701b\" returns successfully"
May 9 00:37:36.329739 kubelet[2749]: E0509 00:37:36.329686 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 00:37:39.850569 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2126380146.mount: Deactivated successfully.
May 9 00:37:40.488438 systemd-resolved[1465]: Under memory pressure, flushing caches.
May 9 00:37:40.488510 systemd-resolved[1465]: Flushed all caches.
May 9 00:37:40.498286 systemd-journald[1171]: Under memory pressure, flushing caches.
May 9 00:37:43.922952 containerd[1585]: time="2025-05-09T00:37:43.922864860Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 9 00:37:43.924343 containerd[1585]: time="2025-05-09T00:37:43.924286408Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503"
May 9 00:37:43.926014 containerd[1585]: time="2025-05-09T00:37:43.925976843Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 9 00:37:43.927691 containerd[1585]: time="2025-05-09T00:37:43.927645748Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 8.280858284s"
May 9 00:37:43.927756 containerd[1585]: time="2025-05-09T00:37:43.927688409Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
May 9 00:37:43.928991 containerd[1585]: time="2025-05-09T00:37:43.928753943Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
May 9 00:37:43.929939 containerd[1585]: time="2025-05-09T00:37:43.929900932Z" level=info msg="CreateContainer within sandbox \"0f64b1f39683b05955681ef41230ea8a59ccd36aeab215128302c4fb038d9e20\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 9 00:37:43.949734 containerd[1585]: time="2025-05-09T00:37:43.949656658Z" level=info msg="CreateContainer within sandbox \"0f64b1f39683b05955681ef41230ea8a59ccd36aeab215128302c4fb038d9e20\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"b0d993cda11107c16848852395a509fc51d4c7f29a5a2c3270b5eaf6445bb41e\""
May 9 00:37:43.950580 containerd[1585]: time="2025-05-09T00:37:43.950428617Z" level=info msg="StartContainer for \"b0d993cda11107c16848852395a509fc51d4c7f29a5a2c3270b5eaf6445bb41e\""
May 9 00:37:44.080546 containerd[1585]: time="2025-05-09T00:37:44.080481859Z" level=info msg="StartContainer for \"b0d993cda11107c16848852395a509fc51d4c7f29a5a2c3270b5eaf6445bb41e\" returns successfully"
May 9 00:37:44.636392 kubelet[2749]: E0509 00:37:44.636352 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 00:37:44.731847 kubelet[2749]: I0509 00:37:44.731745 2749 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-8t94r" podStartSLOduration=11.731708851 podStartE2EDuration="11.731708851s" podCreationTimestamp="2025-05-09 00:37:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-09 00:37:36.34623031 +0000 UTC m=+17.164008419" watchObservedRunningTime="2025-05-09 00:37:44.731708851 +0000 UTC m=+25.549486970"
May 9 00:37:44.942550 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b0d993cda11107c16848852395a509fc51d4c7f29a5a2c3270b5eaf6445bb41e-rootfs.mount: Deactivated successfully.
May 9 00:37:45.165503 containerd[1585]: time="2025-05-09T00:37:45.163624577Z" level=info msg="shim disconnected" id=b0d993cda11107c16848852395a509fc51d4c7f29a5a2c3270b5eaf6445bb41e namespace=k8s.io
May 9 00:37:45.165503 containerd[1585]: time="2025-05-09T00:37:45.165502123Z" level=warning msg="cleaning up after shim disconnected" id=b0d993cda11107c16848852395a509fc51d4c7f29a5a2c3270b5eaf6445bb41e namespace=k8s.io
May 9 00:37:45.165503 containerd[1585]: time="2025-05-09T00:37:45.165514106Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 9 00:37:45.639568 kubelet[2749]: E0509 00:37:45.639527 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 00:37:45.641905 containerd[1585]: time="2025-05-09T00:37:45.641735817Z" level=info msg="CreateContainer within sandbox \"0f64b1f39683b05955681ef41230ea8a59ccd36aeab215128302c4fb038d9e20\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 9 00:37:45.662831 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2991788188.mount: Deactivated successfully.
May 9 00:37:45.663382 containerd[1585]: time="2025-05-09T00:37:45.663233368Z" level=info msg="CreateContainer within sandbox \"0f64b1f39683b05955681ef41230ea8a59ccd36aeab215128302c4fb038d9e20\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"fdd9830206494bdab4f415c44f67a52bf6c0edfd8da7c844fcb83075a1816a7f\""
May 9 00:37:45.663855 containerd[1585]: time="2025-05-09T00:37:45.663821589Z" level=info msg="StartContainer for \"fdd9830206494bdab4f415c44f67a52bf6c0edfd8da7c844fcb83075a1816a7f\""
May 9 00:37:45.782615 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 9 00:37:45.782941 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 9 00:37:45.783018 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
May 9 00:37:45.788612 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 9 00:37:45.809830 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 9 00:37:45.819755 containerd[1585]: time="2025-05-09T00:37:45.819704395Z" level=info msg="StartContainer for \"fdd9830206494bdab4f415c44f67a52bf6c0edfd8da7c844fcb83075a1816a7f\" returns successfully"
May 9 00:37:45.942614 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fdd9830206494bdab4f415c44f67a52bf6c0edfd8da7c844fcb83075a1816a7f-rootfs.mount: Deactivated successfully.
May 9 00:37:46.028206 containerd[1585]: time="2025-05-09T00:37:46.028112560Z" level=info msg="shim disconnected" id=fdd9830206494bdab4f415c44f67a52bf6c0edfd8da7c844fcb83075a1816a7f namespace=k8s.io
May 9 00:37:46.028206 containerd[1585]: time="2025-05-09T00:37:46.028201819Z" level=warning msg="cleaning up after shim disconnected" id=fdd9830206494bdab4f415c44f67a52bf6c0edfd8da7c844fcb83075a1816a7f namespace=k8s.io
May 9 00:37:46.028206 containerd[1585]: time="2025-05-09T00:37:46.028215716Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 9 00:37:46.424556 systemd[1]: Started sshd@7-10.0.0.109:22-10.0.0.1:34460.service - OpenSSH per-connection server daemon (10.0.0.1:34460).
May 9 00:37:46.476013 sshd[3288]: Accepted publickey for core from 10.0.0.1 port 34460 ssh2: RSA SHA256:YkFjw59PeYd0iJo8o6yRNOqCW4DsIah6oVydwFHJQdU
May 9 00:37:46.477736 sshd[3288]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 00:37:46.482492 systemd-logind[1564]: New session 8 of user core.
May 9 00:37:46.492514 systemd[1]: Started session-8.scope - Session 8 of User core.
May 9 00:37:46.613693 sshd[3288]: pam_unix(sshd:session): session closed for user core
May 9 00:37:46.617770 systemd[1]: sshd@7-10.0.0.109:22-10.0.0.1:34460.service: Deactivated successfully.
May 9 00:37:46.620044 systemd-logind[1564]: Session 8 logged out. Waiting for processes to exit.
May 9 00:37:46.620139 systemd[1]: session-8.scope: Deactivated successfully.
May 9 00:37:46.621476 systemd-logind[1564]: Removed session 8.
May 9 00:37:46.642927 kubelet[2749]: E0509 00:37:46.642896 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 00:37:46.644635 containerd[1585]: time="2025-05-09T00:37:46.644593641Z" level=info msg="CreateContainer within sandbox \"0f64b1f39683b05955681ef41230ea8a59ccd36aeab215128302c4fb038d9e20\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 9 00:37:47.241081 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3174770382.mount: Deactivated successfully.
May 9 00:37:47.262372 containerd[1585]: time="2025-05-09T00:37:47.262313155Z" level=info msg="CreateContainer within sandbox \"0f64b1f39683b05955681ef41230ea8a59ccd36aeab215128302c4fb038d9e20\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"956e79d360d69b10fd234e8c78b2e01e439c17a11677156e21c7f56a39512a17\""
May 9 00:37:47.262962 containerd[1585]: time="2025-05-09T00:37:47.262926542Z" level=info msg="StartContainer for \"956e79d360d69b10fd234e8c78b2e01e439c17a11677156e21c7f56a39512a17\""
May 9 00:37:47.334733 containerd[1585]: time="2025-05-09T00:37:47.334346998Z" level=info msg="StartContainer for \"956e79d360d69b10fd234e8c78b2e01e439c17a11677156e21c7f56a39512a17\" returns successfully"
May 9 00:37:47.382043 containerd[1585]: time="2025-05-09T00:37:47.381975767Z" level=info msg="shim disconnected" id=956e79d360d69b10fd234e8c78b2e01e439c17a11677156e21c7f56a39512a17 namespace=k8s.io
May 9 00:37:47.382043 containerd[1585]: time="2025-05-09T00:37:47.382036973Z" level=warning msg="cleaning up after shim disconnected" id=956e79d360d69b10fd234e8c78b2e01e439c17a11677156e21c7f56a39512a17 namespace=k8s.io
May 9 00:37:47.382043 containerd[1585]: time="2025-05-09T00:37:47.382048775Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 9 00:37:47.396503 containerd[1585]: time="2025-05-09T00:37:47.396411719Z" level=warning msg="cleanup warnings time=\"2025-05-09T00:37:47Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
May 9 00:37:47.646947 kubelet[2749]: E0509 00:37:47.646910 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 00:37:47.650117 containerd[1585]: time="2025-05-09T00:37:47.650071834Z" level=info msg="CreateContainer within sandbox \"0f64b1f39683b05955681ef41230ea8a59ccd36aeab215128302c4fb038d9e20\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 9 00:37:47.725737 containerd[1585]: time="2025-05-09T00:37:47.725666326Z" level=info msg="CreateContainer within sandbox \"0f64b1f39683b05955681ef41230ea8a59ccd36aeab215128302c4fb038d9e20\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"e7d37f3cdb4fd3974b7ea76c4df918438d1675089a6497c7f2e01598dcd8e01b\""
May 9 00:37:47.727293 containerd[1585]: time="2025-05-09T00:37:47.726460956Z" level=info msg="StartContainer for \"e7d37f3cdb4fd3974b7ea76c4df918438d1675089a6497c7f2e01598dcd8e01b\""
May 9 00:37:47.735174 containerd[1585]: time="2025-05-09T00:37:47.735119866Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 9 00:37:47.736241 containerd[1585]: time="2025-05-09T00:37:47.736104484Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
May 9 00:37:47.737287 containerd[1585]: time="2025-05-09T00:37:47.737245878Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 9 00:37:47.739607 containerd[1585]: time="2025-05-09T00:37:47.739519699Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 3.810728436s"
May 9 00:37:47.739607 containerd[1585]: time="2025-05-09T00:37:47.739564684Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
May 9 00:37:47.747982 containerd[1585]: time="2025-05-09T00:37:47.747931503Z" level=info msg="CreateContainer within sandbox \"8b78f058a043de76b2896cc92aa6d644849d9e5e932637a1936ff09860bbfe14\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
May 9 00:37:47.956877 containerd[1585]: time="2025-05-09T00:37:47.956720393Z" level=info msg="StartContainer for \"e7d37f3cdb4fd3974b7ea76c4df918438d1675089a6497c7f2e01598dcd8e01b\" returns successfully"
May 9 00:37:47.960573 containerd[1585]: time="2025-05-09T00:37:47.960527136Z" level=info msg="CreateContainer within sandbox \"8b78f058a043de76b2896cc92aa6d644849d9e5e932637a1936ff09860bbfe14\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"86b4a4c8cf7000416fa402514ba524ba0a0f459de544c77c9662438ee2704a75\""
May 9 00:37:47.961679 containerd[1585]: time="2025-05-09T00:37:47.961560276Z" level=info msg="StartContainer for \"86b4a4c8cf7000416fa402514ba524ba0a0f459de544c77c9662438ee2704a75\""
May 9 00:37:47.983090 containerd[1585]: time="2025-05-09T00:37:47.983009292Z" level=info msg="shim disconnected" id=e7d37f3cdb4fd3974b7ea76c4df918438d1675089a6497c7f2e01598dcd8e01b namespace=k8s.io
May 9 00:37:47.983090 containerd[1585]: time="2025-05-09T00:37:47.983068452Z" level=warning msg="cleaning up after shim disconnected" id=e7d37f3cdb4fd3974b7ea76c4df918438d1675089a6497c7f2e01598dcd8e01b namespace=k8s.io
May 9 00:37:47.983090 containerd[1585]: time="2025-05-09T00:37:47.983076808Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 9 00:37:48.029306 containerd[1585]: time="2025-05-09T00:37:48.029207050Z" level=info msg="StartContainer for \"86b4a4c8cf7000416fa402514ba524ba0a0f459de544c77c9662438ee2704a75\" returns successfully"
May 9 00:37:48.240096 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-956e79d360d69b10fd234e8c78b2e01e439c17a11677156e21c7f56a39512a17-rootfs.mount: Deactivated successfully.
May 9 00:37:48.657441 kubelet[2749]: E0509 00:37:48.657384 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 00:37:48.664292 kubelet[2749]: E0509 00:37:48.663318 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 00:37:48.670188 containerd[1585]: time="2025-05-09T00:37:48.670137336Z" level=info msg="CreateContainer within sandbox \"0f64b1f39683b05955681ef41230ea8a59ccd36aeab215128302c4fb038d9e20\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 9 00:37:48.690854 containerd[1585]: time="2025-05-09T00:37:48.690798110Z" level=info msg="CreateContainer within sandbox \"0f64b1f39683b05955681ef41230ea8a59ccd36aeab215128302c4fb038d9e20\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"c56abb245fdb5c6dd97ba0893e651ce8e687e5be58da9691211876c8d09f1346\""
May 9 00:37:48.691706 containerd[1585]: time="2025-05-09T00:37:48.691674112Z" level=info msg="StartContainer for \"c56abb245fdb5c6dd97ba0893e651ce8e687e5be58da9691211876c8d09f1346\""
May 9 00:37:48.709544 kubelet[2749]: I0509 00:37:48.709463 2749 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-b42hm" podStartSLOduration=4.229128068 podStartE2EDuration="15.709439575s" podCreationTimestamp="2025-05-09 00:37:33 +0000 UTC" firstStartedPulling="2025-05-09 00:37:36.260028319 +0000 UTC m=+17.077806428" lastFinishedPulling="2025-05-09 00:37:47.740339816 +0000 UTC m=+28.558117935" observedRunningTime="2025-05-09 00:37:48.679875235 +0000 UTC m=+29.497653344" watchObservedRunningTime="2025-05-09 00:37:48.709439575 +0000 UTC m=+29.527217684"
May 9 00:37:48.850408 containerd[1585]: time="2025-05-09T00:37:48.850224966Z" level=info msg="StartContainer for \"c56abb245fdb5c6dd97ba0893e651ce8e687e5be58da9691211876c8d09f1346\" returns successfully"
May 9 00:37:49.015600 kubelet[2749]: I0509 00:37:49.014378 2749 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
May 9 00:37:49.037792 kubelet[2749]: I0509 00:37:49.037702 2749 topology_manager.go:215] "Topology Admit Handler" podUID="ae001446-2377-4404-89fe-4cbb848be366" podNamespace="kube-system" podName="coredns-7db6d8ff4d-8ltcw"
May 9 00:37:49.042134 kubelet[2749]: I0509 00:37:49.040547 2749 topology_manager.go:215] "Topology Admit Handler" podUID="07dfeb8f-5928-4a79-ae4c-16bb85cc6207" podNamespace="kube-system" podName="coredns-7db6d8ff4d-qfmdm"
May 9 00:37:49.140905 kubelet[2749]: I0509 00:37:49.140843 2749 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnclm\" (UniqueName: \"kubernetes.io/projected/07dfeb8f-5928-4a79-ae4c-16bb85cc6207-kube-api-access-xnclm\") pod \"coredns-7db6d8ff4d-qfmdm\" (UID: \"07dfeb8f-5928-4a79-ae4c-16bb85cc6207\") " pod="kube-system/coredns-7db6d8ff4d-qfmdm"
May 9 00:37:49.140905 kubelet[2749]: I0509 00:37:49.140902 2749 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qcvg\" (UniqueName: \"kubernetes.io/projected/ae001446-2377-4404-89fe-4cbb848be366-kube-api-access-7qcvg\") pod \"coredns-7db6d8ff4d-8ltcw\" (UID: \"ae001446-2377-4404-89fe-4cbb848be366\") " pod="kube-system/coredns-7db6d8ff4d-8ltcw"
May 9 00:37:49.141128 kubelet[2749]: I0509 00:37:49.140925 2749 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ae001446-2377-4404-89fe-4cbb848be366-config-volume\") pod \"coredns-7db6d8ff4d-8ltcw\" (UID: \"ae001446-2377-4404-89fe-4cbb848be366\") " pod="kube-system/coredns-7db6d8ff4d-8ltcw"
May 9 00:37:49.141128 kubelet[2749]: I0509 00:37:49.140947 2749 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/07dfeb8f-5928-4a79-ae4c-16bb85cc6207-config-volume\") pod \"coredns-7db6d8ff4d-qfmdm\" (UID: \"07dfeb8f-5928-4a79-ae4c-16bb85cc6207\") " pod="kube-system/coredns-7db6d8ff4d-qfmdm"
May 9 00:37:49.345809 kubelet[2749]: E0509 00:37:49.345742 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 00:37:49.346891 containerd[1585]: time="2025-05-09T00:37:49.346504269Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-8ltcw,Uid:ae001446-2377-4404-89fe-4cbb848be366,Namespace:kube-system,Attempt:0,}"
May 9 00:37:49.347773 kubelet[2749]: E0509 00:37:49.347727 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 00:37:49.348113 containerd[1585]: time="2025-05-09T00:37:49.348079279Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-qfmdm,Uid:07dfeb8f-5928-4a79-ae4c-16bb85cc6207,Namespace:kube-system,Attempt:0,}"
May 9 00:37:49.732793 kubelet[2749]: E0509 00:37:49.732470 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 00:37:49.734639 kubelet[2749]: E0509 00:37:49.734573 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 00:37:49.805566 kubelet[2749]: I0509 00:37:49.805451 2749 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-qm7jb" podStartSLOduration=8.52308824 podStartE2EDuration="16.805421147s" podCreationTimestamp="2025-05-09 00:37:33 +0000 UTC" firstStartedPulling="2025-05-09 00:37:35.646230296 +0000 UTC m=+16.464008405" lastFinishedPulling="2025-05-09 00:37:43.928563202 +0000 UTC m=+24.746341312" observedRunningTime="2025-05-09 00:37:49.805106574 +0000 UTC m=+30.622884703" watchObservedRunningTime="2025-05-09 00:37:49.805421147 +0000 UTC m=+30.623199256"
May 9 00:37:50.735069 kubelet[2749]: E0509 00:37:50.735031 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 00:37:51.623509 systemd[1]: Started sshd@8-10.0.0.109:22-10.0.0.1:53010.service - OpenSSH per-connection server daemon (10.0.0.1:53010).
May 9 00:37:51.656670 sshd[3620]: Accepted publickey for core from 10.0.0.1 port 53010 ssh2: RSA SHA256:YkFjw59PeYd0iJo8o6yRNOqCW4DsIah6oVydwFHJQdU
May 9 00:37:51.658746 sshd[3620]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 00:37:51.663572 systemd-logind[1564]: New session 9 of user core.
May 9 00:37:51.673552 systemd[1]: Started session-9.scope - Session 9 of User core.
May 9 00:37:51.737252 kubelet[2749]: E0509 00:37:51.737204 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 00:37:51.798383 sshd[3620]: pam_unix(sshd:session): session closed for user core
May 9 00:37:51.803093 systemd[1]: sshd@8-10.0.0.109:22-10.0.0.1:53010.service: Deactivated successfully.
May 9 00:37:51.806107 systemd[1]: session-9.scope: Deactivated successfully.
May 9 00:37:51.807149 systemd-logind[1564]: Session 9 logged out. Waiting for processes to exit.
May 9 00:37:51.808101 systemd-logind[1564]: Removed session 9.
May 9 00:37:51.961699 systemd-networkd[1246]: cilium_host: Link UP
May 9 00:37:51.961879 systemd-networkd[1246]: cilium_net: Link UP
May 9 00:37:51.962567 systemd-networkd[1246]: cilium_net: Gained carrier
May 9 00:37:51.962844 systemd-networkd[1246]: cilium_host: Gained carrier
May 9 00:37:51.965005 systemd-networkd[1246]: cilium_net: Gained IPv6LL
May 9 00:37:51.965327 systemd-networkd[1246]: cilium_host: Gained IPv6LL
May 9 00:37:52.083855 systemd-networkd[1246]: cilium_vxlan: Link UP
May 9 00:37:52.083866 systemd-networkd[1246]: cilium_vxlan: Gained carrier
May 9 00:37:52.320292 kernel: NET: Registered PF_ALG protocol family
May 9 00:37:53.093186 systemd-networkd[1246]: lxc_health: Link UP
May 9 00:37:53.098621 systemd-networkd[1246]: lxc_health: Gained carrier
May 9 00:37:53.352526 systemd-networkd[1246]: cilium_vxlan: Gained IPv6LL
May 9 00:37:53.435095 systemd-networkd[1246]: lxc20816be7f0c6: Link UP
May 9 00:37:53.442297 kernel: eth0: renamed from tmpf7720
May 9 00:37:53.450672 systemd-networkd[1246]: lxc20816be7f0c6: Gained carrier
May 9 00:37:53.469849 systemd-networkd[1246]: lxc2fd12dfdbf3e: Link UP
May 9 00:37:53.472471 kernel: eth0: renamed from tmpb5b39
May 9 00:37:53.484542 systemd-networkd[1246]: lxc2fd12dfdbf3e: Gained carrier
May 9 00:37:54.275181 kubelet[2749]: E0509 00:37:54.275138 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 00:37:54.504476 systemd-networkd[1246]: lxc20816be7f0c6: Gained IPv6LL
May 9 00:37:54.632513 systemd-networkd[1246]: lxc_health: Gained IPv6LL
May 9 00:37:54.888982 systemd-networkd[1246]: lxc2fd12dfdbf3e: Gained IPv6LL
May 9 00:37:56.810003 systemd[1]: Started sshd@9-10.0.0.109:22-10.0.0.1:58216.service - OpenSSH per-connection server daemon (10.0.0.1:58216).
May 9 00:37:56.854024 sshd[4019]: Accepted publickey for core from 10.0.0.1 port 58216 ssh2: RSA SHA256:YkFjw59PeYd0iJo8o6yRNOqCW4DsIah6oVydwFHJQdU
May 9 00:37:56.854838 sshd[4019]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 00:37:56.859585 systemd-logind[1564]: New session 10 of user core.
May 9 00:37:56.868593 systemd[1]: Started session-10.scope - Session 10 of User core.
May 9 00:37:57.094964 containerd[1585]: time="2025-05-09T00:37:57.094848572Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 9 00:37:57.094964 containerd[1585]: time="2025-05-09T00:37:57.094925096Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 9 00:37:57.094964 containerd[1585]: time="2025-05-09T00:37:57.094940004Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 9 00:37:57.095639 containerd[1585]: time="2025-05-09T00:37:57.095065861Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 9 00:37:57.124046 sshd[4019]: pam_unix(sshd:session): session closed for user core
May 9 00:37:57.128066 systemd[1]: sshd@9-10.0.0.109:22-10.0.0.1:58216.service: Deactivated successfully.
May 9 00:37:57.129194 systemd-resolved[1465]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
May 9 00:37:57.132491 systemd[1]: session-10.scope: Deactivated successfully.
May 9 00:37:57.134160 systemd-logind[1564]: Session 10 logged out. Waiting for processes to exit.
May 9 00:37:57.135503 systemd-logind[1564]: Removed session 10.
May 9 00:37:57.149359 containerd[1585]: time="2025-05-09T00:37:57.148629149Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 9 00:37:57.149359 containerd[1585]: time="2025-05-09T00:37:57.148689823Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 9 00:37:57.149359 containerd[1585]: time="2025-05-09T00:37:57.148704711Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 9 00:37:57.149359 containerd[1585]: time="2025-05-09T00:37:57.148791796Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 9 00:37:57.160919 containerd[1585]: time="2025-05-09T00:37:57.160864112Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-8ltcw,Uid:ae001446-2377-4404-89fe-4cbb848be366,Namespace:kube-system,Attempt:0,} returns sandbox id \"f7720e79449de342ac006a25a04d5508a5f6ddf6279824b9bcdc4cea173cc3fe\""
May 9 00:37:57.164604 kubelet[2749]: E0509 00:37:57.161613 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 00:37:57.171734 containerd[1585]: time="2025-05-09T00:37:57.171575457Z" level=info msg="CreateContainer within sandbox \"f7720e79449de342ac006a25a04d5508a5f6ddf6279824b9bcdc4cea173cc3fe\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 9 00:37:57.192314 systemd-resolved[1465]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
May 9 00:37:57.222802 containerd[1585]: time="2025-05-09T00:37:57.222734762Z" level=info msg="CreateContainer within sandbox \"f7720e79449de342ac006a25a04d5508a5f6ddf6279824b9bcdc4cea173cc3fe\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"bac141ea033111970e0455c85254aada9387927b1d82e090d8471885a085e793\""
May 9 00:37:57.223486 containerd[1585]: time="2025-05-09T00:37:57.223444286Z" level=info msg="StartContainer for \"bac141ea033111970e0455c85254aada9387927b1d82e090d8471885a085e793\""
May 9 00:37:57.227122 containerd[1585]: time="2025-05-09T00:37:57.226097669Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-qfmdm,Uid:07dfeb8f-5928-4a79-ae4c-16bb85cc6207,Namespace:kube-system,Attempt:0,} returns sandbox id \"b5b39060cbd7b8bf56134eb42d72d6b3cd344214211fdf4ac49bb384510fe023\""
May 9 00:37:57.227210 kubelet[2749]: E0509 00:37:57.227028 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 00:37:57.230406 containerd[1585]: time="2025-05-09T00:37:57.230369067Z" level=info msg="CreateContainer within sandbox \"b5b39060cbd7b8bf56134eb42d72d6b3cd344214211fdf4ac49bb384510fe023\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 9 00:37:57.245982 containerd[1585]: time="2025-05-09T00:37:57.245928034Z" level=info msg="CreateContainer within sandbox \"b5b39060cbd7b8bf56134eb42d72d6b3cd344214211fdf4ac49bb384510fe023\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"cf442e562cc3da468cabebe36623f06d84a91d00f06091e0a2f4541b07e7b751\""
May 9 00:37:57.246801 containerd[1585]: time="2025-05-09T00:37:57.246736565Z" level=info msg="StartContainer for \"cf442e562cc3da468cabebe36623f06d84a91d00f06091e0a2f4541b07e7b751\""
May 9 00:37:57.296290 containerd[1585]: time="2025-05-09T00:37:57.296227822Z" level=info msg="StartContainer for \"bac141ea033111970e0455c85254aada9387927b1d82e090d8471885a085e793\" returns successfully"
May 9 00:37:57.320586 containerd[1585]: time="2025-05-09T00:37:57.320538316Z" level=info msg="StartContainer for \"cf442e562cc3da468cabebe36623f06d84a91d00f06091e0a2f4541b07e7b751\" returns successfully"
May 9 00:37:57.751793 kubelet[2749]: E0509 00:37:57.751689 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 00:37:57.753892 kubelet[2749]: E0509 00:37:57.753502 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 00:37:57.762407 kubelet[2749]: I0509 00:37:57.762250 2749 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-8ltcw" podStartSLOduration=23.762229684 podStartE2EDuration="23.762229684s" podCreationTimestamp="2025-05-09 00:37:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-09 00:37:57.761963844 +0000 UTC m=+38.579741953" watchObservedRunningTime="2025-05-09 00:37:57.762229684 +0000 UTC m=+38.580007793"
May 9 00:37:57.775390 kubelet[2749]: I0509 00:37:57.775306 2749 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-qfmdm" podStartSLOduration=24.775246347 podStartE2EDuration="24.775246347s" podCreationTimestamp="2025-05-09 00:37:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-09 00:37:57.774533987 +0000 UTC m=+38.592312096" watchObservedRunningTime="2025-05-09 00:37:57.775246347 +0000 UTC m=+38.593024456"
May 9 00:37:58.103717 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount367361253.mount: Deactivated successfully.
May 9 00:37:58.755375 kubelet[2749]: E0509 00:37:58.755332 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 00:37:58.756110 kubelet[2749]: E0509 00:37:58.755486 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 00:37:59.111627 kubelet[2749]: I0509 00:37:59.111572 2749 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
May 9 00:37:59.112452 kubelet[2749]: E0509 00:37:59.112418 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 00:37:59.756892 kubelet[2749]: E0509 00:37:59.756859 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 00:37:59.756892 kubelet[2749]: E0509 00:37:59.756907 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 00:37:59.757422 kubelet[2749]: E0509 00:37:59.756961 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 00:38:02.134659 systemd[1]: Started sshd@10-10.0.0.109:22-10.0.0.1:58228.service - OpenSSH per-connection server daemon (10.0.0.1:58228).
May 9 00:38:02.163468 sshd[4204]: Accepted publickey for core from 10.0.0.1 port 58228 ssh2: RSA SHA256:YkFjw59PeYd0iJo8o6yRNOqCW4DsIah6oVydwFHJQdU
May 9 00:38:02.165070 sshd[4204]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 00:38:02.169088 systemd-logind[1564]: New session 11 of user core.
May 9 00:38:02.176505 systemd[1]: Started session-11.scope - Session 11 of User core.
May 9 00:38:02.296712 sshd[4204]: pam_unix(sshd:session): session closed for user core
May 9 00:38:02.300772 systemd[1]: sshd@10-10.0.0.109:22-10.0.0.1:58228.service: Deactivated successfully.
May 9 00:38:02.303462 systemd-logind[1564]: Session 11 logged out. Waiting for processes to exit.
May 9 00:38:02.303659 systemd[1]: session-11.scope: Deactivated successfully.
May 9 00:38:02.304881 systemd-logind[1564]: Removed session 11.
May 9 00:38:07.311551 systemd[1]: Started sshd@11-10.0.0.109:22-10.0.0.1:51184.service - OpenSSH per-connection server daemon (10.0.0.1:51184).
May 9 00:38:07.340479 sshd[4222]: Accepted publickey for core from 10.0.0.1 port 51184 ssh2: RSA SHA256:YkFjw59PeYd0iJo8o6yRNOqCW4DsIah6oVydwFHJQdU
May 9 00:38:07.342521 sshd[4222]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 00:38:07.346937 systemd-logind[1564]: New session 12 of user core.
May 9 00:38:07.355550 systemd[1]: Started session-12.scope - Session 12 of User core.
May 9 00:38:07.467509 sshd[4222]: pam_unix(sshd:session): session closed for user core
May 9 00:38:07.475703 systemd[1]: Started sshd@12-10.0.0.109:22-10.0.0.1:51192.service - OpenSSH per-connection server daemon (10.0.0.1:51192).
May 9 00:38:07.476491 systemd[1]: sshd@11-10.0.0.109:22-10.0.0.1:51184.service: Deactivated successfully.
May 9 00:38:07.486316 systemd[1]: session-12.scope: Deactivated successfully.
May 9 00:38:07.487355 systemd-logind[1564]: Session 12 logged out. Waiting for processes to exit.
May 9 00:38:07.488611 systemd-logind[1564]: Removed session 12.
May 9 00:38:07.504595 sshd[4235]: Accepted publickey for core from 10.0.0.1 port 51192 ssh2: RSA SHA256:YkFjw59PeYd0iJo8o6yRNOqCW4DsIah6oVydwFHJQdU
May 9 00:38:07.506775 sshd[4235]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 00:38:07.511000 systemd-logind[1564]: New session 13 of user core.
May 9 00:38:07.521584 systemd[1]: Started session-13.scope - Session 13 of User core.
May 9 00:38:07.676152 sshd[4235]: pam_unix(sshd:session): session closed for user core
May 9 00:38:07.687729 systemd[1]: Started sshd@13-10.0.0.109:22-10.0.0.1:51202.service - OpenSSH per-connection server daemon (10.0.0.1:51202).
May 9 00:38:07.688328 systemd[1]: sshd@12-10.0.0.109:22-10.0.0.1:51192.service: Deactivated successfully.
May 9 00:38:07.697170 systemd[1]: session-13.scope: Deactivated successfully.
May 9 00:38:07.698196 systemd-logind[1564]: Session 13 logged out. Waiting for processes to exit.
May 9 00:38:07.703192 systemd-logind[1564]: Removed session 13.
May 9 00:38:07.722839 sshd[4248]: Accepted publickey for core from 10.0.0.1 port 51202 ssh2: RSA SHA256:YkFjw59PeYd0iJo8o6yRNOqCW4DsIah6oVydwFHJQdU
May 9 00:38:07.724605 sshd[4248]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 00:38:07.729165 systemd-logind[1564]: New session 14 of user core.
May 9 00:38:07.742540 systemd[1]: Started session-14.scope - Session 14 of User core.
May 9 00:38:07.855274 sshd[4248]: pam_unix(sshd:session): session closed for user core
May 9 00:38:07.860330 systemd[1]: sshd@13-10.0.0.109:22-10.0.0.1:51202.service: Deactivated successfully.
May 9 00:38:07.862936 systemd-logind[1564]: Session 14 logged out. Waiting for processes to exit.
May 9 00:38:07.862963 systemd[1]: session-14.scope: Deactivated successfully.
May 9 00:38:07.864804 systemd-logind[1564]: Removed session 14.
May 9 00:38:12.865529 systemd[1]: Started sshd@14-10.0.0.109:22-10.0.0.1:51206.service - OpenSSH per-connection server daemon (10.0.0.1:51206).
May 9 00:38:12.892756 sshd[4266]: Accepted publickey for core from 10.0.0.1 port 51206 ssh2: RSA SHA256:YkFjw59PeYd0iJo8o6yRNOqCW4DsIah6oVydwFHJQdU
May 9 00:38:12.894367 sshd[4266]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 00:38:12.898550 systemd-logind[1564]: New session 15 of user core.
May 9 00:38:12.908544 systemd[1]: Started session-15.scope - Session 15 of User core.
May 9 00:38:13.020159 sshd[4266]: pam_unix(sshd:session): session closed for user core
May 9 00:38:13.025220 systemd[1]: sshd@14-10.0.0.109:22-10.0.0.1:51206.service: Deactivated successfully.
May 9 00:38:13.028566 systemd[1]: session-15.scope: Deactivated successfully.
May 9 00:38:13.029417 systemd-logind[1564]: Session 15 logged out. Waiting for processes to exit.
May 9 00:38:13.031012 systemd-logind[1564]: Removed session 15.
May 9 00:38:18.035543 systemd[1]: Started sshd@15-10.0.0.109:22-10.0.0.1:35658.service - OpenSSH per-connection server daemon (10.0.0.1:35658).
May 9 00:38:18.062452 sshd[4282]: Accepted publickey for core from 10.0.0.1 port 35658 ssh2: RSA SHA256:YkFjw59PeYd0iJo8o6yRNOqCW4DsIah6oVydwFHJQdU
May 9 00:38:18.063935 sshd[4282]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 00:38:18.067789 systemd-logind[1564]: New session 16 of user core.
May 9 00:38:18.077516 systemd[1]: Started session-16.scope - Session 16 of User core.
May 9 00:38:18.190533 sshd[4282]: pam_unix(sshd:session): session closed for user core
May 9 00:38:18.197549 systemd[1]: Started sshd@16-10.0.0.109:22-10.0.0.1:35670.service - OpenSSH per-connection server daemon (10.0.0.1:35670).
May 9 00:38:18.198167 systemd[1]: sshd@15-10.0.0.109:22-10.0.0.1:35658.service: Deactivated successfully.
May 9 00:38:18.202708 systemd[1]: session-16.scope: Deactivated successfully.
May 9 00:38:18.203781 systemd-logind[1564]: Session 16 logged out. Waiting for processes to exit.
May 9 00:38:18.204744 systemd-logind[1564]: Removed session 16.
May 9 00:38:18.227732 sshd[4294]: Accepted publickey for core from 10.0.0.1 port 35670 ssh2: RSA SHA256:YkFjw59PeYd0iJo8o6yRNOqCW4DsIah6oVydwFHJQdU
May 9 00:38:18.229559 sshd[4294]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 00:38:18.234324 systemd-logind[1564]: New session 17 of user core.
May 9 00:38:18.243625 systemd[1]: Started session-17.scope - Session 17 of User core.
May 9 00:38:18.526054 sshd[4294]: pam_unix(sshd:session): session closed for user core
May 9 00:38:18.534522 systemd[1]: Started sshd@17-10.0.0.109:22-10.0.0.1:35672.service - OpenSSH per-connection server daemon (10.0.0.1:35672).
May 9 00:38:18.535024 systemd[1]: sshd@16-10.0.0.109:22-10.0.0.1:35670.service: Deactivated successfully.
May 9 00:38:18.538710 systemd[1]: session-17.scope: Deactivated successfully.
May 9 00:38:18.539006 systemd-logind[1564]: Session 17 logged out. Waiting for processes to exit.
May 9 00:38:18.540633 systemd-logind[1564]: Removed session 17.
May 9 00:38:18.565157 sshd[4307]: Accepted publickey for core from 10.0.0.1 port 35672 ssh2: RSA SHA256:YkFjw59PeYd0iJo8o6yRNOqCW4DsIah6oVydwFHJQdU
May 9 00:38:18.566915 sshd[4307]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 00:38:18.571004 systemd-logind[1564]: New session 18 of user core.
May 9 00:38:18.580517 systemd[1]: Started session-18.scope - Session 18 of User core.
May 9 00:38:20.000536 sshd[4307]: pam_unix(sshd:session): session closed for user core
May 9 00:38:20.016648 systemd[1]: Started sshd@18-10.0.0.109:22-10.0.0.1:35686.service - OpenSSH per-connection server daemon (10.0.0.1:35686).
May 9 00:38:20.020062 systemd[1]: sshd@17-10.0.0.109:22-10.0.0.1:35672.service: Deactivated successfully.
May 9 00:38:20.025563 systemd[1]: session-18.scope: Deactivated successfully.
May 9 00:38:20.028052 systemd-logind[1564]: Session 18 logged out. Waiting for processes to exit.
May 9 00:38:20.029728 systemd-logind[1564]: Removed session 18.
May 9 00:38:20.051964 sshd[4330]: Accepted publickey for core from 10.0.0.1 port 35686 ssh2: RSA SHA256:YkFjw59PeYd0iJo8o6yRNOqCW4DsIah6oVydwFHJQdU
May 9 00:38:20.054251 sshd[4330]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 00:38:20.059190 systemd-logind[1564]: New session 19 of user core.
May 9 00:38:20.071769 systemd[1]: Started session-19.scope - Session 19 of User core.
May 9 00:38:20.304236 sshd[4330]: pam_unix(sshd:session): session closed for user core
May 9 00:38:20.314654 systemd[1]: Started sshd@19-10.0.0.109:22-10.0.0.1:35694.service - OpenSSH per-connection server daemon (10.0.0.1:35694).
May 9 00:38:20.315297 systemd[1]: sshd@18-10.0.0.109:22-10.0.0.1:35686.service: Deactivated successfully.
May 9 00:38:20.318070 systemd-logind[1564]: Session 19 logged out. Waiting for processes to exit.
May 9 00:38:20.319024 systemd[1]: session-19.scope: Deactivated successfully.
May 9 00:38:20.320583 systemd-logind[1564]: Removed session 19.
May 9 00:38:20.345210 sshd[4345]: Accepted publickey for core from 10.0.0.1 port 35694 ssh2: RSA SHA256:YkFjw59PeYd0iJo8o6yRNOqCW4DsIah6oVydwFHJQdU
May 9 00:38:20.346849 sshd[4345]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 00:38:20.350954 systemd-logind[1564]: New session 20 of user core.
May 9 00:38:20.365561 systemd[1]: Started session-20.scope - Session 20 of User core.
May 9 00:38:20.476709 sshd[4345]: pam_unix(sshd:session): session closed for user core
May 9 00:38:20.480780 systemd[1]: sshd@19-10.0.0.109:22-10.0.0.1:35694.service: Deactivated successfully.
May 9 00:38:20.483468 systemd-logind[1564]: Session 20 logged out. Waiting for processes to exit.
May 9 00:38:20.483474 systemd[1]: session-20.scope: Deactivated successfully.
May 9 00:38:20.484648 systemd-logind[1564]: Removed session 20.
May 9 00:38:25.491484 systemd[1]: Started sshd@20-10.0.0.109:22-10.0.0.1:35704.service - OpenSSH per-connection server daemon (10.0.0.1:35704).
May 9 00:38:25.518735 sshd[4363]: Accepted publickey for core from 10.0.0.1 port 35704 ssh2: RSA SHA256:YkFjw59PeYd0iJo8o6yRNOqCW4DsIah6oVydwFHJQdU
May 9 00:38:25.520462 sshd[4363]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 00:38:25.524302 systemd-logind[1564]: New session 21 of user core.
May 9 00:38:25.531505 systemd[1]: Started session-21.scope - Session 21 of User core.
May 9 00:38:25.652813 sshd[4363]: pam_unix(sshd:session): session closed for user core
May 9 00:38:25.657475 systemd[1]: sshd@20-10.0.0.109:22-10.0.0.1:35704.service: Deactivated successfully.
May 9 00:38:25.660021 systemd[1]: session-21.scope: Deactivated successfully.
May 9 00:38:25.660030 systemd-logind[1564]: Session 21 logged out. Waiting for processes to exit.
May 9 00:38:25.661526 systemd-logind[1564]: Removed session 21.
May 9 00:38:30.663500 systemd[1]: Started sshd@21-10.0.0.109:22-10.0.0.1:58388.service - OpenSSH per-connection server daemon (10.0.0.1:58388).
May 9 00:38:30.690726 sshd[4381]: Accepted publickey for core from 10.0.0.1 port 58388 ssh2: RSA SHA256:YkFjw59PeYd0iJo8o6yRNOqCW4DsIah6oVydwFHJQdU
May 9 00:38:30.692409 sshd[4381]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 00:38:30.696348 systemd-logind[1564]: New session 22 of user core.
May 9 00:38:30.702503 systemd[1]: Started session-22.scope - Session 22 of User core.
May 9 00:38:30.808714 sshd[4381]: pam_unix(sshd:session): session closed for user core
May 9 00:38:30.811777 systemd[1]: sshd@21-10.0.0.109:22-10.0.0.1:58388.service: Deactivated successfully.
May 9 00:38:30.817082 systemd[1]: session-22.scope: Deactivated successfully.
May 9 00:38:30.818383 systemd-logind[1564]: Session 22 logged out. Waiting for processes to exit.
May 9 00:38:30.819348 systemd-logind[1564]: Removed session 22.
May 9 00:38:35.818494 systemd[1]: Started sshd@22-10.0.0.109:22-10.0.0.1:58392.service - OpenSSH per-connection server daemon (10.0.0.1:58392).
May 9 00:38:35.846448 sshd[4397]: Accepted publickey for core from 10.0.0.1 port 58392 ssh2: RSA SHA256:YkFjw59PeYd0iJo8o6yRNOqCW4DsIah6oVydwFHJQdU
May 9 00:38:35.848357 sshd[4397]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 00:38:35.852741 systemd-logind[1564]: New session 23 of user core.
May 9 00:38:35.863581 systemd[1]: Started session-23.scope - Session 23 of User core.
May 9 00:38:35.970536 sshd[4397]: pam_unix(sshd:session): session closed for user core
May 9 00:38:35.974331 systemd[1]: sshd@22-10.0.0.109:22-10.0.0.1:58392.service: Deactivated successfully.
May 9 00:38:35.977016 systemd-logind[1564]: Session 23 logged out. Waiting for processes to exit.
May 9 00:38:35.977175 systemd[1]: session-23.scope: Deactivated successfully.
May 9 00:38:35.978223 systemd-logind[1564]: Removed session 23.
May 9 00:38:36.283678 kubelet[2749]: E0509 00:38:36.283638 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 00:38:37.283675 kubelet[2749]: E0509 00:38:37.283605 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 00:38:40.985552 systemd[1]: Started sshd@23-10.0.0.109:22-10.0.0.1:42272.service - OpenSSH per-connection server daemon (10.0.0.1:42272).
May 9 00:38:41.012662 sshd[4414]: Accepted publickey for core from 10.0.0.1 port 42272 ssh2: RSA SHA256:YkFjw59PeYd0iJo8o6yRNOqCW4DsIah6oVydwFHJQdU
May 9 00:38:41.014621 sshd[4414]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 00:38:41.019151 systemd-logind[1564]: New session 24 of user core.
May 9 00:38:41.032558 systemd[1]: Started session-24.scope - Session 24 of User core.
May 9 00:38:41.146097 sshd[4414]: pam_unix(sshd:session): session closed for user core May 9 00:38:41.154505 systemd[1]: Started sshd@24-10.0.0.109:22-10.0.0.1:42282.service - OpenSSH per-connection server daemon (10.0.0.1:42282). May 9 00:38:41.154988 systemd[1]: sshd@23-10.0.0.109:22-10.0.0.1:42272.service: Deactivated successfully. May 9 00:38:41.158588 systemd-logind[1564]: Session 24 logged out. Waiting for processes to exit. May 9 00:38:41.159317 systemd[1]: session-24.scope: Deactivated successfully. May 9 00:38:41.160489 systemd-logind[1564]: Removed session 24. May 9 00:38:41.182094 sshd[4426]: Accepted publickey for core from 10.0.0.1 port 42282 ssh2: RSA SHA256:YkFjw59PeYd0iJo8o6yRNOqCW4DsIah6oVydwFHJQdU May 9 00:38:41.183831 sshd[4426]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:38:41.188711 systemd-logind[1564]: New session 25 of user core. May 9 00:38:41.198599 systemd[1]: Started session-25.scope - Session 25 of User core. May 9 00:38:42.800845 containerd[1585]: time="2025-05-09T00:38:42.800785455Z" level=info msg="StopContainer for \"86b4a4c8cf7000416fa402514ba524ba0a0f459de544c77c9662438ee2704a75\" with timeout 30 (s)" May 9 00:38:42.801560 containerd[1585]: time="2025-05-09T00:38:42.801246843Z" level=info msg="Stop container \"86b4a4c8cf7000416fa402514ba524ba0a0f459de544c77c9662438ee2704a75\" with signal terminated" May 9 00:38:42.842281 containerd[1585]: time="2025-05-09T00:38:42.840660237Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 9 00:38:42.849164 containerd[1585]: time="2025-05-09T00:38:42.849116610Z" level=info msg="StopContainer for \"c56abb245fdb5c6dd97ba0893e651ce8e687e5be58da9691211876c8d09f1346\" with timeout 2 (s)" May 9 00:38:42.849740 containerd[1585]: 
time="2025-05-09T00:38:42.849505311Z" level=info msg="Stop container \"c56abb245fdb5c6dd97ba0893e651ce8e687e5be58da9691211876c8d09f1346\" with signal terminated" May 9 00:38:42.850500 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-86b4a4c8cf7000416fa402514ba524ba0a0f459de544c77c9662438ee2704a75-rootfs.mount: Deactivated successfully. May 9 00:38:42.857925 systemd-networkd[1246]: lxc_health: Link DOWN May 9 00:38:42.857937 systemd-networkd[1246]: lxc_health: Lost carrier May 9 00:38:42.864019 containerd[1585]: time="2025-05-09T00:38:42.863947514Z" level=info msg="shim disconnected" id=86b4a4c8cf7000416fa402514ba524ba0a0f459de544c77c9662438ee2704a75 namespace=k8s.io May 9 00:38:42.864019 containerd[1585]: time="2025-05-09T00:38:42.864019940Z" level=warning msg="cleaning up after shim disconnected" id=86b4a4c8cf7000416fa402514ba524ba0a0f459de544c77c9662438ee2704a75 namespace=k8s.io May 9 00:38:42.864116 containerd[1585]: time="2025-05-09T00:38:42.864032113Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 9 00:38:42.883909 containerd[1585]: time="2025-05-09T00:38:42.883859848Z" level=info msg="StopContainer for \"86b4a4c8cf7000416fa402514ba524ba0a0f459de544c77c9662438ee2704a75\" returns successfully" May 9 00:38:42.884837 containerd[1585]: time="2025-05-09T00:38:42.884795135Z" level=info msg="StopPodSandbox for \"8b78f058a043de76b2896cc92aa6d644849d9e5e932637a1936ff09860bbfe14\"" May 9 00:38:42.884901 containerd[1585]: time="2025-05-09T00:38:42.884838035Z" level=info msg="Container to stop \"86b4a4c8cf7000416fa402514ba524ba0a0f459de544c77c9662438ee2704a75\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 9 00:38:42.887777 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8b78f058a043de76b2896cc92aa6d644849d9e5e932637a1936ff09860bbfe14-shm.mount: Deactivated successfully. 
May 9 00:38:42.903844 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c56abb245fdb5c6dd97ba0893e651ce8e687e5be58da9691211876c8d09f1346-rootfs.mount: Deactivated successfully. May 9 00:38:42.913797 containerd[1585]: time="2025-05-09T00:38:42.913717882Z" level=info msg="shim disconnected" id=c56abb245fdb5c6dd97ba0893e651ce8e687e5be58da9691211876c8d09f1346 namespace=k8s.io May 9 00:38:42.913797 containerd[1585]: time="2025-05-09T00:38:42.913794537Z" level=warning msg="cleaning up after shim disconnected" id=c56abb245fdb5c6dd97ba0893e651ce8e687e5be58da9691211876c8d09f1346 namespace=k8s.io May 9 00:38:42.913797 containerd[1585]: time="2025-05-09T00:38:42.913804235Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 9 00:38:42.917844 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8b78f058a043de76b2896cc92aa6d644849d9e5e932637a1936ff09860bbfe14-rootfs.mount: Deactivated successfully. May 9 00:38:42.921031 containerd[1585]: time="2025-05-09T00:38:42.920935788Z" level=info msg="shim disconnected" id=8b78f058a043de76b2896cc92aa6d644849d9e5e932637a1936ff09860bbfe14 namespace=k8s.io May 9 00:38:42.921031 containerd[1585]: time="2025-05-09T00:38:42.921019646Z" level=warning msg="cleaning up after shim disconnected" id=8b78f058a043de76b2896cc92aa6d644849d9e5e932637a1936ff09860bbfe14 namespace=k8s.io May 9 00:38:42.921031 containerd[1585]: time="2025-05-09T00:38:42.921029094Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 9 00:38:42.936248 containerd[1585]: time="2025-05-09T00:38:42.936202140Z" level=info msg="TearDown network for sandbox \"8b78f058a043de76b2896cc92aa6d644849d9e5e932637a1936ff09860bbfe14\" successfully" May 9 00:38:42.936248 containerd[1585]: time="2025-05-09T00:38:42.936240162Z" level=info msg="StopPodSandbox for \"8b78f058a043de76b2896cc92aa6d644849d9e5e932637a1936ff09860bbfe14\" returns successfully" May 9 00:38:42.937918 containerd[1585]: time="2025-05-09T00:38:42.937881356Z" level=info 
msg="StopContainer for \"c56abb245fdb5c6dd97ba0893e651ce8e687e5be58da9691211876c8d09f1346\" returns successfully" May 9 00:38:42.938397 containerd[1585]: time="2025-05-09T00:38:42.938373611Z" level=info msg="StopPodSandbox for \"0f64b1f39683b05955681ef41230ea8a59ccd36aeab215128302c4fb038d9e20\"" May 9 00:38:42.938454 containerd[1585]: time="2025-05-09T00:38:42.938410821Z" level=info msg="Container to stop \"fdd9830206494bdab4f415c44f67a52bf6c0edfd8da7c844fcb83075a1816a7f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 9 00:38:42.938454 containerd[1585]: time="2025-05-09T00:38:42.938424918Z" level=info msg="Container to stop \"e7d37f3cdb4fd3974b7ea76c4df918438d1675089a6497c7f2e01598dcd8e01b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 9 00:38:42.938454 containerd[1585]: time="2025-05-09T00:38:42.938435387Z" level=info msg="Container to stop \"c56abb245fdb5c6dd97ba0893e651ce8e687e5be58da9691211876c8d09f1346\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 9 00:38:42.938454 containerd[1585]: time="2025-05-09T00:38:42.938447510Z" level=info msg="Container to stop \"b0d993cda11107c16848852395a509fc51d4c7f29a5a2c3270b5eaf6445bb41e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 9 00:38:42.938588 containerd[1585]: time="2025-05-09T00:38:42.938460454Z" level=info msg="Container to stop \"956e79d360d69b10fd234e8c78b2e01e439c17a11677156e21c7f56a39512a17\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 9 00:38:42.971399 containerd[1585]: time="2025-05-09T00:38:42.971298441Z" level=info msg="shim disconnected" id=0f64b1f39683b05955681ef41230ea8a59ccd36aeab215128302c4fb038d9e20 namespace=k8s.io May 9 00:38:42.971399 containerd[1585]: time="2025-05-09T00:38:42.971358243Z" level=warning msg="cleaning up after shim disconnected" id=0f64b1f39683b05955681ef41230ea8a59ccd36aeab215128302c4fb038d9e20 namespace=k8s.io May 9 
00:38:42.971399 containerd[1585]: time="2025-05-09T00:38:42.971367080Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 9 00:38:42.987837 containerd[1585]: time="2025-05-09T00:38:42.987782691Z" level=info msg="TearDown network for sandbox \"0f64b1f39683b05955681ef41230ea8a59ccd36aeab215128302c4fb038d9e20\" successfully" May 9 00:38:42.987837 containerd[1585]: time="2025-05-09T00:38:42.987823940Z" level=info msg="StopPodSandbox for \"0f64b1f39683b05955681ef41230ea8a59ccd36aeab215128302c4fb038d9e20\" returns successfully" May 9 00:38:43.039622 kubelet[2749]: I0509 00:38:43.039550 2749 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/28ef146f-3a25-47ef-9256-f5347ee08fcd-cni-path\") pod \"28ef146f-3a25-47ef-9256-f5347ee08fcd\" (UID: \"28ef146f-3a25-47ef-9256-f5347ee08fcd\") " May 9 00:38:43.039622 kubelet[2749]: I0509 00:38:43.039598 2749 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/28ef146f-3a25-47ef-9256-f5347ee08fcd-cilium-run\") pod \"28ef146f-3a25-47ef-9256-f5347ee08fcd\" (UID: \"28ef146f-3a25-47ef-9256-f5347ee08fcd\") " May 9 00:38:43.039622 kubelet[2749]: I0509 00:38:43.039617 2749 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/28ef146f-3a25-47ef-9256-f5347ee08fcd-xtables-lock\") pod \"28ef146f-3a25-47ef-9256-f5347ee08fcd\" (UID: \"28ef146f-3a25-47ef-9256-f5347ee08fcd\") " May 9 00:38:43.039622 kubelet[2749]: I0509 00:38:43.039633 2749 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/28ef146f-3a25-47ef-9256-f5347ee08fcd-hostproc\") pod \"28ef146f-3a25-47ef-9256-f5347ee08fcd\" (UID: \"28ef146f-3a25-47ef-9256-f5347ee08fcd\") " May 9 00:38:43.040241 kubelet[2749]: I0509 00:38:43.039649 2749 reconciler_common.go:161] 
"operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/28ef146f-3a25-47ef-9256-f5347ee08fcd-host-proc-sys-net\") pod \"28ef146f-3a25-47ef-9256-f5347ee08fcd\" (UID: \"28ef146f-3a25-47ef-9256-f5347ee08fcd\") " May 9 00:38:43.040241 kubelet[2749]: I0509 00:38:43.039665 2749 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/28ef146f-3a25-47ef-9256-f5347ee08fcd-bpf-maps\") pod \"28ef146f-3a25-47ef-9256-f5347ee08fcd\" (UID: \"28ef146f-3a25-47ef-9256-f5347ee08fcd\") " May 9 00:38:43.040241 kubelet[2749]: I0509 00:38:43.039678 2749 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/28ef146f-3a25-47ef-9256-f5347ee08fcd-cilium-cgroup\") pod \"28ef146f-3a25-47ef-9256-f5347ee08fcd\" (UID: \"28ef146f-3a25-47ef-9256-f5347ee08fcd\") " May 9 00:38:43.040241 kubelet[2749]: I0509 00:38:43.039705 2749 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/28ef146f-3a25-47ef-9256-f5347ee08fcd-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "28ef146f-3a25-47ef-9256-f5347ee08fcd" (UID: "28ef146f-3a25-47ef-9256-f5347ee08fcd"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 9 00:38:43.040241 kubelet[2749]: I0509 00:38:43.039730 2749 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/28ef146f-3a25-47ef-9256-f5347ee08fcd-hostproc" (OuterVolumeSpecName: "hostproc") pod "28ef146f-3a25-47ef-9256-f5347ee08fcd" (UID: "28ef146f-3a25-47ef-9256-f5347ee08fcd"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 9 00:38:43.040241 kubelet[2749]: I0509 00:38:43.039740 2749 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/09701c7f-5a6d-4bde-8e10-19799e14d3ab-cilium-config-path\") pod \"09701c7f-5a6d-4bde-8e10-19799e14d3ab\" (UID: \"09701c7f-5a6d-4bde-8e10-19799e14d3ab\") " May 9 00:38:43.040438 kubelet[2749]: I0509 00:38:43.039705 2749 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/28ef146f-3a25-47ef-9256-f5347ee08fcd-cni-path" (OuterVolumeSpecName: "cni-path") pod "28ef146f-3a25-47ef-9256-f5347ee08fcd" (UID: "28ef146f-3a25-47ef-9256-f5347ee08fcd"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 9 00:38:43.040438 kubelet[2749]: I0509 00:38:43.039764 2749 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/28ef146f-3a25-47ef-9256-f5347ee08fcd-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "28ef146f-3a25-47ef-9256-f5347ee08fcd" (UID: "28ef146f-3a25-47ef-9256-f5347ee08fcd"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 9 00:38:43.040438 kubelet[2749]: I0509 00:38:43.039770 2749 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/28ef146f-3a25-47ef-9256-f5347ee08fcd-clustermesh-secrets\") pod \"28ef146f-3a25-47ef-9256-f5347ee08fcd\" (UID: \"28ef146f-3a25-47ef-9256-f5347ee08fcd\") " May 9 00:38:43.040438 kubelet[2749]: I0509 00:38:43.039778 2749 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/28ef146f-3a25-47ef-9256-f5347ee08fcd-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "28ef146f-3a25-47ef-9256-f5347ee08fcd" (UID: "28ef146f-3a25-47ef-9256-f5347ee08fcd"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 9 00:38:43.040438 kubelet[2749]: I0509 00:38:43.039781 2749 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/28ef146f-3a25-47ef-9256-f5347ee08fcd-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "28ef146f-3a25-47ef-9256-f5347ee08fcd" (UID: "28ef146f-3a25-47ef-9256-f5347ee08fcd"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 9 00:38:43.040559 kubelet[2749]: I0509 00:38:43.039786 2749 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/28ef146f-3a25-47ef-9256-f5347ee08fcd-etc-cni-netd\") pod \"28ef146f-3a25-47ef-9256-f5347ee08fcd\" (UID: \"28ef146f-3a25-47ef-9256-f5347ee08fcd\") " May 9 00:38:43.040559 kubelet[2749]: I0509 00:38:43.039796 2749 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/28ef146f-3a25-47ef-9256-f5347ee08fcd-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "28ef146f-3a25-47ef-9256-f5347ee08fcd" (UID: "28ef146f-3a25-47ef-9256-f5347ee08fcd"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 9 00:38:43.040559 kubelet[2749]: I0509 00:38:43.039804 2749 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/28ef146f-3a25-47ef-9256-f5347ee08fcd-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "28ef146f-3a25-47ef-9256-f5347ee08fcd" (UID: "28ef146f-3a25-47ef-9256-f5347ee08fcd"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 9 00:38:43.040559 kubelet[2749]: I0509 00:38:43.039816 2749 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/28ef146f-3a25-47ef-9256-f5347ee08fcd-cilium-config-path\") pod \"28ef146f-3a25-47ef-9256-f5347ee08fcd\" (UID: \"28ef146f-3a25-47ef-9256-f5347ee08fcd\") " May 9 00:38:43.040559 kubelet[2749]: I0509 00:38:43.039835 2749 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8hbjs\" (UniqueName: \"kubernetes.io/projected/09701c7f-5a6d-4bde-8e10-19799e14d3ab-kube-api-access-8hbjs\") pod \"09701c7f-5a6d-4bde-8e10-19799e14d3ab\" (UID: \"09701c7f-5a6d-4bde-8e10-19799e14d3ab\") " May 9 00:38:43.040688 kubelet[2749]: I0509 00:38:43.039853 2749 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/28ef146f-3a25-47ef-9256-f5347ee08fcd-lib-modules\") pod \"28ef146f-3a25-47ef-9256-f5347ee08fcd\" (UID: \"28ef146f-3a25-47ef-9256-f5347ee08fcd\") " May 9 00:38:43.040688 kubelet[2749]: I0509 00:38:43.039869 2749 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzv96\" (UniqueName: \"kubernetes.io/projected/28ef146f-3a25-47ef-9256-f5347ee08fcd-kube-api-access-nzv96\") pod \"28ef146f-3a25-47ef-9256-f5347ee08fcd\" (UID: \"28ef146f-3a25-47ef-9256-f5347ee08fcd\") " May 9 00:38:43.040688 kubelet[2749]: I0509 00:38:43.039886 2749 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/28ef146f-3a25-47ef-9256-f5347ee08fcd-hubble-tls\") pod \"28ef146f-3a25-47ef-9256-f5347ee08fcd\" (UID: \"28ef146f-3a25-47ef-9256-f5347ee08fcd\") " May 9 00:38:43.040688 kubelet[2749]: I0509 00:38:43.039900 2749 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" 
(UniqueName: \"kubernetes.io/host-path/28ef146f-3a25-47ef-9256-f5347ee08fcd-host-proc-sys-kernel\") pod \"28ef146f-3a25-47ef-9256-f5347ee08fcd\" (UID: \"28ef146f-3a25-47ef-9256-f5347ee08fcd\") " May 9 00:38:43.040688 kubelet[2749]: I0509 00:38:43.039932 2749 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/28ef146f-3a25-47ef-9256-f5347ee08fcd-cilium-run\") on node \"localhost\" DevicePath \"\"" May 9 00:38:43.040688 kubelet[2749]: I0509 00:38:43.039942 2749 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/28ef146f-3a25-47ef-9256-f5347ee08fcd-cni-path\") on node \"localhost\" DevicePath \"\"" May 9 00:38:43.040688 kubelet[2749]: I0509 00:38:43.039950 2749 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/28ef146f-3a25-47ef-9256-f5347ee08fcd-xtables-lock\") on node \"localhost\" DevicePath \"\"" May 9 00:38:43.040878 kubelet[2749]: I0509 00:38:43.039958 2749 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/28ef146f-3a25-47ef-9256-f5347ee08fcd-hostproc\") on node \"localhost\" DevicePath \"\"" May 9 00:38:43.040878 kubelet[2749]: I0509 00:38:43.039967 2749 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/28ef146f-3a25-47ef-9256-f5347ee08fcd-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" May 9 00:38:43.040878 kubelet[2749]: I0509 00:38:43.039977 2749 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/28ef146f-3a25-47ef-9256-f5347ee08fcd-bpf-maps\") on node \"localhost\" DevicePath \"\"" May 9 00:38:43.040878 kubelet[2749]: I0509 00:38:43.039985 2749 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/28ef146f-3a25-47ef-9256-f5347ee08fcd-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" May 9 00:38:43.040878 kubelet[2749]: I0509 00:38:43.039994 2749 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/28ef146f-3a25-47ef-9256-f5347ee08fcd-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" May 9 00:38:43.040878 kubelet[2749]: I0509 00:38:43.040011 2749 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/28ef146f-3a25-47ef-9256-f5347ee08fcd-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "28ef146f-3a25-47ef-9256-f5347ee08fcd" (UID: "28ef146f-3a25-47ef-9256-f5347ee08fcd"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 9 00:38:43.040878 kubelet[2749]: I0509 00:38:43.040306 2749 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/28ef146f-3a25-47ef-9256-f5347ee08fcd-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "28ef146f-3a25-47ef-9256-f5347ee08fcd" (UID: "28ef146f-3a25-47ef-9256-f5347ee08fcd"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 9 00:38:43.044102 kubelet[2749]: I0509 00:38:43.044069 2749 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28ef146f-3a25-47ef-9256-f5347ee08fcd-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "28ef146f-3a25-47ef-9256-f5347ee08fcd" (UID: "28ef146f-3a25-47ef-9256-f5347ee08fcd"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" May 9 00:38:43.044451 kubelet[2749]: I0509 00:38:43.044434 2749 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09701c7f-5a6d-4bde-8e10-19799e14d3ab-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "09701c7f-5a6d-4bde-8e10-19799e14d3ab" (UID: "09701c7f-5a6d-4bde-8e10-19799e14d3ab"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 9 00:38:43.044875 kubelet[2749]: I0509 00:38:43.044843 2749 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28ef146f-3a25-47ef-9256-f5347ee08fcd-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "28ef146f-3a25-47ef-9256-f5347ee08fcd" (UID: "28ef146f-3a25-47ef-9256-f5347ee08fcd"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" May 9 00:38:43.046696 kubelet[2749]: I0509 00:38:43.046675 2749 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28ef146f-3a25-47ef-9256-f5347ee08fcd-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "28ef146f-3a25-47ef-9256-f5347ee08fcd" (UID: "28ef146f-3a25-47ef-9256-f5347ee08fcd"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" May 9 00:38:43.046739 kubelet[2749]: I0509 00:38:43.046690 2749 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09701c7f-5a6d-4bde-8e10-19799e14d3ab-kube-api-access-8hbjs" (OuterVolumeSpecName: "kube-api-access-8hbjs") pod "09701c7f-5a6d-4bde-8e10-19799e14d3ab" (UID: "09701c7f-5a6d-4bde-8e10-19799e14d3ab"). InnerVolumeSpecName "kube-api-access-8hbjs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" May 9 00:38:43.046783 kubelet[2749]: I0509 00:38:43.046764 2749 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28ef146f-3a25-47ef-9256-f5347ee08fcd-kube-api-access-nzv96" (OuterVolumeSpecName: "kube-api-access-nzv96") pod "28ef146f-3a25-47ef-9256-f5347ee08fcd" (UID: "28ef146f-3a25-47ef-9256-f5347ee08fcd"). InnerVolumeSpecName "kube-api-access-nzv96". PluginName "kubernetes.io/projected", VolumeGidValue "" May 9 00:38:43.141011 kubelet[2749]: I0509 00:38:43.140980 2749 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/28ef146f-3a25-47ef-9256-f5347ee08fcd-hubble-tls\") on node \"localhost\" DevicePath \"\"" May 9 00:38:43.141011 kubelet[2749]: I0509 00:38:43.141009 2749 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/28ef146f-3a25-47ef-9256-f5347ee08fcd-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" May 9 00:38:43.141176 kubelet[2749]: I0509 00:38:43.141020 2749 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/09701c7f-5a6d-4bde-8e10-19799e14d3ab-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 9 00:38:43.141176 kubelet[2749]: I0509 00:38:43.141031 2749 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/28ef146f-3a25-47ef-9256-f5347ee08fcd-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" May 9 00:38:43.141176 kubelet[2749]: I0509 00:38:43.141039 2749 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/28ef146f-3a25-47ef-9256-f5347ee08fcd-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 9 00:38:43.141176 kubelet[2749]: I0509 00:38:43.141047 2749 reconciler_common.go:289] "Volume detached 
for volume \"kube-api-access-8hbjs\" (UniqueName: \"kubernetes.io/projected/09701c7f-5a6d-4bde-8e10-19799e14d3ab-kube-api-access-8hbjs\") on node \"localhost\" DevicePath \"\"" May 9 00:38:43.141176 kubelet[2749]: I0509 00:38:43.141059 2749 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/28ef146f-3a25-47ef-9256-f5347ee08fcd-lib-modules\") on node \"localhost\" DevicePath \"\"" May 9 00:38:43.141176 kubelet[2749]: I0509 00:38:43.141067 2749 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-nzv96\" (UniqueName: \"kubernetes.io/projected/28ef146f-3a25-47ef-9256-f5347ee08fcd-kube-api-access-nzv96\") on node \"localhost\" DevicePath \"\"" May 9 00:38:43.283763 kubelet[2749]: E0509 00:38:43.283692 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:38:43.826881 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0f64b1f39683b05955681ef41230ea8a59ccd36aeab215128302c4fb038d9e20-rootfs.mount: Deactivated successfully. May 9 00:38:43.827085 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0f64b1f39683b05955681ef41230ea8a59ccd36aeab215128302c4fb038d9e20-shm.mount: Deactivated successfully. May 9 00:38:43.827240 systemd[1]: var-lib-kubelet-pods-09701c7f\x2d5a6d\x2d4bde\x2d8e10\x2d19799e14d3ab-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8hbjs.mount: Deactivated successfully. May 9 00:38:43.827417 systemd[1]: var-lib-kubelet-pods-28ef146f\x2d3a25\x2d47ef\x2d9256\x2df5347ee08fcd-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dnzv96.mount: Deactivated successfully. May 9 00:38:43.827564 systemd[1]: var-lib-kubelet-pods-28ef146f\x2d3a25\x2d47ef\x2d9256\x2df5347ee08fcd-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
May 9 00:38:43.827709 systemd[1]: var-lib-kubelet-pods-28ef146f\x2d3a25\x2d47ef\x2d9256\x2df5347ee08fcd-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 9 00:38:43.851812 kubelet[2749]: I0509 00:38:43.851773 2749 scope.go:117] "RemoveContainer" containerID="c56abb245fdb5c6dd97ba0893e651ce8e687e5be58da9691211876c8d09f1346" May 9 00:38:43.853245 containerd[1585]: time="2025-05-09T00:38:43.853211599Z" level=info msg="RemoveContainer for \"c56abb245fdb5c6dd97ba0893e651ce8e687e5be58da9691211876c8d09f1346\"" May 9 00:38:43.873282 containerd[1585]: time="2025-05-09T00:38:43.872360946Z" level=info msg="RemoveContainer for \"c56abb245fdb5c6dd97ba0893e651ce8e687e5be58da9691211876c8d09f1346\" returns successfully" May 9 00:38:43.874494 kubelet[2749]: I0509 00:38:43.874456 2749 scope.go:117] "RemoveContainer" containerID="e7d37f3cdb4fd3974b7ea76c4df918438d1675089a6497c7f2e01598dcd8e01b" May 9 00:38:43.877056 containerd[1585]: time="2025-05-09T00:38:43.876552494Z" level=info msg="RemoveContainer for \"e7d37f3cdb4fd3974b7ea76c4df918438d1675089a6497c7f2e01598dcd8e01b\"" May 9 00:38:43.880789 containerd[1585]: time="2025-05-09T00:38:43.880753469Z" level=info msg="RemoveContainer for \"e7d37f3cdb4fd3974b7ea76c4df918438d1675089a6497c7f2e01598dcd8e01b\" returns successfully" May 9 00:38:43.880989 kubelet[2749]: I0509 00:38:43.880960 2749 scope.go:117] "RemoveContainer" containerID="956e79d360d69b10fd234e8c78b2e01e439c17a11677156e21c7f56a39512a17" May 9 00:38:43.881906 containerd[1585]: time="2025-05-09T00:38:43.881875348Z" level=info msg="RemoveContainer for \"956e79d360d69b10fd234e8c78b2e01e439c17a11677156e21c7f56a39512a17\"" May 9 00:38:43.884767 containerd[1585]: time="2025-05-09T00:38:43.884724452Z" level=info msg="RemoveContainer for \"956e79d360d69b10fd234e8c78b2e01e439c17a11677156e21c7f56a39512a17\" returns successfully" May 9 00:38:43.884879 kubelet[2749]: I0509 00:38:43.884852 2749 scope.go:117] "RemoveContainer" 
containerID="fdd9830206494bdab4f415c44f67a52bf6c0edfd8da7c844fcb83075a1816a7f" May 9 00:38:43.885871 containerd[1585]: time="2025-05-09T00:38:43.885655542Z" level=info msg="RemoveContainer for \"fdd9830206494bdab4f415c44f67a52bf6c0edfd8da7c844fcb83075a1816a7f\"" May 9 00:38:43.888917 containerd[1585]: time="2025-05-09T00:38:43.888884139Z" level=info msg="RemoveContainer for \"fdd9830206494bdab4f415c44f67a52bf6c0edfd8da7c844fcb83075a1816a7f\" returns successfully" May 9 00:38:43.889031 kubelet[2749]: I0509 00:38:43.889008 2749 scope.go:117] "RemoveContainer" containerID="b0d993cda11107c16848852395a509fc51d4c7f29a5a2c3270b5eaf6445bb41e" May 9 00:38:43.889749 containerd[1585]: time="2025-05-09T00:38:43.889717996Z" level=info msg="RemoveContainer for \"b0d993cda11107c16848852395a509fc51d4c7f29a5a2c3270b5eaf6445bb41e\"" May 9 00:38:43.892724 containerd[1585]: time="2025-05-09T00:38:43.892691685Z" level=info msg="RemoveContainer for \"b0d993cda11107c16848852395a509fc51d4c7f29a5a2c3270b5eaf6445bb41e\" returns successfully" May 9 00:38:43.892852 kubelet[2749]: I0509 00:38:43.892824 2749 scope.go:117] "RemoveContainer" containerID="c56abb245fdb5c6dd97ba0893e651ce8e687e5be58da9691211876c8d09f1346" May 9 00:38:43.893109 containerd[1585]: time="2025-05-09T00:38:43.893046151Z" level=error msg="ContainerStatus for \"c56abb245fdb5c6dd97ba0893e651ce8e687e5be58da9691211876c8d09f1346\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c56abb245fdb5c6dd97ba0893e651ce8e687e5be58da9691211876c8d09f1346\": not found" May 9 00:38:43.893189 kubelet[2749]: E0509 00:38:43.893154 2749 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c56abb245fdb5c6dd97ba0893e651ce8e687e5be58da9691211876c8d09f1346\": not found" containerID="c56abb245fdb5c6dd97ba0893e651ce8e687e5be58da9691211876c8d09f1346" May 9 00:38:43.893321 kubelet[2749]: I0509 
00:38:43.893195 2749 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c56abb245fdb5c6dd97ba0893e651ce8e687e5be58da9691211876c8d09f1346"} err="failed to get container status \"c56abb245fdb5c6dd97ba0893e651ce8e687e5be58da9691211876c8d09f1346\": rpc error: code = NotFound desc = an error occurred when try to find container \"c56abb245fdb5c6dd97ba0893e651ce8e687e5be58da9691211876c8d09f1346\": not found" May 9 00:38:43.893321 kubelet[2749]: I0509 00:38:43.893286 2749 scope.go:117] "RemoveContainer" containerID="e7d37f3cdb4fd3974b7ea76c4df918438d1675089a6497c7f2e01598dcd8e01b" May 9 00:38:43.893465 containerd[1585]: time="2025-05-09T00:38:43.893431906Z" level=error msg="ContainerStatus for \"e7d37f3cdb4fd3974b7ea76c4df918438d1675089a6497c7f2e01598dcd8e01b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e7d37f3cdb4fd3974b7ea76c4df918438d1675089a6497c7f2e01598dcd8e01b\": not found" May 9 00:38:43.893557 kubelet[2749]: E0509 00:38:43.893535 2749 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e7d37f3cdb4fd3974b7ea76c4df918438d1675089a6497c7f2e01598dcd8e01b\": not found" containerID="e7d37f3cdb4fd3974b7ea76c4df918438d1675089a6497c7f2e01598dcd8e01b" May 9 00:38:43.893602 kubelet[2749]: I0509 00:38:43.893558 2749 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e7d37f3cdb4fd3974b7ea76c4df918438d1675089a6497c7f2e01598dcd8e01b"} err="failed to get container status \"e7d37f3cdb4fd3974b7ea76c4df918438d1675089a6497c7f2e01598dcd8e01b\": rpc error: code = NotFound desc = an error occurred when try to find container \"e7d37f3cdb4fd3974b7ea76c4df918438d1675089a6497c7f2e01598dcd8e01b\": not found" May 9 00:38:43.893602 kubelet[2749]: I0509 00:38:43.893573 2749 scope.go:117] "RemoveContainer" 
containerID="956e79d360d69b10fd234e8c78b2e01e439c17a11677156e21c7f56a39512a17" May 9 00:38:43.893744 containerd[1585]: time="2025-05-09T00:38:43.893708957Z" level=error msg="ContainerStatus for \"956e79d360d69b10fd234e8c78b2e01e439c17a11677156e21c7f56a39512a17\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"956e79d360d69b10fd234e8c78b2e01e439c17a11677156e21c7f56a39512a17\": not found" May 9 00:38:43.893870 kubelet[2749]: E0509 00:38:43.893844 2749 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"956e79d360d69b10fd234e8c78b2e01e439c17a11677156e21c7f56a39512a17\": not found" containerID="956e79d360d69b10fd234e8c78b2e01e439c17a11677156e21c7f56a39512a17" May 9 00:38:43.893908 kubelet[2749]: I0509 00:38:43.893878 2749 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"956e79d360d69b10fd234e8c78b2e01e439c17a11677156e21c7f56a39512a17"} err="failed to get container status \"956e79d360d69b10fd234e8c78b2e01e439c17a11677156e21c7f56a39512a17\": rpc error: code = NotFound desc = an error occurred when try to find container \"956e79d360d69b10fd234e8c78b2e01e439c17a11677156e21c7f56a39512a17\": not found" May 9 00:38:43.893908 kubelet[2749]: I0509 00:38:43.893904 2749 scope.go:117] "RemoveContainer" containerID="fdd9830206494bdab4f415c44f67a52bf6c0edfd8da7c844fcb83075a1816a7f" May 9 00:38:43.894103 containerd[1585]: time="2025-05-09T00:38:43.894074975Z" level=error msg="ContainerStatus for \"fdd9830206494bdab4f415c44f67a52bf6c0edfd8da7c844fcb83075a1816a7f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fdd9830206494bdab4f415c44f67a52bf6c0edfd8da7c844fcb83075a1816a7f\": not found" May 9 00:38:43.894179 kubelet[2749]: E0509 00:38:43.894164 2749 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: 
code = NotFound desc = an error occurred when try to find container \"fdd9830206494bdab4f415c44f67a52bf6c0edfd8da7c844fcb83075a1816a7f\": not found" containerID="fdd9830206494bdab4f415c44f67a52bf6c0edfd8da7c844fcb83075a1816a7f" May 9 00:38:43.894220 kubelet[2749]: I0509 00:38:43.894183 2749 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fdd9830206494bdab4f415c44f67a52bf6c0edfd8da7c844fcb83075a1816a7f"} err="failed to get container status \"fdd9830206494bdab4f415c44f67a52bf6c0edfd8da7c844fcb83075a1816a7f\": rpc error: code = NotFound desc = an error occurred when try to find container \"fdd9830206494bdab4f415c44f67a52bf6c0edfd8da7c844fcb83075a1816a7f\": not found" May 9 00:38:43.894220 kubelet[2749]: I0509 00:38:43.894197 2749 scope.go:117] "RemoveContainer" containerID="b0d993cda11107c16848852395a509fc51d4c7f29a5a2c3270b5eaf6445bb41e" May 9 00:38:43.894395 containerd[1585]: time="2025-05-09T00:38:43.894362415Z" level=error msg="ContainerStatus for \"b0d993cda11107c16848852395a509fc51d4c7f29a5a2c3270b5eaf6445bb41e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b0d993cda11107c16848852395a509fc51d4c7f29a5a2c3270b5eaf6445bb41e\": not found" May 9 00:38:43.894507 kubelet[2749]: E0509 00:38:43.894488 2749 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b0d993cda11107c16848852395a509fc51d4c7f29a5a2c3270b5eaf6445bb41e\": not found" containerID="b0d993cda11107c16848852395a509fc51d4c7f29a5a2c3270b5eaf6445bb41e" May 9 00:38:43.894552 kubelet[2749]: I0509 00:38:43.894510 2749 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b0d993cda11107c16848852395a509fc51d4c7f29a5a2c3270b5eaf6445bb41e"} err="failed to get container status \"b0d993cda11107c16848852395a509fc51d4c7f29a5a2c3270b5eaf6445bb41e\": rpc error: code = 
NotFound desc = an error occurred when try to find container \"b0d993cda11107c16848852395a509fc51d4c7f29a5a2c3270b5eaf6445bb41e\": not found" May 9 00:38:43.894552 kubelet[2749]: I0509 00:38:43.894525 2749 scope.go:117] "RemoveContainer" containerID="86b4a4c8cf7000416fa402514ba524ba0a0f459de544c77c9662438ee2704a75" May 9 00:38:43.895313 containerd[1585]: time="2025-05-09T00:38:43.895289377Z" level=info msg="RemoveContainer for \"86b4a4c8cf7000416fa402514ba524ba0a0f459de544c77c9662438ee2704a75\"" May 9 00:38:43.898605 containerd[1585]: time="2025-05-09T00:38:43.898574450Z" level=info msg="RemoveContainer for \"86b4a4c8cf7000416fa402514ba524ba0a0f459de544c77c9662438ee2704a75\" returns successfully" May 9 00:38:43.898748 kubelet[2749]: I0509 00:38:43.898710 2749 scope.go:117] "RemoveContainer" containerID="86b4a4c8cf7000416fa402514ba524ba0a0f459de544c77c9662438ee2704a75" May 9 00:38:43.898948 containerd[1585]: time="2025-05-09T00:38:43.898911054Z" level=error msg="ContainerStatus for \"86b4a4c8cf7000416fa402514ba524ba0a0f459de544c77c9662438ee2704a75\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"86b4a4c8cf7000416fa402514ba524ba0a0f459de544c77c9662438ee2704a75\": not found" May 9 00:38:43.899057 kubelet[2749]: E0509 00:38:43.899035 2749 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"86b4a4c8cf7000416fa402514ba524ba0a0f459de544c77c9662438ee2704a75\": not found" containerID="86b4a4c8cf7000416fa402514ba524ba0a0f459de544c77c9662438ee2704a75" May 9 00:38:43.899106 kubelet[2749]: I0509 00:38:43.899057 2749 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"86b4a4c8cf7000416fa402514ba524ba0a0f459de544c77c9662438ee2704a75"} err="failed to get container status \"86b4a4c8cf7000416fa402514ba524ba0a0f459de544c77c9662438ee2704a75\": rpc error: code = NotFound desc = an error 
occurred when try to find container \"86b4a4c8cf7000416fa402514ba524ba0a0f459de544c77c9662438ee2704a75\": not found" May 9 00:38:44.367905 kubelet[2749]: E0509 00:38:44.367859 2749 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 9 00:38:44.765553 sshd[4426]: pam_unix(sshd:session): session closed for user core May 9 00:38:44.773658 systemd[1]: Started sshd@25-10.0.0.109:22-10.0.0.1:42290.service - OpenSSH per-connection server daemon (10.0.0.1:42290). May 9 00:38:44.775189 systemd[1]: sshd@24-10.0.0.109:22-10.0.0.1:42282.service: Deactivated successfully. May 9 00:38:44.778314 systemd[1]: session-25.scope: Deactivated successfully. May 9 00:38:44.779414 systemd-logind[1564]: Session 25 logged out. Waiting for processes to exit. May 9 00:38:44.781796 systemd-logind[1564]: Removed session 25. May 9 00:38:44.803306 sshd[4594]: Accepted publickey for core from 10.0.0.1 port 42290 ssh2: RSA SHA256:YkFjw59PeYd0iJo8o6yRNOqCW4DsIah6oVydwFHJQdU May 9 00:38:44.804747 sshd[4594]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:38:44.808904 systemd-logind[1564]: New session 26 of user core. May 9 00:38:44.816501 systemd[1]: Started session-26.scope - Session 26 of User core. 
May 9 00:38:45.284874 kubelet[2749]: I0509 00:38:45.284828 2749 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09701c7f-5a6d-4bde-8e10-19799e14d3ab" path="/var/lib/kubelet/pods/09701c7f-5a6d-4bde-8e10-19799e14d3ab/volumes" May 9 00:38:45.285559 kubelet[2749]: I0509 00:38:45.285533 2749 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28ef146f-3a25-47ef-9256-f5347ee08fcd" path="/var/lib/kubelet/pods/28ef146f-3a25-47ef-9256-f5347ee08fcd/volumes" May 9 00:38:45.399663 sshd[4594]: pam_unix(sshd:session): session closed for user core May 9 00:38:45.410112 kubelet[2749]: I0509 00:38:45.408529 2749 topology_manager.go:215] "Topology Admit Handler" podUID="2938962f-16e6-4342-b1db-f4dfe817a2fe" podNamespace="kube-system" podName="cilium-cx5bn" May 9 00:38:45.410112 kubelet[2749]: E0509 00:38:45.410122 2749 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="09701c7f-5a6d-4bde-8e10-19799e14d3ab" containerName="cilium-operator" May 9 00:38:45.415025 kubelet[2749]: E0509 00:38:45.410136 2749 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="28ef146f-3a25-47ef-9256-f5347ee08fcd" containerName="cilium-agent" May 9 00:38:45.415025 kubelet[2749]: E0509 00:38:45.410143 2749 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="28ef146f-3a25-47ef-9256-f5347ee08fcd" containerName="apply-sysctl-overwrites" May 9 00:38:45.415025 kubelet[2749]: E0509 00:38:45.410150 2749 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="28ef146f-3a25-47ef-9256-f5347ee08fcd" containerName="mount-bpf-fs" May 9 00:38:45.415025 kubelet[2749]: E0509 00:38:45.410157 2749 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="28ef146f-3a25-47ef-9256-f5347ee08fcd" containerName="clean-cilium-state" May 9 00:38:45.415025 kubelet[2749]: E0509 00:38:45.410166 2749 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="28ef146f-3a25-47ef-9256-f5347ee08fcd" containerName="mount-cgroup" 
May 9 00:38:45.415025 kubelet[2749]: I0509 00:38:45.410193 2749 memory_manager.go:354] "RemoveStaleState removing state" podUID="09701c7f-5a6d-4bde-8e10-19799e14d3ab" containerName="cilium-operator" May 9 00:38:45.415025 kubelet[2749]: I0509 00:38:45.410202 2749 memory_manager.go:354] "RemoveStaleState removing state" podUID="28ef146f-3a25-47ef-9256-f5347ee08fcd" containerName="cilium-agent" May 9 00:38:45.416085 systemd[1]: Started sshd@26-10.0.0.109:22-10.0.0.1:42296.service - OpenSSH per-connection server daemon (10.0.0.1:42296). May 9 00:38:45.420593 systemd[1]: sshd@25-10.0.0.109:22-10.0.0.1:42290.service: Deactivated successfully. May 9 00:38:45.436362 systemd[1]: session-26.scope: Deactivated successfully. May 9 00:38:45.440087 systemd-logind[1564]: Session 26 logged out. Waiting for processes to exit. May 9 00:38:45.443317 systemd-logind[1564]: Removed session 26. May 9 00:38:45.451919 kubelet[2749]: I0509 00:38:45.451404 2749 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2938962f-16e6-4342-b1db-f4dfe817a2fe-hostproc\") pod \"cilium-cx5bn\" (UID: \"2938962f-16e6-4342-b1db-f4dfe817a2fe\") " pod="kube-system/cilium-cx5bn" May 9 00:38:45.452374 kubelet[2749]: I0509 00:38:45.452220 2749 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2938962f-16e6-4342-b1db-f4dfe817a2fe-etc-cni-netd\") pod \"cilium-cx5bn\" (UID: \"2938962f-16e6-4342-b1db-f4dfe817a2fe\") " pod="kube-system/cilium-cx5bn" May 9 00:38:45.452374 kubelet[2749]: I0509 00:38:45.452251 2749 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2938962f-16e6-4342-b1db-f4dfe817a2fe-xtables-lock\") pod \"cilium-cx5bn\" (UID: \"2938962f-16e6-4342-b1db-f4dfe817a2fe\") " pod="kube-system/cilium-cx5bn" May 9 
00:38:45.452374 kubelet[2749]: I0509 00:38:45.452328 2749 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2938962f-16e6-4342-b1db-f4dfe817a2fe-cilium-cgroup\") pod \"cilium-cx5bn\" (UID: \"2938962f-16e6-4342-b1db-f4dfe817a2fe\") " pod="kube-system/cilium-cx5bn" May 9 00:38:45.452649 kubelet[2749]: I0509 00:38:45.452523 2749 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2938962f-16e6-4342-b1db-f4dfe817a2fe-cilium-run\") pod \"cilium-cx5bn\" (UID: \"2938962f-16e6-4342-b1db-f4dfe817a2fe\") " pod="kube-system/cilium-cx5bn" May 9 00:38:45.452649 kubelet[2749]: I0509 00:38:45.452550 2749 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2938962f-16e6-4342-b1db-f4dfe817a2fe-clustermesh-secrets\") pod \"cilium-cx5bn\" (UID: \"2938962f-16e6-4342-b1db-f4dfe817a2fe\") " pod="kube-system/cilium-cx5bn" May 9 00:38:45.452649 kubelet[2749]: I0509 00:38:45.452613 2749 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2938962f-16e6-4342-b1db-f4dfe817a2fe-host-proc-sys-kernel\") pod \"cilium-cx5bn\" (UID: \"2938962f-16e6-4342-b1db-f4dfe817a2fe\") " pod="kube-system/cilium-cx5bn" May 9 00:38:45.453135 kubelet[2749]: I0509 00:38:45.452839 2749 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2938962f-16e6-4342-b1db-f4dfe817a2fe-hubble-tls\") pod \"cilium-cx5bn\" (UID: \"2938962f-16e6-4342-b1db-f4dfe817a2fe\") " pod="kube-system/cilium-cx5bn" May 9 00:38:45.453135 kubelet[2749]: I0509 00:38:45.452922 2749 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2938962f-16e6-4342-b1db-f4dfe817a2fe-bpf-maps\") pod \"cilium-cx5bn\" (UID: \"2938962f-16e6-4342-b1db-f4dfe817a2fe\") " pod="kube-system/cilium-cx5bn" May 9 00:38:45.453135 kubelet[2749]: I0509 00:38:45.452943 2749 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2938962f-16e6-4342-b1db-f4dfe817a2fe-cni-path\") pod \"cilium-cx5bn\" (UID: \"2938962f-16e6-4342-b1db-f4dfe817a2fe\") " pod="kube-system/cilium-cx5bn" May 9 00:38:45.453135 kubelet[2749]: I0509 00:38:45.452979 2749 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2938962f-16e6-4342-b1db-f4dfe817a2fe-cilium-config-path\") pod \"cilium-cx5bn\" (UID: \"2938962f-16e6-4342-b1db-f4dfe817a2fe\") " pod="kube-system/cilium-cx5bn" May 9 00:38:45.453135 kubelet[2749]: I0509 00:38:45.453002 2749 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/2938962f-16e6-4342-b1db-f4dfe817a2fe-cilium-ipsec-secrets\") pod \"cilium-cx5bn\" (UID: \"2938962f-16e6-4342-b1db-f4dfe817a2fe\") " pod="kube-system/cilium-cx5bn" May 9 00:38:45.453135 kubelet[2749]: I0509 00:38:45.453023 2749 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2938962f-16e6-4342-b1db-f4dfe817a2fe-lib-modules\") pod \"cilium-cx5bn\" (UID: \"2938962f-16e6-4342-b1db-f4dfe817a2fe\") " pod="kube-system/cilium-cx5bn" May 9 00:38:45.453386 kubelet[2749]: I0509 00:38:45.453045 2749 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/2938962f-16e6-4342-b1db-f4dfe817a2fe-host-proc-sys-net\") pod \"cilium-cx5bn\" (UID: \"2938962f-16e6-4342-b1db-f4dfe817a2fe\") " pod="kube-system/cilium-cx5bn" May 9 00:38:45.453386 kubelet[2749]: I0509 00:38:45.453064 2749 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tf9w4\" (UniqueName: \"kubernetes.io/projected/2938962f-16e6-4342-b1db-f4dfe817a2fe-kube-api-access-tf9w4\") pod \"cilium-cx5bn\" (UID: \"2938962f-16e6-4342-b1db-f4dfe817a2fe\") " pod="kube-system/cilium-cx5bn" May 9 00:38:45.461665 sshd[4608]: Accepted publickey for core from 10.0.0.1 port 42296 ssh2: RSA SHA256:YkFjw59PeYd0iJo8o6yRNOqCW4DsIah6oVydwFHJQdU May 9 00:38:45.463300 sshd[4608]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:38:45.467355 systemd-logind[1564]: New session 27 of user core. May 9 00:38:45.478551 systemd[1]: Started session-27.scope - Session 27 of User core. May 9 00:38:45.531557 sshd[4608]: pam_unix(sshd:session): session closed for user core May 9 00:38:45.544522 systemd[1]: Started sshd@27-10.0.0.109:22-10.0.0.1:42302.service - OpenSSH per-connection server daemon (10.0.0.1:42302). May 9 00:38:45.545572 systemd[1]: sshd@26-10.0.0.109:22-10.0.0.1:42296.service: Deactivated successfully. May 9 00:38:45.549510 systemd-logind[1564]: Session 27 logged out. Waiting for processes to exit. May 9 00:38:45.550320 systemd[1]: session-27.scope: Deactivated successfully. May 9 00:38:45.551982 systemd-logind[1564]: Removed session 27. May 9 00:38:45.591443 sshd[4617]: Accepted publickey for core from 10.0.0.1 port 42302 ssh2: RSA SHA256:YkFjw59PeYd0iJo8o6yRNOqCW4DsIah6oVydwFHJQdU May 9 00:38:45.593252 sshd[4617]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:38:45.597609 systemd-logind[1564]: New session 28 of user core. May 9 00:38:45.603654 systemd[1]: Started session-28.scope - Session 28 of User core. 
May 9 00:38:45.722826 kubelet[2749]: E0509 00:38:45.722785 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:38:45.723877 containerd[1585]: time="2025-05-09T00:38:45.723831801Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cx5bn,Uid:2938962f-16e6-4342-b1db-f4dfe817a2fe,Namespace:kube-system,Attempt:0,}" May 9 00:38:45.746405 containerd[1585]: time="2025-05-09T00:38:45.746283171Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 00:38:45.746405 containerd[1585]: time="2025-05-09T00:38:45.746341872Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 00:38:45.746405 containerd[1585]: time="2025-05-09T00:38:45.746359134Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:38:45.746595 containerd[1585]: time="2025-05-09T00:38:45.746475402Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:38:45.793613 containerd[1585]: time="2025-05-09T00:38:45.793552348Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cx5bn,Uid:2938962f-16e6-4342-b1db-f4dfe817a2fe,Namespace:kube-system,Attempt:0,} returns sandbox id \"3cd356b7cb507a2ccdbde57635cd89a311872bf510ced54de35d9733d9b421be\"" May 9 00:38:45.794411 kubelet[2749]: E0509 00:38:45.794376 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:38:45.797092 containerd[1585]: time="2025-05-09T00:38:45.796972926Z" level=info msg="CreateContainer within sandbox \"3cd356b7cb507a2ccdbde57635cd89a311872bf510ced54de35d9733d9b421be\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 9 00:38:45.820775 containerd[1585]: time="2025-05-09T00:38:45.820683771Z" level=info msg="CreateContainer within sandbox \"3cd356b7cb507a2ccdbde57635cd89a311872bf510ced54de35d9733d9b421be\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d519bea0ee9f2f293c5f87ed1f733822c4b38ba7d2649540d4d5d0a8bbaf1bcf\"" May 9 00:38:45.821419 containerd[1585]: time="2025-05-09T00:38:45.821374039Z" level=info msg="StartContainer for \"d519bea0ee9f2f293c5f87ed1f733822c4b38ba7d2649540d4d5d0a8bbaf1bcf\"" May 9 00:38:45.884288 containerd[1585]: time="2025-05-09T00:38:45.884217482Z" level=info msg="StartContainer for \"d519bea0ee9f2f293c5f87ed1f733822c4b38ba7d2649540d4d5d0a8bbaf1bcf\" returns successfully" May 9 00:38:45.929068 containerd[1585]: time="2025-05-09T00:38:45.928982873Z" level=info msg="shim disconnected" id=d519bea0ee9f2f293c5f87ed1f733822c4b38ba7d2649540d4d5d0a8bbaf1bcf namespace=k8s.io May 9 00:38:45.929068 containerd[1585]: time="2025-05-09T00:38:45.929058215Z" level=warning msg="cleaning up after shim disconnected" id=d519bea0ee9f2f293c5f87ed1f733822c4b38ba7d2649540d4d5d0a8bbaf1bcf 
namespace=k8s.io May 9 00:38:45.929068 containerd[1585]: time="2025-05-09T00:38:45.929070568Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 9 00:38:46.864309 kubelet[2749]: E0509 00:38:46.864248 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:38:46.866473 containerd[1585]: time="2025-05-09T00:38:46.866173665Z" level=info msg="CreateContainer within sandbox \"3cd356b7cb507a2ccdbde57635cd89a311872bf510ced54de35d9733d9b421be\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 9 00:38:46.902364 containerd[1585]: time="2025-05-09T00:38:46.902295209Z" level=info msg="CreateContainer within sandbox \"3cd356b7cb507a2ccdbde57635cd89a311872bf510ced54de35d9733d9b421be\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"db27b32fa0dce0235543f3f644c5fdac20a17617e4cf05dd4dd9f54b9ade801f\"" May 9 00:38:46.903027 containerd[1585]: time="2025-05-09T00:38:46.902982441Z" level=info msg="StartContainer for \"db27b32fa0dce0235543f3f644c5fdac20a17617e4cf05dd4dd9f54b9ade801f\"" May 9 00:38:46.965043 containerd[1585]: time="2025-05-09T00:38:46.964799029Z" level=info msg="StartContainer for \"db27b32fa0dce0235543f3f644c5fdac20a17617e4cf05dd4dd9f54b9ade801f\" returns successfully" May 9 00:38:47.031584 containerd[1585]: time="2025-05-09T00:38:47.031500374Z" level=info msg="shim disconnected" id=db27b32fa0dce0235543f3f644c5fdac20a17617e4cf05dd4dd9f54b9ade801f namespace=k8s.io May 9 00:38:47.031584 containerd[1585]: time="2025-05-09T00:38:47.031569554Z" level=warning msg="cleaning up after shim disconnected" id=db27b32fa0dce0235543f3f644c5fdac20a17617e4cf05dd4dd9f54b9ade801f namespace=k8s.io May 9 00:38:47.031584 containerd[1585]: time="2025-05-09T00:38:47.031581617Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 9 00:38:47.560139 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-db27b32fa0dce0235543f3f644c5fdac20a17617e4cf05dd4dd9f54b9ade801f-rootfs.mount: Deactivated successfully. May 9 00:38:47.867716 kubelet[2749]: E0509 00:38:47.867679 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:38:47.869622 containerd[1585]: time="2025-05-09T00:38:47.869575017Z" level=info msg="CreateContainer within sandbox \"3cd356b7cb507a2ccdbde57635cd89a311872bf510ced54de35d9733d9b421be\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 9 00:38:47.894341 containerd[1585]: time="2025-05-09T00:38:47.894274356Z" level=info msg="CreateContainer within sandbox \"3cd356b7cb507a2ccdbde57635cd89a311872bf510ced54de35d9733d9b421be\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"49c0d83f29a5d0f45688ebea1a2d285004cdeb3c88225a3f522c05c7c147506e\"" May 9 00:38:47.896758 containerd[1585]: time="2025-05-09T00:38:47.894961047Z" level=info msg="StartContainer for \"49c0d83f29a5d0f45688ebea1a2d285004cdeb3c88225a3f522c05c7c147506e\"" May 9 00:38:47.966914 containerd[1585]: time="2025-05-09T00:38:47.966799682Z" level=info msg="StartContainer for \"49c0d83f29a5d0f45688ebea1a2d285004cdeb3c88225a3f522c05c7c147506e\" returns successfully" May 9 00:38:48.004106 containerd[1585]: time="2025-05-09T00:38:48.004042981Z" level=info msg="shim disconnected" id=49c0d83f29a5d0f45688ebea1a2d285004cdeb3c88225a3f522c05c7c147506e namespace=k8s.io May 9 00:38:48.004106 containerd[1585]: time="2025-05-09T00:38:48.004092115Z" level=warning msg="cleaning up after shim disconnected" id=49c0d83f29a5d0f45688ebea1a2d285004cdeb3c88225a3f522c05c7c147506e namespace=k8s.io May 9 00:38:48.004106 containerd[1585]: time="2025-05-09T00:38:48.004101412Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 9 00:38:48.021584 containerd[1585]: 
time="2025-05-09T00:38:48.021512766Z" level=warning msg="cleanup warnings time=\"2025-05-09T00:38:48Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io May 9 00:38:48.559963 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-49c0d83f29a5d0f45688ebea1a2d285004cdeb3c88225a3f522c05c7c147506e-rootfs.mount: Deactivated successfully. May 9 00:38:48.871461 kubelet[2749]: E0509 00:38:48.871396 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:38:48.873226 containerd[1585]: time="2025-05-09T00:38:48.873185855Z" level=info msg="CreateContainer within sandbox \"3cd356b7cb507a2ccdbde57635cd89a311872bf510ced54de35d9733d9b421be\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 9 00:38:48.902479 containerd[1585]: time="2025-05-09T00:38:48.902430634Z" level=info msg="CreateContainer within sandbox \"3cd356b7cb507a2ccdbde57635cd89a311872bf510ced54de35d9733d9b421be\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"ae4191dac224aec71bfcbd0c1d528af6e89b06b40f301750a7371a3d59096774\"" May 9 00:38:48.903033 containerd[1585]: time="2025-05-09T00:38:48.902980767Z" level=info msg="StartContainer for \"ae4191dac224aec71bfcbd0c1d528af6e89b06b40f301750a7371a3d59096774\"" May 9 00:38:48.959711 containerd[1585]: time="2025-05-09T00:38:48.959647671Z" level=info msg="StartContainer for \"ae4191dac224aec71bfcbd0c1d528af6e89b06b40f301750a7371a3d59096774\" returns successfully" May 9 00:38:48.982986 containerd[1585]: time="2025-05-09T00:38:48.982919828Z" level=info msg="shim disconnected" id=ae4191dac224aec71bfcbd0c1d528af6e89b06b40f301750a7371a3d59096774 namespace=k8s.io May 9 00:38:48.982986 containerd[1585]: time="2025-05-09T00:38:48.982973478Z" level=warning msg="cleaning 
up after shim disconnected" id=ae4191dac224aec71bfcbd0c1d528af6e89b06b40f301750a7371a3d59096774 namespace=k8s.io May 9 00:38:48.982986 containerd[1585]: time="2025-05-09T00:38:48.982981644Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 9 00:38:49.369311 kubelet[2749]: E0509 00:38:49.369275 2749 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 9 00:38:49.496091 update_engine[1570]: I20250509 00:38:49.496014 1570 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs May 9 00:38:49.496091 update_engine[1570]: I20250509 00:38:49.496065 1570 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs May 9 00:38:49.496918 update_engine[1570]: I20250509 00:38:49.496330 1570 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs May 9 00:38:49.496918 update_engine[1570]: I20250509 00:38:49.496838 1570 omaha_request_params.cc:62] Current group set to lts May 9 00:38:49.497689 update_engine[1570]: I20250509 00:38:49.497624 1570 update_attempter.cc:499] Already updated boot flags. Skipping. May 9 00:38:49.497689 update_engine[1570]: I20250509 00:38:49.497669 1570 update_attempter.cc:643] Scheduling an action processor start. 
May 9 00:38:49.497829 update_engine[1570]: I20250509 00:38:49.497695 1570 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction May 9 00:38:49.497829 update_engine[1570]: I20250509 00:38:49.497776 1570 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs May 9 00:38:49.497883 locksmithd[1613]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 May 9 00:38:49.498167 update_engine[1570]: I20250509 00:38:49.497892 1570 omaha_request_action.cc:271] Posting an Omaha request to disabled May 9 00:38:49.498167 update_engine[1570]: I20250509 00:38:49.497906 1570 omaha_request_action.cc:272] Request: May 9 00:38:49.498167 update_engine[1570]: May 9 00:38:49.498167 update_engine[1570]: May 9 00:38:49.498167 update_engine[1570]: May 9 00:38:49.498167 update_engine[1570]: May 9 00:38:49.498167 update_engine[1570]: May 9 00:38:49.498167 update_engine[1570]: May 9 00:38:49.498167 update_engine[1570]: May 9 00:38:49.498167 update_engine[1570]: May 9 00:38:49.498167 update_engine[1570]: I20250509 00:38:49.497917 1570 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 9 00:38:49.501331 update_engine[1570]: I20250509 00:38:49.501296 1570 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 9 00:38:49.501693 update_engine[1570]: I20250509 00:38:49.501653 1570 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. May 9 00:38:49.556779 update_engine[1570]: E20250509 00:38:49.556705 1570 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 9 00:38:49.556855 update_engine[1570]: I20250509 00:38:49.556823 1570 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 May 9 00:38:49.559917 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ae4191dac224aec71bfcbd0c1d528af6e89b06b40f301750a7371a3d59096774-rootfs.mount: Deactivated successfully. 
May 9 00:38:49.876593 kubelet[2749]: E0509 00:38:49.876558 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:38:49.878663 containerd[1585]: time="2025-05-09T00:38:49.878597445Z" level=info msg="CreateContainer within sandbox \"3cd356b7cb507a2ccdbde57635cd89a311872bf510ced54de35d9733d9b421be\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 9 00:38:49.895366 containerd[1585]: time="2025-05-09T00:38:49.895314594Z" level=info msg="CreateContainer within sandbox \"3cd356b7cb507a2ccdbde57635cd89a311872bf510ced54de35d9733d9b421be\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e94c5af40f95b2e8b3e7518e0e24a5818567b9bde427b1480cf71845e6ecbcdb\"" May 9 00:38:49.895892 containerd[1585]: time="2025-05-09T00:38:49.895847214Z" level=info msg="StartContainer for \"e94c5af40f95b2e8b3e7518e0e24a5818567b9bde427b1480cf71845e6ecbcdb\"" May 9 00:38:49.958457 containerd[1585]: time="2025-05-09T00:38:49.958408247Z" level=info msg="StartContainer for \"e94c5af40f95b2e8b3e7518e0e24a5818567b9bde427b1480cf71845e6ecbcdb\" returns successfully" May 9 00:38:50.375293 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) May 9 00:38:50.881490 kubelet[2749]: E0509 00:38:50.881461 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:38:50.892978 kubelet[2749]: I0509 00:38:50.892908 2749 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-cx5bn" podStartSLOduration=5.892874918 podStartE2EDuration="5.892874918s" podCreationTimestamp="2025-05-09 00:38:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-09 00:38:50.89276411 +0000 
UTC m=+91.710542220" watchObservedRunningTime="2025-05-09 00:38:50.892874918 +0000 UTC m=+91.710653027" May 9 00:38:51.893717 kubelet[2749]: E0509 00:38:51.893662 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:38:52.224497 kubelet[2749]: I0509 00:38:52.224347 2749 setters.go:580] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-09T00:38:52Z","lastTransitionTime":"2025-05-09T00:38:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} May 9 00:38:52.894853 kubelet[2749]: E0509 00:38:52.894813 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:38:53.529415 systemd-networkd[1246]: lxc_health: Link UP May 9 00:38:53.539417 systemd-networkd[1246]: lxc_health: Gained carrier May 9 00:38:53.896834 kubelet[2749]: E0509 00:38:53.896347 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:38:54.728545 systemd-networkd[1246]: lxc_health: Gained IPv6LL May 9 00:38:54.898185 kubelet[2749]: E0509 00:38:54.898142 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:38:55.900428 kubelet[2749]: E0509 00:38:55.900380 2749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:38:59.497683 update_engine[1570]: I20250509 
00:38:59.497572 1570 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 9 00:38:59.498320 update_engine[1570]: I20250509 00:38:59.497934 1570 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 9 00:38:59.498320 update_engine[1570]: I20250509 00:38:59.498164 1570 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. May 9 00:38:59.551227 update_engine[1570]: E20250509 00:38:59.551157 1570 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 9 00:38:59.551412 update_engine[1570]: I20250509 00:38:59.551273 1570 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 May 9 00:39:00.512861 sshd[4617]: pam_unix(sshd:session): session closed for user core May 9 00:39:00.517000 systemd[1]: sshd@27-10.0.0.109:22-10.0.0.1:42302.service: Deactivated successfully. May 9 00:39:00.519791 systemd-logind[1564]: Session 28 logged out. Waiting for processes to exit. May 9 00:39:00.519896 systemd[1]: session-28.scope: Deactivated successfully. May 9 00:39:00.521337 systemd-logind[1564]: Removed session 28.