May 9 00:33:13.935721 kernel: Linux version 6.6.89-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu May 8 22:52:37 -00 2025
May 9 00:33:13.935753 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=56b660b06ded103a15fe25ebfbdecb898a20f374e429fec465c69b1a75d59c4b
May 9 00:33:13.935765 kernel: BIOS-provided physical RAM map:
May 9 00:33:13.935771 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
May 9 00:33:13.935777 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
May 9 00:33:13.935783 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
May 9 00:33:13.935791 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
May 9 00:33:13.935797 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
May 9 00:33:13.935804 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable
May 9 00:33:13.935810 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
May 9 00:33:13.935841 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable
May 9 00:33:13.935867 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved
May 9 00:33:13.935885 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20
May 9 00:33:13.935892 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved
May 9 00:33:13.935900 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
May 9 00:33:13.935909 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
May 9 00:33:13.935922 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
May 9 00:33:13.935930 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
May 9 00:33:13.935939 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
May 9 00:33:13.935947 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
May 9 00:33:13.935956 kernel: NX (Execute Disable) protection: active
May 9 00:33:13.935965 kernel: APIC: Static calls initialized
May 9 00:33:13.935973 kernel: efi: EFI v2.7 by EDK II
May 9 00:33:13.935982 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b675198
May 9 00:33:13.935990 kernel: SMBIOS 2.8 present.
May 9 00:33:13.935999 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
May 9 00:33:13.936006 kernel: Hypervisor detected: KVM
May 9 00:33:13.936015 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
May 9 00:33:13.936022 kernel: kvm-clock: using sched offset of 5363119930 cycles
May 9 00:33:13.936031 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
May 9 00:33:13.936041 kernel: tsc: Detected 2794.748 MHz processor
May 9 00:33:13.936051 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 9 00:33:13.936061 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 9 00:33:13.936070 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000
May 9 00:33:13.936077 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
May 9 00:33:13.936084 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 9 00:33:13.936094 kernel: Using GB pages for direct mapping
May 9 00:33:13.936104 kernel: Secure boot disabled
May 9 00:33:13.936114 kernel: ACPI: Early table checksum verification disabled
May 9 00:33:13.936125 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
May 9 00:33:13.936142 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
May 9 00:33:13.936150 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 9 00:33:13.936157 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 9 00:33:13.936167 kernel: ACPI: FACS 0x000000009CBDD000 000040
May 9 00:33:13.936174 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 9 00:33:13.936185 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 9 00:33:13.936193 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 9 00:33:13.936200 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 9 00:33:13.936207 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
May 9 00:33:13.936214 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
May 9 00:33:13.936225 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7]
May 9 00:33:13.936234 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
May 9 00:33:13.936244 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
May 9 00:33:13.936254 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
May 9 00:33:13.936261 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
May 9 00:33:13.936268 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
May 9 00:33:13.936276 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
May 9 00:33:13.936283 kernel: No NUMA configuration found
May 9 00:33:13.936296 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff]
May 9 00:33:13.936310 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff]
May 9 00:33:13.936321 kernel: Zone ranges:
May 9 00:33:13.936331 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
May 9 00:33:13.936344 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff]
May 9 00:33:13.936359 kernel: Normal empty
May 9 00:33:13.936369 kernel: Movable zone start for each node
May 9 00:33:13.936379 kernel: Early memory node ranges
May 9 00:33:13.936389 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
May 9 00:33:13.936399 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
May 9 00:33:13.936409 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
May 9 00:33:13.936422 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff]
May 9 00:33:13.936429 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff]
May 9 00:33:13.936445 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff]
May 9 00:33:13.936453 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff]
May 9 00:33:13.936460 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 9 00:33:13.936467 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
May 9 00:33:13.936475 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
May 9 00:33:13.936482 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 9 00:33:13.936489 kernel: On node 0, zone DMA: 240 pages in unavailable ranges
May 9 00:33:13.936499 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
May 9 00:33:13.936506 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges
May 9 00:33:13.936513 kernel: ACPI: PM-Timer IO Port: 0x608
May 9 00:33:13.936521 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
May 9 00:33:13.936528 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
May 9 00:33:13.936535 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
May 9 00:33:13.936542 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
May 9 00:33:13.936549 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
May 9 00:33:13.936556 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
May 9 00:33:13.936566 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
May 9 00:33:13.936573 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 9 00:33:13.936580 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
May 9 00:33:13.936587 kernel: TSC deadline timer available
May 9 00:33:13.936594 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
May 9 00:33:13.936601 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
May 9 00:33:13.936611 kernel: kvm-guest: KVM setup pv remote TLB flush
May 9 00:33:13.936621 kernel: kvm-guest: setup PV sched yield
May 9 00:33:13.936631 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
May 9 00:33:13.936645 kernel: Booting paravirtualized kernel on KVM
May 9 00:33:13.936652 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 9 00:33:13.936660 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
May 9 00:33:13.936667 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288
May 9 00:33:13.936674 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152
May 9 00:33:13.936681 kernel: pcpu-alloc: [0] 0 1 2 3
May 9 00:33:13.936688 kernel: kvm-guest: PV spinlocks enabled
May 9 00:33:13.936695 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
May 9 00:33:13.936704 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=56b660b06ded103a15fe25ebfbdecb898a20f374e429fec465c69b1a75d59c4b
May 9 00:33:13.936718 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 9 00:33:13.936725 kernel: random: crng init done
May 9 00:33:13.936732 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 9 00:33:13.936739 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 9 00:33:13.936747 kernel: Fallback order for Node 0: 0
May 9 00:33:13.936754 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759
May 9 00:33:13.936761 kernel: Policy zone: DMA32
May 9 00:33:13.936768 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 9 00:33:13.936778 kernel: Memory: 2400600K/2567000K available (12288K kernel code, 2295K rwdata, 22740K rodata, 42864K init, 2328K bss, 166140K reserved, 0K cma-reserved)
May 9 00:33:13.936786 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 9 00:33:13.936793 kernel: ftrace: allocating 37944 entries in 149 pages
May 9 00:33:13.936800 kernel: ftrace: allocated 149 pages with 4 groups
May 9 00:33:13.936807 kernel: Dynamic Preempt: voluntary
May 9 00:33:13.936838 kernel: rcu: Preemptible hierarchical RCU implementation.
May 9 00:33:13.936849 kernel: rcu: RCU event tracing is enabled.
May 9 00:33:13.936857 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 9 00:33:13.936864 kernel: Trampoline variant of Tasks RCU enabled.
May 9 00:33:13.936872 kernel: Rude variant of Tasks RCU enabled.
May 9 00:33:13.936880 kernel: Tracing variant of Tasks RCU enabled.
May 9 00:33:13.936887 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 9 00:33:13.936897 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 9 00:33:13.936905 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
May 9 00:33:13.936912 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 9 00:33:13.936920 kernel: Console: colour dummy device 80x25
May 9 00:33:13.936928 kernel: printk: console [ttyS0] enabled
May 9 00:33:13.936938 kernel: ACPI: Core revision 20230628
May 9 00:33:13.936945 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
May 9 00:33:13.936953 kernel: APIC: Switch to symmetric I/O mode setup
May 9 00:33:13.936960 kernel: x2apic enabled
May 9 00:33:13.936968 kernel: APIC: Switched APIC routing to: physical x2apic
May 9 00:33:13.936976 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
May 9 00:33:13.936983 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
May 9 00:33:13.936991 kernel: kvm-guest: setup PV IPIs
May 9 00:33:13.936998 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
May 9 00:33:13.937008 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
May 9 00:33:13.937016 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
May 9 00:33:13.937024 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
May 9 00:33:13.937031 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
May 9 00:33:13.937039 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
May 9 00:33:13.937046 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 9 00:33:13.937054 kernel: Spectre V2 : Mitigation: Retpolines
May 9 00:33:13.937061 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
May 9 00:33:13.937069 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
May 9 00:33:13.937080 kernel: RETBleed: Mitigation: untrained return thunk
May 9 00:33:13.937087 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
May 9 00:33:13.937095 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
May 9 00:33:13.937103 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
May 9 00:33:13.937114 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
May 9 00:33:13.937121 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
May 9 00:33:13.937129 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
May 9 00:33:13.937137 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
May 9 00:33:13.937147 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
May 9 00:33:13.937155 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
May 9 00:33:13.937162 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
May 9 00:33:13.937170 kernel: Freeing SMP alternatives memory: 32K
May 9 00:33:13.937178 kernel: pid_max: default: 32768 minimum: 301
May 9 00:33:13.937185 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 9 00:33:13.937193 kernel: landlock: Up and running.
May 9 00:33:13.937200 kernel: SELinux: Initializing.
May 9 00:33:13.937208 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 9 00:33:13.937220 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 9 00:33:13.937228 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
May 9 00:33:13.937236 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 9 00:33:13.937245 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 9 00:33:13.937254 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 9 00:33:13.937263 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
May 9 00:33:13.937272 kernel: ... version: 0
May 9 00:33:13.937280 kernel: ... bit width: 48
May 9 00:33:13.937288 kernel: ... generic registers: 6
May 9 00:33:13.937298 kernel: ... value mask: 0000ffffffffffff
May 9 00:33:13.937306 kernel: ... max period: 00007fffffffffff
May 9 00:33:13.937313 kernel: ... fixed-purpose events: 0
May 9 00:33:13.937321 kernel: ... event mask: 000000000000003f
May 9 00:33:13.937329 kernel: signal: max sigframe size: 1776
May 9 00:33:13.937336 kernel: rcu: Hierarchical SRCU implementation.
May 9 00:33:13.937344 kernel: rcu: Max phase no-delay instances is 400.
May 9 00:33:13.937352 kernel: smp: Bringing up secondary CPUs ...
May 9 00:33:13.937359 kernel: smpboot: x86: Booting SMP configuration:
May 9 00:33:13.937369 kernel: .... node #0, CPUs: #1 #2 #3
May 9 00:33:13.937377 kernel: smp: Brought up 1 node, 4 CPUs
May 9 00:33:13.937384 kernel: smpboot: Max logical packages: 1
May 9 00:33:13.937392 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
May 9 00:33:13.937400 kernel: devtmpfs: initialized
May 9 00:33:13.937407 kernel: x86/mm: Memory block size: 128MB
May 9 00:33:13.937415 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
May 9 00:33:13.937422 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
May 9 00:33:13.937430 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes)
May 9 00:33:13.937447 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
May 9 00:33:13.937457 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
May 9 00:33:13.937465 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 9 00:33:13.937473 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 9 00:33:13.937480 kernel: pinctrl core: initialized pinctrl subsystem
May 9 00:33:13.937488 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 9 00:33:13.937495 kernel: audit: initializing netlink subsys (disabled)
May 9 00:33:13.937503 kernel: audit: type=2000 audit(1746750792.932:1): state=initialized audit_enabled=0 res=1
May 9 00:33:13.937511 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 9 00:33:13.937521 kernel: thermal_sys: Registered thermal governor 'user_space'
May 9 00:33:13.937529 kernel: cpuidle: using governor menu
May 9 00:33:13.937536 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 9 00:33:13.937544 kernel: dca service started, version 1.12.1
May 9 00:33:13.937551 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
May 9 00:33:13.937559 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
May 9 00:33:13.937567 kernel: PCI: Using configuration type 1 for base access
May 9 00:33:13.937574 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
May 9 00:33:13.937582 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 9 00:33:13.937593 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
May 9 00:33:13.937600 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 9 00:33:13.937608 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
May 9 00:33:13.937615 kernel: ACPI: Added _OSI(Module Device)
May 9 00:33:13.937623 kernel: ACPI: Added _OSI(Processor Device)
May 9 00:33:13.937630 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 9 00:33:13.937638 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 9 00:33:13.937646 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 9 00:33:13.937653 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
May 9 00:33:13.937664 kernel: ACPI: Interpreter enabled
May 9 00:33:13.937671 kernel: ACPI: PM: (supports S0 S3 S5)
May 9 00:33:13.937679 kernel: ACPI: Using IOAPIC for interrupt routing
May 9 00:33:13.937686 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 9 00:33:13.937694 kernel: PCI: Using E820 reservations for host bridge windows
May 9 00:33:13.937702 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
May 9 00:33:13.937709 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 9 00:33:13.937969 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 9 00:33:13.938135 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
May 9 00:33:13.938276 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
May 9 00:33:13.938292 kernel: PCI host bridge to bus 0000:00
May 9 00:33:13.938447 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
May 9 00:33:13.938567 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
May 9 00:33:13.938680 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
May 9 00:33:13.938792 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
May 9 00:33:13.938930 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
May 9 00:33:13.939044 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window]
May 9 00:33:13.939157 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 9 00:33:13.939317 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
May 9 00:33:13.939500 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
May 9 00:33:13.939657 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
May 9 00:33:13.939835 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
May 9 00:33:13.939992 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
May 9 00:33:13.940125 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
May 9 00:33:13.941445 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
May 9 00:33:13.941620 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
May 9 00:33:13.941760 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
May 9 00:33:13.941908 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
May 9 00:33:13.942042 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref]
May 9 00:33:13.942213 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
May 9 00:33:13.942349 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
May 9 00:33:13.942488 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
May 9 00:33:13.942615 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref]
May 9 00:33:13.942763 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
May 9 00:33:13.942948 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
May 9 00:33:13.943107 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
May 9 00:33:13.943248 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref]
May 9 00:33:13.943373 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
May 9 00:33:13.943543 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
May 9 00:33:13.943673 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
May 9 00:33:13.943814 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
May 9 00:33:13.943970 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
May 9 00:33:13.944101 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
May 9 00:33:13.944246 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
May 9 00:33:13.944377 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
May 9 00:33:13.944388 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
May 9 00:33:13.944396 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
May 9 00:33:13.944404 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
May 9 00:33:13.944413 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
May 9 00:33:13.944424 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
May 9 00:33:13.944450 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
May 9 00:33:13.944461 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
May 9 00:33:13.944470 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
May 9 00:33:13.944485 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
May 9 00:33:13.944498 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
May 9 00:33:13.944509 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
May 9 00:33:13.944520 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
May 9 00:33:13.944529 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
May 9 00:33:13.944537 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
May 9 00:33:13.944549 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
May 9 00:33:13.944565 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
May 9 00:33:13.944578 kernel: iommu: Default domain type: Translated
May 9 00:33:13.944589 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
May 9 00:33:13.944600 kernel: efivars: Registered efivars operations
May 9 00:33:13.944610 kernel: PCI: Using ACPI for IRQ routing
May 9 00:33:13.944621 kernel: PCI: pci_cache_line_size set to 64 bytes
May 9 00:33:13.944631 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
May 9 00:33:13.944639 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff]
May 9 00:33:13.944651 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff]
May 9 00:33:13.944658 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff]
May 9 00:33:13.944841 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
May 9 00:33:13.944973 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
May 9 00:33:13.945098 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
May 9 00:33:13.945108 kernel: vgaarb: loaded
May 9 00:33:13.945116 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
May 9 00:33:13.945124 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
May 9 00:33:13.945137 kernel: clocksource: Switched to clocksource kvm-clock
May 9 00:33:13.945145 kernel: VFS: Disk quotas dquot_6.6.0
May 9 00:33:13.945152 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 9 00:33:13.945160 kernel: pnp: PnP ACPI init
May 9 00:33:13.945330 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
May 9 00:33:13.945344 kernel: pnp: PnP ACPI: found 6 devices
May 9 00:33:13.945352 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 9 00:33:13.945360 kernel: NET: Registered PF_INET protocol family
May 9 00:33:13.946612 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 9 00:33:13.946634 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 9 00:33:13.946651 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 9 00:33:13.946660 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 9 00:33:13.946667 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 9 00:33:13.946675 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 9 00:33:13.946683 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 9 00:33:13.946690 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 9 00:33:13.946698 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 9 00:33:13.946709 kernel: NET: Registered PF_XDP protocol family
May 9 00:33:13.946977 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
May 9 00:33:13.947110 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
May 9 00:33:13.947228 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
May 9 00:33:13.947350 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
May 9 00:33:13.947475 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
May 9 00:33:13.947591 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
May 9 00:33:13.947704 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
May 9 00:33:13.947857 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window]
May 9 00:33:13.947870 kernel: PCI: CLS 0 bytes, default 64
May 9 00:33:13.947878 kernel: Initialise system trusted keyrings
May 9 00:33:13.947886 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 9 00:33:13.947894 kernel: Key type asymmetric registered
May 9 00:33:13.947901 kernel: Asymmetric key parser 'x509' registered
May 9 00:33:13.947911 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
May 9 00:33:13.947922 kernel: io scheduler mq-deadline registered
May 9 00:33:13.947932 kernel: io scheduler kyber registered
May 9 00:33:13.947949 kernel: io scheduler bfq registered
May 9 00:33:13.947959 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
May 9 00:33:13.947968 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
May 9 00:33:13.947976 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
May 9 00:33:13.947984 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
May 9 00:33:13.947992 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 9 00:33:13.948000 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
May 9 00:33:13.948007 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
May 9 00:33:13.948015 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
May 9 00:33:13.948026 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
May 9 00:33:13.948217 kernel: rtc_cmos 00:04: RTC can wake from S4
May 9 00:33:13.948231 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
May 9 00:33:13.948352 kernel: rtc_cmos 00:04: registered as rtc0
May 9 00:33:13.948482 kernel: rtc_cmos 00:04: setting system clock to 2025-05-09T00:33:13 UTC (1746750793)
May 9 00:33:13.948617 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
May 9 00:33:13.948629 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
May 9 00:33:13.948637 kernel: efifb: probing for efifb
May 9 00:33:13.948650 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k
May 9 00:33:13.948657 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1
May 9 00:33:13.948665 kernel: efifb: scrolling: redraw
May 9 00:33:13.948673 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0
May 9 00:33:13.948680 kernel: Console: switching to colour frame buffer device 100x37
May 9 00:33:13.948688 kernel: fb0: EFI VGA frame buffer device
May 9 00:33:13.948715 kernel: pstore: Using crash dump compression: deflate
May 9 00:33:13.948725 kernel: pstore: Registered efi_pstore as persistent store backend
May 9 00:33:13.948733 kernel: NET: Registered PF_INET6 protocol family
May 9 00:33:13.948743 kernel: Segment Routing with IPv6
May 9 00:33:13.948754 kernel: In-situ OAM (IOAM) with IPv6
May 9 00:33:13.948761 kernel: NET: Registered PF_PACKET protocol family
May 9 00:33:13.948769 kernel: Key type dns_resolver registered
May 9 00:33:13.948777 kernel: IPI shorthand broadcast: enabled
May 9 00:33:13.948785 kernel: sched_clock: Marking stable (1175003657, 130357607)->(1437873099, -132511835)
May 9 00:33:13.948792 kernel: registered taskstats version 1
May 9 00:33:13.948800 kernel: Loading compiled-in X.509 certificates
May 9 00:33:13.948808 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.89-flatcar: fe5c896a3ca06bb89ebdfb7ed85f611806e4c1cc'
May 9 00:33:13.948833 kernel: Key type .fscrypt registered
May 9 00:33:13.948842 kernel: Key type fscrypt-provisioning registered
May 9 00:33:13.948850 kernel: ima: No TPM chip found, activating TPM-bypass!
May 9 00:33:13.948858 kernel: ima: Allocated hash algorithm: sha1
May 9 00:33:13.948866 kernel: ima: No architecture policies found
May 9 00:33:13.948874 kernel: clk: Disabling unused clocks
May 9 00:33:13.948882 kernel: Freeing unused kernel image (initmem) memory: 42864K
May 9 00:33:13.948889 kernel: Write protecting the kernel read-only data: 36864k
May 9 00:33:13.948898 kernel: Freeing unused kernel image (rodata/data gap) memory: 1836K
May 9 00:33:13.948909 kernel: Run /init as init process
May 9 00:33:13.948916 kernel: with arguments:
May 9 00:33:13.948924 kernel: /init
May 9 00:33:13.948932 kernel: with environment:
May 9 00:33:13.948939 kernel: HOME=/
May 9 00:33:13.948947 kernel: TERM=linux
May 9 00:33:13.948955 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 9 00:33:13.948969 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
May 9 00:33:13.948982 systemd[1]: Detected virtualization kvm.
May 9 00:33:13.948990 systemd[1]: Detected architecture x86-64.
May 9 00:33:13.948998 systemd[1]: Running in initrd.
May 9 00:33:13.949007 systemd[1]: No hostname configured, using default hostname.
May 9 00:33:13.949017 systemd[1]: Hostname set to <localhost>.
May 9 00:33:13.949029 systemd[1]: Initializing machine ID from VM UUID.
May 9 00:33:13.949037 systemd[1]: Queued start job for default target initrd.target.
May 9 00:33:13.949045 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 9 00:33:13.949057 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 9 00:33:13.949066 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 9 00:33:13.949075 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 9 00:33:13.949084 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 9 00:33:13.949095 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 9 00:33:13.949105 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 9 00:33:13.949114 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 9 00:33:13.949122 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 9 00:33:13.949130 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 9 00:33:13.949139 systemd[1]: Reached target paths.target - Path Units.
May 9 00:33:13.949147 systemd[1]: Reached target slices.target - Slice Units.
May 9 00:33:13.949158 systemd[1]: Reached target swap.target - Swaps.
May 9 00:33:13.949166 systemd[1]: Reached target timers.target - Timer Units.
May 9 00:33:13.949175 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 9 00:33:13.949184 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 9 00:33:13.949193 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 9 00:33:13.949202 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
May 9 00:33:13.949210 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 9 00:33:13.949219 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 9 00:33:13.949228 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 9 00:33:13.949239 systemd[1]: Reached target sockets.target - Socket Units.
May 9 00:33:13.949248 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 9 00:33:13.949257 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 9 00:33:13.949266 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 9 00:33:13.949274 systemd[1]: Starting systemd-fsck-usr.service...
May 9 00:33:13.949283 systemd[1]: Starting systemd-journald.service - Journal Service...
May 9 00:33:13.949292 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 9 00:33:13.949301 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 9 00:33:13.949312 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 9 00:33:13.949321 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 9 00:33:13.949329 systemd[1]: Finished systemd-fsck-usr.service.
May 9 00:33:13.949339 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 9 00:33:13.949348 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 9 00:33:13.949379 systemd-journald[193]: Collecting audit messages is disabled.
May 9 00:33:13.949399 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 9 00:33:13.949408 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 9 00:33:13.949416 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 9 00:33:13.949428 systemd-journald[193]: Journal started
May 9 00:33:13.949456 systemd-journald[193]: Runtime Journal (/run/log/journal/860b10dc43e84a7fa2b25b08a33728f3) is 6.0M, max 48.3M, 42.2M free.
May 9 00:33:13.926344 systemd-modules-load[194]: Inserted module 'overlay'
May 9 00:33:13.952217 systemd[1]: Started systemd-journald.service - Journal Service.
May 9 00:33:13.956942 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 9 00:33:13.957162 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 9 00:33:13.960249 systemd-modules-load[194]: Inserted module 'br_netfilter'
May 9 00:33:13.960797 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 9 00:33:13.961316 kernel: Bridge firewalling registered
May 9 00:33:13.962707 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 9 00:33:13.965780 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 9 00:33:13.970995 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 9 00:33:13.972151 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 9 00:33:13.984842 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 9 00:33:13.987693 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 9 00:33:13.989874 dracut-cmdline[220]: dracut-dracut-053
May 9 00:33:13.991959 dracut-cmdline[220]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=56b660b06ded103a15fe25ebfbdecb898a20f374e429fec465c69b1a75d59c4b
May 9 00:33:14.002032 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 9 00:33:14.032675 systemd-resolved[242]: Positive Trust Anchors:
May 9 00:33:14.032700 systemd-resolved[242]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 9 00:33:14.032732 systemd-resolved[242]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 9 00:33:14.035694 systemd-resolved[242]: Defaulting to hostname 'linux'.
May 9 00:33:14.042415 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 9 00:33:14.045672 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 9 00:33:14.086856 kernel: SCSI subsystem initialized
May 9 00:33:14.095844 kernel: Loading iSCSI transport class v2.0-870.
May 9 00:33:14.105844 kernel: iscsi: registered transport (tcp)
May 9 00:33:14.127031 kernel: iscsi: registered transport (qla4xxx)
May 9 00:33:14.127063 kernel: QLogic iSCSI HBA Driver
May 9 00:33:14.174679 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 9 00:33:14.184958 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 9 00:33:14.210851 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 9 00:33:14.210897 kernel: device-mapper: uevent: version 1.0.3
May 9 00:33:14.210909 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
May 9 00:33:14.252870 kernel: raid6: avx2x4 gen() 29668 MB/s
May 9 00:33:14.269850 kernel: raid6: avx2x2 gen() 29945 MB/s
May 9 00:33:14.286967 kernel: raid6: avx2x1 gen() 25125 MB/s
May 9 00:33:14.286989 kernel: raid6: using algorithm avx2x2 gen() 29945 MB/s
May 9 00:33:14.305033 kernel: raid6: .... xor() 19034 MB/s, rmw enabled
May 9 00:33:14.305069 kernel: raid6: using avx2x2 recovery algorithm
May 9 00:33:14.325847 kernel: xor: automatically using best checksumming function avx
May 9 00:33:14.550866 kernel: Btrfs loaded, zoned=no, fsverity=no
May 9 00:33:14.563396 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 9 00:33:14.580982 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 9 00:33:14.596133 systemd-udevd[412]: Using default interface naming scheme 'v255'.
May 9 00:33:14.601429 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 9 00:33:14.615007 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 9 00:33:14.630068 dracut-pre-trigger[421]: rd.md=0: removing MD RAID activation
May 9 00:33:14.664617 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 9 00:33:14.677071 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 9 00:33:14.749449 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 9 00:33:14.760059 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 9 00:33:14.774433 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 9 00:33:14.777813 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 9 00:33:14.780258 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 9 00:33:14.782488 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 9 00:33:14.792041 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 9 00:33:14.794913 kernel: cryptd: max_cpu_qlen set to 1000
May 9 00:33:14.796844 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
May 9 00:33:14.799861 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
May 9 00:33:14.808850 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 9 00:33:14.812406 kernel: AVX2 version of gcm_enc/dec engaged.
May 9 00:33:14.812447 kernel: AES CTR mode by8 optimization enabled
May 9 00:33:14.819251 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 9 00:33:14.819315 kernel: GPT:9289727 != 19775487
May 9 00:33:14.819334 kernel: GPT:Alternate GPT header not at the end of the disk.
May 9 00:33:14.820180 kernel: GPT:9289727 != 19775487
May 9 00:33:14.821271 kernel: GPT: Use GNU Parted to correct GPT errors.
May 9 00:33:14.821311 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 9 00:33:14.832848 kernel: libata version 3.00 loaded.
May 9 00:33:14.834699 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 9 00:33:14.836301 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 9 00:33:14.838046 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 9 00:33:14.840929 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 9 00:33:14.847112 kernel: ahci 0000:00:1f.2: version 3.0
May 9 00:33:14.847346 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
May 9 00:33:14.842952 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 9 00:33:14.852109 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
May 9 00:33:14.852321 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
May 9 00:33:14.845531 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 9 00:33:14.855883 kernel: scsi host0: ahci
May 9 00:33:14.862858 kernel: scsi host1: ahci
May 9 00:33:14.860201 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 9 00:33:14.867904 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (462)
May 9 00:33:14.871843 kernel: scsi host2: ahci
May 9 00:33:14.872089 kernel: BTRFS: device fsid 8d57db23-a0fc-4362-9769-38fbda5747c1 devid 1 transid 40 /dev/vda3 scanned by (udev-worker) (461)
May 9 00:33:14.874855 kernel: scsi host3: ahci
May 9 00:33:14.880856 kernel: scsi host4: ahci
May 9 00:33:14.883472 kernel: scsi host5: ahci
May 9 00:33:14.883687 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34
May 9 00:33:14.883714 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34
May 9 00:33:14.884310 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34
May 9 00:33:14.884287 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 9 00:33:14.890428 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34
May 9 00:33:14.890448 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34
May 9 00:33:14.890462 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34
May 9 00:33:14.895628 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
May 9 00:33:14.904020 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
May 9 00:33:14.915153 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 9 00:33:14.921648 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
May 9 00:33:14.925323 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
May 9 00:33:14.945962 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 9 00:33:14.949096 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 9 00:33:14.970179 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 9 00:33:15.031289 disk-uuid[567]: Primary Header is updated.
May 9 00:33:15.031289 disk-uuid[567]: Secondary Entries is updated.
May 9 00:33:15.031289 disk-uuid[567]: Secondary Header is updated.
May 9 00:33:15.035053 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 9 00:33:15.039840 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 9 00:33:15.197856 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
May 9 00:33:15.197923 kernel: ata2: SATA link down (SStatus 0 SControl 300)
May 9 00:33:15.197937 kernel: ata4: SATA link down (SStatus 0 SControl 300)
May 9 00:33:15.197951 kernel: ata1: SATA link down (SStatus 0 SControl 300)
May 9 00:33:15.197965 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
May 9 00:33:15.198854 kernel: ata3.00: applying bridge limits
May 9 00:33:15.199848 kernel: ata6: SATA link down (SStatus 0 SControl 300)
May 9 00:33:15.200840 kernel: ata5: SATA link down (SStatus 0 SControl 300)
May 9 00:33:15.200864 kernel: ata3.00: configured for UDMA/100
May 9 00:33:15.201855 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
May 9 00:33:15.245872 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
May 9 00:33:15.246132 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
May 9 00:33:15.259850 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
May 9 00:33:16.040860 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 9 00:33:16.041550 disk-uuid[577]: The operation has completed successfully.
May 9 00:33:16.071767 systemd[1]: disk-uuid.service: Deactivated successfully.
May 9 00:33:16.071904 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 9 00:33:16.093021 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 9 00:33:16.097251 sh[593]: Success
May 9 00:33:16.110849 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
May 9 00:33:16.146224 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 9 00:33:16.159575 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 9 00:33:16.163197 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 9 00:33:16.176530 kernel: BTRFS info (device dm-0): first mount of filesystem 8d57db23-a0fc-4362-9769-38fbda5747c1
May 9 00:33:16.176569 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
May 9 00:33:16.176583 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
May 9 00:33:16.177575 kernel: BTRFS info (device dm-0): disabling log replay at mount time
May 9 00:33:16.178327 kernel: BTRFS info (device dm-0): using free space tree
May 9 00:33:16.183468 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 9 00:33:16.185970 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 9 00:33:16.195991 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 9 00:33:16.198737 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 9 00:33:16.208062 kernel: BTRFS info (device vda6): first mount of filesystem f16ac009-18be-48d6-89c7-f7afe3ecb605
May 9 00:33:16.208114 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 9 00:33:16.208142 kernel: BTRFS info (device vda6): using free space tree
May 9 00:33:16.211857 kernel: BTRFS info (device vda6): auto enabling async discard
May 9 00:33:16.221173 systemd[1]: mnt-oem.mount: Deactivated successfully.
May 9 00:33:16.223200 kernel: BTRFS info (device vda6): last unmount of filesystem f16ac009-18be-48d6-89c7-f7afe3ecb605
May 9 00:33:16.239432 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 9 00:33:16.246987 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 9 00:33:16.312261 ignition[691]: Ignition 2.19.0
May 9 00:33:16.312274 ignition[691]: Stage: fetch-offline
May 9 00:33:16.312329 ignition[691]: no configs at "/usr/lib/ignition/base.d"
May 9 00:33:16.312342 ignition[691]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 9 00:33:16.312471 ignition[691]: parsed url from cmdline: ""
May 9 00:33:16.312476 ignition[691]: no config URL provided
May 9 00:33:16.312484 ignition[691]: reading system config file "/usr/lib/ignition/user.ign"
May 9 00:33:16.312497 ignition[691]: no config at "/usr/lib/ignition/user.ign"
May 9 00:33:16.312530 ignition[691]: op(1): [started] loading QEMU firmware config module
May 9 00:33:16.312537 ignition[691]: op(1): executing: "modprobe" "qemu_fw_cfg"
May 9 00:33:16.321104 ignition[691]: op(1): [finished] loading QEMU firmware config module
May 9 00:33:16.336317 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 9 00:33:16.344996 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 9 00:33:16.367764 ignition[691]: parsing config with SHA512: a1c855d9b8c601b99a1fd2f793acaa1cbd3a8385f2b97861a017bb02070d9da229d67c2e2adfcbecc96c66fefe7aec4024822a1ea1cd36520c4f65b13308f993
May 9 00:33:16.368687 systemd-networkd[782]: lo: Link UP
May 9 00:33:16.368696 systemd-networkd[782]: lo: Gained carrier
May 9 00:33:16.370801 systemd-networkd[782]: Enumeration completed
May 9 00:33:16.370890 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 9 00:33:16.371400 systemd[1]: Reached target network.target - Network.
May 9 00:33:16.373564 ignition[691]: fetch-offline: fetch-offline passed
May 9 00:33:16.372100 systemd-networkd[782]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 9 00:33:16.373671 ignition[691]: Ignition finished successfully
May 9 00:33:16.372105 systemd-networkd[782]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 9 00:33:16.372859 unknown[691]: fetched base config from "system"
May 9 00:33:16.372873 unknown[691]: fetched user config from "qemu"
May 9 00:33:16.373283 systemd-networkd[782]: eth0: Link UP
May 9 00:33:16.373288 systemd-networkd[782]: eth0: Gained carrier
May 9 00:33:16.373296 systemd-networkd[782]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 9 00:33:16.376022 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 9 00:33:16.377010 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
May 9 00:33:16.386159 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 9 00:33:16.388891 systemd-networkd[782]: eth0: DHCPv4 address 10.0.0.84/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 9 00:33:16.405654 ignition[785]: Ignition 2.19.0
May 9 00:33:16.405668 ignition[785]: Stage: kargs
May 9 00:33:16.405905 ignition[785]: no configs at "/usr/lib/ignition/base.d"
May 9 00:33:16.405921 ignition[785]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 9 00:33:16.406983 ignition[785]: kargs: kargs passed
May 9 00:33:16.407040 ignition[785]: Ignition finished successfully
May 9 00:33:16.413453 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 9 00:33:16.421002 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 9 00:33:16.450577 ignition[794]: Ignition 2.19.0
May 9 00:33:16.450591 ignition[794]: Stage: disks
May 9 00:33:16.450777 ignition[794]: no configs at "/usr/lib/ignition/base.d"
May 9 00:33:16.450789 ignition[794]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 9 00:33:16.451782 ignition[794]: disks: disks passed
May 9 00:33:16.451867 ignition[794]: Ignition finished successfully
May 9 00:33:16.457742 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 9 00:33:16.460092 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 9 00:33:16.460586 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 9 00:33:16.461155 systemd[1]: Reached target local-fs.target - Local File Systems.
May 9 00:33:16.461544 systemd[1]: Reached target sysinit.target - System Initialization.
May 9 00:33:16.462102 systemd[1]: Reached target basic.target - Basic System.
May 9 00:33:16.480990 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 9 00:33:16.492696 systemd-fsck[805]: ROOT: clean, 14/553520 files, 52654/553472 blocks
May 9 00:33:16.687253 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 9 00:33:16.700977 systemd[1]: Mounting sysroot.mount - /sysroot...
May 9 00:33:16.821855 kernel: EXT4-fs (vda9): mounted filesystem 4cb03022-f5a4-4664-b5b4-bc39fcc2f946 r/w with ordered data mode. Quota mode: none.
May 9 00:33:16.822191 systemd[1]: Mounted sysroot.mount - /sysroot.
May 9 00:33:16.823424 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 9 00:33:16.836926 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 9 00:33:16.838938 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 9 00:33:16.840394 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
May 9 00:33:16.846107 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (813)
May 9 00:33:16.846127 kernel: BTRFS info (device vda6): first mount of filesystem f16ac009-18be-48d6-89c7-f7afe3ecb605
May 9 00:33:16.840450 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 9 00:33:16.853145 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 9 00:33:16.853170 kernel: BTRFS info (device vda6): using free space tree
May 9 00:33:16.853185 kernel: BTRFS info (device vda6): auto enabling async discard
May 9 00:33:16.840480 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 9 00:33:16.848458 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 9 00:33:16.854600 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 9 00:33:16.870993 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 9 00:33:16.905257 initrd-setup-root[839]: cut: /sysroot/etc/passwd: No such file or directory
May 9 00:33:16.909139 initrd-setup-root[846]: cut: /sysroot/etc/group: No such file or directory
May 9 00:33:16.913130 initrd-setup-root[853]: cut: /sysroot/etc/shadow: No such file or directory
May 9 00:33:16.917001 initrd-setup-root[860]: cut: /sysroot/etc/gshadow: No such file or directory
May 9 00:33:17.021464 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 9 00:33:17.030980 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 9 00:33:17.034280 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 9 00:33:17.039850 kernel: BTRFS info (device vda6): last unmount of filesystem f16ac009-18be-48d6-89c7-f7afe3ecb605
May 9 00:33:17.062034 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 9 00:33:17.089743 ignition[931]: INFO : Ignition 2.19.0
May 9 00:33:17.089743 ignition[931]: INFO : Stage: mount
May 9 00:33:17.091564 ignition[931]: INFO : no configs at "/usr/lib/ignition/base.d"
May 9 00:33:17.091564 ignition[931]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 9 00:33:17.091564 ignition[931]: INFO : mount: mount passed
May 9 00:33:17.091564 ignition[931]: INFO : Ignition finished successfully
May 9 00:33:17.093357 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 9 00:33:17.110929 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 9 00:33:17.176505 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 9 00:33:17.185179 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 9 00:33:17.193854 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (941)
May 9 00:33:17.196071 kernel: BTRFS info (device vda6): first mount of filesystem f16ac009-18be-48d6-89c7-f7afe3ecb605
May 9 00:33:17.196094 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 9 00:33:17.196106 kernel: BTRFS info (device vda6): using free space tree
May 9 00:33:17.200871 kernel: BTRFS info (device vda6): auto enabling async discard
May 9 00:33:17.202596 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 9 00:33:17.237294 ignition[958]: INFO : Ignition 2.19.0
May 9 00:33:17.237294 ignition[958]: INFO : Stage: files
May 9 00:33:17.239281 ignition[958]: INFO : no configs at "/usr/lib/ignition/base.d"
May 9 00:33:17.239281 ignition[958]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 9 00:33:17.242540 ignition[958]: DEBUG : files: compiled without relabeling support, skipping
May 9 00:33:17.243906 ignition[958]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 9 00:33:17.243906 ignition[958]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 9 00:33:17.249633 ignition[958]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 9 00:33:17.251200 ignition[958]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 9 00:33:17.253171 unknown[958]: wrote ssh authorized keys file for user: core
May 9 00:33:17.254430 ignition[958]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 9 00:33:17.257247 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
May 9 00:33:17.259273 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
May 9 00:33:17.322746 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 9 00:33:17.565004 systemd-networkd[782]: eth0: Gained IPv6LL
May 9 00:33:17.748954 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
May 9 00:33:17.748954 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 9 00:33:17.748954 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
May 9 00:33:17.912128 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
May 9 00:33:18.135419 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 9 00:33:18.135419 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
May 9 00:33:18.139360 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
May 9 00:33:18.139360 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
May 9 00:33:18.142945 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 9 00:33:18.144723 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 9 00:33:18.146654 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 9 00:33:18.146654 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 9 00:33:18.150112 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 9 00:33:18.152220 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 9 00:33:18.154126 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 9 00:33:18.155972 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
May 9 00:33:18.158606 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
May 9 00:33:18.161103 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
May 9 00:33:18.163243 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw: attempt #1
May 9 00:33:18.566809 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
May 9 00:33:18.924662 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
May 9 00:33:18.924662 ignition[958]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
May 9 00:33:18.929024 ignition[958]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 9 00:33:18.931742 ignition[958]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 9 00:33:18.931742 ignition[958]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
May 9 00:33:18.931742 ignition[958]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
May 9 00:33:18.936733 ignition[958]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 9 00:33:18.939096 ignition[958]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 9 00:33:18.939096 ignition[958]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
May 9 00:33:18.942616 ignition[958]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
May 9 00:33:18.967249 ignition[958]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
May 9 00:33:18.974154 ignition[958]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
May 9 00:33:18.975877 ignition[958]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
May 9 00:33:18.975877 ignition[958]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
May 9 00:33:18.975877 ignition[958]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
May 9 00:33:18.975877 ignition[958]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
May 9 00:33:18.975877 ignition[958]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 9 00:33:18.975877 ignition[958]: INFO : files: files passed
May 9 00:33:18.975877 ignition[958]: INFO : Ignition finished successfully
May 9 00:33:18.987395 systemd[1]: Finished ignition-files.service - Ignition (files).
May 9 00:33:18.996024 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 9 00:33:18.999043 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 9 00:33:19.001964 systemd[1]: ignition-quench.service: Deactivated successfully.
May 9 00:33:19.003040 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 9 00:33:19.011869 initrd-setup-root-after-ignition[986]: grep: /sysroot/oem/oem-release: No such file or directory
May 9 00:33:19.015895 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 9 00:33:19.017845 initrd-setup-root-after-ignition[988]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 9 00:33:19.019427 initrd-setup-root-after-ignition[992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 9 00:33:19.021973 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 9 00:33:19.023650 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 9 00:33:19.037081 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 9 00:33:19.072070 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 9 00:33:19.072222 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 9 00:33:19.073011 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 9 00:33:19.073465 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 9 00:33:19.074209 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 9 00:33:19.075198 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 9 00:33:19.099917 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 9 00:33:19.102390 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 9 00:33:19.118569 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 9 00:33:19.119337 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 9 00:33:19.119781 systemd[1]: Stopped target timers.target - Timer Units.
May 9 00:33:19.120419 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 9 00:33:19.120610 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 9 00:33:19.127129 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 9 00:33:19.127561 systemd[1]: Stopped target basic.target - Basic System.
May 9 00:33:19.128253 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 9 00:33:19.134698 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 9 00:33:19.135328 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 9 00:33:19.135775 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 9 00:33:19.136438 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
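
Each Ignition operation above is bracketed by paired [started]/[finished] markers carrying an op(...) identifier (op(3) through op(13) in this run). A small sketch for pairing those markers when auditing a log like this one; the regex is derived from the lines shown here and is an assumption about the format, not part of Ignition itself.

    import re
    import sys

    # Matches e.g. 'op(b): [started] writing file "..."'; for nested ops such as
    # "op(c): op(d): [started]" the innermost id right before the marker wins.
    OP = re.compile(r"(op\([0-9a-f]+\)): \[(started|finished)\] (.*)")

    def unfinished_ops(lines):
        open_ops = {}
        for line in lines:
            m = OP.search(line)
            if not m:
                continue
            op_id, state, detail = m.groups()
            if state == "started":
                open_ops[op_id] = detail
            else:
                open_ops.pop(op_id, None)
        return open_ops  # ops that started but never logged [finished]

    if __name__ == "__main__":
        for op_id, detail in unfinished_ops(sys.stdin).items():
            print(f"{op_id} never finished: {detail}")
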
May 9 00:33:19.136936 systemd[1]: Stopped target sysinit.target - System Initialization.
May 9 00:33:19.137530 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 9 00:33:19.138173 systemd[1]: Stopped target swap.target - Swaps.
May 9 00:33:19.138598 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 9 00:33:19.138784 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 9 00:33:19.157444 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 9 00:33:19.157878 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 9 00:33:19.158225 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 9 00:33:19.158388 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 9 00:33:19.164177 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 9 00:33:19.164390 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 9 00:33:19.170127 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 9 00:33:19.170300 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 9 00:33:19.172521 systemd[1]: Stopped target paths.target - Path Units.
May 9 00:33:19.175019 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 9 00:33:19.181022 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 9 00:33:19.184592 systemd[1]: Stopped target slices.target - Slice Units.
May 9 00:33:19.186773 systemd[1]: Stopped target sockets.target - Socket Units.
May 9 00:33:19.189278 systemd[1]: iscsid.socket: Deactivated successfully.
May 9 00:33:19.190500 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 9 00:33:19.193473 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 9 00:33:19.194804 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 9 00:33:19.197586 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 9 00:33:19.199168 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 9 00:33:19.202528 systemd[1]: ignition-files.service: Deactivated successfully.
May 9 00:33:19.203869 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 9 00:33:19.223120 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 9 00:33:19.226765 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 9 00:33:19.229376 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 9 00:33:19.230906 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 9 00:33:19.233976 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 9 00:33:19.235113 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 9 00:33:19.241005 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 9 00:33:19.241166 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 9 00:33:19.257559 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 9 00:33:19.273073 ignition[1012]: INFO : Ignition 2.19.0
May 9 00:33:19.273073 ignition[1012]: INFO : Stage: umount
May 9 00:33:19.275331 ignition[1012]: INFO : no configs at "/usr/lib/ignition/base.d"
May 9 00:33:19.275331 ignition[1012]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 9 00:33:19.275331 ignition[1012]: INFO : umount: umount passed
May 9 00:33:19.275331 ignition[1012]: INFO : Ignition finished successfully
May 9 00:33:19.276293 systemd[1]: ignition-mount.service: Deactivated successfully.
May 9 00:33:19.276496 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 9 00:33:19.278471 systemd[1]: Stopped target network.target - Network.
May 9 00:33:19.280264 systemd[1]: ignition-disks.service: Deactivated successfully.
May 9 00:33:19.280350 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 9 00:33:19.282455 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 9 00:33:19.282510 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 9 00:33:19.284610 systemd[1]: ignition-setup.service: Deactivated successfully.
May 9 00:33:19.284661 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 9 00:33:19.285147 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 9 00:33:19.285200 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 9 00:33:19.285714 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 9 00:33:19.286132 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 9 00:33:19.294888 systemd-networkd[782]: eth0: DHCPv6 lease lost
May 9 00:33:19.297288 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 9 00:33:19.297464 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 9 00:33:19.300486 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 9 00:33:19.300574 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 9 00:33:19.311115 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 9 00:33:19.313578 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 9 00:33:19.313668 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 9 00:33:19.316126 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 9 00:33:19.319600 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 9 00:33:19.319774 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 9 00:33:19.325808 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 9 00:33:19.326023 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 9 00:33:19.327589 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 9 00:33:19.327655 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 9 00:33:19.329838 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 9 00:33:19.329909 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 9 00:33:19.334511 systemd[1]: network-cleanup.service: Deactivated successfully.
May 9 00:33:19.334686 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 9 00:33:19.337014 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 9 00:33:19.337240 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 9 00:33:19.340020 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 9 00:33:19.340135 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 9 00:33:19.342680 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 9 00:33:19.342737 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 9 00:33:19.344709 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 9 00:33:19.344767 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 9 00:33:19.347217 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 9 00:33:19.347282 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 9 00:33:19.349197 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 9 00:33:19.349262 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 9 00:33:19.359077 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 9 00:33:19.360909 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 9 00:33:19.360987 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 9 00:33:19.363273 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
May 9 00:33:19.363352 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 9 00:33:19.365590 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 9 00:33:19.365656 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 9 00:33:19.368315 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 9 00:33:19.368382 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 9 00:33:19.371160 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 9 00:33:19.371297 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 9 00:33:19.635564 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 9 00:33:19.635732 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 9 00:33:19.638362 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 9 00:33:19.639633 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 9 00:33:19.639704 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 9 00:33:19.652999 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 9 00:33:19.660160 systemd[1]: Switching root.
May 9 00:33:19.695057 systemd-journald[193]: Journal stopped
May 9 00:33:21.304988 systemd-journald[193]: Received SIGTERM from PID 1 (systemd).
May 9 00:33:21.305118 kernel: SELinux: policy capability network_peer_controls=1
May 9 00:33:21.305146 kernel: SELinux: policy capability open_perms=1
May 9 00:33:21.305158 kernel: SELinux: policy capability extended_socket_class=1
May 9 00:33:21.305170 kernel: SELinux: policy capability always_check_network=0
May 9 00:33:21.305182 kernel: SELinux: policy capability cgroup_seclabel=1
May 9 00:33:21.305193 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 9 00:33:21.305204 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 9 00:33:21.305215 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 9 00:33:21.305227 kernel: audit: type=1403 audit(1746750800.231:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 9 00:33:21.305250 systemd[1]: Successfully loaded SELinux policy in 47.283ms.
May 9 00:33:21.305287 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 20.159ms.
May 9 00:33:21.305300 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
May 9 00:33:21.305336 systemd[1]: Detected virtualization kvm.
May 9 00:33:21.305348 systemd[1]: Detected architecture x86-64.
May 9 00:33:21.305360 systemd[1]: Detected first boot.
May 9 00:33:21.305372 systemd[1]: Initializing machine ID from VM UUID.
May 9 00:33:21.305384 zram_generator::config[1056]: No configuration found.
May 9 00:33:21.305397 systemd[1]: Populated /etc with preset unit settings.
May 9 00:33:21.305415 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 9 00:33:21.305427 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 9 00:33:21.305440 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 9 00:33:21.305453 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 9 00:33:21.305465 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 9 00:33:21.305477 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 9 00:33:21.305490 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 9 00:33:21.305502 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 9 00:33:21.305519 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 9 00:33:21.305532 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 9 00:33:21.305543 systemd[1]: Created slice user.slice - User and Session Slice.
May 9 00:33:21.305556 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 9 00:33:21.305568 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 9 00:33:21.305586 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 9 00:33:21.305598 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 9 00:33:21.305616 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 9 00:33:21.305629 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 9 00:33:21.305645 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
May 9 00:33:21.305657 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 9 00:33:21.305669 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 9 00:33:21.305681 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 9 00:33:21.305693 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 9 00:33:21.305705 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 9 00:33:21.305717 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 9 00:33:21.305734 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 9 00:33:21.305751 systemd[1]: Reached target slices.target - Slice Units.
May 9 00:33:21.305763 systemd[1]: Reached target swap.target - Swaps.
May 9 00:33:21.305780 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 9 00:33:21.305793 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 9 00:33:21.305805 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 9 00:33:21.305816 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 9 00:33:21.305844 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 9 00:33:21.305856 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 9 00:33:21.305867 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 9 00:33:21.305885 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 9 00:33:21.305898 systemd[1]: Mounting media.mount - External Media Directory...
May 9 00:33:21.305916 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 9 00:33:21.305928 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 9 00:33:21.305940 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 9 00:33:21.305952 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 9 00:33:21.305965 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 9 00:33:21.305977 systemd[1]: Reached target machines.target - Containers.
May 9 00:33:21.305994 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 9 00:33:21.306013 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 9 00:33:21.306025 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 9 00:33:21.306037 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 9 00:33:21.306049 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 9 00:33:21.306061 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 9 00:33:21.306073 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 9 00:33:21.306085 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 9 00:33:21.306097 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 9 00:33:21.306115 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 9 00:33:21.306127 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 9 00:33:21.306139 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 9 00:33:21.306151 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 9 00:33:21.306162 systemd[1]: Stopped systemd-fsck-usr.service.
May 9 00:33:21.306174 systemd[1]: Starting systemd-journald.service - Journal Service...
May 9 00:33:21.306192 kernel: fuse: init (API version 7.39)
May 9 00:33:21.306204 kernel: loop: module loaded
May 9 00:33:21.306268 systemd-journald[1119]: Collecting audit messages is disabled.
May 9 00:33:21.306298 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 9 00:33:21.306311 systemd-journald[1119]: Journal started
May 9 00:33:21.306333 systemd-journald[1119]: Runtime Journal (/run/log/journal/860b10dc43e84a7fa2b25b08a33728f3) is 6.0M, max 48.3M, 42.2M free.
May 9 00:33:21.017985 systemd[1]: Queued start job for default target multi-user.target.
May 9 00:33:21.041250 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
May 9 00:33:21.041787 systemd[1]: systemd-journald.service: Deactivated successfully.
May 9 00:33:21.318371 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 9 00:33:21.321774 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 9 00:33:21.328122 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 9 00:33:21.328191 kernel: ACPI: bus type drm_connector registered
May 9 00:33:21.334597 systemd[1]: verity-setup.service: Deactivated successfully.
May 9 00:33:21.334671 systemd[1]: Stopped verity-setup.service.
May 9 00:33:21.334698 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 9 00:33:21.340404 systemd[1]: Started systemd-journald.service - Journal Service.
May 9 00:33:21.341186 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 9 00:33:21.342367 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 9 00:33:21.344108 systemd[1]: Mounted media.mount - External Media Directory.
May 9 00:33:21.345475 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 9 00:33:21.346800 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 9 00:33:21.348144 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 9 00:33:21.349584 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 9 00:33:21.351452 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 9 00:33:21.351687 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 9 00:33:21.353445 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 9 00:33:21.353663 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 9 00:33:21.355475 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 9 00:33:21.355696 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 9 00:33:21.357237 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 9 00:33:21.357470 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 9 00:33:21.359199 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 9 00:33:21.359436 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 9 00:33:21.360990 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 9 00:33:21.361201 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 9 00:33:21.363206 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 9 00:33:21.365308 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 9 00:33:21.367364 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 9 00:33:21.387552 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 9 00:33:21.400924 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 9 00:33:21.404726 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 9 00:33:21.407799 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 9 00:33:21.407849 systemd[1]: Reached target local-fs.target - Local File Systems.
May 9 00:33:21.410694 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
May 9 00:33:21.413702 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 9 00:33:21.416634 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 9 00:33:21.417963 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 9 00:33:21.457128 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 9 00:33:21.460424 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 9 00:33:21.462790 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 9 00:33:21.465997 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 9 00:33:21.478176 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 9 00:33:21.485814 systemd-journald[1119]: Time spent on flushing to /var/log/journal/860b10dc43e84a7fa2b25b08a33728f3 is 17.109ms for 991 entries.
May 9 00:33:21.485814 systemd-journald[1119]: System Journal (/var/log/journal/860b10dc43e84a7fa2b25b08a33728f3) is 8.0M, max 195.6M, 187.6M free.
May 9 00:33:21.626815 systemd-journald[1119]: Received client request to flush runtime journal.
May 9 00:33:21.626893 kernel: loop0: detected capacity change from 0 to 140768
May 9 00:33:21.486479 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 9 00:33:21.510000 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 9 00:33:21.514082 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 9 00:33:21.531051 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 9 00:33:21.532600 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 9 00:33:21.534082 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 9 00:33:21.535951 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 9 00:33:21.589970 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
May 9 00:33:21.594560 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 9 00:33:21.607727 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 9 00:33:21.610074 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 9 00:33:21.611631 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 9 00:33:21.625121 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
May 9 00:33:21.628661 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 9 00:33:21.631010 systemd-tmpfiles[1170]: ACLs are not supported, ignoring.
May 9 00:33:21.631025 systemd-tmpfiles[1170]: ACLs are not supported, ignoring.
May 9 00:33:21.636921 udevadm[1176]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
May 9 00:33:21.638020 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 9 00:33:21.657074 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 9 00:33:21.668079 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 9 00:33:21.735202 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 9 00:33:21.754059 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 9 00:33:21.786402 systemd-tmpfiles[1193]: ACLs are not supported, ignoring.
May 9 00:33:21.786433 systemd-tmpfiles[1193]: ACLs are not supported, ignoring.
May 9 00:33:21.794637 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 9 00:33:21.796866 kernel: loop1: detected capacity change from 0 to 142488
May 9 00:33:21.932883 kernel: loop2: detected capacity change from 0 to 218376
May 9 00:33:22.035863 kernel: loop3: detected capacity change from 0 to 140768
May 9 00:33:22.050869 kernel: loop4: detected capacity change from 0 to 142488
May 9 00:33:22.085863 kernel: loop5: detected capacity change from 0 to 218376
May 9 00:33:22.085297 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 9 00:33:22.088470 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
May 9 00:33:22.097446 (sd-merge)[1198]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
May 9 00:33:22.098282 (sd-merge)[1198]: Merged extensions into '/usr'.
May 9 00:33:22.201009 systemd[1]: Reloading requested from client PID 1169 ('systemd-sysext') (unit systemd-sysext.service)...
May 9 00:33:22.201028 systemd[1]: Reloading...
May 9 00:33:22.323068 zram_generator::config[1224]: No configuration found.
May 9 00:33:22.415364 ldconfig[1157]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 9 00:33:22.467674 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 9 00:33:22.540961 systemd[1]: Reloading finished in 339 ms.
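
The (sd-merge) entries above record systemd-sysext overlaying the 'containerd-flatcar', 'docker-flatcar', and 'kubernetes' extension images onto /usr. A rough Python sketch for listing candidate images the way this host was set up (the Ignition stage earlier linked /etc/extensions/kubernetes.raw to /opt/extensions/...); the directory choice is an assumption based on those entries, not a complete list of sysext search paths.

    from pathlib import Path

    def list_extension_images(root: str = "/etc/extensions"):
        # Hypothetical helper: each *.raw image (or symlink to one) under the
        # directory is a candidate extension for systemd-sysext to merge.
        base = Path(root)
        if not base.is_dir():
            return []
        return sorted(p.name for p in base.iterdir() if p.suffix == ".raw")

    if __name__ == "__main__":
        for name in list_extension_images():
            print(name)
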
May 9 00:33:22.672210 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 9 00:33:22.674363 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 9 00:33:22.693152 systemd[1]: Starting ensure-sysext.service...
May 9 00:33:22.697244 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 9 00:33:22.705869 systemd[1]: Reloading requested from client PID 1262 ('systemctl') (unit ensure-sysext.service)...
May 9 00:33:22.705892 systemd[1]: Reloading...
May 9 00:33:22.797875 zram_generator::config[1292]: No configuration found.
May 9 00:33:22.945493 systemd-tmpfiles[1263]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 9 00:33:22.945940 systemd-tmpfiles[1263]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 9 00:33:22.947088 systemd-tmpfiles[1263]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 9 00:33:22.947413 systemd-tmpfiles[1263]: ACLs are not supported, ignoring.
May 9 00:33:22.947512 systemd-tmpfiles[1263]: ACLs are not supported, ignoring.
May 9 00:33:22.951081 systemd-tmpfiles[1263]: Detected autofs mount point /boot during canonicalization of boot.
May 9 00:33:22.951095 systemd-tmpfiles[1263]: Skipping /boot
May 9 00:33:22.964241 systemd-tmpfiles[1263]: Detected autofs mount point /boot during canonicalization of boot.
May 9 00:33:22.964259 systemd-tmpfiles[1263]: Skipping /boot
May 9 00:33:23.044547 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 9 00:33:23.095290 systemd[1]: Reloading finished in 388 ms.
May 9 00:33:23.114182 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 9 00:33:23.132689 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
May 9 00:33:23.135385 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 9 00:33:23.137853 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 9 00:33:23.142677 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 9 00:33:23.148149 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 9 00:33:23.156939 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 9 00:33:23.159746 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 9 00:33:23.159949 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 9 00:33:23.162187 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 9 00:33:23.165071 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 9 00:33:23.169143 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 9 00:33:23.171766 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 9 00:33:23.171926 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 9 00:33:23.178400 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 9 00:33:23.178583 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 9 00:33:23.178785 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 9 00:33:23.178929 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 9 00:33:23.206056 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 9 00:33:23.208059 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 9 00:33:23.209916 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 9 00:33:23.210095 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 9 00:33:23.211774 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 9 00:33:23.211983 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 9 00:33:23.214092 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 9 00:33:23.214281 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 9 00:33:23.216084 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 9 00:33:23.224653 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 9 00:33:23.224934 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 9 00:33:23.227997 augenrules[1357]: No rules
May 9 00:33:23.231252 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 9 00:33:23.234756 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 9 00:33:23.237200 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
May 9 00:33:23.243687 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 9 00:33:23.243921 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 9 00:33:23.248068 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 9 00:33:23.254148 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 9 00:33:23.258130 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 9 00:33:23.265097 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 9 00:33:23.274651 systemd-udevd[1358]: Using default interface naming scheme 'v255'.
May 9 00:33:23.280684 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 9 00:33:23.281047 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 9 00:33:23.282165 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 9 00:33:23.284842 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 9 00:33:23.287281 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 9 00:33:23.289394 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 9 00:33:23.289602 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 9 00:33:23.291961 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 9 00:33:23.292163 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 9 00:33:23.294312 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 9 00:33:23.294550 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 9 00:33:23.297096 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 9 00:33:23.297342 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 9 00:33:23.303161 systemd[1]: Finished ensure-sysext.service.
May 9 00:33:23.311302 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 9 00:33:23.324012 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 9 00:33:23.326019 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 9 00:33:23.326134 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 9 00:33:23.338162 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
May 9 00:33:23.340277 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 9 00:33:23.356146 systemd-resolved[1331]: Positive Trust Anchors:
May 9 00:33:23.356189 systemd-resolved[1331]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 9 00:33:23.356258 systemd-resolved[1331]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 9 00:33:23.366983 systemd-resolved[1331]: Defaulting to hostname 'linux'.
May 9 00:33:23.368923 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 9 00:33:23.371012 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 9 00:33:23.372791 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
May 9 00:33:23.441856 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1388)
May 9 00:33:23.466849 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
May 9 00:33:23.487879 kernel: ACPI: button: Power Button [PWRF]
May 9 00:33:23.490166 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 9 00:33:23.492490 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
May 9 00:33:23.492960 systemd-networkd[1385]: lo: Link UP
May 9 00:33:23.493557 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
May 9 00:33:23.511632 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
May 9 00:33:23.511817 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
May 9 00:33:23.512037 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
May 9 00:33:23.492975 systemd-networkd[1385]: lo: Gained carrier
May 9 00:33:23.495566 systemd[1]: Reached target time-set.target - System Time Set.
May 9 00:33:23.501260 systemd-networkd[1385]: Enumeration completed
May 9 00:33:23.501729 systemd-networkd[1385]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 9 00:33:23.501734 systemd-networkd[1385]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 9 00:33:23.510182 systemd-networkd[1385]: eth0: Link UP
May 9 00:33:23.510186 systemd-networkd[1385]: eth0: Gained carrier
May 9 00:33:23.510199 systemd-networkd[1385]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 9 00:33:23.517307 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 9 00:33:23.518908 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 9 00:33:23.520606 systemd[1]: Reached target network.target - Network.
May 9 00:33:23.526269 systemd-networkd[1385]: eth0: DHCPv4 address 10.0.0.84/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 9 00:33:23.528855 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
May 9 00:33:23.529033 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 9 00:33:23.530340 systemd-timesyncd[1394]: Network configuration changed, trying to establish connection.
May 9 00:33:24.226738 systemd-timesyncd[1394]: Contacted time server 10.0.0.1:123 (10.0.0.1).
May 9 00:33:24.226787 systemd-timesyncd[1394]: Initial clock synchronization to Fri 2025-05-09 00:33:24.226631 UTC.
May 9 00:33:24.226944 systemd-resolved[1331]: Clock change detected. Flushing caches.
May 9 00:33:24.278309 kernel: mousedev: PS/2 mouse device common for all mice
May 9 00:33:24.278911 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 9 00:33:24.297060 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 9 00:33:24.428679 kernel: kvm_amd: TSC scaling supported
May 9 00:33:24.428771 kernel: kvm_amd: Nested Virtualization enabled
May 9 00:33:24.428821 kernel: kvm_amd: Nested Paging enabled
May 9 00:33:24.428834 kernel: kvm_amd: LBR virtualization supported
May 9 00:33:24.431315 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
May 9 00:33:24.431366 kernel: kvm_amd: Virtual GIF supported
May 9 00:33:24.455237 kernel: EDAC MC: Ver: 3.0.0
May 9 00:33:24.473813 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 9 00:33:24.498277 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
May 9 00:33:24.516828 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
May 9 00:33:24.527590 lvm[1427]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 9 00:33:24.629491 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
May 9 00:33:24.631259 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 9 00:33:24.632679 systemd[1]: Reached target sysinit.target - System Initialization.
May 9 00:33:24.634189 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
May 9 00:33:24.635844 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
May 9 00:33:24.637741 systemd[1]: Started logrotate.timer - Daily rotation of log files.
May 9 00:33:24.639274 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
May 9 00:33:24.640972 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
May 9 00:33:24.642596 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 9 00:33:24.642643 systemd[1]: Reached target paths.target - Path Units.
May 9 00:33:24.644065 systemd[1]: Reached target timers.target - Timer Units.
May 9 00:33:24.647047 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
May 9 00:33:24.650811 systemd[1]: Starting docker.socket - Docker Socket for the API...
May 9 00:33:24.664436 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
May 9 00:33:24.667700 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
May 9 00:33:24.669708 systemd[1]: Listening on docker.socket - Docker Socket for the API.
May 9 00:33:24.671121 systemd[1]: Reached target sockets.target - Socket Units.
May 9 00:33:24.672316 systemd[1]: Reached target basic.target - Basic System.
May 9 00:33:24.673549 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
May 9 00:33:24.673589 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
May 9 00:33:24.674943 systemd[1]: Starting containerd.service - containerd container runtime...
May 9 00:33:24.677678 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
May 9 00:33:24.680314 lvm[1431]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 9 00:33:24.682327 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
May 9 00:33:24.688427 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
May 9 00:33:24.690257 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
May 9 00:33:24.691850 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
May 9 00:33:24.697153 jq[1434]: false
May 9 00:33:24.697587 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
May 9 00:33:24.704435 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
May 9 00:33:24.708478 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
May 9 00:33:24.727602 extend-filesystems[1435]: Found loop3 May 9 00:33:24.727602 extend-filesystems[1435]: Found loop4 May 9 00:33:24.727602 extend-filesystems[1435]: Found loop5 May 9 00:33:24.727602 extend-filesystems[1435]: Found sr0 May 9 00:33:24.727602 extend-filesystems[1435]: Found vda May 9 00:33:24.727602 extend-filesystems[1435]: Found vda1 May 9 00:33:24.727602 extend-filesystems[1435]: Found vda2 May 9 00:33:24.727602 extend-filesystems[1435]: Found vda3 May 9 00:33:24.727602 extend-filesystems[1435]: Found usr May 9 00:33:24.727602 extend-filesystems[1435]: Found vda4 May 9 00:33:24.727602 extend-filesystems[1435]: Found vda6 May 9 00:33:24.727602 extend-filesystems[1435]: Found vda7 May 9 00:33:24.727602 extend-filesystems[1435]: Found vda9 May 9 00:33:24.727602 extend-filesystems[1435]: Checking size of /dev/vda9 May 9 00:33:24.726911 systemd[1]: Starting systemd-logind.service - User Login Management... May 9 00:33:24.739834 dbus-daemon[1433]: [system] SELinux support is enabled May 9 00:33:24.734341 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 9 00:33:24.736783 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 9 00:33:24.742521 systemd[1]: Starting update-engine.service - Update Engine... May 9 00:33:24.746275 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 9 00:33:24.751070 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 9 00:33:24.761381 extend-filesystems[1435]: Resized partition /dev/vda9 May 9 00:33:24.763956 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. May 9 00:33:24.767654 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 9 00:33:24.767894 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 9 00:33:24.768610 systemd[1]: motdgen.service: Deactivated successfully. May 9 00:33:24.769285 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 9 00:33:24.774040 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 9 00:33:24.779331 jq[1453]: true May 9 00:33:24.774350 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 9 00:33:24.787380 extend-filesystems[1458]: resize2fs 1.47.1 (20-May-2024) May 9 00:33:24.796405 update_engine[1450]: I20250509 00:33:24.791265 1450 main.cc:92] Flatcar Update Engine starting May 9 00:33:24.796405 update_engine[1450]: I20250509 00:33:24.795676 1450 update_check_scheduler.cc:74] Next update check in 7m36s May 9 00:33:24.802463 jq[1459]: true May 9 00:33:24.808902 (ntainerd)[1462]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 9 00:33:24.813231 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1382) May 9 00:33:24.815488 systemd[1]: Started update-engine.service - Update Engine. May 9 00:33:24.820450 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 9 00:33:24.820504 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
May 9 00:33:24.822393 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 9 00:33:24.822446 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 9 00:33:24.831409 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 9 00:33:24.840221 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks May 9 00:33:24.841608 systemd-logind[1443]: Watching system buttons on /dev/input/event1 (Power Button) May 9 00:33:24.842234 systemd-logind[1443]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) May 9 00:33:24.842710 systemd-logind[1443]: New seat seat0. May 9 00:33:24.846672 sshd_keygen[1451]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 9 00:33:24.850315 systemd[1]: Started systemd-logind.service - User Login Management. May 9 00:33:24.856159 tar[1457]: linux-amd64/LICENSE May 9 00:33:24.856572 tar[1457]: linux-amd64/helm May 9 00:33:24.879609 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 9 00:33:24.932963 systemd[1]: Starting issuegen.service - Generate /run/issue... May 9 00:33:24.946621 systemd[1]: issuegen.service: Deactivated successfully. May 9 00:33:24.946890 systemd[1]: Finished issuegen.service - Generate /run/issue. May 9 00:33:24.955513 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 9 00:33:25.036477 locksmithd[1472]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 9 00:33:25.037799 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 9 00:33:25.048626 systemd[1]: Started getty@tty1.service - Getty on tty1. May 9 00:33:25.051210 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. May 9 00:33:25.076995 systemd[1]: Reached target getty.target - Login Prompts. May 9 00:33:25.153239 kernel: EXT4-fs (vda9): resized filesystem to 1864699 May 9 00:33:25.556605 systemd-networkd[1385]: eth0: Gained IPv6LL May 9 00:33:25.586056 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 9 00:33:25.590139 systemd[1]: Reached target network-online.target - Network is Online. May 9 00:33:25.596825 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... May 9 00:33:25.864812 containerd[1462]: time="2025-05-09T00:33:25.864559301Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 May 9 00:33:25.694484 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 9 00:33:25.697215 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 9 00:33:25.717584 systemd[1]: coreos-metadata.service: Deactivated successfully. May 9 00:33:25.717829 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. May 9 00:33:25.749887 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 9 00:33:25.900451 containerd[1462]: time="2025-05-09T00:33:25.900366234Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 9 00:33:25.902502 containerd[1462]: time="2025-05-09T00:33:25.902386874Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.89-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 9 00:33:25.902502 containerd[1462]: time="2025-05-09T00:33:25.902431508Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 9 00:33:25.902502 containerd[1462]: time="2025-05-09T00:33:25.902451726Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 9 00:33:25.902711 containerd[1462]: time="2025-05-09T00:33:25.902678221Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 May 9 00:33:25.902711 containerd[1462]: time="2025-05-09T00:33:25.902707165Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 May 9 00:33:25.902807 containerd[1462]: time="2025-05-09T00:33:25.902784711Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 May 9 00:33:25.902807 containerd[1462]: time="2025-05-09T00:33:25.902801582Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 9 00:33:25.903048 containerd[1462]: time="2025-05-09T00:33:25.903015944Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 9 00:33:25.903048 containerd[1462]: time="2025-05-09T00:33:25.903035210Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 9 00:33:25.903104 containerd[1462]: time="2025-05-09T00:33:25.903049567Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 May 9 00:33:25.903104 containerd[1462]: time="2025-05-09T00:33:25.903059797Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 9 00:33:25.903262 containerd[1462]: time="2025-05-09T00:33:25.903236358Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 9 00:33:25.903315 extend-filesystems[1458]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 9 00:33:25.903315 extend-filesystems[1458]: old_desc_blocks = 1, new_desc_blocks = 1 May 9 00:33:25.903315 extend-filesystems[1458]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. May 9 00:33:25.933259 extend-filesystems[1435]: Resized filesystem in /dev/vda9 May 9 00:33:25.914248 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 9 00:33:25.978148 containerd[1462]: time="2025-05-09T00:33:25.903514830Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 9 00:33:25.978148 containerd[1462]: time="2025-05-09T00:33:25.903676433Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 9 00:33:25.978148 containerd[1462]: time="2025-05-09T00:33:25.903691992Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 9 00:33:25.978148 containerd[1462]: time="2025-05-09T00:33:25.903808381Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 May 9 00:33:25.978148 containerd[1462]: time="2025-05-09T00:33:25.903867912Z" level=info msg="metadata content store policy set" policy=shared May 9 00:33:25.936583 systemd[1]: extend-filesystems.service: Deactivated successfully. May 9 00:33:25.936913 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 9 00:33:26.111495 bash[1491]: Updated "/home/core/.ssh/authorized_keys" May 9 00:33:26.113883 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 9 00:33:26.116505 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. May 9 00:33:26.122080 containerd[1462]: time="2025-05-09T00:33:26.122007951Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 9 00:33:26.122152 containerd[1462]: time="2025-05-09T00:33:26.122123517Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 9 00:33:26.122177 containerd[1462]: time="2025-05-09T00:33:26.122159295Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 May 9 00:33:26.122226 containerd[1462]: time="2025-05-09T00:33:26.122185914Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 May 9 00:33:26.122266 containerd[1462]: time="2025-05-09T00:33:26.122230969Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 9 00:33:26.122518 containerd[1462]: time="2025-05-09T00:33:26.122430784Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 9 00:33:26.122951 containerd[1462]: time="2025-05-09T00:33:26.122922175Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 9 00:33:26.123147 containerd[1462]: time="2025-05-09T00:33:26.123121069Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 May 9 00:33:26.123187 containerd[1462]: time="2025-05-09T00:33:26.123149051Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 May 9 00:33:26.123187 containerd[1462]: time="2025-05-09T00:33:26.123167496Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 May 9 00:33:26.123263 containerd[1462]: time="2025-05-09T00:33:26.123183816Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 9 00:33:26.123263 containerd[1462]: time="2025-05-09T00:33:26.123230684Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 9 00:33:26.123263 containerd[1462]: time="2025-05-09T00:33:26.123249970Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." 
type=io.containerd.service.v1 May 9 00:33:26.123354 containerd[1462]: time="2025-05-09T00:33:26.123266281Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 9 00:33:26.123354 containerd[1462]: time="2025-05-09T00:33:26.123303360Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 9 00:33:26.123354 containerd[1462]: time="2025-05-09T00:33:26.123326784Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 9 00:33:26.123354 containerd[1462]: time="2025-05-09T00:33:26.123341893Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 9 00:33:26.123433 containerd[1462]: time="2025-05-09T00:33:26.123356831Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 9 00:33:26.123433 containerd[1462]: time="2025-05-09T00:33:26.123394862Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 9 00:33:26.123433 containerd[1462]: time="2025-05-09T00:33:26.123414910Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 9 00:33:26.123433 containerd[1462]: time="2025-05-09T00:33:26.123430549Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 9 00:33:26.123514 containerd[1462]: time="2025-05-09T00:33:26.123445397Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 9 00:33:26.123514 containerd[1462]: time="2025-05-09T00:33:26.123459303Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 9 00:33:26.123514 containerd[1462]: time="2025-05-09T00:33:26.123474011Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 9 00:33:26.123514 containerd[1462]: time="2025-05-09T00:33:26.123498637Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 9 00:33:26.123514 containerd[1462]: time="2025-05-09T00:33:26.123511461Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 9 00:33:26.123697 containerd[1462]: time="2025-05-09T00:33:26.123548350Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 May 9 00:33:26.123697 containerd[1462]: time="2025-05-09T00:33:26.123568317Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 May 9 00:33:26.123697 containerd[1462]: time="2025-05-09T00:33:26.123583035Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 9 00:33:26.123697 containerd[1462]: time="2025-05-09T00:33:26.123597663Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 May 9 00:33:26.123697 containerd[1462]: time="2025-05-09T00:33:26.123612570Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 9 00:33:26.123697 containerd[1462]: time="2025-05-09T00:33:26.123630895Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." 
type=io.containerd.transfer.v1 May 9 00:33:26.123697 containerd[1462]: time="2025-05-09T00:33:26.123669658Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 May 9 00:33:26.123697 containerd[1462]: time="2025-05-09T00:33:26.123689114Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 9 00:33:26.123844 containerd[1462]: time="2025-05-09T00:33:26.123707238Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 9 00:33:26.123844 containerd[1462]: time="2025-05-09T00:33:26.123771649Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 9 00:33:26.123844 containerd[1462]: time="2025-05-09T00:33:26.123789132Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 May 9 00:33:26.123844 containerd[1462]: time="2025-05-09T00:33:26.123800553Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 9 00:33:26.123844 containerd[1462]: time="2025-05-09T00:33:26.123813227Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 May 9 00:33:26.123844 containerd[1462]: time="2025-05-09T00:33:26.123822715Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 9 00:33:26.123844 containerd[1462]: time="2025-05-09T00:33:26.123835128Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 May 9 00:33:26.123978 containerd[1462]: time="2025-05-09T00:33:26.123855837Z" level=info msg="NRI interface is disabled by configuration." May 9 00:33:26.123978 containerd[1462]: time="2025-05-09T00:33:26.123867338Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 May 9 00:33:26.124363 containerd[1462]: time="2025-05-09T00:33:26.124273400Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 9 00:33:26.124741 containerd[1462]: time="2025-05-09T00:33:26.124713215Z" level=info msg="Connect containerd service" May 9 00:33:26.124800 containerd[1462]: time="2025-05-09T00:33:26.124783086Z" level=info msg="using legacy CRI server" May 9 00:33:26.124800 containerd[1462]: time="2025-05-09T00:33:26.124795139Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 9 00:33:26.125043 containerd[1462]: time="2025-05-09T00:33:26.124964176Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 9 00:33:26.125880 containerd[1462]: time="2025-05-09T00:33:26.125845659Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 9 00:33:26.126285 
containerd[1462]: time="2025-05-09T00:33:26.126076702Z" level=info msg="Start subscribing containerd event" May 9 00:33:26.126285 containerd[1462]: time="2025-05-09T00:33:26.126243425Z" level=info msg="Start recovering state" May 9 00:33:26.126610 containerd[1462]: time="2025-05-09T00:33:26.126519794Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 9 00:33:26.126610 containerd[1462]: time="2025-05-09T00:33:26.126557945Z" level=info msg="Start event monitor" May 9 00:33:26.126610 containerd[1462]: time="2025-05-09T00:33:26.126584525Z" level=info msg="Start snapshots syncer" May 9 00:33:26.126610 containerd[1462]: time="2025-05-09T00:33:26.126594474Z" level=info msg=serving... address=/run/containerd/containerd.sock May 9 00:33:26.126859 containerd[1462]: time="2025-05-09T00:33:26.126797745Z" level=info msg="Start cni network conf syncer for default" May 9 00:33:26.126937 containerd[1462]: time="2025-05-09T00:33:26.126901229Z" level=info msg="Start streaming server" May 9 00:33:26.127784 containerd[1462]: time="2025-05-09T00:33:26.127281833Z" level=info msg="containerd successfully booted in 0.333338s" May 9 00:33:26.127414 systemd[1]: Started containerd.service - containerd container runtime. May 9 00:33:26.333954 tar[1457]: linux-amd64/README.md May 9 00:33:26.351059 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 9 00:33:27.473421 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 9 00:33:27.475460 systemd[1]: Reached target multi-user.target - Multi-User System. May 9 00:33:27.478433 systemd[1]: Startup finished in 1.311s (kernel) + 6.512s (initrd) + 6.595s (userspace) = 14.419s. May 9 00:33:27.480345 (kubelet)[1547]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 9 00:33:27.675874 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 9 00:33:27.690520 systemd[1]: Started sshd@0-10.0.0.84:22-10.0.0.1:34608.service - OpenSSH per-connection server daemon (10.0.0.1:34608). May 9 00:33:27.738977 sshd[1558]: Accepted publickey for core from 10.0.0.1 port 34608 ssh2: RSA SHA256:YkFjw59PeYd0iJo8o6yRNOqCW4DsIah6oVydwFHJQdU May 9 00:33:27.743320 sshd[1558]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:33:27.755341 systemd-logind[1443]: New session 1 of user core. May 9 00:33:27.756989 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 9 00:33:27.770599 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 9 00:33:27.829430 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 9 00:33:27.837602 systemd[1]: Starting user@500.service - User Manager for UID 500... May 9 00:33:27.845154 (systemd)[1562]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 9 00:33:27.997631 systemd[1562]: Queued start job for default target default.target. May 9 00:33:28.007533 systemd[1562]: Created slice app.slice - User Application Slice. May 9 00:33:28.007557 systemd[1562]: Reached target paths.target - Paths. May 9 00:33:28.007570 systemd[1562]: Reached target timers.target - Timers. May 9 00:33:28.009262 systemd[1562]: Starting dbus.socket - D-Bus User Message Bus Socket... 
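The long CRI config dump above corresponds to containerd's /etc/containerd/config.toml. A minimal sketch reflecting the settings visible in the dump (SystemdCgroup=true for runc, the pause:3.8 sandbox image, CNI config under /etc/cni/net.d); field names follow containerd 1.7's v2 config format:

    version = 2

    [plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "registry.k8s.io/pause:3.8"

    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
      runtime_type = "io.containerd.runc.v2"

    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
      SystemdCgroup = true

    [plugins."io.containerd.grpc.v1.cri".cni]
      bin_dir = "/opt/cni/bin"
      conf_dir = "/etc/cni/net.d"

The "failed to load cni during init" error is expected at this point: /etc/cni/net.d is empty until a network plugin installs its conflist, and the CNI conf syncer started above will pick it up once one appears.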
May 9 00:33:28.337226 kubelet[1547]: E0509 00:33:28.336958 1547 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 9 00:33:28.342834 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 9 00:33:28.343117 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 9 00:33:28.343842 systemd[1]: kubelet.service: Consumed 2.177s CPU time. May 9 00:33:28.344823 systemd[1562]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 9 00:33:28.344967 systemd[1562]: Reached target sockets.target - Sockets. May 9 00:33:28.344985 systemd[1562]: Reached target basic.target - Basic System. May 9 00:33:28.345024 systemd[1562]: Reached target default.target - Main User Target. May 9 00:33:28.345064 systemd[1562]: Startup finished in 487ms. May 9 00:33:28.346164 systemd[1]: Started user@500.service - User Manager for UID 500. May 9 00:33:28.362371 systemd[1]: Started session-1.scope - Session 1 of User core. May 9 00:33:28.428562 systemd[1]: Started sshd@1-10.0.0.84:22-10.0.0.1:34622.service - OpenSSH per-connection server daemon (10.0.0.1:34622). May 9 00:33:28.472359 sshd[1575]: Accepted publickey for core from 10.0.0.1 port 34622 ssh2: RSA SHA256:YkFjw59PeYd0iJo8o6yRNOqCW4DsIah6oVydwFHJQdU May 9 00:33:28.474639 sshd[1575]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:33:28.480247 systemd-logind[1443]: New session 2 of user core. May 9 00:33:28.491458 systemd[1]: Started session-2.scope - Session 2 of User core. May 9 00:33:28.549073 sshd[1575]: pam_unix(sshd:session): session closed for user core May 9 00:33:28.558050 systemd[1]: sshd@1-10.0.0.84:22-10.0.0.1:34622.service: Deactivated successfully. May 9 00:33:28.560750 systemd[1]: session-2.scope: Deactivated successfully. May 9 00:33:28.562677 systemd-logind[1443]: Session 2 logged out. Waiting for processes to exit. May 9 00:33:28.571486 systemd[1]: Started sshd@2-10.0.0.84:22-10.0.0.1:34630.service - OpenSSH per-connection server daemon (10.0.0.1:34630). May 9 00:33:28.572816 systemd-logind[1443]: Removed session 2. May 9 00:33:28.605144 sshd[1582]: Accepted publickey for core from 10.0.0.1 port 34630 ssh2: RSA SHA256:YkFjw59PeYd0iJo8o6yRNOqCW4DsIah6oVydwFHJQdU May 9 00:33:28.607244 sshd[1582]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:33:28.612407 systemd-logind[1443]: New session 3 of user core. May 9 00:33:28.619362 systemd[1]: Started session-3.scope - Session 3 of User core. May 9 00:33:28.671844 sshd[1582]: pam_unix(sshd:session): session closed for user core May 9 00:33:28.692402 systemd[1]: sshd@2-10.0.0.84:22-10.0.0.1:34630.service: Deactivated successfully. May 9 00:33:28.695261 systemd[1]: session-3.scope: Deactivated successfully. May 9 00:33:28.697022 systemd-logind[1443]: Session 3 logged out. Waiting for processes to exit. May 9 00:33:28.708684 systemd[1]: Started sshd@3-10.0.0.84:22-10.0.0.1:34634.service - OpenSSH per-connection server daemon (10.0.0.1:34634). May 9 00:33:28.709974 systemd-logind[1443]: Removed session 3. 
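The kubelet exit above (and the identical failures later in the log) is the normal pre-bootstrap crash loop: /var/lib/kubelet/config.yaml is ordinarily written by kubeadm init/join, which has not run yet, so systemd keeps restarting the unit until it exists. For reference, a minimal hand-written sketch of that file; the field names are from kubelet.config.k8s.io/v1beta1 and the values match what the kubelet later reports (systemd cgroup driver, static pods from /etc/kubernetes/manifests):

    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd            # matches containerd's SystemdCgroup = true
    staticPodPath: /etc/kubernetes/manifests
    clusterDomain: cluster.local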
May 9 00:33:28.746480 sshd[1589]: Accepted publickey for core from 10.0.0.1 port 34634 ssh2: RSA SHA256:YkFjw59PeYd0iJo8o6yRNOqCW4DsIah6oVydwFHJQdU May 9 00:33:28.748598 sshd[1589]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:33:28.754364 systemd-logind[1443]: New session 4 of user core. May 9 00:33:28.764398 systemd[1]: Started session-4.scope - Session 4 of User core. May 9 00:33:28.823428 sshd[1589]: pam_unix(sshd:session): session closed for user core May 9 00:33:28.835880 systemd[1]: sshd@3-10.0.0.84:22-10.0.0.1:34634.service: Deactivated successfully. May 9 00:33:28.838862 systemd[1]: session-4.scope: Deactivated successfully. May 9 00:33:28.841404 systemd-logind[1443]: Session 4 logged out. Waiting for processes to exit. May 9 00:33:28.849124 systemd[1]: Started sshd@4-10.0.0.84:22-10.0.0.1:34650.service - OpenSSH per-connection server daemon (10.0.0.1:34650). May 9 00:33:28.852832 systemd-logind[1443]: Removed session 4. May 9 00:33:28.914666 sshd[1596]: Accepted publickey for core from 10.0.0.1 port 34650 ssh2: RSA SHA256:YkFjw59PeYd0iJo8o6yRNOqCW4DsIah6oVydwFHJQdU May 9 00:33:28.915730 sshd[1596]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:33:28.933230 systemd-logind[1443]: New session 5 of user core. May 9 00:33:28.950535 systemd[1]: Started session-5.scope - Session 5 of User core. May 9 00:33:29.066583 sudo[1599]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 9 00:33:29.067053 sudo[1599]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 9 00:33:29.094975 sudo[1599]: pam_unix(sudo:session): session closed for user root May 9 00:33:29.098697 sshd[1596]: pam_unix(sshd:session): session closed for user core May 9 00:33:29.129775 systemd[1]: sshd@4-10.0.0.84:22-10.0.0.1:34650.service: Deactivated successfully. May 9 00:33:29.132569 systemd[1]: session-5.scope: Deactivated successfully. May 9 00:33:29.142797 systemd-logind[1443]: Session 5 logged out. Waiting for processes to exit. May 9 00:33:29.155430 systemd[1]: Started sshd@5-10.0.0.84:22-10.0.0.1:34660.service - OpenSSH per-connection server daemon (10.0.0.1:34660). May 9 00:33:29.157845 systemd-logind[1443]: Removed session 5. May 9 00:33:29.210643 sshd[1604]: Accepted publickey for core from 10.0.0.1 port 34660 ssh2: RSA SHA256:YkFjw59PeYd0iJo8o6yRNOqCW4DsIah6oVydwFHJQdU May 9 00:33:29.213327 sshd[1604]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:33:29.223799 systemd-logind[1443]: New session 6 of user core. May 9 00:33:29.239575 systemd[1]: Started session-6.scope - Session 6 of User core. May 9 00:33:29.310724 sudo[1608]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 9 00:33:29.311277 sudo[1608]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 9 00:33:29.325927 sudo[1608]: pam_unix(sudo:session): session closed for user root May 9 00:33:29.343098 sudo[1607]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules May 9 00:33:29.343940 sudo[1607]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 9 00:33:29.396647 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... May 9 00:33:29.413726 auditctl[1611]: No rules May 9 00:33:29.414459 systemd[1]: audit-rules.service: Deactivated successfully. 
May 9 00:33:29.414816 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. May 9 00:33:29.434412 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... May 9 00:33:29.505566 augenrules[1629]: No rules May 9 00:33:29.507742 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. May 9 00:33:29.512790 sudo[1607]: pam_unix(sudo:session): session closed for user root May 9 00:33:29.524213 sshd[1604]: pam_unix(sshd:session): session closed for user core May 9 00:33:29.562711 systemd[1]: Started sshd@6-10.0.0.84:22-10.0.0.1:34668.service - OpenSSH per-connection server daemon (10.0.0.1:34668). May 9 00:33:29.563439 systemd[1]: sshd@5-10.0.0.84:22-10.0.0.1:34660.service: Deactivated successfully. May 9 00:33:29.566788 systemd[1]: session-6.scope: Deactivated successfully. May 9 00:33:29.576084 systemd-logind[1443]: Session 6 logged out. Waiting for processes to exit. May 9 00:33:29.577921 systemd-logind[1443]: Removed session 6. May 9 00:33:29.608929 sshd[1635]: Accepted publickey for core from 10.0.0.1 port 34668 ssh2: RSA SHA256:YkFjw59PeYd0iJo8o6yRNOqCW4DsIah6oVydwFHJQdU May 9 00:33:29.611627 sshd[1635]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:33:29.618362 systemd-logind[1443]: New session 7 of user core. May 9 00:33:29.628536 systemd[1]: Started session-7.scope - Session 7 of User core. May 9 00:33:29.697681 sudo[1640]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 9 00:33:29.698354 sudo[1640]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 9 00:33:33.461906 systemd[1]: Starting docker.service - Docker Application Container Engine... May 9 00:33:33.462286 (dockerd)[1660]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 9 00:33:36.546379 dockerd[1660]: time="2025-05-09T00:33:36.540431896Z" level=info msg="Starting up" May 9 00:33:37.505896 dockerd[1660]: time="2025-05-09T00:33:37.505047479Z" level=info msg="Loading containers: start." May 9 00:33:37.838279 kernel: Initializing XFRM netlink socket May 9 00:33:38.078312 systemd-networkd[1385]: docker0: Link UP May 9 00:33:38.128299 dockerd[1660]: time="2025-05-09T00:33:38.127184540Z" level=info msg="Loading containers: done." May 9 00:33:38.172389 dockerd[1660]: time="2025-05-09T00:33:38.172294293Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 9 00:33:38.172647 dockerd[1660]: time="2025-05-09T00:33:38.172489089Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 May 9 00:33:38.172691 dockerd[1660]: time="2025-05-09T00:33:38.172669237Z" level=info msg="Daemon has completed initialization" May 9 00:33:38.293334 dockerd[1660]: time="2025-05-09T00:33:38.293161215Z" level=info msg="API listen on /run/docker.sock" May 9 00:33:38.293565 systemd[1]: Started docker.service - Docker Application Container Engine. May 9 00:33:38.593515 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 9 00:33:38.612720 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 9 00:33:38.929364 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
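The overlay2 warning above is informational: with CONFIG_OVERLAY_FS_REDIRECT_DIR enabled in the kernel, Docker falls back to its naive diff implementation, which mainly affects image-build performance. A quick way to confirm which storage driver the daemon settled on:

    docker info --format '{{.Driver}}'            # expect: overlay2
    docker info --format '{{json .DriverStatus}}' # per-driver details, including diff mode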
May 9 00:33:38.936788 (kubelet)[1812]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 9 00:33:39.069944 kubelet[1812]: E0509 00:33:39.069835 1812 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 9 00:33:39.079739 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 9 00:33:39.080046 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 9 00:33:39.838706 containerd[1462]: time="2025-05-09T00:33:39.838582915Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\"" May 9 00:33:41.252437 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2576384318.mount: Deactivated successfully. May 9 00:33:45.156892 containerd[1462]: time="2025-05-09T00:33:45.156750467Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:33:45.158636 containerd[1462]: time="2025-05-09T00:33:45.157846503Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.4: active requests=0, bytes read=28682879" May 9 00:33:45.160038 containerd[1462]: time="2025-05-09T00:33:45.159945600Z" level=info msg="ImageCreate event name:\"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:33:45.167624 containerd[1462]: time="2025-05-09T00:33:45.167540155Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:33:45.169434 containerd[1462]: time="2025-05-09T00:33:45.169112765Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.4\" with image id \"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\", size \"28679679\" in 5.330422789s" May 9 00:33:45.169434 containerd[1462]: time="2025-05-09T00:33:45.169176985Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\" returns image reference \"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\"" May 9 00:33:45.173284 containerd[1462]: time="2025-05-09T00:33:45.170624551Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\"" May 9 00:33:48.166077 containerd[1462]: time="2025-05-09T00:33:48.165115800Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:33:48.166919 containerd[1462]: time="2025-05-09T00:33:48.166791744Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.4: active requests=0, bytes read=24779589" May 9 00:33:48.168679 containerd[1462]: time="2025-05-09T00:33:48.168620304Z" level=info msg="ImageCreate event name:\"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:33:48.184182 containerd[1462]: 
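The PullImage/ImageCreate pairs here and below are containerd fetching the control-plane images, most likely driven by the kubeadm flow started from install.sh earlier in the log. The same pull can be reproduced directly against containerd's k8s.io namespace with the bundled ctr tool (a debugging sketch, not something the boot flow itself runs):

    ctr --namespace k8s.io images pull registry.k8s.io/kube-apiserver:v1.32.4
    ctr --namespace k8s.io images ls | grep kube-apiserver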
time="2025-05-09T00:33:48.184081905Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:33:48.185786 containerd[1462]: time="2025-05-09T00:33:48.185722392Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.4\" with image id \"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\", size \"26267962\" in 3.015055923s" May 9 00:33:48.185786 containerd[1462]: time="2025-05-09T00:33:48.185769421Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\" returns image reference \"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\"" May 9 00:33:48.188854 containerd[1462]: time="2025-05-09T00:33:48.188512877Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\"" May 9 00:33:49.291017 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 9 00:33:49.304680 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 9 00:33:49.576639 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 9 00:33:49.582661 (kubelet)[1893]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 9 00:33:49.664148 kubelet[1893]: E0509 00:33:49.664078 1893 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 9 00:33:49.672534 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 9 00:33:49.672819 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
May 9 00:33:50.384071 containerd[1462]: time="2025-05-09T00:33:50.380795642Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:33:50.387264 containerd[1462]: time="2025-05-09T00:33:50.386128295Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.4: active requests=0, bytes read=19169938" May 9 00:33:50.391056 containerd[1462]: time="2025-05-09T00:33:50.388768247Z" level=info msg="ImageCreate event name:\"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:33:50.395443 containerd[1462]: time="2025-05-09T00:33:50.394148539Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:33:50.404473 containerd[1462]: time="2025-05-09T00:33:50.396825510Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.4\" with image id \"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\", size \"20658329\" in 2.208261327s" May 9 00:33:50.404473 containerd[1462]: time="2025-05-09T00:33:50.400030712Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\" returns image reference \"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\"" May 9 00:33:50.406707 containerd[1462]: time="2025-05-09T00:33:50.406624620Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\"" May 9 00:33:53.038281 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2801224254.mount: Deactivated successfully. 
May 9 00:33:55.116931 containerd[1462]: time="2025-05-09T00:33:55.113743390Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:33:55.119343 containerd[1462]: time="2025-05-09T00:33:55.119249728Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.4: active requests=0, bytes read=30917856" May 9 00:33:55.121559 containerd[1462]: time="2025-05-09T00:33:55.121500029Z" level=info msg="ImageCreate event name:\"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:33:55.126743 containerd[1462]: time="2025-05-09T00:33:55.126672882Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:33:55.135145 containerd[1462]: time="2025-05-09T00:33:55.128005371Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.4\" with image id \"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\", repo tag \"registry.k8s.io/kube-proxy:v1.32.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\", size \"30916875\" in 4.72132163s" May 9 00:33:55.135145 containerd[1462]: time="2025-05-09T00:33:55.133931727Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\" returns image reference \"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\"" May 9 00:33:55.136770 containerd[1462]: time="2025-05-09T00:33:55.136722231Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" May 9 00:33:55.796558 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4173503303.mount: Deactivated successfully. 
May 9 00:33:58.361968 containerd[1462]: time="2025-05-09T00:33:58.357179953Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:33:58.361968 containerd[1462]: time="2025-05-09T00:33:58.359106088Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" May 9 00:33:58.361968 containerd[1462]: time="2025-05-09T00:33:58.361089978Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:33:58.370320 containerd[1462]: time="2025-05-09T00:33:58.369989395Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:33:58.371940 containerd[1462]: time="2025-05-09T00:33:58.371698974Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 3.234923701s" May 9 00:33:58.371940 containerd[1462]: time="2025-05-09T00:33:58.371763511Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" May 9 00:33:58.374314 containerd[1462]: time="2025-05-09T00:33:58.374280702Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 9 00:33:59.083134 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount911549020.mount: Deactivated successfully. 
May 9 00:33:59.117827 containerd[1462]: time="2025-05-09T00:33:59.117715276Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:33:59.119185 containerd[1462]: time="2025-05-09T00:33:59.119112581Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" May 9 00:33:59.124483 containerd[1462]: time="2025-05-09T00:33:59.124377365Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:33:59.128394 containerd[1462]: time="2025-05-09T00:33:59.128301545Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:33:59.129915 containerd[1462]: time="2025-05-09T00:33:59.129836240Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 755.333121ms" May 9 00:33:59.129915 containerd[1462]: time="2025-05-09T00:33:59.129901578Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" May 9 00:33:59.130812 containerd[1462]: time="2025-05-09T00:33:59.130711660Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" May 9 00:33:59.782787 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. May 9 00:33:59.792639 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 9 00:34:00.048664 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 9 00:34:00.054470 (kubelet)[1974]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 9 00:34:00.330836 kubelet[1974]: E0509 00:34:00.330621 1974 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 9 00:34:00.337338 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 9 00:34:00.337708 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 9 00:34:00.507415 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2247423556.mount: Deactivated successfully. 
May 9 00:34:05.820420 containerd[1462]: time="2025-05-09T00:34:05.820304890Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:34:05.826376 containerd[1462]: time="2025-05-09T00:34:05.822534718Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551360" May 9 00:34:05.829639 containerd[1462]: time="2025-05-09T00:34:05.828766062Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:34:05.837846 containerd[1462]: time="2025-05-09T00:34:05.837711923Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:34:05.839791 containerd[1462]: time="2025-05-09T00:34:05.839707085Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 6.708955528s" May 9 00:34:05.839791 containerd[1462]: time="2025-05-09T00:34:05.839756050Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" May 9 00:34:09.394047 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 9 00:34:09.433810 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 9 00:34:09.518786 systemd[1]: Reloading requested from client PID 2067 ('systemctl') (unit session-7.scope)... May 9 00:34:09.523494 systemd[1]: Reloading... May 9 00:34:09.719588 zram_generator::config[2107]: No configuration found. May 9 00:34:10.065192 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 9 00:34:10.110621 update_engine[1450]: I20250509 00:34:10.110494 1450 update_attempter.cc:509] Updating boot flags... May 9 00:34:10.238490 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2147) May 9 00:34:10.250098 systemd[1]: Reloading finished in 721 ms. May 9 00:34:10.322252 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2147) May 9 00:34:10.396272 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2147) May 9 00:34:10.462170 (kubelet)[2160]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 9 00:34:10.542727 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 9 00:34:10.551445 systemd[1]: kubelet.service: Deactivated successfully. May 9 00:34:10.551795 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 9 00:34:10.597752 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 9 00:34:11.992572 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 9 00:34:12.002311 (kubelet)[2176]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 9 00:34:12.148538 kubelet[2176]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 9 00:34:12.149151 kubelet[2176]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 9 00:34:12.149151 kubelet[2176]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 9 00:34:12.149352 kubelet[2176]: I0509 00:34:12.149289 2176 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 9 00:34:12.845278 kubelet[2176]: I0509 00:34:12.838592 2176 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" May 9 00:34:12.845278 kubelet[2176]: I0509 00:34:12.840296 2176 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 9 00:34:12.848390 kubelet[2176]: I0509 00:34:12.845674 2176 server.go:954] "Client rotation is on, will bootstrap in background" May 9 00:34:12.920527 kubelet[2176]: E0509 00:34:12.916809 2176 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.84:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.84:6443: connect: connection refused" logger="UnhandledError" May 9 00:34:12.921054 kubelet[2176]: I0509 00:34:12.920998 2176 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 9 00:34:12.951444 kubelet[2176]: E0509 00:34:12.951367 2176 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 9 00:34:12.951444 kubelet[2176]: I0509 00:34:12.951425 2176 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 9 00:34:12.964705 kubelet[2176]: I0509 00:34:12.964439 2176 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
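The deprecation warnings above are the kubelet asking for its remaining command-line flags to be moved into the config file it just loaded. A sketch of the equivalent KubeletConfiguration stanzas; both fields exist in kubelet.config.k8s.io/v1beta1 as of this kubelet version, and the volume plugin path matches the Flexvolume directory the kubelet recreates below:

    # excerpt for /var/lib/kubelet/config.yaml
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/

--pod-infra-container-image has no config-file equivalent; per the warning it is simply scheduled for removal in 1.35.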
defaulting to /" May 9 00:34:12.970367 kubelet[2176]: I0509 00:34:12.970247 2176 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 9 00:34:12.970667 kubelet[2176]: I0509 00:34:12.970349 2176 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 9 00:34:12.970803 kubelet[2176]: I0509 00:34:12.970678 2176 topology_manager.go:138] "Creating topology manager with none policy" May 9 00:34:12.970803 kubelet[2176]: I0509 00:34:12.970694 2176 container_manager_linux.go:304] "Creating device plugin manager" May 9 00:34:12.970980 kubelet[2176]: I0509 00:34:12.970941 2176 state_mem.go:36] "Initialized new in-memory state store" May 9 00:34:12.975471 kubelet[2176]: I0509 00:34:12.975225 2176 kubelet.go:446] "Attempting to sync node with API server" May 9 00:34:12.975471 kubelet[2176]: I0509 00:34:12.975272 2176 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 9 00:34:12.975471 kubelet[2176]: I0509 00:34:12.975312 2176 kubelet.go:352] "Adding apiserver pod source" May 9 00:34:12.975471 kubelet[2176]: I0509 00:34:12.975331 2176 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 9 00:34:12.983330 kubelet[2176]: I0509 00:34:12.982429 2176 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 9 00:34:12.983330 kubelet[2176]: W0509 00:34:12.982724 2176 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.84:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.84:6443: connect: connection refused May 9 00:34:12.983330 kubelet[2176]: E0509 00:34:12.982802 2176 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
\"https://10.0.0.84:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.84:6443: connect: connection refused" logger="UnhandledError" May 9 00:34:12.983330 kubelet[2176]: I0509 00:34:12.983022 2176 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 9 00:34:12.983330 kubelet[2176]: W0509 00:34:12.983140 2176 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 9 00:34:12.990804 kubelet[2176]: W0509 00:34:12.989227 2176 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.84:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.84:6443: connect: connection refused May 9 00:34:12.990804 kubelet[2176]: E0509 00:34:12.989333 2176 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.84:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.84:6443: connect: connection refused" logger="UnhandledError" May 9 00:34:13.000675 kubelet[2176]: I0509 00:34:12.999667 2176 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 9 00:34:13.000675 kubelet[2176]: I0509 00:34:13.000707 2176 server.go:1287] "Started kubelet" May 9 00:34:13.007659 kubelet[2176]: I0509 00:34:13.005710 2176 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 9 00:34:13.007659 kubelet[2176]: I0509 00:34:13.006145 2176 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 9 00:34:13.007659 kubelet[2176]: I0509 00:34:13.006611 2176 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 9 00:34:13.011796 kubelet[2176]: I0509 00:34:13.008673 2176 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 9 00:34:13.011796 kubelet[2176]: I0509 00:34:13.009126 2176 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 9 00:34:13.011796 kubelet[2176]: I0509 00:34:13.010857 2176 server.go:490] "Adding debug handlers to kubelet server" May 9 00:34:13.021543 kubelet[2176]: I0509 00:34:13.014463 2176 volume_manager.go:297] "Starting Kubelet Volume Manager" May 9 00:34:13.021543 kubelet[2176]: E0509 00:34:13.018590 2176 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 9 00:34:13.021543 kubelet[2176]: I0509 00:34:13.021089 2176 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 9 00:34:13.021543 kubelet[2176]: I0509 00:34:13.021216 2176 reconciler.go:26] "Reconciler: start to sync state" May 9 00:34:13.022365 kubelet[2176]: I0509 00:34:13.022329 2176 factory.go:221] Registration of the systemd container factory successfully May 9 00:34:13.022533 kubelet[2176]: I0509 00:34:13.022487 2176 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 9 00:34:13.024248 kubelet[2176]: E0509 00:34:13.024181 2176 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.84:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.84:6443: connect: connection refused" interval="200ms" May 9 00:34:13.026663 kubelet[2176]: E0509 00:34:13.024462 2176 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.84:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.84:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183db4a568370657 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-09 00:34:13.000660567 +0000 UTC m=+0.980401000,LastTimestamp:2025-05-09 00:34:13.000660567 +0000 UTC m=+0.980401000,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 9 00:34:13.027153 kubelet[2176]: W0509 00:34:13.027091 2176 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.84:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.84:6443: connect: connection refused May 9 00:34:13.029396 kubelet[2176]: E0509 00:34:13.027646 2176 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.84:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.84:6443: connect: connection refused" logger="UnhandledError" May 9 00:34:13.093713 kubelet[2176]: I0509 00:34:13.080900 2176 factory.go:221] Registration of the containerd container factory successfully May 9 00:34:13.119781 kubelet[2176]: E0509 00:34:13.119564 2176 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 9 00:34:13.151660 kubelet[2176]: I0509 00:34:13.149765 2176 cpu_manager.go:221] "Starting CPU manager" policy="none" May 9 00:34:13.151660 kubelet[2176]: I0509 00:34:13.149821 2176 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 9 00:34:13.151660 kubelet[2176]: I0509 00:34:13.149854 2176 state_mem.go:36] "Initialized new in-memory state store" May 9 00:34:13.152486 kubelet[2176]: I0509 00:34:13.152436 2176 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 9 00:34:13.171966 kubelet[2176]: I0509 00:34:13.171908 2176 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 9 00:34:13.171966 kubelet[2176]: I0509 00:34:13.171979 2176 status_manager.go:227] "Starting to sync pod status with apiserver" May 9 00:34:13.172216 kubelet[2176]: I0509 00:34:13.172027 2176 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
May 9 00:34:13.172216 kubelet[2176]: I0509 00:34:13.172042 2176 kubelet.go:2388] "Starting kubelet main sync loop" May 9 00:34:13.172216 kubelet[2176]: E0509 00:34:13.172122 2176 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 9 00:34:13.173111 kubelet[2176]: W0509 00:34:13.173050 2176 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.84:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.84:6443: connect: connection refused May 9 00:34:13.173169 kubelet[2176]: E0509 00:34:13.173123 2176 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.84:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.84:6443: connect: connection refused" logger="UnhandledError" May 9 00:34:13.193125 kubelet[2176]: I0509 00:34:13.192994 2176 policy_none.go:49] "None policy: Start" May 9 00:34:13.194519 kubelet[2176]: I0509 00:34:13.193183 2176 memory_manager.go:186] "Starting memorymanager" policy="None" May 9 00:34:13.194613 kubelet[2176]: I0509 00:34:13.194530 2176 state_mem.go:35] "Initializing new in-memory state store" May 9 00:34:13.227962 kubelet[2176]: E0509 00:34:13.225435 2176 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 9 00:34:13.227962 kubelet[2176]: E0509 00:34:13.227041 2176 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.84:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.84:6443: connect: connection refused" interval="400ms" May 9 00:34:13.245611 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 9 00:34:13.272847 kubelet[2176]: E0509 00:34:13.272776 2176 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 9 00:34:13.277044 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 9 00:34:13.326073 kubelet[2176]: E0509 00:34:13.325988 2176 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 9 00:34:13.362961 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 9 00:34:13.384522 kubelet[2176]: I0509 00:34:13.383892 2176 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 9 00:34:13.384522 kubelet[2176]: I0509 00:34:13.384245 2176 eviction_manager.go:189] "Eviction manager: starting control loop" May 9 00:34:13.384522 kubelet[2176]: I0509 00:34:13.384274 2176 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 9 00:34:13.385772 kubelet[2176]: I0509 00:34:13.384778 2176 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 9 00:34:13.386388 kubelet[2176]: E0509 00:34:13.386349 2176 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" May 9 00:34:13.386452 kubelet[2176]: E0509 00:34:13.386422 2176 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 9 00:34:13.493552 kubelet[2176]: I0509 00:34:13.490453 2176 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 9 00:34:13.490574 systemd[1]: Created slice kubepods-burstable-pod07d701b33ef38742a282533f1dcda708.slice - libcontainer container kubepods-burstable-pod07d701b33ef38742a282533f1dcda708.slice. May 9 00:34:13.495892 kubelet[2176]: E0509 00:34:13.495801 2176 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.84:6443/api/v1/nodes\": dial tcp 10.0.0.84:6443: connect: connection refused" node="localhost" May 9 00:34:13.504879 kubelet[2176]: E0509 00:34:13.504808 2176 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 9 00:34:13.513581 systemd[1]: Created slice kubepods-burstable-pod5386fe11ed933ab82453de11903c7f47.slice - libcontainer container kubepods-burstable-pod5386fe11ed933ab82453de11903c7f47.slice. May 9 00:34:13.527072 kubelet[2176]: I0509 00:34:13.527003 2176 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 9 00:34:13.527072 kubelet[2176]: I0509 00:34:13.527071 2176 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 9 00:34:13.527440 kubelet[2176]: I0509 00:34:13.527316 2176 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/07d701b33ef38742a282533f1dcda708-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"07d701b33ef38742a282533f1dcda708\") " pod="kube-system/kube-apiserver-localhost" May 9 00:34:13.527440 kubelet[2176]: I0509 00:34:13.527347 2176 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/07d701b33ef38742a282533f1dcda708-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"07d701b33ef38742a282533f1dcda708\") " pod="kube-system/kube-apiserver-localhost" May 9 00:34:13.527440 kubelet[2176]: I0509 00:34:13.527368 2176 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 9 00:34:13.527440 kubelet[2176]: I0509 00:34:13.527387 2176 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-flexvolume-dir\") pod 
\"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 9 00:34:13.527440 kubelet[2176]: I0509 00:34:13.527405 2176 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 9 00:34:13.527652 kubelet[2176]: I0509 00:34:13.527423 2176 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2980a8ab51edc665be10a02e33130e15-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"2980a8ab51edc665be10a02e33130e15\") " pod="kube-system/kube-scheduler-localhost" May 9 00:34:13.527652 kubelet[2176]: I0509 00:34:13.527469 2176 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/07d701b33ef38742a282533f1dcda708-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"07d701b33ef38742a282533f1dcda708\") " pod="kube-system/kube-apiserver-localhost" May 9 00:34:13.530613 kubelet[2176]: E0509 00:34:13.530188 2176 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 9 00:34:13.535769 systemd[1]: Created slice kubepods-burstable-pod2980a8ab51edc665be10a02e33130e15.slice - libcontainer container kubepods-burstable-pod2980a8ab51edc665be10a02e33130e15.slice. May 9 00:34:13.538395 kubelet[2176]: E0509 00:34:13.538332 2176 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 9 00:34:13.628230 kubelet[2176]: E0509 00:34:13.628119 2176 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.84:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.84:6443: connect: connection refused" interval="800ms" May 9 00:34:13.699032 kubelet[2176]: I0509 00:34:13.698832 2176 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 9 00:34:13.700510 kubelet[2176]: E0509 00:34:13.700326 2176 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.84:6443/api/v1/nodes\": dial tcp 10.0.0.84:6443: connect: connection refused" node="localhost" May 9 00:34:13.808872 kubelet[2176]: E0509 00:34:13.806477 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:34:13.810606 containerd[1462]: time="2025-05-09T00:34:13.810054069Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:07d701b33ef38742a282533f1dcda708,Namespace:kube-system,Attempt:0,}" May 9 00:34:13.834707 kubelet[2176]: E0509 00:34:13.831861 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:34:13.834988 containerd[1462]: time="2025-05-09T00:34:13.832675317Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5386fe11ed933ab82453de11903c7f47,Namespace:kube-system,Attempt:0,}" May 9 00:34:13.848885 kubelet[2176]: E0509 00:34:13.847811 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:34:13.849080 containerd[1462]: time="2025-05-09T00:34:13.848625750Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:2980a8ab51edc665be10a02e33130e15,Namespace:kube-system,Attempt:0,}" May 9 00:34:13.993403 kubelet[2176]: W0509 00:34:13.987341 2176 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.84:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.84:6443: connect: connection refused May 9 00:34:13.993403 kubelet[2176]: E0509 00:34:13.987452 2176 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.84:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.84:6443: connect: connection refused" logger="UnhandledError" May 9 00:34:14.104293 kubelet[2176]: I0509 00:34:14.104216 2176 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 9 00:34:14.104711 kubelet[2176]: E0509 00:34:14.104664 2176 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.84:6443/api/v1/nodes\": dial tcp 10.0.0.84:6443: connect: connection refused" node="localhost" May 9 00:34:14.297877 kubelet[2176]: W0509 00:34:14.297610 2176 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.84:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.84:6443: connect: connection refused May 9 00:34:14.297877 kubelet[2176]: E0509 00:34:14.297721 2176 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.84:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.84:6443: connect: connection refused" logger="UnhandledError" May 9 00:34:14.384569 kubelet[2176]: W0509 00:34:14.384318 2176 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.84:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.84:6443: connect: connection refused May 9 00:34:14.384569 kubelet[2176]: E0509 00:34:14.384427 2176 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.84:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.84:6443: connect: connection refused" logger="UnhandledError" May 9 00:34:14.429722 kubelet[2176]: E0509 00:34:14.429614 2176 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.84:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.84:6443: connect: connection refused" interval="1.6s" May 9 00:34:14.546939 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3907976597.mount: Deactivated successfully. 
May 9 00:34:14.588495 kubelet[2176]: W0509 00:34:14.588264 2176 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.84:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.84:6443: connect: connection refused May 9 00:34:14.588495 kubelet[2176]: E0509 00:34:14.588372 2176 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.84:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.84:6443: connect: connection refused" logger="UnhandledError" May 9 00:34:14.594071 containerd[1462]: time="2025-05-09T00:34:14.593962383Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 9 00:34:14.595123 containerd[1462]: time="2025-05-09T00:34:14.595024621Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 9 00:34:14.602462 containerd[1462]: time="2025-05-09T00:34:14.602352644Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 9 00:34:14.609139 containerd[1462]: time="2025-05-09T00:34:14.607889958Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 9 00:34:14.618784 containerd[1462]: time="2025-05-09T00:34:14.611114134Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 9 00:34:14.620322 containerd[1462]: time="2025-05-09T00:34:14.620219801Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 9 00:34:14.622167 containerd[1462]: time="2025-05-09T00:34:14.621962860Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" May 9 00:34:14.631177 containerd[1462]: time="2025-05-09T00:34:14.631073277Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 9 00:34:14.631883 containerd[1462]: time="2025-05-09T00:34:14.631817027Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 783.086367ms" May 9 00:34:14.653340 containerd[1462]: time="2025-05-09T00:34:14.652872012Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 842.689237ms" May 9 00:34:14.665447 containerd[1462]: 
time="2025-05-09T00:34:14.665099020Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 832.314064ms" May 9 00:34:15.133720 kubelet[2176]: E0509 00:34:15.133641 2176 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.84:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.84:6443: connect: connection refused" logger="UnhandledError" May 9 00:34:15.140372 kubelet[2176]: I0509 00:34:15.139832 2176 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 9 00:34:15.140372 kubelet[2176]: E0509 00:34:15.140320 2176 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.84:6443/api/v1/nodes\": dial tcp 10.0.0.84:6443: connect: connection refused" node="localhost" May 9 00:34:15.586267 containerd[1462]: time="2025-05-09T00:34:15.585710377Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 00:34:15.586267 containerd[1462]: time="2025-05-09T00:34:15.585812162Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 00:34:15.586267 containerd[1462]: time="2025-05-09T00:34:15.585832540Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:34:15.586267 containerd[1462]: time="2025-05-09T00:34:15.585991733Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:34:15.587391 containerd[1462]: time="2025-05-09T00:34:15.586332704Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 00:34:15.587391 containerd[1462]: time="2025-05-09T00:34:15.586535672Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 00:34:15.587391 containerd[1462]: time="2025-05-09T00:34:15.586625663Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:34:15.587391 containerd[1462]: time="2025-05-09T00:34:15.586860962Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:34:15.787766 systemd[1]: Started cri-containerd-71da1d8ad3f6dd6b8ba74e15f2c7035dbff8ec8d31bdc766db5292d1df570602.scope - libcontainer container 71da1d8ad3f6dd6b8ba74e15f2c7035dbff8ec8d31bdc766db5292d1df570602. May 9 00:34:15.865500 containerd[1462]: time="2025-05-09T00:34:15.851651587Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 00:34:15.865500 containerd[1462]: time="2025-05-09T00:34:15.851749052Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 00:34:15.865500 containerd[1462]: time="2025-05-09T00:34:15.851809608Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:34:15.865500 containerd[1462]: time="2025-05-09T00:34:15.851993589Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:34:15.968635 systemd[1]: Started cri-containerd-3b3e29d0b77de97810c5faff41ac49051d9a0e585eaff31c5e1bc236bf1055eb.scope - libcontainer container 3b3e29d0b77de97810c5faff41ac49051d9a0e585eaff31c5e1bc236bf1055eb. May 9 00:34:16.030889 kubelet[2176]: E0509 00:34:16.030706 2176 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.84:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.84:6443: connect: connection refused" interval="3.2s" May 9 00:34:16.052124 containerd[1462]: time="2025-05-09T00:34:16.052055478Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5386fe11ed933ab82453de11903c7f47,Namespace:kube-system,Attempt:0,} returns sandbox id \"71da1d8ad3f6dd6b8ba74e15f2c7035dbff8ec8d31bdc766db5292d1df570602\"" May 9 00:34:16.056569 kubelet[2176]: E0509 00:34:16.056024 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:34:16.062579 containerd[1462]: time="2025-05-09T00:34:16.062528714Z" level=info msg="CreateContainer within sandbox \"71da1d8ad3f6dd6b8ba74e15f2c7035dbff8ec8d31bdc766db5292d1df570602\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 9 00:34:16.095647 systemd[1]: Started cri-containerd-e3f1424f895ecf0f5f98051c051ef9aaa1ea27c07d2a6acc5029d7a3ecaa5928.scope - libcontainer container e3f1424f895ecf0f5f98051c051ef9aaa1ea27c07d2a6acc5029d7a3ecaa5928. 
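Note the "Failed to ensure lease exists, will retry" interval climbing through the log: 200ms, 400ms, 800ms, 1.6s, and now 3.2s — the kubelet doubles its retry period on each consecutive failure while the API server stays unreachable. A minimal sketch of that doubling pattern (the 200ms starting value is from the log; the 7s ceiling is an assumption, not something this log demonstrates):

```python
def lease_retry_intervals(base_s: float = 0.2, cap_s: float = 7.0, n: int = 7):
    """Double the retry interval on each consecutive failure, up to a cap.

    base_s=0.2 matches the first interval="200ms" in the log; cap_s is an
    assumed ceiling for illustration only.
    """
    interval = base_s
    for _ in range(n):
        yield interval
        interval = min(interval * 2, cap_s)

print([f"{i:g}s" for i in lease_retry_intervals()])
# -> ['0.2s', '0.4s', '0.8s', '1.6s', '3.2s', '6.4s', '7s']
```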
May 9 00:34:16.121536 kubelet[2176]: W0509 00:34:16.120055 2176 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.84:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.84:6443: connect: connection refused May 9 00:34:16.121536 kubelet[2176]: E0509 00:34:16.120119 2176 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.84:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.84:6443: connect: connection refused" logger="UnhandledError" May 9 00:34:16.158158 containerd[1462]: time="2025-05-09T00:34:16.156862922Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:2980a8ab51edc665be10a02e33130e15,Namespace:kube-system,Attempt:0,} returns sandbox id \"3b3e29d0b77de97810c5faff41ac49051d9a0e585eaff31c5e1bc236bf1055eb\"" May 9 00:34:16.158331 kubelet[2176]: E0509 00:34:16.157951 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:34:16.162016 containerd[1462]: time="2025-05-09T00:34:16.161948522Z" level=info msg="CreateContainer within sandbox \"3b3e29d0b77de97810c5faff41ac49051d9a0e585eaff31c5e1bc236bf1055eb\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 9 00:34:16.243916 containerd[1462]: time="2025-05-09T00:34:16.243760309Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:07d701b33ef38742a282533f1dcda708,Namespace:kube-system,Attempt:0,} returns sandbox id \"e3f1424f895ecf0f5f98051c051ef9aaa1ea27c07d2a6acc5029d7a3ecaa5928\"" May 9 00:34:16.253150 kubelet[2176]: E0509 00:34:16.252855 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:34:16.255227 containerd[1462]: time="2025-05-09T00:34:16.254974156Z" level=info msg="CreateContainer within sandbox \"e3f1424f895ecf0f5f98051c051ef9aaa1ea27c07d2a6acc5029d7a3ecaa5928\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 9 00:34:16.484558 kubelet[2176]: W0509 00:34:16.484316 2176 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.84:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.84:6443: connect: connection refused May 9 00:34:16.484558 kubelet[2176]: E0509 00:34:16.484382 2176 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.84:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.84:6443: connect: connection refused" logger="UnhandledError" May 9 00:34:16.631603 kubelet[2176]: W0509 00:34:16.624176 2176 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.84:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.84:6443: connect: connection refused May 9 00:34:16.631603 kubelet[2176]: E0509 00:34:16.628071 2176 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list 
*v1.CSIDriver: Get \"https://10.0.0.84:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.84:6443: connect: connection refused" logger="UnhandledError" May 9 00:34:16.684904 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2188838103.mount: Deactivated successfully. May 9 00:34:16.743721 kubelet[2176]: I0509 00:34:16.743538 2176 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 9 00:34:16.744026 kubelet[2176]: E0509 00:34:16.743972 2176 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.84:6443/api/v1/nodes\": dial tcp 10.0.0.84:6443: connect: connection refused" node="localhost" May 9 00:34:16.864575 containerd[1462]: time="2025-05-09T00:34:16.863914630Z" level=info msg="CreateContainer within sandbox \"71da1d8ad3f6dd6b8ba74e15f2c7035dbff8ec8d31bdc766db5292d1df570602\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"50946cc12fcd0ea948db0b992cfa574ecf2f5e249bd10f99ee1d4def5018cdb4\"" May 9 00:34:16.865152 containerd[1462]: time="2025-05-09T00:34:16.865067496Z" level=info msg="StartContainer for \"50946cc12fcd0ea948db0b992cfa574ecf2f5e249bd10f99ee1d4def5018cdb4\"" May 9 00:34:16.966978 systemd[1]: Started cri-containerd-50946cc12fcd0ea948db0b992cfa574ecf2f5e249bd10f99ee1d4def5018cdb4.scope - libcontainer container 50946cc12fcd0ea948db0b992cfa574ecf2f5e249bd10f99ee1d4def5018cdb4. May 9 00:34:17.041887 kubelet[2176]: E0509 00:34:17.041701 2176 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.84:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.84:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183db4a568370657 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-09 00:34:13.000660567 +0000 UTC m=+0.980401000,LastTimestamp:2025-05-09 00:34:13.000660567 +0000 UTC m=+0.980401000,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 9 00:34:17.204415 containerd[1462]: time="2025-05-09T00:34:17.203980344Z" level=info msg="StartContainer for \"50946cc12fcd0ea948db0b992cfa574ecf2f5e249bd10f99ee1d4def5018cdb4\" returns successfully" May 9 00:34:17.230674 kubelet[2176]: E0509 00:34:17.226749 2176 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 9 00:34:17.230674 kubelet[2176]: E0509 00:34:17.226986 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:34:17.321359 containerd[1462]: time="2025-05-09T00:34:17.320802845Z" level=info msg="CreateContainer within sandbox \"3b3e29d0b77de97810c5faff41ac49051d9a0e585eaff31c5e1bc236bf1055eb\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"32c3847206cedf185607340a0c27b10e35c4b86d84144d9762907ac19a6d64ea\"" May 9 00:34:17.324312 containerd[1462]: time="2025-05-09T00:34:17.321805433Z" level=info msg="StartContainer for \"32c3847206cedf185607340a0c27b10e35c4b86d84144d9762907ac19a6d64ea\"" May 9 00:34:17.353340 kubelet[2176]: W0509 
00:34:17.353261 2176 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.84:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.84:6443: connect: connection refused May 9 00:34:17.353340 kubelet[2176]: E0509 00:34:17.353337 2176 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.84:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.84:6443: connect: connection refused" logger="UnhandledError" May 9 00:34:17.357402 containerd[1462]: time="2025-05-09T00:34:17.357328173Z" level=info msg="CreateContainer within sandbox \"e3f1424f895ecf0f5f98051c051ef9aaa1ea27c07d2a6acc5029d7a3ecaa5928\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"9fc99a832b7088213005a243d2c4af9890c717cd571dade16f73f8e49bce5c64\"" May 9 00:34:17.359608 containerd[1462]: time="2025-05-09T00:34:17.359188595Z" level=info msg="StartContainer for \"9fc99a832b7088213005a243d2c4af9890c717cd571dade16f73f8e49bce5c64\"" May 9 00:34:17.457098 systemd[1]: Started cri-containerd-9fc99a832b7088213005a243d2c4af9890c717cd571dade16f73f8e49bce5c64.scope - libcontainer container 9fc99a832b7088213005a243d2c4af9890c717cd571dade16f73f8e49bce5c64. May 9 00:34:17.466951 systemd[1]: Started cri-containerd-32c3847206cedf185607340a0c27b10e35c4b86d84144d9762907ac19a6d64ea.scope - libcontainer container 32c3847206cedf185607340a0c27b10e35c4b86d84144d9762907ac19a6d64ea. May 9 00:34:17.566843 containerd[1462]: time="2025-05-09T00:34:17.566777976Z" level=info msg="StartContainer for \"32c3847206cedf185607340a0c27b10e35c4b86d84144d9762907ac19a6d64ea\" returns successfully" May 9 00:34:17.584345 containerd[1462]: time="2025-05-09T00:34:17.583973880Z" level=info msg="StartContainer for \"9fc99a832b7088213005a243d2c4af9890c717cd571dade16f73f8e49bce5c64\" returns successfully" May 9 00:34:18.231397 kubelet[2176]: E0509 00:34:18.231339 2176 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 9 00:34:18.231982 kubelet[2176]: E0509 00:34:18.231537 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:34:18.235099 kubelet[2176]: E0509 00:34:18.234293 2176 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 9 00:34:18.235099 kubelet[2176]: E0509 00:34:18.234482 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:34:18.235741 kubelet[2176]: E0509 00:34:18.235702 2176 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 9 00:34:18.235908 kubelet[2176]: E0509 00:34:18.235879 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:34:19.238251 kubelet[2176]: E0509 00:34:19.237206 2176 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" 
not found" node="localhost" May 9 00:34:19.238251 kubelet[2176]: E0509 00:34:19.237357 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:34:19.240796 kubelet[2176]: E0509 00:34:19.240754 2176 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 9 00:34:19.241214 kubelet[2176]: E0509 00:34:19.241172 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:34:19.952249 kubelet[2176]: I0509 00:34:19.949539 2176 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 9 00:34:20.244141 kubelet[2176]: E0509 00:34:20.243985 2176 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 9 00:34:20.249084 kubelet[2176]: E0509 00:34:20.248298 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:34:20.249084 kubelet[2176]: E0509 00:34:20.248823 2176 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 9 00:34:20.249084 kubelet[2176]: E0509 00:34:20.248943 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:34:21.623300 kubelet[2176]: E0509 00:34:21.623233 2176 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" May 9 00:34:21.672567 kubelet[2176]: I0509 00:34:21.670347 2176 kubelet_node_status.go:79] "Successfully registered node" node="localhost" May 9 00:34:21.672567 kubelet[2176]: E0509 00:34:21.670430 2176 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" May 9 00:34:21.675629 kubelet[2176]: E0509 00:34:21.675587 2176 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 9 00:34:21.776365 kubelet[2176]: E0509 00:34:21.776229 2176 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 9 00:34:21.877036 kubelet[2176]: E0509 00:34:21.876691 2176 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 9 00:34:21.977345 kubelet[2176]: E0509 00:34:21.977129 2176 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 9 00:34:22.079244 kubelet[2176]: E0509 00:34:22.077345 2176 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 9 00:34:22.178764 kubelet[2176]: E0509 00:34:22.178551 2176 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 9 00:34:22.279107 kubelet[2176]: E0509 00:34:22.278939 2176 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 9 00:34:22.382238 kubelet[2176]: E0509 
00:34:22.379593 2176 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 9 00:34:22.520070 kubelet[2176]: I0509 00:34:22.519439 2176 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 9 00:34:22.572313 kubelet[2176]: I0509 00:34:22.570721 2176 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 9 00:34:22.579275 kubelet[2176]: I0509 00:34:22.578187 2176 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 9 00:34:22.598764 kubelet[2176]: I0509 00:34:22.597444 2176 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 9 00:34:22.630597 kubelet[2176]: E0509 00:34:22.630518 2176 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" May 9 00:34:22.631164 kubelet[2176]: E0509 00:34:22.630771 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:34:23.146152 kubelet[2176]: I0509 00:34:23.144758 2176 apiserver.go:52] "Watching apiserver" May 9 00:34:23.149937 kubelet[2176]: E0509 00:34:23.148723 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:34:23.149937 kubelet[2176]: E0509 00:34:23.149045 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:34:23.223700 kubelet[2176]: I0509 00:34:23.223511 2176 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 9 00:34:23.256148 kubelet[2176]: E0509 00:34:23.253706 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:34:23.256148 kubelet[2176]: E0509 00:34:23.254063 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:34:23.337575 kubelet[2176]: I0509 00:34:23.337441 2176 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.337404982 podStartE2EDuration="1.337404982s" podCreationTimestamp="2025-05-09 00:34:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-09 00:34:23.298012668 +0000 UTC m=+11.277753091" watchObservedRunningTime="2025-05-09 00:34:23.337404982 +0000 UTC m=+11.317145405" May 9 00:34:23.379484 kubelet[2176]: I0509 00:34:23.379323 2176 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.379294897 podStartE2EDuration="1.379294897s" podCreationTimestamp="2025-05-09 00:34:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-09 00:34:23.351679346 +0000 UTC m=+11.331419799" watchObservedRunningTime="2025-05-09 
00:34:23.379294897 +0000 UTC m=+11.359035320" May 9 00:34:25.136218 kubelet[2176]: E0509 00:34:25.135872 2176 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:34:26.378114 systemd[1]: Reloading requested from client PID 2460 ('systemctl') (unit session-7.scope)... May 9 00:34:26.378138 systemd[1]: Reloading... May 9 00:34:26.604240 zram_generator::config[2501]: No configuration found. May 9 00:34:26.834426 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 9 00:34:26.991846 systemd[1]: Reloading finished in 613 ms. May 9 00:34:27.089274 kubelet[2176]: I0509 00:34:27.088977 2176 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 9 00:34:27.090737 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 9 00:34:27.104597 systemd[1]: kubelet.service: Deactivated successfully. May 9 00:34:27.104957 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 9 00:34:27.105023 systemd[1]: kubelet.service: Consumed 2.068s CPU time, 128.6M memory peak, 0B memory swap peak. May 9 00:34:27.129419 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 9 00:34:27.471600 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 9 00:34:27.495077 (kubelet)[2544]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 9 00:34:27.657589 kubelet[2544]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 9 00:34:27.657589 kubelet[2544]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 9 00:34:27.657589 kubelet[2544]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 9 00:34:27.658603 kubelet[2544]: I0509 00:34:27.658509 2544 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 9 00:34:27.669850 kubelet[2544]: I0509 00:34:27.669283 2544 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" May 9 00:34:27.669850 kubelet[2544]: I0509 00:34:27.669332 2544 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 9 00:34:27.669850 kubelet[2544]: I0509 00:34:27.669769 2544 server.go:954] "Client rotation is on, will bootstrap in background" May 9 00:34:27.676119 kubelet[2544]: I0509 00:34:27.675123 2544 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
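Every kubelet line above follows the klog header layout, e.g. "I0509 00:34:27.658509 2544 server.go:215] ...": a severity letter (I/W/E/F), MMDD date, wall-clock time, PID, then source file and line before the message. A small parser for that header, as a sketch with the field layout inferred from the log itself:

```python
import re

# klog header: Lmmdd hh:mm:ss.uuuuuu pid file:line] msg
KLOG = re.compile(
    r"^(?P<sev>[IWEF])(?P<month>\d{2})(?P<day>\d{2})\s+"
    r"(?P<time>\d{2}:\d{2}:\d{2}\.\d{6})\s+"
    r"(?P<pid>\d+)\s+(?P<file>[\w.]+):(?P<line>\d+)\]\s(?P<msg>.*)$"
)

sample = 'I0509 00:34:27.739677 2544 server.go:1287] "Started kubelet"'
m = KLOG.match(sample)
assert m is not None
print(m.group("sev"), m.group("file"), m.group("line"), m.group("msg"))
# -> I server.go 1287 "Started kubelet"
```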
May 9 00:34:27.679977 kubelet[2544]: I0509 00:34:27.679921 2544 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 9 00:34:27.721560 kubelet[2544]: E0509 00:34:27.718250 2544 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 9 00:34:27.721560 kubelet[2544]: I0509 00:34:27.718298 2544 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 9 00:34:27.725664 kubelet[2544]: I0509 00:34:27.724775 2544 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 9 00:34:27.725664 kubelet[2544]: I0509 00:34:27.725053 2544 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 9 00:34:27.725664 kubelet[2544]: I0509 00:34:27.725094 2544 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 9 00:34:27.725664 kubelet[2544]: I0509 00:34:27.725356 2544 topology_manager.go:138] "Creating topology manager with none policy" May 9 00:34:27.726188 kubelet[2544]: I0509 00:34:27.725371 2544 container_manager_linux.go:304] "Creating device plugin manager" May 9 00:34:27.726188 kubelet[2544]: I0509 00:34:27.725432 2544 state_mem.go:36] "Initialized new in-memory state store" May 9 00:34:27.729008 kubelet[2544]: I0509 00:34:27.728929 2544 kubelet.go:446] "Attempting to sync node with API server" May 9 00:34:27.729008 kubelet[2544]: I0509 00:34:27.728970 2544 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 9 00:34:27.729008 kubelet[2544]: I0509 00:34:27.729007 2544 kubelet.go:352] "Adding apiserver pod source" May 9 00:34:27.729232 kubelet[2544]: I0509 00:34:27.729024 2544 apiserver.go:42] "Waiting for node sync before watching apiserver 
pods" May 9 00:34:27.731880 kubelet[2544]: I0509 00:34:27.731277 2544 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 9 00:34:27.740267 kubelet[2544]: I0509 00:34:27.738595 2544 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 9 00:34:27.740267 kubelet[2544]: I0509 00:34:27.739622 2544 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 9 00:34:27.740267 kubelet[2544]: I0509 00:34:27.739677 2544 server.go:1287] "Started kubelet" May 9 00:34:27.763918 kubelet[2544]: I0509 00:34:27.763876 2544 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 9 00:34:27.766969 kubelet[2544]: I0509 00:34:27.766156 2544 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 9 00:34:27.767720 kubelet[2544]: I0509 00:34:27.767689 2544 server.go:490] "Adding debug handlers to kubelet server" May 9 00:34:27.773899 kubelet[2544]: E0509 00:34:27.773857 2544 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 9 00:34:27.780670 kubelet[2544]: I0509 00:34:27.774025 2544 volume_manager.go:297] "Starting Kubelet Volume Manager" May 9 00:34:27.780670 kubelet[2544]: I0509 00:34:27.774062 2544 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 9 00:34:27.780670 kubelet[2544]: I0509 00:34:27.775492 2544 reconciler.go:26] "Reconciler: start to sync state" May 9 00:34:27.780670 kubelet[2544]: I0509 00:34:27.776473 2544 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 9 00:34:27.780670 kubelet[2544]: I0509 00:34:27.779734 2544 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 9 00:34:27.780670 kubelet[2544]: I0509 00:34:27.780082 2544 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 9 00:34:27.792752 kubelet[2544]: I0509 00:34:27.785675 2544 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 9 00:34:27.792752 kubelet[2544]: I0509 00:34:27.789043 2544 factory.go:221] Registration of the containerd container factory successfully May 9 00:34:27.792752 kubelet[2544]: I0509 00:34:27.789084 2544 factory.go:221] Registration of the systemd container factory successfully May 9 00:34:27.807118 kubelet[2544]: I0509 00:34:27.807056 2544 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 9 00:34:27.812976 kubelet[2544]: I0509 00:34:27.812923 2544 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 9 00:34:27.812976 kubelet[2544]: I0509 00:34:27.812989 2544 status_manager.go:227] "Starting to sync pod status with apiserver" May 9 00:34:27.813221 kubelet[2544]: I0509 00:34:27.813028 2544 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
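[Editor's note] The "RuntimeConfig ... Unimplemented" fallback logged just above means this containerd (v1.7.21, per the runtime-initialized entry) cannot report its cgroup driver over CRI, so kubelet and runtime must be kept in agreement by hand. The kubelet side shows CgroupDriver "systemd" with cgroup v2; a sketch of the matching containerd setting, assuming the default config location:

  # /etc/containerd/config.toml  (assumed default path)
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
    runtime_type = "io.containerd.runc.v2"
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true

If the two sides disagree, pods start under mismatched cgroup hierarchies and are later killed, so this pairing is worth verifying whenever the CRI feature-gate fallback appears.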
May 9 00:34:27.813221 kubelet[2544]: I0509 00:34:27.813040 2544 kubelet.go:2388] "Starting kubelet main sync loop" May 9 00:34:27.813221 kubelet[2544]: E0509 00:34:27.813121 2544 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 9 00:34:27.913815 kubelet[2544]: E0509 00:34:27.913747 2544 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 9 00:34:27.945695 kubelet[2544]: I0509 00:34:27.943486 2544 cpu_manager.go:221] "Starting CPU manager" policy="none" May 9 00:34:27.945695 kubelet[2544]: I0509 00:34:27.943517 2544 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 9 00:34:27.945695 kubelet[2544]: I0509 00:34:27.943548 2544 state_mem.go:36] "Initialized new in-memory state store" May 9 00:34:27.945695 kubelet[2544]: I0509 00:34:27.943766 2544 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 9 00:34:27.945695 kubelet[2544]: I0509 00:34:27.943780 2544 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 9 00:34:27.945695 kubelet[2544]: I0509 00:34:27.943805 2544 policy_none.go:49] "None policy: Start" May 9 00:34:27.945695 kubelet[2544]: I0509 00:34:27.943816 2544 memory_manager.go:186] "Starting memorymanager" policy="None" May 9 00:34:27.945695 kubelet[2544]: I0509 00:34:27.943828 2544 state_mem.go:35] "Initializing new in-memory state store" May 9 00:34:27.945695 kubelet[2544]: I0509 00:34:27.943950 2544 state_mem.go:75] "Updated machine memory state" May 9 00:34:27.957694 kubelet[2544]: I0509 00:34:27.952364 2544 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 9 00:34:27.957694 kubelet[2544]: I0509 00:34:27.955527 2544 eviction_manager.go:189] "Eviction manager: starting control loop" May 9 00:34:27.965106 sudo[2579]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 9 00:34:27.966291 sudo[2579]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) May 9 00:34:27.966921 kubelet[2544]: I0509 00:34:27.966554 2544 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 9 00:34:27.970567 kubelet[2544]: I0509 00:34:27.967905 2544 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 9 00:34:27.971149 kubelet[2544]: E0509 00:34:27.971101 2544 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" May 9 00:34:28.086310 kubelet[2544]: I0509 00:34:28.086252 2544 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 9 00:34:28.115700 kubelet[2544]: I0509 00:34:28.115095 2544 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 9 00:34:28.115904 kubelet[2544]: I0509 00:34:28.115803 2544 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 9 00:34:28.120357 kubelet[2544]: I0509 00:34:28.117096 2544 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 9 00:34:28.146235 kubelet[2544]: E0509 00:34:28.146162 2544 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" May 9 00:34:28.146397 kubelet[2544]: E0509 00:34:28.146330 2544 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 9 00:34:28.146605 kubelet[2544]: I0509 00:34:28.146580 2544 kubelet_node_status.go:125] "Node was previously registered" node="localhost" May 9 00:34:28.146762 kubelet[2544]: I0509 00:34:28.146668 2544 kubelet_node_status.go:79] "Successfully registered node" node="localhost" May 9 00:34:28.149776 kubelet[2544]: E0509 00:34:28.149676 2544 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" May 9 00:34:28.177620 kubelet[2544]: I0509 00:34:28.177536 2544 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 9 00:34:28.177620 kubelet[2544]: I0509 00:34:28.177606 2544 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 9 00:34:28.177874 kubelet[2544]: I0509 00:34:28.177658 2544 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/07d701b33ef38742a282533f1dcda708-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"07d701b33ef38742a282533f1dcda708\") " pod="kube-system/kube-apiserver-localhost" May 9 00:34:28.177874 kubelet[2544]: I0509 00:34:28.177690 2544 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/07d701b33ef38742a282533f1dcda708-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"07d701b33ef38742a282533f1dcda708\") " pod="kube-system/kube-apiserver-localhost" May 9 00:34:28.177874 kubelet[2544]: I0509 00:34:28.177765 2544 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/07d701b33ef38742a282533f1dcda708-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: 
\"07d701b33ef38742a282533f1dcda708\") " pod="kube-system/kube-apiserver-localhost" May 9 00:34:28.177874 kubelet[2544]: I0509 00:34:28.177832 2544 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 9 00:34:28.178034 kubelet[2544]: I0509 00:34:28.177878 2544 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 9 00:34:28.178034 kubelet[2544]: I0509 00:34:28.177916 2544 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 9 00:34:28.178034 kubelet[2544]: I0509 00:34:28.177947 2544 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2980a8ab51edc665be10a02e33130e15-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"2980a8ab51edc665be10a02e33130e15\") " pod="kube-system/kube-scheduler-localhost" May 9 00:34:28.448994 kubelet[2544]: E0509 00:34:28.448340 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:34:28.448994 kubelet[2544]: E0509 00:34:28.448797 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:34:28.450054 kubelet[2544]: E0509 00:34:28.450028 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:34:28.730573 kubelet[2544]: I0509 00:34:28.730361 2544 apiserver.go:52] "Watching apiserver" May 9 00:34:28.776460 kubelet[2544]: I0509 00:34:28.775856 2544 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 9 00:34:28.836982 kubelet[2544]: I0509 00:34:28.836927 2544 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 9 00:34:28.837177 kubelet[2544]: E0509 00:34:28.837027 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:34:28.837899 kubelet[2544]: E0509 00:34:28.837870 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:34:28.857230 kubelet[2544]: E0509 00:34:28.854918 2544 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 9 00:34:28.857230 kubelet[2544]: E0509 00:34:28.855176 2544 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:34:28.968571 sudo[2579]: pam_unix(sudo:session): session closed for user root May 9 00:34:29.843727 kubelet[2544]: E0509 00:34:29.843390 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:34:29.846281 kubelet[2544]: E0509 00:34:29.845418 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:34:30.251802 kubelet[2544]: I0509 00:34:30.251631 2544 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 9 00:34:30.252460 containerd[1462]: time="2025-05-09T00:34:30.252340626Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 9 00:34:30.252975 kubelet[2544]: I0509 00:34:30.252546 2544 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 9 00:34:30.457608 sudo[1640]: pam_unix(sudo:session): session closed for user root May 9 00:34:30.461968 sshd[1635]: pam_unix(sshd:session): session closed for user core May 9 00:34:30.468649 systemd[1]: sshd@6-10.0.0.84:22-10.0.0.1:34668.service: Deactivated successfully. May 9 00:34:30.472060 systemd[1]: session-7.scope: Deactivated successfully. May 9 00:34:30.472571 systemd[1]: session-7.scope: Consumed 8.414s CPU time, 158.2M memory peak, 0B memory swap peak. May 9 00:34:30.473589 systemd-logind[1443]: Session 7 logged out. Waiting for processes to exit. May 9 00:34:30.475063 systemd-logind[1443]: Removed session 7. 
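[Editor's note] The recurring dns.go:153 errors are the kubelet noticing more nameserver entries in the node's /etc/resolv.conf than the libc resolver will use: glibc reads at most three (MAXNS), and the kubelet trims the resolv.conf it propagates to pods to match. The "applied nameserver line" in these errors corresponds to a file shaped like the sketch below — only the three surviving servers are known from the log; the dropped entries are not recoverable from it:

  # /etc/resolv.conf as the kubelet applies it
  nameserver 1.1.1.1
  nameserver 1.0.0.1
  nameserver 8.8.8.8
  # any nameserver lines beyond three are omitted, which is what the warning reports

Keeping the node's resolver list at three entries or fewer silences the warning.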
May 9 00:34:30.846556 kubelet[2544]: E0509 00:34:30.846505 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:34:31.256938 kubelet[2544]: E0509 00:34:31.256721 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:34:31.702678 kubelet[2544]: I0509 00:34:31.702210 2544 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/27b090f7-bd71-4785-9b66-193cedcffa5c-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-8jcpc\" (UID: \"27b090f7-bd71-4785-9b66-193cedcffa5c\") " pod="kube-system/cilium-operator-6c4d7847fc-8jcpc" May 9 00:34:31.702678 kubelet[2544]: I0509 00:34:31.702346 2544 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twdbr\" (UniqueName: \"kubernetes.io/projected/27b090f7-bd71-4785-9b66-193cedcffa5c-kube-api-access-twdbr\") pod \"cilium-operator-6c4d7847fc-8jcpc\" (UID: \"27b090f7-bd71-4785-9b66-193cedcffa5c\") " pod="kube-system/cilium-operator-6c4d7847fc-8jcpc" May 9 00:34:31.702678 kubelet[2544]: I0509 00:34:31.702470 2544 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/25c3ec62-8e14-4277-9382-8d8c887e8958-kube-proxy\") pod \"kube-proxy-8fjng\" (UID: \"25c3ec62-8e14-4277-9382-8d8c887e8958\") " pod="kube-system/kube-proxy-8fjng" May 9 00:34:31.702678 kubelet[2544]: I0509 00:34:31.702503 2544 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/25c3ec62-8e14-4277-9382-8d8c887e8958-xtables-lock\") pod \"kube-proxy-8fjng\" (UID: \"25c3ec62-8e14-4277-9382-8d8c887e8958\") " pod="kube-system/kube-proxy-8fjng" May 9 00:34:31.702678 kubelet[2544]: I0509 00:34:31.702529 2544 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhpkx\" (UniqueName: \"kubernetes.io/projected/25c3ec62-8e14-4277-9382-8d8c887e8958-kube-api-access-hhpkx\") pod \"kube-proxy-8fjng\" (UID: \"25c3ec62-8e14-4277-9382-8d8c887e8958\") " pod="kube-system/kube-proxy-8fjng" May 9 00:34:31.702953 kubelet[2544]: I0509 00:34:31.702560 2544 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/25c3ec62-8e14-4277-9382-8d8c887e8958-lib-modules\") pod \"kube-proxy-8fjng\" (UID: \"25c3ec62-8e14-4277-9382-8d8c887e8958\") " pod="kube-system/kube-proxy-8fjng" May 9 00:34:31.705773 systemd[1]: Created slice kubepods-besteffort-pod25c3ec62_8e14_4277_9382_8d8c887e8958.slice - libcontainer container kubepods-besteffort-pod25c3ec62_8e14_4277_9382_8d8c887e8958.slice. May 9 00:34:31.735454 systemd[1]: Created slice kubepods-besteffort-pod27b090f7_bd71_4785_9b66_193cedcffa5c.slice - libcontainer container kubepods-besteffort-pod27b090f7_bd71_4785_9b66_193cedcffa5c.slice. 
May 9 00:34:31.802909 kubelet[2544]: I0509 00:34:31.802842 2544 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c6a6e843-5cd8-4099-989f-6054d8e42957-cilium-cgroup\") pod \"cilium-fwlhj\" (UID: \"c6a6e843-5cd8-4099-989f-6054d8e42957\") " pod="kube-system/cilium-fwlhj" May 9 00:34:31.802909 kubelet[2544]: I0509 00:34:31.802895 2544 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qz9d\" (UniqueName: \"kubernetes.io/projected/c6a6e843-5cd8-4099-989f-6054d8e42957-kube-api-access-8qz9d\") pod \"cilium-fwlhj\" (UID: \"c6a6e843-5cd8-4099-989f-6054d8e42957\") " pod="kube-system/cilium-fwlhj" May 9 00:34:31.802909 kubelet[2544]: I0509 00:34:31.802921 2544 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c6a6e843-5cd8-4099-989f-6054d8e42957-lib-modules\") pod \"cilium-fwlhj\" (UID: \"c6a6e843-5cd8-4099-989f-6054d8e42957\") " pod="kube-system/cilium-fwlhj" May 9 00:34:31.803211 kubelet[2544]: I0509 00:34:31.802945 2544 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c6a6e843-5cd8-4099-989f-6054d8e42957-host-proc-sys-net\") pod \"cilium-fwlhj\" (UID: \"c6a6e843-5cd8-4099-989f-6054d8e42957\") " pod="kube-system/cilium-fwlhj" May 9 00:34:31.803211 kubelet[2544]: I0509 00:34:31.802963 2544 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c6a6e843-5cd8-4099-989f-6054d8e42957-hubble-tls\") pod \"cilium-fwlhj\" (UID: \"c6a6e843-5cd8-4099-989f-6054d8e42957\") " pod="kube-system/cilium-fwlhj" May 9 00:34:31.803211 kubelet[2544]: I0509 00:34:31.802983 2544 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c6a6e843-5cd8-4099-989f-6054d8e42957-cilium-run\") pod \"cilium-fwlhj\" (UID: \"c6a6e843-5cd8-4099-989f-6054d8e42957\") " pod="kube-system/cilium-fwlhj" May 9 00:34:31.803211 kubelet[2544]: I0509 00:34:31.802997 2544 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c6a6e843-5cd8-4099-989f-6054d8e42957-host-proc-sys-kernel\") pod \"cilium-fwlhj\" (UID: \"c6a6e843-5cd8-4099-989f-6054d8e42957\") " pod="kube-system/cilium-fwlhj" May 9 00:34:31.803211 kubelet[2544]: I0509 00:34:31.803022 2544 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c6a6e843-5cd8-4099-989f-6054d8e42957-etc-cni-netd\") pod \"cilium-fwlhj\" (UID: \"c6a6e843-5cd8-4099-989f-6054d8e42957\") " pod="kube-system/cilium-fwlhj" May 9 00:34:31.803211 kubelet[2544]: I0509 00:34:31.803037 2544 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c6a6e843-5cd8-4099-989f-6054d8e42957-cni-path\") pod \"cilium-fwlhj\" (UID: \"c6a6e843-5cd8-4099-989f-6054d8e42957\") " pod="kube-system/cilium-fwlhj" May 9 00:34:31.803471 kubelet[2544]: I0509 00:34:31.803052 2544 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/c6a6e843-5cd8-4099-989f-6054d8e42957-cilium-config-path\") pod \"cilium-fwlhj\" (UID: \"c6a6e843-5cd8-4099-989f-6054d8e42957\") " pod="kube-system/cilium-fwlhj" May 9 00:34:31.803471 kubelet[2544]: I0509 00:34:31.803069 2544 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c6a6e843-5cd8-4099-989f-6054d8e42957-bpf-maps\") pod \"cilium-fwlhj\" (UID: \"c6a6e843-5cd8-4099-989f-6054d8e42957\") " pod="kube-system/cilium-fwlhj" May 9 00:34:31.803471 kubelet[2544]: I0509 00:34:31.803085 2544 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c6a6e843-5cd8-4099-989f-6054d8e42957-hostproc\") pod \"cilium-fwlhj\" (UID: \"c6a6e843-5cd8-4099-989f-6054d8e42957\") " pod="kube-system/cilium-fwlhj" May 9 00:34:31.803471 kubelet[2544]: I0509 00:34:31.803100 2544 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c6a6e843-5cd8-4099-989f-6054d8e42957-xtables-lock\") pod \"cilium-fwlhj\" (UID: \"c6a6e843-5cd8-4099-989f-6054d8e42957\") " pod="kube-system/cilium-fwlhj" May 9 00:34:31.803471 kubelet[2544]: I0509 00:34:31.803116 2544 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c6a6e843-5cd8-4099-989f-6054d8e42957-clustermesh-secrets\") pod \"cilium-fwlhj\" (UID: \"c6a6e843-5cd8-4099-989f-6054d8e42957\") " pod="kube-system/cilium-fwlhj" May 9 00:34:31.808922 systemd[1]: Created slice kubepods-burstable-podc6a6e843_5cd8_4099_989f_6054d8e42957.slice - libcontainer container kubepods-burstable-podc6a6e843_5cd8_4099_989f_6054d8e42957.slice. 
May 9 00:34:31.848081 kubelet[2544]: E0509 00:34:31.848043 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:34:31.848646 kubelet[2544]: E0509 00:34:31.848321 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:34:32.034187 kubelet[2544]: E0509 00:34:32.034143 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:34:32.035000 containerd[1462]: time="2025-05-09T00:34:32.034849199Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8fjng,Uid:25c3ec62-8e14-4277-9382-8d8c887e8958,Namespace:kube-system,Attempt:0,}" May 9 00:34:32.043181 kubelet[2544]: E0509 00:34:32.043108 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:34:32.043500 containerd[1462]: time="2025-05-09T00:34:32.043452979Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-8jcpc,Uid:27b090f7-bd71-4785-9b66-193cedcffa5c,Namespace:kube-system,Attempt:0,}" May 9 00:34:32.111784 kubelet[2544]: E0509 00:34:32.111694 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:34:32.112309 containerd[1462]: time="2025-05-09T00:34:32.112254348Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fwlhj,Uid:c6a6e843-5cd8-4099-989f-6054d8e42957,Namespace:kube-system,Attempt:0,}" May 9 00:34:32.968058 containerd[1462]: time="2025-05-09T00:34:32.967905109Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 00:34:32.968058 containerd[1462]: time="2025-05-09T00:34:32.967978567Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 00:34:32.968058 containerd[1462]: time="2025-05-09T00:34:32.967999157Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:34:32.969124 containerd[1462]: time="2025-05-09T00:34:32.968121207Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:34:32.996512 systemd[1]: Started cri-containerd-5c448e471921933d0249a5c8cef2b3c8a01c5e3af6dc9ac90de2a014a073cc8f.scope - libcontainer container 5c448e471921933d0249a5c8cef2b3c8a01c5e3af6dc9ac90de2a014a073cc8f. 
May 9 00:34:33.027538 containerd[1462]: time="2025-05-09T00:34:33.026895264Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fwlhj,Uid:c6a6e843-5cd8-4099-989f-6054d8e42957,Namespace:kube-system,Attempt:0,} returns sandbox id \"5c448e471921933d0249a5c8cef2b3c8a01c5e3af6dc9ac90de2a014a073cc8f\"" May 9 00:34:33.028089 kubelet[2544]: E0509 00:34:33.028038 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:34:33.029935 containerd[1462]: time="2025-05-09T00:34:33.029765887Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 9 00:34:33.184355 containerd[1462]: time="2025-05-09T00:34:33.184240479Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 00:34:33.185244 containerd[1462]: time="2025-05-09T00:34:33.184523253Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 00:34:33.185244 containerd[1462]: time="2025-05-09T00:34:33.184572535Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 00:34:33.185244 containerd[1462]: time="2025-05-09T00:34:33.184591661Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:34:33.185785 containerd[1462]: time="2025-05-09T00:34:33.185718166Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:34:33.185981 containerd[1462]: time="2025-05-09T00:34:33.185914086Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 00:34:33.186093 containerd[1462]: time="2025-05-09T00:34:33.186070480Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:34:33.186438 containerd[1462]: time="2025-05-09T00:34:33.186366709Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:34:33.210440 systemd[1]: Started cri-containerd-f1870160cc8b1542b0b8e7206c2acaa1e4f6c6d754665ce8533ca936f527addc.scope - libcontainer container f1870160cc8b1542b0b8e7206c2acaa1e4f6c6d754665ce8533ca936f527addc. May 9 00:34:33.215024 systemd[1]: Started cri-containerd-75bacdf5b3fedb20015c05fa9bcf783bae26fe3b17aa16fdb4f7721cfbf29a86.scope - libcontainer container 75bacdf5b3fedb20015c05fa9bcf783bae26fe3b17aa16fdb4f7721cfbf29a86. 
May 9 00:34:33.247421 containerd[1462]: time="2025-05-09T00:34:33.247104338Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8fjng,Uid:25c3ec62-8e14-4277-9382-8d8c887e8958,Namespace:kube-system,Attempt:0,} returns sandbox id \"f1870160cc8b1542b0b8e7206c2acaa1e4f6c6d754665ce8533ca936f527addc\"" May 9 00:34:33.249106 kubelet[2544]: E0509 00:34:33.248859 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:34:33.252029 containerd[1462]: time="2025-05-09T00:34:33.251969091Z" level=info msg="CreateContainer within sandbox \"f1870160cc8b1542b0b8e7206c2acaa1e4f6c6d754665ce8533ca936f527addc\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 9 00:34:33.280265 containerd[1462]: time="2025-05-09T00:34:33.280207278Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-8jcpc,Uid:27b090f7-bd71-4785-9b66-193cedcffa5c,Namespace:kube-system,Attempt:0,} returns sandbox id \"75bacdf5b3fedb20015c05fa9bcf783bae26fe3b17aa16fdb4f7721cfbf29a86\"" May 9 00:34:33.281546 kubelet[2544]: E0509 00:34:33.281501 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:34:33.285146 containerd[1462]: time="2025-05-09T00:34:33.285080186Z" level=info msg="CreateContainer within sandbox \"f1870160cc8b1542b0b8e7206c2acaa1e4f6c6d754665ce8533ca936f527addc\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"6fad49c4b36bcd4150ea9ce77123ce441fa2b84b78bcebfe3991c824ee4d88c2\"" May 9 00:34:33.285728 containerd[1462]: time="2025-05-09T00:34:33.285700526Z" level=info msg="StartContainer for \"6fad49c4b36bcd4150ea9ce77123ce441fa2b84b78bcebfe3991c824ee4d88c2\"" May 9 00:34:33.327566 systemd[1]: Started cri-containerd-6fad49c4b36bcd4150ea9ce77123ce441fa2b84b78bcebfe3991c824ee4d88c2.scope - libcontainer container 6fad49c4b36bcd4150ea9ce77123ce441fa2b84b78bcebfe3991c824ee4d88c2. 
May 9 00:34:33.368333 containerd[1462]: time="2025-05-09T00:34:33.368250796Z" level=info msg="StartContainer for \"6fad49c4b36bcd4150ea9ce77123ce441fa2b84b78bcebfe3991c824ee4d88c2\" returns successfully" May 9 00:34:33.856504 kubelet[2544]: E0509 00:34:33.856445 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:34:33.887626 kubelet[2544]: I0509 00:34:33.887518 2544 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-8fjng" podStartSLOduration=2.887485242 podStartE2EDuration="2.887485242s" podCreationTimestamp="2025-05-09 00:34:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-09 00:34:33.887264726 +0000 UTC m=+6.378424611" watchObservedRunningTime="2025-05-09 00:34:33.887485242 +0000 UTC m=+6.378645137" May 9 00:34:37.161037 kubelet[2544]: E0509 00:34:37.160979 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:34:37.866423 kubelet[2544]: E0509 00:34:37.866357 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:34:38.868248 kubelet[2544]: E0509 00:34:38.868181 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:34:42.926814 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3224743705.mount: Deactivated successfully. 
May 9 00:34:46.516106 containerd[1462]: time="2025-05-09T00:34:46.516009900Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:34:46.516788 containerd[1462]: time="2025-05-09T00:34:46.516665072Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" May 9 00:34:46.517971 containerd[1462]: time="2025-05-09T00:34:46.517910784Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:34:46.519763 containerd[1462]: time="2025-05-09T00:34:46.519723872Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 13.489916137s" May 9 00:34:46.519812 containerd[1462]: time="2025-05-09T00:34:46.519770139Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" May 9 00:34:46.525758 containerd[1462]: time="2025-05-09T00:34:46.525716748Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 9 00:34:46.527421 containerd[1462]: time="2025-05-09T00:34:46.527378323Z" level=info msg="CreateContainer within sandbox \"5c448e471921933d0249a5c8cef2b3c8a01c5e3af6dc9ac90de2a014a073cc8f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 9 00:34:46.540266 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1337093486.mount: Deactivated successfully. May 9 00:34:46.542607 containerd[1462]: time="2025-05-09T00:34:46.542546870Z" level=info msg="CreateContainer within sandbox \"5c448e471921933d0249a5c8cef2b3c8a01c5e3af6dc9ac90de2a014a073cc8f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"4d30e81209ba87098170869a81031650e1ef62dc5bdea69e8d7f86a00103ddac\"" May 9 00:34:46.545397 containerd[1462]: time="2025-05-09T00:34:46.545341083Z" level=info msg="StartContainer for \"4d30e81209ba87098170869a81031650e1ef62dc5bdea69e8d7f86a00103ddac\"" May 9 00:34:46.581383 systemd[1]: Started cri-containerd-4d30e81209ba87098170869a81031650e1ef62dc5bdea69e8d7f86a00103ddac.scope - libcontainer container 4d30e81209ba87098170869a81031650e1ef62dc5bdea69e8d7f86a00103ddac. May 9 00:34:46.612254 containerd[1462]: time="2025-05-09T00:34:46.612173046Z" level=info msg="StartContainer for \"4d30e81209ba87098170869a81031650e1ef62dc5bdea69e8d7f86a00103ddac\" returns successfully" May 9 00:34:46.629510 systemd[1]: cri-containerd-4d30e81209ba87098170869a81031650e1ef62dc5bdea69e8d7f86a00103ddac.scope: Deactivated successfully. 
May 9 00:34:47.186285 kubelet[2544]: E0509 00:34:47.186235 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:34:47.328112 containerd[1462]: time="2025-05-09T00:34:47.328035130Z" level=info msg="shim disconnected" id=4d30e81209ba87098170869a81031650e1ef62dc5bdea69e8d7f86a00103ddac namespace=k8s.io May 9 00:34:47.328112 containerd[1462]: time="2025-05-09T00:34:47.328107295Z" level=warning msg="cleaning up after shim disconnected" id=4d30e81209ba87098170869a81031650e1ef62dc5bdea69e8d7f86a00103ddac namespace=k8s.io May 9 00:34:47.328112 containerd[1462]: time="2025-05-09T00:34:47.328120480Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 9 00:34:47.537794 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4d30e81209ba87098170869a81031650e1ef62dc5bdea69e8d7f86a00103ddac-rootfs.mount: Deactivated successfully. May 9 00:34:48.188567 kubelet[2544]: E0509 00:34:48.188518 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:34:48.195406 containerd[1462]: time="2025-05-09T00:34:48.194600592Z" level=info msg="CreateContainer within sandbox \"5c448e471921933d0249a5c8cef2b3c8a01c5e3af6dc9ac90de2a014a073cc8f\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 9 00:34:48.218326 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1384639914.mount: Deactivated successfully. May 9 00:34:48.219725 containerd[1462]: time="2025-05-09T00:34:48.219663788Z" level=info msg="CreateContainer within sandbox \"5c448e471921933d0249a5c8cef2b3c8a01c5e3af6dc9ac90de2a014a073cc8f\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"2f1e57157c00d1f03c6e6fd24e7da2f06f5d7853217a011baaff7b034922409a\"" May 9 00:34:48.220353 containerd[1462]: time="2025-05-09T00:34:48.220320242Z" level=info msg="StartContainer for \"2f1e57157c00d1f03c6e6fd24e7da2f06f5d7853217a011baaff7b034922409a\"" May 9 00:34:48.253475 systemd[1]: Started cri-containerd-2f1e57157c00d1f03c6e6fd24e7da2f06f5d7853217a011baaff7b034922409a.scope - libcontainer container 2f1e57157c00d1f03c6e6fd24e7da2f06f5d7853217a011baaff7b034922409a. May 9 00:34:48.281358 containerd[1462]: time="2025-05-09T00:34:48.281310983Z" level=info msg="StartContainer for \"2f1e57157c00d1f03c6e6fd24e7da2f06f5d7853217a011baaff7b034922409a\" returns successfully" May 9 00:34:48.294823 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 9 00:34:48.295073 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 9 00:34:48.295160 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... May 9 00:34:48.301605 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 9 00:34:48.301836 systemd[1]: cri-containerd-2f1e57157c00d1f03c6e6fd24e7da2f06f5d7853217a011baaff7b034922409a.scope: Deactivated successfully. 
May 9 00:34:48.328414 containerd[1462]: time="2025-05-09T00:34:48.328338952Z" level=info msg="shim disconnected" id=2f1e57157c00d1f03c6e6fd24e7da2f06f5d7853217a011baaff7b034922409a namespace=k8s.io May 9 00:34:48.328414 containerd[1462]: time="2025-05-09T00:34:48.328411057Z" level=warning msg="cleaning up after shim disconnected" id=2f1e57157c00d1f03c6e6fd24e7da2f06f5d7853217a011baaff7b034922409a namespace=k8s.io May 9 00:34:48.328414 containerd[1462]: time="2025-05-09T00:34:48.328419663Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 9 00:34:48.331279 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 9 00:34:48.538260 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2f1e57157c00d1f03c6e6fd24e7da2f06f5d7853217a011baaff7b034922409a-rootfs.mount: Deactivated successfully. May 9 00:34:48.865649 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1440923118.mount: Deactivated successfully. May 9 00:34:49.175805 containerd[1462]: time="2025-05-09T00:34:49.175657619Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:34:49.176388 containerd[1462]: time="2025-05-09T00:34:49.176317379Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" May 9 00:34:49.177403 containerd[1462]: time="2025-05-09T00:34:49.177365789Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 00:34:49.178952 containerd[1462]: time="2025-05-09T00:34:49.178907207Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.653137058s" May 9 00:34:49.179018 containerd[1462]: time="2025-05-09T00:34:49.178954726Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" May 9 00:34:49.181118 containerd[1462]: time="2025-05-09T00:34:49.181065662Z" level=info msg="CreateContainer within sandbox \"75bacdf5b3fedb20015c05fa9bcf783bae26fe3b17aa16fdb4f7721cfbf29a86\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 9 00:34:49.191433 kubelet[2544]: E0509 00:34:49.191381 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:34:49.195821 containerd[1462]: time="2025-05-09T00:34:49.195631425Z" level=info msg="CreateContainer within sandbox \"5c448e471921933d0249a5c8cef2b3c8a01c5e3af6dc9ac90de2a014a073cc8f\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 9 00:34:49.201328 containerd[1462]: time="2025-05-09T00:34:49.201259372Z" level=info msg="CreateContainer within sandbox \"75bacdf5b3fedb20015c05fa9bcf783bae26fe3b17aa16fdb4f7721cfbf29a86\" for 
&ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"7267634b94f2f49614920f5a22f195acfb3597b6ef7fd3b90a49479943860788\"" May 9 00:34:49.201831 containerd[1462]: time="2025-05-09T00:34:49.201778487Z" level=info msg="StartContainer for \"7267634b94f2f49614920f5a22f195acfb3597b6ef7fd3b90a49479943860788\"" May 9 00:34:49.219966 containerd[1462]: time="2025-05-09T00:34:49.219890945Z" level=info msg="CreateContainer within sandbox \"5c448e471921933d0249a5c8cef2b3c8a01c5e3af6dc9ac90de2a014a073cc8f\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"2f28594479e77b1e71d52794c4df6eed28d06225883ad3ef209ce9fd0d07a893\"" May 9 00:34:49.220606 containerd[1462]: time="2025-05-09T00:34:49.220498407Z" level=info msg="StartContainer for \"2f28594479e77b1e71d52794c4df6eed28d06225883ad3ef209ce9fd0d07a893\"" May 9 00:34:49.236544 systemd[1]: Started cri-containerd-7267634b94f2f49614920f5a22f195acfb3597b6ef7fd3b90a49479943860788.scope - libcontainer container 7267634b94f2f49614920f5a22f195acfb3597b6ef7fd3b90a49479943860788. May 9 00:34:49.256638 systemd[1]: Started cri-containerd-2f28594479e77b1e71d52794c4df6eed28d06225883ad3ef209ce9fd0d07a893.scope - libcontainer container 2f28594479e77b1e71d52794c4df6eed28d06225883ad3ef209ce9fd0d07a893. May 9 00:34:49.295435 systemd[1]: cri-containerd-2f28594479e77b1e71d52794c4df6eed28d06225883ad3ef209ce9fd0d07a893.scope: Deactivated successfully. May 9 00:34:49.396387 containerd[1462]: time="2025-05-09T00:34:49.396321221Z" level=info msg="StartContainer for \"7267634b94f2f49614920f5a22f195acfb3597b6ef7fd3b90a49479943860788\" returns successfully" May 9 00:34:49.396871 containerd[1462]: time="2025-05-09T00:34:49.396329807Z" level=info msg="StartContainer for \"2f28594479e77b1e71d52794c4df6eed28d06225883ad3ef209ce9fd0d07a893\" returns successfully" May 9 00:34:49.432731 containerd[1462]: time="2025-05-09T00:34:49.432556756Z" level=info msg="shim disconnected" id=2f28594479e77b1e71d52794c4df6eed28d06225883ad3ef209ce9fd0d07a893 namespace=k8s.io May 9 00:34:49.432731 containerd[1462]: time="2025-05-09T00:34:49.432616429Z" level=warning msg="cleaning up after shim disconnected" id=2f28594479e77b1e71d52794c4df6eed28d06225883ad3ef209ce9fd0d07a893 namespace=k8s.io May 9 00:34:49.432731 containerd[1462]: time="2025-05-09T00:34:49.432624835Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 9 00:34:50.196485 kubelet[2544]: E0509 00:34:50.196422 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:34:50.198215 kubelet[2544]: E0509 00:34:50.197677 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:34:50.199775 containerd[1462]: time="2025-05-09T00:34:50.199653535Z" level=info msg="CreateContainer within sandbox \"5c448e471921933d0249a5c8cef2b3c8a01c5e3af6dc9ac90de2a014a073cc8f\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 9 00:34:50.219891 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4162355087.mount: Deactivated successfully. 
May 9 00:34:50.221674 containerd[1462]: time="2025-05-09T00:34:50.221623177Z" level=info msg="CreateContainer within sandbox \"5c448e471921933d0249a5c8cef2b3c8a01c5e3af6dc9ac90de2a014a073cc8f\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"2eb83c7ca762996816d206967c929dcd272ae3e8bc5062b7cc0c22396ae42c71\"" May 9 00:34:50.223081 containerd[1462]: time="2025-05-09T00:34:50.222893905Z" level=info msg="StartContainer for \"2eb83c7ca762996816d206967c929dcd272ae3e8bc5062b7cc0c22396ae42c71\"" May 9 00:34:50.236876 kubelet[2544]: I0509 00:34:50.236801 2544 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-8jcpc" podStartSLOduration=3.339243458 podStartE2EDuration="19.236737578s" podCreationTimestamp="2025-05-09 00:34:31 +0000 UTC" firstStartedPulling="2025-05-09 00:34:33.282226125 +0000 UTC m=+5.773386010" lastFinishedPulling="2025-05-09 00:34:49.179720245 +0000 UTC m=+21.670880130" observedRunningTime="2025-05-09 00:34:50.215495714 +0000 UTC m=+22.706655619" watchObservedRunningTime="2025-05-09 00:34:50.236737578 +0000 UTC m=+22.727897463" May 9 00:34:50.280517 systemd[1]: Started cri-containerd-2eb83c7ca762996816d206967c929dcd272ae3e8bc5062b7cc0c22396ae42c71.scope - libcontainer container 2eb83c7ca762996816d206967c929dcd272ae3e8bc5062b7cc0c22396ae42c71. May 9 00:34:50.310346 systemd[1]: cri-containerd-2eb83c7ca762996816d206967c929dcd272ae3e8bc5062b7cc0c22396ae42c71.scope: Deactivated successfully. May 9 00:34:50.313482 containerd[1462]: time="2025-05-09T00:34:50.313352830Z" level=info msg="StartContainer for \"2eb83c7ca762996816d206967c929dcd272ae3e8bc5062b7cc0c22396ae42c71\" returns successfully" May 9 00:34:50.337816 containerd[1462]: time="2025-05-09T00:34:50.337744162Z" level=info msg="shim disconnected" id=2eb83c7ca762996816d206967c929dcd272ae3e8bc5062b7cc0c22396ae42c71 namespace=k8s.io May 9 00:34:50.337816 containerd[1462]: time="2025-05-09T00:34:50.337806870Z" level=warning msg="cleaning up after shim disconnected" id=2eb83c7ca762996816d206967c929dcd272ae3e8bc5062b7cc0c22396ae42c71 namespace=k8s.io May 9 00:34:50.337816 containerd[1462]: time="2025-05-09T00:34:50.337815836Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 9 00:34:50.539088 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2eb83c7ca762996816d206967c929dcd272ae3e8bc5062b7cc0c22396ae42c71-rootfs.mount: Deactivated successfully. May 9 00:34:51.201975 kubelet[2544]: E0509 00:34:51.201923 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:34:51.202610 kubelet[2544]: E0509 00:34:51.202122 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:34:51.204433 containerd[1462]: time="2025-05-09T00:34:51.204244737Z" level=info msg="CreateContainer within sandbox \"5c448e471921933d0249a5c8cef2b3c8a01c5e3af6dc9ac90de2a014a073cc8f\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 9 00:34:51.224220 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3942702681.mount: Deactivated successfully. 
May 9 00:34:51.226587 containerd[1462]: time="2025-05-09T00:34:51.226109838Z" level=info msg="CreateContainer within sandbox \"5c448e471921933d0249a5c8cef2b3c8a01c5e3af6dc9ac90de2a014a073cc8f\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"308a4f82e07f5320f665d44e22eaed6f06085dfd1717399686ef79d960db58fa\"" May 9 00:34:51.226787 containerd[1462]: time="2025-05-09T00:34:51.226741575Z" level=info msg="StartContainer for \"308a4f82e07f5320f665d44e22eaed6f06085dfd1717399686ef79d960db58fa\"" May 9 00:34:51.277578 systemd[1]: Started cri-containerd-308a4f82e07f5320f665d44e22eaed6f06085dfd1717399686ef79d960db58fa.scope - libcontainer container 308a4f82e07f5320f665d44e22eaed6f06085dfd1717399686ef79d960db58fa. May 9 00:34:51.314122 containerd[1462]: time="2025-05-09T00:34:51.314042346Z" level=info msg="StartContainer for \"308a4f82e07f5320f665d44e22eaed6f06085dfd1717399686ef79d960db58fa\" returns successfully" May 9 00:34:51.475361 kubelet[2544]: I0509 00:34:51.474754 2544 kubelet_node_status.go:502] "Fast updating node status as it just became ready" May 9 00:34:51.515327 systemd[1]: Created slice kubepods-burstable-pode5be38f8_8de7_4356_9705_59aa634707ad.slice - libcontainer container kubepods-burstable-pode5be38f8_8de7_4356_9705_59aa634707ad.slice. May 9 00:34:51.527913 systemd[1]: Created slice kubepods-burstable-podf532c8ae_4d7f_496a_b283_cb1913df2d75.slice - libcontainer container kubepods-burstable-podf532c8ae_4d7f_496a_b283_cb1913df2d75.slice. May 9 00:34:51.538817 kubelet[2544]: I0509 00:34:51.538782 2544 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e5be38f8-8de7-4356-9705-59aa634707ad-config-volume\") pod \"coredns-668d6bf9bc-zzpfx\" (UID: \"e5be38f8-8de7-4356-9705-59aa634707ad\") " pod="kube-system/coredns-668d6bf9bc-zzpfx" May 9 00:34:51.538978 kubelet[2544]: I0509 00:34:51.538823 2544 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-958kl\" (UniqueName: \"kubernetes.io/projected/e5be38f8-8de7-4356-9705-59aa634707ad-kube-api-access-958kl\") pod \"coredns-668d6bf9bc-zzpfx\" (UID: \"e5be38f8-8de7-4356-9705-59aa634707ad\") " pod="kube-system/coredns-668d6bf9bc-zzpfx" May 9 00:34:51.538978 kubelet[2544]: I0509 00:34:51.538875 2544 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pr9n\" (UniqueName: \"kubernetes.io/projected/f532c8ae-4d7f-496a-b283-cb1913df2d75-kube-api-access-2pr9n\") pod \"coredns-668d6bf9bc-pk4fh\" (UID: \"f532c8ae-4d7f-496a-b283-cb1913df2d75\") " pod="kube-system/coredns-668d6bf9bc-pk4fh" May 9 00:34:51.538978 kubelet[2544]: I0509 00:34:51.538890 2544 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f532c8ae-4d7f-496a-b283-cb1913df2d75-config-volume\") pod \"coredns-668d6bf9bc-pk4fh\" (UID: \"f532c8ae-4d7f-496a-b283-cb1913df2d75\") " pod="kube-system/coredns-668d6bf9bc-pk4fh" May 9 00:34:51.821732 kubelet[2544]: E0509 00:34:51.821684 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:34:51.822618 containerd[1462]: time="2025-05-09T00:34:51.822571801Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-zzpfx,Uid:e5be38f8-8de7-4356-9705-59aa634707ad,Namespace:kube-system,Attempt:0,}" May 9 00:34:51.831110 kubelet[2544]: E0509 00:34:51.831052 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:34:51.833673 containerd[1462]: time="2025-05-09T00:34:51.833622102Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-pk4fh,Uid:f532c8ae-4d7f-496a-b283-cb1913df2d75,Namespace:kube-system,Attempt:0,}" May 9 00:34:52.207923 kubelet[2544]: E0509 00:34:52.207725 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:34:52.237571 kubelet[2544]: I0509 00:34:52.237479 2544 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-fwlhj" podStartSLOduration=7.741177707 podStartE2EDuration="21.237456639s" podCreationTimestamp="2025-05-09 00:34:31 +0000 UTC" firstStartedPulling="2025-05-09 00:34:33.029164603 +0000 UTC m=+5.520324488" lastFinishedPulling="2025-05-09 00:34:46.525443535 +0000 UTC m=+19.016603420" observedRunningTime="2025-05-09 00:34:52.235863877 +0000 UTC m=+24.727023762" watchObservedRunningTime="2025-05-09 00:34:52.237456639 +0000 UTC m=+24.728616524" May 9 00:34:53.209528 kubelet[2544]: E0509 00:34:53.209484 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:34:53.632089 systemd-networkd[1385]: cilium_host: Link UP May 9 00:34:53.632294 systemd-networkd[1385]: cilium_net: Link UP May 9 00:34:53.632299 systemd-networkd[1385]: cilium_net: Gained carrier May 9 00:34:53.632494 systemd-networkd[1385]: cilium_host: Gained carrier May 9 00:34:53.633082 systemd-networkd[1385]: cilium_host: Gained IPv6LL May 9 00:34:53.746362 systemd-networkd[1385]: cilium_vxlan: Link UP May 9 00:34:53.746378 systemd-networkd[1385]: cilium_vxlan: Gained carrier May 9 00:34:53.975235 kernel: NET: Registered PF_ALG protocol family May 9 00:34:54.108364 systemd-networkd[1385]: cilium_net: Gained IPv6LL May 9 00:34:54.210905 kubelet[2544]: E0509 00:34:54.210859 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:34:54.701088 systemd-networkd[1385]: lxc_health: Link UP May 9 00:34:54.709923 systemd-networkd[1385]: lxc_health: Gained carrier May 9 00:34:55.050081 systemd-networkd[1385]: lxcef2eb61cdd7a: Link UP May 9 00:34:55.058249 kernel: eth0: renamed from tmp39d65 May 9 00:34:55.065811 systemd-networkd[1385]: lxcef2eb61cdd7a: Gained carrier May 9 00:34:55.081644 systemd-networkd[1385]: lxca7046faf4f03: Link UP May 9 00:34:55.090949 kernel: eth0: renamed from tmpf84b1 May 9 00:34:55.094847 systemd-networkd[1385]: lxca7046faf4f03: Gained carrier May 9 00:34:55.668398 systemd-networkd[1385]: cilium_vxlan: Gained IPv6LL May 9 00:34:55.931383 systemd[1]: Started sshd@7-10.0.0.84:22-10.0.0.1:35122.service - OpenSSH per-connection server daemon (10.0.0.1:35122). 
May 9 00:34:55.995904 sshd[3752]: Accepted publickey for core from 10.0.0.1 port 35122 ssh2: RSA SHA256:YkFjw59PeYd0iJo8o6yRNOqCW4DsIah6oVydwFHJQdU May 9 00:34:55.997867 sshd[3752]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 00:34:56.007054 systemd-logind[1443]: New session 8 of user core. May 9 00:34:56.011650 systemd[1]: Started session-8.scope - Session 8 of User core. May 9 00:34:56.115754 kubelet[2544]: E0509 00:34:56.115693 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:34:56.180365 systemd-networkd[1385]: lxca7046faf4f03: Gained IPv6LL May 9 00:34:56.204146 sshd[3752]: pam_unix(sshd:session): session closed for user core May 9 00:34:56.207932 systemd[1]: sshd@7-10.0.0.84:22-10.0.0.1:35122.service: Deactivated successfully. May 9 00:34:56.210443 systemd[1]: session-8.scope: Deactivated successfully. May 9 00:34:56.212655 systemd-logind[1443]: Session 8 logged out. Waiting for processes to exit. May 9 00:34:56.215154 systemd-logind[1443]: Removed session 8. May 9 00:34:56.217775 kubelet[2544]: E0509 00:34:56.217102 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 9 00:34:56.564420 systemd-networkd[1385]: lxc_health: Gained IPv6LL May 9 00:34:56.628410 systemd-networkd[1385]: lxcef2eb61cdd7a: Gained IPv6LL May 9 00:34:58.737039 containerd[1462]: time="2025-05-09T00:34:58.736903102Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 00:34:58.737039 containerd[1462]: time="2025-05-09T00:34:58.736990095Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 00:34:58.737039 containerd[1462]: time="2025-05-09T00:34:58.737004482Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:34:58.738119 containerd[1462]: time="2025-05-09T00:34:58.738035287Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:34:58.739516 containerd[1462]: time="2025-05-09T00:34:58.739408797Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 00:34:58.739516 containerd[1462]: time="2025-05-09T00:34:58.739471735Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 00:34:58.739516 containerd[1462]: time="2025-05-09T00:34:58.739482495Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:34:58.739698 containerd[1462]: time="2025-05-09T00:34:58.739613841Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 00:34:58.760410 systemd[1]: Started cri-containerd-39d65b3562f4420d2a0d146cac894a31b1aafd6aadfdac3d2fb6442fe1bd9cb1.scope - libcontainer container 39d65b3562f4420d2a0d146cac894a31b1aafd6aadfdac3d2fb6442fe1bd9cb1. 
May 9 00:34:58.764762 systemd[1]: Started cri-containerd-f84b120d5886b1a4240de214ababf30ac5d53d8a6744c8f22535cd62be327388.scope - libcontainer container f84b120d5886b1a4240de214ababf30ac5d53d8a6744c8f22535cd62be327388.
May 9 00:34:58.774592 systemd-resolved[1331]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
May 9 00:34:58.781680 systemd-resolved[1331]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
May 9 00:34:58.813074 containerd[1462]: time="2025-05-09T00:34:58.812998715Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-zzpfx,Uid:e5be38f8-8de7-4356-9705-59aa634707ad,Namespace:kube-system,Attempt:0,} returns sandbox id \"39d65b3562f4420d2a0d146cac894a31b1aafd6aadfdac3d2fb6442fe1bd9cb1\""
May 9 00:34:58.814594 kubelet[2544]: E0509 00:34:58.814156 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 00:34:58.817279 containerd[1462]: time="2025-05-09T00:34:58.817115835Z" level=info msg="CreateContainer within sandbox \"39d65b3562f4420d2a0d146cac894a31b1aafd6aadfdac3d2fb6442fe1bd9cb1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 9 00:34:58.822470 containerd[1462]: time="2025-05-09T00:34:58.822377596Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-pk4fh,Uid:f532c8ae-4d7f-496a-b283-cb1913df2d75,Namespace:kube-system,Attempt:0,} returns sandbox id \"f84b120d5886b1a4240de214ababf30ac5d53d8a6744c8f22535cd62be327388\""
May 9 00:34:58.823383 kubelet[2544]: E0509 00:34:58.823207 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 00:34:58.826923 containerd[1462]: time="2025-05-09T00:34:58.826034663Z" level=info msg="CreateContainer within sandbox \"f84b120d5886b1a4240de214ababf30ac5d53d8a6744c8f22535cd62be327388\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 9 00:34:58.882996 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount39034297.mount: Deactivated successfully.
May 9 00:34:58.893830 containerd[1462]: time="2025-05-09T00:34:58.893764348Z" level=info msg="CreateContainer within sandbox \"f84b120d5886b1a4240de214ababf30ac5d53d8a6744c8f22535cd62be327388\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3e4fca144770341f425e1597286dc862b7633dc6732b9c289424735ea1e87934\""
May 9 00:34:58.896677 containerd[1462]: time="2025-05-09T00:34:58.894463961Z" level=info msg="StartContainer for \"3e4fca144770341f425e1597286dc862b7633dc6732b9c289424735ea1e87934\""
May 9 00:34:58.926261 containerd[1462]: time="2025-05-09T00:34:58.926208418Z" level=info msg="CreateContainer within sandbox \"39d65b3562f4420d2a0d146cac894a31b1aafd6aadfdac3d2fb6442fe1bd9cb1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"fda362c468e8e6608332c2cb5ea762db3af7185ba9cc35fe12fec84d3ec3cbdc\""
May 9 00:34:58.936337 systemd[1]: Started cri-containerd-3e4fca144770341f425e1597286dc862b7633dc6732b9c289424735ea1e87934.scope - libcontainer container 3e4fca144770341f425e1597286dc862b7633dc6732b9c289424735ea1e87934.
May 9 00:34:58.941553 containerd[1462]: time="2025-05-09T00:34:58.941506093Z" level=info msg="StartContainer for \"fda362c468e8e6608332c2cb5ea762db3af7185ba9cc35fe12fec84d3ec3cbdc\""
May 9 00:34:58.975752 containerd[1462]: time="2025-05-09T00:34:58.975695850Z" level=info msg="StartContainer for \"3e4fca144770341f425e1597286dc862b7633dc6732b9c289424735ea1e87934\" returns successfully"
May 9 00:34:58.976394 systemd[1]: Started cri-containerd-fda362c468e8e6608332c2cb5ea762db3af7185ba9cc35fe12fec84d3ec3cbdc.scope - libcontainer container fda362c468e8e6608332c2cb5ea762db3af7185ba9cc35fe12fec84d3ec3cbdc.
May 9 00:34:59.011449 containerd[1462]: time="2025-05-09T00:34:59.011326302Z" level=info msg="StartContainer for \"fda362c468e8e6608332c2cb5ea762db3af7185ba9cc35fe12fec84d3ec3cbdc\" returns successfully"
May 9 00:34:59.223477 kubelet[2544]: E0509 00:34:59.222928 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 00:34:59.225468 kubelet[2544]: E0509 00:34:59.225425 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 00:34:59.255621 kubelet[2544]: I0509 00:34:59.255533 2544 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-pk4fh" podStartSLOduration=28.255509131 podStartE2EDuration="28.255509131s" podCreationTimestamp="2025-05-09 00:34:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-09 00:34:59.253986272 +0000 UTC m=+31.745146147" watchObservedRunningTime="2025-05-09 00:34:59.255509131 +0000 UTC m=+31.746669016"
May 9 00:34:59.308887 kubelet[2544]: I0509 00:34:59.308817 2544 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-zzpfx" podStartSLOduration=28.308792844 podStartE2EDuration="28.308792844s" podCreationTimestamp="2025-05-09 00:34:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-09 00:34:59.307590657 +0000 UTC m=+31.798750542" watchObservedRunningTime="2025-05-09 00:34:59.308792844 +0000 UTC m=+31.799952729"
May 9 00:35:00.227297 kubelet[2544]: E0509 00:35:00.227028 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 00:35:00.227297 kubelet[2544]: E0509 00:35:00.227097 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 00:35:01.217736 systemd[1]: Started sshd@8-10.0.0.84:22-10.0.0.1:45124.service - OpenSSH per-connection server daemon (10.0.0.1:45124).
May 9 00:35:01.229090 kubelet[2544]: E0509 00:35:01.229053 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 00:35:01.229481 kubelet[2544]: E0509 00:35:01.229096 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 00:35:01.262205 sshd[3950]: Accepted publickey for core from 10.0.0.1 port 45124 ssh2: RSA SHA256:YkFjw59PeYd0iJo8o6yRNOqCW4DsIah6oVydwFHJQdU
May 9 00:35:01.264069 sshd[3950]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 00:35:01.268581 systemd-logind[1443]: New session 9 of user core.
May 9 00:35:01.279374 systemd[1]: Started session-9.scope - Session 9 of User core.
May 9 00:35:01.419485 sshd[3950]: pam_unix(sshd:session): session closed for user core
May 9 00:35:01.424340 systemd[1]: sshd@8-10.0.0.84:22-10.0.0.1:45124.service: Deactivated successfully.
May 9 00:35:01.426847 systemd[1]: session-9.scope: Deactivated successfully.
May 9 00:35:01.427630 systemd-logind[1443]: Session 9 logged out. Waiting for processes to exit.
May 9 00:35:01.429093 systemd-logind[1443]: Removed session 9.
May 9 00:35:06.431814 systemd[1]: Started sshd@9-10.0.0.84:22-10.0.0.1:45132.service - OpenSSH per-connection server daemon (10.0.0.1:45132).
May 9 00:35:06.469243 sshd[3972]: Accepted publickey for core from 10.0.0.1 port 45132 ssh2: RSA SHA256:YkFjw59PeYd0iJo8o6yRNOqCW4DsIah6oVydwFHJQdU
May 9 00:35:06.471058 sshd[3972]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 00:35:06.475153 systemd-logind[1443]: New session 10 of user core.
May 9 00:35:06.491330 systemd[1]: Started session-10.scope - Session 10 of User core.
May 9 00:35:06.608384 sshd[3972]: pam_unix(sshd:session): session closed for user core
May 9 00:35:06.612969 systemd[1]: sshd@9-10.0.0.84:22-10.0.0.1:45132.service: Deactivated successfully.
May 9 00:35:06.615485 systemd[1]: session-10.scope: Deactivated successfully.
May 9 00:35:06.616386 systemd-logind[1443]: Session 10 logged out. Waiting for processes to exit.
May 9 00:35:06.617570 systemd-logind[1443]: Removed session 10.
May 9 00:35:11.622395 systemd[1]: Started sshd@10-10.0.0.84:22-10.0.0.1:39712.service - OpenSSH per-connection server daemon (10.0.0.1:39712).
May 9 00:35:11.666495 sshd[3988]: Accepted publickey for core from 10.0.0.1 port 39712 ssh2: RSA SHA256:YkFjw59PeYd0iJo8o6yRNOqCW4DsIah6oVydwFHJQdU
May 9 00:35:11.668568 sshd[3988]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 00:35:11.673713 systemd-logind[1443]: New session 11 of user core.
May 9 00:35:11.683473 systemd[1]: Started session-11.scope - Session 11 of User core.
May 9 00:35:11.828632 sshd[3988]: pam_unix(sshd:session): session closed for user core
May 9 00:35:11.833980 systemd[1]: sshd@10-10.0.0.84:22-10.0.0.1:39712.service: Deactivated successfully.
May 9 00:35:11.836708 systemd[1]: session-11.scope: Deactivated successfully.
May 9 00:35:11.837580 systemd-logind[1443]: Session 11 logged out. Waiting for processes to exit.
May 9 00:35:11.838716 systemd-logind[1443]: Removed session 11.
May 9 00:35:16.842063 systemd[1]: Started sshd@11-10.0.0.84:22-10.0.0.1:56040.service - OpenSSH per-connection server daemon (10.0.0.1:56040).
May 9 00:35:16.913017 sshd[4003]: Accepted publickey for core from 10.0.0.1 port 56040 ssh2: RSA SHA256:YkFjw59PeYd0iJo8o6yRNOqCW4DsIah6oVydwFHJQdU
May 9 00:35:16.914472 sshd[4003]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 00:35:16.918322 systemd-logind[1443]: New session 12 of user core.
May 9 00:35:16.927326 systemd[1]: Started session-12.scope - Session 12 of User core.
May 9 00:35:17.040306 sshd[4003]: pam_unix(sshd:session): session closed for user core
May 9 00:35:17.052230 systemd[1]: sshd@11-10.0.0.84:22-10.0.0.1:56040.service: Deactivated successfully.
May 9 00:35:17.054178 systemd[1]: session-12.scope: Deactivated successfully.
May 9 00:35:17.055815 systemd-logind[1443]: Session 12 logged out. Waiting for processes to exit.
May 9 00:35:17.057414 systemd[1]: Started sshd@12-10.0.0.84:22-10.0.0.1:56046.service - OpenSSH per-connection server daemon (10.0.0.1:56046).
May 9 00:35:17.058470 systemd-logind[1443]: Removed session 12.
May 9 00:35:17.094283 sshd[4018]: Accepted publickey for core from 10.0.0.1 port 56046 ssh2: RSA SHA256:YkFjw59PeYd0iJo8o6yRNOqCW4DsIah6oVydwFHJQdU
May 9 00:35:17.095758 sshd[4018]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 00:35:17.099614 systemd-logind[1443]: New session 13 of user core.
May 9 00:35:17.105317 systemd[1]: Started session-13.scope - Session 13 of User core.
May 9 00:35:17.267412 sshd[4018]: pam_unix(sshd:session): session closed for user core
May 9 00:35:17.277595 systemd[1]: sshd@12-10.0.0.84:22-10.0.0.1:56046.service: Deactivated successfully.
May 9 00:35:17.279615 systemd[1]: session-13.scope: Deactivated successfully.
May 9 00:35:17.281059 systemd-logind[1443]: Session 13 logged out. Waiting for processes to exit.
May 9 00:35:17.289638 systemd[1]: Started sshd@13-10.0.0.84:22-10.0.0.1:56060.service - OpenSSH per-connection server daemon (10.0.0.1:56060).
May 9 00:35:17.290592 systemd-logind[1443]: Removed session 13.
May 9 00:35:17.324917 sshd[4030]: Accepted publickey for core from 10.0.0.1 port 56060 ssh2: RSA SHA256:YkFjw59PeYd0iJo8o6yRNOqCW4DsIah6oVydwFHJQdU
May 9 00:35:17.326620 sshd[4030]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 00:35:17.330744 systemd-logind[1443]: New session 14 of user core.
May 9 00:35:17.338809 systemd[1]: Started session-14.scope - Session 14 of User core.
May 9 00:35:17.491493 sshd[4030]: pam_unix(sshd:session): session closed for user core
May 9 00:35:17.495542 systemd[1]: sshd@13-10.0.0.84:22-10.0.0.1:56060.service: Deactivated successfully.
May 9 00:35:17.497976 systemd[1]: session-14.scope: Deactivated successfully.
May 9 00:35:17.498701 systemd-logind[1443]: Session 14 logged out. Waiting for processes to exit.
May 9 00:35:17.499690 systemd-logind[1443]: Removed session 14.
May 9 00:35:22.506932 systemd[1]: Started sshd@14-10.0.0.84:22-10.0.0.1:56064.service - OpenSSH per-connection server daemon (10.0.0.1:56064).
May 9 00:35:22.545761 sshd[4044]: Accepted publickey for core from 10.0.0.1 port 56064 ssh2: RSA SHA256:YkFjw59PeYd0iJo8o6yRNOqCW4DsIah6oVydwFHJQdU
May 9 00:35:22.547749 sshd[4044]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 00:35:22.552870 systemd-logind[1443]: New session 15 of user core.
May 9 00:35:22.569376 systemd[1]: Started session-15.scope - Session 15 of User core.
May 9 00:35:22.682052 sshd[4044]: pam_unix(sshd:session): session closed for user core
May 9 00:35:22.686118 systemd[1]: sshd@14-10.0.0.84:22-10.0.0.1:56064.service: Deactivated successfully.
May 9 00:35:22.688475 systemd[1]: session-15.scope: Deactivated successfully.
May 9 00:35:22.689261 systemd-logind[1443]: Session 15 logged out. Waiting for processes to exit.
May 9 00:35:22.690252 systemd-logind[1443]: Removed session 15.
May 9 00:35:27.696872 systemd[1]: Started sshd@15-10.0.0.84:22-10.0.0.1:40028.service - OpenSSH per-connection server daemon (10.0.0.1:40028).
May 9 00:35:27.735675 sshd[4058]: Accepted publickey for core from 10.0.0.1 port 40028 ssh2: RSA SHA256:YkFjw59PeYd0iJo8o6yRNOqCW4DsIah6oVydwFHJQdU
May 9 00:35:27.737775 sshd[4058]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 00:35:27.742924 systemd-logind[1443]: New session 16 of user core.
May 9 00:35:27.754340 systemd[1]: Started session-16.scope - Session 16 of User core.
May 9 00:35:27.876708 sshd[4058]: pam_unix(sshd:session): session closed for user core
May 9 00:35:27.891458 systemd[1]: sshd@15-10.0.0.84:22-10.0.0.1:40028.service: Deactivated successfully.
May 9 00:35:27.893757 systemd[1]: session-16.scope: Deactivated successfully.
May 9 00:35:27.895900 systemd-logind[1443]: Session 16 logged out. Waiting for processes to exit.
May 9 00:35:27.903551 systemd[1]: Started sshd@16-10.0.0.84:22-10.0.0.1:40030.service - OpenSSH per-connection server daemon (10.0.0.1:40030).
May 9 00:35:27.904659 systemd-logind[1443]: Removed session 16.
May 9 00:35:27.936591 sshd[4074]: Accepted publickey for core from 10.0.0.1 port 40030 ssh2: RSA SHA256:YkFjw59PeYd0iJo8o6yRNOqCW4DsIah6oVydwFHJQdU
May 9 00:35:27.938420 sshd[4074]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 00:35:27.942849 systemd-logind[1443]: New session 17 of user core.
May 9 00:35:27.952354 systemd[1]: Started session-17.scope - Session 17 of User core.
May 9 00:35:28.228217 sshd[4074]: pam_unix(sshd:session): session closed for user core
May 9 00:35:28.241728 systemd[1]: sshd@16-10.0.0.84:22-10.0.0.1:40030.service: Deactivated successfully.
May 9 00:35:28.243761 systemd[1]: session-17.scope: Deactivated successfully.
May 9 00:35:28.245656 systemd-logind[1443]: Session 17 logged out. Waiting for processes to exit.
May 9 00:35:28.252457 systemd[1]: Started sshd@17-10.0.0.84:22-10.0.0.1:40042.service - OpenSSH per-connection server daemon (10.0.0.1:40042).
May 9 00:35:28.253575 systemd-logind[1443]: Removed session 17.
May 9 00:35:28.290673 sshd[4087]: Accepted publickey for core from 10.0.0.1 port 40042 ssh2: RSA SHA256:YkFjw59PeYd0iJo8o6yRNOqCW4DsIah6oVydwFHJQdU
May 9 00:35:28.292817 sshd[4087]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 00:35:28.297225 systemd-logind[1443]: New session 18 of user core.
May 9 00:35:28.305325 systemd[1]: Started session-18.scope - Session 18 of User core.
May 9 00:35:29.205260 sshd[4087]: pam_unix(sshd:session): session closed for user core
May 9 00:35:29.216824 systemd[1]: sshd@17-10.0.0.84:22-10.0.0.1:40042.service: Deactivated successfully.
May 9 00:35:29.218922 systemd[1]: session-18.scope: Deactivated successfully.
May 9 00:35:29.222496 systemd-logind[1443]: Session 18 logged out. Waiting for processes to exit.
May 9 00:35:29.237708 systemd[1]: Started sshd@18-10.0.0.84:22-10.0.0.1:40050.service - OpenSSH per-connection server daemon (10.0.0.1:40050).
May 9 00:35:29.238940 systemd-logind[1443]: Removed session 18.
May 9 00:35:29.270344 sshd[4109]: Accepted publickey for core from 10.0.0.1 port 40050 ssh2: RSA SHA256:YkFjw59PeYd0iJo8o6yRNOqCW4DsIah6oVydwFHJQdU
May 9 00:35:29.271962 sshd[4109]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 00:35:29.276155 systemd-logind[1443]: New session 19 of user core.
May 9 00:35:29.286333 systemd[1]: Started session-19.scope - Session 19 of User core.
May 9 00:35:29.541650 sshd[4109]: pam_unix(sshd:session): session closed for user core
May 9 00:35:29.552308 systemd[1]: sshd@18-10.0.0.84:22-10.0.0.1:40050.service: Deactivated successfully.
May 9 00:35:29.554954 systemd[1]: session-19.scope: Deactivated successfully.
May 9 00:35:29.558775 systemd-logind[1443]: Session 19 logged out. Waiting for processes to exit.
May 9 00:35:29.571727 systemd[1]: Started sshd@19-10.0.0.84:22-10.0.0.1:40056.service - OpenSSH per-connection server daemon (10.0.0.1:40056).
May 9 00:35:29.573060 systemd-logind[1443]: Removed session 19.
May 9 00:35:29.605394 sshd[4121]: Accepted publickey for core from 10.0.0.1 port 40056 ssh2: RSA SHA256:YkFjw59PeYd0iJo8o6yRNOqCW4DsIah6oVydwFHJQdU
May 9 00:35:29.607387 sshd[4121]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 00:35:29.613147 systemd-logind[1443]: New session 20 of user core.
May 9 00:35:29.627542 systemd[1]: Started session-20.scope - Session 20 of User core.
May 9 00:35:29.740001 sshd[4121]: pam_unix(sshd:session): session closed for user core
May 9 00:35:29.744780 systemd[1]: sshd@19-10.0.0.84:22-10.0.0.1:40056.service: Deactivated successfully.
May 9 00:35:29.747002 systemd[1]: session-20.scope: Deactivated successfully.
May 9 00:35:29.748082 systemd-logind[1443]: Session 20 logged out. Waiting for processes to exit.
May 9 00:35:29.749257 systemd-logind[1443]: Removed session 20.
May 9 00:35:34.755331 systemd[1]: Started sshd@20-10.0.0.84:22-10.0.0.1:40062.service - OpenSSH per-connection server daemon (10.0.0.1:40062).
May 9 00:35:34.796896 sshd[4137]: Accepted publickey for core from 10.0.0.1 port 40062 ssh2: RSA SHA256:YkFjw59PeYd0iJo8o6yRNOqCW4DsIah6oVydwFHJQdU
May 9 00:35:34.799005 sshd[4137]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 00:35:34.804542 systemd-logind[1443]: New session 21 of user core.
May 9 00:35:34.814426 systemd[1]: Started session-21.scope - Session 21 of User core.
May 9 00:35:34.957168 sshd[4137]: pam_unix(sshd:session): session closed for user core
May 9 00:35:34.961661 systemd[1]: sshd@20-10.0.0.84:22-10.0.0.1:40062.service: Deactivated successfully.
May 9 00:35:34.963761 systemd[1]: session-21.scope: Deactivated successfully.
May 9 00:35:34.964351 systemd-logind[1443]: Session 21 logged out. Waiting for processes to exit.
May 9 00:35:34.965252 systemd-logind[1443]: Removed session 21.
May 9 00:35:39.969611 systemd[1]: Started sshd@21-10.0.0.84:22-10.0.0.1:51162.service - OpenSSH per-connection server daemon (10.0.0.1:51162).
May 9 00:35:40.008739 sshd[4153]: Accepted publickey for core from 10.0.0.1 port 51162 ssh2: RSA SHA256:YkFjw59PeYd0iJo8o6yRNOqCW4DsIah6oVydwFHJQdU
May 9 00:35:40.010666 sshd[4153]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 00:35:40.015161 systemd-logind[1443]: New session 22 of user core.
May 9 00:35:40.024322 systemd[1]: Started session-22.scope - Session 22 of User core.
May 9 00:35:40.139742 sshd[4153]: pam_unix(sshd:session): session closed for user core
May 9 00:35:40.144171 systemd[1]: sshd@21-10.0.0.84:22-10.0.0.1:51162.service: Deactivated successfully.
May 9 00:35:40.146423 systemd[1]: session-22.scope: Deactivated successfully.
May 9 00:35:40.147108 systemd-logind[1443]: Session 22 logged out. Waiting for processes to exit.
May 9 00:35:40.148052 systemd-logind[1443]: Removed session 22.
May 9 00:35:41.814912 kubelet[2544]: E0509 00:35:41.814833 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 00:35:45.158942 systemd[1]: Started sshd@22-10.0.0.84:22-10.0.0.1:51174.service - OpenSSH per-connection server daemon (10.0.0.1:51174).
May 9 00:35:45.199506 sshd[4167]: Accepted publickey for core from 10.0.0.1 port 51174 ssh2: RSA SHA256:YkFjw59PeYd0iJo8o6yRNOqCW4DsIah6oVydwFHJQdU
May 9 00:35:45.201326 sshd[4167]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 00:35:45.205784 systemd-logind[1443]: New session 23 of user core.
May 9 00:35:45.214334 systemd[1]: Started session-23.scope - Session 23 of User core.
May 9 00:35:45.322749 sshd[4167]: pam_unix(sshd:session): session closed for user core
May 9 00:35:45.327401 systemd[1]: sshd@22-10.0.0.84:22-10.0.0.1:51174.service: Deactivated successfully.
May 9 00:35:45.330021 systemd[1]: session-23.scope: Deactivated successfully.
May 9 00:35:45.330910 systemd-logind[1443]: Session 23 logged out. Waiting for processes to exit.
May 9 00:35:45.331961 systemd-logind[1443]: Removed session 23.
May 9 00:35:50.355839 systemd[1]: Started sshd@23-10.0.0.84:22-10.0.0.1:35072.service - OpenSSH per-connection server daemon (10.0.0.1:35072).
May 9 00:35:50.409663 sshd[4181]: Accepted publickey for core from 10.0.0.1 port 35072 ssh2: RSA SHA256:YkFjw59PeYd0iJo8o6yRNOqCW4DsIah6oVydwFHJQdU
May 9 00:35:50.413525 sshd[4181]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 00:35:50.450037 systemd-logind[1443]: New session 24 of user core.
May 9 00:35:50.462614 systemd[1]: Started session-24.scope - Session 24 of User core.
May 9 00:35:50.597554 sshd[4181]: pam_unix(sshd:session): session closed for user core
May 9 00:35:50.612272 systemd[1]: sshd@23-10.0.0.84:22-10.0.0.1:35072.service: Deactivated successfully.
May 9 00:35:50.615040 systemd[1]: session-24.scope: Deactivated successfully.
May 9 00:35:50.618543 systemd-logind[1443]: Session 24 logged out. Waiting for processes to exit.
May 9 00:35:50.624754 systemd[1]: Started sshd@24-10.0.0.84:22-10.0.0.1:35088.service - OpenSSH per-connection server daemon (10.0.0.1:35088).
May 9 00:35:50.626837 systemd-logind[1443]: Removed session 24.
May 9 00:35:50.662523 sshd[4195]: Accepted publickey for core from 10.0.0.1 port 35088 ssh2: RSA SHA256:YkFjw59PeYd0iJo8o6yRNOqCW4DsIah6oVydwFHJQdU
May 9 00:35:50.664718 sshd[4195]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 00:35:50.670143 systemd-logind[1443]: New session 25 of user core.
May 9 00:35:50.680498 systemd[1]: Started session-25.scope - Session 25 of User core.
May 9 00:35:50.814030 kubelet[2544]: E0509 00:35:50.813913 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 00:35:50.814030 kubelet[2544]: E0509 00:35:50.813967 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 00:35:52.027577 containerd[1462]: time="2025-05-09T00:35:52.027504613Z" level=info msg="StopContainer for \"7267634b94f2f49614920f5a22f195acfb3597b6ef7fd3b90a49479943860788\" with timeout 30 (s)"
May 9 00:35:52.028332 containerd[1462]: time="2025-05-09T00:35:52.028291474Z" level=info msg="Stop container \"7267634b94f2f49614920f5a22f195acfb3597b6ef7fd3b90a49479943860788\" with signal terminated"
May 9 00:35:52.041941 systemd[1]: cri-containerd-7267634b94f2f49614920f5a22f195acfb3597b6ef7fd3b90a49479943860788.scope: Deactivated successfully.
May 9 00:35:52.068656 containerd[1462]: time="2025-05-09T00:35:52.068589773Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 9 00:35:52.071754 containerd[1462]: time="2025-05-09T00:35:52.071645223Z" level=info msg="StopContainer for \"308a4f82e07f5320f665d44e22eaed6f06085dfd1717399686ef79d960db58fa\" with timeout 2 (s)"
May 9 00:35:52.072074 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7267634b94f2f49614920f5a22f195acfb3597b6ef7fd3b90a49479943860788-rootfs.mount: Deactivated successfully.
May 9 00:35:52.072457 containerd[1462]: time="2025-05-09T00:35:52.072348808Z" level=info msg="Stop container \"308a4f82e07f5320f665d44e22eaed6f06085dfd1717399686ef79d960db58fa\" with signal terminated"
May 9 00:35:52.083929 containerd[1462]: time="2025-05-09T00:35:52.082925224Z" level=info msg="shim disconnected" id=7267634b94f2f49614920f5a22f195acfb3597b6ef7fd3b90a49479943860788 namespace=k8s.io
May 9 00:35:52.083929 containerd[1462]: time="2025-05-09T00:35:52.083083421Z" level=warning msg="cleaning up after shim disconnected" id=7267634b94f2f49614920f5a22f195acfb3597b6ef7fd3b90a49479943860788 namespace=k8s.io
May 9 00:35:52.083929 containerd[1462]: time="2025-05-09T00:35:52.083097759Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 9 00:35:52.083146 systemd-networkd[1385]: lxc_health: Link DOWN
May 9 00:35:52.083154 systemd-networkd[1385]: lxc_health: Lost carrier
May 9 00:35:52.105818 containerd[1462]: time="2025-05-09T00:35:52.105757779Z" level=info msg="StopContainer for \"7267634b94f2f49614920f5a22f195acfb3597b6ef7fd3b90a49479943860788\" returns successfully"
May 9 00:35:52.108692 systemd[1]: cri-containerd-308a4f82e07f5320f665d44e22eaed6f06085dfd1717399686ef79d960db58fa.scope: Deactivated successfully.
May 9 00:35:52.109067 systemd[1]: cri-containerd-308a4f82e07f5320f665d44e22eaed6f06085dfd1717399686ef79d960db58fa.scope: Consumed 7.362s CPU time.
May 9 00:35:52.110678 containerd[1462]: time="2025-05-09T00:35:52.110618007Z" level=info msg="StopPodSandbox for \"75bacdf5b3fedb20015c05fa9bcf783bae26fe3b17aa16fdb4f7721cfbf29a86\"" May 9 00:35:52.110678 containerd[1462]: time="2025-05-09T00:35:52.110666398Z" level=info msg="Container to stop \"7267634b94f2f49614920f5a22f195acfb3597b6ef7fd3b90a49479943860788\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 9 00:35:52.114439 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-75bacdf5b3fedb20015c05fa9bcf783bae26fe3b17aa16fdb4f7721cfbf29a86-shm.mount: Deactivated successfully. May 9 00:35:52.127398 systemd[1]: cri-containerd-75bacdf5b3fedb20015c05fa9bcf783bae26fe3b17aa16fdb4f7721cfbf29a86.scope: Deactivated successfully. May 9 00:35:52.137687 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-308a4f82e07f5320f665d44e22eaed6f06085dfd1717399686ef79d960db58fa-rootfs.mount: Deactivated successfully. May 9 00:35:52.145998 containerd[1462]: time="2025-05-09T00:35:52.145938295Z" level=info msg="shim disconnected" id=308a4f82e07f5320f665d44e22eaed6f06085dfd1717399686ef79d960db58fa namespace=k8s.io May 9 00:35:52.146411 containerd[1462]: time="2025-05-09T00:35:52.146266604Z" level=warning msg="cleaning up after shim disconnected" id=308a4f82e07f5320f665d44e22eaed6f06085dfd1717399686ef79d960db58fa namespace=k8s.io May 9 00:35:52.146411 containerd[1462]: time="2025-05-09T00:35:52.146293184Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 9 00:35:52.153574 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-75bacdf5b3fedb20015c05fa9bcf783bae26fe3b17aa16fdb4f7721cfbf29a86-rootfs.mount: Deactivated successfully. May 9 00:35:52.158158 containerd[1462]: time="2025-05-09T00:35:52.158074538Z" level=info msg="shim disconnected" id=75bacdf5b3fedb20015c05fa9bcf783bae26fe3b17aa16fdb4f7721cfbf29a86 namespace=k8s.io May 9 00:35:52.158313 containerd[1462]: time="2025-05-09T00:35:52.158153247Z" level=warning msg="cleaning up after shim disconnected" id=75bacdf5b3fedb20015c05fa9bcf783bae26fe3b17aa16fdb4f7721cfbf29a86 namespace=k8s.io May 9 00:35:52.158313 containerd[1462]: time="2025-05-09T00:35:52.158169918Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 9 00:35:52.165065 containerd[1462]: time="2025-05-09T00:35:52.165009301Z" level=warning msg="cleanup warnings time=\"2025-05-09T00:35:52Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io May 9 00:35:52.170016 containerd[1462]: time="2025-05-09T00:35:52.169964808Z" level=info msg="StopContainer for \"308a4f82e07f5320f665d44e22eaed6f06085dfd1717399686ef79d960db58fa\" returns successfully" May 9 00:35:52.170699 containerd[1462]: time="2025-05-09T00:35:52.170671839Z" level=info msg="StopPodSandbox for \"5c448e471921933d0249a5c8cef2b3c8a01c5e3af6dc9ac90de2a014a073cc8f\"" May 9 00:35:52.170918 containerd[1462]: time="2025-05-09T00:35:52.170808626Z" level=info msg="Container to stop \"2f28594479e77b1e71d52794c4df6eed28d06225883ad3ef209ce9fd0d07a893\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 9 00:35:52.170918 containerd[1462]: time="2025-05-09T00:35:52.170849534Z" level=info msg="Container to stop \"4d30e81209ba87098170869a81031650e1ef62dc5bdea69e8d7f86a00103ddac\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 9 00:35:52.170918 containerd[1462]: time="2025-05-09T00:35:52.170865273Z" level=info 
msg="Container to stop \"2f1e57157c00d1f03c6e6fd24e7da2f06f5d7853217a011baaff7b034922409a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 9 00:35:52.170918 containerd[1462]: time="2025-05-09T00:35:52.170875011Z" level=info msg="Container to stop \"2eb83c7ca762996816d206967c929dcd272ae3e8bc5062b7cc0c22396ae42c71\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 9 00:35:52.170918 containerd[1462]: time="2025-05-09T00:35:52.170888166Z" level=info msg="Container to stop \"308a4f82e07f5320f665d44e22eaed6f06085dfd1717399686ef79d960db58fa\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 9 00:35:52.179665 systemd[1]: cri-containerd-5c448e471921933d0249a5c8cef2b3c8a01c5e3af6dc9ac90de2a014a073cc8f.scope: Deactivated successfully. May 9 00:35:52.185574 containerd[1462]: time="2025-05-09T00:35:52.185208029Z" level=info msg="TearDown network for sandbox \"75bacdf5b3fedb20015c05fa9bcf783bae26fe3b17aa16fdb4f7721cfbf29a86\" successfully" May 9 00:35:52.185574 containerd[1462]: time="2025-05-09T00:35:52.185244607Z" level=info msg="StopPodSandbox for \"75bacdf5b3fedb20015c05fa9bcf783bae26fe3b17aa16fdb4f7721cfbf29a86\" returns successfully" May 9 00:35:52.208529 containerd[1462]: time="2025-05-09T00:35:52.208450515Z" level=info msg="shim disconnected" id=5c448e471921933d0249a5c8cef2b3c8a01c5e3af6dc9ac90de2a014a073cc8f namespace=k8s.io May 9 00:35:52.208529 containerd[1462]: time="2025-05-09T00:35:52.208520347Z" level=warning msg="cleaning up after shim disconnected" id=5c448e471921933d0249a5c8cef2b3c8a01c5e3af6dc9ac90de2a014a073cc8f namespace=k8s.io May 9 00:35:52.208529 containerd[1462]: time="2025-05-09T00:35:52.208533271Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 9 00:35:52.224954 containerd[1462]: time="2025-05-09T00:35:52.224896129Z" level=info msg="TearDown network for sandbox \"5c448e471921933d0249a5c8cef2b3c8a01c5e3af6dc9ac90de2a014a073cc8f\" successfully" May 9 00:35:52.225160 containerd[1462]: time="2025-05-09T00:35:52.225129267Z" level=info msg="StopPodSandbox for \"5c448e471921933d0249a5c8cef2b3c8a01c5e3af6dc9ac90de2a014a073cc8f\" returns successfully" May 9 00:35:52.260798 kubelet[2544]: I0509 00:35:52.260747 2544 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-twdbr\" (UniqueName: \"kubernetes.io/projected/27b090f7-bd71-4785-9b66-193cedcffa5c-kube-api-access-twdbr\") pod \"27b090f7-bd71-4785-9b66-193cedcffa5c\" (UID: \"27b090f7-bd71-4785-9b66-193cedcffa5c\") " May 9 00:35:52.261265 kubelet[2544]: I0509 00:35:52.260807 2544 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/27b090f7-bd71-4785-9b66-193cedcffa5c-cilium-config-path\") pod \"27b090f7-bd71-4785-9b66-193cedcffa5c\" (UID: \"27b090f7-bd71-4785-9b66-193cedcffa5c\") " May 9 00:35:52.264639 kubelet[2544]: I0509 00:35:52.264596 2544 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27b090f7-bd71-4785-9b66-193cedcffa5c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "27b090f7-bd71-4785-9b66-193cedcffa5c" (UID: "27b090f7-bd71-4785-9b66-193cedcffa5c"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 9 00:35:52.264695 kubelet[2544]: I0509 00:35:52.264674 2544 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27b090f7-bd71-4785-9b66-193cedcffa5c-kube-api-access-twdbr" (OuterVolumeSpecName: "kube-api-access-twdbr") pod "27b090f7-bd71-4785-9b66-193cedcffa5c" (UID: "27b090f7-bd71-4785-9b66-193cedcffa5c"). InnerVolumeSpecName "kube-api-access-twdbr". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 9 00:35:52.342397 kubelet[2544]: I0509 00:35:52.342332 2544 scope.go:117] "RemoveContainer" containerID="308a4f82e07f5320f665d44e22eaed6f06085dfd1717399686ef79d960db58fa" May 9 00:35:52.343548 containerd[1462]: time="2025-05-09T00:35:52.343515912Z" level=info msg="RemoveContainer for \"308a4f82e07f5320f665d44e22eaed6f06085dfd1717399686ef79d960db58fa\"" May 9 00:35:52.354629 containerd[1462]: time="2025-05-09T00:35:52.354584735Z" level=info msg="RemoveContainer for \"308a4f82e07f5320f665d44e22eaed6f06085dfd1717399686ef79d960db58fa\" returns successfully" May 9 00:35:52.354867 systemd[1]: Removed slice kubepods-besteffort-pod27b090f7_bd71_4785_9b66_193cedcffa5c.slice - libcontainer container kubepods-besteffort-pod27b090f7_bd71_4785_9b66_193cedcffa5c.slice. May 9 00:35:52.355138 kubelet[2544]: I0509 00:35:52.354865 2544 scope.go:117] "RemoveContainer" containerID="2eb83c7ca762996816d206967c929dcd272ae3e8bc5062b7cc0c22396ae42c71" May 9 00:35:52.356135 containerd[1462]: time="2025-05-09T00:35:52.356112290Z" level=info msg="RemoveContainer for \"2eb83c7ca762996816d206967c929dcd272ae3e8bc5062b7cc0c22396ae42c71\"" May 9 00:35:52.360028 containerd[1462]: time="2025-05-09T00:35:52.359982525Z" level=info msg="RemoveContainer for \"2eb83c7ca762996816d206967c929dcd272ae3e8bc5062b7cc0c22396ae42c71\" returns successfully" May 9 00:35:52.360217 kubelet[2544]: I0509 00:35:52.360169 2544 scope.go:117] "RemoveContainer" containerID="2f28594479e77b1e71d52794c4df6eed28d06225883ad3ef209ce9fd0d07a893" May 9 00:35:52.361066 kubelet[2544]: I0509 00:35:52.361036 2544 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c6a6e843-5cd8-4099-989f-6054d8e42957-host-proc-sys-kernel\") pod \"c6a6e843-5cd8-4099-989f-6054d8e42957\" (UID: \"c6a6e843-5cd8-4099-989f-6054d8e42957\") " May 9 00:35:52.361066 kubelet[2544]: I0509 00:35:52.361062 2544 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c6a6e843-5cd8-4099-989f-6054d8e42957-lib-modules\") pod \"c6a6e843-5cd8-4099-989f-6054d8e42957\" (UID: \"c6a6e843-5cd8-4099-989f-6054d8e42957\") " May 9 00:35:52.361297 kubelet[2544]: I0509 00:35:52.361078 2544 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c6a6e843-5cd8-4099-989f-6054d8e42957-bpf-maps\") pod \"c6a6e843-5cd8-4099-989f-6054d8e42957\" (UID: \"c6a6e843-5cd8-4099-989f-6054d8e42957\") " May 9 00:35:52.361297 kubelet[2544]: I0509 00:35:52.361098 2544 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c6a6e843-5cd8-4099-989f-6054d8e42957-hostproc\") pod \"c6a6e843-5cd8-4099-989f-6054d8e42957\" (UID: \"c6a6e843-5cd8-4099-989f-6054d8e42957\") " May 9 00:35:52.361297 kubelet[2544]: I0509 00:35:52.361122 2544 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c6a6e843-5cd8-4099-989f-6054d8e42957-cilium-config-path\") pod \"c6a6e843-5cd8-4099-989f-6054d8e42957\" (UID: \"c6a6e843-5cd8-4099-989f-6054d8e42957\") " May 9 00:35:52.361297 kubelet[2544]: I0509 00:35:52.361140 2544 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c6a6e843-5cd8-4099-989f-6054d8e42957-cilium-cgroup\") pod \"c6a6e843-5cd8-4099-989f-6054d8e42957\" (UID: \"c6a6e843-5cd8-4099-989f-6054d8e42957\") " May 9 00:35:52.361297 kubelet[2544]: I0509 00:35:52.361155 2544 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c6a6e843-5cd8-4099-989f-6054d8e42957-etc-cni-netd\") pod \"c6a6e843-5cd8-4099-989f-6054d8e42957\" (UID: \"c6a6e843-5cd8-4099-989f-6054d8e42957\") " May 9 00:35:52.361297 kubelet[2544]: I0509 00:35:52.361171 2544 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c6a6e843-5cd8-4099-989f-6054d8e42957-host-proc-sys-net\") pod \"c6a6e843-5cd8-4099-989f-6054d8e42957\" (UID: \"c6a6e843-5cd8-4099-989f-6054d8e42957\") " May 9 00:35:52.361449 kubelet[2544]: I0509 00:35:52.361167 2544 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6a6e843-5cd8-4099-989f-6054d8e42957-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "c6a6e843-5cd8-4099-989f-6054d8e42957" (UID: "c6a6e843-5cd8-4099-989f-6054d8e42957"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 9 00:35:52.361449 kubelet[2544]: I0509 00:35:52.361284 2544 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6a6e843-5cd8-4099-989f-6054d8e42957-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "c6a6e843-5cd8-4099-989f-6054d8e42957" (UID: "c6a6e843-5cd8-4099-989f-6054d8e42957"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 9 00:35:52.361449 kubelet[2544]: I0509 00:35:52.361345 2544 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6a6e843-5cd8-4099-989f-6054d8e42957-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "c6a6e843-5cd8-4099-989f-6054d8e42957" (UID: "c6a6e843-5cd8-4099-989f-6054d8e42957"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 9 00:35:52.361449 kubelet[2544]: I0509 00:35:52.361369 2544 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6a6e843-5cd8-4099-989f-6054d8e42957-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "c6a6e843-5cd8-4099-989f-6054d8e42957" (UID: "c6a6e843-5cd8-4099-989f-6054d8e42957"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 9 00:35:52.361449 kubelet[2544]: I0509 00:35:52.361391 2544 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6a6e843-5cd8-4099-989f-6054d8e42957-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "c6a6e843-5cd8-4099-989f-6054d8e42957" (UID: "c6a6e843-5cd8-4099-989f-6054d8e42957"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 9 00:35:52.361592 kubelet[2544]: I0509 00:35:52.361415 2544 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6a6e843-5cd8-4099-989f-6054d8e42957-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "c6a6e843-5cd8-4099-989f-6054d8e42957" (UID: "c6a6e843-5cd8-4099-989f-6054d8e42957"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 9 00:35:52.361592 kubelet[2544]: I0509 00:35:52.361434 2544 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6a6e843-5cd8-4099-989f-6054d8e42957-hostproc" (OuterVolumeSpecName: "hostproc") pod "c6a6e843-5cd8-4099-989f-6054d8e42957" (UID: "c6a6e843-5cd8-4099-989f-6054d8e42957"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 9 00:35:52.361644 containerd[1462]: time="2025-05-09T00:35:52.361569773Z" level=info msg="RemoveContainer for \"2f28594479e77b1e71d52794c4df6eed28d06225883ad3ef209ce9fd0d07a893\"" May 9 00:35:52.361982 kubelet[2544]: I0509 00:35:52.361765 2544 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6a6e843-5cd8-4099-989f-6054d8e42957-cni-path" (OuterVolumeSpecName: "cni-path") pod "c6a6e843-5cd8-4099-989f-6054d8e42957" (UID: "c6a6e843-5cd8-4099-989f-6054d8e42957"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 9 00:35:52.361982 kubelet[2544]: I0509 00:35:52.361188 2544 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c6a6e843-5cd8-4099-989f-6054d8e42957-cni-path\") pod \"c6a6e843-5cd8-4099-989f-6054d8e42957\" (UID: \"c6a6e843-5cd8-4099-989f-6054d8e42957\") " May 9 00:35:52.361982 kubelet[2544]: I0509 00:35:52.361838 2544 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c6a6e843-5cd8-4099-989f-6054d8e42957-xtables-lock\") pod \"c6a6e843-5cd8-4099-989f-6054d8e42957\" (UID: \"c6a6e843-5cd8-4099-989f-6054d8e42957\") " May 9 00:35:52.361982 kubelet[2544]: I0509 00:35:52.361880 2544 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c6a6e843-5cd8-4099-989f-6054d8e42957-clustermesh-secrets\") pod \"c6a6e843-5cd8-4099-989f-6054d8e42957\" (UID: \"c6a6e843-5cd8-4099-989f-6054d8e42957\") " May 9 00:35:52.361982 kubelet[2544]: I0509 00:35:52.361905 2544 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c6a6e843-5cd8-4099-989f-6054d8e42957-hubble-tls\") pod \"c6a6e843-5cd8-4099-989f-6054d8e42957\" (UID: \"c6a6e843-5cd8-4099-989f-6054d8e42957\") " May 9 00:35:52.361982 kubelet[2544]: I0509 00:35:52.361977 2544 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6a6e843-5cd8-4099-989f-6054d8e42957-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "c6a6e843-5cd8-4099-989f-6054d8e42957" (UID: "c6a6e843-5cd8-4099-989f-6054d8e42957"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 9 00:35:52.362961 kubelet[2544]: I0509 00:35:52.362437 2544 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c6a6e843-5cd8-4099-989f-6054d8e42957-cilium-run\") pod \"c6a6e843-5cd8-4099-989f-6054d8e42957\" (UID: \"c6a6e843-5cd8-4099-989f-6054d8e42957\") " May 9 00:35:52.362961 kubelet[2544]: I0509 00:35:52.362465 2544 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8qz9d\" (UniqueName: \"kubernetes.io/projected/c6a6e843-5cd8-4099-989f-6054d8e42957-kube-api-access-8qz9d\") pod \"c6a6e843-5cd8-4099-989f-6054d8e42957\" (UID: \"c6a6e843-5cd8-4099-989f-6054d8e42957\") " May 9 00:35:52.362961 kubelet[2544]: I0509 00:35:52.362551 2544 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-twdbr\" (UniqueName: \"kubernetes.io/projected/27b090f7-bd71-4785-9b66-193cedcffa5c-kube-api-access-twdbr\") on node \"localhost\" DevicePath \"\"" May 9 00:35:52.362961 kubelet[2544]: I0509 00:35:52.362589 2544 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c6a6e843-5cd8-4099-989f-6054d8e42957-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" May 9 00:35:52.362961 kubelet[2544]: I0509 00:35:52.362599 2544 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c6a6e843-5cd8-4099-989f-6054d8e42957-lib-modules\") on node \"localhost\" DevicePath \"\"" May 9 00:35:52.362961 kubelet[2544]: I0509 00:35:52.362611 2544 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c6a6e843-5cd8-4099-989f-6054d8e42957-bpf-maps\") on node \"localhost\" DevicePath \"\"" May 9 00:35:52.362961 kubelet[2544]: I0509 00:35:52.362620 2544 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c6a6e843-5cd8-4099-989f-6054d8e42957-hostproc\") on node \"localhost\" DevicePath \"\"" May 9 00:35:52.362961 kubelet[2544]: I0509 00:35:52.362629 2544 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c6a6e843-5cd8-4099-989f-6054d8e42957-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" May 9 00:35:52.363207 kubelet[2544]: I0509 00:35:52.362637 2544 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c6a6e843-5cd8-4099-989f-6054d8e42957-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" May 9 00:35:52.363207 kubelet[2544]: I0509 00:35:52.362645 2544 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c6a6e843-5cd8-4099-989f-6054d8e42957-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" May 9 00:35:52.363207 kubelet[2544]: I0509 00:35:52.362678 2544 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/27b090f7-bd71-4785-9b66-193cedcffa5c-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 9 00:35:52.363207 kubelet[2544]: I0509 00:35:52.362687 2544 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c6a6e843-5cd8-4099-989f-6054d8e42957-cni-path\") on node \"localhost\" DevicePath \"\"" May 9 00:35:52.363207 kubelet[2544]: I0509 00:35:52.362694 2544 reconciler_common.go:299] "Volume 
detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c6a6e843-5cd8-4099-989f-6054d8e42957-xtables-lock\") on node \"localhost\" DevicePath \"\"" May 9 00:35:52.365736 kubelet[2544]: I0509 00:35:52.365696 2544 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6a6e843-5cd8-4099-989f-6054d8e42957-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "c6a6e843-5cd8-4099-989f-6054d8e42957" (UID: "c6a6e843-5cd8-4099-989f-6054d8e42957"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 9 00:35:52.366395 kubelet[2544]: I0509 00:35:52.366367 2544 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6a6e843-5cd8-4099-989f-6054d8e42957-kube-api-access-8qz9d" (OuterVolumeSpecName: "kube-api-access-8qz9d") pod "c6a6e843-5cd8-4099-989f-6054d8e42957" (UID: "c6a6e843-5cd8-4099-989f-6054d8e42957"). InnerVolumeSpecName "kube-api-access-8qz9d". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 9 00:35:52.368007 kubelet[2544]: I0509 00:35:52.367976 2544 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6a6e843-5cd8-4099-989f-6054d8e42957-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "c6a6e843-5cd8-4099-989f-6054d8e42957" (UID: "c6a6e843-5cd8-4099-989f-6054d8e42957"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" May 9 00:35:52.368922 kubelet[2544]: I0509 00:35:52.368880 2544 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6a6e843-5cd8-4099-989f-6054d8e42957-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "c6a6e843-5cd8-4099-989f-6054d8e42957" (UID: "c6a6e843-5cd8-4099-989f-6054d8e42957"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 9 00:35:52.370938 kubelet[2544]: I0509 00:35:52.370870 2544 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c6a6e843-5cd8-4099-989f-6054d8e42957-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c6a6e843-5cd8-4099-989f-6054d8e42957" (UID: "c6a6e843-5cd8-4099-989f-6054d8e42957"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 9 00:35:52.374684 containerd[1462]: time="2025-05-09T00:35:52.374627690Z" level=info msg="RemoveContainer for \"2f28594479e77b1e71d52794c4df6eed28d06225883ad3ef209ce9fd0d07a893\" returns successfully" May 9 00:35:52.374991 kubelet[2544]: I0509 00:35:52.374951 2544 scope.go:117] "RemoveContainer" containerID="2f1e57157c00d1f03c6e6fd24e7da2f06f5d7853217a011baaff7b034922409a" May 9 00:35:52.376527 containerd[1462]: time="2025-05-09T00:35:52.376489123Z" level=info msg="RemoveContainer for \"2f1e57157c00d1f03c6e6fd24e7da2f06f5d7853217a011baaff7b034922409a\"" May 9 00:35:52.393178 containerd[1462]: time="2025-05-09T00:35:52.393122260Z" level=info msg="RemoveContainer for \"2f1e57157c00d1f03c6e6fd24e7da2f06f5d7853217a011baaff7b034922409a\" returns successfully" May 9 00:35:52.393494 kubelet[2544]: I0509 00:35:52.393447 2544 scope.go:117] "RemoveContainer" containerID="4d30e81209ba87098170869a81031650e1ef62dc5bdea69e8d7f86a00103ddac" May 9 00:35:52.394723 containerd[1462]: time="2025-05-09T00:35:52.394680092Z" level=info msg="RemoveContainer for \"4d30e81209ba87098170869a81031650e1ef62dc5bdea69e8d7f86a00103ddac\"" May 9 00:35:52.397960 containerd[1462]: time="2025-05-09T00:35:52.397933325Z" level=info msg="RemoveContainer for \"4d30e81209ba87098170869a81031650e1ef62dc5bdea69e8d7f86a00103ddac\" returns successfully" May 9 00:35:52.398212 kubelet[2544]: I0509 00:35:52.398121 2544 scope.go:117] "RemoveContainer" containerID="308a4f82e07f5320f665d44e22eaed6f06085dfd1717399686ef79d960db58fa" May 9 00:35:52.401140 containerd[1462]: time="2025-05-09T00:35:52.401086730Z" level=error msg="ContainerStatus for \"308a4f82e07f5320f665d44e22eaed6f06085dfd1717399686ef79d960db58fa\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"308a4f82e07f5320f665d44e22eaed6f06085dfd1717399686ef79d960db58fa\": not found" May 9 00:35:52.409917 kubelet[2544]: E0509 00:35:52.409893 2544 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"308a4f82e07f5320f665d44e22eaed6f06085dfd1717399686ef79d960db58fa\": not found" containerID="308a4f82e07f5320f665d44e22eaed6f06085dfd1717399686ef79d960db58fa" May 9 00:35:52.410036 kubelet[2544]: I0509 00:35:52.409933 2544 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"308a4f82e07f5320f665d44e22eaed6f06085dfd1717399686ef79d960db58fa"} err="failed to get container status \"308a4f82e07f5320f665d44e22eaed6f06085dfd1717399686ef79d960db58fa\": rpc error: code = NotFound desc = an error occurred when try to find container \"308a4f82e07f5320f665d44e22eaed6f06085dfd1717399686ef79d960db58fa\": not found" May 9 00:35:52.410036 kubelet[2544]: I0509 00:35:52.410035 2544 scope.go:117] "RemoveContainer" containerID="2eb83c7ca762996816d206967c929dcd272ae3e8bc5062b7cc0c22396ae42c71" May 9 00:35:52.410241 containerd[1462]: time="2025-05-09T00:35:52.410212547Z" level=error msg="ContainerStatus for \"2eb83c7ca762996816d206967c929dcd272ae3e8bc5062b7cc0c22396ae42c71\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2eb83c7ca762996816d206967c929dcd272ae3e8bc5062b7cc0c22396ae42c71\": not found" May 9 00:35:52.410393 kubelet[2544]: E0509 00:35:52.410359 2544 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container 
\"2eb83c7ca762996816d206967c929dcd272ae3e8bc5062b7cc0c22396ae42c71\": not found" containerID="2eb83c7ca762996816d206967c929dcd272ae3e8bc5062b7cc0c22396ae42c71" May 9 00:35:52.410393 kubelet[2544]: I0509 00:35:52.410380 2544 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2eb83c7ca762996816d206967c929dcd272ae3e8bc5062b7cc0c22396ae42c71"} err="failed to get container status \"2eb83c7ca762996816d206967c929dcd272ae3e8bc5062b7cc0c22396ae42c71\": rpc error: code = NotFound desc = an error occurred when try to find container \"2eb83c7ca762996816d206967c929dcd272ae3e8bc5062b7cc0c22396ae42c71\": not found" May 9 00:35:52.410452 kubelet[2544]: I0509 00:35:52.410394 2544 scope.go:117] "RemoveContainer" containerID="2f28594479e77b1e71d52794c4df6eed28d06225883ad3ef209ce9fd0d07a893" May 9 00:35:52.410629 containerd[1462]: time="2025-05-09T00:35:52.410572725Z" level=error msg="ContainerStatus for \"2f28594479e77b1e71d52794c4df6eed28d06225883ad3ef209ce9fd0d07a893\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2f28594479e77b1e71d52794c4df6eed28d06225883ad3ef209ce9fd0d07a893\": not found" May 9 00:35:52.410763 kubelet[2544]: E0509 00:35:52.410743 2544 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2f28594479e77b1e71d52794c4df6eed28d06225883ad3ef209ce9fd0d07a893\": not found" containerID="2f28594479e77b1e71d52794c4df6eed28d06225883ad3ef209ce9fd0d07a893" May 9 00:35:52.410810 kubelet[2544]: I0509 00:35:52.410763 2544 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2f28594479e77b1e71d52794c4df6eed28d06225883ad3ef209ce9fd0d07a893"} err="failed to get container status \"2f28594479e77b1e71d52794c4df6eed28d06225883ad3ef209ce9fd0d07a893\": rpc error: code = NotFound desc = an error occurred when try to find container \"2f28594479e77b1e71d52794c4df6eed28d06225883ad3ef209ce9fd0d07a893\": not found" May 9 00:35:52.410810 kubelet[2544]: I0509 00:35:52.410775 2544 scope.go:117] "RemoveContainer" containerID="2f1e57157c00d1f03c6e6fd24e7da2f06f5d7853217a011baaff7b034922409a" May 9 00:35:52.410986 containerd[1462]: time="2025-05-09T00:35:52.410953160Z" level=error msg="ContainerStatus for \"2f1e57157c00d1f03c6e6fd24e7da2f06f5d7853217a011baaff7b034922409a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2f1e57157c00d1f03c6e6fd24e7da2f06f5d7853217a011baaff7b034922409a\": not found" May 9 00:35:52.411105 kubelet[2544]: E0509 00:35:52.411075 2544 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2f1e57157c00d1f03c6e6fd24e7da2f06f5d7853217a011baaff7b034922409a\": not found" containerID="2f1e57157c00d1f03c6e6fd24e7da2f06f5d7853217a011baaff7b034922409a" May 9 00:35:52.411105 kubelet[2544]: I0509 00:35:52.411101 2544 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2f1e57157c00d1f03c6e6fd24e7da2f06f5d7853217a011baaff7b034922409a"} err="failed to get container status \"2f1e57157c00d1f03c6e6fd24e7da2f06f5d7853217a011baaff7b034922409a\": rpc error: code = NotFound desc = an error occurred when try to find container \"2f1e57157c00d1f03c6e6fd24e7da2f06f5d7853217a011baaff7b034922409a\": not found" May 9 00:35:52.411188 kubelet[2544]: I0509 00:35:52.411115 2544 scope.go:117] "RemoveContainer" 
containerID="4d30e81209ba87098170869a81031650e1ef62dc5bdea69e8d7f86a00103ddac" May 9 00:35:52.411388 containerd[1462]: time="2025-05-09T00:35:52.411331062Z" level=error msg="ContainerStatus for \"4d30e81209ba87098170869a81031650e1ef62dc5bdea69e8d7f86a00103ddac\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4d30e81209ba87098170869a81031650e1ef62dc5bdea69e8d7f86a00103ddac\": not found" May 9 00:35:52.411519 kubelet[2544]: E0509 00:35:52.411495 2544 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4d30e81209ba87098170869a81031650e1ef62dc5bdea69e8d7f86a00103ddac\": not found" containerID="4d30e81209ba87098170869a81031650e1ef62dc5bdea69e8d7f86a00103ddac" May 9 00:35:52.411564 kubelet[2544]: I0509 00:35:52.411517 2544 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4d30e81209ba87098170869a81031650e1ef62dc5bdea69e8d7f86a00103ddac"} err="failed to get container status \"4d30e81209ba87098170869a81031650e1ef62dc5bdea69e8d7f86a00103ddac\": rpc error: code = NotFound desc = an error occurred when try to find container \"4d30e81209ba87098170869a81031650e1ef62dc5bdea69e8d7f86a00103ddac\": not found" May 9 00:35:52.411564 kubelet[2544]: I0509 00:35:52.411531 2544 scope.go:117] "RemoveContainer" containerID="7267634b94f2f49614920f5a22f195acfb3597b6ef7fd3b90a49479943860788" May 9 00:35:52.413123 containerd[1462]: time="2025-05-09T00:35:52.413083480Z" level=info msg="RemoveContainer for \"7267634b94f2f49614920f5a22f195acfb3597b6ef7fd3b90a49479943860788\"" May 9 00:35:52.427464 containerd[1462]: time="2025-05-09T00:35:52.427406519Z" level=info msg="RemoveContainer for \"7267634b94f2f49614920f5a22f195acfb3597b6ef7fd3b90a49479943860788\" returns successfully" May 9 00:35:52.427651 kubelet[2544]: I0509 00:35:52.427614 2544 scope.go:117] "RemoveContainer" containerID="7267634b94f2f49614920f5a22f195acfb3597b6ef7fd3b90a49479943860788" May 9 00:35:52.427801 containerd[1462]: time="2025-05-09T00:35:52.427755646Z" level=error msg="ContainerStatus for \"7267634b94f2f49614920f5a22f195acfb3597b6ef7fd3b90a49479943860788\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7267634b94f2f49614920f5a22f195acfb3597b6ef7fd3b90a49479943860788\": not found" May 9 00:35:52.427969 kubelet[2544]: E0509 00:35:52.427864 2544 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7267634b94f2f49614920f5a22f195acfb3597b6ef7fd3b90a49479943860788\": not found" containerID="7267634b94f2f49614920f5a22f195acfb3597b6ef7fd3b90a49479943860788" May 9 00:35:52.427969 kubelet[2544]: I0509 00:35:52.427882 2544 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7267634b94f2f49614920f5a22f195acfb3597b6ef7fd3b90a49479943860788"} err="failed to get container status \"7267634b94f2f49614920f5a22f195acfb3597b6ef7fd3b90a49479943860788\": rpc error: code = NotFound desc = an error occurred when try to find container \"7267634b94f2f49614920f5a22f195acfb3597b6ef7fd3b90a49479943860788\": not found" May 9 00:35:52.463237 kubelet[2544]: I0509 00:35:52.463158 2544 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c6a6e843-5cd8-4099-989f-6054d8e42957-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" May 9 
00:35:52.463237 kubelet[2544]: I0509 00:35:52.463187 2544 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c6a6e843-5cd8-4099-989f-6054d8e42957-cilium-run\") on node \"localhost\" DevicePath \"\"" May 9 00:35:52.463237 kubelet[2544]: I0509 00:35:52.463219 2544 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c6a6e843-5cd8-4099-989f-6054d8e42957-hubble-tls\") on node \"localhost\" DevicePath \"\"" May 9 00:35:52.463237 kubelet[2544]: I0509 00:35:52.463230 2544 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8qz9d\" (UniqueName: \"kubernetes.io/projected/c6a6e843-5cd8-4099-989f-6054d8e42957-kube-api-access-8qz9d\") on node \"localhost\" DevicePath \"\"" May 9 00:35:52.463237 kubelet[2544]: I0509 00:35:52.463240 2544 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c6a6e843-5cd8-4099-989f-6054d8e42957-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 9 00:35:52.648328 systemd[1]: Removed slice kubepods-burstable-podc6a6e843_5cd8_4099_989f_6054d8e42957.slice - libcontainer container kubepods-burstable-podc6a6e843_5cd8_4099_989f_6054d8e42957.slice. May 9 00:35:52.648419 systemd[1]: kubepods-burstable-podc6a6e843_5cd8_4099_989f_6054d8e42957.slice: Consumed 7.479s CPU time. May 9 00:35:52.993894 kubelet[2544]: E0509 00:35:52.993762 2544 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 9 00:35:53.044482 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5c448e471921933d0249a5c8cef2b3c8a01c5e3af6dc9ac90de2a014a073cc8f-rootfs.mount: Deactivated successfully. May 9 00:35:53.044590 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5c448e471921933d0249a5c8cef2b3c8a01c5e3af6dc9ac90de2a014a073cc8f-shm.mount: Deactivated successfully. May 9 00:35:53.044671 systemd[1]: var-lib-kubelet-pods-c6a6e843\x2d5cd8\x2d4099\x2d989f\x2d6054d8e42957-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8qz9d.mount: Deactivated successfully. May 9 00:35:53.044762 systemd[1]: var-lib-kubelet-pods-c6a6e843\x2d5cd8\x2d4099\x2d989f\x2d6054d8e42957-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 9 00:35:53.044847 systemd[1]: var-lib-kubelet-pods-c6a6e843\x2d5cd8\x2d4099\x2d989f\x2d6054d8e42957-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 9 00:35:53.044931 systemd[1]: var-lib-kubelet-pods-27b090f7\x2dbd71\x2d4785\x2d9b66\x2d193cedcffa5c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dtwdbr.mount: Deactivated successfully. May 9 00:35:53.816098 kubelet[2544]: I0509 00:35:53.816038 2544 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27b090f7-bd71-4785-9b66-193cedcffa5c" path="/var/lib/kubelet/pods/27b090f7-bd71-4785-9b66-193cedcffa5c/volumes" May 9 00:35:53.816839 kubelet[2544]: I0509 00:35:53.816806 2544 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6a6e843-5cd8-4099-989f-6054d8e42957" path="/var/lib/kubelet/pods/c6a6e843-5cd8-4099-989f-6054d8e42957/volumes" May 9 00:35:53.989278 sshd[4195]: pam_unix(sshd:session): session closed for user core May 9 00:35:54.002598 systemd[1]: sshd@24-10.0.0.84:22-10.0.0.1:35088.service: Deactivated successfully. 
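The burst of NotFound errors above is kubelet double-checking, over the CRI, containers it has just removed: containerd answers each ContainerStatus query with gRPC code NotFound. A minimal sketch of the same query (not kubelet's actual code; the socket path is the containerd default and the container ID is truncated for illustration):

```go
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	"google.golang.org/grpc/status"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Default containerd CRI socket (assumption; adjust for your host).
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	defer cancel()

	// Query the status of an already-removed container; the ID stands in
	// for one of the 64-hex IDs in the log.
	_, err = runtimeapi.NewRuntimeServiceClient(conn).ContainerStatus(ctx,
		&runtimeapi.ContainerStatusRequest{ContainerId: "2eb83c7ca762..."})
	if st, ok := status.FromError(err); ok {
		fmt.Println(st.Code()) // prints NotFound for a removed container
	}
}
```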
May 9 00:35:54.004689 systemd[1]: session-25.scope: Deactivated successfully.
May 9 00:35:54.006482 systemd-logind[1443]: Session 25 logged out. Waiting for processes to exit.
May 9 00:35:54.016540 systemd[1]: Started sshd@25-10.0.0.84:22-10.0.0.1:35102.service - OpenSSH per-connection server daemon (10.0.0.1:35102).
May 9 00:35:54.017746 systemd-logind[1443]: Removed session 25.
May 9 00:35:54.056396 sshd[4356]: Accepted publickey for core from 10.0.0.1 port 35102 ssh2: RSA SHA256:YkFjw59PeYd0iJo8o6yRNOqCW4DsIah6oVydwFHJQdU
May 9 00:35:54.058119 sshd[4356]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 00:35:54.062751 systemd-logind[1443]: New session 26 of user core.
May 9 00:35:54.076497 systemd[1]: Started session-26.scope - Session 26 of User core.
May 9 00:35:54.660306 sshd[4356]: pam_unix(sshd:session): session closed for user core
May 9 00:35:54.671157 kubelet[2544]: I0509 00:35:54.671110 2544 memory_manager.go:355] "RemoveStaleState removing state" podUID="27b090f7-bd71-4785-9b66-193cedcffa5c" containerName="cilium-operator"
May 9 00:35:54.671157 kubelet[2544]: I0509 00:35:54.671149 2544 memory_manager.go:355] "RemoveStaleState removing state" podUID="c6a6e843-5cd8-4099-989f-6054d8e42957" containerName="cilium-agent"
May 9 00:35:54.676820 systemd[1]: sshd@25-10.0.0.84:22-10.0.0.1:35102.service: Deactivated successfully.
May 9 00:35:54.679264 systemd[1]: session-26.scope: Deactivated successfully.
May 9 00:35:54.688691 systemd-logind[1443]: Session 26 logged out. Waiting for processes to exit.
May 9 00:35:54.699596 systemd[1]: Started sshd@26-10.0.0.84:22-10.0.0.1:35118.service - OpenSSH per-connection server daemon (10.0.0.1:35118).
May 9 00:35:54.700959 systemd-logind[1443]: Removed session 26.
May 9 00:35:54.709409 systemd[1]: Created slice kubepods-burstable-pod4f88a5b4_3eda_4814_b000_8316f499bd7f.slice - libcontainer container kubepods-burstable-pod4f88a5b4_3eda_4814_b000_8316f499bd7f.slice.
May 9 00:35:54.739790 sshd[4371]: Accepted publickey for core from 10.0.0.1 port 35118 ssh2: RSA SHA256:YkFjw59PeYd0iJo8o6yRNOqCW4DsIah6oVydwFHJQdU
May 9 00:35:54.741803 sshd[4371]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 00:35:54.746280 systemd-logind[1443]: New session 27 of user core.
May 9 00:35:54.760371 systemd[1]: Started session-27.scope - Session 27 of User core.
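The `Created slice kubepods-burstable-pod4f88a5b4_...` line shows the cgroup naming convention at work: the pod's QoS class plus its UID with dashes mapped to underscores, since dashes act as separators in systemd unit names. A sketch of the convention as seen in this log (podSlice is a hypothetical helper, not a kubelet function):

```go
package main

import (
	"fmt"
	"strings"
)

// podSlice reproduces the systemd slice names visible in this log.
func podSlice(qosClass, podUID string) string {
	// Dashes are path separators in slice names, so the UID's dashes
	// are rewritten to underscores.
	return fmt.Sprintf("kubepods-%s-pod%s.slice",
		qosClass, strings.ReplaceAll(podUID, "-", "_"))
}

func main() {
	// => kubepods-burstable-pod4f88a5b4_3eda_4814_b000_8316f499bd7f.slice
	fmt.Println(podSlice("burstable", "4f88a5b4-3eda-4814-b000-8316f499bd7f"))
}
```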
May 9 00:35:54.775313 kubelet[2544]: I0509 00:35:54.775254 2544 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4f88a5b4-3eda-4814-b000-8316f499bd7f-xtables-lock\") pod \"cilium-6wclf\" (UID: \"4f88a5b4-3eda-4814-b000-8316f499bd7f\") " pod="kube-system/cilium-6wclf"
May 9 00:35:54.775313 kubelet[2544]: I0509 00:35:54.775302 2544 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4f88a5b4-3eda-4814-b000-8316f499bd7f-cilium-config-path\") pod \"cilium-6wclf\" (UID: \"4f88a5b4-3eda-4814-b000-8316f499bd7f\") " pod="kube-system/cilium-6wclf"
May 9 00:35:54.775425 kubelet[2544]: I0509 00:35:54.775330 2544 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4f88a5b4-3eda-4814-b000-8316f499bd7f-hubble-tls\") pod \"cilium-6wclf\" (UID: \"4f88a5b4-3eda-4814-b000-8316f499bd7f\") " pod="kube-system/cilium-6wclf"
May 9 00:35:54.775425 kubelet[2544]: I0509 00:35:54.775356 2544 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4f88a5b4-3eda-4814-b000-8316f499bd7f-cilium-run\") pod \"cilium-6wclf\" (UID: \"4f88a5b4-3eda-4814-b000-8316f499bd7f\") " pod="kube-system/cilium-6wclf"
May 9 00:35:54.775425 kubelet[2544]: I0509 00:35:54.775375 2544 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4f88a5b4-3eda-4814-b000-8316f499bd7f-etc-cni-netd\") pod \"cilium-6wclf\" (UID: \"4f88a5b4-3eda-4814-b000-8316f499bd7f\") " pod="kube-system/cilium-6wclf"
May 9 00:35:54.775520 kubelet[2544]: I0509 00:35:54.775429 2544 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4f88a5b4-3eda-4814-b000-8316f499bd7f-lib-modules\") pod \"cilium-6wclf\" (UID: \"4f88a5b4-3eda-4814-b000-8316f499bd7f\") " pod="kube-system/cilium-6wclf"
May 9 00:35:54.775520 kubelet[2544]: I0509 00:35:54.775463 2544 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4f88a5b4-3eda-4814-b000-8316f499bd7f-host-proc-sys-kernel\") pod \"cilium-6wclf\" (UID: \"4f88a5b4-3eda-4814-b000-8316f499bd7f\") " pod="kube-system/cilium-6wclf"
May 9 00:35:54.775520 kubelet[2544]: I0509 00:35:54.775495 2544 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4f88a5b4-3eda-4814-b000-8316f499bd7f-clustermesh-secrets\") pod \"cilium-6wclf\" (UID: \"4f88a5b4-3eda-4814-b000-8316f499bd7f\") " pod="kube-system/cilium-6wclf"
May 9 00:35:54.775520 kubelet[2544]: I0509 00:35:54.775515 2544 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4f88a5b4-3eda-4814-b000-8316f499bd7f-host-proc-sys-net\") pod \"cilium-6wclf\" (UID: \"4f88a5b4-3eda-4814-b000-8316f499bd7f\") " pod="kube-system/cilium-6wclf"
May 9 00:35:54.775602 kubelet[2544]: I0509 00:35:54.775538 2544 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6d7bw\" (UniqueName: \"kubernetes.io/projected/4f88a5b4-3eda-4814-b000-8316f499bd7f-kube-api-access-6d7bw\") pod \"cilium-6wclf\" (UID: \"4f88a5b4-3eda-4814-b000-8316f499bd7f\") " pod="kube-system/cilium-6wclf"
May 9 00:35:54.775602 kubelet[2544]: I0509 00:35:54.775565 2544 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4f88a5b4-3eda-4814-b000-8316f499bd7f-cni-path\") pod \"cilium-6wclf\" (UID: \"4f88a5b4-3eda-4814-b000-8316f499bd7f\") " pod="kube-system/cilium-6wclf"
May 9 00:35:54.775643 kubelet[2544]: I0509 00:35:54.775601 2544 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4f88a5b4-3eda-4814-b000-8316f499bd7f-bpf-maps\") pod \"cilium-6wclf\" (UID: \"4f88a5b4-3eda-4814-b000-8316f499bd7f\") " pod="kube-system/cilium-6wclf"
May 9 00:35:54.775643 kubelet[2544]: I0509 00:35:54.775622 2544 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/4f88a5b4-3eda-4814-b000-8316f499bd7f-cilium-ipsec-secrets\") pod \"cilium-6wclf\" (UID: \"4f88a5b4-3eda-4814-b000-8316f499bd7f\") " pod="kube-system/cilium-6wclf"
May 9 00:35:54.775695 kubelet[2544]: I0509 00:35:54.775642 2544 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4f88a5b4-3eda-4814-b000-8316f499bd7f-hostproc\") pod \"cilium-6wclf\" (UID: \"4f88a5b4-3eda-4814-b000-8316f499bd7f\") " pod="kube-system/cilium-6wclf"
May 9 00:35:54.775695 kubelet[2544]: I0509 00:35:54.775661 2544 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4f88a5b4-3eda-4814-b000-8316f499bd7f-cilium-cgroup\") pod \"cilium-6wclf\" (UID: \"4f88a5b4-3eda-4814-b000-8316f499bd7f\") " pod="kube-system/cilium-6wclf"
May 9 00:35:54.811131 sshd[4371]: pam_unix(sshd:session): session closed for user core
May 9 00:35:54.822520 systemd[1]: sshd@26-10.0.0.84:22-10.0.0.1:35118.service: Deactivated successfully.
May 9 00:35:54.824747 systemd[1]: session-27.scope: Deactivated successfully.
May 9 00:35:54.826430 systemd-logind[1443]: Session 27 logged out. Waiting for processes to exit.
May 9 00:35:54.832587 systemd[1]: Started sshd@27-10.0.0.84:22-10.0.0.1:35124.service - OpenSSH per-connection server daemon (10.0.0.1:35124).
May 9 00:35:54.834316 systemd-logind[1443]: Removed session 27.
May 9 00:35:54.866290 sshd[4380]: Accepted publickey for core from 10.0.0.1 port 35124 ssh2: RSA SHA256:YkFjw59PeYd0iJo8o6yRNOqCW4DsIah6oVydwFHJQdU
May 9 00:35:54.867931 sshd[4380]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 00:35:54.872551 systemd-logind[1443]: New session 28 of user core.
May 9 00:35:54.881436 systemd[1]: Started session-28.scope - Session 28 of User core.
May 9 00:35:55.014821 kubelet[2544]: E0509 00:35:55.014693 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 00:35:55.015988 containerd[1462]: time="2025-05-09T00:35:55.015487733Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6wclf,Uid:4f88a5b4-3eda-4814-b000-8316f499bd7f,Namespace:kube-system,Attempt:0,}"
May 9 00:35:55.039351 containerd[1462]: time="2025-05-09T00:35:55.038630633Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 9 00:35:55.039351 containerd[1462]: time="2025-05-09T00:35:55.039318507Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 9 00:35:55.039351 containerd[1462]: time="2025-05-09T00:35:55.039334438Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 9 00:35:55.039562 containerd[1462]: time="2025-05-09T00:35:55.039431711Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 9 00:35:55.068357 systemd[1]: Started cri-containerd-03aaefb440cb70807c486ff9f69951244ca737e971f2b22f2c2e2be9776c9d24.scope - libcontainer container 03aaefb440cb70807c486ff9f69951244ca737e971f2b22f2c2e2be9776c9d24.
May 9 00:35:55.095377 containerd[1462]: time="2025-05-09T00:35:55.095315704Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6wclf,Uid:4f88a5b4-3eda-4814-b000-8316f499bd7f,Namespace:kube-system,Attempt:0,} returns sandbox id \"03aaefb440cb70807c486ff9f69951244ca737e971f2b22f2c2e2be9776c9d24\""
May 9 00:35:55.096392 kubelet[2544]: E0509 00:35:55.096358 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 00:35:55.099552 containerd[1462]: time="2025-05-09T00:35:55.099516891Z" level=info msg="CreateContainer within sandbox \"03aaefb440cb70807c486ff9f69951244ca737e971f2b22f2c2e2be9776c9d24\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 9 00:35:55.115506 containerd[1462]: time="2025-05-09T00:35:55.115452498Z" level=info msg="CreateContainer within sandbox \"03aaefb440cb70807c486ff9f69951244ca737e971f2b22f2c2e2be9776c9d24\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"6c60f29e81a2d9141178f5cfd49ad286911c623c9c12c5d72ef456d6e59f7ffb\""
May 9 00:35:55.116052 containerd[1462]: time="2025-05-09T00:35:55.116019755Z" level=info msg="StartContainer for \"6c60f29e81a2d9141178f5cfd49ad286911c623c9c12c5d72ef456d6e59f7ffb\""
May 9 00:35:55.148356 systemd[1]: Started cri-containerd-6c60f29e81a2d9141178f5cfd49ad286911c623c9c12c5d72ef456d6e59f7ffb.scope - libcontainer container 6c60f29e81a2d9141178f5cfd49ad286911c623c9c12c5d72ef456d6e59f7ffb.
May 9 00:35:55.179800 containerd[1462]: time="2025-05-09T00:35:55.179757568Z" level=info msg="StartContainer for \"6c60f29e81a2d9141178f5cfd49ad286911c623c9c12c5d72ef456d6e59f7ffb\" returns successfully"
May 9 00:35:55.191676 systemd[1]: cri-containerd-6c60f29e81a2d9141178f5cfd49ad286911c623c9c12c5d72ef456d6e59f7ffb.scope: Deactivated successfully.
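The recurring dns.go:153 errors are kubelet noting that the node's resolv.conf lists more nameservers than the three glibc supports, so only the first three (1.1.1.1, 1.0.0.1, 8.8.8.8) are propagated into pod resolv.conf files. A minimal sketch of that trimming rule (illustrative, not kubelet's code; the fourth server is a hypothetical stand-in for the omitted entry):

```go
package main

import "fmt"

// capNameservers keeps at most three entries, mirroring the glibc
// three-nameserver limit kubelet enforces for a pod's resolv.conf.
func capNameservers(ns []string) []string {
	const maxNameservers = 3
	if len(ns) > maxNameservers {
		return ns[:maxNameservers]
	}
	return ns
}

func main() {
	// "9.9.9.9" stands in for whichever nameserver the log says was omitted.
	fmt.Println(capNameservers([]string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "9.9.9.9"}))
}
```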
May 9 00:35:55.227257 containerd[1462]: time="2025-05-09T00:35:55.227149742Z" level=info msg="shim disconnected" id=6c60f29e81a2d9141178f5cfd49ad286911c623c9c12c5d72ef456d6e59f7ffb namespace=k8s.io
May 9 00:35:55.227257 containerd[1462]: time="2025-05-09T00:35:55.227228209Z" level=warning msg="cleaning up after shim disconnected" id=6c60f29e81a2d9141178f5cfd49ad286911c623c9c12c5d72ef456d6e59f7ffb namespace=k8s.io
May 9 00:35:55.227257 containerd[1462]: time="2025-05-09T00:35:55.227237026Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 9 00:35:55.353327 kubelet[2544]: E0509 00:35:55.353286 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 00:35:55.354965 containerd[1462]: time="2025-05-09T00:35:55.354923829Z" level=info msg="CreateContainer within sandbox \"03aaefb440cb70807c486ff9f69951244ca737e971f2b22f2c2e2be9776c9d24\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 9 00:35:55.371023 containerd[1462]: time="2025-05-09T00:35:55.370965837Z" level=info msg="CreateContainer within sandbox \"03aaefb440cb70807c486ff9f69951244ca737e971f2b22f2c2e2be9776c9d24\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"56bd05d69f68fb9a3183336eac8eaf2a060839c746141ecc32428c57d2d31289\""
May 9 00:35:55.371605 containerd[1462]: time="2025-05-09T00:35:55.371579272Z" level=info msg="StartContainer for \"56bd05d69f68fb9a3183336eac8eaf2a060839c746141ecc32428c57d2d31289\""
May 9 00:35:55.403335 systemd[1]: Started cri-containerd-56bd05d69f68fb9a3183336eac8eaf2a060839c746141ecc32428c57d2d31289.scope - libcontainer container 56bd05d69f68fb9a3183336eac8eaf2a060839c746141ecc32428c57d2d31289.
May 9 00:35:55.438845 systemd[1]: cri-containerd-56bd05d69f68fb9a3183336eac8eaf2a060839c746141ecc32428c57d2d31289.scope: Deactivated successfully.
May 9 00:35:55.485391 containerd[1462]: time="2025-05-09T00:35:55.485332805Z" level=info msg="StartContainer for \"56bd05d69f68fb9a3183336eac8eaf2a060839c746141ecc32428c57d2d31289\" returns successfully"
May 9 00:35:55.628103 containerd[1462]: time="2025-05-09T00:35:55.627920048Z" level=info msg="shim disconnected" id=56bd05d69f68fb9a3183336eac8eaf2a060839c746141ecc32428c57d2d31289 namespace=k8s.io
May 9 00:35:55.628103 containerd[1462]: time="2025-05-09T00:35:55.627989619Z" level=warning msg="cleaning up after shim disconnected" id=56bd05d69f68fb9a3183336eac8eaf2a060839c746141ecc32428c57d2d31289 namespace=k8s.io
May 9 00:35:55.628103 containerd[1462]: time="2025-05-09T00:35:55.628000991Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 9 00:35:56.356327 kubelet[2544]: E0509 00:35:56.356290 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 00:35:56.358807 containerd[1462]: time="2025-05-09T00:35:56.358759964Z" level=info msg="CreateContainer within sandbox \"03aaefb440cb70807c486ff9f69951244ca737e971f2b22f2c2e2be9776c9d24\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 9 00:35:56.380767 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4129327403.mount: Deactivated successfully.
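Each init step (mount-cgroup, apply-sysctl-overwrites, and so on) runs once and exits, which is why every StartContainer success is followed by a scope deactivation and a `shim disconnected` cleanup trio. A hedged sketch of watching the corresponding task-exit events with the containerd 1.x Go client (socket path and filter string are assumptions):

```go
package main

import (
	"context"
	"fmt"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		panic(err)
	}
	defer client.Close()

	// CRI-managed containers live in the "k8s.io" namespace seen in the log.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Task-exit events precede the "shim disconnected" teardown above.
	ch, errs := client.Subscribe(ctx, `topic=="/tasks/exit"`)
	for {
		select {
		case e := <-ch:
			fmt.Println(e.Timestamp, e.Topic)
		case err := <-errs:
			panic(err)
		}
	}
}
```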
May 9 00:35:56.384515 containerd[1462]: time="2025-05-09T00:35:56.384462540Z" level=info msg="CreateContainer within sandbox \"03aaefb440cb70807c486ff9f69951244ca737e971f2b22f2c2e2be9776c9d24\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"22c4eaaed18eafe69846edcb0ce39f825ab79b53802a404f984642f98f47b2b2\""
May 9 00:35:56.385065 containerd[1462]: time="2025-05-09T00:35:56.385029837Z" level=info msg="StartContainer for \"22c4eaaed18eafe69846edcb0ce39f825ab79b53802a404f984642f98f47b2b2\""
May 9 00:35:56.423424 systemd[1]: Started cri-containerd-22c4eaaed18eafe69846edcb0ce39f825ab79b53802a404f984642f98f47b2b2.scope - libcontainer container 22c4eaaed18eafe69846edcb0ce39f825ab79b53802a404f984642f98f47b2b2.
May 9 00:35:56.454803 containerd[1462]: time="2025-05-09T00:35:56.454762133Z" level=info msg="StartContainer for \"22c4eaaed18eafe69846edcb0ce39f825ab79b53802a404f984642f98f47b2b2\" returns successfully"
May 9 00:35:56.457004 systemd[1]: cri-containerd-22c4eaaed18eafe69846edcb0ce39f825ab79b53802a404f984642f98f47b2b2.scope: Deactivated successfully.
May 9 00:35:56.481523 containerd[1462]: time="2025-05-09T00:35:56.481447707Z" level=info msg="shim disconnected" id=22c4eaaed18eafe69846edcb0ce39f825ab79b53802a404f984642f98f47b2b2 namespace=k8s.io
May 9 00:35:56.481523 containerd[1462]: time="2025-05-09T00:35:56.481505315Z" level=warning msg="cleaning up after shim disconnected" id=22c4eaaed18eafe69846edcb0ce39f825ab79b53802a404f984642f98f47b2b2 namespace=k8s.io
May 9 00:35:56.481523 containerd[1462]: time="2025-05-09T00:35:56.481513551Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 9 00:35:56.883954 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-22c4eaaed18eafe69846edcb0ce39f825ab79b53802a404f984642f98f47b2b2-rootfs.mount: Deactivated successfully.
May 9 00:35:57.359948 kubelet[2544]: E0509 00:35:57.359912 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 00:35:57.361667 containerd[1462]: time="2025-05-09T00:35:57.361625622Z" level=info msg="CreateContainer within sandbox \"03aaefb440cb70807c486ff9f69951244ca737e971f2b22f2c2e2be9776c9d24\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 9 00:35:57.498544 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3896019301.mount: Deactivated successfully.
May 9 00:35:57.505380 containerd[1462]: time="2025-05-09T00:35:57.505317783Z" level=info msg="CreateContainer within sandbox \"03aaefb440cb70807c486ff9f69951244ca737e971f2b22f2c2e2be9776c9d24\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"d04b85cb34a93d53aaafde52af7f2f0e53c788041d967666d09006920221f80d\""
May 9 00:35:57.506160 containerd[1462]: time="2025-05-09T00:35:57.505951927Z" level=info msg="StartContainer for \"d04b85cb34a93d53aaafde52af7f2f0e53c788041d967666d09006920221f80d\""
May 9 00:35:57.534414 systemd[1]: Started cri-containerd-d04b85cb34a93d53aaafde52af7f2f0e53c788041d967666d09006920221f80d.scope - libcontainer container d04b85cb34a93d53aaafde52af7f2f0e53c788041d967666d09006920221f80d.
May 9 00:35:57.563360 systemd[1]: cri-containerd-d04b85cb34a93d53aaafde52af7f2f0e53c788041d967666d09006920221f80d.scope: Deactivated successfully.
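Between apply-sysctl-overwrites and clean-cilium-state runs mount-bpf-fs, which ensures the BPF filesystem is mounted at /sys/fs/bpf so Cilium's maps outlive agent restarts. A minimal Go equivalent of that mount (an assumption about what the init container does, not something read from this log; requires CAP_SYS_ADMIN):

```go
package main

import (
	"log"

	"golang.org/x/sys/unix"
)

func main() {
	// Equivalent of `mount -t bpf bpffs /sys/fs/bpf`.
	if err := unix.Mount("bpffs", "/sys/fs/bpf", "bpf", 0, ""); err != nil {
		log.Fatal(err)
	}
}
```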
May 9 00:35:57.567093 containerd[1462]: time="2025-05-09T00:35:57.567052052Z" level=info msg="StartContainer for \"d04b85cb34a93d53aaafde52af7f2f0e53c788041d967666d09006920221f80d\" returns successfully"
May 9 00:35:57.590599 containerd[1462]: time="2025-05-09T00:35:57.590528044Z" level=info msg="shim disconnected" id=d04b85cb34a93d53aaafde52af7f2f0e53c788041d967666d09006920221f80d namespace=k8s.io
May 9 00:35:57.590599 containerd[1462]: time="2025-05-09T00:35:57.590585923Z" level=warning msg="cleaning up after shim disconnected" id=d04b85cb34a93d53aaafde52af7f2f0e53c788041d967666d09006920221f80d namespace=k8s.io
May 9 00:35:57.590599 containerd[1462]: time="2025-05-09T00:35:57.590595902Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 9 00:35:57.883737 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d04b85cb34a93d53aaafde52af7f2f0e53c788041d967666d09006920221f80d-rootfs.mount: Deactivated successfully.
May 9 00:35:57.994984 kubelet[2544]: E0509 00:35:57.994931 2544 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 9 00:35:58.365716 kubelet[2544]: E0509 00:35:58.365677 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 00:35:58.368334 containerd[1462]: time="2025-05-09T00:35:58.368270773Z" level=info msg="CreateContainer within sandbox \"03aaefb440cb70807c486ff9f69951244ca737e971f2b22f2c2e2be9776c9d24\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 9 00:35:58.570585 containerd[1462]: time="2025-05-09T00:35:58.570522528Z" level=info msg="CreateContainer within sandbox \"03aaefb440cb70807c486ff9f69951244ca737e971f2b22f2c2e2be9776c9d24\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"d3cd511b34aeb731528578045cd8f97cfb6d7c94a5224e782f165d6d30b76390\""
May 9 00:35:58.572273 containerd[1462]: time="2025-05-09T00:35:58.571147403Z" level=info msg="StartContainer for \"d3cd511b34aeb731528578045cd8f97cfb6d7c94a5224e782f165d6d30b76390\""
May 9 00:35:58.608350 systemd[1]: Started cri-containerd-d3cd511b34aeb731528578045cd8f97cfb6d7c94a5224e782f165d6d30b76390.scope - libcontainer container d3cd511b34aeb731528578045cd8f97cfb6d7c94a5224e782f165d6d30b76390.
May 9 00:35:58.640496 containerd[1462]: time="2025-05-09T00:35:58.640071453Z" level=info msg="StartContainer for \"d3cd511b34aeb731528578045cd8f97cfb6d7c94a5224e782f165d6d30b76390\" returns successfully"
May 9 00:35:59.077255 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
May 9 00:35:59.371097 kubelet[2544]: E0509 00:35:59.370957 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 00:35:59.814539 kubelet[2544]: E0509 00:35:59.814479 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 00:36:00.476121 kubelet[2544]: I0509 00:36:00.476048 2544 setters.go:602] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-09T00:36:00Z","lastTransitionTime":"2025-05-09T00:36:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
May 9 00:36:01.016145 kubelet[2544]: E0509 00:36:01.016030 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 00:36:02.476403 systemd-networkd[1385]: lxc_health: Link UP
May 9 00:36:02.489709 systemd-networkd[1385]: lxc_health: Gained carrier
May 9 00:36:03.017909 kubelet[2544]: E0509 00:36:03.016730 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 00:36:03.038891 kubelet[2544]: I0509 00:36:03.038775 2544 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-6wclf" podStartSLOduration=9.03874379 podStartE2EDuration="9.03874379s" podCreationTimestamp="2025-05-09 00:35:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-09 00:35:59.386412447 +0000 UTC m=+91.877572352" watchObservedRunningTime="2025-05-09 00:36:03.03874379 +0000 UTC m=+95.529903685"
May 9 00:36:03.379879 kubelet[2544]: E0509 00:36:03.379823 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 00:36:03.508590 systemd-networkd[1385]: lxc_health: Gained IPv6LL
May 9 00:36:04.383514 kubelet[2544]: E0509 00:36:04.383437 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 00:36:04.813928 kubelet[2544]: E0509 00:36:04.813878 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 9 00:36:09.786353 sshd[4380]: pam_unix(sshd:session): session closed for user core
May 9 00:36:09.790748 systemd[1]: sshd@27-10.0.0.84:22-10.0.0.1:35124.service: Deactivated successfully.
May 9 00:36:09.793258 systemd[1]: session-28.scope: Deactivated successfully.
May 9 00:36:09.794052 systemd-logind[1443]: Session 28 logged out. Waiting for processes to exit.
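The pod_startup_latency_tracker line closes the loop on the whole sequence: cilium-6wclf was created at 00:35:54, its five containers started in order (mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, clean-cilium-state, cilium-agent), and the watch observed it running 9.03874379s later with no image pulls (both pull timestamps are the zero time). The reported duration is simply the watch-observed running time minus the creation timestamp, as this small check of the log's own numbers confirms:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps taken from the pod_startup_latency_tracker line above.
	created, _ := time.Parse(time.RFC3339, "2025-05-09T00:35:54Z")
	observed, _ := time.Parse(time.RFC3339Nano, "2025-05-09T00:36:03.03874379Z")
	fmt.Println(observed.Sub(created)) // 9.03874379s = podStartSLOduration
}
```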
May 9 00:36:09.795044 systemd-logind[1443]: Removed session 28.