Mar 7 01:49:39.813522 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Mar 6 22:58:19 -00 2026 Mar 7 01:49:39.813822 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=531e046a631dbba7b4aae1b7955ffa961f5ce7d570e89a624d767cf739ab70b5 Mar 7 01:49:39.813840 kernel: BIOS-provided physical RAM map: Mar 7 01:49:39.813849 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Mar 7 01:49:39.813857 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Mar 7 01:49:39.813866 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Mar 7 01:49:39.813875 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Mar 7 01:49:39.813884 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Mar 7 01:49:39.813892 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable Mar 7 01:49:39.813902 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS Mar 7 01:49:39.813916 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable Mar 7 01:49:39.813925 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved Mar 7 01:49:39.813934 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20 Mar 7 01:49:39.813943 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved Mar 7 01:49:39.813953 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data Mar 7 01:49:39.813962 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Mar 7 01:49:39.813974 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable Mar 7 01:49:39.813983 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved Mar 7 01:49:39.813993 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Mar 7 01:49:39.814002 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Mar 7 01:49:39.814011 kernel: NX (Execute Disable) protection: active Mar 7 01:49:39.814019 kernel: APIC: Static calls initialized Mar 7 01:49:39.814028 kernel: efi: EFI v2.7 by EDK II Mar 7 01:49:39.814037 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b675198 Mar 7 01:49:39.814046 kernel: SMBIOS 2.8 present. 
Mar 7 01:49:39.814055 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015 Mar 7 01:49:39.814064 kernel: Hypervisor detected: KVM Mar 7 01:49:39.814076 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Mar 7 01:49:39.814085 kernel: kvm-clock: using sched offset of 16588439115 cycles Mar 7 01:49:39.814095 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Mar 7 01:49:39.814104 kernel: tsc: Detected 2445.426 MHz processor Mar 7 01:49:39.814114 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Mar 7 01:49:39.814124 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Mar 7 01:49:39.814133 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000 Mar 7 01:49:39.814143 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Mar 7 01:49:39.814152 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Mar 7 01:49:39.814164 kernel: Using GB pages for direct mapping Mar 7 01:49:39.814173 kernel: Secure boot disabled Mar 7 01:49:39.814183 kernel: ACPI: Early table checksum verification disabled Mar 7 01:49:39.814192 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Mar 7 01:49:39.814206 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Mar 7 01:49:39.814216 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Mar 7 01:49:39.814226 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 7 01:49:39.814239 kernel: ACPI: FACS 0x000000009CBDD000 000040 Mar 7 01:49:39.814249 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 7 01:49:39.814259 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 7 01:49:39.814268 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 7 01:49:39.814278 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 7 01:49:39.814288 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) Mar 7 01:49:39.814297 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] Mar 7 01:49:39.814310 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9] Mar 7 01:49:39.814322 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Mar 7 01:49:39.814333 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] Mar 7 01:49:39.814343 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] Mar 7 01:49:39.814353 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] Mar 7 01:49:39.814362 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] Mar 7 01:49:39.814372 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] Mar 7 01:49:39.814382 kernel: No NUMA configuration found Mar 7 01:49:39.814391 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff] Mar 7 01:49:39.814407 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff] Mar 7 01:49:39.814419 kernel: Zone ranges: Mar 7 01:49:39.814431 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Mar 7 01:49:39.814442 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff] Mar 7 01:49:39.814455 kernel: Normal empty Mar 7 01:49:39.814466 kernel: Movable zone start for each node Mar 7 01:49:39.814477 kernel: Early memory node ranges Mar 7 01:49:39.814488 kernel: node 0: [mem 
0x0000000000001000-0x000000000009ffff] Mar 7 01:49:39.814499 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Mar 7 01:49:39.814516 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Mar 7 01:49:39.814527 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff] Mar 7 01:49:39.814538 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff] Mar 7 01:49:39.814550 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff] Mar 7 01:49:39.814560 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff] Mar 7 01:49:39.814572 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Mar 7 01:49:39.814584 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Mar 7 01:49:39.814595 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Mar 7 01:49:39.814607 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Mar 7 01:49:39.814623 kernel: On node 0, zone DMA: 240 pages in unavailable ranges Mar 7 01:49:39.814635 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges Mar 7 01:49:39.814646 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges Mar 7 01:49:39.814658 kernel: ACPI: PM-Timer IO Port: 0x608 Mar 7 01:49:39.814670 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Mar 7 01:49:39.814793 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Mar 7 01:49:39.814804 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Mar 7 01:49:39.814814 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Mar 7 01:49:39.814824 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Mar 7 01:49:39.814838 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Mar 7 01:49:39.814848 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Mar 7 01:49:39.814858 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Mar 7 01:49:39.814868 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Mar 7 01:49:39.814880 kernel: TSC deadline timer available Mar 7 01:49:39.814891 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Mar 7 01:49:39.814902 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Mar 7 01:49:39.814913 kernel: kvm-guest: KVM setup pv remote TLB flush Mar 7 01:49:39.814926 kernel: kvm-guest: setup PV sched yield Mar 7 01:49:39.814941 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Mar 7 01:49:39.814953 kernel: Booting paravirtualized kernel on KVM Mar 7 01:49:39.814965 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Mar 7 01:49:39.814976 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Mar 7 01:49:39.814989 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288 Mar 7 01:49:39.814999 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152 Mar 7 01:49:39.815012 kernel: pcpu-alloc: [0] 0 1 2 3 Mar 7 01:49:39.815024 kernel: kvm-guest: PV spinlocks enabled Mar 7 01:49:39.815034 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Mar 7 01:49:39.815053 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=531e046a631dbba7b4aae1b7955ffa961f5ce7d570e89a624d767cf739ab70b5 Mar 7 01:49:39.815064 kernel: random: crng init done Mar 7 
01:49:39.815074 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Mar 7 01:49:39.815084 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Mar 7 01:49:39.815094 kernel: Fallback order for Node 0: 0 Mar 7 01:49:39.815103 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759 Mar 7 01:49:39.815114 kernel: Policy zone: DMA32 Mar 7 01:49:39.815124 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Mar 7 01:49:39.815135 kernel: Memory: 2400616K/2567000K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 166124K reserved, 0K cma-reserved) Mar 7 01:49:39.815149 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Mar 7 01:49:39.815159 kernel: ftrace: allocating 37996 entries in 149 pages Mar 7 01:49:39.815169 kernel: ftrace: allocated 149 pages with 4 groups Mar 7 01:49:39.815179 kernel: Dynamic Preempt: voluntary Mar 7 01:49:39.815189 kernel: rcu: Preemptible hierarchical RCU implementation. Mar 7 01:49:39.815213 kernel: rcu: RCU event tracing is enabled. Mar 7 01:49:39.815228 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Mar 7 01:49:39.815238 kernel: Trampoline variant of Tasks RCU enabled. Mar 7 01:49:39.815249 kernel: Rude variant of Tasks RCU enabled. Mar 7 01:49:39.815260 kernel: Tracing variant of Tasks RCU enabled. Mar 7 01:49:39.815272 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Mar 7 01:49:39.815287 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Mar 7 01:49:39.815301 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Mar 7 01:49:39.815313 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Mar 7 01:49:39.815323 kernel: Console: colour dummy device 80x25 Mar 7 01:49:39.815334 kernel: printk: console [ttyS0] enabled Mar 7 01:49:39.815348 kernel: ACPI: Core revision 20230628 Mar 7 01:49:39.815358 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Mar 7 01:49:39.815369 kernel: APIC: Switch to symmetric I/O mode setup Mar 7 01:49:39.815379 kernel: x2apic enabled Mar 7 01:49:39.815390 kernel: APIC: Switched APIC routing to: physical x2apic Mar 7 01:49:39.815400 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Mar 7 01:49:39.815410 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Mar 7 01:49:39.815421 kernel: kvm-guest: setup PV IPIs Mar 7 01:49:39.815431 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Mar 7 01:49:39.815445 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Mar 7 01:49:39.815454 kernel: Calibrating delay loop (skipped) preset value.. 4890.85 BogoMIPS (lpj=2445426) Mar 7 01:49:39.815464 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Mar 7 01:49:39.815473 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Mar 7 01:49:39.815482 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Mar 7 01:49:39.815491 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Mar 7 01:49:39.815500 kernel: Spectre V2 : Mitigation: Retpolines Mar 7 01:49:39.815510 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Mar 7 01:49:39.815519 kernel: Speculative Store Bypass: Vulnerable Mar 7 01:49:39.815531 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! 
Mar 7 01:49:39.815541 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Mar 7 01:49:39.815550 kernel: active return thunk: srso_alias_return_thunk Mar 7 01:49:39.815560 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Mar 7 01:49:39.815569 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM Mar 7 01:49:39.815578 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode Mar 7 01:49:39.815587 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Mar 7 01:49:39.815596 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Mar 7 01:49:39.815608 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Mar 7 01:49:39.815617 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Mar 7 01:49:39.815626 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Mar 7 01:49:39.815635 kernel: Freeing SMP alternatives memory: 32K Mar 7 01:49:39.815645 kernel: pid_max: default: 32768 minimum: 301 Mar 7 01:49:39.815654 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Mar 7 01:49:39.815663 kernel: landlock: Up and running. Mar 7 01:49:39.815672 kernel: SELinux: Initializing. Mar 7 01:49:39.816259 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Mar 7 01:49:39.816269 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Mar 7 01:49:39.816283 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1) Mar 7 01:49:39.816293 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Mar 7 01:49:39.816303 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Mar 7 01:49:39.816315 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Mar 7 01:49:39.816325 kernel: Performance Events: PMU not available due to virtualization, using software events only. Mar 7 01:49:39.816334 kernel: signal: max sigframe size: 1776 Mar 7 01:49:39.816344 kernel: rcu: Hierarchical SRCU implementation. Mar 7 01:49:39.816354 kernel: rcu: Max phase no-delay instances is 400. Mar 7 01:49:39.816369 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Mar 7 01:49:39.816381 kernel: smp: Bringing up secondary CPUs ... Mar 7 01:49:39.816390 kernel: smpboot: x86: Booting SMP configuration: Mar 7 01:49:39.816400 kernel: .... 
node #0, CPUs: #1 #2 #3 Mar 7 01:49:39.816409 kernel: smp: Brought up 1 node, 4 CPUs Mar 7 01:49:39.816419 kernel: smpboot: Max logical packages: 1 Mar 7 01:49:39.816428 kernel: smpboot: Total of 4 processors activated (19563.40 BogoMIPS) Mar 7 01:49:39.816437 kernel: devtmpfs: initialized Mar 7 01:49:39.816447 kernel: x86/mm: Memory block size: 128MB Mar 7 01:49:39.816461 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Mar 7 01:49:39.816471 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Mar 7 01:49:39.816480 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes) Mar 7 01:49:39.816490 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Mar 7 01:49:39.816499 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Mar 7 01:49:39.816508 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Mar 7 01:49:39.816518 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Mar 7 01:49:39.816528 kernel: pinctrl core: initialized pinctrl subsystem Mar 7 01:49:39.816540 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Mar 7 01:49:39.816556 kernel: audit: initializing netlink subsys (disabled) Mar 7 01:49:39.816566 kernel: thermal_sys: Registered thermal governor 'step_wise' Mar 7 01:49:39.816575 kernel: thermal_sys: Registered thermal governor 'user_space' Mar 7 01:49:39.816584 kernel: audit: type=2000 audit(1772848165.771:1): state=initialized audit_enabled=0 res=1 Mar 7 01:49:39.816594 kernel: cpuidle: using governor menu Mar 7 01:49:39.816605 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Mar 7 01:49:39.816615 kernel: dca service started, version 1.12.1 Mar 7 01:49:39.816625 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Mar 7 01:49:39.816635 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Mar 7 01:49:39.816651 kernel: PCI: Using configuration type 1 for base access Mar 7 01:49:39.816660 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Mar 7 01:49:39.816670 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Mar 7 01:49:39.817106 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Mar 7 01:49:39.817117 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Mar 7 01:49:39.817127 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Mar 7 01:49:39.817139 kernel: ACPI: Added _OSI(Module Device) Mar 7 01:49:39.817149 kernel: ACPI: Added _OSI(Processor Device) Mar 7 01:49:39.817158 kernel: ACPI: Added _OSI(Processor Aggregator Device) Mar 7 01:49:39.817172 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Mar 7 01:49:39.817182 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Mar 7 01:49:39.817191 kernel: ACPI: Interpreter enabled Mar 7 01:49:39.817201 kernel: ACPI: PM: (supports S0 S3 S5) Mar 7 01:49:39.817210 kernel: ACPI: Using IOAPIC for interrupt routing Mar 7 01:49:39.817219 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Mar 7 01:49:39.817229 kernel: PCI: Using E820 reservations for host bridge windows Mar 7 01:49:39.817242 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Mar 7 01:49:39.817251 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Mar 7 01:49:39.819146 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Mar 7 01:49:39.819325 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Mar 7 01:49:39.819469 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Mar 7 01:49:39.819482 kernel: PCI host bridge to bus 0000:00 Mar 7 01:49:39.819626 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Mar 7 01:49:39.820563 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Mar 7 01:49:39.821367 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Mar 7 01:49:39.821507 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Mar 7 01:49:39.821634 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Mar 7 01:49:39.827176 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window] Mar 7 01:49:39.827314 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Mar 7 01:49:39.829440 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Mar 7 01:49:39.833183 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Mar 7 01:49:39.833392 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref] Mar 7 01:49:39.833585 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff] Mar 7 01:49:39.833983 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] Mar 7 01:49:39.834160 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb Mar 7 01:49:39.834331 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Mar 7 01:49:39.835046 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Mar 7 01:49:39.835219 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f] Mar 7 01:49:39.835401 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff] Mar 7 01:49:39.835576 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref] Mar 7 01:49:39.836005 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Mar 7 01:49:39.836208 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f] Mar 7 01:49:39.836398 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff] Mar 7 01:49:39.836570 kernel: pci 
0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref] Mar 7 01:49:39.837124 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Mar 7 01:49:39.837303 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff] Mar 7 01:49:39.837488 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff] Mar 7 01:49:39.837883 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref] Mar 7 01:49:39.838056 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref] Mar 7 01:49:39.838229 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Mar 7 01:49:39.838416 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Mar 7 01:49:39.838592 kernel: pci 0000:00:1f.0: quirk_ich7_lpc+0x0/0x180 took 39062 usecs Mar 7 01:49:39.838889 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Mar 7 01:49:39.839046 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df] Mar 7 01:49:39.839202 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff] Mar 7 01:49:39.839454 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Mar 7 01:49:39.839626 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf] Mar 7 01:49:39.839641 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Mar 7 01:49:39.839658 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Mar 7 01:49:39.839669 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Mar 7 01:49:39.839827 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Mar 7 01:49:39.839839 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Mar 7 01:49:39.839850 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Mar 7 01:49:39.839860 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Mar 7 01:49:39.839870 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Mar 7 01:49:39.839880 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Mar 7 01:49:39.839891 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Mar 7 01:49:39.839905 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Mar 7 01:49:39.839916 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Mar 7 01:49:39.839926 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Mar 7 01:49:39.839936 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Mar 7 01:49:39.839946 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Mar 7 01:49:39.839957 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Mar 7 01:49:39.839967 kernel: iommu: Default domain type: Translated Mar 7 01:49:39.839977 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Mar 7 01:49:39.839988 kernel: efivars: Registered efivars operations Mar 7 01:49:39.840001 kernel: PCI: Using ACPI for IRQ routing Mar 7 01:49:39.840012 kernel: PCI: pci_cache_line_size set to 64 bytes Mar 7 01:49:39.840022 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Mar 7 01:49:39.840033 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff] Mar 7 01:49:39.840043 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff] Mar 7 01:49:39.840053 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff] Mar 7 01:49:39.840227 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Mar 7 01:49:39.840412 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Mar 7 01:49:39.840585 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Mar 7 01:49:39.840605 kernel: vgaarb: loaded Mar 7 01:49:39.840616 kernel: hpet0: 
at MMIO 0xfed00000, IRQs 2, 8, 0 Mar 7 01:49:39.840627 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Mar 7 01:49:39.840637 kernel: clocksource: Switched to clocksource kvm-clock Mar 7 01:49:39.840648 kernel: VFS: Disk quotas dquot_6.6.0 Mar 7 01:49:39.840659 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Mar 7 01:49:39.840672 kernel: pnp: PnP ACPI init Mar 7 01:49:39.841057 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Mar 7 01:49:39.841079 kernel: pnp: PnP ACPI: found 6 devices Mar 7 01:49:39.841090 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Mar 7 01:49:39.841101 kernel: NET: Registered PF_INET protocol family Mar 7 01:49:39.841111 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Mar 7 01:49:39.841122 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Mar 7 01:49:39.841133 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Mar 7 01:49:39.841143 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Mar 7 01:49:39.841154 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Mar 7 01:49:39.841165 kernel: TCP: Hash tables configured (established 32768 bind 32768) Mar 7 01:49:39.841179 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Mar 7 01:49:39.841189 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Mar 7 01:49:39.841201 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Mar 7 01:49:39.841214 kernel: NET: Registered PF_XDP protocol family Mar 7 01:49:39.841413 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window Mar 7 01:49:39.841617 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref] Mar 7 01:49:39.841929 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Mar 7 01:49:39.842096 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Mar 7 01:49:39.842260 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Mar 7 01:49:39.842449 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Mar 7 01:49:39.842604 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Mar 7 01:49:39.842987 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window] Mar 7 01:49:39.843005 kernel: PCI: CLS 0 bytes, default 64 Mar 7 01:49:39.843015 kernel: Initialise system trusted keyrings Mar 7 01:49:39.843027 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Mar 7 01:49:39.843037 kernel: Key type asymmetric registered Mar 7 01:49:39.843052 kernel: Asymmetric key parser 'x509' registered Mar 7 01:49:39.843062 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Mar 7 01:49:39.843073 kernel: io scheduler mq-deadline registered Mar 7 01:49:39.843083 kernel: io scheduler kyber registered Mar 7 01:49:39.843094 kernel: io scheduler bfq registered Mar 7 01:49:39.843104 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Mar 7 01:49:39.843115 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Mar 7 01:49:39.843126 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Mar 7 01:49:39.843136 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Mar 7 01:49:39.843147 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Mar 7 01:49:39.843160 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Mar 7 
01:49:39.843171 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Mar 7 01:49:39.843181 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Mar 7 01:49:39.843191 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Mar 7 01:49:39.843595 kernel: rtc_cmos 00:04: RTC can wake from S4 Mar 7 01:49:39.843866 kernel: rtc_cmos 00:04: registered as rtc0 Mar 7 01:49:39.843882 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1 Mar 7 01:49:39.844034 kernel: rtc_cmos 00:04: setting system clock to 2026-03-07T01:49:36 UTC (1772848176) Mar 7 01:49:39.844189 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Mar 7 01:49:39.844203 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Mar 7 01:49:39.844215 kernel: efifb: probing for efifb Mar 7 01:49:39.844229 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k Mar 7 01:49:39.844242 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1 Mar 7 01:49:39.844252 kernel: efifb: scrolling: redraw Mar 7 01:49:39.844265 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0 Mar 7 01:49:39.844277 kernel: Console: switching to colour frame buffer device 100x37 Mar 7 01:49:39.844295 kernel: fb0: EFI VGA frame buffer device Mar 7 01:49:39.844308 kernel: pstore: Using crash dump compression: deflate Mar 7 01:49:39.844320 kernel: pstore: Registered efi_pstore as persistent store backend Mar 7 01:49:39.844333 kernel: NET: Registered PF_INET6 protocol family Mar 7 01:49:39.844345 kernel: Segment Routing with IPv6 Mar 7 01:49:39.844357 kernel: In-situ OAM (IOAM) with IPv6 Mar 7 01:49:39.844369 kernel: NET: Registered PF_PACKET protocol family Mar 7 01:49:39.844382 kernel: Key type dns_resolver registered Mar 7 01:49:39.844417 kernel: IPI shorthand broadcast: enabled Mar 7 01:49:39.844431 kernel: sched_clock: Marking stable (8662106708, 1152571072)->(12387353658, -2572675878) Mar 7 01:49:39.844445 kernel: registered taskstats version 1 Mar 7 01:49:39.844456 kernel: Loading compiled-in X.509 certificates Mar 7 01:49:39.844467 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: da286e6f6c247ee6f65a875c513de7da57782e90' Mar 7 01:49:39.844477 kernel: Key type .fscrypt registered Mar 7 01:49:39.844488 kernel: Key type fscrypt-provisioning registered Mar 7 01:49:39.844498 kernel: ima: No TPM chip found, activating TPM-bypass! 
Mar 7 01:49:39.844509 kernel: ima: Allocated hash algorithm: sha1 Mar 7 01:49:39.844520 kernel: ima: No architecture policies found Mar 7 01:49:39.844533 kernel: clk: Disabling unused clocks Mar 7 01:49:39.844545 kernel: Freeing unused kernel image (initmem) memory: 42892K Mar 7 01:49:39.844558 kernel: Write protecting the kernel read-only data: 36864k Mar 7 01:49:39.844569 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K Mar 7 01:49:39.844580 kernel: Run /init as init process Mar 7 01:49:39.844591 kernel: with arguments: Mar 7 01:49:39.844602 kernel: /init Mar 7 01:49:39.844612 kernel: with environment: Mar 7 01:49:39.844624 kernel: HOME=/ Mar 7 01:49:39.844638 kernel: TERM=linux Mar 7 01:49:39.844855 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Mar 7 01:49:39.844877 systemd[1]: Detected virtualization kvm. Mar 7 01:49:39.844893 systemd[1]: Detected architecture x86-64. Mar 7 01:49:39.844904 systemd[1]: Running in initrd. Mar 7 01:49:39.844917 systemd[1]: No hostname configured, using default hostname. Mar 7 01:49:39.844931 systemd[1]: Hostname set to . Mar 7 01:49:39.844951 systemd[1]: Initializing machine ID from VM UUID. Mar 7 01:49:39.844965 systemd[1]: Queued start job for default target initrd.target. Mar 7 01:49:39.844979 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 7 01:49:39.844992 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 7 01:49:39.845005 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Mar 7 01:49:39.845020 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 7 01:49:39.845037 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Mar 7 01:49:39.845052 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Mar 7 01:49:39.845067 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Mar 7 01:49:39.845081 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Mar 7 01:49:39.845095 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 7 01:49:39.845109 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 7 01:49:39.845127 systemd[1]: Reached target paths.target - Path Units. Mar 7 01:49:39.845141 systemd[1]: Reached target slices.target - Slice Units. Mar 7 01:49:39.845153 systemd[1]: Reached target swap.target - Swaps. Mar 7 01:49:39.845168 systemd[1]: Reached target timers.target - Timer Units. Mar 7 01:49:39.845179 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Mar 7 01:49:39.845193 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 7 01:49:39.845206 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Mar 7 01:49:39.845220 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Mar 7 01:49:39.845233 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Mar 7 01:49:39.845252 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 7 01:49:39.845267 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 7 01:49:39.845279 systemd[1]: Reached target sockets.target - Socket Units. Mar 7 01:49:39.845293 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Mar 7 01:49:39.845307 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 7 01:49:39.845320 systemd[1]: Finished network-cleanup.service - Network Cleanup. Mar 7 01:49:39.845335 systemd[1]: Starting systemd-fsck-usr.service... Mar 7 01:49:39.845347 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 7 01:49:39.845365 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 7 01:49:39.845413 systemd-journald[194]: Collecting audit messages is disabled. Mar 7 01:49:39.845446 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 7 01:49:39.845925 systemd-journald[194]: Journal started Mar 7 01:49:39.845960 systemd-journald[194]: Runtime Journal (/run/log/journal/4c02e910e39242be90fc34906eb9611a) is 6.0M, max 48.3M, 42.2M free. Mar 7 01:49:39.866044 systemd[1]: Started systemd-journald.service - Journal Service. Mar 7 01:49:39.867231 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Mar 7 01:49:39.902190 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 7 01:49:39.958927 systemd[1]: Finished systemd-fsck-usr.service. Mar 7 01:49:40.073822 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 7 01:49:40.140333 systemd-modules-load[195]: Inserted module 'overlay' Mar 7 01:49:40.187116 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 7 01:49:40.207524 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 7 01:49:40.301055 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 7 01:49:40.347528 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 7 01:49:40.364550 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 7 01:49:40.393337 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 7 01:49:40.482023 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Mar 7 01:49:40.483967 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Mar 7 01:49:40.542854 dracut-cmdline[223]: dracut-dracut-053 Mar 7 01:49:40.515174 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Mar 7 01:49:40.598965 dracut-cmdline[223]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=531e046a631dbba7b4aae1b7955ffa961f5ce7d570e89a624d767cf739ab70b5 Mar 7 01:49:40.706627 kernel: Bridge firewalling registered Mar 7 01:49:40.715220 systemd-modules-load[195]: Inserted module 'br_netfilter' Mar 7 01:49:40.728665 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 7 01:49:40.791117 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 7 01:49:40.887992 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 7 01:49:41.029668 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 7 01:49:41.092113 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 7 01:49:41.194102 kernel: SCSI subsystem initialized Mar 7 01:49:41.248250 systemd-resolved[307]: Positive Trust Anchors: Mar 7 01:49:41.248962 systemd-resolved[307]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 7 01:49:41.250319 systemd-resolved[307]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 7 01:49:41.275587 systemd-resolved[307]: Defaulting to hostname 'linux'. Mar 7 01:49:41.294038 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 7 01:49:41.396481 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 7 01:49:41.498026 kernel: Loading iSCSI transport class v2.0-870. Mar 7 01:49:41.647383 kernel: iscsi: registered transport (tcp) Mar 7 01:49:41.869815 kernel: iscsi: registered transport (qla4xxx) Mar 7 01:49:41.869946 kernel: QLogic iSCSI HBA Driver Mar 7 01:49:42.165478 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Mar 7 01:49:42.230181 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Mar 7 01:49:42.397567 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Mar 7 01:49:42.397643 kernel: device-mapper: uevent: version 1.0.3 Mar 7 01:49:42.400877 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Mar 7 01:49:42.554892 kernel: raid6: avx2x4 gen() 18818 MB/s Mar 7 01:49:42.576958 kernel: raid6: avx2x2 gen() 18224 MB/s Mar 7 01:49:42.600242 kernel: raid6: avx2x1 gen() 10851 MB/s Mar 7 01:49:42.600330 kernel: raid6: using algorithm avx2x4 gen() 18818 MB/s Mar 7 01:49:42.636317 kernel: raid6: .... 
xor() 4538 MB/s, rmw enabled Mar 7 01:49:42.636398 kernel: raid6: using avx2x2 recovery algorithm Mar 7 01:49:42.700433 kernel: xor: automatically using best checksumming function avx Mar 7 01:49:43.624314 kernel: Btrfs loaded, zoned=no, fsverity=no Mar 7 01:49:43.679256 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Mar 7 01:49:43.738343 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 7 01:49:43.805414 systemd-udevd[417]: Using default interface naming scheme 'v255'. Mar 7 01:49:43.849416 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 7 01:49:43.914298 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Mar 7 01:49:43.998967 dracut-pre-trigger[425]: rd.md=0: removing MD RAID activation Mar 7 01:49:44.156238 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Mar 7 01:49:44.212871 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 7 01:49:44.453305 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 7 01:49:44.541651 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Mar 7 01:49:44.615553 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Mar 7 01:49:44.668547 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Mar 7 01:49:44.688494 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 7 01:49:44.688864 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 7 01:49:44.798272 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Mar 7 01:49:44.831072 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Mar 7 01:49:44.841815 kernel: cryptd: max_cpu_qlen set to 1000 Mar 7 01:49:44.860869 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Mar 7 01:49:44.870559 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 7 01:49:44.992970 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Mar 7 01:49:44.993020 kernel: GPT:9289727 != 19775487 Mar 7 01:49:44.993056 kernel: GPT:Alternate GPT header not at the end of the disk. Mar 7 01:49:44.993072 kernel: GPT:9289727 != 19775487 Mar 7 01:49:44.993086 kernel: GPT: Use GNU Parted to correct GPT errors. Mar 7 01:49:44.993101 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 7 01:49:44.870841 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 7 01:49:44.910413 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 7 01:49:44.944036 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 7 01:49:44.944384 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 7 01:49:44.949091 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Mar 7 01:49:45.059509 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 7 01:49:45.093080 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Mar 7 01:49:45.163502 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 7 01:49:45.165955 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 7 01:49:45.274441 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Mar 7 01:49:45.390128 kernel: BTRFS: device fsid 3bed8db9-42ad-4483-9cc8-1ad17a6cd948 devid 1 transid 34 /dev/vda3 scanned by (udev-worker) (459) Mar 7 01:49:45.402256 kernel: AVX2 version of gcm_enc/dec engaged. Mar 7 01:49:45.415227 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Mar 7 01:49:45.492055 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (475) Mar 7 01:49:45.445826 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 7 01:49:45.510066 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Mar 7 01:49:45.523958 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Mar 7 01:49:45.588378 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Mar 7 01:49:45.644306 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Mar 7 01:49:45.786995 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Mar 7 01:49:45.815463 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 7 01:49:45.902962 disk-uuid[506]: Primary Header is updated. Mar 7 01:49:45.902962 disk-uuid[506]: Secondary Entries is updated. Mar 7 01:49:45.902962 disk-uuid[506]: Secondary Header is updated. Mar 7 01:49:45.929510 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 7 01:49:45.979378 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 7 01:49:46.068110 kernel: libata version 3.00 loaded. Mar 7 01:49:46.079424 kernel: AES CTR mode by8 optimization enabled Mar 7 01:49:46.110272 kernel: ahci 0000:00:1f.2: version 3.0 Mar 7 01:49:46.133633 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Mar 7 01:49:46.163041 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Mar 7 01:49:46.163350 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Mar 7 01:49:46.221902 kernel: scsi host0: ahci Mar 7 01:49:46.243015 kernel: scsi host1: ahci Mar 7 01:49:46.277025 kernel: scsi host2: ahci Mar 7 01:49:46.281828 kernel: scsi host3: ahci Mar 7 01:49:46.304161 kernel: scsi host4: ahci Mar 7 01:49:46.304483 kernel: scsi host5: ahci Mar 7 01:49:46.304915 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 Mar 7 01:49:46.323084 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 Mar 7 01:49:46.343015 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 Mar 7 01:49:46.362382 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 Mar 7 01:49:46.391370 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 Mar 7 01:49:46.391458 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 Mar 7 01:49:46.747002 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Mar 7 01:49:46.747225 kernel: ata5: SATA link down (SStatus 0 SControl 300) Mar 7 01:49:46.756082 kernel: ata6: SATA link down (SStatus 0 SControl 300) Mar 7 01:49:46.767743 kernel: ata2: SATA link down (SStatus 0 SControl 300) Mar 7 01:49:46.785900 kernel: ata4: SATA link down (SStatus 0 SControl 300) Mar 7 01:49:46.798983 kernel: ata1: SATA link down (SStatus 0 SControl 300) Mar 7 01:49:46.799364 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Mar 7 
01:49:46.807534 kernel: ata3.00: applying bridge limits Mar 7 01:49:46.815316 kernel: ata3.00: configured for UDMA/100 Mar 7 01:49:46.861569 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Mar 7 01:49:47.018451 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 7 01:49:47.033230 disk-uuid[507]: The operation has completed successfully. Mar 7 01:49:47.139348 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Mar 7 01:49:47.139935 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Mar 7 01:49:47.193023 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Mar 7 01:49:47.881128 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 7 01:49:47.881299 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Mar 7 01:49:47.973459 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Mar 7 01:49:48.054907 sh[595]: Success Mar 7 01:49:48.246152 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Mar 7 01:49:48.483152 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Mar 7 01:49:48.525989 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Mar 7 01:49:48.554605 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Mar 7 01:49:48.634009 kernel: BTRFS info (device dm-0): first mount of filesystem 3bed8db9-42ad-4483-9cc8-1ad17a6cd948 Mar 7 01:49:48.634109 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Mar 7 01:49:48.645424 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Mar 7 01:49:48.687354 kernel: BTRFS info (device dm-0): disabling log replay at mount time Mar 7 01:49:48.703326 kernel: BTRFS info (device dm-0): using free space tree Mar 7 01:49:48.879194 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Mar 7 01:49:48.903256 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Mar 7 01:49:48.959436 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Mar 7 01:49:49.074404 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Mar 7 01:49:49.184380 kernel: BTRFS info (device vda6): first mount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9 Mar 7 01:49:49.184467 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 7 01:49:49.184488 kernel: BTRFS info (device vda6): using free space tree Mar 7 01:49:49.255130 kernel: BTRFS info (device vda6): auto enabling async discard Mar 7 01:49:49.362934 systemd[1]: mnt-oem.mount: Deactivated successfully. Mar 7 01:49:49.404997 kernel: BTRFS info (device vda6): last unmount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9 Mar 7 01:49:49.458053 systemd[1]: Finished ignition-setup.service - Ignition (setup). Mar 7 01:49:49.547240 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Mar 7 01:49:50.210525 ignition[685]: Ignition 2.19.0 Mar 7 01:49:50.210542 ignition[685]: Stage: fetch-offline Mar 7 01:49:50.214296 ignition[685]: no configs at "/usr/lib/ignition/base.d" Mar 7 01:49:50.214376 ignition[685]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 7 01:49:50.214633 ignition[685]: parsed url from cmdline: "" Mar 7 01:49:50.214640 ignition[685]: no config URL provided Mar 7 01:49:50.214649 ignition[685]: reading system config file "/usr/lib/ignition/user.ign" Mar 7 01:49:50.214664 ignition[685]: no config at "/usr/lib/ignition/user.ign" Mar 7 01:49:50.214852 ignition[685]: op(1): [started] loading QEMU firmware config module Mar 7 01:49:50.214860 ignition[685]: op(1): executing: "modprobe" "qemu_fw_cfg" Mar 7 01:49:50.344430 ignition[685]: op(1): [finished] loading QEMU firmware config module Mar 7 01:49:50.491127 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 7 01:49:50.810324 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 7 01:49:51.047262 systemd-networkd[783]: lo: Link UP Mar 7 01:49:51.047601 systemd-networkd[783]: lo: Gained carrier Mar 7 01:49:51.064864 systemd-networkd[783]: Enumeration completed Mar 7 01:49:51.068312 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 7 01:49:51.075062 systemd-networkd[783]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 7 01:49:51.075069 systemd-networkd[783]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 7 01:49:51.108879 systemd-networkd[783]: eth0: Link UP Mar 7 01:49:51.108887 systemd-networkd[783]: eth0: Gained carrier Mar 7 01:49:51.108906 systemd-networkd[783]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 7 01:49:51.137528 systemd[1]: Reached target network.target - Network. Mar 7 01:49:51.366203 systemd-networkd[783]: eth0: DHCPv4 address 10.0.0.122/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 7 01:49:52.064052 ignition[685]: parsing config with SHA512: fa77c358b85a7c53d32bb37fe33bd0f5479af54af2705dbf7d51fdcab52de644b1cac2a3d607378bd262ff9d48b8f1c9235350ff609bddf335f5f2fee9381340 Mar 7 01:49:52.087227 unknown[685]: fetched base config from "system" Mar 7 01:49:52.087247 unknown[685]: fetched user config from "qemu" Mar 7 01:49:52.094365 ignition[685]: fetch-offline: fetch-offline passed Mar 7 01:49:52.094546 ignition[685]: Ignition finished successfully Mar 7 01:49:52.169560 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Mar 7 01:49:52.214871 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Mar 7 01:49:52.291225 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Mar 7 01:49:52.536287 systemd-networkd[783]: eth0: Gained IPv6LL Mar 7 01:49:52.558830 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Mar 7 01:49:52.547990 ignition[787]: Ignition 2.19.0 Mar 7 01:49:52.548001 ignition[787]: Stage: kargs Mar 7 01:49:52.548253 ignition[787]: no configs at "/usr/lib/ignition/base.d" Mar 7 01:49:52.642960 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Mar 7 01:49:52.548269 ignition[787]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 7 01:49:52.549844 ignition[787]: kargs: kargs passed Mar 7 01:49:52.549909 ignition[787]: Ignition finished successfully Mar 7 01:49:53.129343 ignition[795]: Ignition 2.19.0 Mar 7 01:49:53.129408 ignition[795]: Stage: disks Mar 7 01:49:53.156852 systemd[1]: Finished ignition-disks.service - Ignition (disks). Mar 7 01:49:53.134186 ignition[795]: no configs at "/usr/lib/ignition/base.d" Mar 7 01:49:53.192662 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Mar 7 01:49:53.134210 ignition[795]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 7 01:49:53.193011 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Mar 7 01:49:53.140509 ignition[795]: disks: disks passed Mar 7 01:49:53.193095 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 7 01:49:53.140597 ignition[795]: Ignition finished successfully Mar 7 01:49:53.193158 systemd[1]: Reached target sysinit.target - System Initialization. Mar 7 01:49:53.193208 systemd[1]: Reached target basic.target - Basic System. Mar 7 01:49:53.302595 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Mar 7 01:49:53.444054 systemd-fsck[805]: ROOT: clean, 14/553520 files, 52654/553472 blocks Mar 7 01:49:53.473126 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Mar 7 01:49:53.553116 systemd[1]: Mounting sysroot.mount - /sysroot... Mar 7 01:49:54.579011 kernel: EXT4-fs (vda9): mounted filesystem aab0506b-de72-4dd2-9393-24d7958f49a5 r/w with ordered data mode. Quota mode: none. Mar 7 01:49:54.582268 systemd[1]: Mounted sysroot.mount - /sysroot. Mar 7 01:49:54.592559 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Mar 7 01:49:54.650870 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 7 01:49:54.677997 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Mar 7 01:49:54.699324 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Mar 7 01:49:54.699472 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Mar 7 01:49:54.699523 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Mar 7 01:49:54.760445 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Mar 7 01:49:54.810167 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Mar 7 01:49:55.001935 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (814) Mar 7 01:49:55.043970 kernel: BTRFS info (device vda6): first mount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9 Mar 7 01:49:55.044127 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 7 01:49:55.044158 kernel: BTRFS info (device vda6): using free space tree Mar 7 01:49:55.147338 kernel: BTRFS info (device vda6): auto enabling async discard Mar 7 01:49:55.163520 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Mar 7 01:49:55.244411 initrd-setup-root[838]: cut: /sysroot/etc/passwd: No such file or directory Mar 7 01:49:55.312119 initrd-setup-root[845]: cut: /sysroot/etc/group: No such file or directory Mar 7 01:49:55.358069 initrd-setup-root[852]: cut: /sysroot/etc/shadow: No such file or directory Mar 7 01:49:55.409124 initrd-setup-root[859]: cut: /sysroot/etc/gshadow: No such file or directory Mar 7 01:49:56.046863 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Mar 7 01:49:56.142063 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Mar 7 01:49:56.266076 kernel: BTRFS info (device vda6): last unmount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9 Mar 7 01:49:56.201327 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Mar 7 01:49:56.247188 systemd[1]: sysroot-oem.mount: Deactivated successfully. Mar 7 01:49:56.460539 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Mar 7 01:49:56.760584 ignition[927]: INFO : Ignition 2.19.0 Mar 7 01:49:56.760584 ignition[927]: INFO : Stage: mount Mar 7 01:49:56.800867 ignition[927]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 7 01:49:56.800867 ignition[927]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 7 01:49:56.800867 ignition[927]: INFO : mount: mount passed Mar 7 01:49:56.800867 ignition[927]: INFO : Ignition finished successfully Mar 7 01:49:56.796346 systemd[1]: Finished ignition-mount.service - Ignition (mount). Mar 7 01:49:56.974053 systemd[1]: Starting ignition-files.service - Ignition (files)... Mar 7 01:49:57.076277 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 7 01:49:57.145178 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (940) Mar 7 01:49:57.167105 kernel: BTRFS info (device vda6): first mount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9 Mar 7 01:49:57.167182 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 7 01:49:57.167198 kernel: BTRFS info (device vda6): using free space tree Mar 7 01:49:57.203095 kernel: BTRFS info (device vda6): auto enabling async discard Mar 7 01:49:57.210361 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Mar 7 01:49:57.597771 ignition[957]: INFO : Ignition 2.19.0 Mar 7 01:49:57.597771 ignition[957]: INFO : Stage: files Mar 7 01:49:57.628562 ignition[957]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 7 01:49:57.628562 ignition[957]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 7 01:49:57.628562 ignition[957]: DEBUG : files: compiled without relabeling support, skipping Mar 7 01:49:57.730487 ignition[957]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Mar 7 01:49:57.730487 ignition[957]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Mar 7 01:49:57.730487 ignition[957]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Mar 7 01:49:57.730487 ignition[957]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Mar 7 01:49:57.841301 unknown[957]: wrote ssh authorized keys file for user: core Mar 7 01:49:57.862172 ignition[957]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Mar 7 01:49:57.862172 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Mar 7 01:49:57.862172 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Mar 7 01:49:57.862172 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Mar 7 01:49:57.862172 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Mar 7 01:49:58.283554 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Mar 7 01:49:59.745517 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Mar 7 01:49:59.745517 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Mar 7 01:49:59.745517 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Mar 7 01:50:00.238263 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Mar 7 01:50:02.758409 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Mar 7 01:50:02.758409 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh" Mar 7 01:50:02.844428 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh" Mar 7 01:50:02.844428 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml" Mar 7 01:50:02.844428 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml" Mar 7 01:50:02.844428 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 7 01:50:02.844428 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 7 01:50:02.844428 ignition[957]: INFO : files: 
createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 7 01:50:02.844428 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 7 01:50:02.844428 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf" Mar 7 01:50:02.844428 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf" Mar 7 01:50:02.844428 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Mar 7 01:50:02.844428 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Mar 7 01:50:02.844428 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Mar 7 01:50:02.844428 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-x86-64.raw: attempt #1 Mar 7 01:50:03.441151 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK Mar 7 01:50:05.786392 kernel: hrtimer: interrupt took 5879978 ns Mar 7 01:50:15.474317 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Mar 7 01:50:15.507619 ignition[957]: INFO : files: op(d): [started] processing unit "containerd.service" Mar 7 01:50:15.536570 ignition[957]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Mar 7 01:50:15.536570 ignition[957]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Mar 7 01:50:15.536570 ignition[957]: INFO : files: op(d): [finished] processing unit "containerd.service" Mar 7 01:50:15.536570 ignition[957]: INFO : files: op(f): [started] processing unit "prepare-helm.service" Mar 7 01:50:15.536570 ignition[957]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 7 01:50:15.536570 ignition[957]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 7 01:50:15.536570 ignition[957]: INFO : files: op(f): [finished] processing unit "prepare-helm.service" Mar 7 01:50:15.536570 ignition[957]: INFO : files: op(11): [started] processing unit "coreos-metadata.service" Mar 7 01:50:15.536570 ignition[957]: INFO : files: op(11): op(12): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Mar 7 01:50:15.536570 ignition[957]: INFO : files: op(11): op(12): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Mar 7 01:50:15.536570 ignition[957]: INFO : files: op(11): [finished] processing unit "coreos-metadata.service" Mar 7 01:50:15.536570 ignition[957]: INFO : files: op(13): [started] setting preset to disabled 
for "coreos-metadata.service" Mar 7 01:50:16.108486 ignition[957]: INFO : files: op(13): op(14): [started] removing enablement symlink(s) for "coreos-metadata.service" Mar 7 01:50:16.289970 ignition[957]: INFO : files: op(13): op(14): [finished] removing enablement symlink(s) for "coreos-metadata.service" Mar 7 01:50:16.338544 ignition[957]: INFO : files: op(13): [finished] setting preset to disabled for "coreos-metadata.service" Mar 7 01:50:16.338544 ignition[957]: INFO : files: op(15): [started] setting preset to enabled for "prepare-helm.service" Mar 7 01:50:16.338544 ignition[957]: INFO : files: op(15): [finished] setting preset to enabled for "prepare-helm.service" Mar 7 01:50:16.338544 ignition[957]: INFO : files: createResultFile: createFiles: op(16): [started] writing file "/sysroot/etc/.ignition-result.json" Mar 7 01:50:16.338544 ignition[957]: INFO : files: createResultFile: createFiles: op(16): [finished] writing file "/sysroot/etc/.ignition-result.json" Mar 7 01:50:16.338544 ignition[957]: INFO : files: files passed Mar 7 01:50:16.338544 ignition[957]: INFO : Ignition finished successfully Mar 7 01:50:16.435139 systemd[1]: Finished ignition-files.service - Ignition (files). Mar 7 01:50:16.540401 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Mar 7 01:50:16.551346 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Mar 7 01:50:16.606522 systemd[1]: ignition-quench.service: Deactivated successfully. Mar 7 01:50:16.607005 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Mar 7 01:50:16.693186 initrd-setup-root-after-ignition[986]: grep: /sysroot/oem/oem-release: No such file or directory Mar 7 01:50:16.730821 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 7 01:50:16.730821 initrd-setup-root-after-ignition[988]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Mar 7 01:50:16.796416 initrd-setup-root-after-ignition[992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 7 01:50:16.849388 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 7 01:50:16.878178 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Mar 7 01:50:16.957515 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Mar 7 01:50:17.457373 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Mar 7 01:50:17.458637 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Mar 7 01:50:17.546302 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Mar 7 01:50:17.581407 systemd[1]: Reached target initrd.target - Initrd Default Target. Mar 7 01:50:17.701015 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Mar 7 01:50:17.777616 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Mar 7 01:50:17.936580 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 7 01:50:18.014663 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Mar 7 01:50:18.104974 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Mar 7 01:50:18.165658 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. 
Mar 7 01:50:18.246986 systemd[1]: Stopped target timers.target - Timer Units. Mar 7 01:50:18.294486 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Mar 7 01:50:18.294991 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 7 01:50:18.359371 systemd[1]: Stopped target initrd.target - Initrd Default Target. Mar 7 01:50:18.402005 systemd[1]: Stopped target basic.target - Basic System. Mar 7 01:50:18.476226 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Mar 7 01:50:18.491170 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Mar 7 01:50:18.491588 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Mar 7 01:50:18.513496 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Mar 7 01:50:18.650635 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Mar 7 01:50:18.748491 systemd[1]: Stopped target sysinit.target - System Initialization. Mar 7 01:50:18.785654 systemd[1]: Stopped target local-fs.target - Local File Systems. Mar 7 01:50:18.843336 systemd[1]: Stopped target swap.target - Swaps. Mar 7 01:50:18.912602 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 7 01:50:18.913219 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Mar 7 01:50:18.973345 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Mar 7 01:50:18.973581 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 7 01:50:19.108800 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Mar 7 01:50:19.111300 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 7 01:50:19.182429 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 7 01:50:19.183092 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Mar 7 01:50:19.261631 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 7 01:50:19.268286 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Mar 7 01:50:19.289032 systemd[1]: Stopped target paths.target - Path Units. Mar 7 01:50:19.374804 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Mar 7 01:50:19.418963 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 7 01:50:19.482759 systemd[1]: Stopped target slices.target - Slice Units. Mar 7 01:50:19.498600 systemd[1]: Stopped target sockets.target - Socket Units. Mar 7 01:50:19.504279 systemd[1]: iscsid.socket: Deactivated successfully. Mar 7 01:50:19.504462 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Mar 7 01:50:19.504660 systemd[1]: iscsiuio.socket: Deactivated successfully. Mar 7 01:50:19.505028 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 7 01:50:19.505219 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Mar 7 01:50:19.505387 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 7 01:50:19.505575 systemd[1]: ignition-files.service: Deactivated successfully. Mar 7 01:50:19.505970 systemd[1]: Stopped ignition-files.service - Ignition (files). Mar 7 01:50:19.604807 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Mar 7 01:50:19.684498 systemd[1]: kmod-static-nodes.service: Deactivated successfully. 
Mar 7 01:50:19.684969 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Mar 7 01:50:19.762605 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Mar 7 01:50:19.782608 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Mar 7 01:50:19.840450 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Mar 7 01:50:20.019631 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Mar 7 01:50:20.020207 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Mar 7 01:50:20.087139 systemd[1]: sysroot-boot.mount: Deactivated successfully. Mar 7 01:50:20.107497 systemd[1]: initrd-cleanup.service: Deactivated successfully. Mar 7 01:50:20.141833 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Mar 7 01:50:20.179319 systemd[1]: sysroot-boot.service: Deactivated successfully. Mar 7 01:50:20.179554 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Mar 7 01:50:20.275411 ignition[1012]: INFO : Ignition 2.19.0 Mar 7 01:50:20.275411 ignition[1012]: INFO : Stage: umount Mar 7 01:50:20.275411 ignition[1012]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 7 01:50:20.275411 ignition[1012]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 7 01:50:20.275411 ignition[1012]: INFO : umount: umount passed Mar 7 01:50:20.275411 ignition[1012]: INFO : Ignition finished successfully Mar 7 01:50:20.301254 systemd[1]: ignition-mount.service: Deactivated successfully. Mar 7 01:50:20.301619 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Mar 7 01:50:20.454144 systemd[1]: Stopped target network.target - Network. Mar 7 01:50:20.554239 systemd[1]: ignition-disks.service: Deactivated successfully. Mar 7 01:50:20.554604 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Mar 7 01:50:20.675453 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 7 01:50:20.676320 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Mar 7 01:50:20.754359 systemd[1]: ignition-setup.service: Deactivated successfully. Mar 7 01:50:20.755152 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Mar 7 01:50:20.771218 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Mar 7 01:50:20.771329 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Mar 7 01:50:20.786597 systemd[1]: initrd-setup-root.service: Deactivated successfully. Mar 7 01:50:20.786816 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Mar 7 01:50:20.787314 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Mar 7 01:50:20.787507 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Mar 7 01:50:20.889412 systemd[1]: systemd-resolved.service: Deactivated successfully. Mar 7 01:50:20.889815 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Mar 7 01:50:20.908772 systemd-networkd[783]: eth0: DHCPv6 lease lost Mar 7 01:50:20.976830 systemd[1]: systemd-networkd.service: Deactivated successfully. Mar 7 01:50:20.983931 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Mar 7 01:50:21.062274 systemd[1]: systemd-networkd.socket: Deactivated successfully. Mar 7 01:50:21.062360 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Mar 7 01:50:21.283670 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Mar 7 01:50:21.332987 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. 
Mar 7 01:50:21.333161 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 7 01:50:21.359384 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 7 01:50:21.359494 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 7 01:50:21.382253 systemd[1]: systemd-modules-load.service: Deactivated successfully. Mar 7 01:50:21.382365 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Mar 7 01:50:21.398338 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Mar 7 01:50:21.398459 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 7 01:50:21.449608 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 7 01:50:21.585282 systemd[1]: network-cleanup.service: Deactivated successfully. Mar 7 01:50:21.592618 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Mar 7 01:50:21.612469 systemd[1]: systemd-udevd.service: Deactivated successfully. Mar 7 01:50:21.614108 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 7 01:50:21.647003 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Mar 7 01:50:21.647125 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Mar 7 01:50:21.657435 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Mar 7 01:50:21.657514 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Mar 7 01:50:21.662172 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Mar 7 01:50:21.662269 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Mar 7 01:50:21.787671 systemd[1]: dracut-cmdline.service: Deactivated successfully. Mar 7 01:50:21.795108 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Mar 7 01:50:21.846324 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 7 01:50:21.846501 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 7 01:50:21.929057 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Mar 7 01:50:21.948429 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Mar 7 01:50:21.952101 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 7 01:50:21.958972 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 7 01:50:21.959050 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 7 01:50:21.970608 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Mar 7 01:50:21.971006 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Mar 7 01:50:21.984272 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Mar 7 01:50:21.992147 systemd[1]: Starting initrd-switch-root.service - Switch Root... Mar 7 01:50:22.073075 systemd[1]: Switching root. Mar 7 01:50:22.384540 systemd-journald[194]: Received SIGTERM from PID 1 (systemd). 
Mar 7 01:50:22.384644 systemd-journald[194]: Journal stopped Mar 7 01:50:33.820623 kernel: SELinux: policy capability network_peer_controls=1 Mar 7 01:50:33.821309 kernel: SELinux: policy capability open_perms=1 Mar 7 01:50:33.821349 kernel: SELinux: policy capability extended_socket_class=1 Mar 7 01:50:33.821369 kernel: SELinux: policy capability always_check_network=0 Mar 7 01:50:33.821387 kernel: SELinux: policy capability cgroup_seclabel=1 Mar 7 01:50:33.821410 kernel: SELinux: policy capability nnp_nosuid_transition=1 Mar 7 01:50:33.821427 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Mar 7 01:50:33.821441 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Mar 7 01:50:33.821458 kernel: audit: type=1403 audit(1772848223.644:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Mar 7 01:50:33.821489 systemd[1]: Successfully loaded SELinux policy in 197.774ms. Mar 7 01:50:33.821525 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 59.852ms. Mar 7 01:50:33.821547 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Mar 7 01:50:33.821567 systemd[1]: Detected virtualization kvm. Mar 7 01:50:33.821586 systemd[1]: Detected architecture x86-64. Mar 7 01:50:33.821801 systemd[1]: Detected first boot. Mar 7 01:50:33.821828 systemd[1]: Initializing machine ID from VM UUID. Mar 7 01:50:33.821855 zram_generator::config[1071]: No configuration found. Mar 7 01:50:33.830202 systemd[1]: Populated /etc with preset unit settings. Mar 7 01:50:33.830534 systemd[1]: Queued start job for default target multi-user.target. Mar 7 01:50:33.830557 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Mar 7 01:50:33.830576 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Mar 7 01:50:33.830665 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Mar 7 01:50:33.830796 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Mar 7 01:50:33.830818 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Mar 7 01:50:33.830846 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Mar 7 01:50:33.830865 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Mar 7 01:50:33.830962 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Mar 7 01:50:33.830988 systemd[1]: Created slice user.slice - User and Session Slice. Mar 7 01:50:33.831010 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 7 01:50:33.831034 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 7 01:50:33.831055 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Mar 7 01:50:33.831086 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Mar 7 01:50:33.831107 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Mar 7 01:50:33.831133 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
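The SELinux messages above record the policy being loaded as systemd starts in the real root (policy load took ~198 ms, the relabel of /dev, /run and /sys/fs/cgroup ~60 ms). Whether the machine actually enforces that policy is a separate question from loading it; the mode can be checked on the running host with the usual tools, nothing below is specific to this log.

    # Prints Enforcing, Permissive or Disabled
    getenforce
    # Loaded policy name plus configured vs. current mode
    sestatus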
Mar 7 01:50:33.831151 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Mar 7 01:50:33.831258 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 7 01:50:33.831280 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Mar 7 01:50:33.831301 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 7 01:50:33.831320 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 7 01:50:33.831339 systemd[1]: Reached target slices.target - Slice Units. Mar 7 01:50:33.831358 systemd[1]: Reached target swap.target - Swaps. Mar 7 01:50:33.831383 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Mar 7 01:50:33.831402 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Mar 7 01:50:33.831421 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Mar 7 01:50:33.831439 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Mar 7 01:50:33.831458 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 7 01:50:33.831477 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 7 01:50:33.831496 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 7 01:50:33.831517 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Mar 7 01:50:33.831538 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Mar 7 01:50:33.831565 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Mar 7 01:50:33.831585 systemd[1]: Mounting media.mount - External Media Directory... Mar 7 01:50:33.831605 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 7 01:50:33.831626 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Mar 7 01:50:33.831648 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Mar 7 01:50:33.831668 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Mar 7 01:50:33.831937 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Mar 7 01:50:33.831969 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 7 01:50:33.831991 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 7 01:50:33.832018 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Mar 7 01:50:33.832037 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 7 01:50:33.832057 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 7 01:50:33.832079 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 7 01:50:33.832100 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Mar 7 01:50:33.832121 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 7 01:50:33.832143 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Mar 7 01:50:33.832163 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Mar 7 01:50:33.832191 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) 
Mar 7 01:50:33.832210 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 7 01:50:33.832229 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 7 01:50:33.832250 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Mar 7 01:50:33.832270 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Mar 7 01:50:33.832290 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 7 01:50:33.832470 systemd-journald[1171]: Collecting audit messages is disabled. Mar 7 01:50:33.832522 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 7 01:50:33.832546 systemd-journald[1171]: Journal started Mar 7 01:50:33.832665 systemd-journald[1171]: Runtime Journal (/run/log/journal/4c02e910e39242be90fc34906eb9611a) is 6.0M, max 48.3M, 42.2M free. Mar 7 01:50:33.888610 systemd[1]: Started systemd-journald.service - Journal Service. Mar 7 01:50:33.982417 kernel: fuse: init (API version 7.39) Mar 7 01:50:33.969645 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Mar 7 01:50:34.006363 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Mar 7 01:50:34.099367 systemd[1]: Mounted media.mount - External Media Directory. Mar 7 01:50:34.180753 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Mar 7 01:50:34.204608 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Mar 7 01:50:34.231232 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Mar 7 01:50:34.267396 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Mar 7 01:50:34.312639 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 7 01:50:34.403622 kernel: loop: module loaded Mar 7 01:50:34.385596 systemd[1]: modprobe@configfs.service: Deactivated successfully. Mar 7 01:50:34.401569 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Mar 7 01:50:34.464174 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 7 01:50:34.464488 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 7 01:50:34.615005 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 7 01:50:34.615444 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 7 01:50:34.788306 systemd[1]: modprobe@fuse.service: Deactivated successfully. Mar 7 01:50:34.801627 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Mar 7 01:50:34.863591 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 7 01:50:34.876813 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 7 01:50:34.972423 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 7 01:50:35.006386 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Mar 7 01:50:35.060039 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Mar 7 01:50:35.355393 systemd[1]: Reached target network-pre.target - Preparation for Network. Mar 7 01:50:35.551445 kernel: ACPI: bus type drm_connector registered Mar 7 01:50:35.595655 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... 
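This is also the point where the journal that this whole transcript comes from starts being collected: a 6.0M runtime journal under /run/log/journal, later flushed to the persistent one under /var/log/journal. The same entries can be pulled back out of the booted machine with journalctl; the flags below are standard, the boot offsets are just examples.

    # Kernel and service messages for the current boot, with precise timestamps
    journalctl -b 0 -o short-precise
    # Only systemd-networkd messages from the previous boot
    journalctl -b -1 -u systemd-networkd.service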
Mar 7 01:50:35.854944 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Mar 7 01:50:35.862000 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Mar 7 01:50:35.871139 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Mar 7 01:50:35.887441 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Mar 7 01:50:35.912566 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 7 01:50:35.938108 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Mar 7 01:50:35.967843 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 7 01:50:35.975470 systemd-journald[1171]: Time spent on flushing to /var/log/journal/4c02e910e39242be90fc34906eb9611a is 258.963ms for 973 entries. Mar 7 01:50:35.975470 systemd-journald[1171]: System Journal (/var/log/journal/4c02e910e39242be90fc34906eb9611a) is 8.0M, max 195.6M, 187.6M free. Mar 7 01:50:36.280746 systemd-journald[1171]: Received client request to flush runtime journal. Mar 7 01:50:35.991260 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 7 01:50:36.029044 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 7 01:50:36.096243 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 7 01:50:36.096576 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 7 01:50:36.115554 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 7 01:50:36.175611 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Mar 7 01:50:36.260435 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Mar 7 01:50:36.294528 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Mar 7 01:50:36.955143 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Mar 7 01:50:36.988796 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Mar 7 01:50:37.007359 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Mar 7 01:50:37.058605 udevadm[1218]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Mar 7 01:50:37.097525 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 7 01:50:37.212581 systemd-tmpfiles[1207]: ACLs are not supported, ignoring. Mar 7 01:50:37.212611 systemd-tmpfiles[1207]: ACLs are not supported, ignoring. Mar 7 01:50:37.230748 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 7 01:50:37.305609 systemd[1]: Starting systemd-sysusers.service - Create System Users... Mar 7 01:50:37.826543 systemd[1]: Finished systemd-sysusers.service - Create System Users. Mar 7 01:50:38.068785 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 7 01:50:38.120221 systemd-tmpfiles[1231]: ACLs are not supported, ignoring. Mar 7 01:50:38.120349 systemd-tmpfiles[1231]: ACLs are not supported, ignoring. 
Mar 7 01:50:38.148094 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 7 01:50:41.974421 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Mar 7 01:50:42.013137 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 7 01:50:42.152559 systemd-udevd[1237]: Using default interface naming scheme 'v255'. Mar 7 01:50:42.548359 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 7 01:50:42.670246 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 7 01:50:43.018434 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Mar 7 01:50:43.564823 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1251) Mar 7 01:50:43.872449 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Mar 7 01:50:44.100775 systemd[1]: Started systemd-userdbd.service - User Database Manager. Mar 7 01:50:45.692484 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Mar 7 01:50:45.773355 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Mar 7 01:50:45.829461 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Mar 7 01:50:45.932358 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Mar 7 01:50:46.299852 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Mar 7 01:50:46.338901 kernel: ACPI: button: Power Button [PWRF] Mar 7 01:50:46.338998 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Mar 7 01:50:46.509168 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 7 01:50:46.623563 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Mar 7 01:50:46.630890 systemd-networkd[1239]: lo: Link UP Mar 7 01:50:46.631524 systemd-networkd[1239]: lo: Gained carrier Mar 7 01:50:46.635133 systemd-networkd[1239]: Enumeration completed Mar 7 01:50:46.636982 systemd-networkd[1239]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 7 01:50:46.637080 systemd-networkd[1239]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 7 01:50:46.646315 systemd-networkd[1239]: eth0: Link UP Mar 7 01:50:46.646470 systemd-networkd[1239]: eth0: Gained carrier Mar 7 01:50:46.646561 systemd-networkd[1239]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 7 01:50:46.666530 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 7 01:50:46.812578 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Mar 7 01:50:46.829295 systemd-networkd[1239]: eth0: DHCPv4 address 10.0.0.122/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 7 01:50:46.847749 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 7 01:50:46.848309 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 7 01:50:46.873212 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 7 01:50:47.075605 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Mar 7 01:50:48.043505 kernel: mousedev: PS/2 mouse device common for all mice Mar 7 01:50:48.574592 systemd-networkd[1239]: eth0: Gained IPv6LL Mar 7 01:50:48.652835 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 7 01:50:48.781113 kernel: kvm_amd: TSC scaling supported Mar 7 01:50:48.781259 kernel: kvm_amd: Nested Virtualization enabled Mar 7 01:50:48.781295 kernel: kvm_amd: Nested Paging enabled Mar 7 01:50:48.784056 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Mar 7 01:50:48.788014 kernel: kvm_amd: PMU virtualization is disabled Mar 7 01:50:49.552384 kernel: EDAC MC: Ver: 3.0.0 Mar 7 01:50:49.676496 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Mar 7 01:50:49.757843 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Mar 7 01:50:49.961877 lvm[1291]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 7 01:50:50.193835 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Mar 7 01:50:50.230631 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 7 01:50:50.285629 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Mar 7 01:50:50.398081 lvm[1294]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 7 01:50:50.558880 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Mar 7 01:50:50.583611 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Mar 7 01:50:50.603305 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Mar 7 01:50:50.603536 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 7 01:50:50.615370 systemd[1]: Reached target machines.target - Containers. Mar 7 01:50:50.636278 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Mar 7 01:50:50.663246 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Mar 7 01:50:50.693609 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Mar 7 01:50:50.716557 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 7 01:50:50.754078 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Mar 7 01:50:50.795550 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Mar 7 01:50:50.883245 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Mar 7 01:50:50.918890 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Mar 7 01:50:50.976119 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Mar 7 01:50:51.059233 kernel: loop0: detected capacity change from 0 to 142488 Mar 7 01:50:51.186306 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Mar 7 01:50:51.187845 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. 
Mar 7 01:50:51.477084 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Mar 7 01:50:51.631188 kernel: loop1: detected capacity change from 0 to 140768 Mar 7 01:50:52.195114 kernel: loop2: detected capacity change from 0 to 228704 Mar 7 01:50:52.766987 kernel: loop3: detected capacity change from 0 to 142488 Mar 7 01:50:53.479568 kernel: loop4: detected capacity change from 0 to 140768 Mar 7 01:50:54.032847 kernel: loop5: detected capacity change from 0 to 228704 Mar 7 01:50:54.308561 (sd-merge)[1315]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Mar 7 01:50:54.310375 (sd-merge)[1315]: Merged extensions into '/usr'. Mar 7 01:50:54.369602 systemd[1]: Reloading requested from client PID 1303 ('systemd-sysext') (unit systemd-sysext.service)... Mar 7 01:50:54.369663 systemd[1]: Reloading... Mar 7 01:50:55.385549 zram_generator::config[1341]: No configuration found. Mar 7 01:50:57.796329 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 7 01:50:57.955243 ldconfig[1299]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Mar 7 01:50:58.181463 systemd[1]: Reloading finished in 3810 ms. Mar 7 01:50:58.272611 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Mar 7 01:50:58.341243 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Mar 7 01:50:58.469334 systemd[1]: Starting ensure-sysext.service... Mar 7 01:50:58.541125 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 7 01:50:58.592243 systemd[1]: Reloading requested from client PID 1386 ('systemctl') (unit ensure-sysext.service)... Mar 7 01:50:58.592299 systemd[1]: Reloading... Mar 7 01:50:58.783376 systemd-tmpfiles[1387]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Mar 7 01:50:58.785665 systemd-tmpfiles[1387]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Mar 7 01:50:58.806872 systemd-tmpfiles[1387]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Mar 7 01:50:58.842893 systemd-tmpfiles[1387]: ACLs are not supported, ignoring. Mar 7 01:50:58.850342 systemd-tmpfiles[1387]: ACLs are not supported, ignoring. Mar 7 01:50:58.897535 systemd-tmpfiles[1387]: Detected autofs mount point /boot during canonicalization of boot. Mar 7 01:50:58.898126 systemd-tmpfiles[1387]: Skipping /boot Mar 7 01:50:58.952082 systemd-tmpfiles[1387]: Detected autofs mount point /boot during canonicalization of boot. Mar 7 01:50:58.952272 systemd-tmpfiles[1387]: Skipping /boot Mar 7 01:50:58.976834 zram_generator::config[1411]: No configuration found. Mar 7 01:51:00.446459 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 7 01:51:00.734036 systemd[1]: Reloading finished in 2140 ms. Mar 7 01:51:00.789648 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 7 01:51:00.887325 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Mar 7 01:51:00.940644 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... 
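The (sd-merge) lines above are systemd-sysext overlaying the extension images onto /usr: the kubernetes image that Ignition linked into /etc/extensions during the files stage, plus the containerd-flatcar and docker-flatcar extensions shipped with the OS (the loop device capacity changes just before are most likely those images being attached). The merge can be inspected or redone on the running system with the standard systemd-sysext verbs; the commands below are generic, not taken from this log.

    # Which extension images are currently merged, and where they come from
    systemd-sysext status
    # Re-scan /etc/extensions and /var/lib/extensions and re-apply the overlay
    systemd-sysext refresh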
Mar 7 01:51:00.997278 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Mar 7 01:51:01.044123 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 7 01:51:01.081416 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Mar 7 01:51:01.139293 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 7 01:51:01.139801 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 7 01:51:01.143451 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 7 01:51:01.196394 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 7 01:51:01.285744 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 7 01:51:01.332125 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 7 01:51:01.332472 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 7 01:51:01.362773 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 7 01:51:01.365190 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 7 01:51:01.384133 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Mar 7 01:51:01.391844 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 7 01:51:01.392533 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 7 01:51:01.401652 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 7 01:51:01.402424 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 7 01:51:01.416174 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Mar 7 01:51:01.460549 augenrules[1488]: No rules Mar 7 01:51:01.467323 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Mar 7 01:51:01.504355 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 7 01:51:01.505899 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 7 01:51:01.531220 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 7 01:51:01.548257 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 7 01:51:01.577484 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 7 01:51:01.604920 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 7 01:51:01.627287 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 7 01:51:01.642451 systemd[1]: Starting systemd-update-done.service - Update is Completed... Mar 7 01:51:01.659033 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 7 01:51:01.670245 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 7 01:51:01.670657 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 7 01:51:01.697491 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Mar 7 01:51:01.706074 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 7 01:51:01.723541 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 7 01:51:01.724284 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 7 01:51:01.744793 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 7 01:51:01.745382 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 7 01:51:01.763785 systemd-resolved[1469]: Positive Trust Anchors: Mar 7 01:51:01.763808 systemd-resolved[1469]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 7 01:51:01.763852 systemd-resolved[1469]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 7 01:51:01.766439 systemd[1]: Finished systemd-update-done.service - Update is Completed. Mar 7 01:51:01.793650 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Mar 7 01:51:01.808064 systemd-resolved[1469]: Defaulting to hostname 'linux'. Mar 7 01:51:01.833108 systemd[1]: Finished ensure-sysext.service. Mar 7 01:51:01.849521 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 7 01:51:01.878153 systemd[1]: Reached target network.target - Network. Mar 7 01:51:01.889409 systemd[1]: Reached target network-online.target - Network is Online. Mar 7 01:51:01.905345 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 7 01:51:01.927761 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 7 01:51:01.928001 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 7 01:51:01.953069 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Mar 7 01:51:01.966182 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 7 01:51:02.358467 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Mar 7 01:51:02.373197 systemd[1]: Reached target sysinit.target - System Initialization. Mar 7 01:51:02.386561 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Mar 7 01:51:02.398450 systemd-timesyncd[1521]: Contacted time server 10.0.0.1:123 (10.0.0.1). Mar 7 01:51:02.398943 systemd-timesyncd[1521]: Initial clock synchronization to Sat 2026-03-07 01:51:02.473933 UTC. Mar 7 01:51:02.402852 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Mar 7 01:51:02.413184 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. 
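With ensure-sysext done, name resolution is handled by systemd-resolved (which logs its DNSSEC trust anchors and defaults the hostname to 'linux') and time sync by systemd-timesyncd, which reaches the NTP server at 10.0.0.1, i.e. the QEMU host-side gateway seen in the DHCP lease. Their state on the running machine can be checked with the usual status commands; nothing below is specific to this log.

    # DNS servers, search domains and DNSSEC state per link
    resolvectl status
    # Which NTP server timesyncd is using and the last measured offset
    timedatectl timesync-status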
Mar 7 01:51:02.439138 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 7 01:51:02.440089 systemd[1]: Reached target paths.target - Path Units. Mar 7 01:51:02.455521 systemd[1]: Reached target time-set.target - System Time Set. Mar 7 01:51:02.465444 systemd[1]: Started logrotate.timer - Daily rotation of log files. Mar 7 01:51:02.478449 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Mar 7 01:51:02.495409 systemd[1]: Reached target timers.target - Timer Units. Mar 7 01:51:02.513545 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Mar 7 01:51:02.541607 systemd[1]: Starting docker.socket - Docker Socket for the API... Mar 7 01:51:02.560493 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Mar 7 01:51:02.579580 systemd[1]: Listening on docker.socket - Docker Socket for the API. Mar 7 01:51:02.592201 systemd[1]: Reached target sockets.target - Socket Units. Mar 7 01:51:02.609499 systemd[1]: Reached target basic.target - Basic System. Mar 7 01:51:02.640615 systemd[1]: System is tainted: cgroupsv1 Mar 7 01:51:02.641399 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Mar 7 01:51:02.642666 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Mar 7 01:51:02.657126 systemd[1]: Starting containerd.service - containerd container runtime... Mar 7 01:51:02.692360 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Mar 7 01:51:02.723151 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Mar 7 01:51:02.756252 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Mar 7 01:51:02.783030 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Mar 7 01:51:02.798937 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Mar 7 01:51:02.824451 jq[1530]: false Mar 7 01:51:02.854164 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:51:02.890511 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Mar 7 01:51:02.956092 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... 
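The "System is tainted: cgroupsv1" line ties back to the /etc/flatcar-cgroupv1 flag file and the containerd drop-in written by Ignition: this node is deliberately booted on the legacy cgroup hierarchy. A quick way to confirm which hierarchy a host is really running is to look at the filesystem type mounted on /sys/fs/cgroup; both commands below are generic.

    # cgroup2fs means unified (v2); tmpfs means legacy or hybrid (v1)
    stat -fc %T /sys/fs/cgroup
    # Same answer via the mount table
    findmnt -no FSTYPE /sys/fs/cgroup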
Mar 7 01:51:02.982765 dbus-daemon[1528]: [system] SELinux support is enabled Mar 7 01:51:02.992292 extend-filesystems[1531]: Found loop3 Mar 7 01:51:02.992292 extend-filesystems[1531]: Found loop4 Mar 7 01:51:02.992292 extend-filesystems[1531]: Found loop5 Mar 7 01:51:02.992292 extend-filesystems[1531]: Found sr0 Mar 7 01:51:02.992292 extend-filesystems[1531]: Found vda Mar 7 01:51:02.992292 extend-filesystems[1531]: Found vda1 Mar 7 01:51:02.992292 extend-filesystems[1531]: Found vda2 Mar 7 01:51:02.992292 extend-filesystems[1531]: Found vda3 Mar 7 01:51:02.992292 extend-filesystems[1531]: Found usr Mar 7 01:51:02.992292 extend-filesystems[1531]: Found vda4 Mar 7 01:51:02.992292 extend-filesystems[1531]: Found vda6 Mar 7 01:51:02.992292 extend-filesystems[1531]: Found vda7 Mar 7 01:51:02.992292 extend-filesystems[1531]: Found vda9 Mar 7 01:51:02.992292 extend-filesystems[1531]: Checking size of /dev/vda9 Mar 7 01:51:03.557931 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Mar 7 01:51:03.558005 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1563) Mar 7 01:51:02.996639 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Mar 7 01:51:03.567159 extend-filesystems[1531]: Resized partition /dev/vda9 Mar 7 01:51:03.095038 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Mar 7 01:51:03.599185 extend-filesystems[1555]: resize2fs 1.47.1 (20-May-2024) Mar 7 01:51:03.164227 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Mar 7 01:51:03.358922 systemd[1]: Starting systemd-logind.service - User Login Management... Mar 7 01:51:03.576503 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 7 01:51:03.604033 systemd[1]: Starting update-engine.service - Update Engine... Mar 7 01:51:03.737661 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Mar 7 01:51:03.785981 jq[1571]: true Mar 7 01:51:03.786627 systemd[1]: Started dbus.service - D-Bus System Message Bus. Mar 7 01:51:03.967853 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Mar 7 01:51:03.982237 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 7 01:51:03.982764 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Mar 7 01:51:03.989428 systemd[1]: motdgen.service: Deactivated successfully. Mar 7 01:51:03.992796 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Mar 7 01:51:04.022129 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 7 01:51:04.082111 extend-filesystems[1555]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Mar 7 01:51:04.082111 extend-filesystems[1555]: old_desc_blocks = 1, new_desc_blocks = 1 Mar 7 01:51:04.082111 extend-filesystems[1555]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Mar 7 01:51:04.294907 update_engine[1569]: I20260307 01:51:04.100710 1569 main.cc:92] Flatcar Update Engine starting Mar 7 01:51:04.294907 update_engine[1569]: I20260307 01:51:04.158604 1569 update_check_scheduler.cc:74] Next update check in 9m30s Mar 7 01:51:04.168089 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 7 01:51:04.295598 extend-filesystems[1531]: Resized filesystem in /dev/vda9 Mar 7 01:51:04.168600 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
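extend-filesystems here grows the ROOT filesystem on /dev/vda9 in place, from 553472 to 1864699 4k blocks (roughly 2.1 GiB to 7.1 GiB), so the filesystem fills the whole root partition after first boot. The same online grow can be done by hand on any mounted ext4 filesystem; the device name below is taken from this log.

    # Grow a mounted ext4 filesystem to fill its partition
    resize2fs /dev/vda9
    # Verify the new block count afterwards
    dumpe2fs -h /dev/vda9 | grep 'Block count'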
Mar 7 01:51:04.246516 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 7 01:51:04.320464 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Mar 7 01:51:04.400478 systemd-logind[1559]: Watching system buttons on /dev/input/event1 (Power Button) Mar 7 01:51:04.403192 systemd-logind[1559]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Mar 7 01:51:04.417907 systemd-logind[1559]: New seat seat0. Mar 7 01:51:04.460440 systemd[1]: Started systemd-logind.service - User Login Management. Mar 7 01:51:04.540596 (ntainerd)[1585]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 7 01:51:04.592939 systemd[1]: coreos-metadata.service: Deactivated successfully. Mar 7 01:51:04.596863 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Mar 7 01:51:04.612498 jq[1584]: true Mar 7 01:51:04.720391 dbus-daemon[1528]: [system] Successfully activated service 'org.freedesktop.systemd1' Mar 7 01:51:04.810777 tar[1581]: linux-amd64/LICENSE Mar 7 01:51:04.827587 tar[1581]: linux-amd64/helm Mar 7 01:51:04.874838 systemd[1]: Started update-engine.service - Update Engine. Mar 7 01:51:04.887952 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 7 01:51:04.888265 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 7 01:51:04.888464 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Mar 7 01:51:04.905306 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 7 01:51:04.905488 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Mar 7 01:51:04.916025 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Mar 7 01:51:04.964542 systemd[1]: Started locksmithd.service - Cluster reboot manager. Mar 7 01:51:07.108108 bash[1622]: Updated "/home/core/.ssh/authorized_keys" Mar 7 01:51:07.193315 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Mar 7 01:51:07.264180 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Mar 7 01:51:07.915596 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 7 01:51:07.993973 locksmithd[1608]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 7 01:51:09.047884 sshd_keygen[1573]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 7 01:51:09.860937 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 7 01:51:10.250942 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 7 01:51:10.602533 systemd[1]: Started sshd@0-10.0.0.122:22-10.0.0.1:58800.service - OpenSSH per-connection server daemon (10.0.0.1:58800). Mar 7 01:51:10.658914 systemd[1]: issuegen.service: Deactivated successfully. Mar 7 01:51:10.659433 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 7 01:51:10.930357 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
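The sshd_keygen output above ("generating new host keys: RSA ECDSA ED25519") is what ssh-keygen prints when asked to create any missing host keys on first boot. A manual equivalent, assuming the default /etc/ssh key directory:

    ssh-keygen -A                        # create any host key types that do not exist yet
    ls /etc/ssh/ssh_host_*_key.pub       # rsa, ecdsa and ed25519 public halves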
Mar 7 01:51:12.564005 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 7 01:51:12.888470 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 7 01:51:12.935946 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Mar 7 01:51:12.993264 systemd[1]: Reached target getty.target - Login Prompts. Mar 7 01:51:13.937570 sshd[1646]: Accepted publickey for core from 10.0.0.1 port 58800 ssh2: RSA SHA256:FaKL72VfKns/3XaiikJSJBegvMb77gTkRQImwJ2PJcg Mar 7 01:51:14.046950 sshd[1646]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:51:14.471210 systemd-logind[1559]: New session 1 of user core. Mar 7 01:51:14.533455 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 7 01:51:14.613301 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 7 01:51:15.250788 containerd[1585]: time="2026-03-07T01:51:15.226133897Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Mar 7 01:51:15.567563 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 7 01:51:15.653201 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 7 01:51:16.106870 (systemd)[1664]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 7 01:51:17.105443 containerd[1585]: time="2026-03-07T01:51:17.104431246Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Mar 7 01:51:17.320917 containerd[1585]: time="2026-03-07T01:51:17.318339501Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Mar 7 01:51:17.320917 containerd[1585]: time="2026-03-07T01:51:17.319187327Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Mar 7 01:51:17.320917 containerd[1585]: time="2026-03-07T01:51:17.319303212Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Mar 7 01:51:17.320917 containerd[1585]: time="2026-03-07T01:51:17.320408101Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Mar 7 01:51:17.320917 containerd[1585]: time="2026-03-07T01:51:17.320446679Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Mar 7 01:51:17.320917 containerd[1585]: time="2026-03-07T01:51:17.320953733Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Mar 7 01:51:17.320917 containerd[1585]: time="2026-03-07T01:51:17.320977713Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Mar 7 01:51:17.320917 containerd[1585]: time="2026-03-07T01:51:17.321931320Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 7 01:51:17.320917 containerd[1585]: time="2026-03-07T01:51:17.321968603Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Mar 7 01:51:17.320917 containerd[1585]: time="2026-03-07T01:51:17.321998623Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Mar 7 01:51:17.320917 containerd[1585]: time="2026-03-07T01:51:17.322017245Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Mar 7 01:51:17.323527 containerd[1585]: time="2026-03-07T01:51:17.322306245Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Mar 7 01:51:17.323527 containerd[1585]: time="2026-03-07T01:51:17.323071065Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Mar 7 01:51:17.323527 containerd[1585]: time="2026-03-07T01:51:17.323454026Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 7 01:51:17.323527 containerd[1585]: time="2026-03-07T01:51:17.323479792Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Mar 7 01:51:17.323782 containerd[1585]: time="2026-03-07T01:51:17.323671488Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Mar 7 01:51:17.325264 containerd[1585]: time="2026-03-07T01:51:17.324029407Z" level=info msg="metadata content store policy set" policy=shared Mar 7 01:51:17.484295 containerd[1585]: time="2026-03-07T01:51:17.479807822Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Mar 7 01:51:17.502476 containerd[1585]: time="2026-03-07T01:51:17.498665023Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Mar 7 01:51:17.502476 containerd[1585]: time="2026-03-07T01:51:17.499473117Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Mar 7 01:51:17.502476 containerd[1585]: time="2026-03-07T01:51:17.499549249Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Mar 7 01:51:17.502476 containerd[1585]: time="2026-03-07T01:51:17.499605808Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Mar 7 01:51:17.504455 containerd[1585]: time="2026-03-07T01:51:17.503236745Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Mar 7 01:51:17.511312 containerd[1585]: time="2026-03-07T01:51:17.511276523Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Mar 7 01:51:17.512174 containerd[1585]: time="2026-03-07T01:51:17.512145961Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." 
type=io.containerd.runtime.v2 Mar 7 01:51:17.512300 containerd[1585]: time="2026-03-07T01:51:17.512278360Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Mar 7 01:51:17.512381 containerd[1585]: time="2026-03-07T01:51:17.512364727Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Mar 7 01:51:17.512532 containerd[1585]: time="2026-03-07T01:51:17.512510833Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Mar 7 01:51:17.512770 containerd[1585]: time="2026-03-07T01:51:17.512583092Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Mar 7 01:51:17.512770 containerd[1585]: time="2026-03-07T01:51:17.512604313Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Mar 7 01:51:17.512770 containerd[1585]: time="2026-03-07T01:51:17.512625563Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Mar 7 01:51:17.512770 containerd[1585]: time="2026-03-07T01:51:17.512643925Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Mar 7 01:51:17.514941 containerd[1585]: time="2026-03-07T01:51:17.514908716Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Mar 7 01:51:17.515036 containerd[1585]: time="2026-03-07T01:51:17.515018300Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Mar 7 01:51:17.515782 containerd[1585]: time="2026-03-07T01:51:17.515083245Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Mar 7 01:51:17.515782 containerd[1585]: time="2026-03-07T01:51:17.515327667Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Mar 7 01:51:17.515782 containerd[1585]: time="2026-03-07T01:51:17.515353924Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Mar 7 01:51:17.515782 containerd[1585]: time="2026-03-07T01:51:17.515371282Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Mar 7 01:51:17.515782 containerd[1585]: time="2026-03-07T01:51:17.515389032Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Mar 7 01:51:17.515782 containerd[1585]: time="2026-03-07T01:51:17.515493628Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Mar 7 01:51:17.516026 containerd[1585]: time="2026-03-07T01:51:17.516004504Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Mar 7 01:51:17.516102 containerd[1585]: time="2026-03-07T01:51:17.516086888Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Mar 7 01:51:17.516461 containerd[1585]: time="2026-03-07T01:51:17.516376972Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Mar 7 01:51:17.516563 containerd[1585]: time="2026-03-07T01:51:17.516541919Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." 
type=io.containerd.grpc.v1 Mar 7 01:51:17.516649 containerd[1585]: time="2026-03-07T01:51:17.516630564Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Mar 7 01:51:17.516864 containerd[1585]: time="2026-03-07T01:51:17.516841123Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Mar 7 01:51:17.516966 containerd[1585]: time="2026-03-07T01:51:17.516947255Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Mar 7 01:51:17.517045 containerd[1585]: time="2026-03-07T01:51:17.517028826Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Mar 7 01:51:17.517185 containerd[1585]: time="2026-03-07T01:51:17.517165510Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Mar 7 01:51:17.517360 containerd[1585]: time="2026-03-07T01:51:17.517336037Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Mar 7 01:51:17.517736 containerd[1585]: time="2026-03-07T01:51:17.517629260Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Mar 7 01:51:17.518075 containerd[1585]: time="2026-03-07T01:51:17.517845969Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Mar 7 01:51:17.521903 containerd[1585]: time="2026-03-07T01:51:17.521031388Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Mar 7 01:51:17.521903 containerd[1585]: time="2026-03-07T01:51:17.521213624Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Mar 7 01:51:17.521903 containerd[1585]: time="2026-03-07T01:51:17.521239419Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Mar 7 01:51:17.521903 containerd[1585]: time="2026-03-07T01:51:17.521260459Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Mar 7 01:51:17.521903 containerd[1585]: time="2026-03-07T01:51:17.521275469Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Mar 7 01:51:17.521903 containerd[1585]: time="2026-03-07T01:51:17.521341197Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Mar 7 01:51:17.521903 containerd[1585]: time="2026-03-07T01:51:17.521461386Z" level=info msg="NRI interface is disabled by configuration." Mar 7 01:51:17.521903 containerd[1585]: time="2026-03-07T01:51:17.521489640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Mar 7 01:51:17.528041 containerd[1585]: time="2026-03-07T01:51:17.527346880Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Mar 7 01:51:17.529950 containerd[1585]: time="2026-03-07T01:51:17.529289672Z" level=info msg="Connect containerd service" Mar 7 01:51:17.606438 containerd[1585]: time="2026-03-07T01:51:17.606127918Z" level=info msg="using legacy CRI server" Mar 7 01:51:17.606949 containerd[1585]: time="2026-03-07T01:51:17.606918312Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 7 01:51:17.651653 containerd[1585]: time="2026-03-07T01:51:17.614145071Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Mar 7 01:51:17.850179 containerd[1585]: time="2026-03-07T01:51:17.844067462Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 7 01:51:17.850179 
containerd[1585]: time="2026-03-07T01:51:17.846425673Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 7 01:51:17.850179 containerd[1585]: time="2026-03-07T01:51:17.846532487Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 7 01:51:17.855287 containerd[1585]: time="2026-03-07T01:51:17.854039246Z" level=info msg="Start subscribing containerd event" Mar 7 01:51:17.855287 containerd[1585]: time="2026-03-07T01:51:17.854350599Z" level=info msg="Start recovering state" Mar 7 01:51:17.855287 containerd[1585]: time="2026-03-07T01:51:17.855043160Z" level=info msg="Start event monitor" Mar 7 01:51:17.870527 containerd[1585]: time="2026-03-07T01:51:17.868765369Z" level=info msg="Start snapshots syncer" Mar 7 01:51:17.870527 containerd[1585]: time="2026-03-07T01:51:17.868860143Z" level=info msg="Start cni network conf syncer for default" Mar 7 01:51:17.870527 containerd[1585]: time="2026-03-07T01:51:17.868879177Z" level=info msg="Start streaming server" Mar 7 01:51:17.881657 containerd[1585]: time="2026-03-07T01:51:17.875482053Z" level=info msg="containerd successfully booted in 2.865225s" Mar 7 01:51:17.877059 systemd[1]: Started containerd.service - containerd container runtime. Mar 7 01:51:18.651109 systemd[1664]: Queued start job for default target default.target. Mar 7 01:51:18.652017 systemd[1664]: Created slice app.slice - User Application Slice. Mar 7 01:51:18.652051 systemd[1664]: Reached target paths.target - Paths. Mar 7 01:51:18.652072 systemd[1664]: Reached target timers.target - Timers. Mar 7 01:51:18.819304 systemd[1664]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 7 01:51:19.521787 systemd[1664]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 7 01:51:19.528372 systemd[1664]: Reached target sockets.target - Sockets. Mar 7 01:51:19.528452 systemd[1664]: Reached target basic.target - Basic System. Mar 7 01:51:19.528543 systemd[1664]: Reached target default.target - Main User Target. Mar 7 01:51:19.528604 systemd[1664]: Startup finished in 3.308s. Mar 7 01:51:19.562173 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 7 01:51:19.620946 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 7 01:51:20.166129 systemd[1]: Started sshd@1-10.0.0.122:22-10.0.0.1:49132.service - OpenSSH per-connection server daemon (10.0.0.1:49132). Mar 7 01:51:20.868821 sshd[1683]: Accepted publickey for core from 10.0.0.1 port 49132 ssh2: RSA SHA256:FaKL72VfKns/3XaiikJSJBegvMb77gTkRQImwJ2PJcg Mar 7 01:51:20.947364 sshd[1683]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:51:21.142471 systemd-logind[1559]: New session 2 of user core. Mar 7 01:51:21.410494 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 7 01:51:21.554117 tar[1581]: linux-amd64/README.md Mar 7 01:51:21.890102 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 7 01:51:21.969074 sshd[1683]: pam_unix(sshd:session): session closed for user core Mar 7 01:51:22.020090 systemd[1]: Started sshd@2-10.0.0.122:22-10.0.0.1:52998.service - OpenSSH per-connection server daemon (10.0.0.1:52998). Mar 7 01:51:22.021033 systemd[1]: sshd@1-10.0.0.122:22-10.0.0.1:49132.service: Deactivated successfully. Mar 7 01:51:22.069927 systemd[1]: session-2.scope: Deactivated successfully. Mar 7 01:51:22.074814 systemd-logind[1559]: Session 2 logged out. Waiting for processes to exit. Mar 7 01:51:22.097242 systemd-logind[1559]: Removed session 2. 
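The CRI plugin's "no network config found in /etc/cni/net.d" error above is expected on a node that has not joined a cluster yet; it clears once a network add-on installs a CNI configuration. A minimal, purely illustrative conflist (real clusters normally get this from their chosen CNI plugin, often via a DaemonSet) placed at /etc/cni/net.d/10-example.conflist could look like:

    {
      "cniVersion": "0.4.0",
      "name": "example-net",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "cni0",
          "isGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.85.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }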
Mar 7 01:51:22.385263 sshd[1693]: Accepted publickey for core from 10.0.0.1 port 52998 ssh2: RSA SHA256:FaKL72VfKns/3XaiikJSJBegvMb77gTkRQImwJ2PJcg Mar 7 01:51:22.392784 sshd[1693]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:51:22.662522 systemd-logind[1559]: New session 3 of user core. Mar 7 01:51:22.683459 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 7 01:51:23.134428 sshd[1693]: pam_unix(sshd:session): session closed for user core Mar 7 01:51:23.325328 systemd[1]: sshd@2-10.0.0.122:22-10.0.0.1:52998.service: Deactivated successfully. Mar 7 01:51:23.361891 systemd[1]: session-3.scope: Deactivated successfully. Mar 7 01:51:23.419650 systemd-logind[1559]: Session 3 logged out. Waiting for processes to exit. Mar 7 01:51:23.544094 systemd-logind[1559]: Removed session 3. Mar 7 01:51:31.607086 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:51:31.611213 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 7 01:51:31.611611 systemd[1]: Startup finished in 55.260s (kernel) + 1min 8.159s (userspace) = 2min 3.419s. Mar 7 01:51:31.614572 (kubelet)[1712]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 01:51:33.143224 systemd[1]: Started sshd@3-10.0.0.122:22-10.0.0.1:46070.service - OpenSSH per-connection server daemon (10.0.0.1:46070). Mar 7 01:51:33.372641 sshd[1718]: Accepted publickey for core from 10.0.0.1 port 46070 ssh2: RSA SHA256:FaKL72VfKns/3XaiikJSJBegvMb77gTkRQImwJ2PJcg Mar 7 01:51:33.379284 sshd[1718]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:51:33.405867 systemd-logind[1559]: New session 4 of user core. Mar 7 01:51:33.423535 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 7 01:51:34.962256 sshd[1718]: pam_unix(sshd:session): session closed for user core Mar 7 01:51:35.151308 systemd[1]: Started sshd@4-10.0.0.122:22-10.0.0.1:46096.service - OpenSSH per-connection server daemon (10.0.0.1:46096). Mar 7 01:51:35.164173 systemd[1]: sshd@3-10.0.0.122:22-10.0.0.1:46070.service: Deactivated successfully. Mar 7 01:51:35.172215 systemd[1]: session-4.scope: Deactivated successfully. Mar 7 01:51:35.189985 systemd-logind[1559]: Session 4 logged out. Waiting for processes to exit. Mar 7 01:51:35.221978 systemd-logind[1559]: Removed session 4. Mar 7 01:51:36.072850 sshd[1724]: Accepted publickey for core from 10.0.0.1 port 46096 ssh2: RSA SHA256:FaKL72VfKns/3XaiikJSJBegvMb77gTkRQImwJ2PJcg Mar 7 01:51:36.103443 sshd[1724]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:51:36.151951 systemd-logind[1559]: New session 5 of user core. Mar 7 01:51:36.170538 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 7 01:51:36.406163 sshd[1724]: pam_unix(sshd:session): session closed for user core Mar 7 01:51:36.481671 systemd[1]: Started sshd@5-10.0.0.122:22-10.0.0.1:46112.service - OpenSSH per-connection server daemon (10.0.0.1:46112). Mar 7 01:51:36.494598 systemd[1]: sshd@4-10.0.0.122:22-10.0.0.1:46096.service: Deactivated successfully. Mar 7 01:51:36.519861 systemd-logind[1559]: Session 5 logged out. Waiting for processes to exit. Mar 7 01:51:36.537963 systemd[1]: session-5.scope: Deactivated successfully. Mar 7 01:51:36.602194 systemd-logind[1559]: Removed session 5. 
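The "Startup finished in 55.260s (kernel) + 1min 8.159s (userspace) = 2min 3.419s" figure above can be re-read after boot, and broken down per unit, with systemd-analyze:

    systemd-analyze                                   # same kernel/userspace split as the log line above
    systemd-analyze blame | head                      # units sorted by time spent starting
    systemd-analyze critical-chain multi-user.target  # the dependency path that gated multi-user.target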
Mar 7 01:51:36.872397 sshd[1731]: Accepted publickey for core from 10.0.0.1 port 46112 ssh2: RSA SHA256:FaKL72VfKns/3XaiikJSJBegvMb77gTkRQImwJ2PJcg Mar 7 01:51:36.903518 sshd[1731]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:51:36.985993 systemd-logind[1559]: New session 6 of user core. Mar 7 01:51:37.011590 systemd[1]: Started session-6.scope - Session 6 of User core. Mar 7 01:51:37.666353 sshd[1731]: pam_unix(sshd:session): session closed for user core Mar 7 01:51:37.724983 systemd[1]: Started sshd@6-10.0.0.122:22-10.0.0.1:46128.service - OpenSSH per-connection server daemon (10.0.0.1:46128). Mar 7 01:51:37.726847 systemd[1]: sshd@5-10.0.0.122:22-10.0.0.1:46112.service: Deactivated successfully. Mar 7 01:51:37.738609 systemd-logind[1559]: Session 6 logged out. Waiting for processes to exit. Mar 7 01:51:37.748523 systemd[1]: session-6.scope: Deactivated successfully. Mar 7 01:51:37.760976 systemd-logind[1559]: Removed session 6. Mar 7 01:51:38.017255 sshd[1739]: Accepted publickey for core from 10.0.0.1 port 46128 ssh2: RSA SHA256:FaKL72VfKns/3XaiikJSJBegvMb77gTkRQImwJ2PJcg Mar 7 01:51:38.024544 sshd[1739]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:51:38.112872 systemd-logind[1559]: New session 7 of user core. Mar 7 01:51:38.130065 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 7 01:51:38.370895 sudo[1747]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 7 01:51:38.382633 sudo[1747]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 7 01:51:38.513227 sudo[1747]: pam_unix(sudo:session): session closed for user root Mar 7 01:51:38.527819 sshd[1739]: pam_unix(sshd:session): session closed for user core Mar 7 01:51:38.579169 systemd[1]: Started sshd@7-10.0.0.122:22-10.0.0.1:46132.service - OpenSSH per-connection server daemon (10.0.0.1:46132). Mar 7 01:51:38.594274 systemd[1]: sshd@6-10.0.0.122:22-10.0.0.1:46128.service: Deactivated successfully. Mar 7 01:51:38.611585 systemd-logind[1559]: Session 7 logged out. Waiting for processes to exit. Mar 7 01:51:38.620819 systemd[1]: session-7.scope: Deactivated successfully. Mar 7 01:51:38.624390 systemd-logind[1559]: Removed session 7. Mar 7 01:51:38.716786 sshd[1749]: Accepted publickey for core from 10.0.0.1 port 46132 ssh2: RSA SHA256:FaKL72VfKns/3XaiikJSJBegvMb77gTkRQImwJ2PJcg Mar 7 01:51:38.759430 sshd[1749]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:51:38.801559 systemd-logind[1559]: New session 8 of user core. Mar 7 01:51:38.815302 systemd[1]: Started session-8.scope - Session 8 of User core. Mar 7 01:51:39.012084 sudo[1757]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 7 01:51:39.019635 sudo[1757]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 7 01:51:39.154566 sudo[1757]: pam_unix(sudo:session): session closed for user root Mar 7 01:51:39.410564 sudo[1756]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Mar 7 01:51:39.411327 sudo[1756]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 7 01:51:39.743527 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Mar 7 01:51:39.926500 auditctl[1760]: No rules Mar 7 01:51:39.936585 systemd[1]: audit-rules.service: Deactivated successfully. 
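The sudo sequence above removes the shipped audit rule fragments and restarts audit-rules.service; the stop/start cycle then logs "No rules" from both auditctl and augenrules because the /etc/audit/rules.d fragments were just deleted. A sketch of inspecting that state by hand (augenrules compiles /etc/audit/rules.d/*.rules into the loaded set):

    auditctl -l                      # currently loaded kernel audit rules ("No rules" above)
    augenrules --check               # does the compiled rule set match /etc/audit/rules.d?
    systemctl restart audit-rules    # what the sudo'd systemctl call above performs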
Mar 7 01:51:39.937323 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Mar 7 01:51:40.018865 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Mar 7 01:51:40.338507 kubelet[1712]: E0307 01:51:40.337296 1712 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 01:51:40.360312 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 01:51:40.360907 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 7 01:51:40.394049 augenrules[1782]: No rules Mar 7 01:51:40.405039 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Mar 7 01:51:40.417959 sudo[1756]: pam_unix(sudo:session): session closed for user root Mar 7 01:51:40.434775 sshd[1749]: pam_unix(sshd:session): session closed for user core Mar 7 01:51:40.466304 systemd[1]: Started sshd@8-10.0.0.122:22-10.0.0.1:48654.service - OpenSSH per-connection server daemon (10.0.0.1:48654). Mar 7 01:51:40.467466 systemd[1]: sshd@7-10.0.0.122:22-10.0.0.1:46132.service: Deactivated successfully. Mar 7 01:51:40.486463 systemd-logind[1559]: Session 8 logged out. Waiting for processes to exit. Mar 7 01:51:40.494414 systemd[1]: session-8.scope: Deactivated successfully. Mar 7 01:51:40.501026 systemd-logind[1559]: Removed session 8. Mar 7 01:51:40.601911 sshd[1788]: Accepted publickey for core from 10.0.0.1 port 48654 ssh2: RSA SHA256:FaKL72VfKns/3XaiikJSJBegvMb77gTkRQImwJ2PJcg Mar 7 01:51:40.919447 sshd[1788]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:51:41.841484 systemd-logind[1559]: New session 9 of user core. Mar 7 01:51:41.863658 systemd[1]: Started session-9.scope - Session 9 of User core. Mar 7 01:51:42.765197 sudo[1795]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 7 01:51:42.811481 sudo[1795]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 7 01:51:49.241624 update_engine[1569]: I20260307 01:51:49.230957 1569 update_attempter.cc:509] Updating boot flags... Mar 7 01:51:50.070938 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1821) Mar 7 01:51:50.215130 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 7 01:51:50.244794 (dockerd)[1827]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 7 01:51:50.561251 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 7 01:51:50.680943 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:51:54.434277 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
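The kubelet failure here, and the identical ones that repeat for the rest of this log, are all the same condition: /var/lib/kubelet/config.yaml does not exist yet, because that file is normally written by "kubeadm init" or "kubeadm join" when the node is bootstrapped, and systemd keeps rescheduling the restart in the meantime. A minimal hand-written sketch of such a file, shown for illustration only:

    # /var/lib/kubelet/config.yaml -- normally generated by kubeadm, sketched here only to show the shape
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: cgroupfs           # matches the CgroupDriver the kubelet reports later in this log
    staticPodPath: /etc/kubernetes/manifests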
Mar 7 01:51:54.505518 (kubelet)[1844]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 01:51:55.768398 kubelet[1844]: E0307 01:51:55.763477 1844 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 01:51:55.787837 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 01:51:55.788161 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 7 01:51:56.968611 dockerd[1827]: time="2026-03-07T01:51:56.965424776Z" level=info msg="Starting up" Mar 7 01:51:59.553803 dockerd[1827]: time="2026-03-07T01:51:59.552128158Z" level=info msg="Loading containers: start." Mar 7 01:52:01.416443 kernel: Initializing XFRM netlink socket Mar 7 01:52:02.500412 systemd-networkd[1239]: docker0: Link UP Mar 7 01:52:02.699868 dockerd[1827]: time="2026-03-07T01:52:02.687969512Z" level=info msg="Loading containers: done." Mar 7 01:52:02.960128 dockerd[1827]: time="2026-03-07T01:52:02.951223353Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 7 01:52:02.960128 dockerd[1827]: time="2026-03-07T01:52:02.952913232Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Mar 7 01:52:03.006438 dockerd[1827]: time="2026-03-07T01:52:03.003218453Z" level=info msg="Daemon has completed initialization" Mar 7 01:52:03.867616 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 7 01:52:03.886974 dockerd[1827]: time="2026-03-07T01:52:03.864384197Z" level=info msg="API listen on /run/docker.sock" Mar 7 01:52:05.988033 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 7 01:52:06.022042 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:52:08.101224 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:52:08.263890 (kubelet)[2004]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 01:52:08.855112 containerd[1585]: time="2026-03-07T01:52:08.852292443Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\"" Mar 7 01:52:09.586055 kubelet[2004]: E0307 01:52:09.584769 2004 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 01:52:09.607916 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 01:52:09.612326 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 7 01:52:11.623200 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2831989107.mount: Deactivated successfully. Mar 7 01:52:19.688836 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Mar 7 01:52:19.747167 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
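Docker 26.1.0 comes up above on the overlay2 storage driver (with a warning that native diff is disabled because the kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled) and then listens on /run/docker.sock. A quick way to confirm the same facts from the CLI once the daemon is up:

    docker version                          # client/daemon versions over /run/docker.sock
    docker info --format '{{.Driver}}'      # prints overlay2, matching the storage-driver above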
Mar 7 01:52:22.243049 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:52:22.337460 (kubelet)[2088]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 01:52:23.694193 kubelet[2088]: E0307 01:52:23.689573 2088 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 01:52:23.725662 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 01:52:23.733131 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 7 01:52:25.769917 containerd[1585]: time="2026-03-07T01:52:25.763120770Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:52:25.780012 containerd[1585]: time="2026-03-07T01:52:25.769183207Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.9: active requests=0, bytes read=30116186" Mar 7 01:52:25.789801 containerd[1585]: time="2026-03-07T01:52:25.788822127Z" level=info msg="ImageCreate event name:\"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:52:25.803859 containerd[1585]: time="2026-03-07T01:52:25.803794986Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.9\" with image id \"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\", size \"30112785\" in 16.94579912s" Mar 7 01:52:25.805782 containerd[1585]: time="2026-03-07T01:52:25.804107667Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\" returns image reference \"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\"" Mar 7 01:52:25.806798 containerd[1585]: time="2026-03-07T01:52:25.806530563Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:52:25.821183 containerd[1585]: time="2026-03-07T01:52:25.819607304Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\"" Mar 7 01:52:33.932051 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Mar 7 01:52:33.948999 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:52:34.803754 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
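The PullImage/ImageCreate events above come from the containerd CRI plugin, so the kube-apiserver image (and the control-plane images pulled after it) land in containerd's k8s.io namespace rather than in Docker's image store. Two ways to list them, assuming the default containerd socket:

    ctr -n k8s.io images ls | grep kube-apiserver
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock images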
Mar 7 01:52:34.869805 (kubelet)[2112]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 01:52:35.480444 kubelet[2112]: E0307 01:52:35.480012 2112 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 01:52:35.512799 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 01:52:35.513261 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 7 01:52:37.141655 containerd[1585]: time="2026-03-07T01:52:37.139341384Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:52:37.147504 containerd[1585]: time="2026-03-07T01:52:37.146597518Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.9: active requests=0, bytes read=26021810" Mar 7 01:52:37.152422 containerd[1585]: time="2026-03-07T01:52:37.149619708Z" level=info msg="ImageCreate event name:\"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:52:37.165125 containerd[1585]: time="2026-03-07T01:52:37.165053069Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:52:37.167200 containerd[1585]: time="2026-03-07T01:52:37.167156661Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.9\" with image id \"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\", size \"27678758\" in 11.347353402s" Mar 7 01:52:37.170041 containerd[1585]: time="2026-03-07T01:52:37.169340752Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\" returns image reference \"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\"" Mar 7 01:52:37.193592 containerd[1585]: time="2026-03-07T01:52:37.192939367Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\"" Mar 7 01:52:45.720459 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Mar 7 01:52:45.769308 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:52:47.382864 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 7 01:52:47.546337 (kubelet)[2141]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 01:52:48.413239 kubelet[2141]: E0307 01:52:48.409977 2141 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 01:52:48.420621 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 01:52:48.421134 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 7 01:52:48.549137 containerd[1585]: time="2026-03-07T01:52:48.546463883Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:52:48.556793 containerd[1585]: time="2026-03-07T01:52:48.553748844Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.9: active requests=0, bytes read=20162746" Mar 7 01:52:48.562467 containerd[1585]: time="2026-03-07T01:52:48.561107500Z" level=info msg="ImageCreate event name:\"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:52:48.580869 containerd[1585]: time="2026-03-07T01:52:48.578912693Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:52:48.580869 containerd[1585]: time="2026-03-07T01:52:48.580845427Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.9\" with image id \"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\", size \"21819712\" in 11.387847601s" Mar 7 01:52:48.581072 containerd[1585]: time="2026-03-07T01:52:48.580897686Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\" returns image reference \"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\"" Mar 7 01:52:48.617792 containerd[1585]: time="2026-03-07T01:52:48.615981804Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\"" Mar 7 01:52:54.715642 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1272538767.mount: Deactivated successfully. Mar 7 01:52:58.443648 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Mar 7 01:52:58.494435 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:53:00.036395 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
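By this point kubelet.service is on its sixth scheduled restart for the same missing-config reason. The loop itself is just the unit's Restart= policy at work and can be inspected without waiting for the next failure:

    systemctl show kubelet.service -p Restart -p RestartSec -p NRestarts
    journalctl -u kubelet.service -o cat | tail    # the recurring config.yaml error seen above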
Mar 7 01:53:00.049749 containerd[1585]: time="2026-03-07T01:53:00.047919430Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:53:00.066852 containerd[1585]: time="2026-03-07T01:53:00.066252216Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.9: active requests=0, bytes read=31828647" Mar 7 01:53:00.069878 (kubelet)[2171]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 01:53:00.078163 containerd[1585]: time="2026-03-07T01:53:00.076025694Z" level=info msg="ImageCreate event name:\"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:53:00.084885 containerd[1585]: time="2026-03-07T01:53:00.084798975Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:53:00.087483 containerd[1585]: time="2026-03-07T01:53:00.087118718Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.9\" with image id \"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\", repo tag \"registry.k8s.io/kube-proxy:v1.33.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\", size \"31827666\" in 11.471074687s" Mar 7 01:53:00.087483 containerd[1585]: time="2026-03-07T01:53:00.087472365Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\" returns image reference \"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\"" Mar 7 01:53:00.132045 containerd[1585]: time="2026-03-07T01:53:00.130646917Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Mar 7 01:53:00.661470 kubelet[2171]: E0307 01:53:00.661033 2171 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 01:53:00.680041 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 01:53:00.680383 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 7 01:53:01.527587 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3360549900.mount: Deactivated successfully. Mar 7 01:53:10.989773 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Mar 7 01:53:11.124006 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Mar 7 01:53:11.380844 containerd[1585]: time="2026-03-07T01:53:11.379456810Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:53:11.435156 containerd[1585]: time="2026-03-07T01:53:11.434909944Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Mar 7 01:53:11.457552 containerd[1585]: time="2026-03-07T01:53:11.456840505Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:53:11.621284 containerd[1585]: time="2026-03-07T01:53:11.619137038Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:53:11.966585 containerd[1585]: time="2026-03-07T01:53:11.962755266Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 11.828685363s" Mar 7 01:53:11.966585 containerd[1585]: time="2026-03-07T01:53:11.964096518Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Mar 7 01:53:12.026303 containerd[1585]: time="2026-03-07T01:53:12.025390384Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Mar 7 01:53:13.571860 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:53:13.619532 (kubelet)[2246]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 01:53:13.935078 kubelet[2246]: E0307 01:53:13.934576 2246 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 01:53:13.942018 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 01:53:13.942515 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 7 01:53:14.359902 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1229935566.mount: Deactivated successfully. 
Mar 7 01:53:14.424341 containerd[1585]: time="2026-03-07T01:53:14.422585760Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:53:14.427879 containerd[1585]: time="2026-03-07T01:53:14.427397962Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Mar 7 01:53:14.438374 containerd[1585]: time="2026-03-07T01:53:14.436749865Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:53:14.458091 containerd[1585]: time="2026-03-07T01:53:14.451434027Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:53:14.458091 containerd[1585]: time="2026-03-07T01:53:14.454204914Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 2.428677091s" Mar 7 01:53:14.459655 containerd[1585]: time="2026-03-07T01:53:14.458468776Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Mar 7 01:53:14.486950 containerd[1585]: time="2026-03-07T01:53:14.484002404Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\"" Mar 7 01:53:16.019443 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4003007011.mount: Deactivated successfully. Mar 7 01:53:24.181619 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Mar 7 01:53:24.241169 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:53:26.050264 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:53:26.556065 (kubelet)[2325]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 01:53:27.522023 kubelet[2325]: E0307 01:53:27.521017 2325 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 01:53:27.570097 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 01:53:27.571022 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
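The pause:3.10 pull above is the Kubernetes side (most likely a kubeadm image pre-pull) asking for its preferred sandbox image, while the containerd CRI config dumped earlier in this log still defaults to registry.k8s.io/pause:3.8. If the two are meant to agree, the usual approach is to pin sandbox_image in containerd's config; a sketch for /etc/containerd/config.toml (containerd 1.7, config version 2):

    version = 2
    [plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "registry.k8s.io/pause:3.10"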
Mar 7 01:53:30.037034 containerd[1585]: time="2026-03-07T01:53:30.033404880Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:53:30.042808 containerd[1585]: time="2026-03-07T01:53:30.042533680Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=23718840" Mar 7 01:53:30.054753 containerd[1585]: time="2026-03-07T01:53:30.050972447Z" level=info msg="ImageCreate event name:\"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:53:30.066573 containerd[1585]: time="2026-03-07T01:53:30.065854730Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:53:30.068511 containerd[1585]: time="2026-03-07T01:53:30.068209958Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"23716032\" in 15.583918912s" Mar 7 01:53:30.069580 containerd[1585]: time="2026-03-07T01:53:30.068995250Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\"" Mar 7 01:53:37.680636 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9. Mar 7 01:53:37.703242 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:53:38.300346 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:53:38.353887 (kubelet)[2381]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 01:53:38.645368 kubelet[2381]: E0307 01:53:38.642144 2381 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 01:53:38.765777 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 01:53:38.773044 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 7 01:53:48.140211 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:53:48.174973 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:53:48.367996 systemd[1]: Reloading requested from client PID 2401 ('systemctl') (unit session-9.scope)... Mar 7 01:53:48.368057 systemd[1]: Reloading... Mar 7 01:53:48.939948 zram_generator::config[2439]: No configuration found. Mar 7 01:53:49.517416 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 7 01:53:49.836164 systemd[1]: Reloading finished in 1466 ms. Mar 7 01:53:50.066197 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:53:50.087022 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... 
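The reload above (requested by systemctl from session-9) also surfaces a warning that docker.socket still points ListenStream= at /var/run/docker.sock, a legacy path that systemd rewrites to /run/docker.sock on the fly. If the unit were to be fixed rather than patched at runtime, a drop-in is the conventional way; a sketch:

    # /etc/systemd/system/docker.socket.d/10-run-path.conf -- illustrative drop-in
    [Socket]
    ListenStream=
    ListenStream=/run/docker.sock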
Mar 7 01:53:50.095899 systemd[1]: kubelet.service: Deactivated successfully. Mar 7 01:53:50.101025 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:53:50.127065 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:53:50.597267 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:53:50.653396 (kubelet)[2503]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 7 01:53:50.909938 kubelet[2503]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 7 01:53:50.909938 kubelet[2503]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 7 01:53:50.909938 kubelet[2503]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 7 01:53:50.909938 kubelet[2503]: I0307 01:53:50.907473 2503 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 7 01:53:54.283231 kubelet[2503]: I0307 01:53:54.283111 2503 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Mar 7 01:53:54.283231 kubelet[2503]: I0307 01:53:54.283194 2503 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 7 01:53:54.284269 kubelet[2503]: I0307 01:53:54.283585 2503 server.go:956] "Client rotation is on, will bootstrap in background" Mar 7 01:53:54.444835 kubelet[2503]: I0307 01:53:54.443841 2503 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 7 01:53:54.466064 kubelet[2503]: E0307 01:53:54.454235 2503 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.122:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.122:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 7 01:53:54.494775 kubelet[2503]: E0307 01:53:54.494047 2503 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 7 01:53:54.494775 kubelet[2503]: I0307 01:53:54.494095 2503 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Mar 7 01:53:54.545074 kubelet[2503]: I0307 01:53:54.544810 2503 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 7 01:53:54.546766 kubelet[2503]: I0307 01:53:54.545659 2503 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 7 01:53:54.546766 kubelet[2503]: I0307 01:53:54.545785 2503 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Mar 7 01:53:54.546766 kubelet[2503]: I0307 01:53:54.546187 2503 topology_manager.go:138] "Creating topology manager with none policy" Mar 7 01:53:54.546766 kubelet[2503]: I0307 01:53:54.546209 2503 container_manager_linux.go:303] "Creating device plugin manager" Mar 7 01:53:54.546766 kubelet[2503]: I0307 01:53:54.546484 2503 state_mem.go:36] "Initialized new in-memory state store" Mar 7 01:53:54.568957 kubelet[2503]: I0307 01:53:54.568259 2503 kubelet.go:480] "Attempting to sync node with API server" Mar 7 01:53:54.568957 kubelet[2503]: I0307 01:53:54.568336 2503 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 7 01:53:54.573916 kubelet[2503]: I0307 01:53:54.571968 2503 kubelet.go:386] "Adding apiserver pod source" Mar 7 01:53:54.573916 kubelet[2503]: I0307 01:53:54.572177 2503 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 7 01:53:54.591205 kubelet[2503]: E0307 01:53:54.589119 2503 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.122:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.122:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 7 01:53:54.596984 kubelet[2503]: E0307 01:53:54.594447 2503 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.122:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.122:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 7 01:53:54.630576 
kubelet[2503]: I0307 01:53:54.622030 2503 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 7 01:53:54.630576 kubelet[2503]: I0307 01:53:54.626233 2503 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 7 01:53:54.645180 kubelet[2503]: W0307 01:53:54.645141 2503 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Mar 7 01:53:54.747574 kubelet[2503]: I0307 01:53:54.747486 2503 watchdog_linux.go:99] "Systemd watchdog is not enabled" Mar 7 01:53:54.751975 kubelet[2503]: I0307 01:53:54.751440 2503 server.go:1289] "Started kubelet" Mar 7 01:53:54.752284 kubelet[2503]: I0307 01:53:54.752241 2503 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 7 01:53:55.052981 kubelet[2503]: E0307 01:53:55.044924 2503 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.122:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.122:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.189a6c4229f52678 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-07 01:53:54.751284856 +0000 UTC m=+4.049629364,LastTimestamp:2026-03-07 01:53:54.751284856 +0000 UTC m=+4.049629364,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 7 01:53:55.058575 kubelet[2503]: I0307 01:53:55.058547 2503 server.go:317] "Adding debug handlers to kubelet server" Mar 7 01:53:55.066035 kubelet[2503]: I0307 01:53:55.063663 2503 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 7 01:53:55.071787 kubelet[2503]: I0307 01:53:55.069438 2503 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 7 01:53:55.071787 kubelet[2503]: I0307 01:53:55.070532 2503 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 7 01:53:55.071976 kubelet[2503]: I0307 01:53:55.071913 2503 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 7 01:53:55.072584 kubelet[2503]: I0307 01:53:55.072563 2503 volume_manager.go:297] "Starting Kubelet Volume Manager" Mar 7 01:53:55.083146 kubelet[2503]: I0307 01:53:55.083111 2503 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Mar 7 01:53:55.083460 kubelet[2503]: I0307 01:53:55.083442 2503 reconciler.go:26] "Reconciler: start to sync state" Mar 7 01:53:55.088047 kubelet[2503]: E0307 01:53:55.088017 2503 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:53:55.089856 kubelet[2503]: E0307 01:53:55.089407 2503 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.122:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.122:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 7 
01:53:55.089856 kubelet[2503]: E0307 01:53:55.089547 2503 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.122:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.122:6443: connect: connection refused" interval="200ms" Mar 7 01:53:55.138582 kubelet[2503]: I0307 01:53:55.134540 2503 factory.go:223] Registration of the systemd container factory successfully Mar 7 01:53:55.138582 kubelet[2503]: I0307 01:53:55.137567 2503 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 7 01:53:55.174962 kubelet[2503]: I0307 01:53:55.169147 2503 factory.go:223] Registration of the containerd container factory successfully Mar 7 01:53:55.213429 kubelet[2503]: E0307 01:53:55.195356 2503 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:53:55.250028 kubelet[2503]: E0307 01:53:55.249985 2503 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 7 01:53:55.297427 kubelet[2503]: E0307 01:53:55.295018 2503 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.122:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.122:6443: connect: connection refused" interval="400ms" Mar 7 01:53:55.325891 kubelet[2503]: E0307 01:53:55.324827 2503 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:53:55.427903 kubelet[2503]: E0307 01:53:55.426169 2503 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:53:55.442194 kubelet[2503]: I0307 01:53:55.440285 2503 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Mar 7 01:53:55.448112 kubelet[2503]: I0307 01:53:55.447314 2503 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Mar 7 01:53:55.448112 kubelet[2503]: I0307 01:53:55.447429 2503 status_manager.go:230] "Starting to sync pod status with apiserver" Mar 7 01:53:55.448112 kubelet[2503]: I0307 01:53:55.447515 2503 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
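
Every API call in this stretch (the certificate signing request, the Node/Service/CSIDriver watches, lease creation, event posting) fails with "dial tcp 10.0.0.122:6443: connect: connection refused": the kubelet is running, but the kube-apiserver static pod it is about to create is not listening yet, so the lease controller keeps retrying with a growing interval (200ms, then 400ms, and larger below). The sketch below shows the same wait-until-reachable pattern against the endpoint taken from the log; the timeout and backoff values are illustrative, not kubelet's actual retry schedule.

#!/usr/bin/env python3
# Wait until the control-plane endpoint accepts TCP connections.
# Endpoint copied from the "connection refused" errors above; sketch only.
import socket
import time

HOST, PORT = "10.0.0.122", 6443

def wait_for_apiserver(timeout: float = 300.0) -> bool:
    deadline = time.monotonic() + timeout
    delay = 0.2  # start small and back off, like the lease retries above
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((HOST, PORT), timeout=2.0):
                return True  # port is open; the apiserver pod is up
        except OSError:
            time.sleep(delay)
            delay = min(delay * 2, 10.0)
    return False

if __name__ == "__main__":
    print("apiserver reachable" if wait_for_apiserver() else "timed out")
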
Mar 7 01:53:55.448112 kubelet[2503]: I0307 01:53:55.447576 2503 kubelet.go:2436] "Starting kubelet main sync loop" Mar 7 01:53:55.448112 kubelet[2503]: E0307 01:53:55.447963 2503 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 7 01:53:55.452164 kubelet[2503]: E0307 01:53:55.450538 2503 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.122:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.122:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 7 01:53:55.458778 kubelet[2503]: I0307 01:53:55.458310 2503 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 7 01:53:55.458778 kubelet[2503]: I0307 01:53:55.458338 2503 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 7 01:53:55.458778 kubelet[2503]: I0307 01:53:55.458360 2503 state_mem.go:36] "Initialized new in-memory state store" Mar 7 01:53:55.471509 kubelet[2503]: E0307 01:53:55.471448 2503 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.122:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.122:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 7 01:53:55.536063 kubelet[2503]: E0307 01:53:55.535131 2503 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.122:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.122:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 7 01:53:55.537940 kubelet[2503]: E0307 01:53:55.536482 2503 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:53:55.538748 kubelet[2503]: I0307 01:53:55.538168 2503 policy_none.go:49] "None policy: Start" Mar 7 01:53:55.538748 kubelet[2503]: I0307 01:53:55.538254 2503 memory_manager.go:186] "Starting memorymanager" policy="None" Mar 7 01:53:55.538748 kubelet[2503]: I0307 01:53:55.538284 2503 state_mem.go:35] "Initializing new in-memory state store" Mar 7 01:53:55.549740 kubelet[2503]: E0307 01:53:55.549114 2503 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 7 01:53:55.580328 kubelet[2503]: E0307 01:53:55.579114 2503 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 7 01:53:55.580328 kubelet[2503]: I0307 01:53:55.579899 2503 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 7 01:53:55.587243 kubelet[2503]: E0307 01:53:55.587123 2503 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Mar 7 01:53:55.601839 kubelet[2503]: E0307 01:53:55.600059 2503 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Mar 7 01:53:55.606806 kubelet[2503]: I0307 01:53:55.579921 2503 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 7 01:53:55.606806 kubelet[2503]: I0307 01:53:55.605503 2503 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 7 01:53:55.700187 kubelet[2503]: E0307 01:53:55.699875 2503 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.122:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.122:6443: connect: connection refused" interval="800ms" Mar 7 01:53:55.711935 kubelet[2503]: I0307 01:53:55.711062 2503 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 7 01:53:55.713455 kubelet[2503]: E0307 01:53:55.713241 2503 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.122:6443/api/v1/nodes\": dial tcp 10.0.0.122:6443: connect: connection refused" node="localhost" Mar 7 01:53:55.799176 kubelet[2503]: I0307 01:53:55.798409 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 7 01:53:55.799176 kubelet[2503]: I0307 01:53:55.798462 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 7 01:53:55.799176 kubelet[2503]: I0307 01:53:55.798498 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 7 01:53:55.799176 kubelet[2503]: I0307 01:53:55.798537 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 7 01:53:55.799176 kubelet[2503]: I0307 01:53:55.798570 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/af9dbb652d5d7b23793c1824fe3245be-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"af9dbb652d5d7b23793c1824fe3245be\") " pod="kube-system/kube-apiserver-localhost" Mar 7 01:53:55.799820 kubelet[2503]: I0307 01:53:55.798595 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/af9dbb652d5d7b23793c1824fe3245be-k8s-certs\") pod \"kube-apiserver-localhost\" 
(UID: \"af9dbb652d5d7b23793c1824fe3245be\") " pod="kube-system/kube-apiserver-localhost" Mar 7 01:53:55.799820 kubelet[2503]: I0307 01:53:55.798753 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/af9dbb652d5d7b23793c1824fe3245be-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"af9dbb652d5d7b23793c1824fe3245be\") " pod="kube-system/kube-apiserver-localhost" Mar 7 01:53:55.799820 kubelet[2503]: I0307 01:53:55.798785 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 7 01:53:55.814954 kubelet[2503]: E0307 01:53:55.806375 2503 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 7 01:53:55.846522 kubelet[2503]: E0307 01:53:55.845541 2503 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 7 01:53:55.853523 kubelet[2503]: E0307 01:53:55.853075 2503 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 7 01:53:55.902359 kubelet[2503]: I0307 01:53:55.901849 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e944e4cb17af904786c3a2e01e298498-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"e944e4cb17af904786c3a2e01e298498\") " pod="kube-system/kube-scheduler-localhost" Mar 7 01:53:55.918882 kubelet[2503]: I0307 01:53:55.918110 2503 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 7 01:53:55.918882 kubelet[2503]: E0307 01:53:55.918594 2503 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.122:6443/api/v1/nodes\": dial tcp 10.0.0.122:6443: connect: connection refused" node="localhost" Mar 7 01:53:56.001061 kubelet[2503]: E0307 01:53:56.000090 2503 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.122:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.122:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 7 01:53:56.131117 kubelet[2503]: E0307 01:53:56.124233 2503 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:53:56.147098 kubelet[2503]: E0307 01:53:56.146412 2503 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:53:56.158089 kubelet[2503]: E0307 01:53:56.155174 2503 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:53:56.178346 containerd[1585]: time="2026-03-07T01:53:56.178088362Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8747e1f8a49a618fbc1324a8fe2d3754,Namespace:kube-system,Attempt:0,}" Mar 7 01:53:56.183943 containerd[1585]: time="2026-03-07T01:53:56.181008595Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:af9dbb652d5d7b23793c1824fe3245be,Namespace:kube-system,Attempt:0,}" Mar 7 01:53:56.185440 containerd[1585]: time="2026-03-07T01:53:56.184397104Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:e944e4cb17af904786c3a2e01e298498,Namespace:kube-system,Attempt:0,}" Mar 7 01:53:56.351818 kubelet[2503]: I0307 01:53:56.343202 2503 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 7 01:53:56.351818 kubelet[2503]: E0307 01:53:56.351747 2503 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.122:6443/api/v1/nodes\": dial tcp 10.0.0.122:6443: connect: connection refused" node="localhost" Mar 7 01:53:56.502271 kubelet[2503]: E0307 01:53:56.501307 2503 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.122:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.122:6443: connect: connection refused" interval="1.6s" Mar 7 01:53:56.529841 kubelet[2503]: E0307 01:53:56.528114 2503 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.122:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.122:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 7 01:53:57.041485 kubelet[2503]: E0307 01:53:57.041064 2503 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.122:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.122:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 7 01:53:57.148269 kubelet[2503]: E0307 01:53:57.146886 2503 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.122:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.122:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 7 01:53:57.157063 kubelet[2503]: I0307 01:53:57.156242 2503 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 7 01:53:57.157063 kubelet[2503]: E0307 01:53:57.157022 2503 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.122:6443/api/v1/nodes\": dial tcp 10.0.0.122:6443: connect: connection refused" node="localhost" Mar 7 01:53:58.120417 kubelet[2503]: E0307 01:53:58.120365 2503 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.122:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.122:6443: connect: connection refused" interval="3.2s" Mar 7 01:53:58.121826 kubelet[2503]: E0307 01:53:58.121605 2503 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.122:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.122:6443: connect: connection refused" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 7 01:53:58.158492 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3558091769.mount: Deactivated successfully. Mar 7 01:53:58.274063 containerd[1585]: time="2026-03-07T01:53:58.271157254Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 7 01:53:58.318849 containerd[1585]: time="2026-03-07T01:53:58.316857926Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Mar 7 01:53:58.325522 containerd[1585]: time="2026-03-07T01:53:58.323773072Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 7 01:53:58.331145 containerd[1585]: time="2026-03-07T01:53:58.329606596Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 7 01:53:58.337061 containerd[1585]: time="2026-03-07T01:53:58.335316335Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 7 01:53:58.346119 containerd[1585]: time="2026-03-07T01:53:58.344948319Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 7 01:53:58.363598 containerd[1585]: time="2026-03-07T01:53:58.363479242Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 7 01:53:58.370052 containerd[1585]: time="2026-03-07T01:53:58.368448729Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 7 01:53:58.388063 containerd[1585]: time="2026-03-07T01:53:58.380318182Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 2.193526853s" Mar 7 01:53:58.388063 containerd[1585]: time="2026-03-07T01:53:58.383442364Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 2.202247953s" Mar 7 01:53:58.448340 containerd[1585]: time="2026-03-07T01:53:58.447456228Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 2.257353515s" Mar 7 01:53:58.579335 kubelet[2503]: E0307 01:53:58.574186 2503 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get 
\"https://10.0.0.122:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.122:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 7 01:53:58.776976 kubelet[2503]: I0307 01:53:58.776924 2503 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 7 01:53:58.777804 kubelet[2503]: E0307 01:53:58.777631 2503 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.122:6443/api/v1/nodes\": dial tcp 10.0.0.122:6443: connect: connection refused" node="localhost" Mar 7 01:53:59.241868 kubelet[2503]: E0307 01:53:59.240541 2503 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.122:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.122:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 7 01:53:59.972072 containerd[1585]: time="2026-03-07T01:53:59.970912108Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:53:59.972072 containerd[1585]: time="2026-03-07T01:53:59.971083549Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:53:59.972072 containerd[1585]: time="2026-03-07T01:53:59.971106743Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:53:59.972072 containerd[1585]: time="2026-03-07T01:53:59.971385002Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:54:00.038856 containerd[1585]: time="2026-03-07T01:54:00.037981551Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:54:00.038856 containerd[1585]: time="2026-03-07T01:54:00.038068163Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:54:00.038856 containerd[1585]: time="2026-03-07T01:54:00.038136571Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:54:00.038856 containerd[1585]: time="2026-03-07T01:54:00.038470656Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:54:00.243880 containerd[1585]: time="2026-03-07T01:54:00.236963146Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:54:00.243880 containerd[1585]: time="2026-03-07T01:54:00.237175293Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:54:00.243880 containerd[1585]: time="2026-03-07T01:54:00.237209958Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:54:00.243880 containerd[1585]: time="2026-03-07T01:54:00.237515097Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:54:00.387073 systemd[1]: run-containerd-runc-k8s.io-c5af76485ddd746e3d037248f523311daeacf3b8571a925ba0d6bbdb4886cc31-runc.7QgmHM.mount: Deactivated successfully. Mar 7 01:54:00.561168 kubelet[2503]: E0307 01:54:00.561021 2503 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.122:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.122:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 7 01:54:00.824811 containerd[1585]: time="2026-03-07T01:54:00.820126616Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:af9dbb652d5d7b23793c1824fe3245be,Namespace:kube-system,Attempt:0,} returns sandbox id \"c5af76485ddd746e3d037248f523311daeacf3b8571a925ba0d6bbdb4886cc31\"" Mar 7 01:54:00.844174 kubelet[2503]: E0307 01:54:00.840041 2503 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:54:00.909097 containerd[1585]: time="2026-03-07T01:54:00.909043397Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:e944e4cb17af904786c3a2e01e298498,Namespace:kube-system,Attempt:0,} returns sandbox id \"edcb12c7653526c02baefc8929808e947b8be12821abe31542633436c407834b\"" Mar 7 01:54:00.915419 kubelet[2503]: E0307 01:54:00.915380 2503 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:54:00.917341 containerd[1585]: time="2026-03-07T01:54:00.917296640Z" level=info msg="CreateContainer within sandbox \"c5af76485ddd746e3d037248f523311daeacf3b8571a925ba0d6bbdb4886cc31\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 7 01:54:00.940804 containerd[1585]: time="2026-03-07T01:54:00.940183179Z" level=info msg="CreateContainer within sandbox \"edcb12c7653526c02baefc8929808e947b8be12821abe31542633436c407834b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 7 01:54:00.971781 containerd[1585]: time="2026-03-07T01:54:00.969154541Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8747e1f8a49a618fbc1324a8fe2d3754,Namespace:kube-system,Attempt:0,} returns sandbox id \"a2e61fcb84316b31158fbc785302c732857b8cad6212f17a01f29a96acb994c6\"" Mar 7 01:54:00.976322 kubelet[2503]: E0307 01:54:00.976204 2503 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:54:00.996106 containerd[1585]: time="2026-03-07T01:54:00.996061091Z" level=info msg="CreateContainer within sandbox \"a2e61fcb84316b31158fbc785302c732857b8cad6212f17a01f29a96acb994c6\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 7 01:54:01.033052 containerd[1585]: time="2026-03-07T01:54:01.032901246Z" level=info msg="CreateContainer within sandbox \"edcb12c7653526c02baefc8929808e947b8be12821abe31542633436c407834b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"39336094ec7be83d009b1d890613e8836779d2d88a97e6a97f4fd21061cd7cd0\"" Mar 7 01:54:01.036303 containerd[1585]: time="2026-03-07T01:54:01.034878410Z" level=info 
msg="StartContainer for \"39336094ec7be83d009b1d890613e8836779d2d88a97e6a97f4fd21061cd7cd0\"" Mar 7 01:54:01.054959 containerd[1585]: time="2026-03-07T01:54:01.052827392Z" level=info msg="CreateContainer within sandbox \"c5af76485ddd746e3d037248f523311daeacf3b8571a925ba0d6bbdb4886cc31\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"1024da5e963192145c048b02e28da54dddcb26675afa674b0c2482bb6595c0d7\"" Mar 7 01:54:01.054959 containerd[1585]: time="2026-03-07T01:54:01.054182742Z" level=info msg="StartContainer for \"1024da5e963192145c048b02e28da54dddcb26675afa674b0c2482bb6595c0d7\"" Mar 7 01:54:01.131810 containerd[1585]: time="2026-03-07T01:54:01.127986940Z" level=info msg="CreateContainer within sandbox \"a2e61fcb84316b31158fbc785302c732857b8cad6212f17a01f29a96acb994c6\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"ce97c6cd3fd65af31c81b3527c240ba11d53523247745960ea45ea4f3e0b07c8\"" Mar 7 01:54:01.131810 containerd[1585]: time="2026-03-07T01:54:01.129355231Z" level=info msg="StartContainer for \"ce97c6cd3fd65af31c81b3527c240ba11d53523247745960ea45ea4f3e0b07c8\"" Mar 7 01:54:01.329490 kubelet[2503]: E0307 01:54:01.329013 2503 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.122:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.122:6443: connect: connection refused" interval="6.4s" Mar 7 01:54:01.532566 containerd[1585]: time="2026-03-07T01:54:01.530986090Z" level=info msg="StartContainer for \"39336094ec7be83d009b1d890613e8836779d2d88a97e6a97f4fd21061cd7cd0\" returns successfully" Mar 7 01:54:01.544129 containerd[1585]: time="2026-03-07T01:54:01.543852272Z" level=info msg="StartContainer for \"1024da5e963192145c048b02e28da54dddcb26675afa674b0c2482bb6595c0d7\" returns successfully" Mar 7 01:54:01.677835 kubelet[2503]: E0307 01:54:01.671297 2503 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 7 01:54:01.677835 kubelet[2503]: E0307 01:54:01.671508 2503 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:54:01.683988 kubelet[2503]: E0307 01:54:01.682639 2503 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 7 01:54:01.683988 kubelet[2503]: E0307 01:54:01.683008 2503 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:54:01.764828 containerd[1585]: time="2026-03-07T01:54:01.764389047Z" level=info msg="StartContainer for \"ce97c6cd3fd65af31c81b3527c240ba11d53523247745960ea45ea4f3e0b07c8\" returns successfully" Mar 7 01:54:01.982895 kubelet[2503]: I0307 01:54:01.979644 2503 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 7 01:54:01.982895 kubelet[2503]: E0307 01:54:01.980311 2503 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.122:6443/api/v1/nodes\": dial tcp 10.0.0.122:6443: connect: connection refused" node="localhost" Mar 7 01:54:02.725050 kubelet[2503]: E0307 01:54:02.725017 2503 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node 
\"localhost\" not found" node="localhost" Mar 7 01:54:02.727625 kubelet[2503]: E0307 01:54:02.727600 2503 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:54:02.730046 kubelet[2503]: E0307 01:54:02.730009 2503 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 7 01:54:02.731795 kubelet[2503]: E0307 01:54:02.731283 2503 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 7 01:54:02.732169 kubelet[2503]: E0307 01:54:02.732144 2503 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:54:02.732492 kubelet[2503]: E0307 01:54:02.732473 2503 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:54:03.779186 kubelet[2503]: E0307 01:54:03.777610 2503 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 7 01:54:03.781640 kubelet[2503]: E0307 01:54:03.780645 2503 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:54:03.784279 kubelet[2503]: E0307 01:54:03.783758 2503 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 7 01:54:03.784279 kubelet[2503]: E0307 01:54:03.783962 2503 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:54:04.839988 kubelet[2503]: E0307 01:54:04.839561 2503 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 7 01:54:04.843209 kubelet[2503]: E0307 01:54:04.840219 2503 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:54:05.615804 kubelet[2503]: E0307 01:54:05.614617 2503 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Mar 7 01:54:09.042949 kubelet[2503]: I0307 01:54:09.035283 2503 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 7 01:54:10.705861 kubelet[2503]: E0307 01:54:10.704336 2503 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 7 01:54:10.705861 kubelet[2503]: E0307 01:54:10.705762 2503 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:54:12.615165 kubelet[2503]: E0307 01:54:12.602301 2503 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get 
\"https://10.0.0.122:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 7 01:54:12.736323 kubelet[2503]: E0307 01:54:12.735585 2503 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.122:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 7 01:54:13.277613 kubelet[2503]: E0307 01:54:13.277136 2503 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.122:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 7 01:54:14.617367 kubelet[2503]: E0307 01:54:14.617005 2503 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.122:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.189a6c4229f52678 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-07 01:53:54.751284856 +0000 UTC m=+4.049629364,LastTimestamp:2026-03-07 01:53:54.751284856 +0000 UTC m=+4.049629364,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 7 01:54:14.662923 kubelet[2503]: E0307 01:54:14.662580 2503 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 7 01:54:14.667093 kubelet[2503]: E0307 01:54:14.667001 2503 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:54:15.485379 kubelet[2503]: E0307 01:54:15.484907 2503 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.122:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 7 01:54:15.619151 kubelet[2503]: E0307 01:54:15.616921 2503 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Mar 7 01:54:16.643025 kubelet[2503]: E0307 01:54:16.640465 2503 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Mar 7 01:54:17.340619 kubelet[2503]: I0307 01:54:17.340114 2503 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Mar 7 01:54:17.340619 kubelet[2503]: E0307 01:54:17.340167 2503 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Mar 7 01:54:17.459140 kubelet[2503]: E0307 01:54:17.459053 2503 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" 
not found" Mar 7 01:54:17.651325 kubelet[2503]: E0307 01:54:17.641030 2503 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:54:17.764186 kubelet[2503]: E0307 01:54:17.754545 2503 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:54:17.857117 kubelet[2503]: E0307 01:54:17.856904 2503 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:54:17.968852 kubelet[2503]: E0307 01:54:17.960434 2503 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:54:18.126666 kubelet[2503]: E0307 01:54:18.124863 2503 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:54:18.229316 kubelet[2503]: E0307 01:54:18.225476 2503 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:54:18.327411 kubelet[2503]: E0307 01:54:18.326892 2503 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:54:18.504364 kubelet[2503]: E0307 01:54:18.490973 2503 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:54:18.592579 kubelet[2503]: E0307 01:54:18.592536 2503 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:54:18.712096 kubelet[2503]: E0307 01:54:18.711378 2503 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:54:18.871289 kubelet[2503]: E0307 01:54:18.868372 2503 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:54:18.971150 kubelet[2503]: E0307 01:54:18.970890 2503 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:54:19.091391 kubelet[2503]: E0307 01:54:19.087468 2503 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:54:19.207147 kubelet[2503]: E0307 01:54:19.207018 2503 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:54:19.310032 kubelet[2503]: E0307 01:54:19.309974 2503 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:54:19.421480 kubelet[2503]: E0307 01:54:19.413305 2503 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:54:19.527225 kubelet[2503]: E0307 01:54:19.525624 2503 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:54:19.629784 kubelet[2503]: E0307 01:54:19.628946 2503 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:54:19.729533 kubelet[2503]: E0307 01:54:19.729431 2503 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:54:19.831626 kubelet[2503]: E0307 01:54:19.830505 2503 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:54:19.931976 kubelet[2503]: E0307 01:54:19.931532 2503 kubelet_node_status.go:466] "Error 
getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:54:20.039529 kubelet[2503]: E0307 01:54:20.039475 2503 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:54:20.142472 kubelet[2503]: E0307 01:54:20.140279 2503 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:54:20.335309 kubelet[2503]: E0307 01:54:20.333169 2503 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:54:20.433917 kubelet[2503]: E0307 01:54:20.433571 2503 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:54:20.551666 kubelet[2503]: E0307 01:54:20.547555 2503 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:54:20.652275 kubelet[2503]: E0307 01:54:20.648509 2503 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:54:20.759570 kubelet[2503]: E0307 01:54:20.754964 2503 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:54:20.810504 kubelet[2503]: I0307 01:54:20.799283 2503 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 7 01:54:20.942933 kubelet[2503]: I0307 01:54:20.937451 2503 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 7 01:54:21.024570 kubelet[2503]: E0307 01:54:21.024234 2503 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:54:21.052415 kubelet[2503]: I0307 01:54:21.049544 2503 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 7 01:54:21.052415 kubelet[2503]: I0307 01:54:21.050476 2503 apiserver.go:52] "Watching apiserver" Mar 7 01:54:21.073989 kubelet[2503]: E0307 01:54:21.070447 2503 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:54:21.085906 kubelet[2503]: I0307 01:54:21.085007 2503 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Mar 7 01:54:21.465817 kubelet[2503]: E0307 01:54:21.465046 2503 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Mar 7 01:54:21.465817 kubelet[2503]: I0307 01:54:21.465161 2503 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 7 01:54:21.890333 kubelet[2503]: E0307 01:54:21.883175 2503 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:54:21.935246 kubelet[2503]: E0307 01:54:21.920973 2503 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:54:25.718389 kubelet[2503]: I0307 01:54:25.716663 2503 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" 
podStartSLOduration=4.71664083 podStartE2EDuration="4.71664083s" podCreationTimestamp="2026-03-07 01:54:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:54:25.716589962 +0000 UTC m=+35.014934351" watchObservedRunningTime="2026-03-07 01:54:25.71664083 +0000 UTC m=+35.014985228" Mar 7 01:54:25.826020 kubelet[2503]: I0307 01:54:25.825601 2503 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=4.825576652 podStartE2EDuration="4.825576652s" podCreationTimestamp="2026-03-07 01:54:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:54:25.822848679 +0000 UTC m=+35.121193107" watchObservedRunningTime="2026-03-07 01:54:25.825576652 +0000 UTC m=+35.123921040" Mar 7 01:54:25.890210 kubelet[2503]: I0307 01:54:25.889246 2503 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=5.889227423 podStartE2EDuration="5.889227423s" podCreationTimestamp="2026-03-07 01:54:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:54:25.887275338 +0000 UTC m=+35.185619745" watchObservedRunningTime="2026-03-07 01:54:25.889227423 +0000 UTC m=+35.187571810" Mar 7 01:54:27.305088 systemd[1]: Reloading requested from client PID 2789 ('systemctl') (unit session-9.scope)... Mar 7 01:54:27.305161 systemd[1]: Reloading... Mar 7 01:54:27.696995 zram_generator::config[2828]: No configuration found. Mar 7 01:54:28.200504 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 7 01:54:28.465239 systemd[1]: Reloading finished in 1157 ms. Mar 7 01:54:28.567473 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:54:28.628358 systemd[1]: kubelet.service: Deactivated successfully. Mar 7 01:54:28.629940 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:54:28.672625 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:54:29.161942 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:54:29.178419 (kubelet)[2883]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 7 01:54:29.379167 kubelet[2883]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 7 01:54:29.379167 kubelet[2883]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 7 01:54:29.381092 kubelet[2883]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
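
In the pod_startup_latency_tracker entries above no image pulls are recorded (firstStartedPulling/lastFinishedPulling are the zero time), so podStartSLOduration and podStartE2EDuration coincide and equal the gap between podCreationTimestamp and watchObservedRunningTime; for kube-apiserver-localhost that is 01:54:21 to 01:54:25.71664083, i.e. the reported 4.71664083s. A short recomputation using the timestamps copied from that entry (datetime keeps only microseconds, so the result matches to µs precision); purely an illustrative cross-check, not kubelet code.

#!/usr/bin/env python3
# Recompute the startup duration reported for kube-apiserver-localhost above.
from datetime import datetime, timezone

created = datetime(2026, 3, 7, 1, 54, 21, tzinfo=timezone.utc)           # podCreationTimestamp
observed = datetime(2026, 3, 7, 1, 54, 25, 716640, tzinfo=timezone.utc)  # watchObservedRunningTime, truncated to µs

duration = (observed - created).total_seconds()
print(f"{duration:.6f}s")  # ~4.716640s, matching podStartSLOduration=4.71664083s
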
Mar 7 01:54:29.381092 kubelet[2883]: I0307 01:54:29.379890 2883 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 7 01:54:29.404020 kubelet[2883]: I0307 01:54:29.402512 2883 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Mar 7 01:54:29.404020 kubelet[2883]: I0307 01:54:29.402550 2883 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 7 01:54:29.404020 kubelet[2883]: I0307 01:54:29.403050 2883 server.go:956] "Client rotation is on, will bootstrap in background" Mar 7 01:54:29.409056 kubelet[2883]: I0307 01:54:29.405361 2883 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Mar 7 01:54:29.413446 kubelet[2883]: I0307 01:54:29.411645 2883 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 7 01:54:29.428055 kubelet[2883]: E0307 01:54:29.426931 2883 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 7 01:54:29.428055 kubelet[2883]: I0307 01:54:29.426966 2883 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Mar 7 01:54:29.462615 kubelet[2883]: I0307 01:54:29.462329 2883 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Mar 7 01:54:29.470920 kubelet[2883]: I0307 01:54:29.467407 2883 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 7 01:54:29.470920 kubelet[2883]: I0307 01:54:29.470298 2883 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Mar 7 01:54:29.470920 kubelet[2883]: I0307 01:54:29.470514 2883 topology_manager.go:138] "Creating topology manager 
with none policy" Mar 7 01:54:29.470920 kubelet[2883]: I0307 01:54:29.470531 2883 container_manager_linux.go:303] "Creating device plugin manager" Mar 7 01:54:29.470920 kubelet[2883]: I0307 01:54:29.470606 2883 state_mem.go:36] "Initialized new in-memory state store" Mar 7 01:54:29.471467 kubelet[2883]: I0307 01:54:29.471226 2883 kubelet.go:480] "Attempting to sync node with API server" Mar 7 01:54:29.471467 kubelet[2883]: I0307 01:54:29.471250 2883 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 7 01:54:29.471467 kubelet[2883]: I0307 01:54:29.471293 2883 kubelet.go:386] "Adding apiserver pod source" Mar 7 01:54:29.473914 kubelet[2883]: I0307 01:54:29.472660 2883 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 7 01:54:29.476829 kubelet[2883]: I0307 01:54:29.476594 2883 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 7 01:54:29.478120 kubelet[2883]: I0307 01:54:29.478038 2883 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 7 01:54:29.494945 kubelet[2883]: I0307 01:54:29.494910 2883 watchdog_linux.go:99] "Systemd watchdog is not enabled" Mar 7 01:54:29.495382 kubelet[2883]: I0307 01:54:29.495273 2883 server.go:1289] "Started kubelet" Mar 7 01:54:29.498275 kubelet[2883]: I0307 01:54:29.497565 2883 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 7 01:54:29.499317 kubelet[2883]: I0307 01:54:29.499237 2883 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 7 01:54:29.502322 kubelet[2883]: I0307 01:54:29.502302 2883 server.go:317] "Adding debug handlers to kubelet server" Mar 7 01:54:29.505210 kubelet[2883]: I0307 01:54:29.505180 2883 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 7 01:54:29.512936 kubelet[2883]: I0307 01:54:29.509665 2883 volume_manager.go:297] "Starting Kubelet Volume Manager" Mar 7 01:54:29.512936 kubelet[2883]: E0307 01:54:29.510386 2883 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:54:29.512936 kubelet[2883]: I0307 01:54:29.512086 2883 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Mar 7 01:54:29.512936 kubelet[2883]: I0307 01:54:29.512282 2883 reconciler.go:26] "Reconciler: start to sync state" Mar 7 01:54:29.511856 sudo[2900]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Mar 7 01:54:29.512484 sudo[2900]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Mar 7 01:54:29.529314 kubelet[2883]: I0307 01:54:29.528851 2883 factory.go:223] Registration of the systemd container factory successfully Mar 7 01:54:29.529314 kubelet[2883]: I0307 01:54:29.528981 2883 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 7 01:54:29.531308 kubelet[2883]: I0307 01:54:29.530858 2883 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 7 01:54:29.533808 kubelet[2883]: I0307 01:54:29.531631 2883 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 7 01:54:29.586122 
kubelet[2883]: I0307 01:54:29.586016 2883 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Mar 7 01:54:29.604401 kubelet[2883]: E0307 01:54:29.604144 2883 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 7 01:54:29.606901 kubelet[2883]: I0307 01:54:29.606875 2883 factory.go:223] Registration of the containerd container factory successfully Mar 7 01:54:29.663965 kubelet[2883]: I0307 01:54:29.663862 2883 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Mar 7 01:54:29.665837 kubelet[2883]: I0307 01:54:29.665633 2883 status_manager.go:230] "Starting to sync pod status with apiserver" Mar 7 01:54:29.670417 kubelet[2883]: I0307 01:54:29.666010 2883 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 7 01:54:29.670417 kubelet[2883]: I0307 01:54:29.670545 2883 kubelet.go:2436] "Starting kubelet main sync loop" Mar 7 01:54:29.678178 kubelet[2883]: E0307 01:54:29.678066 2883 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 7 01:54:29.781463 kubelet[2883]: E0307 01:54:29.779103 2883 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 7 01:54:29.932645 kubelet[2883]: I0307 01:54:29.931222 2883 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 7 01:54:29.932645 kubelet[2883]: I0307 01:54:29.931239 2883 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 7 01:54:29.932645 kubelet[2883]: I0307 01:54:29.931261 2883 state_mem.go:36] "Initialized new in-memory state store" Mar 7 01:54:29.932645 kubelet[2883]: I0307 01:54:29.931430 2883 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 7 01:54:29.932645 kubelet[2883]: I0307 01:54:29.931443 2883 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 7 01:54:29.932645 kubelet[2883]: I0307 01:54:29.931466 2883 policy_none.go:49] "None policy: Start" Mar 7 01:54:29.932645 kubelet[2883]: I0307 01:54:29.931479 2883 memory_manager.go:186] "Starting memorymanager" policy="None" Mar 7 01:54:29.932645 kubelet[2883]: I0307 01:54:29.931491 2883 state_mem.go:35] "Initializing new in-memory state store" Mar 7 01:54:29.932645 kubelet[2883]: I0307 01:54:29.931599 2883 state_mem.go:75] "Updated machine memory state" Mar 7 01:54:29.933822 kubelet[2883]: E0307 01:54:29.933613 2883 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 7 01:54:29.937874 kubelet[2883]: I0307 01:54:29.935895 2883 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 7 01:54:29.937874 kubelet[2883]: I0307 01:54:29.935912 2883 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 7 01:54:29.937874 kubelet[2883]: I0307 01:54:29.936396 2883 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 7 01:54:29.950928 kubelet[2883]: E0307 01:54:29.950262 2883 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Mar 7 01:54:29.984252 kubelet[2883]: I0307 01:54:29.983521 2883 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 7 01:54:29.986825 kubelet[2883]: I0307 01:54:29.985520 2883 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 7 01:54:29.986825 kubelet[2883]: I0307 01:54:29.986153 2883 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 7 01:54:30.022366 kubelet[2883]: E0307 01:54:30.022243 2883 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Mar 7 01:54:30.025125 kubelet[2883]: E0307 01:54:30.025052 2883 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Mar 7 01:54:30.026507 kubelet[2883]: E0307 01:54:30.025464 2883 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Mar 7 01:54:30.126340 kubelet[2883]: I0307 01:54:30.125352 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 7 01:54:30.126340 kubelet[2883]: I0307 01:54:30.125850 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 7 01:54:30.130852 kubelet[2883]: I0307 01:54:30.126881 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 7 01:54:30.130852 kubelet[2883]: I0307 01:54:30.126915 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 7 01:54:30.130852 kubelet[2883]: I0307 01:54:30.126973 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/af9dbb652d5d7b23793c1824fe3245be-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"af9dbb652d5d7b23793c1824fe3245be\") " pod="kube-system/kube-apiserver-localhost" Mar 7 01:54:30.130852 kubelet[2883]: I0307 01:54:30.127037 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/af9dbb652d5d7b23793c1824fe3245be-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"af9dbb652d5d7b23793c1824fe3245be\") " 
pod="kube-system/kube-apiserver-localhost" Mar 7 01:54:30.130852 kubelet[2883]: I0307 01:54:30.127074 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/af9dbb652d5d7b23793c1824fe3245be-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"af9dbb652d5d7b23793c1824fe3245be\") " pod="kube-system/kube-apiserver-localhost" Mar 7 01:54:30.131109 kubelet[2883]: I0307 01:54:30.127116 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 7 01:54:30.131109 kubelet[2883]: I0307 01:54:30.127145 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e944e4cb17af904786c3a2e01e298498-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"e944e4cb17af904786c3a2e01e298498\") " pod="kube-system/kube-scheduler-localhost" Mar 7 01:54:30.147422 kubelet[2883]: I0307 01:54:30.147191 2883 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 7 01:54:30.213870 kubelet[2883]: I0307 01:54:30.213198 2883 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Mar 7 01:54:30.213870 kubelet[2883]: I0307 01:54:30.213312 2883 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Mar 7 01:54:30.327861 kubelet[2883]: E0307 01:54:30.326263 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:54:30.327861 kubelet[2883]: E0307 01:54:30.326504 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:54:30.327861 kubelet[2883]: E0307 01:54:30.327387 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:54:30.479136 kubelet[2883]: I0307 01:54:30.477654 2883 apiserver.go:52] "Watching apiserver" Mar 7 01:54:30.513091 kubelet[2883]: I0307 01:54:30.512325 2883 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Mar 7 01:54:30.737318 kubelet[2883]: E0307 01:54:30.736895 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:54:30.743425 kubelet[2883]: I0307 01:54:30.739567 2883 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 7 01:54:30.743425 kubelet[2883]: E0307 01:54:30.741295 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:54:30.808664 kubelet[2883]: E0307 01:54:30.808621 2883 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Mar 7 01:54:30.808664 kubelet[2883]: E0307 01:54:30.809014 2883 dns.go:153] "Nameserver 
limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:54:31.209441 kubelet[2883]: I0307 01:54:31.209402 2883 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 7 01:54:31.214483 containerd[1585]: time="2026-03-07T01:54:31.214228556Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 7 01:54:31.215368 kubelet[2883]: I0307 01:54:31.214853 2883 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 7 01:54:31.721345 sudo[2900]: pam_unix(sudo:session): session closed for user root Mar 7 01:54:31.745632 kubelet[2883]: E0307 01:54:31.745596 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:54:31.750570 kubelet[2883]: E0307 01:54:31.749313 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:54:32.402377 kubelet[2883]: I0307 01:54:32.399562 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ndrw\" (UniqueName: \"kubernetes.io/projected/cf7861cf-8a92-4e00-bd14-36c529eb4dd4-kube-api-access-7ndrw\") pod \"kube-proxy-hlw2t\" (UID: \"cf7861cf-8a92-4e00-bd14-36c529eb4dd4\") " pod="kube-system/kube-proxy-hlw2t" Mar 7 01:54:32.402377 kubelet[2883]: I0307 01:54:32.399629 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/cf7861cf-8a92-4e00-bd14-36c529eb4dd4-kube-proxy\") pod \"kube-proxy-hlw2t\" (UID: \"cf7861cf-8a92-4e00-bd14-36c529eb4dd4\") " pod="kube-system/kube-proxy-hlw2t" Mar 7 01:54:32.402377 kubelet[2883]: I0307 01:54:32.399661 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cf7861cf-8a92-4e00-bd14-36c529eb4dd4-xtables-lock\") pod \"kube-proxy-hlw2t\" (UID: \"cf7861cf-8a92-4e00-bd14-36c529eb4dd4\") " pod="kube-system/kube-proxy-hlw2t" Mar 7 01:54:32.404169 kubelet[2883]: I0307 01:54:32.403902 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cf7861cf-8a92-4e00-bd14-36c529eb4dd4-lib-modules\") pod \"kube-proxy-hlw2t\" (UID: \"cf7861cf-8a92-4e00-bd14-36c529eb4dd4\") " pod="kube-system/kube-proxy-hlw2t" Mar 7 01:54:32.619438 kubelet[2883]: E0307 01:54:32.619233 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:54:32.625436 containerd[1585]: time="2026-03-07T01:54:32.625292710Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hlw2t,Uid:cf7861cf-8a92-4e00-bd14-36c529eb4dd4,Namespace:kube-system,Attempt:0,}" Mar 7 01:54:32.753386 kubelet[2883]: E0307 01:54:32.753348 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:54:32.772088 containerd[1585]: time="2026-03-07T01:54:32.768663836Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:54:32.772088 containerd[1585]: time="2026-03-07T01:54:32.771208866Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:54:32.772088 containerd[1585]: time="2026-03-07T01:54:32.771233882Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:54:32.772088 containerd[1585]: time="2026-03-07T01:54:32.771524515Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:54:33.013086 containerd[1585]: time="2026-03-07T01:54:33.012353104Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hlw2t,Uid:cf7861cf-8a92-4e00-bd14-36c529eb4dd4,Namespace:kube-system,Attempt:0,} returns sandbox id \"40ef2c8edb36b74e707bca0c0f2914f8cd7480e01994e8464b691fa1ed276e15\"" Mar 7 01:54:33.015963 kubelet[2883]: E0307 01:54:33.014893 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:54:33.037304 containerd[1585]: time="2026-03-07T01:54:33.037247064Z" level=info msg="CreateContainer within sandbox \"40ef2c8edb36b74e707bca0c0f2914f8cd7480e01994e8464b691fa1ed276e15\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 7 01:54:33.139606 containerd[1585]: time="2026-03-07T01:54:33.139548678Z" level=info msg="CreateContainer within sandbox \"40ef2c8edb36b74e707bca0c0f2914f8cd7480e01994e8464b691fa1ed276e15\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"97e0aaf53d98b4ece394f48bda17c7133a46e60222bdc192c1529f2f061aaa51\"" Mar 7 01:54:33.152351 containerd[1585]: time="2026-03-07T01:54:33.141036997Z" level=info msg="StartContainer for \"97e0aaf53d98b4ece394f48bda17c7133a46e60222bdc192c1529f2f061aaa51\"" Mar 7 01:54:33.701944 containerd[1585]: time="2026-03-07T01:54:33.701655896Z" level=info msg="StartContainer for \"97e0aaf53d98b4ece394f48bda17c7133a46e60222bdc192c1529f2f061aaa51\" returns successfully" Mar 7 01:54:33.820159 kubelet[2883]: E0307 01:54:33.820009 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:54:33.830944 kubelet[2883]: I0307 01:54:33.829220 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/966190a8-7fd8-41d7-9d65-c6161d0460a8-clustermesh-secrets\") pod \"cilium-4nwwv\" (UID: \"966190a8-7fd8-41d7-9d65-c6161d0460a8\") " pod="kube-system/cilium-4nwwv" Mar 7 01:54:33.830944 kubelet[2883]: I0307 01:54:33.829269 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/966190a8-7fd8-41d7-9d65-c6161d0460a8-lib-modules\") pod \"cilium-4nwwv\" (UID: \"966190a8-7fd8-41d7-9d65-c6161d0460a8\") " pod="kube-system/cilium-4nwwv" Mar 7 01:54:33.830944 kubelet[2883]: I0307 01:54:33.829306 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/966190a8-7fd8-41d7-9d65-c6161d0460a8-cilium-run\") pod \"cilium-4nwwv\" (UID: 
\"966190a8-7fd8-41d7-9d65-c6161d0460a8\") " pod="kube-system/cilium-4nwwv" Mar 7 01:54:33.830944 kubelet[2883]: I0307 01:54:33.829339 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/966190a8-7fd8-41d7-9d65-c6161d0460a8-host-proc-sys-net\") pod \"cilium-4nwwv\" (UID: \"966190a8-7fd8-41d7-9d65-c6161d0460a8\") " pod="kube-system/cilium-4nwwv" Mar 7 01:54:33.830944 kubelet[2883]: I0307 01:54:33.829518 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/966190a8-7fd8-41d7-9d65-c6161d0460a8-hubble-tls\") pod \"cilium-4nwwv\" (UID: \"966190a8-7fd8-41d7-9d65-c6161d0460a8\") " pod="kube-system/cilium-4nwwv" Mar 7 01:54:33.830944 kubelet[2883]: I0307 01:54:33.829552 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxhkv\" (UniqueName: \"kubernetes.io/projected/966190a8-7fd8-41d7-9d65-c6161d0460a8-kube-api-access-dxhkv\") pod \"cilium-4nwwv\" (UID: \"966190a8-7fd8-41d7-9d65-c6161d0460a8\") " pod="kube-system/cilium-4nwwv" Mar 7 01:54:33.831285 kubelet[2883]: I0307 01:54:33.829582 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/966190a8-7fd8-41d7-9d65-c6161d0460a8-hostproc\") pod \"cilium-4nwwv\" (UID: \"966190a8-7fd8-41d7-9d65-c6161d0460a8\") " pod="kube-system/cilium-4nwwv" Mar 7 01:54:33.831285 kubelet[2883]: I0307 01:54:33.829642 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/966190a8-7fd8-41d7-9d65-c6161d0460a8-etc-cni-netd\") pod \"cilium-4nwwv\" (UID: \"966190a8-7fd8-41d7-9d65-c6161d0460a8\") " pod="kube-system/cilium-4nwwv" Mar 7 01:54:33.831285 kubelet[2883]: I0307 01:54:33.829838 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3714da96-de37-4fe0-b3e7-778d1d5a47dc-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-c77kn\" (UID: \"3714da96-de37-4fe0-b3e7-778d1d5a47dc\") " pod="kube-system/cilium-operator-6c4d7847fc-c77kn" Mar 7 01:54:33.831285 kubelet[2883]: I0307 01:54:33.829876 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdd45\" (UniqueName: \"kubernetes.io/projected/3714da96-de37-4fe0-b3e7-778d1d5a47dc-kube-api-access-jdd45\") pod \"cilium-operator-6c4d7847fc-c77kn\" (UID: \"3714da96-de37-4fe0-b3e7-778d1d5a47dc\") " pod="kube-system/cilium-operator-6c4d7847fc-c77kn" Mar 7 01:54:33.831285 kubelet[2883]: I0307 01:54:33.829903 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/966190a8-7fd8-41d7-9d65-c6161d0460a8-cilium-cgroup\") pod \"cilium-4nwwv\" (UID: \"966190a8-7fd8-41d7-9d65-c6161d0460a8\") " pod="kube-system/cilium-4nwwv" Mar 7 01:54:33.831472 kubelet[2883]: I0307 01:54:33.829924 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/966190a8-7fd8-41d7-9d65-c6161d0460a8-cni-path\") pod \"cilium-4nwwv\" (UID: \"966190a8-7fd8-41d7-9d65-c6161d0460a8\") " pod="kube-system/cilium-4nwwv" Mar 7 01:54:33.831472 
kubelet[2883]: I0307 01:54:33.829951 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/966190a8-7fd8-41d7-9d65-c6161d0460a8-xtables-lock\") pod \"cilium-4nwwv\" (UID: \"966190a8-7fd8-41d7-9d65-c6161d0460a8\") " pod="kube-system/cilium-4nwwv" Mar 7 01:54:33.831472 kubelet[2883]: I0307 01:54:33.829978 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/966190a8-7fd8-41d7-9d65-c6161d0460a8-cilium-config-path\") pod \"cilium-4nwwv\" (UID: \"966190a8-7fd8-41d7-9d65-c6161d0460a8\") " pod="kube-system/cilium-4nwwv" Mar 7 01:54:33.831472 kubelet[2883]: I0307 01:54:33.830005 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/966190a8-7fd8-41d7-9d65-c6161d0460a8-host-proc-sys-kernel\") pod \"cilium-4nwwv\" (UID: \"966190a8-7fd8-41d7-9d65-c6161d0460a8\") " pod="kube-system/cilium-4nwwv" Mar 7 01:54:33.831472 kubelet[2883]: I0307 01:54:33.830036 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/966190a8-7fd8-41d7-9d65-c6161d0460a8-bpf-maps\") pod \"cilium-4nwwv\" (UID: \"966190a8-7fd8-41d7-9d65-c6161d0460a8\") " pod="kube-system/cilium-4nwwv" Mar 7 01:54:33.914222 kubelet[2883]: I0307 01:54:33.913804 2883 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-hlw2t" podStartSLOduration=1.913653993 podStartE2EDuration="1.913653993s" podCreationTimestamp="2026-03-07 01:54:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:54:33.865022281 +0000 UTC m=+4.666991950" watchObservedRunningTime="2026-03-07 01:54:33.913653993 +0000 UTC m=+4.715623671" Mar 7 01:54:34.039336 kubelet[2883]: E0307 01:54:34.038035 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:54:34.039470 containerd[1585]: time="2026-03-07T01:54:34.039063385Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-c77kn,Uid:3714da96-de37-4fe0-b3e7-778d1d5a47dc,Namespace:kube-system,Attempt:0,}" Mar 7 01:54:34.121351 kubelet[2883]: E0307 01:54:34.121259 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:54:34.123266 containerd[1585]: time="2026-03-07T01:54:34.123119169Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4nwwv,Uid:966190a8-7fd8-41d7-9d65-c6161d0460a8,Namespace:kube-system,Attempt:0,}" Mar 7 01:54:34.158308 containerd[1585]: time="2026-03-07T01:54:34.154923799Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:54:34.158308 containerd[1585]: time="2026-03-07T01:54:34.155000252Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:54:34.158308 containerd[1585]: time="2026-03-07T01:54:34.155034265Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:54:34.158308 containerd[1585]: time="2026-03-07T01:54:34.155190207Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:54:34.293341 containerd[1585]: time="2026-03-07T01:54:34.289420847Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:54:34.293341 containerd[1585]: time="2026-03-07T01:54:34.289600311Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:54:34.293341 containerd[1585]: time="2026-03-07T01:54:34.289629706Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:54:34.293341 containerd[1585]: time="2026-03-07T01:54:34.289931519Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:54:34.495337 containerd[1585]: time="2026-03-07T01:54:34.495231554Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4nwwv,Uid:966190a8-7fd8-41d7-9d65-c6161d0460a8,Namespace:kube-system,Attempt:0,} returns sandbox id \"09a3eaf75f32cb5e6df4cb545b3829984f6f6e164051014d3706699b650125d9\"" Mar 7 01:54:34.508007 kubelet[2883]: E0307 01:54:34.502113 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:54:34.508869 containerd[1585]: time="2026-03-07T01:54:34.503586290Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Mar 7 01:54:34.552891 containerd[1585]: time="2026-03-07T01:54:34.551267678Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-c77kn,Uid:3714da96-de37-4fe0-b3e7-778d1d5a47dc,Namespace:kube-system,Attempt:0,} returns sandbox id \"c9e22b4d3a9c7fd71f6e4018ed762afc0b85bc02544a9f66e7712397a80c4ccf\"" Mar 7 01:54:34.561424 kubelet[2883]: E0307 01:54:34.561308 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:54:34.888003 kubelet[2883]: E0307 01:54:34.886380 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:54:35.824172 kubelet[2883]: E0307 01:54:35.823640 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:54:38.073213 kubelet[2883]: E0307 01:54:38.072947 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:54:38.836988 kubelet[2883]: E0307 01:54:38.836631 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:54:39.840860 kubelet[2883]: E0307 01:54:39.840644 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:54:40.425042 kubelet[2883]: E0307 01:54:40.421644 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:54:40.848162 kubelet[2883]: E0307 01:54:40.842819 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:54:52.511819 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount946199213.mount: Deactivated successfully. Mar 7 01:55:03.993827 containerd[1585]: time="2026-03-07T01:55:03.993488809Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:55:03.999893 containerd[1585]: time="2026-03-07T01:55:03.998644372Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Mar 7 01:55:04.002607 containerd[1585]: time="2026-03-07T01:55:04.002524295Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:55:04.015772 containerd[1585]: time="2026-03-07T01:55:04.015573108Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 29.507192197s" Mar 7 01:55:04.015772 containerd[1585]: time="2026-03-07T01:55:04.015658247Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Mar 7 01:55:04.020928 containerd[1585]: time="2026-03-07T01:55:04.018617387Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Mar 7 01:55:04.091793 containerd[1585]: time="2026-03-07T01:55:04.091385831Z" level=info msg="CreateContainer within sandbox \"09a3eaf75f32cb5e6df4cb545b3829984f6f6e164051014d3706699b650125d9\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 7 01:55:04.186894 containerd[1585]: time="2026-03-07T01:55:04.186551595Z" level=info msg="CreateContainer within sandbox \"09a3eaf75f32cb5e6df4cb545b3829984f6f6e164051014d3706699b650125d9\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"3ebec3c636fb2d1b2fce8029dfe997995a6bb32f3763a12ac397a39348596df0\"" Mar 7 01:55:04.189330 containerd[1585]: time="2026-03-07T01:55:04.188427805Z" level=info msg="StartContainer for \"3ebec3c636fb2d1b2fce8029dfe997995a6bb32f3763a12ac397a39348596df0\"" Mar 7 01:55:04.513020 containerd[1585]: time="2026-03-07T01:55:04.512907684Z" level=info msg="StartContainer for \"3ebec3c636fb2d1b2fce8029dfe997995a6bb32f3763a12ac397a39348596df0\" returns successfully" Mar 7 01:55:04.988419 containerd[1585]: time="2026-03-07T01:55:04.986315342Z" level=info msg="shim disconnected" 
id=3ebec3c636fb2d1b2fce8029dfe997995a6bb32f3763a12ac397a39348596df0 namespace=k8s.io Mar 7 01:55:04.988419 containerd[1585]: time="2026-03-07T01:55:04.986560381Z" level=warning msg="cleaning up after shim disconnected" id=3ebec3c636fb2d1b2fce8029dfe997995a6bb32f3763a12ac397a39348596df0 namespace=k8s.io Mar 7 01:55:04.988419 containerd[1585]: time="2026-03-07T01:55:04.986579758Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 7 01:55:05.144929 kubelet[2883]: E0307 01:55:05.144642 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:55:05.148198 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3ebec3c636fb2d1b2fce8029dfe997995a6bb32f3763a12ac397a39348596df0-rootfs.mount: Deactivated successfully. Mar 7 01:55:05.174232 containerd[1585]: time="2026-03-07T01:55:05.172168259Z" level=info msg="CreateContainer within sandbox \"09a3eaf75f32cb5e6df4cb545b3829984f6f6e164051014d3706699b650125d9\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 7 01:55:05.288344 containerd[1585]: time="2026-03-07T01:55:05.288154668Z" level=info msg="CreateContainer within sandbox \"09a3eaf75f32cb5e6df4cb545b3829984f6f6e164051014d3706699b650125d9\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"e699e6959d7537ff63c5aa8d55268caaf8f077888dafca35300a5d33ae715e29\"" Mar 7 01:55:05.308172 containerd[1585]: time="2026-03-07T01:55:05.308120567Z" level=info msg="StartContainer for \"e699e6959d7537ff63c5aa8d55268caaf8f077888dafca35300a5d33ae715e29\"" Mar 7 01:55:05.582595 containerd[1585]: time="2026-03-07T01:55:05.582294589Z" level=info msg="StartContainer for \"e699e6959d7537ff63c5aa8d55268caaf8f077888dafca35300a5d33ae715e29\" returns successfully" Mar 7 01:55:05.615132 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 7 01:55:05.615662 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 7 01:55:05.617938 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Mar 7 01:55:05.639953 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 7 01:55:05.726825 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 7 01:55:05.879340 containerd[1585]: time="2026-03-07T01:55:05.877476454Z" level=info msg="shim disconnected" id=e699e6959d7537ff63c5aa8d55268caaf8f077888dafca35300a5d33ae715e29 namespace=k8s.io Mar 7 01:55:05.879340 containerd[1585]: time="2026-03-07T01:55:05.878793694Z" level=warning msg="cleaning up after shim disconnected" id=e699e6959d7537ff63c5aa8d55268caaf8f077888dafca35300a5d33ae715e29 namespace=k8s.io Mar 7 01:55:05.879340 containerd[1585]: time="2026-03-07T01:55:05.878815926Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 7 01:55:06.151012 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e699e6959d7537ff63c5aa8d55268caaf8f077888dafca35300a5d33ae715e29-rootfs.mount: Deactivated successfully. 
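For scale, containerd reported the cilium image above as 166719855 bytes pulled in 29.507192197s. A quick back-of-the-envelope rate check using just those two figures from the log:

```go
// Effective pull rate for the cilium image, using the size and duration
// containerd logged in the "Pulled image" entry above.
package main

import "fmt"

func main() {
	const bytesPulled = 166719855.0 // "size" from the Pulled image entry
	const seconds = 29.507192197    // pull duration from the same entry

	fmt.Printf("effective pull rate: %.2f MB/s\n", bytesPulled/seconds/1e6) // ≈ 5.65 MB/s
}
```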
Mar 7 01:55:06.159339 kubelet[2883]: E0307 01:55:06.157867 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:55:06.178765 containerd[1585]: time="2026-03-07T01:55:06.176212510Z" level=info msg="CreateContainer within sandbox \"09a3eaf75f32cb5e6df4cb545b3829984f6f6e164051014d3706699b650125d9\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 7 01:55:06.269896 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2083057332.mount: Deactivated successfully. Mar 7 01:55:06.296357 containerd[1585]: time="2026-03-07T01:55:06.296253777Z" level=info msg="CreateContainer within sandbox \"09a3eaf75f32cb5e6df4cb545b3829984f6f6e164051014d3706699b650125d9\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"56ff113b9f69096b4bfb1e021c664f48d228bb561f8b05348a4abe499c21932d\"" Mar 7 01:55:06.298036 containerd[1585]: time="2026-03-07T01:55:06.297196748Z" level=info msg="StartContainer for \"56ff113b9f69096b4bfb1e021c664f48d228bb561f8b05348a4abe499c21932d\"" Mar 7 01:55:06.469375 containerd[1585]: time="2026-03-07T01:55:06.469179042Z" level=info msg="StartContainer for \"56ff113b9f69096b4bfb1e021c664f48d228bb561f8b05348a4abe499c21932d\" returns successfully" Mar 7 01:55:06.618878 containerd[1585]: time="2026-03-07T01:55:06.618644837Z" level=info msg="shim disconnected" id=56ff113b9f69096b4bfb1e021c664f48d228bb561f8b05348a4abe499c21932d namespace=k8s.io Mar 7 01:55:06.618878 containerd[1585]: time="2026-03-07T01:55:06.618777376Z" level=warning msg="cleaning up after shim disconnected" id=56ff113b9f69096b4bfb1e021c664f48d228bb561f8b05348a4abe499c21932d namespace=k8s.io Mar 7 01:55:06.618878 containerd[1585]: time="2026-03-07T01:55:06.618787715Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 7 01:55:07.152042 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-56ff113b9f69096b4bfb1e021c664f48d228bb561f8b05348a4abe499c21932d-rootfs.mount: Deactivated successfully. Mar 7 01:55:07.183194 kubelet[2883]: E0307 01:55:07.181853 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:55:07.252232 containerd[1585]: time="2026-03-07T01:55:07.250267580Z" level=info msg="CreateContainer within sandbox \"09a3eaf75f32cb5e6df4cb545b3829984f6f6e164051014d3706699b650125d9\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Mar 7 01:55:07.324256 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1014242935.mount: Deactivated successfully. 
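The mount-bpf-fs container created above is the step that makes the BPF filesystem available under /sys/fs/bpf before the agent loads its programs. A minimal sketch of that mount, assuming the standard bpffs mount point and filesystem magic; this illustrates the technique, it is not Cilium's init-container code:

```go
// Ensure a BPF filesystem is mounted at /sys/fs/bpf, roughly what a
// "mount-bpf-fs" style init step does (equivalent to: mount -t bpf bpffs /sys/fs/bpf).
package main

import (
	"fmt"
	"os"

	"golang.org/x/sys/unix"
)

const (
	bpfMountPoint = "/sys/fs/bpf"
	bpfFSMagic    = 0xcafe4a11 // BPF_FS_MAGIC from the kernel headers
)

func ensureBPFFS() error {
	var st unix.Statfs_t
	if err := unix.Statfs(bpfMountPoint, &st); err == nil && st.Type == bpfFSMagic {
		return nil // already mounted, nothing to do
	}
	if err := os.MkdirAll(bpfMountPoint, 0o755); err != nil {
		return err
	}
	return unix.Mount("bpffs", bpfMountPoint, "bpf", 0, "")
}

func main() {
	if err := ensureBPFFS(); err != nil {
		fmt.Fprintln(os.Stderr, "mounting bpffs:", err)
		os.Exit(1)
	}
	fmt.Println("bpffs mounted at", bpfMountPoint)
}
```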
Mar 7 01:55:07.368855 containerd[1585]: time="2026-03-07T01:55:07.368128236Z" level=info msg="CreateContainer within sandbox \"09a3eaf75f32cb5e6df4cb545b3829984f6f6e164051014d3706699b650125d9\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"f49c04df042890779dfa838a1887fea202149f4c2348e48fb17f733ff2fb964c\"" Mar 7 01:55:07.371566 containerd[1585]: time="2026-03-07T01:55:07.371418597Z" level=info msg="StartContainer for \"f49c04df042890779dfa838a1887fea202149f4c2348e48fb17f733ff2fb964c\"" Mar 7 01:55:07.606298 containerd[1585]: time="2026-03-07T01:55:07.603585528Z" level=info msg="StartContainer for \"f49c04df042890779dfa838a1887fea202149f4c2348e48fb17f733ff2fb964c\" returns successfully" Mar 7 01:55:07.775034 containerd[1585]: time="2026-03-07T01:55:07.774580431Z" level=info msg="shim disconnected" id=f49c04df042890779dfa838a1887fea202149f4c2348e48fb17f733ff2fb964c namespace=k8s.io Mar 7 01:55:07.775034 containerd[1585]: time="2026-03-07T01:55:07.774652917Z" level=warning msg="cleaning up after shim disconnected" id=f49c04df042890779dfa838a1887fea202149f4c2348e48fb17f733ff2fb964c namespace=k8s.io Mar 7 01:55:07.775034 containerd[1585]: time="2026-03-07T01:55:07.774667644Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 7 01:55:08.150618 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f49c04df042890779dfa838a1887fea202149f4c2348e48fb17f733ff2fb964c-rootfs.mount: Deactivated successfully. Mar 7 01:55:08.197837 kubelet[2883]: E0307 01:55:08.197659 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:55:08.246436 containerd[1585]: time="2026-03-07T01:55:08.246248526Z" level=info msg="CreateContainer within sandbox \"09a3eaf75f32cb5e6df4cb545b3829984f6f6e164051014d3706699b650125d9\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Mar 7 01:55:08.364968 containerd[1585]: time="2026-03-07T01:55:08.364195502Z" level=info msg="CreateContainer within sandbox \"09a3eaf75f32cb5e6df4cb545b3829984f6f6e164051014d3706699b650125d9\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"87e55ededdfbc98c59c3fd22185bbde1a7f8a07b3e7969c3b3f15beb25254ae0\"" Mar 7 01:55:08.369243 containerd[1585]: time="2026-03-07T01:55:08.367633609Z" level=info msg="StartContainer for \"87e55ededdfbc98c59c3fd22185bbde1a7f8a07b3e7969c3b3f15beb25254ae0\"" Mar 7 01:55:08.880001 containerd[1585]: time="2026-03-07T01:55:08.878979817Z" level=info msg="StartContainer for \"87e55ededdfbc98c59c3fd22185bbde1a7f8a07b3e7969c3b3f15beb25254ae0\" returns successfully" Mar 7 01:55:09.811856 containerd[1585]: time="2026-03-07T01:55:09.810210613Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:55:09.827080 containerd[1585]: time="2026-03-07T01:55:09.826483068Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Mar 7 01:55:09.834339 containerd[1585]: time="2026-03-07T01:55:09.831879016Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:55:09.834339 containerd[1585]: 
time="2026-03-07T01:55:09.834040469Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 5.815373368s" Mar 7 01:55:09.847558 containerd[1585]: time="2026-03-07T01:55:09.839566621Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Mar 7 01:55:09.852310 kubelet[2883]: I0307 01:55:09.849567 2883 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Mar 7 01:55:09.883563 containerd[1585]: time="2026-03-07T01:55:09.883395318Z" level=info msg="CreateContainer within sandbox \"c9e22b4d3a9c7fd71f6e4018ed762afc0b85bc02544a9f66e7712397a80c4ccf\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Mar 7 01:55:09.993845 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1800825940.mount: Deactivated successfully. Mar 7 01:55:10.046009 containerd[1585]: time="2026-03-07T01:55:10.043541237Z" level=info msg="CreateContainer within sandbox \"c9e22b4d3a9c7fd71f6e4018ed762afc0b85bc02544a9f66e7712397a80c4ccf\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"240abe1e2deb5e0c1fa530667956684a747f41e030aff97e062b4c151bd8ce26\"" Mar 7 01:55:10.054637 containerd[1585]: time="2026-03-07T01:55:10.050048173Z" level=info msg="StartContainer for \"240abe1e2deb5e0c1fa530667956684a747f41e030aff97e062b4c151bd8ce26\"" Mar 7 01:55:10.150834 kubelet[2883]: I0307 01:55:10.150608 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/396fcd0f-2a4b-4793-acf5-c418c8985524-config-volume\") pod \"coredns-674b8bbfcf-dbt47\" (UID: \"396fcd0f-2a4b-4793-acf5-c418c8985524\") " pod="kube-system/coredns-674b8bbfcf-dbt47" Mar 7 01:55:10.160920 kubelet[2883]: I0307 01:55:10.158440 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tckh5\" (UniqueName: \"kubernetes.io/projected/396fcd0f-2a4b-4793-acf5-c418c8985524-kube-api-access-tckh5\") pod \"coredns-674b8bbfcf-dbt47\" (UID: \"396fcd0f-2a4b-4793-acf5-c418c8985524\") " pod="kube-system/coredns-674b8bbfcf-dbt47" Mar 7 01:55:10.269526 kubelet[2883]: I0307 01:55:10.269403 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bs7hg\" (UniqueName: \"kubernetes.io/projected/5950be0a-cb3f-497c-b8b7-7bbc768a3c21-kube-api-access-bs7hg\") pod \"coredns-674b8bbfcf-g9xnq\" (UID: \"5950be0a-cb3f-497c-b8b7-7bbc768a3c21\") " pod="kube-system/coredns-674b8bbfcf-g9xnq" Mar 7 01:55:10.269928 kubelet[2883]: I0307 01:55:10.269543 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5950be0a-cb3f-497c-b8b7-7bbc768a3c21-config-volume\") pod \"coredns-674b8bbfcf-g9xnq\" (UID: \"5950be0a-cb3f-497c-b8b7-7bbc768a3c21\") " pod="kube-system/coredns-674b8bbfcf-g9xnq" Mar 7 01:55:10.447338 kubelet[2883]: E0307 01:55:10.446870 2883 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:55:10.475359 containerd[1585]: time="2026-03-07T01:55:10.469021191Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-dbt47,Uid:396fcd0f-2a4b-4793-acf5-c418c8985524,Namespace:kube-system,Attempt:0,}" Mar 7 01:55:10.475612 kubelet[2883]: E0307 01:55:10.473352 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:55:10.592540 kubelet[2883]: E0307 01:55:10.592496 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:55:10.602614 containerd[1585]: time="2026-03-07T01:55:10.598943021Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-g9xnq,Uid:5950be0a-cb3f-497c-b8b7-7bbc768a3c21,Namespace:kube-system,Attempt:0,}" Mar 7 01:55:10.966318 containerd[1585]: time="2026-03-07T01:55:10.951951806Z" level=info msg="StartContainer for \"240abe1e2deb5e0c1fa530667956684a747f41e030aff97e062b4c151bd8ce26\" returns successfully" Mar 7 01:55:11.532881 systemd[1]: run-containerd-runc-k8s.io-240abe1e2deb5e0c1fa530667956684a747f41e030aff97e062b4c151bd8ce26-runc.Sea8S0.mount: Deactivated successfully. Mar 7 01:55:19.481538 kubelet[2883]: E0307 01:55:19.481496 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:55:19.483066 kubelet[2883]: E0307 01:55:19.482149 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:55:19.563469 kubelet[2883]: I0307 01:55:19.563402 2883 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-4nwwv" podStartSLOduration=17.048732714 podStartE2EDuration="46.563378423s" podCreationTimestamp="2026-03-07 01:54:33 +0000 UTC" firstStartedPulling="2026-03-07 01:54:34.503158131 +0000 UTC m=+5.305127780" lastFinishedPulling="2026-03-07 01:55:04.01780384 +0000 UTC m=+34.819773489" observedRunningTime="2026-03-07 01:55:10.601269794 +0000 UTC m=+41.403239474" watchObservedRunningTime="2026-03-07 01:55:19.563378423 +0000 UTC m=+50.365348072" Mar 7 01:55:20.484616 kubelet[2883]: E0307 01:55:20.484403 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:55:22.474404 kubelet[2883]: E0307 01:55:22.473594 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:55:24.275821 systemd-networkd[1239]: cilium_host: Link UP Mar 7 01:55:24.292539 systemd-networkd[1239]: cilium_net: Link UP Mar 7 01:55:24.304183 systemd-networkd[1239]: cilium_net: Gained carrier Mar 7 01:55:24.306384 systemd-networkd[1239]: cilium_host: Gained carrier Mar 7 01:55:24.306657 systemd-networkd[1239]: cilium_net: Gained IPv6LL Mar 7 01:55:24.311289 systemd-networkd[1239]: cilium_host: Gained IPv6LL Mar 7 01:55:25.023923 systemd-networkd[1239]: cilium_vxlan: Link UP Mar 7 01:55:25.024117 systemd-networkd[1239]: cilium_vxlan: 
Gained carrier Mar 7 01:55:26.257905 kernel: NET: Registered PF_ALG protocol family Mar 7 01:55:26.491650 systemd-networkd[1239]: cilium_vxlan: Gained IPv6LL Mar 7 01:55:30.324838 systemd-networkd[1239]: lxc_health: Link UP Mar 7 01:55:30.341921 systemd-networkd[1239]: lxc_health: Gained carrier Mar 7 01:55:30.955007 systemd[1]: run-containerd-runc-k8s.io-87e55ededdfbc98c59c3fd22185bbde1a7f8a07b3e7969c3b3f15beb25254ae0-runc.s6BCbb.mount: Deactivated successfully. Mar 7 01:55:31.081486 systemd-networkd[1239]: lxce270bd3cb3c6: Link UP Mar 7 01:55:31.157791 kernel: eth0: renamed from tmp7be8e Mar 7 01:55:31.196980 systemd-networkd[1239]: lxc8f576909a78e: Link UP Mar 7 01:55:31.240873 kernel: eth0: renamed from tmp84e68 Mar 7 01:55:31.268974 systemd-networkd[1239]: lxce270bd3cb3c6: Gained carrier Mar 7 01:55:31.269527 systemd-networkd[1239]: lxc8f576909a78e: Gained carrier Mar 7 01:55:31.672475 systemd-networkd[1239]: lxc_health: Gained IPv6LL Mar 7 01:55:34.819780 systemd-networkd[1239]: lxc8f576909a78e: Gained IPv6LL Mar 7 01:55:41.722908 systemd-networkd[1239]: lxce270bd3cb3c6: Gained IPv6LL Mar 7 01:55:53.907107 kubelet[2883]: E0307 01:55:53.898040 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:55:53.977168 kubelet[2883]: E0307 01:55:53.962814 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:55:54.262589 kubelet[2883]: E0307 01:55:54.259949 2883 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.873s" Mar 7 01:55:54.363956 kubelet[2883]: E0307 01:55:54.363053 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:55:54.747663 kubelet[2883]: I0307 01:55:54.747513 2883 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-c77kn" podStartSLOduration=46.454336658 podStartE2EDuration="1m21.74744417s" podCreationTimestamp="2026-03-07 01:54:33 +0000 UTC" firstStartedPulling="2026-03-07 01:54:34.568268446 +0000 UTC m=+5.370238095" lastFinishedPulling="2026-03-07 01:55:09.861375958 +0000 UTC m=+40.663345607" observedRunningTime="2026-03-07 01:55:19.564660265 +0000 UTC m=+50.366629914" watchObservedRunningTime="2026-03-07 01:55:54.74744417 +0000 UTC m=+85.549413829" Mar 7 01:55:55.058349 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ce97c6cd3fd65af31c81b3527c240ba11d53523247745960ea45ea4f3e0b07c8-rootfs.mount: Deactivated successfully. Mar 7 01:55:55.086327 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-39336094ec7be83d009b1d890613e8836779d2d88a97e6a97f4fd21061cd7cd0-rootfs.mount: Deactivated successfully. 
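The cilium-operator startup entry above reports podStartE2EDuration=1m21.74744417s but podStartSLOduration=46.454336658s; the difference is exactly the image-pull window between firstStartedPulling and lastFinishedPulling. A worked check using the timestamps copied from that entry (an interpretation of the tracker's numbers, not kubelet code):

```go
// Recompute the cilium-operator startup durations from the timestamps in the
// pod_startup_latency_tracker entry: SLO duration = end-to-end duration minus
// the time spent pulling images.
package main

import (
	"fmt"
	"time"
)

const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

func mustParse(s string) time.Time {
	t, err := time.Parse(layout, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2026-03-07 01:54:33 +0000 UTC")
	firstPull := mustParse("2026-03-07 01:54:34.568268446 +0000 UTC")
	lastPull := mustParse("2026-03-07 01:55:09.861375958 +0000 UTC")
	running := mustParse("2026-03-07 01:55:54.74744417 +0000 UTC")

	e2e := running.Sub(created)     // podStartE2EDuration ≈ 1m21.747s
	pull := lastPull.Sub(firstPull) // image-pull window ≈ 35.293s
	slo := e2e - pull               // podStartSLOduration ≈ 46.454s

	fmt.Printf("e2e=%v pull=%v slo=%v\n", e2e, pull, slo)
}
```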
Mar 7 01:55:55.177361 containerd[1585]: time="2026-03-07T01:55:55.173315581Z" level=info msg="shim disconnected" id=39336094ec7be83d009b1d890613e8836779d2d88a97e6a97f4fd21061cd7cd0 namespace=k8s.io Mar 7 01:55:55.184794 containerd[1585]: time="2026-03-07T01:55:55.184645448Z" level=info msg="shim disconnected" id=ce97c6cd3fd65af31c81b3527c240ba11d53523247745960ea45ea4f3e0b07c8 namespace=k8s.io Mar 7 01:55:55.185036 containerd[1585]: time="2026-03-07T01:55:55.185006434Z" level=warning msg="cleaning up after shim disconnected" id=ce97c6cd3fd65af31c81b3527c240ba11d53523247745960ea45ea4f3e0b07c8 namespace=k8s.io Mar 7 01:55:55.185234 containerd[1585]: time="2026-03-07T01:55:55.185212040Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 7 01:55:55.192842 containerd[1585]: time="2026-03-07T01:55:55.192782042Z" level=warning msg="cleaning up after shim disconnected" id=39336094ec7be83d009b1d890613e8836779d2d88a97e6a97f4fd21061cd7cd0 namespace=k8s.io Mar 7 01:55:55.193194 containerd[1585]: time="2026-03-07T01:55:55.193078216Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 7 01:55:55.279949 kubelet[2883]: E0307 01:55:55.267917 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:55:55.279949 kubelet[2883]: E0307 01:55:55.277332 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:55:55.443020 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-240abe1e2deb5e0c1fa530667956684a747f41e030aff97e062b4c151bd8ce26-rootfs.mount: Deactivated successfully. Mar 7 01:55:55.488033 containerd[1585]: time="2026-03-07T01:55:55.487936069Z" level=info msg="shim disconnected" id=240abe1e2deb5e0c1fa530667956684a747f41e030aff97e062b4c151bd8ce26 namespace=k8s.io Mar 7 01:55:55.491903 containerd[1585]: time="2026-03-07T01:55:55.488314528Z" level=warning msg="cleaning up after shim disconnected" id=240abe1e2deb5e0c1fa530667956684a747f41e030aff97e062b4c151bd8ce26 namespace=k8s.io Mar 7 01:55:55.491903 containerd[1585]: time="2026-03-07T01:55:55.488350575Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 7 01:55:56.126955 containerd[1585]: time="2026-03-07T01:55:56.126896511Z" level=warning msg="cleanup warnings time=\"2026-03-07T01:55:56Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Mar 7 01:55:56.159018 containerd[1585]: time="2026-03-07T01:55:56.155870868Z" level=warning msg="cleanup warnings time=\"2026-03-07T01:55:56Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Mar 7 01:55:56.299562 kubelet[2883]: I0307 01:55:56.298597 2883 scope.go:117] "RemoveContainer" containerID="ce97c6cd3fd65af31c81b3527c240ba11d53523247745960ea45ea4f3e0b07c8" Mar 7 01:55:56.299562 kubelet[2883]: E0307 01:55:56.298854 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:55:56.327819 containerd[1585]: time="2026-03-07T01:55:56.326284184Z" level=info msg="CreateContainer within sandbox \"a2e61fcb84316b31158fbc785302c732857b8cad6212f17a01f29a96acb994c6\" for 
container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Mar 7 01:55:56.346276 kubelet[2883]: I0307 01:55:56.333655 2883 scope.go:117] "RemoveContainer" containerID="39336094ec7be83d009b1d890613e8836779d2d88a97e6a97f4fd21061cd7cd0" Mar 7 01:55:56.346276 kubelet[2883]: E0307 01:55:56.333885 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:55:56.380078 containerd[1585]: time="2026-03-07T01:55:56.376159722Z" level=info msg="CreateContainer within sandbox \"edcb12c7653526c02baefc8929808e947b8be12821abe31542633436c407834b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Mar 7 01:55:56.532094 kubelet[2883]: I0307 01:55:56.532050 2883 scope.go:117] "RemoveContainer" containerID="240abe1e2deb5e0c1fa530667956684a747f41e030aff97e062b4c151bd8ce26" Mar 7 01:55:56.545123 kubelet[2883]: E0307 01:55:56.545077 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:55:56.604173 containerd[1585]: time="2026-03-07T01:55:56.604113679Z" level=info msg="CreateContainer within sandbox \"c9e22b4d3a9c7fd71f6e4018ed762afc0b85bc02544a9f66e7712397a80c4ccf\" for container &ContainerMetadata{Name:cilium-operator,Attempt:1,}" Mar 7 01:55:56.699997 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2514115884.mount: Deactivated successfully. Mar 7 01:55:56.737476 containerd[1585]: time="2026-03-07T01:55:56.734977300Z" level=info msg="CreateContainer within sandbox \"a2e61fcb84316b31158fbc785302c732857b8cad6212f17a01f29a96acb994c6\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"2a564b271076b1b5dbaab8c97b740a1fbf1f0fc54a79813841c8f1714599a0e0\"" Mar 7 01:55:56.738857 containerd[1585]: time="2026-03-07T01:55:56.738519946Z" level=info msg="StartContainer for \"2a564b271076b1b5dbaab8c97b740a1fbf1f0fc54a79813841c8f1714599a0e0\"" Mar 7 01:55:56.828228 containerd[1585]: time="2026-03-07T01:55:56.827959644Z" level=info msg="CreateContainer within sandbox \"edcb12c7653526c02baefc8929808e947b8be12821abe31542633436c407834b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"8bef3eba01bafe9b17215e2715de136be72d40e977dd93b605593e391de87f89\"" Mar 7 01:55:56.845806 containerd[1585]: time="2026-03-07T01:55:56.843112685Z" level=info msg="StartContainer for \"8bef3eba01bafe9b17215e2715de136be72d40e977dd93b605593e391de87f89\"" Mar 7 01:55:56.940281 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4121400473.mount: Deactivated successfully. 
Mar 7 01:55:56.991804 containerd[1585]: time="2026-03-07T01:55:56.989623873Z" level=info msg="CreateContainer within sandbox \"c9e22b4d3a9c7fd71f6e4018ed762afc0b85bc02544a9f66e7712397a80c4ccf\" for &ContainerMetadata{Name:cilium-operator,Attempt:1,} returns container id \"39178bfbf7bb591099d0a489d4371f958ba273d5763612d68ee03df69e3468fd\"" Mar 7 01:55:57.030150 containerd[1585]: time="2026-03-07T01:55:57.030096264Z" level=info msg="StartContainer for \"39178bfbf7bb591099d0a489d4371f958ba273d5763612d68ee03df69e3468fd\"" Mar 7 01:55:57.733467 containerd[1585]: time="2026-03-07T01:55:57.731270717Z" level=info msg="StartContainer for \"2a564b271076b1b5dbaab8c97b740a1fbf1f0fc54a79813841c8f1714599a0e0\" returns successfully" Mar 7 01:55:57.832892 containerd[1585]: time="2026-03-07T01:55:57.831293028Z" level=info msg="StartContainer for \"8bef3eba01bafe9b17215e2715de136be72d40e977dd93b605593e391de87f89\" returns successfully" Mar 7 01:55:58.075897 containerd[1585]: time="2026-03-07T01:55:58.075539863Z" level=info msg="StartContainer for \"39178bfbf7bb591099d0a489d4371f958ba273d5763612d68ee03df69e3468fd\" returns successfully" Mar 7 01:55:58.779846 kubelet[2883]: E0307 01:55:58.778029 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:55:58.898825 kubelet[2883]: E0307 01:55:58.879338 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:55:58.993569 kubelet[2883]: E0307 01:55:58.991295 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:56:00.021672 kubelet[2883]: E0307 01:56:00.021237 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:56:00.058609 kubelet[2883]: E0307 01:56:00.050197 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:56:01.079169 kubelet[2883]: E0307 01:56:01.079130 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:56:02.107866 kubelet[2883]: E0307 01:56:02.107277 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:56:02.683061 kubelet[2883]: E0307 01:56:02.681370 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:56:04.907230 kubelet[2883]: E0307 01:56:04.901927 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:56:06.970552 systemd[1]: run-containerd-runc-k8s.io-87e55ededdfbc98c59c3fd22185bbde1a7f8a07b3e7969c3b3f15beb25254ae0-runc.AbKHWR.mount: Deactivated successfully. 
Mar 7 01:56:10.037953 systemd[1]: run-containerd-runc-k8s.io-87e55ededdfbc98c59c3fd22185bbde1a7f8a07b3e7969c3b3f15beb25254ae0-runc.s4h5BI.mount: Deactivated successfully. Mar 7 01:56:10.497886 kubelet[2883]: E0307 01:56:10.491105 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:56:11.353018 kubelet[2883]: E0307 01:56:11.352189 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:56:13.209593 systemd[1]: run-containerd-runc-k8s.io-87e55ededdfbc98c59c3fd22185bbde1a7f8a07b3e7969c3b3f15beb25254ae0-runc.CsXZXn.mount: Deactivated successfully. Mar 7 01:56:14.927333 kubelet[2883]: E0307 01:56:14.920289 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:56:20.129095 containerd[1585]: time="2026-03-07T01:56:20.124147129Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:56:20.129095 containerd[1585]: time="2026-03-07T01:56:20.124415490Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:56:20.129095 containerd[1585]: time="2026-03-07T01:56:20.124440638Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:56:20.132628 containerd[1585]: time="2026-03-07T01:56:20.127375164Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:56:20.531894 systemd[1]: run-containerd-runc-k8s.io-7be8e752de8f0efe97edc3f4d020b3633cb3e8ab85b2d64e1a7c03b856373efa-runc.pY80Sq.mount: Deactivated successfully. Mar 7 01:56:20.801142 systemd-resolved[1469]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 7 01:56:21.468922 containerd[1585]: time="2026-03-07T01:56:21.465819542Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-g9xnq,Uid:5950be0a-cb3f-497c-b8b7-7bbc768a3c21,Namespace:kube-system,Attempt:0,} returns sandbox id \"7be8e752de8f0efe97edc3f4d020b3633cb3e8ab85b2d64e1a7c03b856373efa\"" Mar 7 01:56:21.516308 kubelet[2883]: E0307 01:56:21.492809 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:56:21.742338 containerd[1585]: time="2026-03-07T01:56:21.712159617Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:56:21.742338 containerd[1585]: time="2026-03-07T01:56:21.712241681Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:56:21.827905 containerd[1585]: time="2026-03-07T01:56:21.799845855Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:56:21.827905 containerd[1585]: time="2026-03-07T01:56:21.800216608Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:56:21.853882 containerd[1585]: time="2026-03-07T01:56:21.739660575Z" level=info msg="CreateContainer within sandbox \"7be8e752de8f0efe97edc3f4d020b3633cb3e8ab85b2d64e1a7c03b856373efa\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 7 01:56:49.467351 kubelet[2883]: E0307 01:56:49.452125 2883 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down" Mar 7 01:56:49.554001 kubelet[2883]: E0307 01:56:49.553953 2883 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down" Mar 7 01:56:49.687748 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3159026678.mount: Deactivated successfully. Mar 7 01:56:49.726807 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2173696386.mount: Deactivated successfully. Mar 7 01:56:49.793893 kubelet[2883]: E0307 01:56:49.793846 2883 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down" Mar 7 01:56:49.860963 containerd[1585]: time="2026-03-07T01:56:49.860764624Z" level=info msg="CreateContainer within sandbox \"7be8e752de8f0efe97edc3f4d020b3633cb3e8ab85b2d64e1a7c03b856373efa\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9786e4d559a1c62eea108262f172fe34f857d1fd3b2510fe5a5d0313d3beeae9\"" Mar 7 01:56:49.914029 containerd[1585]: time="2026-03-07T01:56:49.913975898Z" level=info msg="StartContainer for \"9786e4d559a1c62eea108262f172fe34f857d1fd3b2510fe5a5d0313d3beeae9\"" Mar 7 01:56:50.289194 systemd-resolved[1469]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 7 01:56:50.529097 kubelet[2883]: E0307 01:56:50.518254 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:56:50.663648 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2a564b271076b1b5dbaab8c97b740a1fbf1f0fc54a79813841c8f1714599a0e0-rootfs.mount: Deactivated successfully. Mar 7 01:56:50.686653 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8bef3eba01bafe9b17215e2715de136be72d40e977dd93b605593e391de87f89-rootfs.mount: Deactivated successfully. 
Mar 7 01:56:50.715307 containerd[1585]: time="2026-03-07T01:56:50.714201329Z" level=info msg="shim disconnected" id=8bef3eba01bafe9b17215e2715de136be72d40e977dd93b605593e391de87f89 namespace=k8s.io Mar 7 01:56:50.715307 containerd[1585]: time="2026-03-07T01:56:50.714376185Z" level=warning msg="cleaning up after shim disconnected" id=8bef3eba01bafe9b17215e2715de136be72d40e977dd93b605593e391de87f89 namespace=k8s.io Mar 7 01:56:50.715307 containerd[1585]: time="2026-03-07T01:56:50.714396022Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 7 01:56:50.724310 containerd[1585]: time="2026-03-07T01:56:50.724073142Z" level=info msg="shim disconnected" id=2a564b271076b1b5dbaab8c97b740a1fbf1f0fc54a79813841c8f1714599a0e0 namespace=k8s.io Mar 7 01:56:50.724310 containerd[1585]: time="2026-03-07T01:56:50.724187706Z" level=warning msg="cleaning up after shim disconnected" id=2a564b271076b1b5dbaab8c97b740a1fbf1f0fc54a79813841c8f1714599a0e0 namespace=k8s.io Mar 7 01:56:50.724310 containerd[1585]: time="2026-03-07T01:56:50.724203716Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 7 01:56:50.757832 containerd[1585]: time="2026-03-07T01:56:50.755162564Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-dbt47,Uid:396fcd0f-2a4b-4793-acf5-c418c8985524,Namespace:kube-system,Attempt:0,} returns sandbox id \"84e68e148e65709fa2a0a4bc4ad4279aec845e4b549fcf620787404dd22fde26\"" Mar 7 01:56:50.761991 systemd[1]: run-containerd-runc-k8s.io-87e55ededdfbc98c59c3fd22185bbde1a7f8a07b3e7969c3b3f15beb25254ae0-runc.t7VH0T.mount: Deactivated successfully. Mar 7 01:56:50.768640 kubelet[2883]: E0307 01:56:50.768603 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:56:50.807111 containerd[1585]: time="2026-03-07T01:56:50.806958508Z" level=info msg="CreateContainer within sandbox \"84e68e148e65709fa2a0a4bc4ad4279aec845e4b549fcf620787404dd22fde26\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 7 01:56:50.897273 containerd[1585]: time="2026-03-07T01:56:50.897206549Z" level=info msg="StartContainer for \"9786e4d559a1c62eea108262f172fe34f857d1fd3b2510fe5a5d0313d3beeae9\" returns successfully" Mar 7 01:56:50.989670 containerd[1585]: time="2026-03-07T01:56:50.986999240Z" level=info msg="CreateContainer within sandbox \"84e68e148e65709fa2a0a4bc4ad4279aec845e4b549fcf620787404dd22fde26\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"786c1f9a99f4142c80dd6c563e04c8440bf8dcc46b209689e1592fe81d37dd0a\"" Mar 7 01:56:50.992795 containerd[1585]: time="2026-03-07T01:56:50.991233895Z" level=info msg="StartContainer for \"786c1f9a99f4142c80dd6c563e04c8440bf8dcc46b209689e1592fe81d37dd0a\"" Mar 7 01:56:51.211869 kubelet[2883]: E0307 01:56:51.203298 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:56:51.337232 kubelet[2883]: I0307 01:56:51.336385 2883 scope.go:117] "RemoveContainer" containerID="ce97c6cd3fd65af31c81b3527c240ba11d53523247745960ea45ea4f3e0b07c8" Mar 7 01:56:51.339377 kubelet[2883]: I0307 01:56:51.339344 2883 scope.go:117] "RemoveContainer" containerID="2a564b271076b1b5dbaab8c97b740a1fbf1f0fc54a79813841c8f1714599a0e0" Mar 7 01:56:51.345765 kubelet[2883]: E0307 01:56:51.342396 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers 
have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:56:51.345765 kubelet[2883]: E0307 01:56:51.342870 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(8747e1f8a49a618fbc1324a8fe2d3754)\"" pod="kube-system/kube-controller-manager-localhost" podUID="8747e1f8a49a618fbc1324a8fe2d3754" Mar 7 01:56:51.364480 containerd[1585]: time="2026-03-07T01:56:51.364321411Z" level=info msg="RemoveContainer for \"ce97c6cd3fd65af31c81b3527c240ba11d53523247745960ea45ea4f3e0b07c8\"" Mar 7 01:56:51.414963 kubelet[2883]: I0307 01:56:51.414071 2883 scope.go:117] "RemoveContainer" containerID="8bef3eba01bafe9b17215e2715de136be72d40e977dd93b605593e391de87f89" Mar 7 01:56:51.414963 kubelet[2883]: E0307 01:56:51.414322 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:56:51.419109 kubelet[2883]: E0307 01:56:51.419047 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-scheduler pod=kube-scheduler-localhost_kube-system(e944e4cb17af904786c3a2e01e298498)\"" pod="kube-system/kube-scheduler-localhost" podUID="e944e4cb17af904786c3a2e01e298498" Mar 7 01:56:51.434831 containerd[1585]: time="2026-03-07T01:56:51.434616000Z" level=info msg="StartContainer for \"786c1f9a99f4142c80dd6c563e04c8440bf8dcc46b209689e1592fe81d37dd0a\" returns successfully" Mar 7 01:56:51.458268 kubelet[2883]: I0307 01:56:51.454509 2883 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-g9xnq" podStartSLOduration=139.454438522 podStartE2EDuration="2m19.454438522s" podCreationTimestamp="2026-03-07 01:54:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:56:51.373033435 +0000 UTC m=+142.175003084" watchObservedRunningTime="2026-03-07 01:56:51.454438522 +0000 UTC m=+142.256408172" Mar 7 01:56:51.545959 containerd[1585]: time="2026-03-07T01:56:51.545909696Z" level=info msg="RemoveContainer for \"ce97c6cd3fd65af31c81b3527c240ba11d53523247745960ea45ea4f3e0b07c8\" returns successfully" Mar 7 01:56:51.551361 kubelet[2883]: I0307 01:56:51.551318 2883 scope.go:117] "RemoveContainer" containerID="39336094ec7be83d009b1d890613e8836779d2d88a97e6a97f4fd21061cd7cd0" Mar 7 01:56:51.572124 containerd[1585]: time="2026-03-07T01:56:51.572077337Z" level=info msg="RemoveContainer for \"39336094ec7be83d009b1d890613e8836779d2d88a97e6a97f4fd21061cd7cd0\"" Mar 7 01:56:51.593876 containerd[1585]: time="2026-03-07T01:56:51.592605669Z" level=info msg="RemoveContainer for \"39336094ec7be83d009b1d890613e8836779d2d88a97e6a97f4fd21061cd7cd0\" returns successfully" Mar 7 01:56:52.422110 kubelet[2883]: E0307 01:56:52.421458 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:56:52.472778 kubelet[2883]: E0307 01:56:52.472001 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 
7 01:56:52.526299 kubelet[2883]: I0307 01:56:52.516255 2883 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-dbt47" podStartSLOduration=140.51623365 podStartE2EDuration="2m20.51623365s" podCreationTimestamp="2026-03-07 01:54:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:56:52.515013769 +0000 UTC m=+143.316983448" watchObservedRunningTime="2026-03-07 01:56:52.51623365 +0000 UTC m=+143.318203300" Mar 7 01:56:52.585425 kubelet[2883]: I0307 01:56:52.584032 2883 scope.go:117] "RemoveContainer" containerID="2a564b271076b1b5dbaab8c97b740a1fbf1f0fc54a79813841c8f1714599a0e0" Mar 7 01:56:52.585425 kubelet[2883]: E0307 01:56:52.584198 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:56:52.585425 kubelet[2883]: E0307 01:56:52.584327 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(8747e1f8a49a618fbc1324a8fe2d3754)\"" pod="kube-system/kube-controller-manager-localhost" podUID="8747e1f8a49a618fbc1324a8fe2d3754" Mar 7 01:56:53.472913 kubelet[2883]: E0307 01:56:53.472672 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:56:53.474195 kubelet[2883]: E0307 01:56:53.474151 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:56:55.573568 kubelet[2883]: E0307 01:56:55.572616 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:56:58.759198 kubelet[2883]: I0307 01:56:58.757225 2883 scope.go:117] "RemoveContainer" containerID="8bef3eba01bafe9b17215e2715de136be72d40e977dd93b605593e391de87f89" Mar 7 01:56:58.759198 kubelet[2883]: E0307 01:56:58.757924 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:56:58.759198 kubelet[2883]: E0307 01:56:58.758125 2883 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-scheduler pod=kube-scheduler-localhost_kube-system(e944e4cb17af904786c3a2e01e298498)\"" pod="kube-system/kube-scheduler-localhost" podUID="e944e4cb17af904786c3a2e01e298498" Mar 7 01:56:59.066088 systemd[1]: run-containerd-runc-k8s.io-87e55ededdfbc98c59c3fd22185bbde1a7f8a07b3e7969c3b3f15beb25254ae0-runc.msjWwW.mount: Deactivated successfully. 
Mar 7 01:56:59.711634 kubelet[2883]: E0307 01:56:59.711309 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:57:00.677938 kubelet[2883]: E0307 01:57:00.677832 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:57:03.745962 kubelet[2883]: I0307 01:57:03.744086 2883 scope.go:117] "RemoveContainer" containerID="2a564b271076b1b5dbaab8c97b740a1fbf1f0fc54a79813841c8f1714599a0e0" Mar 7 01:57:03.745962 kubelet[2883]: E0307 01:57:03.744441 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:57:03.792357 containerd[1585]: time="2026-03-07T01:57:03.791789667Z" level=info msg="CreateContainer within sandbox \"a2e61fcb84316b31158fbc785302c732857b8cad6212f17a01f29a96acb994c6\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:2,}" Mar 7 01:57:03.909259 containerd[1585]: time="2026-03-07T01:57:03.906050825Z" level=info msg="CreateContainer within sandbox \"a2e61fcb84316b31158fbc785302c732857b8cad6212f17a01f29a96acb994c6\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:2,} returns container id \"dfa6165db59df00006bc501a898bd42f1af533f04e55742ae62d4e730348e756\"" Mar 7 01:57:03.909259 containerd[1585]: time="2026-03-07T01:57:03.907461291Z" level=info msg="StartContainer for \"dfa6165db59df00006bc501a898bd42f1af533f04e55742ae62d4e730348e756\"" Mar 7 01:57:04.226844 containerd[1585]: time="2026-03-07T01:57:04.223511705Z" level=info msg="StartContainer for \"dfa6165db59df00006bc501a898bd42f1af533f04e55742ae62d4e730348e756\" returns successfully" Mar 7 01:57:04.658390 kubelet[2883]: E0307 01:57:04.658248 2883 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:53690->127.0.0.1:40277: write tcp 127.0.0.1:53690->127.0.0.1:40277: write: broken pipe Mar 7 01:57:04.850251 kubelet[2883]: E0307 01:57:04.849204 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:57:04.865146 systemd[1]: run-containerd-runc-k8s.io-87e55ededdfbc98c59c3fd22185bbde1a7f8a07b3e7969c3b3f15beb25254ae0-runc.v1oans.mount: Deactivated successfully. 
Mar 7 01:57:05.858857 kubelet[2883]: E0307 01:57:05.857623 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:57:09.672958 kubelet[2883]: I0307 01:57:09.672783 2883 scope.go:117] "RemoveContainer" containerID="8bef3eba01bafe9b17215e2715de136be72d40e977dd93b605593e391de87f89" Mar 7 01:57:09.672958 kubelet[2883]: E0307 01:57:09.672966 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:57:09.679366 kubelet[2883]: E0307 01:57:09.677967 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:57:09.721395 containerd[1585]: time="2026-03-07T01:57:09.718484307Z" level=info msg="CreateContainer within sandbox \"edcb12c7653526c02baefc8929808e947b8be12821abe31542633436c407834b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:2,}" Mar 7 01:57:09.863451 containerd[1585]: time="2026-03-07T01:57:09.862503492Z" level=info msg="CreateContainer within sandbox \"edcb12c7653526c02baefc8929808e947b8be12821abe31542633436c407834b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:2,} returns container id \"52332fc14f3d558151b32f18863c73cc29581bfa3b21fe71114cfef436ab301b\"" Mar 7 01:57:09.867757 containerd[1585]: time="2026-03-07T01:57:09.866782536Z" level=info msg="StartContainer for \"52332fc14f3d558151b32f18863c73cc29581bfa3b21fe71114cfef436ab301b\"" Mar 7 01:57:10.393488 containerd[1585]: time="2026-03-07T01:57:10.389315518Z" level=info msg="StartContainer for \"52332fc14f3d558151b32f18863c73cc29581bfa3b21fe71114cfef436ab301b\" returns successfully" Mar 7 01:57:10.937663 kubelet[2883]: E0307 01:57:10.937024 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:57:11.945455 kubelet[2883]: E0307 01:57:11.945103 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:57:14.932410 kubelet[2883]: E0307 01:57:14.931835 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:57:15.679366 kubelet[2883]: E0307 01:57:15.678115 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:57:20.405985 kubelet[2883]: E0307 01:57:20.405404 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:57:21.066766 kubelet[2883]: E0307 01:57:21.064463 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:57:22.075414 kubelet[2883]: E0307 01:57:22.074971 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:57:25.610333 
systemd[1]: run-containerd-runc-k8s.io-87e55ededdfbc98c59c3fd22185bbde1a7f8a07b3e7969c3b3f15beb25254ae0-runc.X0Fu5b.mount: Deactivated successfully. Mar 7 01:57:31.208042 systemd[1]: run-containerd-runc-k8s.io-87e55ededdfbc98c59c3fd22185bbde1a7f8a07b3e7969c3b3f15beb25254ae0-runc.ycbmwB.mount: Deactivated successfully. Mar 7 01:57:39.954282 sudo[1795]: pam_unix(sudo:session): session closed for user root Mar 7 01:57:39.997115 sshd[1788]: pam_unix(sshd:session): session closed for user core Mar 7 01:57:40.037903 systemd[1]: sshd@8-10.0.0.122:22-10.0.0.1:48654.service: Deactivated successfully. Mar 7 01:57:40.065153 systemd[1]: session-9.scope: Deactivated successfully. Mar 7 01:57:40.071871 systemd-logind[1559]: Session 9 logged out. Waiting for processes to exit. Mar 7 01:57:40.087245 systemd-logind[1559]: Removed session 9. Mar 7 01:58:00.675862 kubelet[2883]: E0307 01:58:00.672586 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:58:02.675520 kubelet[2883]: E0307 01:58:02.674999 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:58:07.678122 kubelet[2883]: E0307 01:58:07.676164 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:58:19.758970 kubelet[2883]: E0307 01:58:19.757104 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:58:22.677940 kubelet[2883]: E0307 01:58:22.675537 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:58:29.676260 kubelet[2883]: E0307 01:58:29.676213 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:58:34.675018 kubelet[2883]: E0307 01:58:34.672517 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:58:37.676068 kubelet[2883]: E0307 01:58:37.673146 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:59:06.688215 kubelet[2883]: E0307 01:59:06.687013 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:59:10.693220 kubelet[2883]: E0307 01:59:10.686638 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:59:12.675042 kubelet[2883]: E0307 01:59:12.673618 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:59:31.729585 kubelet[2883]: E0307 01:59:31.729001 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:59:39.679967 kubelet[2883]: E0307 01:59:39.678482 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:59:43.706298 kubelet[2883]: E0307 01:59:43.698604 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:59:44.677332 kubelet[2883]: E0307 01:59:44.674255 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:00:00.673192 kubelet[2883]: E0307 02:00:00.672559 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:00:17.678951 kubelet[2883]: E0307 02:00:17.675565 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:00:21.520209 systemd[1]: Started sshd@9-10.0.0.122:22-10.0.0.1:59150.service - OpenSSH per-connection server daemon (10.0.0.1:59150). Mar 7 02:00:21.867967 sshd[5333]: Accepted publickey for core from 10.0.0.1 port 59150 ssh2: RSA SHA256:FaKL72VfKns/3XaiikJSJBegvMb77gTkRQImwJ2PJcg Mar 7 02:00:21.886478 sshd[5333]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 02:00:21.995424 systemd-logind[1559]: New session 10 of user core. Mar 7 02:00:22.008649 systemd[1]: Started session-10.scope - Session 10 of User core. Mar 7 02:00:22.886762 sshd[5333]: pam_unix(sshd:session): session closed for user core Mar 7 02:00:22.922873 systemd[1]: sshd@9-10.0.0.122:22-10.0.0.1:59150.service: Deactivated successfully. Mar 7 02:00:22.952592 systemd[1]: session-10.scope: Deactivated successfully. Mar 7 02:00:22.957362 systemd-logind[1559]: Session 10 logged out. Waiting for processes to exit. Mar 7 02:00:22.972361 systemd-logind[1559]: Removed session 10. Mar 7 02:00:27.946411 systemd[1]: Started sshd@10-10.0.0.122:22-10.0.0.1:59178.service - OpenSSH per-connection server daemon (10.0.0.1:59178). Mar 7 02:00:28.213502 sshd[5355]: Accepted publickey for core from 10.0.0.1 port 59178 ssh2: RSA SHA256:FaKL72VfKns/3XaiikJSJBegvMb77gTkRQImwJ2PJcg Mar 7 02:00:28.238188 sshd[5355]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 02:00:28.341221 systemd-logind[1559]: New session 11 of user core. Mar 7 02:00:28.380837 systemd[1]: Started session-11.scope - Session 11 of User core. Mar 7 02:00:29.330116 sshd[5355]: pam_unix(sshd:session): session closed for user core Mar 7 02:00:29.370311 systemd[1]: sshd@10-10.0.0.122:22-10.0.0.1:59178.service: Deactivated successfully. Mar 7 02:00:29.395513 systemd[1]: session-11.scope: Deactivated successfully. Mar 7 02:00:29.444122 systemd-logind[1559]: Session 11 logged out. Waiting for processes to exit. Mar 7 02:00:29.446434 systemd-logind[1559]: Removed session 11. 
Mar 7 02:00:30.675416 kubelet[2883]: E0307 02:00:30.674214 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:00:34.250391 update_engine[1569]: I20260307 02:00:34.238435 1569 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Mar 7 02:00:34.250391 update_engine[1569]: I20260307 02:00:34.238631 1569 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Mar 7 02:00:34.250391 update_engine[1569]: I20260307 02:00:34.239216 1569 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Mar 7 02:00:34.346573 update_engine[1569]: I20260307 02:00:34.340106 1569 omaha_request_params.cc:62] Current group set to lts Mar 7 02:00:34.351216 kubelet[2883]: E0307 02:00:34.351169 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:00:34.388831 update_engine[1569]: I20260307 02:00:34.372769 1569 update_attempter.cc:499] Already updated boot flags. Skipping. Mar 7 02:00:34.388831 update_engine[1569]: I20260307 02:00:34.372816 1569 update_attempter.cc:643] Scheduling an action processor start. Mar 7 02:00:34.388831 update_engine[1569]: I20260307 02:00:34.372943 1569 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Mar 7 02:00:34.388831 update_engine[1569]: I20260307 02:00:34.373107 1569 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Mar 7 02:00:34.388831 update_engine[1569]: I20260307 02:00:34.373463 1569 omaha_request_action.cc:271] Posting an Omaha request to disabled Mar 7 02:00:34.388831 update_engine[1569]: I20260307 02:00:34.373487 1569 omaha_request_action.cc:272] Request: Mar 7 02:00:34.388831 update_engine[1569]: Mar 7 02:00:34.388831 update_engine[1569]: Mar 7 02:00:34.388831 update_engine[1569]: Mar 7 02:00:34.388831 update_engine[1569]: Mar 7 02:00:34.388831 update_engine[1569]: Mar 7 02:00:34.388831 update_engine[1569]: Mar 7 02:00:34.388831 update_engine[1569]: Mar 7 02:00:34.388831 update_engine[1569]: Mar 7 02:00:34.388831 update_engine[1569]: I20260307 02:00:34.373536 1569 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 7 02:00:34.412104 systemd[1]: Started sshd@11-10.0.0.122:22-10.0.0.1:45652.service - OpenSSH per-connection server daemon (10.0.0.1:45652). Mar 7 02:00:34.536472 update_engine[1569]: I20260307 02:00:34.522051 1569 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 7 02:00:34.576442 update_engine[1569]: I20260307 02:00:34.567649 1569 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Mar 7 02:00:34.595610 update_engine[1569]: E20260307 02:00:34.594388 1569 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 7 02:00:34.595610 update_engine[1569]: I20260307 02:00:34.594652 1569 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Mar 7 02:00:34.667651 locksmithd[1608]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Mar 7 02:00:34.754867 sshd[5375]: Accepted publickey for core from 10.0.0.1 port 45652 ssh2: RSA SHA256:FaKL72VfKns/3XaiikJSJBegvMb77gTkRQImwJ2PJcg Mar 7 02:00:34.772083 sshd[5375]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 02:00:34.812996 systemd-logind[1559]: New session 12 of user core. Mar 7 02:00:34.826801 systemd[1]: Started session-12.scope - Session 12 of User core. Mar 7 02:00:35.665820 sshd[5375]: pam_unix(sshd:session): session closed for user core Mar 7 02:00:35.686402 systemd[1]: sshd@11-10.0.0.122:22-10.0.0.1:45652.service: Deactivated successfully. Mar 7 02:00:35.703162 systemd[1]: session-12.scope: Deactivated successfully. Mar 7 02:00:35.704532 systemd-logind[1559]: Session 12 logged out. Waiting for processes to exit. Mar 7 02:00:35.712259 systemd-logind[1559]: Removed session 12. Mar 7 02:00:40.682156 kubelet[2883]: E0307 02:00:40.677582 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:00:40.711290 systemd[1]: Started sshd@12-10.0.0.122:22-10.0.0.1:40698.service - OpenSSH per-connection server daemon (10.0.0.1:40698). Mar 7 02:00:41.044330 sshd[5393]: Accepted publickey for core from 10.0.0.1 port 40698 ssh2: RSA SHA256:FaKL72VfKns/3XaiikJSJBegvMb77gTkRQImwJ2PJcg Mar 7 02:00:41.048482 sshd[5393]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 02:00:41.115572 systemd-logind[1559]: New session 13 of user core. Mar 7 02:00:41.140502 systemd[1]: Started session-13.scope - Session 13 of User core. Mar 7 02:00:41.631849 sshd[5393]: pam_unix(sshd:session): session closed for user core Mar 7 02:00:41.649447 systemd[1]: sshd@12-10.0.0.122:22-10.0.0.1:40698.service: Deactivated successfully. Mar 7 02:00:41.670121 systemd-logind[1559]: Session 13 logged out. Waiting for processes to exit. Mar 7 02:00:41.670417 systemd[1]: session-13.scope: Deactivated successfully. Mar 7 02:00:41.690446 systemd-logind[1559]: Removed session 13. Mar 7 02:00:45.194822 update_engine[1569]: I20260307 02:00:45.192618 1569 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 7 02:00:45.194822 update_engine[1569]: I20260307 02:00:45.193216 1569 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 7 02:00:45.218344 update_engine[1569]: I20260307 02:00:45.198627 1569 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Mar 7 02:00:45.219327 update_engine[1569]: E20260307 02:00:45.219018 1569 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 7 02:00:45.219327 update_engine[1569]: I20260307 02:00:45.219095 1569 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Mar 7 02:00:46.686083 kubelet[2883]: E0307 02:00:46.683040 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:00:46.692649 systemd[1]: Started sshd@13-10.0.0.122:22-10.0.0.1:40714.service - OpenSSH per-connection server daemon (10.0.0.1:40714). Mar 7 02:00:47.053266 sshd[5411]: Accepted publickey for core from 10.0.0.1 port 40714 ssh2: RSA SHA256:FaKL72VfKns/3XaiikJSJBegvMb77gTkRQImwJ2PJcg Mar 7 02:00:47.060487 sshd[5411]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 02:00:47.101118 systemd-logind[1559]: New session 14 of user core. Mar 7 02:00:47.130173 systemd[1]: Started session-14.scope - Session 14 of User core. Mar 7 02:00:47.863848 sshd[5411]: pam_unix(sshd:session): session closed for user core Mar 7 02:00:47.911003 systemd-logind[1559]: Session 14 logged out. Waiting for processes to exit. Mar 7 02:00:47.916403 systemd[1]: sshd@13-10.0.0.122:22-10.0.0.1:40714.service: Deactivated successfully. Mar 7 02:00:47.937266 systemd[1]: session-14.scope: Deactivated successfully. Mar 7 02:00:47.953403 systemd-logind[1559]: Removed session 14. Mar 7 02:00:52.914433 systemd[1]: Started sshd@14-10.0.0.122:22-10.0.0.1:48818.service - OpenSSH per-connection server daemon (10.0.0.1:48818). Mar 7 02:00:53.108440 sshd[5428]: Accepted publickey for core from 10.0.0.1 port 48818 ssh2: RSA SHA256:FaKL72VfKns/3XaiikJSJBegvMb77gTkRQImwJ2PJcg Mar 7 02:00:53.116006 sshd[5428]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 02:00:53.145998 systemd-logind[1559]: New session 15 of user core. Mar 7 02:00:53.179860 systemd[1]: Started session-15.scope - Session 15 of User core. Mar 7 02:00:53.718181 sshd[5428]: pam_unix(sshd:session): session closed for user core Mar 7 02:00:53.732924 systemd[1]: sshd@14-10.0.0.122:22-10.0.0.1:48818.service: Deactivated successfully. Mar 7 02:00:53.743642 systemd-logind[1559]: Session 15 logged out. Waiting for processes to exit. Mar 7 02:00:53.756278 systemd[1]: session-15.scope: Deactivated successfully. Mar 7 02:00:53.760613 systemd-logind[1559]: Removed session 15. Mar 7 02:00:55.203733 update_engine[1569]: I20260307 02:00:55.201232 1569 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 7 02:00:55.210781 update_engine[1569]: I20260307 02:00:55.206184 1569 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 7 02:00:55.210781 update_engine[1569]: I20260307 02:00:55.206550 1569 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Mar 7 02:00:55.233612 update_engine[1569]: E20260307 02:00:55.229555 1569 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 7 02:00:55.233612 update_engine[1569]: I20260307 02:00:55.229815 1569 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Mar 7 02:00:58.756441 systemd[1]: Started sshd@15-10.0.0.122:22-10.0.0.1:48822.service - OpenSSH per-connection server daemon (10.0.0.1:48822). 
Mar 7 02:00:58.988128 sshd[5447]: Accepted publickey for core from 10.0.0.1 port 48822 ssh2: RSA SHA256:FaKL72VfKns/3XaiikJSJBegvMb77gTkRQImwJ2PJcg Mar 7 02:00:58.994265 sshd[5447]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 02:00:59.039643 systemd-logind[1559]: New session 16 of user core. Mar 7 02:00:59.051835 systemd[1]: Started session-16.scope - Session 16 of User core. Mar 7 02:00:59.486227 sshd[5447]: pam_unix(sshd:session): session closed for user core Mar 7 02:00:59.520848 systemd[1]: sshd@15-10.0.0.122:22-10.0.0.1:48822.service: Deactivated successfully. Mar 7 02:00:59.556780 systemd[1]: session-16.scope: Deactivated successfully. Mar 7 02:00:59.577479 systemd-logind[1559]: Session 16 logged out. Waiting for processes to exit. Mar 7 02:00:59.609418 systemd-logind[1559]: Removed session 16. Mar 7 02:01:03.676387 kubelet[2883]: E0307 02:01:03.672417 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:01:04.567245 systemd[1]: Started sshd@16-10.0.0.122:22-10.0.0.1:45614.service - OpenSSH per-connection server daemon (10.0.0.1:45614). Mar 7 02:01:04.717314 sshd[5466]: Accepted publickey for core from 10.0.0.1 port 45614 ssh2: RSA SHA256:FaKL72VfKns/3XaiikJSJBegvMb77gTkRQImwJ2PJcg Mar 7 02:01:04.729846 sshd[5466]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 02:01:04.882055 systemd-logind[1559]: New session 17 of user core. Mar 7 02:01:04.931096 systemd[1]: Started session-17.scope - Session 17 of User core. Mar 7 02:01:05.211791 update_engine[1569]: I20260307 02:01:05.210763 1569 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 7 02:01:05.211791 update_engine[1569]: I20260307 02:01:05.211288 1569 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 7 02:01:05.211791 update_engine[1569]: I20260307 02:01:05.211627 1569 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Mar 7 02:01:05.247240 update_engine[1569]: E20260307 02:01:05.244843 1569 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 7 02:01:05.247240 update_engine[1569]: I20260307 02:01:05.245018 1569 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Mar 7 02:01:05.247240 update_engine[1569]: I20260307 02:01:05.245045 1569 omaha_request_action.cc:617] Omaha request response: Mar 7 02:01:05.247240 update_engine[1569]: E20260307 02:01:05.245177 1569 omaha_request_action.cc:636] Omaha request network transfer failed. Mar 7 02:01:05.247240 update_engine[1569]: I20260307 02:01:05.245216 1569 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Mar 7 02:01:05.247240 update_engine[1569]: I20260307 02:01:05.245229 1569 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Mar 7 02:01:05.247240 update_engine[1569]: I20260307 02:01:05.245239 1569 update_attempter.cc:306] Processing Done. Mar 7 02:01:05.247240 update_engine[1569]: E20260307 02:01:05.245265 1569 update_attempter.cc:619] Update failed. 
Mar 7 02:01:05.247240 update_engine[1569]: I20260307 02:01:05.245276 1569 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Mar 7 02:01:05.247240 update_engine[1569]: I20260307 02:01:05.245289 1569 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Mar 7 02:01:05.247240 update_engine[1569]: I20260307 02:01:05.245301 1569 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Mar 7 02:01:05.249457 update_engine[1569]: I20260307 02:01:05.247954 1569 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Mar 7 02:01:05.249457 update_engine[1569]: I20260307 02:01:05.248078 1569 omaha_request_action.cc:271] Posting an Omaha request to disabled Mar 7 02:01:05.249457 update_engine[1569]: I20260307 02:01:05.248095 1569 omaha_request_action.cc:272] Request: Mar 7 02:01:05.249457 update_engine[1569]: Mar 7 02:01:05.249457 update_engine[1569]: Mar 7 02:01:05.249457 update_engine[1569]: Mar 7 02:01:05.249457 update_engine[1569]: Mar 7 02:01:05.249457 update_engine[1569]: Mar 7 02:01:05.249457 update_engine[1569]: Mar 7 02:01:05.249457 update_engine[1569]: I20260307 02:01:05.248111 1569 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 7 02:01:05.249457 update_engine[1569]: I20260307 02:01:05.248436 1569 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 7 02:01:05.258587 update_engine[1569]: I20260307 02:01:05.258489 1569 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Mar 7 02:01:05.260079 locksmithd[1608]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Mar 7 02:01:05.295899 update_engine[1569]: E20260307 02:01:05.295367 1569 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 7 02:01:05.295899 update_engine[1569]: I20260307 02:01:05.295492 1569 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Mar 7 02:01:05.295899 update_engine[1569]: I20260307 02:01:05.295516 1569 omaha_request_action.cc:617] Omaha request response: Mar 7 02:01:05.295899 update_engine[1569]: I20260307 02:01:05.295531 1569 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Mar 7 02:01:05.295899 update_engine[1569]: I20260307 02:01:05.295540 1569 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Mar 7 02:01:05.295899 update_engine[1569]: I20260307 02:01:05.295557 1569 update_attempter.cc:306] Processing Done. Mar 7 02:01:05.295899 update_engine[1569]: I20260307 02:01:05.295570 1569 update_attempter.cc:310] Error event sent. Mar 7 02:01:05.295899 update_engine[1569]: I20260307 02:01:05.295631 1569 update_check_scheduler.cc:74] Next update check in 47m47s Mar 7 02:01:05.314023 locksmithd[1608]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Mar 7 02:01:05.636494 sshd[5466]: pam_unix(sshd:session): session closed for user core Mar 7 02:01:05.691765 systemd[1]: sshd@16-10.0.0.122:22-10.0.0.1:45614.service: Deactivated successfully. Mar 7 02:01:05.738199 systemd-logind[1559]: Session 17 logged out. Waiting for processes to exit. Mar 7 02:01:05.740464 systemd[1]: session-17.scope: Deactivated successfully. Mar 7 02:01:05.760618 systemd-logind[1559]: Removed session 17. 
Mar 7 02:01:07.678122 kubelet[2883]: E0307 02:01:07.672540 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:01:10.699164 systemd[1]: Started sshd@17-10.0.0.122:22-10.0.0.1:57944.service - OpenSSH per-connection server daemon (10.0.0.1:57944). Mar 7 02:01:10.833613 sshd[5483]: Accepted publickey for core from 10.0.0.1 port 57944 ssh2: RSA SHA256:FaKL72VfKns/3XaiikJSJBegvMb77gTkRQImwJ2PJcg Mar 7 02:01:10.843951 sshd[5483]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 02:01:10.863649 systemd-logind[1559]: New session 18 of user core. Mar 7 02:01:10.895633 systemd[1]: Started session-18.scope - Session 18 of User core. Mar 7 02:01:11.544531 sshd[5483]: pam_unix(sshd:session): session closed for user core Mar 7 02:01:11.586138 systemd[1]: sshd@17-10.0.0.122:22-10.0.0.1:57944.service: Deactivated successfully. Mar 7 02:01:11.602654 systemd[1]: session-18.scope: Deactivated successfully. Mar 7 02:01:11.605847 systemd-logind[1559]: Session 18 logged out. Waiting for processes to exit. Mar 7 02:01:11.624837 systemd-logind[1559]: Removed session 18. Mar 7 02:01:16.582090 systemd[1]: Started sshd@18-10.0.0.122:22-10.0.0.1:57952.service - OpenSSH per-connection server daemon (10.0.0.1:57952). Mar 7 02:01:16.734315 sshd[5500]: Accepted publickey for core from 10.0.0.1 port 57952 ssh2: RSA SHA256:FaKL72VfKns/3XaiikJSJBegvMb77gTkRQImwJ2PJcg Mar 7 02:01:16.739508 sshd[5500]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 02:01:16.811445 systemd-logind[1559]: New session 19 of user core. Mar 7 02:01:16.836328 systemd[1]: Started session-19.scope - Session 19 of User core. Mar 7 02:01:17.380416 sshd[5500]: pam_unix(sshd:session): session closed for user core Mar 7 02:01:17.392061 systemd[1]: sshd@18-10.0.0.122:22-10.0.0.1:57952.service: Deactivated successfully. Mar 7 02:01:17.411192 systemd-logind[1559]: Session 19 logged out. Waiting for processes to exit. Mar 7 02:01:17.426291 systemd[1]: session-19.scope: Deactivated successfully. Mar 7 02:01:17.437045 systemd-logind[1559]: Removed session 19. Mar 7 02:01:21.678519 kubelet[2883]: E0307 02:01:21.675495 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:01:22.422104 systemd[1]: Started sshd@19-10.0.0.122:22-10.0.0.1:33458.service - OpenSSH per-connection server daemon (10.0.0.1:33458). Mar 7 02:01:22.493793 sshd[5516]: Accepted publickey for core from 10.0.0.1 port 33458 ssh2: RSA SHA256:FaKL72VfKns/3XaiikJSJBegvMb77gTkRQImwJ2PJcg Mar 7 02:01:22.498408 sshd[5516]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 02:01:22.532942 systemd-logind[1559]: New session 20 of user core. Mar 7 02:01:22.556153 systemd[1]: Started session-20.scope - Session 20 of User core. Mar 7 02:01:23.133451 sshd[5516]: pam_unix(sshd:session): session closed for user core Mar 7 02:01:23.153959 systemd[1]: sshd@19-10.0.0.122:22-10.0.0.1:33458.service: Deactivated successfully. Mar 7 02:01:23.174790 systemd[1]: session-20.scope: Deactivated successfully. Mar 7 02:01:23.198219 systemd-logind[1559]: Session 20 logged out. Waiting for processes to exit. Mar 7 02:01:23.217334 systemd-logind[1559]: Removed session 20. 
Mar 7 02:01:24.677671 kubelet[2883]: E0307 02:01:24.673280 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:01:28.167585 systemd[1]: Started sshd@20-10.0.0.122:22-10.0.0.1:33460.service - OpenSSH per-connection server daemon (10.0.0.1:33460). Mar 7 02:01:28.321084 sshd[5532]: Accepted publickey for core from 10.0.0.1 port 33460 ssh2: RSA SHA256:FaKL72VfKns/3XaiikJSJBegvMb77gTkRQImwJ2PJcg Mar 7 02:01:28.328183 sshd[5532]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 02:01:28.391910 systemd-logind[1559]: New session 21 of user core. Mar 7 02:01:28.399095 systemd[1]: Started session-21.scope - Session 21 of User core. Mar 7 02:01:28.915597 sshd[5532]: pam_unix(sshd:session): session closed for user core Mar 7 02:01:28.930842 systemd[1]: sshd@20-10.0.0.122:22-10.0.0.1:33460.service: Deactivated successfully. Mar 7 02:01:28.966521 systemd[1]: session-21.scope: Deactivated successfully. Mar 7 02:01:28.967991 systemd-logind[1559]: Session 21 logged out. Waiting for processes to exit. Mar 7 02:01:28.987122 systemd-logind[1559]: Removed session 21. Mar 7 02:01:33.976472 systemd[1]: Started sshd@21-10.0.0.122:22-10.0.0.1:42672.service - OpenSSH per-connection server daemon (10.0.0.1:42672). Mar 7 02:01:34.276843 sshd[5552]: Accepted publickey for core from 10.0.0.1 port 42672 ssh2: RSA SHA256:FaKL72VfKns/3XaiikJSJBegvMb77gTkRQImwJ2PJcg Mar 7 02:01:34.288804 sshd[5552]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 02:01:34.382457 systemd-logind[1559]: New session 22 of user core. Mar 7 02:01:34.407448 systemd[1]: Started session-22.scope - Session 22 of User core. Mar 7 02:01:35.090343 sshd[5552]: pam_unix(sshd:session): session closed for user core Mar 7 02:01:35.113240 systemd[1]: sshd@21-10.0.0.122:22-10.0.0.1:42672.service: Deactivated successfully. Mar 7 02:01:35.140441 systemd[1]: session-22.scope: Deactivated successfully. Mar 7 02:01:35.141225 systemd-logind[1559]: Session 22 logged out. Waiting for processes to exit. Mar 7 02:01:35.160612 systemd-logind[1559]: Removed session 22. Mar 7 02:01:39.714423 kubelet[2883]: E0307 02:01:39.712828 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:01:40.156965 systemd[1]: Started sshd@22-10.0.0.122:22-10.0.0.1:48270.service - OpenSSH per-connection server daemon (10.0.0.1:48270). Mar 7 02:01:40.370493 sshd[5570]: Accepted publickey for core from 10.0.0.1 port 48270 ssh2: RSA SHA256:FaKL72VfKns/3XaiikJSJBegvMb77gTkRQImwJ2PJcg Mar 7 02:01:40.390603 sshd[5570]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 02:01:40.449333 systemd-logind[1559]: New session 23 of user core. Mar 7 02:01:40.489108 systemd[1]: Started session-23.scope - Session 23 of User core. Mar 7 02:01:41.348358 sshd[5570]: pam_unix(sshd:session): session closed for user core Mar 7 02:01:41.390138 systemd[1]: sshd@22-10.0.0.122:22-10.0.0.1:48270.service: Deactivated successfully. Mar 7 02:01:41.420668 systemd[1]: session-23.scope: Deactivated successfully. Mar 7 02:01:41.425548 systemd-logind[1559]: Session 23 logged out. Waiting for processes to exit. Mar 7 02:01:41.436507 systemd-logind[1559]: Removed session 23. 
Mar 7 02:01:46.435832 systemd[1]: Started sshd@23-10.0.0.122:22-10.0.0.1:48274.service - OpenSSH per-connection server daemon (10.0.0.1:48274). Mar 7 02:01:46.655453 sshd[5587]: Accepted publickey for core from 10.0.0.1 port 48274 ssh2: RSA SHA256:FaKL72VfKns/3XaiikJSJBegvMb77gTkRQImwJ2PJcg Mar 7 02:01:46.674180 sshd[5587]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 02:01:46.692059 kubelet[2883]: E0307 02:01:46.681343 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:01:46.770277 systemd-logind[1559]: New session 24 of user core. Mar 7 02:01:46.786930 systemd[1]: Started session-24.scope - Session 24 of User core. Mar 7 02:01:47.427208 sshd[5587]: pam_unix(sshd:session): session closed for user core Mar 7 02:01:47.440368 systemd[1]: sshd@23-10.0.0.122:22-10.0.0.1:48274.service: Deactivated successfully. Mar 7 02:01:47.450030 systemd[1]: session-24.scope: Deactivated successfully. Mar 7 02:01:47.454033 systemd-logind[1559]: Session 24 logged out. Waiting for processes to exit. Mar 7 02:01:47.464929 systemd-logind[1559]: Removed session 24. Mar 7 02:01:52.492875 systemd[1]: Started sshd@24-10.0.0.122:22-10.0.0.1:42600.service - OpenSSH per-connection server daemon (10.0.0.1:42600). Mar 7 02:01:52.671289 sshd[5603]: Accepted publickey for core from 10.0.0.1 port 42600 ssh2: RSA SHA256:FaKL72VfKns/3XaiikJSJBegvMb77gTkRQImwJ2PJcg Mar 7 02:01:52.686246 sshd[5603]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 02:01:52.761952 systemd-logind[1559]: New session 25 of user core. Mar 7 02:01:52.797559 systemd[1]: Started session-25.scope - Session 25 of User core. Mar 7 02:01:53.463018 sshd[5603]: pam_unix(sshd:session): session closed for user core Mar 7 02:01:53.493463 systemd[1]: sshd@24-10.0.0.122:22-10.0.0.1:42600.service: Deactivated successfully. Mar 7 02:01:53.558012 systemd[1]: session-25.scope: Deactivated successfully. Mar 7 02:01:53.587829 systemd-logind[1559]: Session 25 logged out. Waiting for processes to exit. Mar 7 02:01:53.621493 systemd-logind[1559]: Removed session 25. Mar 7 02:01:58.501429 systemd[1]: Started sshd@25-10.0.0.122:22-10.0.0.1:42606.service - OpenSSH per-connection server daemon (10.0.0.1:42606). Mar 7 02:01:58.819521 sshd[5619]: Accepted publickey for core from 10.0.0.1 port 42606 ssh2: RSA SHA256:FaKL72VfKns/3XaiikJSJBegvMb77gTkRQImwJ2PJcg Mar 7 02:01:58.828871 sshd[5619]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 02:01:58.850430 systemd-logind[1559]: New session 26 of user core. Mar 7 02:01:58.863318 systemd[1]: Started session-26.scope - Session 26 of User core. Mar 7 02:01:59.885653 sshd[5619]: pam_unix(sshd:session): session closed for user core Mar 7 02:01:59.947353 systemd[1]: sshd@25-10.0.0.122:22-10.0.0.1:42606.service: Deactivated successfully. Mar 7 02:02:00.027660 systemd-logind[1559]: Session 26 logged out. Waiting for processes to exit. Mar 7 02:02:00.029477 systemd[1]: session-26.scope: Deactivated successfully. Mar 7 02:02:00.060074 systemd-logind[1559]: Removed session 26. Mar 7 02:02:04.947806 systemd[1]: Started sshd@26-10.0.0.122:22-10.0.0.1:51060.service - OpenSSH per-connection server daemon (10.0.0.1:51060). 
Mar 7 02:02:05.309636 sshd[5637]: Accepted publickey for core from 10.0.0.1 port 51060 ssh2: RSA SHA256:FaKL72VfKns/3XaiikJSJBegvMb77gTkRQImwJ2PJcg Mar 7 02:02:05.331537 sshd[5637]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 02:02:05.365995 systemd-logind[1559]: New session 27 of user core. Mar 7 02:02:05.385549 systemd[1]: Started session-27.scope - Session 27 of User core. Mar 7 02:02:06.115439 sshd[5637]: pam_unix(sshd:session): session closed for user core Mar 7 02:02:06.121465 systemd[1]: sshd@26-10.0.0.122:22-10.0.0.1:51060.service: Deactivated successfully. Mar 7 02:02:06.129872 systemd[1]: session-27.scope: Deactivated successfully. Mar 7 02:02:06.130101 systemd-logind[1559]: Session 27 logged out. Waiting for processes to exit. Mar 7 02:02:06.136547 systemd-logind[1559]: Removed session 27. Mar 7 02:02:07.689388 kubelet[2883]: E0307 02:02:07.689209 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:02:11.166351 systemd[1]: Started sshd@27-10.0.0.122:22-10.0.0.1:60240.service - OpenSSH per-connection server daemon (10.0.0.1:60240). Mar 7 02:02:11.473792 sshd[5654]: Accepted publickey for core from 10.0.0.1 port 60240 ssh2: RSA SHA256:FaKL72VfKns/3XaiikJSJBegvMb77gTkRQImwJ2PJcg Mar 7 02:02:11.477802 sshd[5654]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 02:02:11.537499 systemd-logind[1559]: New session 28 of user core. Mar 7 02:02:11.546593 systemd[1]: Started session-28.scope - Session 28 of User core. Mar 7 02:02:11.681386 kubelet[2883]: E0307 02:02:11.674917 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:02:12.136595 sshd[5654]: pam_unix(sshd:session): session closed for user core Mar 7 02:02:12.170994 systemd[1]: sshd@27-10.0.0.122:22-10.0.0.1:60240.service: Deactivated successfully. Mar 7 02:02:12.226647 systemd[1]: session-28.scope: Deactivated successfully. Mar 7 02:02:12.232598 systemd-logind[1559]: Session 28 logged out. Waiting for processes to exit. Mar 7 02:02:12.272899 systemd-logind[1559]: Removed session 28. Mar 7 02:02:16.692853 kubelet[2883]: E0307 02:02:16.692530 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:02:17.246351 systemd[1]: Started sshd@28-10.0.0.122:22-10.0.0.1:60250.service - OpenSSH per-connection server daemon (10.0.0.1:60250). Mar 7 02:02:17.573418 sshd[5673]: Accepted publickey for core from 10.0.0.1 port 60250 ssh2: RSA SHA256:FaKL72VfKns/3XaiikJSJBegvMb77gTkRQImwJ2PJcg Mar 7 02:02:17.562413 sshd[5673]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 02:02:17.634501 systemd-logind[1559]: New session 29 of user core. Mar 7 02:02:17.649559 systemd[1]: Started session-29.scope - Session 29 of User core. Mar 7 02:02:19.241598 sshd[5673]: pam_unix(sshd:session): session closed for user core Mar 7 02:02:19.273908 systemd-logind[1559]: Session 29 logged out. Waiting for processes to exit. Mar 7 02:02:19.277389 systemd[1]: sshd@28-10.0.0.122:22-10.0.0.1:60250.service: Deactivated successfully. Mar 7 02:02:19.288780 systemd[1]: session-29.scope: Deactivated successfully. Mar 7 02:02:19.306138 systemd-logind[1559]: Removed session 29. 
Mar 7 02:02:24.329217 systemd[1]: Started sshd@29-10.0.0.122:22-10.0.0.1:58182.service - OpenSSH per-connection server daemon (10.0.0.1:58182). Mar 7 02:02:24.565843 sshd[5691]: Accepted publickey for core from 10.0.0.1 port 58182 ssh2: RSA SHA256:FaKL72VfKns/3XaiikJSJBegvMb77gTkRQImwJ2PJcg Mar 7 02:02:24.570122 sshd[5691]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 02:02:24.632500 systemd-logind[1559]: New session 30 of user core. Mar 7 02:02:24.645005 systemd[1]: Started session-30.scope - Session 30 of User core. Mar 7 02:02:24.675902 kubelet[2883]: E0307 02:02:24.675509 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:02:25.297633 sshd[5691]: pam_unix(sshd:session): session closed for user core Mar 7 02:02:25.317331 systemd[1]: sshd@29-10.0.0.122:22-10.0.0.1:58182.service: Deactivated successfully. Mar 7 02:02:25.339092 systemd[1]: session-30.scope: Deactivated successfully. Mar 7 02:02:25.340031 systemd-logind[1559]: Session 30 logged out. Waiting for processes to exit. Mar 7 02:02:25.350370 systemd-logind[1559]: Removed session 30. Mar 7 02:02:27.707936 kubelet[2883]: E0307 02:02:27.698887 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:02:30.428501 systemd[1]: Started sshd@30-10.0.0.122:22-10.0.0.1:43008.service - OpenSSH per-connection server daemon (10.0.0.1:43008). Mar 7 02:02:30.760445 sshd[5710]: Accepted publickey for core from 10.0.0.1 port 43008 ssh2: RSA SHA256:FaKL72VfKns/3XaiikJSJBegvMb77gTkRQImwJ2PJcg Mar 7 02:02:30.774000 sshd[5710]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 02:02:30.808538 systemd-logind[1559]: New session 31 of user core. Mar 7 02:02:30.827473 systemd[1]: Started session-31.scope - Session 31 of User core. Mar 7 02:02:32.064584 sshd[5710]: pam_unix(sshd:session): session closed for user core Mar 7 02:02:32.088380 systemd[1]: sshd@30-10.0.0.122:22-10.0.0.1:43008.service: Deactivated successfully. Mar 7 02:02:32.095031 systemd-logind[1559]: Session 31 logged out. Waiting for processes to exit. Mar 7 02:02:32.140075 systemd[1]: session-31.scope: Deactivated successfully. Mar 7 02:02:32.144937 systemd-logind[1559]: Removed session 31. Mar 7 02:02:37.118547 systemd[1]: Started sshd@31-10.0.0.122:22-10.0.0.1:43022.service - OpenSSH per-connection server daemon (10.0.0.1:43022). Mar 7 02:02:37.390151 sshd[5729]: Accepted publickey for core from 10.0.0.1 port 43022 ssh2: RSA SHA256:FaKL72VfKns/3XaiikJSJBegvMb77gTkRQImwJ2PJcg Mar 7 02:02:37.408045 sshd[5729]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 02:02:37.453018 systemd-logind[1559]: New session 32 of user core. Mar 7 02:02:37.473954 systemd[1]: Started session-32.scope - Session 32 of User core. Mar 7 02:02:38.070296 sshd[5729]: pam_unix(sshd:session): session closed for user core Mar 7 02:02:38.085016 systemd[1]: sshd@31-10.0.0.122:22-10.0.0.1:43022.service: Deactivated successfully. Mar 7 02:02:38.104150 systemd[1]: session-32.scope: Deactivated successfully. Mar 7 02:02:38.107806 systemd-logind[1559]: Session 32 logged out. Waiting for processes to exit. Mar 7 02:02:38.127945 systemd-logind[1559]: Removed session 32. 
Mar 7 02:02:42.681032 kubelet[2883]: E0307 02:02:42.678641 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:02:43.113253 systemd[1]: Started sshd@32-10.0.0.122:22-10.0.0.1:58820.service - OpenSSH per-connection server daemon (10.0.0.1:58820). Mar 7 02:02:43.312974 sshd[5745]: Accepted publickey for core from 10.0.0.1 port 58820 ssh2: RSA SHA256:FaKL72VfKns/3XaiikJSJBegvMb77gTkRQImwJ2PJcg Mar 7 02:02:43.316448 sshd[5745]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 02:02:43.338878 systemd-logind[1559]: New session 33 of user core. Mar 7 02:02:43.353085 systemd[1]: Started session-33.scope - Session 33 of User core. Mar 7 02:02:43.871312 sshd[5745]: pam_unix(sshd:session): session closed for user core Mar 7 02:02:43.895983 systemd[1]: sshd@32-10.0.0.122:22-10.0.0.1:58820.service: Deactivated successfully. Mar 7 02:02:43.920464 systemd[1]: session-33.scope: Deactivated successfully. Mar 7 02:02:43.925124 systemd-logind[1559]: Session 33 logged out. Waiting for processes to exit. Mar 7 02:02:43.938499 systemd-logind[1559]: Removed session 33. Mar 7 02:02:48.943050 systemd[1]: Started sshd@33-10.0.0.122:22-10.0.0.1:58834.service - OpenSSH per-connection server daemon (10.0.0.1:58834). Mar 7 02:02:49.176968 sshd[5768]: Accepted publickey for core from 10.0.0.1 port 58834 ssh2: RSA SHA256:FaKL72VfKns/3XaiikJSJBegvMb77gTkRQImwJ2PJcg Mar 7 02:02:49.190579 sshd[5768]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 02:02:49.248048 systemd-logind[1559]: New session 34 of user core. Mar 7 02:02:49.286964 systemd[1]: Started session-34.scope - Session 34 of User core. Mar 7 02:02:49.977068 sshd[5768]: pam_unix(sshd:session): session closed for user core Mar 7 02:02:50.030081 systemd[1]: sshd@33-10.0.0.122:22-10.0.0.1:58834.service: Deactivated successfully. Mar 7 02:02:50.071186 systemd[1]: session-34.scope: Deactivated successfully. Mar 7 02:02:50.086377 systemd-logind[1559]: Session 34 logged out. Waiting for processes to exit. Mar 7 02:02:50.094439 systemd-logind[1559]: Removed session 34. Mar 7 02:02:55.018936 systemd[1]: Started sshd@34-10.0.0.122:22-10.0.0.1:59210.service - OpenSSH per-connection server daemon (10.0.0.1:59210). Mar 7 02:02:55.294530 sshd[5784]: Accepted publickey for core from 10.0.0.1 port 59210 ssh2: RSA SHA256:FaKL72VfKns/3XaiikJSJBegvMb77gTkRQImwJ2PJcg Mar 7 02:02:55.310893 sshd[5784]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 02:02:55.353596 systemd-logind[1559]: New session 35 of user core. Mar 7 02:02:55.378329 systemd[1]: Started session-35.scope - Session 35 of User core. Mar 7 02:02:56.409669 sshd[5784]: pam_unix(sshd:session): session closed for user core Mar 7 02:02:56.468354 systemd[1]: Started sshd@35-10.0.0.122:22-10.0.0.1:59214.service - OpenSSH per-connection server daemon (10.0.0.1:59214). Mar 7 02:02:56.469388 systemd[1]: sshd@34-10.0.0.122:22-10.0.0.1:59210.service: Deactivated successfully. Mar 7 02:02:56.490178 systemd[1]: session-35.scope: Deactivated successfully. Mar 7 02:02:56.512627 systemd-logind[1559]: Session 35 logged out. Waiting for processes to exit. Mar 7 02:02:56.570907 systemd-logind[1559]: Removed session 35. 
Mar 7 02:02:56.723093 sshd[5797]: Accepted publickey for core from 10.0.0.1 port 59214 ssh2: RSA SHA256:FaKL72VfKns/3XaiikJSJBegvMb77gTkRQImwJ2PJcg Mar 7 02:02:56.749671 sshd[5797]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 02:02:56.797649 systemd-logind[1559]: New session 36 of user core. Mar 7 02:02:56.825396 systemd[1]: Started session-36.scope - Session 36 of User core. Mar 7 02:02:58.157996 sshd[5797]: pam_unix(sshd:session): session closed for user core Mar 7 02:02:58.214388 systemd[1]: Started sshd@36-10.0.0.122:22-10.0.0.1:59226.service - OpenSSH per-connection server daemon (10.0.0.1:59226). Mar 7 02:02:58.264923 systemd[1]: sshd@35-10.0.0.122:22-10.0.0.1:59214.service: Deactivated successfully. Mar 7 02:02:58.286607 systemd-logind[1559]: Session 36 logged out. Waiting for processes to exit. Mar 7 02:02:58.296611 systemd[1]: session-36.scope: Deactivated successfully. Mar 7 02:02:58.358513 systemd-logind[1559]: Removed session 36. Mar 7 02:02:58.691223 sshd[5811]: Accepted publickey for core from 10.0.0.1 port 59226 ssh2: RSA SHA256:FaKL72VfKns/3XaiikJSJBegvMb77gTkRQImwJ2PJcg Mar 7 02:02:58.697156 sshd[5811]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 02:02:58.760522 systemd-logind[1559]: New session 37 of user core. Mar 7 02:02:58.817016 systemd[1]: Started session-37.scope - Session 37 of User core. Mar 7 02:02:59.696451 sshd[5811]: pam_unix(sshd:session): session closed for user core Mar 7 02:02:59.726222 systemd[1]: sshd@36-10.0.0.122:22-10.0.0.1:59226.service: Deactivated successfully. Mar 7 02:02:59.751150 systemd[1]: session-37.scope: Deactivated successfully. Mar 7 02:02:59.752621 systemd-logind[1559]: Session 37 logged out. Waiting for processes to exit. Mar 7 02:02:59.773549 systemd-logind[1559]: Removed session 37. Mar 7 02:03:04.724574 systemd[1]: Started sshd@37-10.0.0.122:22-10.0.0.1:57232.service - OpenSSH per-connection server daemon (10.0.0.1:57232). Mar 7 02:03:04.889930 sshd[5831]: Accepted publickey for core from 10.0.0.1 port 57232 ssh2: RSA SHA256:FaKL72VfKns/3XaiikJSJBegvMb77gTkRQImwJ2PJcg Mar 7 02:03:04.893834 sshd[5831]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 02:03:04.956673 systemd-logind[1559]: New session 38 of user core. Mar 7 02:03:04.974181 systemd[1]: Started session-38.scope - Session 38 of User core. Mar 7 02:03:05.514469 sshd[5831]: pam_unix(sshd:session): session closed for user core Mar 7 02:03:05.527394 systemd[1]: sshd@37-10.0.0.122:22-10.0.0.1:57232.service: Deactivated successfully. Mar 7 02:03:05.534936 systemd-logind[1559]: Session 38 logged out. Waiting for processes to exit. Mar 7 02:03:05.536164 systemd[1]: session-38.scope: Deactivated successfully. Mar 7 02:03:05.545176 systemd-logind[1559]: Removed session 38. Mar 7 02:03:05.681958 kubelet[2883]: E0307 02:03:05.676090 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:03:08.674389 kubelet[2883]: E0307 02:03:08.674233 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:03:10.617119 systemd[1]: Started sshd@38-10.0.0.122:22-10.0.0.1:52424.service - OpenSSH per-connection server daemon (10.0.0.1:52424). 
Mar 7 02:03:10.762671 sshd[5847]: Accepted publickey for core from 10.0.0.1 port 52424 ssh2: RSA SHA256:FaKL72VfKns/3XaiikJSJBegvMb77gTkRQImwJ2PJcg Mar 7 02:03:10.772467 sshd[5847]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 02:03:10.788787 systemd-logind[1559]: New session 39 of user core. Mar 7 02:03:10.807199 systemd[1]: Started session-39.scope - Session 39 of User core. Mar 7 02:03:11.431200 sshd[5847]: pam_unix(sshd:session): session closed for user core Mar 7 02:03:11.442269 systemd[1]: sshd@38-10.0.0.122:22-10.0.0.1:52424.service: Deactivated successfully. Mar 7 02:03:11.463829 systemd-logind[1559]: Session 39 logged out. Waiting for processes to exit. Mar 7 02:03:11.466486 systemd[1]: session-39.scope: Deactivated successfully. Mar 7 02:03:11.473635 systemd-logind[1559]: Removed session 39. Mar 7 02:03:16.485444 systemd[1]: Started sshd@39-10.0.0.122:22-10.0.0.1:52426.service - OpenSSH per-connection server daemon (10.0.0.1:52426). Mar 7 02:03:16.676022 sshd[5863]: Accepted publickey for core from 10.0.0.1 port 52426 ssh2: RSA SHA256:FaKL72VfKns/3XaiikJSJBegvMb77gTkRQImwJ2PJcg Mar 7 02:03:16.683277 sshd[5863]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 02:03:16.744228 systemd-logind[1559]: New session 40 of user core. Mar 7 02:03:16.765261 systemd[1]: Started session-40.scope - Session 40 of User core. Mar 7 02:03:17.314090 sshd[5863]: pam_unix(sshd:session): session closed for user core Mar 7 02:03:17.327933 systemd[1]: sshd@39-10.0.0.122:22-10.0.0.1:52426.service: Deactivated successfully. Mar 7 02:03:17.345026 systemd[1]: session-40.scope: Deactivated successfully. Mar 7 02:03:17.348083 systemd-logind[1559]: Session 40 logged out. Waiting for processes to exit. Mar 7 02:03:17.355497 systemd-logind[1559]: Removed session 40. Mar 7 02:03:22.366378 systemd[1]: Started sshd@40-10.0.0.122:22-10.0.0.1:47786.service - OpenSSH per-connection server daemon (10.0.0.1:47786). Mar 7 02:03:22.495395 sshd[5879]: Accepted publickey for core from 10.0.0.1 port 47786 ssh2: RSA SHA256:FaKL72VfKns/3XaiikJSJBegvMb77gTkRQImwJ2PJcg Mar 7 02:03:22.497912 sshd[5879]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 02:03:22.556507 systemd-logind[1559]: New session 41 of user core. Mar 7 02:03:22.568439 systemd[1]: Started session-41.scope - Session 41 of User core. Mar 7 02:03:23.062537 sshd[5879]: pam_unix(sshd:session): session closed for user core Mar 7 02:03:23.093473 systemd[1]: sshd@40-10.0.0.122:22-10.0.0.1:47786.service: Deactivated successfully. Mar 7 02:03:23.124006 systemd-logind[1559]: Session 41 logged out. Waiting for processes to exit. Mar 7 02:03:23.128438 systemd[1]: session-41.scope: Deactivated successfully. Mar 7 02:03:23.135232 systemd-logind[1559]: Removed session 41. Mar 7 02:03:24.684223 kubelet[2883]: E0307 02:03:24.682899 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:03:25.675306 kubelet[2883]: E0307 02:03:25.675042 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:03:28.113948 systemd[1]: Started sshd@41-10.0.0.122:22-10.0.0.1:47800.service - OpenSSH per-connection server daemon (10.0.0.1:47800). 
Mar 7 02:03:28.317507 sshd[5894]: Accepted publickey for core from 10.0.0.1 port 47800 ssh2: RSA SHA256:FaKL72VfKns/3XaiikJSJBegvMb77gTkRQImwJ2PJcg Mar 7 02:03:28.335190 sshd[5894]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 02:03:28.638579 systemd-logind[1559]: New session 42 of user core. Mar 7 02:03:28.655264 systemd[1]: Started session-42.scope - Session 42 of User core. Mar 7 02:03:29.531396 sshd[5894]: pam_unix(sshd:session): session closed for user core Mar 7 02:03:29.561781 systemd[1]: sshd@41-10.0.0.122:22-10.0.0.1:47800.service: Deactivated successfully. Mar 7 02:03:29.589194 systemd-logind[1559]: Session 42 logged out. Waiting for processes to exit. Mar 7 02:03:29.611970 systemd[1]: session-42.scope: Deactivated successfully. Mar 7 02:03:29.631958 systemd-logind[1559]: Removed session 42. Mar 7 02:03:34.575130 systemd[1]: Started sshd@42-10.0.0.122:22-10.0.0.1:33474.service - OpenSSH per-connection server daemon (10.0.0.1:33474). Mar 7 02:03:34.792426 sshd[5914]: Accepted publickey for core from 10.0.0.1 port 33474 ssh2: RSA SHA256:FaKL72VfKns/3XaiikJSJBegvMb77gTkRQImwJ2PJcg Mar 7 02:03:34.805007 sshd[5914]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 02:03:34.860092 systemd-logind[1559]: New session 43 of user core. Mar 7 02:03:34.872225 systemd[1]: Started session-43.scope - Session 43 of User core. Mar 7 02:03:35.791909 sshd[5914]: pam_unix(sshd:session): session closed for user core Mar 7 02:03:35.829955 systemd[1]: sshd@42-10.0.0.122:22-10.0.0.1:33474.service: Deactivated successfully. Mar 7 02:03:35.834885 systemd-logind[1559]: Session 43 logged out. Waiting for processes to exit. Mar 7 02:03:35.850152 systemd[1]: session-43.scope: Deactivated successfully. Mar 7 02:03:35.864100 systemd-logind[1559]: Removed session 43. Mar 7 02:03:36.678721 kubelet[2883]: E0307 02:03:36.678462 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:03:40.843802 systemd[1]: Started sshd@43-10.0.0.122:22-10.0.0.1:35162.service - OpenSSH per-connection server daemon (10.0.0.1:35162). Mar 7 02:03:41.166082 sshd[5936]: Accepted publickey for core from 10.0.0.1 port 35162 ssh2: RSA SHA256:FaKL72VfKns/3XaiikJSJBegvMb77gTkRQImwJ2PJcg Mar 7 02:03:41.175084 sshd[5936]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 02:03:41.233337 systemd-logind[1559]: New session 44 of user core. Mar 7 02:03:41.247342 systemd[1]: Started session-44.scope - Session 44 of User core. Mar 7 02:03:41.705021 kubelet[2883]: E0307 02:03:41.695452 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:03:42.721291 sshd[5936]: pam_unix(sshd:session): session closed for user core Mar 7 02:03:42.746490 systemd-logind[1559]: Session 44 logged out. Waiting for processes to exit. Mar 7 02:03:42.750981 systemd[1]: sshd@43-10.0.0.122:22-10.0.0.1:35162.service: Deactivated successfully. Mar 7 02:03:42.774853 systemd[1]: session-44.scope: Deactivated successfully. Mar 7 02:03:42.785479 systemd-logind[1559]: Removed session 44. Mar 7 02:03:47.753185 systemd[1]: Started sshd@44-10.0.0.122:22-10.0.0.1:35170.service - OpenSSH per-connection server daemon (10.0.0.1:35170). 
Mar 7 02:03:47.895664 sshd[5952]: Accepted publickey for core from 10.0.0.1 port 35170 ssh2: RSA SHA256:FaKL72VfKns/3XaiikJSJBegvMb77gTkRQImwJ2PJcg Mar 7 02:03:47.912034 sshd[5952]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 02:03:47.958000 systemd-logind[1559]: New session 45 of user core. Mar 7 02:03:47.971815 systemd[1]: Started session-45.scope - Session 45 of User core. Mar 7 02:03:48.574474 sshd[5952]: pam_unix(sshd:session): session closed for user core Mar 7 02:03:48.598497 systemd[1]: sshd@44-10.0.0.122:22-10.0.0.1:35170.service: Deactivated successfully. Mar 7 02:03:48.619966 systemd-logind[1559]: Session 45 logged out. Waiting for processes to exit. Mar 7 02:03:48.631616 systemd[1]: session-45.scope: Deactivated successfully. Mar 7 02:03:48.648216 systemd-logind[1559]: Removed session 45. Mar 7 02:03:49.681100 kubelet[2883]: E0307 02:03:49.680779 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:03:53.605141 systemd[1]: Started sshd@45-10.0.0.122:22-10.0.0.1:50820.service - OpenSSH per-connection server daemon (10.0.0.1:50820). Mar 7 02:03:53.773114 sshd[5967]: Accepted publickey for core from 10.0.0.1 port 50820 ssh2: RSA SHA256:FaKL72VfKns/3XaiikJSJBegvMb77gTkRQImwJ2PJcg Mar 7 02:03:53.783118 sshd[5967]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 02:03:54.718539 systemd-logind[1559]: New session 46 of user core. Mar 7 02:03:54.742515 systemd[1]: Started session-46.scope - Session 46 of User core. Mar 7 02:03:55.190148 sshd[5967]: pam_unix(sshd:session): session closed for user core Mar 7 02:03:55.227882 systemd[1]: Started sshd@46-10.0.0.122:22-10.0.0.1:50828.service - OpenSSH per-connection server daemon (10.0.0.1:50828). Mar 7 02:03:55.239102 systemd[1]: sshd@45-10.0.0.122:22-10.0.0.1:50820.service: Deactivated successfully. Mar 7 02:03:55.265352 systemd[1]: session-46.scope: Deactivated successfully. Mar 7 02:03:55.280831 systemd-logind[1559]: Session 46 logged out. Waiting for processes to exit. Mar 7 02:03:55.295873 systemd-logind[1559]: Removed session 46. Mar 7 02:03:55.349815 sshd[5980]: Accepted publickey for core from 10.0.0.1 port 50828 ssh2: RSA SHA256:FaKL72VfKns/3XaiikJSJBegvMb77gTkRQImwJ2PJcg Mar 7 02:03:55.365633 sshd[5980]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 02:03:55.402507 systemd-logind[1559]: New session 47 of user core. Mar 7 02:03:55.420579 systemd[1]: Started session-47.scope - Session 47 of User core. Mar 7 02:03:57.445780 sshd[5980]: pam_unix(sshd:session): session closed for user core Mar 7 02:03:57.481926 systemd[1]: Started sshd@47-10.0.0.122:22-10.0.0.1:50830.service - OpenSSH per-connection server daemon (10.0.0.1:50830). Mar 7 02:03:57.490477 systemd[1]: sshd@46-10.0.0.122:22-10.0.0.1:50828.service: Deactivated successfully. Mar 7 02:03:57.519339 systemd-logind[1559]: Session 47 logged out. Waiting for processes to exit. Mar 7 02:03:57.547231 systemd[1]: session-47.scope: Deactivated successfully. Mar 7 02:03:57.553662 systemd-logind[1559]: Removed session 47. Mar 7 02:03:57.732650 sshd[5995]: Accepted publickey for core from 10.0.0.1 port 50830 ssh2: RSA SHA256:FaKL72VfKns/3XaiikJSJBegvMb77gTkRQImwJ2PJcg Mar 7 02:03:57.747208 sshd[5995]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 02:03:57.794585 systemd-logind[1559]: New session 48 of user core. 
Mar 7 02:03:57.817107 systemd[1]: Started session-48.scope - Session 48 of User core. Mar 7 02:04:00.835153 sshd[5995]: pam_unix(sshd:session): session closed for user core Mar 7 02:04:00.868224 systemd[1]: sshd@47-10.0.0.122:22-10.0.0.1:50830.service: Deactivated successfully. Mar 7 02:04:00.935990 systemd-logind[1559]: Session 48 logged out. Waiting for processes to exit. Mar 7 02:04:01.007814 systemd[1]: Started sshd@48-10.0.0.122:22-10.0.0.1:36788.service - OpenSSH per-connection server daemon (10.0.0.1:36788). Mar 7 02:04:01.008391 systemd[1]: session-48.scope: Deactivated successfully. Mar 7 02:04:01.062025 systemd-logind[1559]: Removed session 48. Mar 7 02:04:01.248810 sshd[6023]: Accepted publickey for core from 10.0.0.1 port 36788 ssh2: RSA SHA256:FaKL72VfKns/3XaiikJSJBegvMb77gTkRQImwJ2PJcg Mar 7 02:04:01.251824 sshd[6023]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 02:04:01.278591 systemd-logind[1559]: New session 49 of user core. Mar 7 02:04:01.296073 systemd[1]: Started session-49.scope - Session 49 of User core. Mar 7 02:04:02.672478 kubelet[2883]: E0307 02:04:02.671854 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:04:03.071382 sshd[6023]: pam_unix(sshd:session): session closed for user core Mar 7 02:04:03.122263 systemd[1]: Started sshd@49-10.0.0.122:22-10.0.0.1:36804.service - OpenSSH per-connection server daemon (10.0.0.1:36804). Mar 7 02:04:03.123173 systemd[1]: sshd@48-10.0.0.122:22-10.0.0.1:36788.service: Deactivated successfully. Mar 7 02:04:03.162850 systemd[1]: session-49.scope: Deactivated successfully. Mar 7 02:04:03.163319 systemd-logind[1559]: Session 49 logged out. Waiting for processes to exit. Mar 7 02:04:03.230390 systemd-logind[1559]: Removed session 49. Mar 7 02:04:03.408612 sshd[6034]: Accepted publickey for core from 10.0.0.1 port 36804 ssh2: RSA SHA256:FaKL72VfKns/3XaiikJSJBegvMb77gTkRQImwJ2PJcg Mar 7 02:04:03.400164 sshd[6034]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 02:04:03.445363 systemd-logind[1559]: New session 50 of user core. Mar 7 02:04:03.462978 systemd[1]: Started session-50.scope - Session 50 of User core. Mar 7 02:04:03.992558 sshd[6034]: pam_unix(sshd:session): session closed for user core Mar 7 02:04:04.017049 systemd[1]: sshd@49-10.0.0.122:22-10.0.0.1:36804.service: Deactivated successfully. Mar 7 02:04:04.037305 systemd-logind[1559]: Session 50 logged out. Waiting for processes to exit. Mar 7 02:04:04.042057 systemd[1]: session-50.scope: Deactivated successfully. Mar 7 02:04:04.048976 systemd-logind[1559]: Removed session 50. Mar 7 02:04:09.074344 systemd[1]: Started sshd@50-10.0.0.122:22-10.0.0.1:36814.service - OpenSSH per-connection server daemon (10.0.0.1:36814). Mar 7 02:04:09.281785 sshd[6054]: Accepted publickey for core from 10.0.0.1 port 36814 ssh2: RSA SHA256:FaKL72VfKns/3XaiikJSJBegvMb77gTkRQImwJ2PJcg Mar 7 02:04:09.285825 sshd[6054]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 02:04:09.350212 systemd-logind[1559]: New session 51 of user core. Mar 7 02:04:09.375050 systemd[1]: Started session-51.scope - Session 51 of User core. Mar 7 02:04:09.936816 sshd[6054]: pam_unix(sshd:session): session closed for user core Mar 7 02:04:09.947313 systemd[1]: sshd@50-10.0.0.122:22-10.0.0.1:36814.service: Deactivated successfully. 
Mar 7 02:04:09.988580 systemd[1]: session-51.scope: Deactivated successfully. Mar 7 02:04:10.010952 systemd-logind[1559]: Session 51 logged out. Waiting for processes to exit. Mar 7 02:04:10.018883 systemd-logind[1559]: Removed session 51. Mar 7 02:04:10.684140 kubelet[2883]: E0307 02:04:10.684096 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:04:14.966315 systemd[1]: Started sshd@51-10.0.0.122:22-10.0.0.1:36416.service - OpenSSH per-connection server daemon (10.0.0.1:36416). Mar 7 02:04:15.117628 sshd[6069]: Accepted publickey for core from 10.0.0.1 port 36416 ssh2: RSA SHA256:FaKL72VfKns/3XaiikJSJBegvMb77gTkRQImwJ2PJcg Mar 7 02:04:15.125244 sshd[6069]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 02:04:15.159914 systemd-logind[1559]: New session 52 of user core. Mar 7 02:04:15.188101 systemd[1]: Started session-52.scope - Session 52 of User core. Mar 7 02:04:15.680225 kubelet[2883]: E0307 02:04:15.673667 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:04:15.713338 sshd[6069]: pam_unix(sshd:session): session closed for user core Mar 7 02:04:15.740245 systemd[1]: sshd@51-10.0.0.122:22-10.0.0.1:36416.service: Deactivated successfully. Mar 7 02:04:15.764653 systemd[1]: session-52.scope: Deactivated successfully. Mar 7 02:04:15.784921 systemd-logind[1559]: Session 52 logged out. Waiting for processes to exit. Mar 7 02:04:15.793390 systemd-logind[1559]: Removed session 52. Mar 7 02:04:20.765653 systemd[1]: Started sshd@52-10.0.0.122:22-10.0.0.1:38242.service - OpenSSH per-connection server daemon (10.0.0.1:38242). Mar 7 02:04:20.972693 sshd[6085]: Accepted publickey for core from 10.0.0.1 port 38242 ssh2: RSA SHA256:FaKL72VfKns/3XaiikJSJBegvMb77gTkRQImwJ2PJcg Mar 7 02:04:20.990918 sshd[6085]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 02:04:21.027853 systemd-logind[1559]: New session 53 of user core. Mar 7 02:04:21.042612 systemd[1]: Started session-53.scope - Session 53 of User core. Mar 7 02:04:21.470575 sshd[6085]: pam_unix(sshd:session): session closed for user core Mar 7 02:04:21.485648 systemd[1]: sshd@52-10.0.0.122:22-10.0.0.1:38242.service: Deactivated successfully. Mar 7 02:04:21.495862 systemd-logind[1559]: Session 53 logged out. Waiting for processes to exit. Mar 7 02:04:21.499023 systemd[1]: session-53.scope: Deactivated successfully. Mar 7 02:04:21.512534 systemd-logind[1559]: Removed session 53. Mar 7 02:04:26.949050 systemd[1]: Started sshd@53-10.0.0.122:22-10.0.0.1:38250.service - OpenSSH per-connection server daemon (10.0.0.1:38250). Mar 7 02:04:27.068400 sshd[6100]: Accepted publickey for core from 10.0.0.1 port 38250 ssh2: RSA SHA256:FaKL72VfKns/3XaiikJSJBegvMb77gTkRQImwJ2PJcg Mar 7 02:04:27.072251 sshd[6100]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 02:04:27.134009 systemd-logind[1559]: New session 54 of user core. Mar 7 02:04:27.142568 systemd[1]: Started session-54.scope - Session 54 of User core. Mar 7 02:04:27.651388 sshd[6100]: pam_unix(sshd:session): session closed for user core Mar 7 02:04:27.676843 systemd[1]: sshd@53-10.0.0.122:22-10.0.0.1:38250.service: Deactivated successfully. Mar 7 02:04:27.698482 systemd[1]: session-54.scope: Deactivated successfully. 
Mar 7 02:04:27.703856 systemd-logind[1559]: Session 54 logged out. Waiting for processes to exit. Mar 7 02:04:27.706585 systemd-logind[1559]: Removed session 54. Mar 7 02:04:32.680975 systemd[1]: Started sshd@54-10.0.0.122:22-10.0.0.1:40366.service - OpenSSH per-connection server daemon (10.0.0.1:40366). Mar 7 02:04:32.706296 systemd[1]: Starting systemd-tmpfiles-clean.service - Cleanup of Temporary Directories... Mar 7 02:04:32.860900 sshd[6119]: Accepted publickey for core from 10.0.0.1 port 40366 ssh2: RSA SHA256:FaKL72VfKns/3XaiikJSJBegvMb77gTkRQImwJ2PJcg Mar 7 02:04:32.872134 systemd-tmpfiles[6120]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Mar 7 02:04:32.874325 sshd[6119]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 02:04:32.881406 systemd-tmpfiles[6120]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Mar 7 02:04:32.885994 systemd-tmpfiles[6120]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Mar 7 02:04:32.888321 systemd-tmpfiles[6120]: ACLs are not supported, ignoring. Mar 7 02:04:32.890448 systemd-tmpfiles[6120]: ACLs are not supported, ignoring. Mar 7 02:04:32.901153 systemd-logind[1559]: New session 55 of user core. Mar 7 02:04:32.925208 systemd-tmpfiles[6120]: Detected autofs mount point /boot during canonicalization of boot. Mar 7 02:04:32.925276 systemd-tmpfiles[6120]: Skipping /boot Mar 7 02:04:32.927309 systemd[1]: Started session-55.scope - Session 55 of User core. Mar 7 02:04:32.971355 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully. Mar 7 02:04:32.974933 systemd[1]: Finished systemd-tmpfiles-clean.service - Cleanup of Temporary Directories. Mar 7 02:04:33.516275 sshd[6119]: pam_unix(sshd:session): session closed for user core Mar 7 02:04:33.550929 systemd[1]: sshd@54-10.0.0.122:22-10.0.0.1:40366.service: Deactivated successfully. Mar 7 02:04:33.568657 systemd-logind[1559]: Session 55 logged out. Waiting for processes to exit. Mar 7 02:04:33.570948 systemd[1]: session-55.scope: Deactivated successfully. Mar 7 02:04:33.574670 systemd-logind[1559]: Removed session 55. Mar 7 02:04:35.682252 kubelet[2883]: E0307 02:04:35.679401 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:04:38.554099 systemd[1]: Started sshd@55-10.0.0.122:22-10.0.0.1:40370.service - OpenSSH per-connection server daemon (10.0.0.1:40370). Mar 7 02:04:38.634345 sshd[6140]: Accepted publickey for core from 10.0.0.1 port 40370 ssh2: RSA SHA256:FaKL72VfKns/3XaiikJSJBegvMb77gTkRQImwJ2PJcg Mar 7 02:04:38.638941 sshd[6140]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 02:04:38.664813 systemd-logind[1559]: New session 56 of user core. Mar 7 02:04:38.678398 systemd[1]: Started session-56.scope - Session 56 of User core. Mar 7 02:04:39.081431 sshd[6140]: pam_unix(sshd:session): session closed for user core Mar 7 02:04:39.090844 systemd[1]: sshd@55-10.0.0.122:22-10.0.0.1:40370.service: Deactivated successfully. Mar 7 02:04:39.095824 systemd-logind[1559]: Session 56 logged out. Waiting for processes to exit. Mar 7 02:04:39.096126 systemd[1]: session-56.scope: Deactivated successfully. Mar 7 02:04:39.099514 systemd-logind[1559]: Removed session 56. 
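The systemd-tmpfiles warnings emitted while systemd-tmpfiles-clean.service ran are benign: two tmpfiles.d fragments declare an entry for the same path, and the copy encountered second is ignored. A sketch of the collision behind "Duplicate line for path "/root", ignoring", with both file contents assumed for illustration (only the file name and line number appear in the log):

    # /usr/lib/tmpfiles.d/provision.conf, around line 20 (assumed)
    d /root 0700 root root -

    # another fragment parsed earlier that declares the same path (assumed)
    d /root 0750 root root -

    # systemd-tmpfiles keeps the entry it parsed first and logs the duplicate;
    # the same applies to the /var/log/journal and /var/lib/systemd warnings.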
Mar 7 02:04:41.687461 kubelet[2883]: E0307 02:04:41.687176 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:04:44.129163 systemd[1]: Started sshd@56-10.0.0.122:22-10.0.0.1:42514.service - OpenSSH per-connection server daemon (10.0.0.1:42514). Mar 7 02:04:44.211172 sshd[6159]: Accepted publickey for core from 10.0.0.1 port 42514 ssh2: RSA SHA256:FaKL72VfKns/3XaiikJSJBegvMb77gTkRQImwJ2PJcg Mar 7 02:04:44.217071 sshd[6159]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 02:04:44.247475 systemd-logind[1559]: New session 57 of user core. Mar 7 02:04:44.262621 systemd[1]: Started session-57.scope - Session 57 of User core. Mar 7 02:04:44.638937 sshd[6159]: pam_unix(sshd:session): session closed for user core Mar 7 02:04:44.657476 systemd[1]: sshd@56-10.0.0.122:22-10.0.0.1:42514.service: Deactivated successfully. Mar 7 02:04:44.682478 systemd[1]: session-57.scope: Deactivated successfully. Mar 7 02:04:44.683853 systemd-logind[1559]: Session 57 logged out. Waiting for processes to exit. Mar 7 02:04:44.707601 systemd-logind[1559]: Removed session 57. Mar 7 02:04:49.664229 systemd[1]: Started sshd@57-10.0.0.122:22-10.0.0.1:42520.service - OpenSSH per-connection server daemon (10.0.0.1:42520). Mar 7 02:04:49.675293 kubelet[2883]: E0307 02:04:49.672379 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:04:49.757439 sshd[6174]: Accepted publickey for core from 10.0.0.1 port 42520 ssh2: RSA SHA256:FaKL72VfKns/3XaiikJSJBegvMb77gTkRQImwJ2PJcg Mar 7 02:04:49.771318 sshd[6174]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 02:04:49.806056 systemd-logind[1559]: New session 58 of user core. Mar 7 02:04:49.821268 systemd[1]: Started session-58.scope - Session 58 of User core. Mar 7 02:04:50.228894 sshd[6174]: pam_unix(sshd:session): session closed for user core Mar 7 02:04:50.255801 systemd[1]: Started sshd@58-10.0.0.122:22-10.0.0.1:40120.service - OpenSSH per-connection server daemon (10.0.0.1:40120). Mar 7 02:04:50.257980 systemd[1]: sshd@57-10.0.0.122:22-10.0.0.1:42520.service: Deactivated successfully. Mar 7 02:04:50.266063 systemd-logind[1559]: Session 58 logged out. Waiting for processes to exit. Mar 7 02:04:50.275456 systemd[1]: session-58.scope: Deactivated successfully. Mar 7 02:04:50.313467 systemd-logind[1559]: Removed session 58. Mar 7 02:04:50.372772 sshd[6187]: Accepted publickey for core from 10.0.0.1 port 40120 ssh2: RSA SHA256:FaKL72VfKns/3XaiikJSJBegvMb77gTkRQImwJ2PJcg Mar 7 02:04:50.375427 sshd[6187]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 02:04:50.405119 systemd-logind[1559]: New session 59 of user core. Mar 7 02:04:50.433207 systemd[1]: Started session-59.scope - Session 59 of User core. 
Mar 7 02:04:53.393081 containerd[1585]: time="2026-03-07T02:04:53.392866848Z" level=info msg="StopContainer for \"39178bfbf7bb591099d0a489d4371f958ba273d5763612d68ee03df69e3468fd\" with timeout 30 (s)" Mar 7 02:04:53.398391 containerd[1585]: time="2026-03-07T02:04:53.396513227Z" level=info msg="Stop container \"39178bfbf7bb591099d0a489d4371f958ba273d5763612d68ee03df69e3468fd\" with signal terminated" Mar 7 02:04:53.554826 containerd[1585]: time="2026-03-07T02:04:53.553666854Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 7 02:04:53.595220 containerd[1585]: time="2026-03-07T02:04:53.595089785Z" level=info msg="StopContainer for \"87e55ededdfbc98c59c3fd22185bbde1a7f8a07b3e7969c3b3f15beb25254ae0\" with timeout 2 (s)" Mar 7 02:04:53.597530 containerd[1585]: time="2026-03-07T02:04:53.597167013Z" level=info msg="Stop container \"87e55ededdfbc98c59c3fd22185bbde1a7f8a07b3e7969c3b3f15beb25254ae0\" with signal terminated" Mar 7 02:04:53.613342 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-39178bfbf7bb591099d0a489d4371f958ba273d5763612d68ee03df69e3468fd-rootfs.mount: Deactivated successfully. Mar 7 02:04:53.628213 systemd-networkd[1239]: lxc_health: Link DOWN Mar 7 02:04:53.628223 systemd-networkd[1239]: lxc_health: Lost carrier Mar 7 02:04:53.652283 containerd[1585]: time="2026-03-07T02:04:53.651809353Z" level=info msg="shim disconnected" id=39178bfbf7bb591099d0a489d4371f958ba273d5763612d68ee03df69e3468fd namespace=k8s.io Mar 7 02:04:53.653113 containerd[1585]: time="2026-03-07T02:04:53.652816073Z" level=warning msg="cleaning up after shim disconnected" id=39178bfbf7bb591099d0a489d4371f958ba273d5763612d68ee03df69e3468fd namespace=k8s.io Mar 7 02:04:53.653113 containerd[1585]: time="2026-03-07T02:04:53.652842593Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 7 02:04:53.726418 containerd[1585]: time="2026-03-07T02:04:53.726107093Z" level=info msg="StopContainer for \"39178bfbf7bb591099d0a489d4371f958ba273d5763612d68ee03df69e3468fd\" returns successfully" Mar 7 02:04:53.748114 containerd[1585]: time="2026-03-07T02:04:53.748041859Z" level=info msg="StopPodSandbox for \"c9e22b4d3a9c7fd71f6e4018ed762afc0b85bc02544a9f66e7712397a80c4ccf\"" Mar 7 02:04:53.748435 containerd[1585]: time="2026-03-07T02:04:53.748397974Z" level=info msg="Container to stop \"240abe1e2deb5e0c1fa530667956684a747f41e030aff97e062b4c151bd8ce26\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 7 02:04:53.748616 containerd[1585]: time="2026-03-07T02:04:53.748536793Z" level=info msg="Container to stop \"39178bfbf7bb591099d0a489d4371f958ba273d5763612d68ee03df69e3468fd\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 7 02:04:53.766872 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c9e22b4d3a9c7fd71f6e4018ed762afc0b85bc02544a9f66e7712397a80c4ccf-shm.mount: Deactivated successfully. Mar 7 02:04:54.190473 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c9e22b4d3a9c7fd71f6e4018ed762afc0b85bc02544a9f66e7712397a80c4ccf-rootfs.mount: Deactivated successfully. Mar 7 02:04:54.212055 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-87e55ededdfbc98c59c3fd22185bbde1a7f8a07b3e7969c3b3f15beb25254ae0-rootfs.mount: Deactivated successfully. 
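From 02:04:53 onward the log shows a pod teardown rather than a fault: the CNI config /etc/cni/net.d/05-cilium.conf is removed, the lxc_health interface loses its carrier, and containerd stops the containers with a grace period ("with timeout 30", "with signal terminated") before the shims disconnect and their mounts are cleaned up. A minimal Go sketch of that stop-with-timeout pattern using the public containerd client; the socket path and namespace are assumptions taken from typical CRI setups, the container ID is copied from the log as an example, and this is an illustration, not the kubelet's or the CRI plugin's actual code path:

    package main

    import (
        "context"
        "log"
        "syscall"
        "time"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        // Assumed socket; CRI-managed containers live in the "k8s.io" namespace.
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

        // Container ID taken from the log entry above, used here as an example.
        id := "39178bfbf7bb591099d0a489d4371f958ba273d5763612d68ee03df69e3468fd"
        container, err := client.LoadContainer(ctx, id)
        if err != nil {
            log.Fatal(err)
        }
        task, err := container.Task(ctx, nil)
        if err != nil {
            log.Fatal(err)
        }
        exitCh, err := task.Wait(ctx)
        if err != nil {
            log.Fatal(err)
        }

        // "Stop container ... with signal terminated": SIGTERM first.
        if err := task.Kill(ctx, syscall.SIGTERM); err != nil {
            log.Fatal(err)
        }
        select {
        case status := <-exitCh:
            log.Printf("container exited with status %d", status.ExitCode())
        case <-time.After(30 * time.Second): // the 30-second grace period in the log
            // Grace period expired: escalate to SIGKILL and wait for the exit event.
            if err := task.Kill(ctx, syscall.SIGKILL); err != nil {
                log.Fatal(err)
            }
            <-exitCh
        }
    }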
Mar 7 02:04:54.219132 containerd[1585]: time="2026-03-07T02:04:54.218656098Z" level=info msg="shim disconnected" id=c9e22b4d3a9c7fd71f6e4018ed762afc0b85bc02544a9f66e7712397a80c4ccf namespace=k8s.io Mar 7 02:04:54.219132 containerd[1585]: time="2026-03-07T02:04:54.218805647Z" level=warning msg="cleaning up after shim disconnected" id=c9e22b4d3a9c7fd71f6e4018ed762afc0b85bc02544a9f66e7712397a80c4ccf namespace=k8s.io Mar 7 02:04:54.219132 containerd[1585]: time="2026-03-07T02:04:54.218818782Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 7 02:04:54.237222 containerd[1585]: time="2026-03-07T02:04:54.231209589Z" level=info msg="shim disconnected" id=87e55ededdfbc98c59c3fd22185bbde1a7f8a07b3e7969c3b3f15beb25254ae0 namespace=k8s.io Mar 7 02:04:54.237222 containerd[1585]: time="2026-03-07T02:04:54.231810912Z" level=warning msg="cleaning up after shim disconnected" id=87e55ededdfbc98c59c3fd22185bbde1a7f8a07b3e7969c3b3f15beb25254ae0 namespace=k8s.io Mar 7 02:04:54.237222 containerd[1585]: time="2026-03-07T02:04:54.231942467Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 7 02:04:54.364899 containerd[1585]: time="2026-03-07T02:04:54.358999894Z" level=info msg="StopContainer for \"87e55ededdfbc98c59c3fd22185bbde1a7f8a07b3e7969c3b3f15beb25254ae0\" returns successfully" Mar 7 02:04:54.374137 containerd[1585]: time="2026-03-07T02:04:54.373914601Z" level=info msg="StopPodSandbox for \"09a3eaf75f32cb5e6df4cb545b3829984f6f6e164051014d3706699b650125d9\"" Mar 7 02:04:54.374137 containerd[1585]: time="2026-03-07T02:04:54.373987057Z" level=info msg="Container to stop \"87e55ededdfbc98c59c3fd22185bbde1a7f8a07b3e7969c3b3f15beb25254ae0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 7 02:04:54.374137 containerd[1585]: time="2026-03-07T02:04:54.374013976Z" level=info msg="Container to stop \"56ff113b9f69096b4bfb1e021c664f48d228bb561f8b05348a4abe499c21932d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 7 02:04:54.374137 containerd[1585]: time="2026-03-07T02:04:54.374033433Z" level=info msg="Container to stop \"f49c04df042890779dfa838a1887fea202149f4c2348e48fb17f733ff2fb964c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 7 02:04:54.374137 containerd[1585]: time="2026-03-07T02:04:54.374049503Z" level=info msg="Container to stop \"3ebec3c636fb2d1b2fce8029dfe997995a6bb32f3763a12ac397a39348596df0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 7 02:04:54.374137 containerd[1585]: time="2026-03-07T02:04:54.374066835Z" level=info msg="Container to stop \"e699e6959d7537ff63c5aa8d55268caaf8f077888dafca35300a5d33ae715e29\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 7 02:04:54.377651 containerd[1585]: time="2026-03-07T02:04:54.377482513Z" level=warning msg="cleanup warnings time=\"2026-03-07T02:04:54Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Mar 7 02:04:54.387020 containerd[1585]: time="2026-03-07T02:04:54.386409811Z" level=info msg="TearDown network for sandbox \"c9e22b4d3a9c7fd71f6e4018ed762afc0b85bc02544a9f66e7712397a80c4ccf\" successfully" Mar 7 02:04:54.387020 containerd[1585]: time="2026-03-07T02:04:54.386485833Z" level=info msg="StopPodSandbox for \"c9e22b4d3a9c7fd71f6e4018ed762afc0b85bc02544a9f66e7712397a80c4ccf\" returns successfully" Mar 7 02:04:54.432889 kubelet[2883]: I0307 02:04:54.432755 2883 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3714da96-de37-4fe0-b3e7-778d1d5a47dc-cilium-config-path\") pod \"3714da96-de37-4fe0-b3e7-778d1d5a47dc\" (UID: \"3714da96-de37-4fe0-b3e7-778d1d5a47dc\") " Mar 7 02:04:54.432889 kubelet[2883]: I0307 02:04:54.432845 2883 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jdd45\" (UniqueName: \"kubernetes.io/projected/3714da96-de37-4fe0-b3e7-778d1d5a47dc-kube-api-access-jdd45\") pod \"3714da96-de37-4fe0-b3e7-778d1d5a47dc\" (UID: \"3714da96-de37-4fe0-b3e7-778d1d5a47dc\") " Mar 7 02:04:54.452895 kubelet[2883]: I0307 02:04:54.452418 2883 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3714da96-de37-4fe0-b3e7-778d1d5a47dc-kube-api-access-jdd45" (OuterVolumeSpecName: "kube-api-access-jdd45") pod "3714da96-de37-4fe0-b3e7-778d1d5a47dc" (UID: "3714da96-de37-4fe0-b3e7-778d1d5a47dc"). InnerVolumeSpecName "kube-api-access-jdd45". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 7 02:04:54.453747 kubelet[2883]: I0307 02:04:54.453520 2883 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3714da96-de37-4fe0-b3e7-778d1d5a47dc-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "3714da96-de37-4fe0-b3e7-778d1d5a47dc" (UID: "3714da96-de37-4fe0-b3e7-778d1d5a47dc"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 7 02:04:54.469316 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-09a3eaf75f32cb5e6df4cb545b3829984f6f6e164051014d3706699b650125d9-shm.mount: Deactivated successfully. Mar 7 02:04:54.471265 systemd[1]: var-lib-kubelet-pods-3714da96\x2dde37\x2d4fe0\x2db3e7\x2d778d1d5a47dc-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djdd45.mount: Deactivated successfully. Mar 7 02:04:54.551895 kubelet[2883]: I0307 02:04:54.538032 2883 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3714da96-de37-4fe0-b3e7-778d1d5a47dc-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Mar 7 02:04:54.551895 kubelet[2883]: I0307 02:04:54.538081 2883 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jdd45\" (UniqueName: \"kubernetes.io/projected/3714da96-de37-4fe0-b3e7-778d1d5a47dc-kube-api-access-jdd45\") on node \"localhost\" DevicePath \"\"" Mar 7 02:04:54.550737 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-09a3eaf75f32cb5e6df4cb545b3829984f6f6e164051014d3706699b650125d9-rootfs.mount: Deactivated successfully. 
Mar 7 02:04:54.577621 containerd[1585]: time="2026-03-07T02:04:54.577261886Z" level=info msg="shim disconnected" id=09a3eaf75f32cb5e6df4cb545b3829984f6f6e164051014d3706699b650125d9 namespace=k8s.io Mar 7 02:04:54.577621 containerd[1585]: time="2026-03-07T02:04:54.577330975Z" level=warning msg="cleaning up after shim disconnected" id=09a3eaf75f32cb5e6df4cb545b3829984f6f6e164051014d3706699b650125d9 namespace=k8s.io Mar 7 02:04:54.577621 containerd[1585]: time="2026-03-07T02:04:54.577343799Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 7 02:04:54.688183 kubelet[2883]: I0307 02:04:54.687901 2883 scope.go:117] "RemoveContainer" containerID="39178bfbf7bb591099d0a489d4371f958ba273d5763612d68ee03df69e3468fd" Mar 7 02:04:54.697113 containerd[1585]: time="2026-03-07T02:04:54.696989900Z" level=info msg="RemoveContainer for \"39178bfbf7bb591099d0a489d4371f958ba273d5763612d68ee03df69e3468fd\"" Mar 7 02:04:54.711305 containerd[1585]: time="2026-03-07T02:04:54.710175058Z" level=warning msg="cleanup warnings time=\"2026-03-07T02:04:54Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Mar 7 02:04:54.714083 containerd[1585]: time="2026-03-07T02:04:54.713925580Z" level=info msg="TearDown network for sandbox \"09a3eaf75f32cb5e6df4cb545b3829984f6f6e164051014d3706699b650125d9\" successfully" Mar 7 02:04:54.714083 containerd[1585]: time="2026-03-07T02:04:54.713966407Z" level=info msg="StopPodSandbox for \"09a3eaf75f32cb5e6df4cb545b3829984f6f6e164051014d3706699b650125d9\" returns successfully" Mar 7 02:04:54.727155 containerd[1585]: time="2026-03-07T02:04:54.726937493Z" level=info msg="RemoveContainer for \"39178bfbf7bb591099d0a489d4371f958ba273d5763612d68ee03df69e3468fd\" returns successfully" Mar 7 02:04:54.728015 kubelet[2883]: I0307 02:04:54.727816 2883 scope.go:117] "RemoveContainer" containerID="240abe1e2deb5e0c1fa530667956684a747f41e030aff97e062b4c151bd8ce26" Mar 7 02:04:54.737176 containerd[1585]: time="2026-03-07T02:04:54.735622548Z" level=info msg="RemoveContainer for \"240abe1e2deb5e0c1fa530667956684a747f41e030aff97e062b4c151bd8ce26\"" Mar 7 02:04:54.753482 containerd[1585]: time="2026-03-07T02:04:54.753345501Z" level=info msg="RemoveContainer for \"240abe1e2deb5e0c1fa530667956684a747f41e030aff97e062b4c151bd8ce26\" returns successfully" Mar 7 02:04:54.757032 kubelet[2883]: I0307 02:04:54.755809 2883 scope.go:117] "RemoveContainer" containerID="39178bfbf7bb591099d0a489d4371f958ba273d5763612d68ee03df69e3468fd" Mar 7 02:04:54.759087 containerd[1585]: time="2026-03-07T02:04:54.757859426Z" level=error msg="ContainerStatus for \"39178bfbf7bb591099d0a489d4371f958ba273d5763612d68ee03df69e3468fd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"39178bfbf7bb591099d0a489d4371f958ba273d5763612d68ee03df69e3468fd\": not found" Mar 7 02:04:54.759189 kubelet[2883]: E0307 02:04:54.759088 2883 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"39178bfbf7bb591099d0a489d4371f958ba273d5763612d68ee03df69e3468fd\": not found" containerID="39178bfbf7bb591099d0a489d4371f958ba273d5763612d68ee03df69e3468fd" Mar 7 02:04:54.759821 kubelet[2883]: I0307 02:04:54.759200 2883 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"39178bfbf7bb591099d0a489d4371f958ba273d5763612d68ee03df69e3468fd"} err="failed to get 
container status \"39178bfbf7bb591099d0a489d4371f958ba273d5763612d68ee03df69e3468fd\": rpc error: code = NotFound desc = an error occurred when try to find container \"39178bfbf7bb591099d0a489d4371f958ba273d5763612d68ee03df69e3468fd\": not found" Mar 7 02:04:54.759821 kubelet[2883]: I0307 02:04:54.759415 2883 scope.go:117] "RemoveContainer" containerID="240abe1e2deb5e0c1fa530667956684a747f41e030aff97e062b4c151bd8ce26" Mar 7 02:04:54.760033 containerd[1585]: time="2026-03-07T02:04:54.759907371Z" level=error msg="ContainerStatus for \"240abe1e2deb5e0c1fa530667956684a747f41e030aff97e062b4c151bd8ce26\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"240abe1e2deb5e0c1fa530667956684a747f41e030aff97e062b4c151bd8ce26\": not found" Mar 7 02:04:54.760109 kubelet[2883]: E0307 02:04:54.760085 2883 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"240abe1e2deb5e0c1fa530667956684a747f41e030aff97e062b4c151bd8ce26\": not found" containerID="240abe1e2deb5e0c1fa530667956684a747f41e030aff97e062b4c151bd8ce26" Mar 7 02:04:54.760158 kubelet[2883]: I0307 02:04:54.760117 2883 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"240abe1e2deb5e0c1fa530667956684a747f41e030aff97e062b4c151bd8ce26"} err="failed to get container status \"240abe1e2deb5e0c1fa530667956684a747f41e030aff97e062b4c151bd8ce26\": rpc error: code = NotFound desc = an error occurred when try to find container \"240abe1e2deb5e0c1fa530667956684a747f41e030aff97e062b4c151bd8ce26\": not found" Mar 7 02:04:54.840630 kubelet[2883]: I0307 02:04:54.839827 2883 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/966190a8-7fd8-41d7-9d65-c6161d0460a8-cilium-run\") pod \"966190a8-7fd8-41d7-9d65-c6161d0460a8\" (UID: \"966190a8-7fd8-41d7-9d65-c6161d0460a8\") " Mar 7 02:04:54.840630 kubelet[2883]: I0307 02:04:54.839899 2883 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/966190a8-7fd8-41d7-9d65-c6161d0460a8-cni-path\") pod \"966190a8-7fd8-41d7-9d65-c6161d0460a8\" (UID: \"966190a8-7fd8-41d7-9d65-c6161d0460a8\") " Mar 7 02:04:54.840630 kubelet[2883]: I0307 02:04:54.839931 2883 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/966190a8-7fd8-41d7-9d65-c6161d0460a8-host-proc-sys-net\") pod \"966190a8-7fd8-41d7-9d65-c6161d0460a8\" (UID: \"966190a8-7fd8-41d7-9d65-c6161d0460a8\") " Mar 7 02:04:54.840630 kubelet[2883]: I0307 02:04:54.839971 2883 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/966190a8-7fd8-41d7-9d65-c6161d0460a8-clustermesh-secrets\") pod \"966190a8-7fd8-41d7-9d65-c6161d0460a8\" (UID: \"966190a8-7fd8-41d7-9d65-c6161d0460a8\") " Mar 7 02:04:54.840630 kubelet[2883]: I0307 02:04:54.840022 2883 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/966190a8-7fd8-41d7-9d65-c6161d0460a8-hostproc\") pod \"966190a8-7fd8-41d7-9d65-c6161d0460a8\" (UID: \"966190a8-7fd8-41d7-9d65-c6161d0460a8\") " Mar 7 02:04:54.840630 kubelet[2883]: I0307 02:04:54.840053 2883 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dxhkv\" 
(UniqueName: \"kubernetes.io/projected/966190a8-7fd8-41d7-9d65-c6161d0460a8-kube-api-access-dxhkv\") pod \"966190a8-7fd8-41d7-9d65-c6161d0460a8\" (UID: \"966190a8-7fd8-41d7-9d65-c6161d0460a8\") " Mar 7 02:04:54.841075 kubelet[2883]: I0307 02:04:54.840165 2883 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/966190a8-7fd8-41d7-9d65-c6161d0460a8-lib-modules\") pod \"966190a8-7fd8-41d7-9d65-c6161d0460a8\" (UID: \"966190a8-7fd8-41d7-9d65-c6161d0460a8\") " Mar 7 02:04:54.841075 kubelet[2883]: I0307 02:04:54.840194 2883 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/966190a8-7fd8-41d7-9d65-c6161d0460a8-bpf-maps\") pod \"966190a8-7fd8-41d7-9d65-c6161d0460a8\" (UID: \"966190a8-7fd8-41d7-9d65-c6161d0460a8\") " Mar 7 02:04:54.841075 kubelet[2883]: I0307 02:04:54.840219 2883 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/966190a8-7fd8-41d7-9d65-c6161d0460a8-xtables-lock\") pod \"966190a8-7fd8-41d7-9d65-c6161d0460a8\" (UID: \"966190a8-7fd8-41d7-9d65-c6161d0460a8\") " Mar 7 02:04:54.841075 kubelet[2883]: I0307 02:04:54.840253 2883 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/966190a8-7fd8-41d7-9d65-c6161d0460a8-hubble-tls\") pod \"966190a8-7fd8-41d7-9d65-c6161d0460a8\" (UID: \"966190a8-7fd8-41d7-9d65-c6161d0460a8\") " Mar 7 02:04:54.841075 kubelet[2883]: I0307 02:04:54.840283 2883 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/966190a8-7fd8-41d7-9d65-c6161d0460a8-cilium-config-path\") pod \"966190a8-7fd8-41d7-9d65-c6161d0460a8\" (UID: \"966190a8-7fd8-41d7-9d65-c6161d0460a8\") " Mar 7 02:04:54.841075 kubelet[2883]: I0307 02:04:54.840311 2883 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/966190a8-7fd8-41d7-9d65-c6161d0460a8-cilium-cgroup\") pod \"966190a8-7fd8-41d7-9d65-c6161d0460a8\" (UID: \"966190a8-7fd8-41d7-9d65-c6161d0460a8\") " Mar 7 02:04:54.841291 kubelet[2883]: I0307 02:04:54.840336 2883 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/966190a8-7fd8-41d7-9d65-c6161d0460a8-host-proc-sys-kernel\") pod \"966190a8-7fd8-41d7-9d65-c6161d0460a8\" (UID: \"966190a8-7fd8-41d7-9d65-c6161d0460a8\") " Mar 7 02:04:54.841291 kubelet[2883]: I0307 02:04:54.840365 2883 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/966190a8-7fd8-41d7-9d65-c6161d0460a8-etc-cni-netd\") pod \"966190a8-7fd8-41d7-9d65-c6161d0460a8\" (UID: \"966190a8-7fd8-41d7-9d65-c6161d0460a8\") " Mar 7 02:04:54.841291 kubelet[2883]: I0307 02:04:54.840945 2883 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/966190a8-7fd8-41d7-9d65-c6161d0460a8-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "966190a8-7fd8-41d7-9d65-c6161d0460a8" (UID: "966190a8-7fd8-41d7-9d65-c6161d0460a8"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 7 02:04:54.841291 kubelet[2883]: I0307 02:04:54.841024 2883 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/966190a8-7fd8-41d7-9d65-c6161d0460a8-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "966190a8-7fd8-41d7-9d65-c6161d0460a8" (UID: "966190a8-7fd8-41d7-9d65-c6161d0460a8"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 7 02:04:54.841291 kubelet[2883]: I0307 02:04:54.841058 2883 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/966190a8-7fd8-41d7-9d65-c6161d0460a8-cni-path" (OuterVolumeSpecName: "cni-path") pod "966190a8-7fd8-41d7-9d65-c6161d0460a8" (UID: "966190a8-7fd8-41d7-9d65-c6161d0460a8"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 7 02:04:54.841623 kubelet[2883]: I0307 02:04:54.841085 2883 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/966190a8-7fd8-41d7-9d65-c6161d0460a8-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "966190a8-7fd8-41d7-9d65-c6161d0460a8" (UID: "966190a8-7fd8-41d7-9d65-c6161d0460a8"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 7 02:04:54.841623 kubelet[2883]: I0307 02:04:54.841304 2883 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/966190a8-7fd8-41d7-9d65-c6161d0460a8-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "966190a8-7fd8-41d7-9d65-c6161d0460a8" (UID: "966190a8-7fd8-41d7-9d65-c6161d0460a8"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 7 02:04:54.841623 kubelet[2883]: I0307 02:04:54.841347 2883 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/966190a8-7fd8-41d7-9d65-c6161d0460a8-hostproc" (OuterVolumeSpecName: "hostproc") pod "966190a8-7fd8-41d7-9d65-c6161d0460a8" (UID: "966190a8-7fd8-41d7-9d65-c6161d0460a8"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 7 02:04:54.848744 kubelet[2883]: I0307 02:04:54.845076 2883 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/966190a8-7fd8-41d7-9d65-c6161d0460a8-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "966190a8-7fd8-41d7-9d65-c6161d0460a8" (UID: "966190a8-7fd8-41d7-9d65-c6161d0460a8"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 7 02:04:54.848744 kubelet[2883]: I0307 02:04:54.845152 2883 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/966190a8-7fd8-41d7-9d65-c6161d0460a8-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "966190a8-7fd8-41d7-9d65-c6161d0460a8" (UID: "966190a8-7fd8-41d7-9d65-c6161d0460a8"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 7 02:04:54.848744 kubelet[2883]: I0307 02:04:54.845176 2883 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/966190a8-7fd8-41d7-9d65-c6161d0460a8-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "966190a8-7fd8-41d7-9d65-c6161d0460a8" (UID: "966190a8-7fd8-41d7-9d65-c6161d0460a8"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 7 02:04:54.848744 kubelet[2883]: I0307 02:04:54.844949 2883 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/966190a8-7fd8-41d7-9d65-c6161d0460a8-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "966190a8-7fd8-41d7-9d65-c6161d0460a8" (UID: "966190a8-7fd8-41d7-9d65-c6161d0460a8"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 7 02:04:54.853548 kubelet[2883]: I0307 02:04:54.853508 2883 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/966190a8-7fd8-41d7-9d65-c6161d0460a8-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "966190a8-7fd8-41d7-9d65-c6161d0460a8" (UID: "966190a8-7fd8-41d7-9d65-c6161d0460a8"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 7 02:04:54.858549 kubelet[2883]: I0307 02:04:54.858443 2883 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/966190a8-7fd8-41d7-9d65-c6161d0460a8-kube-api-access-dxhkv" (OuterVolumeSpecName: "kube-api-access-dxhkv") pod "966190a8-7fd8-41d7-9d65-c6161d0460a8" (UID: "966190a8-7fd8-41d7-9d65-c6161d0460a8"). InnerVolumeSpecName "kube-api-access-dxhkv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 7 02:04:54.859794 systemd[1]: var-lib-kubelet-pods-966190a8\x2d7fd8\x2d41d7\x2d9d65\x2dc6161d0460a8-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Mar 7 02:04:54.860089 systemd[1]: var-lib-kubelet-pods-966190a8\x2d7fd8\x2d41d7\x2d9d65\x2dc6161d0460a8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddxhkv.mount: Deactivated successfully. Mar 7 02:04:54.861213 kubelet[2883]: I0307 02:04:54.861034 2883 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/966190a8-7fd8-41d7-9d65-c6161d0460a8-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "966190a8-7fd8-41d7-9d65-c6161d0460a8" (UID: "966190a8-7fd8-41d7-9d65-c6161d0460a8"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 7 02:04:54.861213 kubelet[2883]: I0307 02:04:54.861164 2883 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/966190a8-7fd8-41d7-9d65-c6161d0460a8-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "966190a8-7fd8-41d7-9d65-c6161d0460a8" (UID: "966190a8-7fd8-41d7-9d65-c6161d0460a8"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 7 02:04:54.874509 systemd[1]: var-lib-kubelet-pods-966190a8\x2d7fd8\x2d41d7\x2d9d65\x2dc6161d0460a8-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Mar 7 02:04:54.941556 kubelet[2883]: I0307 02:04:54.941396 2883 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/966190a8-7fd8-41d7-9d65-c6161d0460a8-lib-modules\") on node \"localhost\" DevicePath \"\"" Mar 7 02:04:54.941556 kubelet[2883]: I0307 02:04:54.941464 2883 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/966190a8-7fd8-41d7-9d65-c6161d0460a8-bpf-maps\") on node \"localhost\" DevicePath \"\"" Mar 7 02:04:54.941556 kubelet[2883]: I0307 02:04:54.941477 2883 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/966190a8-7fd8-41d7-9d65-c6161d0460a8-xtables-lock\") on node \"localhost\" DevicePath \"\"" Mar 7 02:04:54.941556 kubelet[2883]: I0307 02:04:54.941490 2883 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/966190a8-7fd8-41d7-9d65-c6161d0460a8-hubble-tls\") on node \"localhost\" DevicePath \"\"" Mar 7 02:04:54.941556 kubelet[2883]: I0307 02:04:54.941503 2883 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/966190a8-7fd8-41d7-9d65-c6161d0460a8-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Mar 7 02:04:54.941556 kubelet[2883]: I0307 02:04:54.941517 2883 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/966190a8-7fd8-41d7-9d65-c6161d0460a8-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Mar 7 02:04:54.941556 kubelet[2883]: I0307 02:04:54.941528 2883 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/966190a8-7fd8-41d7-9d65-c6161d0460a8-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Mar 7 02:04:54.941556 kubelet[2883]: I0307 02:04:54.941538 2883 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/966190a8-7fd8-41d7-9d65-c6161d0460a8-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Mar 7 02:04:54.942360 kubelet[2883]: I0307 02:04:54.941551 2883 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/966190a8-7fd8-41d7-9d65-c6161d0460a8-cilium-run\") on node \"localhost\" DevicePath \"\"" Mar 7 02:04:54.942360 kubelet[2883]: I0307 02:04:54.941620 2883 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/966190a8-7fd8-41d7-9d65-c6161d0460a8-cni-path\") on node \"localhost\" DevicePath \"\"" Mar 7 02:04:54.942360 kubelet[2883]: I0307 02:04:54.941636 2883 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/966190a8-7fd8-41d7-9d65-c6161d0460a8-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Mar 7 02:04:54.942360 kubelet[2883]: I0307 02:04:54.941650 2883 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/966190a8-7fd8-41d7-9d65-c6161d0460a8-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Mar 7 02:04:54.942360 kubelet[2883]: I0307 02:04:54.941660 2883 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/966190a8-7fd8-41d7-9d65-c6161d0460a8-hostproc\") on node \"localhost\" DevicePath \"\"" Mar 7 02:04:54.942360 kubelet[2883]: I0307 02:04:54.941672 2883 
reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dxhkv\" (UniqueName: \"kubernetes.io/projected/966190a8-7fd8-41d7-9d65-c6161d0460a8-kube-api-access-dxhkv\") on node \"localhost\" DevicePath \"\"" Mar 7 02:04:55.177337 sshd[6187]: pam_unix(sshd:session): session closed for user core Mar 7 02:04:55.191463 systemd[1]: Started sshd@59-10.0.0.122:22-10.0.0.1:40130.service - OpenSSH per-connection server daemon (10.0.0.1:40130). Mar 7 02:04:55.197931 systemd[1]: sshd@58-10.0.0.122:22-10.0.0.1:40120.service: Deactivated successfully. Mar 7 02:04:55.211083 systemd[1]: session-59.scope: Deactivated successfully. Mar 7 02:04:55.218539 systemd-logind[1559]: Session 59 logged out. Waiting for processes to exit. Mar 7 02:04:55.225602 systemd-logind[1559]: Removed session 59. Mar 7 02:04:55.302611 sshd[6352]: Accepted publickey for core from 10.0.0.1 port 40130 ssh2: RSA SHA256:FaKL72VfKns/3XaiikJSJBegvMb77gTkRQImwJ2PJcg Mar 7 02:04:55.334285 sshd[6352]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 02:04:55.428912 systemd-logind[1559]: New session 60 of user core. Mar 7 02:04:55.440881 systemd[1]: Started session-60.scope - Session 60 of User core. Mar 7 02:04:55.697010 kubelet[2883]: I0307 02:04:55.686637 2883 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3714da96-de37-4fe0-b3e7-778d1d5a47dc" path="/var/lib/kubelet/pods/3714da96-de37-4fe0-b3e7-778d1d5a47dc/volumes" Mar 7 02:04:55.821558 kubelet[2883]: I0307 02:04:55.821314 2883 scope.go:117] "RemoveContainer" containerID="87e55ededdfbc98c59c3fd22185bbde1a7f8a07b3e7969c3b3f15beb25254ae0" Mar 7 02:04:55.880997 containerd[1585]: time="2026-03-07T02:04:55.880338065Z" level=info msg="RemoveContainer for \"87e55ededdfbc98c59c3fd22185bbde1a7f8a07b3e7969c3b3f15beb25254ae0\"" Mar 7 02:04:55.913367 containerd[1585]: time="2026-03-07T02:04:55.913092922Z" level=info msg="RemoveContainer for \"87e55ededdfbc98c59c3fd22185bbde1a7f8a07b3e7969c3b3f15beb25254ae0\" returns successfully" Mar 7 02:04:55.914275 kubelet[2883]: I0307 02:04:55.914152 2883 scope.go:117] "RemoveContainer" containerID="f49c04df042890779dfa838a1887fea202149f4c2348e48fb17f733ff2fb964c" Mar 7 02:04:55.937939 containerd[1585]: time="2026-03-07T02:04:55.933538381Z" level=info msg="RemoveContainer for \"f49c04df042890779dfa838a1887fea202149f4c2348e48fb17f733ff2fb964c\"" Mar 7 02:04:55.949396 containerd[1585]: time="2026-03-07T02:04:55.949023763Z" level=info msg="RemoveContainer for \"f49c04df042890779dfa838a1887fea202149f4c2348e48fb17f733ff2fb964c\" returns successfully" Mar 7 02:04:55.952503 kubelet[2883]: I0307 02:04:55.949956 2883 scope.go:117] "RemoveContainer" containerID="56ff113b9f69096b4bfb1e021c664f48d228bb561f8b05348a4abe499c21932d" Mar 7 02:04:55.953963 containerd[1585]: time="2026-03-07T02:04:55.953879149Z" level=info msg="RemoveContainer for \"56ff113b9f69096b4bfb1e021c664f48d228bb561f8b05348a4abe499c21932d\"" Mar 7 02:04:55.990424 containerd[1585]: time="2026-03-07T02:04:55.988412125Z" level=info msg="RemoveContainer for \"56ff113b9f69096b4bfb1e021c664f48d228bb561f8b05348a4abe499c21932d\" returns successfully" Mar 7 02:04:55.990651 kubelet[2883]: I0307 02:04:55.989126 2883 scope.go:117] "RemoveContainer" containerID="e699e6959d7537ff63c5aa8d55268caaf8f077888dafca35300a5d33ae715e29" Mar 7 02:04:55.995632 containerd[1585]: time="2026-03-07T02:04:55.994414748Z" level=info msg="RemoveContainer for \"e699e6959d7537ff63c5aa8d55268caaf8f077888dafca35300a5d33ae715e29\"" Mar 7 02:04:56.024538 containerd[1585]: 
time="2026-03-07T02:04:56.024218345Z" level=info msg="RemoveContainer for \"e699e6959d7537ff63c5aa8d55268caaf8f077888dafca35300a5d33ae715e29\" returns successfully" Mar 7 02:04:56.025615 kubelet[2883]: I0307 02:04:56.025478 2883 scope.go:117] "RemoveContainer" containerID="3ebec3c636fb2d1b2fce8029dfe997995a6bb32f3763a12ac397a39348596df0" Mar 7 02:04:56.040829 containerd[1585]: time="2026-03-07T02:04:56.038839324Z" level=info msg="RemoveContainer for \"3ebec3c636fb2d1b2fce8029dfe997995a6bb32f3763a12ac397a39348596df0\"" Mar 7 02:04:56.050833 containerd[1585]: time="2026-03-07T02:04:56.050413554Z" level=info msg="RemoveContainer for \"3ebec3c636fb2d1b2fce8029dfe997995a6bb32f3763a12ac397a39348596df0\" returns successfully" Mar 7 02:04:56.482775 kubelet[2883]: E0307 02:04:56.482172 2883 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 7 02:04:57.486020 sshd[6352]: pam_unix(sshd:session): session closed for user core Mar 7 02:04:57.547352 systemd[1]: Started sshd@60-10.0.0.122:22-10.0.0.1:40134.service - OpenSSH per-connection server daemon (10.0.0.1:40134). Mar 7 02:04:57.551437 systemd[1]: sshd@59-10.0.0.122:22-10.0.0.1:40130.service: Deactivated successfully. Mar 7 02:04:57.584454 systemd[1]: session-60.scope: Deactivated successfully. Mar 7 02:04:57.587834 systemd-logind[1559]: Session 60 logged out. Waiting for processes to exit. Mar 7 02:04:57.591818 systemd-logind[1559]: Removed session 60. Mar 7 02:04:57.672944 sshd[6366]: Accepted publickey for core from 10.0.0.1 port 40134 ssh2: RSA SHA256:FaKL72VfKns/3XaiikJSJBegvMb77gTkRQImwJ2PJcg Mar 7 02:04:57.693337 sshd[6366]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 02:04:57.726998 kubelet[2883]: I0307 02:04:57.717810 2883 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="966190a8-7fd8-41d7-9d65-c6161d0460a8" path="/var/lib/kubelet/pods/966190a8-7fd8-41d7-9d65-c6161d0460a8/volumes" Mar 7 02:04:57.734225 systemd-logind[1559]: New session 61 of user core. 
Mar 7 02:04:57.739932 kubelet[2883]: I0307 02:04:57.739502 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/441359d2-5e53-4580-aa31-bcb175e27106-cilium-run\") pod \"cilium-xgzbw\" (UID: \"441359d2-5e53-4580-aa31-bcb175e27106\") " pod="kube-system/cilium-xgzbw" Mar 7 02:04:57.740892 kubelet[2883]: I0307 02:04:57.740117 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/441359d2-5e53-4580-aa31-bcb175e27106-xtables-lock\") pod \"cilium-xgzbw\" (UID: \"441359d2-5e53-4580-aa31-bcb175e27106\") " pod="kube-system/cilium-xgzbw" Mar 7 02:04:57.740892 kubelet[2883]: I0307 02:04:57.740162 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/441359d2-5e53-4580-aa31-bcb175e27106-host-proc-sys-net\") pod \"cilium-xgzbw\" (UID: \"441359d2-5e53-4580-aa31-bcb175e27106\") " pod="kube-system/cilium-xgzbw" Mar 7 02:04:57.740892 kubelet[2883]: I0307 02:04:57.740193 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/441359d2-5e53-4580-aa31-bcb175e27106-hostproc\") pod \"cilium-xgzbw\" (UID: \"441359d2-5e53-4580-aa31-bcb175e27106\") " pod="kube-system/cilium-xgzbw" Mar 7 02:04:57.740892 kubelet[2883]: I0307 02:04:57.740230 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/441359d2-5e53-4580-aa31-bcb175e27106-clustermesh-secrets\") pod \"cilium-xgzbw\" (UID: \"441359d2-5e53-4580-aa31-bcb175e27106\") " pod="kube-system/cilium-xgzbw" Mar 7 02:04:57.740892 kubelet[2883]: I0307 02:04:57.740252 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/441359d2-5e53-4580-aa31-bcb175e27106-cilium-cgroup\") pod \"cilium-xgzbw\" (UID: \"441359d2-5e53-4580-aa31-bcb175e27106\") " pod="kube-system/cilium-xgzbw" Mar 7 02:04:57.740892 kubelet[2883]: I0307 02:04:57.740274 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/441359d2-5e53-4580-aa31-bcb175e27106-cni-path\") pod \"cilium-xgzbw\" (UID: \"441359d2-5e53-4580-aa31-bcb175e27106\") " pod="kube-system/cilium-xgzbw" Mar 7 02:04:57.741152 kubelet[2883]: I0307 02:04:57.740294 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/441359d2-5e53-4580-aa31-bcb175e27106-cilium-config-path\") pod \"cilium-xgzbw\" (UID: \"441359d2-5e53-4580-aa31-bcb175e27106\") " pod="kube-system/cilium-xgzbw" Mar 7 02:04:57.741152 kubelet[2883]: I0307 02:04:57.740323 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/441359d2-5e53-4580-aa31-bcb175e27106-hubble-tls\") pod \"cilium-xgzbw\" (UID: \"441359d2-5e53-4580-aa31-bcb175e27106\") " pod="kube-system/cilium-xgzbw" Mar 7 02:04:57.741152 kubelet[2883]: I0307 02:04:57.740346 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/441359d2-5e53-4580-aa31-bcb175e27106-lib-modules\") pod \"cilium-xgzbw\" (UID: \"441359d2-5e53-4580-aa31-bcb175e27106\") " pod="kube-system/cilium-xgzbw" Mar 7 02:04:57.741152 kubelet[2883]: I0307 02:04:57.740369 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zjm9\" (UniqueName: \"kubernetes.io/projected/441359d2-5e53-4580-aa31-bcb175e27106-kube-api-access-7zjm9\") pod \"cilium-xgzbw\" (UID: \"441359d2-5e53-4580-aa31-bcb175e27106\") " pod="kube-system/cilium-xgzbw" Mar 7 02:04:57.741152 kubelet[2883]: I0307 02:04:57.740467 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/441359d2-5e53-4580-aa31-bcb175e27106-cilium-ipsec-secrets\") pod \"cilium-xgzbw\" (UID: \"441359d2-5e53-4580-aa31-bcb175e27106\") " pod="kube-system/cilium-xgzbw" Mar 7 02:04:57.741315 kubelet[2883]: I0307 02:04:57.740498 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/441359d2-5e53-4580-aa31-bcb175e27106-host-proc-sys-kernel\") pod \"cilium-xgzbw\" (UID: \"441359d2-5e53-4580-aa31-bcb175e27106\") " pod="kube-system/cilium-xgzbw" Mar 7 02:04:57.741315 kubelet[2883]: I0307 02:04:57.740635 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/441359d2-5e53-4580-aa31-bcb175e27106-bpf-maps\") pod \"cilium-xgzbw\" (UID: \"441359d2-5e53-4580-aa31-bcb175e27106\") " pod="kube-system/cilium-xgzbw" Mar 7 02:04:57.741315 kubelet[2883]: I0307 02:04:57.740665 2883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/441359d2-5e53-4580-aa31-bcb175e27106-etc-cni-netd\") pod \"cilium-xgzbw\" (UID: \"441359d2-5e53-4580-aa31-bcb175e27106\") " pod="kube-system/cilium-xgzbw" Mar 7 02:04:57.778401 systemd[1]: Started session-61.scope - Session 61 of User core. Mar 7 02:04:57.957372 sshd[6366]: pam_unix(sshd:session): session closed for user core Mar 7 02:04:58.006154 systemd[1]: Started sshd@61-10.0.0.122:22-10.0.0.1:40148.service - OpenSSH per-connection server daemon (10.0.0.1:40148). Mar 7 02:04:58.030403 systemd[1]: sshd@60-10.0.0.122:22-10.0.0.1:40134.service: Deactivated successfully. Mar 7 02:04:58.055125 systemd[1]: session-61.scope: Deactivated successfully. Mar 7 02:04:58.072551 systemd-logind[1559]: Session 61 logged out. Waiting for processes to exit. Mar 7 02:04:58.090822 kubelet[2883]: E0307 02:04:58.081897 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:04:58.093511 containerd[1585]: time="2026-03-07T02:04:58.093459571Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xgzbw,Uid:441359d2-5e53-4580-aa31-bcb175e27106,Namespace:kube-system,Attempt:0,}" Mar 7 02:04:58.111286 systemd-logind[1559]: Removed session 61. Mar 7 02:04:58.192962 sshd[6379]: Accepted publickey for core from 10.0.0.1 port 40148 ssh2: RSA SHA256:FaKL72VfKns/3XaiikJSJBegvMb77gTkRQImwJ2PJcg Mar 7 02:04:58.200107 sshd[6379]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 02:04:58.228237 systemd-logind[1559]: New session 62 of user core. 
Mar 7 02:04:58.235672 containerd[1585]: time="2026-03-07T02:04:58.234866835Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 02:04:58.235672 containerd[1585]: time="2026-03-07T02:04:58.234958295Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 02:04:58.235672 containerd[1585]: time="2026-03-07T02:04:58.234982330Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 02:04:58.235672 containerd[1585]: time="2026-03-07T02:04:58.235200578Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 02:04:58.243365 systemd[1]: Started session-62.scope - Session 62 of User core. Mar 7 02:04:58.449832 containerd[1585]: time="2026-03-07T02:04:58.449141044Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xgzbw,Uid:441359d2-5e53-4580-aa31-bcb175e27106,Namespace:kube-system,Attempt:0,} returns sandbox id \"d5c38664314d4ba5c3b279310ae51b00644cd90ef9a86bd677c1e804bd971170\"" Mar 7 02:04:58.456649 kubelet[2883]: E0307 02:04:58.453407 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:04:58.495781 containerd[1585]: time="2026-03-07T02:04:58.495642146Z" level=info msg="CreateContainer within sandbox \"d5c38664314d4ba5c3b279310ae51b00644cd90ef9a86bd677c1e804bd971170\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 7 02:04:58.647851 containerd[1585]: time="2026-03-07T02:04:58.646053384Z" level=info msg="CreateContainer within sandbox \"d5c38664314d4ba5c3b279310ae51b00644cd90ef9a86bd677c1e804bd971170\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"99a0e1f7cbc4d995d959cccb495b0ae8a24554e7c3cfcd710e94cd1cda384262\"" Mar 7 02:04:58.671139 containerd[1585]: time="2026-03-07T02:04:58.667944547Z" level=info msg="StartContainer for \"99a0e1f7cbc4d995d959cccb495b0ae8a24554e7c3cfcd710e94cd1cda384262\"" Mar 7 02:04:59.015370 containerd[1585]: time="2026-03-07T02:04:59.014963907Z" level=info msg="StartContainer for \"99a0e1f7cbc4d995d959cccb495b0ae8a24554e7c3cfcd710e94cd1cda384262\" returns successfully" Mar 7 02:04:59.317922 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-99a0e1f7cbc4d995d959cccb495b0ae8a24554e7c3cfcd710e94cd1cda384262-rootfs.mount: Deactivated successfully. 
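The RunPodSandbox / CreateContainer / StartContainer lines show the kubelet driving containerd through the CRI gRPC API: first a sandbox for cilium-xgzbw, then the mount-cgroup init container created inside it and started. Below is a compressed sketch of that call sequence against the cri-api Go client, assuming the default containerd socket; the sandbox and container configs are reduced to the fields visible in the log, the image reference is a placeholder, and real error handling and options are omitted.

```go
package main

import (
	"context"
	"fmt"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx := context.Background()

	// 1. RunPodSandbox: matches the PodSandboxMetadata printed in the log.
	sandboxCfg := &runtimeapi.PodSandboxConfig{
		Metadata: &runtimeapi.PodSandboxMetadata{
			Name:      "cilium-xgzbw",
			Uid:       "441359d2-5e53-4580-aa31-bcb175e27106",
			Namespace: "kube-system",
			Attempt:   0,
		},
	}
	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
	if err != nil {
		log.Fatal(err)
	}

	// 2. CreateContainer: the mount-cgroup init container inside that sandbox.
	created, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sb.PodSandboxId,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "mount-cgroup", Attempt: 0},
			Image:    &runtimeapi.ImageSpec{Image: "quay.io/cilium/cilium"}, // placeholder image ref
		},
		SandboxConfig: sandboxCfg,
	})
	if err != nil {
		log.Fatal(err)
	}

	// 3. StartContainer: corresponds to the "StartContainer ... returns successfully" entries.
	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: created.ContainerId}); err != nil {
		log.Fatal(err)
	}
	fmt.Println("started", created.ContainerId, "in sandbox", sb.PodSandboxId)
}
```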
Mar 7 02:04:59.371827 containerd[1585]: time="2026-03-07T02:04:59.370870419Z" level=info msg="shim disconnected" id=99a0e1f7cbc4d995d959cccb495b0ae8a24554e7c3cfcd710e94cd1cda384262 namespace=k8s.io Mar 7 02:04:59.371827 containerd[1585]: time="2026-03-07T02:04:59.370993899Z" level=warning msg="cleaning up after shim disconnected" id=99a0e1f7cbc4d995d959cccb495b0ae8a24554e7c3cfcd710e94cd1cda384262 namespace=k8s.io Mar 7 02:04:59.371827 containerd[1585]: time="2026-03-07T02:04:59.371011242Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 7 02:05:00.032189 kubelet[2883]: E0307 02:05:00.031767 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:05:00.055953 containerd[1585]: time="2026-03-07T02:05:00.055128189Z" level=info msg="CreateContainer within sandbox \"d5c38664314d4ba5c3b279310ae51b00644cd90ef9a86bd677c1e804bd971170\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 7 02:05:00.122649 containerd[1585]: time="2026-03-07T02:05:00.122419141Z" level=info msg="CreateContainer within sandbox \"d5c38664314d4ba5c3b279310ae51b00644cd90ef9a86bd677c1e804bd971170\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"43bfba918b8c3d50b8ec63ea7567b8aaa4af3503c998ab7581039d5f604a560e\"" Mar 7 02:05:00.127861 containerd[1585]: time="2026-03-07T02:05:00.124303990Z" level=info msg="StartContainer for \"43bfba918b8c3d50b8ec63ea7567b8aaa4af3503c998ab7581039d5f604a560e\"" Mar 7 02:05:00.303933 containerd[1585]: time="2026-03-07T02:05:00.300052630Z" level=info msg="StartContainer for \"43bfba918b8c3d50b8ec63ea7567b8aaa4af3503c998ab7581039d5f604a560e\" returns successfully" Mar 7 02:05:00.431292 containerd[1585]: time="2026-03-07T02:05:00.429019624Z" level=info msg="shim disconnected" id=43bfba918b8c3d50b8ec63ea7567b8aaa4af3503c998ab7581039d5f604a560e namespace=k8s.io Mar 7 02:05:00.431292 containerd[1585]: time="2026-03-07T02:05:00.429219347Z" level=warning msg="cleaning up after shim disconnected" id=43bfba918b8c3d50b8ec63ea7567b8aaa4af3503c998ab7581039d5f604a560e namespace=k8s.io Mar 7 02:05:00.431292 containerd[1585]: time="2026-03-07T02:05:00.429235708Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 7 02:05:00.493359 kubelet[2883]: I0307 02:05:00.493239 2883 setters.go:618] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T02:05:00Z","lastTransitionTime":"2026-03-07T02:05:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Mar 7 02:05:01.078257 kubelet[2883]: E0307 02:05:01.077344 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:05:01.117064 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-43bfba918b8c3d50b8ec63ea7567b8aaa4af3503c998ab7581039d5f604a560e-rootfs.mount: Deactivated successfully. 
Mar 7 02:05:01.146456 containerd[1585]: time="2026-03-07T02:05:01.144093328Z" level=info msg="CreateContainer within sandbox \"d5c38664314d4ba5c3b279310ae51b00644cd90ef9a86bd677c1e804bd971170\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 7 02:05:01.263148 containerd[1585]: time="2026-03-07T02:05:01.262915016Z" level=info msg="CreateContainer within sandbox \"d5c38664314d4ba5c3b279310ae51b00644cd90ef9a86bd677c1e804bd971170\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"31378f4641f20dc82b84f33cd3e775500f71c22d490856225c19a4fafd9f0ef2\"" Mar 7 02:05:01.264809 containerd[1585]: time="2026-03-07T02:05:01.264259806Z" level=info msg="StartContainer for \"31378f4641f20dc82b84f33cd3e775500f71c22d490856225c19a4fafd9f0ef2\"" Mar 7 02:05:01.490324 kubelet[2883]: E0307 02:05:01.490253 2883 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 7 02:05:01.583981 containerd[1585]: time="2026-03-07T02:05:01.581492271Z" level=info msg="StartContainer for \"31378f4641f20dc82b84f33cd3e775500f71c22d490856225c19a4fafd9f0ef2\" returns successfully" Mar 7 02:05:01.709797 containerd[1585]: time="2026-03-07T02:05:01.708770027Z" level=info msg="shim disconnected" id=31378f4641f20dc82b84f33cd3e775500f71c22d490856225c19a4fafd9f0ef2 namespace=k8s.io Mar 7 02:05:01.709797 containerd[1585]: time="2026-03-07T02:05:01.708838885Z" level=warning msg="cleaning up after shim disconnected" id=31378f4641f20dc82b84f33cd3e775500f71c22d490856225c19a4fafd9f0ef2 namespace=k8s.io Mar 7 02:05:01.709797 containerd[1585]: time="2026-03-07T02:05:01.708852991Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 7 02:05:02.106362 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-31378f4641f20dc82b84f33cd3e775500f71c22d490856225c19a4fafd9f0ef2-rootfs.mount: Deactivated successfully. 
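The mount-bpf-fs init container whose lifecycle is logged here (31378f46…) exists to make sure the BPF filesystem is mounted at /sys/fs/bpf so Cilium's pinned maps survive agent restarts. Roughly, it performs the equivalent of the mount call sketched below; the mount point is the standard one, and the "already mounted" check the real container does first is skipped.

```go
package main

import (
	"log"

	"golang.org/x/sys/unix"
)

func main() {
	// Equivalent of: mount -t bpf bpffs /sys/fs/bpf
	// (the real init step first checks whether bpffs is already mounted).
	if err := unix.Mount("bpffs", "/sys/fs/bpf", "bpf", 0, ""); err != nil {
		log.Fatalf("mounting bpffs: %v", err)
	}
	log.Println("bpffs mounted at /sys/fs/bpf")
}
```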
Mar 7 02:05:02.110777 kubelet[2883]: E0307 02:05:02.110539 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:05:02.129080 containerd[1585]: time="2026-03-07T02:05:02.128830041Z" level=info msg="CreateContainer within sandbox \"d5c38664314d4ba5c3b279310ae51b00644cd90ef9a86bd677c1e804bd971170\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Mar 7 02:05:02.240369 containerd[1585]: time="2026-03-07T02:05:02.239652649Z" level=info msg="CreateContainer within sandbox \"d5c38664314d4ba5c3b279310ae51b00644cd90ef9a86bd677c1e804bd971170\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"41d0d3d938140567b3ff4d11c3060fe9158469034785bb1c152e319a980e091a\"" Mar 7 02:05:02.246096 containerd[1585]: time="2026-03-07T02:05:02.245967088Z" level=info msg="StartContainer for \"41d0d3d938140567b3ff4d11c3060fe9158469034785bb1c152e319a980e091a\"" Mar 7 02:05:02.478950 containerd[1585]: time="2026-03-07T02:05:02.478845111Z" level=info msg="StartContainer for \"41d0d3d938140567b3ff4d11c3060fe9158469034785bb1c152e319a980e091a\" returns successfully" Mar 7 02:05:02.583165 containerd[1585]: time="2026-03-07T02:05:02.583021389Z" level=info msg="shim disconnected" id=41d0d3d938140567b3ff4d11c3060fe9158469034785bb1c152e319a980e091a namespace=k8s.io Mar 7 02:05:02.583165 containerd[1585]: time="2026-03-07T02:05:02.583134720Z" level=warning msg="cleaning up after shim disconnected" id=41d0d3d938140567b3ff4d11c3060fe9158469034785bb1c152e319a980e091a namespace=k8s.io Mar 7 02:05:02.583165 containerd[1585]: time="2026-03-07T02:05:02.583152895Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 7 02:05:03.105294 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-41d0d3d938140567b3ff4d11c3060fe9158469034785bb1c152e319a980e091a-rootfs.mount: Deactivated successfully. Mar 7 02:05:03.130265 kubelet[2883]: E0307 02:05:03.126508 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:05:03.152117 containerd[1585]: time="2026-03-07T02:05:03.151646245Z" level=info msg="CreateContainer within sandbox \"d5c38664314d4ba5c3b279310ae51b00644cd90ef9a86bd677c1e804bd971170\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Mar 7 02:05:03.198483 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2439469365.mount: Deactivated successfully. 
Mar 7 02:05:03.216660 containerd[1585]: time="2026-03-07T02:05:03.216411066Z" level=info msg="CreateContainer within sandbox \"d5c38664314d4ba5c3b279310ae51b00644cd90ef9a86bd677c1e804bd971170\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"30a5aa099f619855f1b40d5412398694d536d082d32246c90d92ef5d1bdddc7b\"" Mar 7 02:05:03.219980 containerd[1585]: time="2026-03-07T02:05:03.218218978Z" level=info msg="StartContainer for \"30a5aa099f619855f1b40d5412398694d536d082d32246c90d92ef5d1bdddc7b\"" Mar 7 02:05:03.529753 containerd[1585]: time="2026-03-07T02:05:03.523652826Z" level=info msg="StartContainer for \"30a5aa099f619855f1b40d5412398694d536d082d32246c90d92ef5d1bdddc7b\" returns successfully" Mar 7 02:05:04.158995 kubelet[2883]: E0307 02:05:04.155496 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:05:05.173584 kubelet[2883]: E0307 02:05:05.173200 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:05:05.818302 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Mar 7 02:05:06.184026 kubelet[2883]: E0307 02:05:06.176499 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:05:09.680236 kubelet[2883]: E0307 02:05:09.680065 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:05:15.046975 systemd-networkd[1239]: lxc_health: Link UP Mar 7 02:05:15.081982 systemd-networkd[1239]: lxc_health: Gained carrier Mar 7 02:05:16.144969 kubelet[2883]: E0307 02:05:16.144776 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:05:16.366465 kubelet[2883]: I0307 02:05:16.363079 2883 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-xgzbw" podStartSLOduration=19.329388498 podStartE2EDuration="19.329388498s" podCreationTimestamp="2026-03-07 02:04:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 02:05:04.452382354 +0000 UTC m=+635.254352023" watchObservedRunningTime="2026-03-07 02:05:16.329388498 +0000 UTC m=+647.131358217" Mar 7 02:05:16.380954 kubelet[2883]: E0307 02:05:16.375344 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:05:16.889349 systemd-networkd[1239]: lxc_health: Gained IPv6LL Mar 7 02:05:17.425810 kubelet[2883]: E0307 02:05:17.425459 2883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:05:22.462399 sshd[6379]: pam_unix(sshd:session): session closed for user core Mar 7 02:05:22.475042 systemd[1]: sshd@61-10.0.0.122:22-10.0.0.1:40148.service: Deactivated successfully. Mar 7 02:05:22.485261 systemd[1]: session-62.scope: Deactivated successfully. 
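The pod_startup_latency_tracker entry above reports podStartSLOduration=19.329388498s for cilium-xgzbw. Since both pull timestamps are the zero time (no image was pulled), that figure matches watchObservedRunningTime minus podCreationTimestamp exactly; the sketch below redoes the arithmetic with the timestamps from the log.

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, err := time.Parse(layout, "2026-03-07 02:04:57 +0000 UTC")
	if err != nil {
		panic(err)
	}
	running, err := time.Parse(layout, "2026-03-07 02:05:16.329388498 +0000 UTC")
	if err != nil {
		panic(err)
	}
	fmt.Println(running.Sub(created)) // 19.329388498s
}
```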
Mar 7 02:05:22.495311 systemd-logind[1559]: Session 62 logged out. Waiting for processes to exit. Mar 7 02:05:22.503633 systemd-logind[1559]: Removed session 62.