Mar 14 00:22:02.803010 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Mar 13 22:25:24 -00 2026
Mar 14 00:22:02.803035 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=06bcfe4e320f7b61768d05159b69b4eeccebc9d161fb2cdaf8d6998ab1e14ac7
Mar 14 00:22:02.803047 kernel: BIOS-provided physical RAM map:
Mar 14 00:22:02.803054 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Mar 14 00:22:02.803060 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Mar 14 00:22:02.803066 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Mar 14 00:22:02.803073 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Mar 14 00:22:02.803079 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Mar 14 00:22:02.803084 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Mar 14 00:22:02.803093 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Mar 14 00:22:02.803099 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Mar 14 00:22:02.803105 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Mar 14 00:22:02.803127 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Mar 14 00:22:02.803133 kernel: NX (Execute Disable) protection: active
Mar 14 00:22:02.803140 kernel: APIC: Static calls initialized
Mar 14 00:22:02.803193 kernel: SMBIOS 2.8 present.
Mar 14 00:22:02.803200 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Mar 14 00:22:02.803206 kernel: Hypervisor detected: KVM
Mar 14 00:22:02.803212 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 14 00:22:02.803219 kernel: kvm-clock: using sched offset of 10322430594 cycles
Mar 14 00:22:02.803225 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 14 00:22:02.803232 kernel: tsc: Detected 2445.426 MHz processor
Mar 14 00:22:02.803238 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 14 00:22:02.803245 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 14 00:22:02.803256 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Mar 14 00:22:02.803262 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Mar 14 00:22:02.803268 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 14 00:22:02.803275 kernel: Using GB pages for direct mapping
Mar 14 00:22:02.803281 kernel: ACPI: Early table checksum verification disabled
Mar 14 00:22:02.803287 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Mar 14 00:22:02.803293 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 14 00:22:02.803299 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 14 00:22:02.803306 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 14 00:22:02.803315 kernel: ACPI: FACS 0x000000009CFE0000 000040
Mar 14 00:22:02.803321 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 14 00:22:02.803327 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 14 00:22:02.803334 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 14 00:22:02.803340 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 14 00:22:02.803346 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Mar 14 00:22:02.803353 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Mar 14 00:22:02.803363 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Mar 14 00:22:02.803373 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Mar 14 00:22:02.803379 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Mar 14 00:22:02.803386 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Mar 14 00:22:02.803392 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Mar 14 00:22:02.803399 kernel: No NUMA configuration found
Mar 14 00:22:02.803405 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Mar 14 00:22:02.803415 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Mar 14 00:22:02.803421 kernel: Zone ranges:
Mar 14 00:22:02.803428 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 14 00:22:02.803434 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Mar 14 00:22:02.803441 kernel: Normal empty
Mar 14 00:22:02.803447 kernel: Movable zone start for each node
Mar 14 00:22:02.803453 kernel: Early memory node ranges
Mar 14 00:22:02.803460 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Mar 14 00:22:02.803467 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Mar 14 00:22:02.803473 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Mar 14 00:22:02.803483 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 14 00:22:02.803502 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Mar 14 00:22:02.803509 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Mar 14 00:22:02.803515 kernel: ACPI: PM-Timer IO Port: 0x608
Mar 14 00:22:02.803522 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 14 00:22:02.803529 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Mar 14 00:22:02.803535 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Mar 14 00:22:02.803541 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 14 00:22:02.803548 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 14 00:22:02.803558 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 14 00:22:02.803565 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 14 00:22:02.803571 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 14 00:22:02.803578 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Mar 14 00:22:02.803584 kernel: TSC deadline timer available
Mar 14 00:22:02.803591 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Mar 14 00:22:02.803597 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Mar 14 00:22:02.803604 kernel: kvm-guest: KVM setup pv remote TLB flush
Mar 14 00:22:02.803621 kernel: kvm-guest: setup PV sched yield
Mar 14 00:22:02.803631 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Mar 14 00:22:02.803638 kernel: Booting paravirtualized kernel on KVM
Mar 14 00:22:02.803645 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 14 00:22:02.803656 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Mar 14 00:22:02.803687 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288
Mar 14 00:22:02.803698 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152
Mar 14 00:22:02.803711 kernel: pcpu-alloc: [0] 0 1 2 3
Mar 14 00:22:02.803721 kernel: kvm-guest: PV spinlocks enabled
Mar 14 00:22:02.803734 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Mar 14 00:22:02.803753 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=06bcfe4e320f7b61768d05159b69b4eeccebc9d161fb2cdaf8d6998ab1e14ac7
Mar 14 00:22:02.803764 kernel: random: crng init done
Mar 14 00:22:02.803776 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 14 00:22:02.803787 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 14 00:22:02.803799 kernel: Fallback order for Node 0: 0
Mar 14 00:22:02.803810 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Mar 14 00:22:02.803821 kernel: Policy zone: DMA32
Mar 14 00:22:02.803831 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 14 00:22:02.803849 kernel: Memory: 2434608K/2571752K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 136884K reserved, 0K cma-reserved)
Mar 14 00:22:02.803861 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Mar 14 00:22:02.803873 kernel: ftrace: allocating 37996 entries in 149 pages
Mar 14 00:22:02.803980 kernel: ftrace: allocated 149 pages with 4 groups
Mar 14 00:22:02.803986 kernel: Dynamic Preempt: voluntary
Mar 14 00:22:02.803993 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 14 00:22:02.804001 kernel: rcu: RCU event tracing is enabled.
Mar 14 00:22:02.804008 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Mar 14 00:22:02.804015 kernel: Trampoline variant of Tasks RCU enabled.
Mar 14 00:22:02.804026 kernel: Rude variant of Tasks RCU enabled.
Mar 14 00:22:02.804033 kernel: Tracing variant of Tasks RCU enabled.
Mar 14 00:22:02.804040 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 14 00:22:02.804046 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Mar 14 00:22:02.804069 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Mar 14 00:22:02.804076 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 14 00:22:02.804082 kernel: Console: colour VGA+ 80x25
Mar 14 00:22:02.804089 kernel: printk: console [ttyS0] enabled
Mar 14 00:22:02.804095 kernel: ACPI: Core revision 20230628
Mar 14 00:22:02.804106 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Mar 14 00:22:02.804113 kernel: APIC: Switch to symmetric I/O mode setup
Mar 14 00:22:02.804119 kernel: x2apic enabled
Mar 14 00:22:02.804126 kernel: APIC: Switched APIC routing to: physical x2apic
Mar 14 00:22:02.804132 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Mar 14 00:22:02.804139 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Mar 14 00:22:02.804146 kernel: kvm-guest: setup PV IPIs
Mar 14 00:22:02.804183 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Mar 14 00:22:02.804204 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Mar 14 00:22:02.804211 kernel: Calibrating delay loop (skipped) preset value.. 4890.85 BogoMIPS (lpj=2445426)
Mar 14 00:22:02.804217 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Mar 14 00:22:02.804224 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Mar 14 00:22:02.804234 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Mar 14 00:22:02.804241 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 14 00:22:02.804248 kernel: Spectre V2 : Mitigation: Retpolines
Mar 14 00:22:02.804256 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Mar 14 00:22:02.804263 kernel: Speculative Store Bypass: Vulnerable
Mar 14 00:22:02.804272 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Mar 14 00:22:02.804293 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Mar 14 00:22:02.804301 kernel: active return thunk: srso_alias_return_thunk
Mar 14 00:22:02.804308 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Mar 14 00:22:02.804314 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Mar 14 00:22:02.804321 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Mar 14 00:22:02.804328 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 14 00:22:02.804335 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 14 00:22:02.804345 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 14 00:22:02.804352 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 14 00:22:02.804359 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Mar 14 00:22:02.804366 kernel: Freeing SMP alternatives memory: 32K
Mar 14 00:22:02.804372 kernel: pid_max: default: 32768 minimum: 301
Mar 14 00:22:02.804379 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 14 00:22:02.804387 kernel: landlock: Up and running.
Mar 14 00:22:02.804393 kernel: SELinux: Initializing.
Mar 14 00:22:02.804400 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 14 00:22:02.804410 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 14 00:22:02.804417 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Mar 14 00:22:02.804424 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 14 00:22:02.804431 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 14 00:22:02.804438 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 14 00:22:02.804445 kernel: Performance Events: PMU not available due to virtualization, using software events only.
Mar 14 00:22:02.804451 kernel: signal: max sigframe size: 1776
Mar 14 00:22:02.804470 kernel: rcu: Hierarchical SRCU implementation.
Mar 14 00:22:02.804477 kernel: rcu: Max phase no-delay instances is 400.
Mar 14 00:22:02.804487 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Mar 14 00:22:02.804494 kernel: smp: Bringing up secondary CPUs ...
Mar 14 00:22:02.804505 kernel: smpboot: x86: Booting SMP configuration:
Mar 14 00:22:02.804517 kernel: .... node #0, CPUs: #1 #2 #3
Mar 14 00:22:02.804529 kernel: smp: Brought up 1 node, 4 CPUs
Mar 14 00:22:02.804540 kernel: smpboot: Max logical packages: 1
Mar 14 00:22:02.804552 kernel: smpboot: Total of 4 processors activated (19563.40 BogoMIPS)
Mar 14 00:22:02.804563 kernel: devtmpfs: initialized
Mar 14 00:22:02.804574 kernel: x86/mm: Memory block size: 128MB
Mar 14 00:22:02.804592 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 14 00:22:02.804599 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Mar 14 00:22:02.804606 kernel: pinctrl core: initialized pinctrl subsystem
Mar 14 00:22:02.804612 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 14 00:22:02.804619 kernel: audit: initializing netlink subsys (disabled)
Mar 14 00:22:02.804626 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 14 00:22:02.804633 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 14 00:22:02.804640 kernel: audit: type=2000 audit(1773447716.992:1): state=initialized audit_enabled=0 res=1
Mar 14 00:22:02.804647 kernel: cpuidle: using governor menu
Mar 14 00:22:02.804657 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 14 00:22:02.804664 kernel: dca service started, version 1.12.1
Mar 14 00:22:02.804671 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Mar 14 00:22:02.804678 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Mar 14 00:22:02.804685 kernel: PCI: Using configuration type 1 for base access
Mar 14 00:22:02.804692 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 14 00:22:02.804699 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 14 00:22:02.804706 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Mar 14 00:22:02.804712 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 14 00:22:02.804726 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Mar 14 00:22:02.804739 kernel: ACPI: Added _OSI(Module Device)
Mar 14 00:22:02.804749 kernel: ACPI: Added _OSI(Processor Device)
Mar 14 00:22:02.804759 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 14 00:22:02.804771 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 14 00:22:02.804783 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Mar 14 00:22:02.804796 kernel: ACPI: Interpreter enabled
Mar 14 00:22:02.804808 kernel: ACPI: PM: (supports S0 S3 S5)
Mar 14 00:22:02.804820 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 14 00:22:02.804839 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 14 00:22:02.804850 kernel: PCI: Using E820 reservations for host bridge windows
Mar 14 00:22:02.804857 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Mar 14 00:22:02.804864 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 14 00:22:02.805269 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 14 00:22:02.805486 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Mar 14 00:22:02.805695 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Mar 14 00:22:02.805722 kernel: PCI host bridge to bus 0000:00
Mar 14 00:22:02.806060 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 14 00:22:02.806306 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 14 00:22:02.806501 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 14 00:22:02.806684 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Mar 14 00:22:02.806868 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Mar 14 00:22:02.807107 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Mar 14 00:22:02.807328 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 14 00:22:02.807628 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Mar 14 00:22:02.808019 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Mar 14 00:22:02.808299 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Mar 14 00:22:02.808533 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Mar 14 00:22:02.808772 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Mar 14 00:22:02.809060 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Mar 14 00:22:02.809405 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Mar 14 00:22:02.809644 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Mar 14 00:22:02.809857 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Mar 14 00:22:02.810428 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Mar 14 00:22:02.810672 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Mar 14 00:22:02.810823 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Mar 14 00:22:02.811464 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Mar 14 00:22:02.811616 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Mar 14 00:22:02.817828 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Mar 14 00:22:02.818145 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Mar 14 00:22:02.818436 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Mar 14 00:22:02.818689 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Mar 14 00:22:02.818976 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Mar 14 00:22:02.819326 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Mar 14 00:22:02.819567 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Mar 14 00:22:02.819856 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Mar 14 00:22:02.820127 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Mar 14 00:22:02.820387 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Mar 14 00:22:02.820674 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Mar 14 00:22:02.820943 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Mar 14 00:22:02.820971 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 14 00:22:02.820986 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 14 00:22:02.821000 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 14 00:22:02.821019 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 14 00:22:02.821032 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Mar 14 00:22:02.821044 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Mar 14 00:22:02.821055 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Mar 14 00:22:02.821066 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Mar 14 00:22:02.821083 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Mar 14 00:22:02.821097 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Mar 14 00:22:02.821110 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Mar 14 00:22:02.821124 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Mar 14 00:22:02.821138 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Mar 14 00:22:02.821177 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Mar 14 00:22:02.821189 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Mar 14 00:22:02.821201 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Mar 14 00:22:02.821214 kernel: iommu: Default domain type: Translated
Mar 14 00:22:02.821228 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 14 00:22:02.821249 kernel: PCI: Using ACPI for IRQ routing
Mar 14 00:22:02.821264 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 14 00:22:02.821276 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Mar 14 00:22:02.821287 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Mar 14 00:22:02.821518 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Mar 14 00:22:02.821752 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Mar 14 00:22:02.822029 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Mar 14 00:22:02.822051 kernel: vgaarb: loaded
Mar 14 00:22:02.822070 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Mar 14 00:22:02.822081 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Mar 14 00:22:02.822092 kernel: clocksource: Switched to clocksource kvm-clock
Mar 14 00:22:02.822103 kernel: VFS: Disk quotas dquot_6.6.0
Mar 14 00:22:02.822114 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 14 00:22:02.822125 kernel: pnp: PnP ACPI init
Mar 14 00:22:02.822496 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Mar 14 00:22:02.822521 kernel: pnp: PnP ACPI: found 6 devices
Mar 14 00:22:02.822541 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 14 00:22:02.822553 kernel: NET: Registered PF_INET protocol family
Mar 14 00:22:02.822566 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 14 00:22:02.822577 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 14 00:22:02.822589 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 14 00:22:02.822601 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 14 00:22:02.822612 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 14 00:22:02.822624 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 14 00:22:02.822636 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 14 00:22:02.822652 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 14 00:22:02.822663 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 14 00:22:02.822674 kernel: NET: Registered PF_XDP protocol family
Mar 14 00:22:02.822934 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 14 00:22:02.823144 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 14 00:22:02.823401 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 14 00:22:02.823600 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Mar 14 00:22:02.823813 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Mar 14 00:22:02.824076 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Mar 14 00:22:02.824098 kernel: PCI: CLS 0 bytes, default 64
Mar 14 00:22:02.824112 kernel: Initialise system trusted keyrings
Mar 14 00:22:02.824125 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 14 00:22:02.824139 kernel: Key type asymmetric registered
Mar 14 00:22:02.824183 kernel: Asymmetric key parser 'x509' registered
Mar 14 00:22:02.824196 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Mar 14 00:22:02.824208 kernel: io scheduler mq-deadline registered
Mar 14 00:22:02.824220 kernel: io scheduler kyber registered
Mar 14 00:22:02.824241 kernel: io scheduler bfq registered
Mar 14 00:22:02.824253 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 14 00:22:02.824266 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Mar 14 00:22:02.824278 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Mar 14 00:22:02.824291 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Mar 14 00:22:02.824304 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 14 00:22:02.824318 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Mar 14 00:22:02.824330 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Mar 14 00:22:02.824342 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Mar 14 00:22:02.824362 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Mar 14 00:22:02.824376 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Mar 14 00:22:02.824714 kernel: rtc_cmos 00:04: RTC can wake from S4
Mar 14 00:22:02.825100 kernel: rtc_cmos 00:04: registered as rtc0
Mar 14 00:22:02.825344 kernel: rtc_cmos 00:04: setting system clock to 2026-03-14T00:22:01 UTC (1773447721)
Mar 14 00:22:02.825557 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Mar 14 00:22:02.825578 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Mar 14 00:22:02.825592 kernel: hpet: Lost 1 RTC interrupts
Mar 14 00:22:02.825613 kernel: NET: Registered PF_INET6 protocol family
Mar 14 00:22:02.825625 kernel: Segment Routing with IPv6
Mar 14 00:22:02.825636 kernel: In-situ OAM (IOAM) with IPv6
Mar 14 00:22:02.825647 kernel: NET: Registered PF_PACKET protocol family
Mar 14 00:22:02.825658 kernel: Key type dns_resolver registered
Mar 14 00:22:02.825669 kernel: IPI shorthand broadcast: enabled
Mar 14 00:22:02.825680 kernel: sched_clock: Marking stable (3704050194, 786292917)->(5210182511, -719839400)
Mar 14 00:22:02.825690 kernel: registered taskstats version 1
Mar 14 00:22:02.825703 kernel: Loading compiled-in X.509 certificates
Mar 14 00:22:02.825721 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: a10808ddb7a43f470807cfbbb5be2c08229c2dec'
Mar 14 00:22:02.825733 kernel: Key type .fscrypt registered
Mar 14 00:22:02.825743 kernel: Key type fscrypt-provisioning registered
Mar 14 00:22:02.825755 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 14 00:22:02.825766 kernel: ima: Allocated hash algorithm: sha1
Mar 14 00:22:02.825778 kernel: ima: No architecture policies found
Mar 14 00:22:02.825790 kernel: clk: Disabling unused clocks
Mar 14 00:22:02.825802 kernel: Freeing unused kernel image (initmem) memory: 42892K
Mar 14 00:22:02.825814 kernel: Write protecting the kernel read-only data: 36864k
Mar 14 00:22:02.825830 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Mar 14 00:22:02.825841 kernel: Run /init as init process
Mar 14 00:22:02.825852 kernel: with arguments:
Mar 14 00:22:02.825864 kernel: /init
Mar 14 00:22:02.825938 kernel: with environment:
Mar 14 00:22:02.825952 kernel: HOME=/
Mar 14 00:22:02.825963 kernel: TERM=linux
Mar 14 00:22:02.825978 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 14 00:22:02.825998 systemd[1]: Detected virtualization kvm.
Mar 14 00:22:02.826011 systemd[1]: Detected architecture x86-64.
Mar 14 00:22:02.826023 systemd[1]: Running in initrd.
Mar 14 00:22:02.826034 systemd[1]: No hostname configured, using default hostname.
Mar 14 00:22:02.826047 systemd[1]: Hostname set to .
Mar 14 00:22:02.826059 systemd[1]: Initializing machine ID from VM UUID.
Mar 14 00:22:02.826071 systemd[1]: Queued start job for default target initrd.target.
Mar 14 00:22:02.826083 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 14 00:22:02.826100 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 14 00:22:02.826113 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 14 00:22:02.826125 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 14 00:22:02.826138 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 14 00:22:02.826187 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 14 00:22:02.826204 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 14 00:22:02.826217 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 14 00:22:02.826235 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 14 00:22:02.826247 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 14 00:22:02.826260 systemd[1]: Reached target paths.target - Path Units.
Mar 14 00:22:02.826273 systemd[1]: Reached target slices.target - Slice Units.
Mar 14 00:22:02.826308 systemd[1]: Reached target swap.target - Swaps.
Mar 14 00:22:02.826325 systemd[1]: Reached target timers.target - Timer Units.
Mar 14 00:22:02.826341 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 14 00:22:02.826353 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 14 00:22:02.826365 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 14 00:22:02.826377 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Mar 14 00:22:02.826389 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 14 00:22:02.826402 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 14 00:22:02.826414 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 14 00:22:02.826426 systemd[1]: Reached target sockets.target - Socket Units.
Mar 14 00:22:02.826439 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 14 00:22:02.826455 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 14 00:22:02.826468 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 14 00:22:02.826480 systemd[1]: Starting systemd-fsck-usr.service...
Mar 14 00:22:02.826493 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 14 00:22:02.826506 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 14 00:22:02.826519 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 14 00:22:02.826573 systemd-journald[195]: Collecting audit messages is disabled.
Mar 14 00:22:02.826609 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 14 00:22:02.826622 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 14 00:22:02.826634 systemd[1]: Finished systemd-fsck-usr.service.
Mar 14 00:22:02.826651 systemd-journald[195]: Journal started
Mar 14 00:22:02.826676 systemd-journald[195]: Runtime Journal (/run/log/journal/57c79be31fcc42b7bbb0502db6a512c1) is 6.0M, max 48.4M, 42.3M free.
Mar 14 00:22:02.834243 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 14 00:22:02.864335 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 14 00:22:03.144292 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 14 00:22:03.144486 kernel: Bridge firewalling registered
Mar 14 00:22:02.871814 systemd-modules-load[196]: Inserted module 'overlay'
Mar 14 00:22:02.975628 systemd-modules-load[196]: Inserted module 'br_netfilter'
Mar 14 00:22:03.172218 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 14 00:22:03.187931 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 14 00:22:03.198624 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 14 00:22:03.207974 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 14 00:22:03.283599 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 14 00:22:03.287653 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 14 00:22:03.297220 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 14 00:22:03.357827 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 14 00:22:03.358378 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 14 00:22:03.361722 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 14 00:22:03.365969 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 14 00:22:03.391777 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Mar 14 00:22:03.402457 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 14 00:22:03.444747 dracut-cmdline[231]: dracut-dracut-053 Mar 14 00:22:03.451469 dracut-cmdline[231]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=06bcfe4e320f7b61768d05159b69b4eeccebc9d161fb2cdaf8d6998ab1e14ac7 Mar 14 00:22:03.514129 systemd-resolved[232]: Positive Trust Anchors: Mar 14 00:22:03.514234 systemd-resolved[232]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 14 00:22:03.514277 systemd-resolved[232]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 14 00:22:03.533692 systemd-resolved[232]: Defaulting to hostname 'linux'. Mar 14 00:22:03.538216 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 14 00:22:03.560487 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 14 00:22:03.679310 kernel: SCSI subsystem initialized Mar 14 00:22:03.694955 kernel: Loading iSCSI transport class v2.0-870. Mar 14 00:22:03.724087 kernel: iscsi: registered transport (tcp) Mar 14 00:22:03.754006 kernel: iscsi: registered transport (qla4xxx) Mar 14 00:22:03.754138 kernel: QLogic iSCSI HBA Driver Mar 14 00:22:03.855310 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Mar 14 00:22:03.870214 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Mar 14 00:22:03.936662 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Mar 14 00:22:03.936757 kernel: device-mapper: uevent: version 1.0.3 Mar 14 00:22:03.941941 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Mar 14 00:22:04.007226 kernel: raid6: avx2x4 gen() 23134 MB/s Mar 14 00:22:04.028251 kernel: raid6: avx2x2 gen() 20761 MB/s Mar 14 00:22:04.047126 kernel: raid6: avx2x1 gen() 12972 MB/s Mar 14 00:22:04.047229 kernel: raid6: using algorithm avx2x4 gen() 23134 MB/s Mar 14 00:22:04.067573 kernel: raid6: .... xor() 4902 MB/s, rmw enabled Mar 14 00:22:04.067658 kernel: raid6: using avx2x2 recovery algorithm Mar 14 00:22:04.107237 kernel: xor: automatically using best checksumming function avx Mar 14 00:22:04.500602 kernel: Btrfs loaded, zoned=no, fsverity=no Mar 14 00:22:04.523001 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Mar 14 00:22:04.549819 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 14 00:22:04.571344 systemd-udevd[415]: Using default interface naming scheme 'v255'. Mar 14 00:22:04.580675 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 14 00:22:04.636476 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Mar 14 00:22:04.693056 dracut-pre-trigger[426]: rd.md=0: removing MD RAID activation Mar 14 00:22:04.765956 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Mar 14 00:22:04.788852 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 14 00:22:04.934334 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 14 00:22:04.955201 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Mar 14 00:22:04.999528 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Mar 14 00:22:05.000483 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. 
Mar 14 00:22:05.006473 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Mar 14 00:22:05.033245 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Mar 14 00:22:05.013852 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 14 00:22:05.018948 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 14 00:22:05.044140 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Mar 14 00:22:05.069960 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Mar 14 00:22:05.070111 kernel: GPT:9289727 != 19775487 Mar 14 00:22:05.070135 kernel: GPT:Alternate GPT header not at the end of the disk. Mar 14 00:22:05.070188 kernel: GPT:9289727 != 19775487 Mar 14 00:22:05.070210 kernel: GPT: Use GNU Parted to correct GPT errors. Mar 14 00:22:05.070229 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 14 00:22:05.070078 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Mar 14 00:22:05.085973 kernel: cryptd: max_cpu_qlen set to 1000 Mar 14 00:22:05.088361 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 14 00:22:05.088518 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 14 00:22:05.098610 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 14 00:22:05.116744 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 14 00:22:05.117202 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 14 00:22:05.144342 kernel: BTRFS: device fsid cd4a88d6-c21b-44c8-aac6-68c13cee1def devid 1 transid 34 /dev/vda3 scanned by (udev-worker) (470) Mar 14 00:22:05.144441 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (463) Mar 14 00:22:05.135748 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... 
Mar 14 00:22:05.152362 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 14 00:22:05.172388 kernel: libata version 3.00 loaded. Mar 14 00:22:05.183478 kernel: ahci 0000:00:1f.2: version 3.0 Mar 14 00:22:05.183984 kernel: AVX2 version of gcm_enc/dec engaged. Mar 14 00:22:05.184009 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Mar 14 00:22:05.184026 kernel: AES CTR mode by8 optimization enabled Mar 14 00:22:05.193915 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Mar 14 00:22:05.194268 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Mar 14 00:22:05.194325 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Mar 14 00:22:05.382697 kernel: scsi host0: ahci Mar 14 00:22:05.383322 kernel: scsi host1: ahci Mar 14 00:22:05.383663 kernel: scsi host2: ahci Mar 14 00:22:05.383984 kernel: scsi host3: ahci Mar 14 00:22:05.384283 kernel: scsi host4: ahci Mar 14 00:22:05.384502 kernel: scsi host5: ahci Mar 14 00:22:05.384752 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 Mar 14 00:22:05.384774 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 Mar 14 00:22:05.384789 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 Mar 14 00:22:05.384806 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 Mar 14 00:22:05.384823 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 Mar 14 00:22:05.384834 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 Mar 14 00:22:05.383091 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 14 00:22:05.405082 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Mar 14 00:22:05.427510 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. 
Mar 14 00:22:05.435098 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Mar 14 00:22:05.454404 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Mar 14 00:22:05.485530 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Mar 14 00:22:05.516783 kernel: ata1: SATA link down (SStatus 0 SControl 300) Mar 14 00:22:05.516941 kernel: ata4: SATA link down (SStatus 0 SControl 300) Mar 14 00:22:05.527617 kernel: ata6: SATA link down (SStatus 0 SControl 300) Mar 14 00:22:05.520320 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 14 00:22:05.548278 kernel: ata5: SATA link down (SStatus 0 SControl 300) Mar 14 00:22:05.548310 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Mar 14 00:22:05.554949 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 14 00:22:05.563925 disk-uuid[555]: Primary Header is updated. Mar 14 00:22:05.563925 disk-uuid[555]: Secondary Entries is updated. Mar 14 00:22:05.563925 disk-uuid[555]: Secondary Header is updated. Mar 14 00:22:05.582312 kernel: ata2: SATA link down (SStatus 0 SControl 300) Mar 14 00:22:05.582338 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Mar 14 00:22:05.582355 kernel: ata3.00: applying bridge limits Mar 14 00:22:05.582371 kernel: ata3.00: configured for UDMA/100 Mar 14 00:22:05.603988 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Mar 14 00:22:06.675650 disk-uuid[557]: Warning: The kernel is still using the old partition table. Mar 14 00:22:06.675650 disk-uuid[557]: The new table will be used at the next reboot or after you Mar 14 00:22:06.675650 disk-uuid[557]: run partprobe(8) or kpartx(8) Mar 14 00:22:06.675650 disk-uuid[557]: The operation has completed successfully. 
Mar 14 00:22:07.401528 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 14 00:22:07.557300 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Mar 14 00:22:07.557746 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Mar 14 00:22:07.580150 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Mar 14 00:22:07.833247 kernel: hrtimer: interrupt took 6022133 ns Mar 14 00:22:08.807161 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 14 00:22:08.807461 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Mar 14 00:22:08.942870 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Mar 14 00:22:09.164793 sh[593]: Success Mar 14 00:22:09.196009 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Mar 14 00:22:09.305534 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Mar 14 00:22:09.373840 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Mar 14 00:22:09.395417 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Mar 14 00:22:09.462209 kernel: BTRFS info (device dm-0): first mount of filesystem cd4a88d6-c21b-44c8-aac6-68c13cee1def Mar 14 00:22:09.462327 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Mar 14 00:22:09.462345 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Mar 14 00:22:09.465398 kernel: BTRFS info (device dm-0): disabling log replay at mount time Mar 14 00:22:09.469761 kernel: BTRFS info (device dm-0): using free space tree Mar 14 00:22:09.502427 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Mar 14 00:22:09.503829 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Mar 14 00:22:09.530260 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
Mar 14 00:22:09.538351 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Mar 14 00:22:09.563386 kernel: BTRFS info (device vda6): first mount of filesystem 0ec14b75-fea9-4657-9245-934c6406ae1a Mar 14 00:22:09.564387 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 14 00:22:09.564409 kernel: BTRFS info (device vda6): using free space tree Mar 14 00:22:09.579361 kernel: BTRFS info (device vda6): auto enabling async discard Mar 14 00:22:10.002743 systemd[1]: mnt-oem.mount: Deactivated successfully. Mar 14 00:22:10.009695 kernel: BTRFS info (device vda6): last unmount of filesystem 0ec14b75-fea9-4657-9245-934c6406ae1a Mar 14 00:22:10.052815 systemd[1]: Finished ignition-setup.service - Ignition (setup). Mar 14 00:22:10.067393 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Mar 14 00:22:10.468058 ignition[677]: Ignition 2.19.0 Mar 14 00:22:10.468110 ignition[677]: Stage: fetch-offline Mar 14 00:22:10.468430 ignition[677]: no configs at "/usr/lib/ignition/base.d" Mar 14 00:22:10.468475 ignition[677]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 14 00:22:10.469136 ignition[677]: parsed url from cmdline: "" Mar 14 00:22:10.469143 ignition[677]: no config URL provided Mar 14 00:22:10.469154 ignition[677]: reading system config file "/usr/lib/ignition/user.ign" Mar 14 00:22:10.469224 ignition[677]: no config at "/usr/lib/ignition/user.ign" Mar 14 00:22:10.469455 ignition[677]: op(1): [started] loading QEMU firmware config module Mar 14 00:22:10.469466 ignition[677]: op(1): executing: "modprobe" "qemu_fw_cfg" Mar 14 00:22:10.502072 ignition[677]: op(1): [finished] loading QEMU firmware config module Mar 14 00:22:10.510464 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 14 00:22:10.549152 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Mar 14 00:22:10.617229 systemd-networkd[781]: lo: Link UP Mar 14 00:22:10.617257 systemd-networkd[781]: lo: Gained carrier Mar 14 00:22:10.621777 systemd-networkd[781]: Enumeration completed Mar 14 00:22:10.623049 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 14 00:22:10.624271 systemd-networkd[781]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 14 00:22:10.624278 systemd-networkd[781]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 14 00:22:10.625421 systemd[1]: Reached target network.target - Network. Mar 14 00:22:10.629433 systemd-networkd[781]: eth0: Link UP Mar 14 00:22:10.629439 systemd-networkd[781]: eth0: Gained carrier Mar 14 00:22:10.629452 systemd-networkd[781]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 14 00:22:10.676517 systemd-networkd[781]: eth0: DHCPv4 address 10.0.0.71/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 14 00:22:10.789630 ignition[677]: parsing config with SHA512: 7ff3f17c34d9f496555fa280af6fa337595abf1f1bb22ea62c10932c7cf6293212b14edc34f12e4d10d032bdb5bad2e8d8ed868c45d1b14c2ea636ad457f4a00 Mar 14 00:22:10.806644 unknown[677]: fetched base config from "system" Mar 14 00:22:10.806663 unknown[677]: fetched user config from "qemu" Mar 14 00:22:10.807621 ignition[677]: fetch-offline: fetch-offline passed Mar 14 00:22:10.812071 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Mar 14 00:22:10.807821 ignition[677]: Ignition finished successfully Mar 14 00:22:10.813239 systemd-resolved[232]: Detected conflict on linux IN A 10.0.0.71 Mar 14 00:22:10.813268 systemd-resolved[232]: Hostname conflict, changing published hostname from 'linux' to 'linux5'. 
Mar 14 00:22:10.819853 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Mar 14 00:22:10.835399 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Mar 14 00:22:10.878381 ignition[785]: Ignition 2.19.0 Mar 14 00:22:10.878404 ignition[785]: Stage: kargs Mar 14 00:22:10.878678 ignition[785]: no configs at "/usr/lib/ignition/base.d" Mar 14 00:22:10.878700 ignition[785]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 14 00:22:10.888856 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Mar 14 00:22:10.881431 ignition[785]: kargs: kargs passed Mar 14 00:22:10.881521 ignition[785]: Ignition finished successfully Mar 14 00:22:10.910716 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Mar 14 00:22:10.948817 ignition[795]: Ignition 2.19.0 Mar 14 00:22:10.948850 ignition[795]: Stage: disks Mar 14 00:22:10.949224 ignition[795]: no configs at "/usr/lib/ignition/base.d" Mar 14 00:22:10.949246 ignition[795]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 14 00:22:10.958074 systemd[1]: Finished ignition-disks.service - Ignition (disks). Mar 14 00:22:10.951518 ignition[795]: disks: disks passed Mar 14 00:22:10.968937 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Mar 14 00:22:10.951600 ignition[795]: Ignition finished successfully Mar 14 00:22:10.972110 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Mar 14 00:22:10.972217 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 14 00:22:10.972288 systemd[1]: Reached target sysinit.target - System Initialization. Mar 14 00:22:10.972335 systemd[1]: Reached target basic.target - Basic System. Mar 14 00:22:11.008405 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Mar 14 00:22:11.044666 systemd-fsck[805]: ROOT: clean, 14/553520 files, 52654/553472 blocks Mar 14 00:22:11.053812 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Mar 14 00:22:11.080180 systemd[1]: Mounting sysroot.mount - /sysroot... Mar 14 00:22:11.253959 kernel: EXT4-fs (vda9): mounted filesystem 08e1a4ba-bbe3-4d29-aaf8-5eb22e9a9bf3 r/w with ordered data mode. Quota mode: none. Mar 14 00:22:11.255865 systemd[1]: Mounted sysroot.mount - /sysroot. Mar 14 00:22:11.261741 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Mar 14 00:22:11.290331 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 14 00:22:11.304795 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Mar 14 00:22:11.313453 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Mar 14 00:22:11.313551 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Mar 14 00:22:11.339224 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (813) Mar 14 00:22:11.313682 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Mar 14 00:22:11.353418 kernel: BTRFS info (device vda6): first mount of filesystem 0ec14b75-fea9-4657-9245-934c6406ae1a Mar 14 00:22:11.353554 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 14 00:22:11.353570 kernel: BTRFS info (device vda6): using free space tree Mar 14 00:22:11.370942 kernel: BTRFS info (device vda6): auto enabling async discard Mar 14 00:22:11.374804 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Mar 14 00:22:11.380700 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Mar 14 00:22:11.404298 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Mar 14 00:22:11.480325 initrd-setup-root[837]: cut: /sysroot/etc/passwd: No such file or directory Mar 14 00:22:11.494030 initrd-setup-root[844]: cut: /sysroot/etc/group: No such file or directory Mar 14 00:22:11.507274 initrd-setup-root[851]: cut: /sysroot/etc/shadow: No such file or directory Mar 14 00:22:11.522320 initrd-setup-root[858]: cut: /sysroot/etc/gshadow: No such file or directory Mar 14 00:22:11.978955 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Mar 14 00:22:12.004256 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Mar 14 00:22:12.014605 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Mar 14 00:22:12.049251 systemd[1]: sysroot-oem.mount: Deactivated successfully. Mar 14 00:22:12.055490 kernel: BTRFS info (device vda6): last unmount of filesystem 0ec14b75-fea9-4657-9245-934c6406ae1a Mar 14 00:22:12.089572 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Mar 14 00:22:12.252309 ignition[926]: INFO : Ignition 2.19.0 Mar 14 00:22:12.252309 ignition[926]: INFO : Stage: mount Mar 14 00:22:12.257702 ignition[926]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 14 00:22:12.257702 ignition[926]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 14 00:22:12.257702 ignition[926]: INFO : mount: mount passed Mar 14 00:22:12.257702 ignition[926]: INFO : Ignition finished successfully Mar 14 00:22:12.260079 systemd[1]: Finished ignition-mount.service - Ignition (mount). Mar 14 00:22:12.281285 systemd[1]: Starting ignition-files.service - Ignition (files)... Mar 14 00:22:12.299571 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Mar 14 00:22:12.340070 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (939) Mar 14 00:22:12.346913 kernel: BTRFS info (device vda6): first mount of filesystem 0ec14b75-fea9-4657-9245-934c6406ae1a Mar 14 00:22:12.346961 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 14 00:22:12.346974 kernel: BTRFS info (device vda6): using free space tree Mar 14 00:22:12.355930 kernel: BTRFS info (device vda6): auto enabling async discard Mar 14 00:22:12.358234 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Mar 14 00:22:12.423819 ignition[956]: INFO : Ignition 2.19.0 Mar 14 00:22:12.423819 ignition[956]: INFO : Stage: files Mar 14 00:22:12.436354 ignition[956]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 14 00:22:12.436354 ignition[956]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 14 00:22:12.436354 ignition[956]: DEBUG : files: compiled without relabeling support, skipping Mar 14 00:22:12.449648 ignition[956]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Mar 14 00:22:12.449648 ignition[956]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Mar 14 00:22:12.564774 systemd-networkd[781]: eth0: Gained IPv6LL Mar 14 00:22:12.573144 ignition[956]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Mar 14 00:22:12.580421 ignition[956]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Mar 14 00:22:12.586831 unknown[956]: wrote ssh authorized keys file for user: core Mar 14 00:22:12.591233 ignition[956]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Mar 14 00:22:12.597638 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Mar 14 00:22:12.605123 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file 
"/sysroot/etc/flatcar-cgroupv1" Mar 14 00:22:12.605123 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Mar 14 00:22:12.605123 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Mar 14 00:22:12.659103 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Mar 14 00:22:12.770355 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Mar 14 00:22:12.770355 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Mar 14 00:22:12.783963 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Mar 14 00:22:12.925232 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Mar 14 00:22:13.549314 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Mar 14 00:22:13.549314 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh" Mar 14 00:22:13.564299 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh" Mar 14 00:22:13.564299 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml" Mar 14 00:22:13.564299 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml" Mar 14 00:22:13.564299 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file 
"/sysroot/home/core/nfs-pod.yaml" Mar 14 00:22:13.564299 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 14 00:22:13.564299 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 14 00:22:13.564299 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 14 00:22:13.564299 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf" Mar 14 00:22:13.564299 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf" Mar 14 00:22:13.564299 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Mar 14 00:22:13.646851 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Mar 14 00:22:13.646851 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Mar 14 00:22:13.646851 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-x86-64.raw: attempt #1 Mar 14 00:22:13.833125 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK Mar 14 00:22:17.165021 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Mar 14 00:22:17.165021 ignition[956]: INFO : files: op(d): [started] processing 
unit "containerd.service" Mar 14 00:22:17.185487 ignition[956]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Mar 14 00:22:17.185487 ignition[956]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Mar 14 00:22:17.185487 ignition[956]: INFO : files: op(d): [finished] processing unit "containerd.service" Mar 14 00:22:17.185487 ignition[956]: INFO : files: op(f): [started] processing unit "prepare-helm.service" Mar 14 00:22:17.185487 ignition[956]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 14 00:22:17.185487 ignition[956]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 14 00:22:17.185487 ignition[956]: INFO : files: op(f): [finished] processing unit "prepare-helm.service" Mar 14 00:22:17.185487 ignition[956]: INFO : files: op(11): [started] processing unit "coreos-metadata.service" Mar 14 00:22:17.185487 ignition[956]: INFO : files: op(11): op(12): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Mar 14 00:22:17.185487 ignition[956]: INFO : files: op(11): op(12): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Mar 14 00:22:17.185487 ignition[956]: INFO : files: op(11): [finished] processing unit "coreos-metadata.service" Mar 14 00:22:17.185487 ignition[956]: INFO : files: op(13): [started] setting preset to disabled for "coreos-metadata.service" Mar 14 00:22:17.505853 ignition[956]: INFO : files: op(13): op(14): [started] removing enablement symlink(s) for "coreos-metadata.service" Mar 14 00:22:17.564380 ignition[956]: INFO : files: op(13): op(14): 
[finished] removing enablement symlink(s) for "coreos-metadata.service" Mar 14 00:22:17.564380 ignition[956]: INFO : files: op(13): [finished] setting preset to disabled for "coreos-metadata.service" Mar 14 00:22:17.564380 ignition[956]: INFO : files: op(15): [started] setting preset to enabled for "prepare-helm.service" Mar 14 00:22:17.564380 ignition[956]: INFO : files: op(15): [finished] setting preset to enabled for "prepare-helm.service" Mar 14 00:22:17.605100 ignition[956]: INFO : files: createResultFile: createFiles: op(16): [started] writing file "/sysroot/etc/.ignition-result.json" Mar 14 00:22:17.605100 ignition[956]: INFO : files: createResultFile: createFiles: op(16): [finished] writing file "/sysroot/etc/.ignition-result.json" Mar 14 00:22:17.605100 ignition[956]: INFO : files: files passed Mar 14 00:22:17.605100 ignition[956]: INFO : Ignition finished successfully Mar 14 00:22:17.589866 systemd[1]: Finished ignition-files.service - Ignition (files). Mar 14 00:22:17.639375 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Mar 14 00:22:17.651623 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Mar 14 00:22:17.660619 systemd[1]: ignition-quench.service: Deactivated successfully. Mar 14 00:22:17.660792 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Mar 14 00:22:17.677667 initrd-setup-root-after-ignition[984]: grep: /sysroot/oem/oem-release: No such file or directory Mar 14 00:22:17.683031 initrd-setup-root-after-ignition[986]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 14 00:22:17.683031 initrd-setup-root-after-ignition[986]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Mar 14 00:22:17.706924 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 14 00:22:17.695147 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 14 00:22:17.701412 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Mar 14 00:22:17.735444 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Mar 14 00:22:17.840504 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Mar 14 00:22:17.840704 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Mar 14 00:22:17.854961 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Mar 14 00:22:17.861612 systemd[1]: Reached target initrd.target - Initrd Default Target. Mar 14 00:22:17.870845 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Mar 14 00:22:17.886272 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Mar 14 00:22:17.937726 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 14 00:22:17.957494 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Mar 14 00:22:17.981139 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Mar 14 00:22:17.989480 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 14 00:22:17.997472 systemd[1]: Stopped target timers.target - Timer Units. 
Mar 14 00:22:18.004088 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 14 00:22:18.008538 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 14 00:22:18.031162 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 14 00:22:18.037573 systemd[1]: Stopped target basic.target - Basic System.
Mar 14 00:22:18.041840 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 14 00:22:18.053790 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 14 00:22:18.060069 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 14 00:22:18.067714 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 14 00:22:18.090516 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 14 00:22:18.093774 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 14 00:22:18.110683 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 14 00:22:18.131680 systemd[1]: Stopped target swap.target - Swaps.
Mar 14 00:22:18.142658 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 14 00:22:18.143390 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 14 00:22:18.155759 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 14 00:22:18.156135 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 14 00:22:18.173204 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 14 00:22:18.175247 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 14 00:22:18.190146 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 14 00:22:18.190618 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 14 00:22:18.243205 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 14 00:22:18.243771 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 14 00:22:18.260704 systemd[1]: Stopped target paths.target - Path Units.
Mar 14 00:22:18.264739 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 14 00:22:18.269165 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 14 00:22:18.278814 systemd[1]: Stopped target slices.target - Slice Units.
Mar 14 00:22:18.297175 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 14 00:22:18.298752 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 14 00:22:18.298950 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 14 00:22:18.326804 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 14 00:22:18.327018 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 14 00:22:18.360999 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 14 00:22:18.361477 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 14 00:22:18.371444 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 14 00:22:18.371721 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 14 00:22:18.420203 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 14 00:22:18.430565 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 14 00:22:18.431843 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 14 00:22:18.458550 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 14 00:22:18.466589 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 14 00:22:18.478793 ignition[1010]: INFO : Ignition 2.19.0
Mar 14 00:22:18.478793 ignition[1010]: INFO : Stage: umount
Mar 14 00:22:18.478793 ignition[1010]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 14 00:22:18.478793 ignition[1010]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 14 00:22:18.466932 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 14 00:22:18.545518 ignition[1010]: INFO : umount: umount passed
Mar 14 00:22:18.545518 ignition[1010]: INFO : Ignition finished successfully
Mar 14 00:22:18.476525 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 14 00:22:18.476710 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 14 00:22:18.491266 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 14 00:22:18.491471 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 14 00:22:18.497953 systemd[1]: Stopped target network.target - Network.
Mar 14 00:22:18.501951 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 14 00:22:18.502191 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 14 00:22:18.504135 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 14 00:22:18.504262 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 14 00:22:18.504638 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 14 00:22:18.504717 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 14 00:22:18.505665 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 14 00:22:18.505795 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 14 00:22:18.507361 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 14 00:22:18.508971 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 14 00:22:18.523859 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 14 00:22:18.525359 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 14 00:22:18.525512 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 14 00:22:18.549505 systemd-networkd[781]: eth0: DHCPv6 lease lost
Mar 14 00:22:18.549590 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 14 00:22:18.549963 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 14 00:22:18.556962 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 14 00:22:18.557191 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 14 00:22:18.562849 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 14 00:22:18.563039 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 14 00:22:18.568626 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 14 00:22:18.569199 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 14 00:22:18.577799 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 14 00:22:18.577989 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 14 00:22:18.595123 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 14 00:22:18.599437 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 14 00:22:18.599718 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 14 00:22:18.603017 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 14 00:22:18.603104 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 14 00:22:18.829078 systemd-journald[195]: Received SIGTERM from PID 1 (systemd).
Mar 14 00:22:18.604927 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 14 00:22:18.605024 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 14 00:22:18.605521 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 14 00:22:18.605589 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 14 00:22:18.607119 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 14 00:22:18.650353 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 14 00:22:18.650582 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 14 00:22:18.656741 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 14 00:22:18.657171 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 14 00:22:18.660136 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 14 00:22:18.660280 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 14 00:22:18.661800 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 14 00:22:18.661975 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 14 00:22:18.662078 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 14 00:22:18.662168 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 14 00:22:18.663642 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 14 00:22:18.663728 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 14 00:22:18.666594 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 14 00:22:18.666686 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 14 00:22:18.671077 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 14 00:22:18.672584 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 14 00:22:18.672677 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 14 00:22:18.673780 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 14 00:22:18.673862 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 14 00:22:18.694540 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 14 00:22:18.694760 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 14 00:22:18.702301 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 14 00:22:18.713865 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 14 00:22:18.758417 systemd[1]: Switching root.
Mar 14 00:22:19.026932 systemd-journald[195]: Journal stopped
Mar 14 00:22:21.838487 kernel: SELinux: policy capability network_peer_controls=1
Mar 14 00:22:21.838610 kernel: SELinux: policy capability open_perms=1
Mar 14 00:22:21.838631 kernel: SELinux: policy capability extended_socket_class=1
Mar 14 00:22:21.838657 kernel: SELinux: policy capability always_check_network=0
Mar 14 00:22:21.838711 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 14 00:22:21.838729 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 14 00:22:21.838746 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 14 00:22:21.838773 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 14 00:22:21.838791 kernel: audit: type=1403 audit(1773447739.292:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 14 00:22:21.838809 systemd[1]: Successfully loaded SELinux policy in 75.936ms.
Mar 14 00:22:21.841010 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 17.122ms.
Mar 14 00:22:21.841034 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 14 00:22:21.841052 systemd[1]: Detected virtualization kvm.
Mar 14 00:22:21.841107 systemd[1]: Detected architecture x86-64.
Mar 14 00:22:21.841126 systemd[1]: Detected first boot.
Mar 14 00:22:21.841144 systemd[1]: Initializing machine ID from VM UUID.
Mar 14 00:22:21.841161 zram_generator::config[1074]: No configuration found.
Mar 14 00:22:21.841180 systemd[1]: Populated /etc with preset unit settings.
Mar 14 00:22:21.841199 systemd[1]: Queued start job for default target multi-user.target.
Mar 14 00:22:21.841216 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Mar 14 00:22:21.841278 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 14 00:22:21.841303 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 14 00:22:21.841320 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 14 00:22:21.841338 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 14 00:22:21.841356 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 14 00:22:21.841382 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 14 00:22:21.841401 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 14 00:22:21.841418 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 14 00:22:21.841435 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 14 00:22:21.841497 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 14 00:22:21.841517 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 14 00:22:21.841565 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 14 00:22:21.841582 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 14 00:22:21.841600 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 14 00:22:21.841635 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Mar 14 00:22:21.841667 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 14 00:22:21.841685 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 14 00:22:21.841702 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 14 00:22:21.841744 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 14 00:22:21.841763 systemd[1]: Reached target slices.target - Slice Units.
Mar 14 00:22:21.841780 systemd[1]: Reached target swap.target - Swaps.
Mar 14 00:22:21.841817 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 14 00:22:21.841835 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 14 00:22:21.841854 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 14 00:22:21.841871 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Mar 14 00:22:21.841937 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 14 00:22:21.841989 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 14 00:22:21.842012 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 14 00:22:21.842033 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 14 00:22:21.842052 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 14 00:22:21.842074 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 14 00:22:21.842094 systemd[1]: Mounting media.mount - External Media Directory...
Mar 14 00:22:21.842292 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 14 00:22:21.842314 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 14 00:22:21.842338 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 14 00:22:21.846938 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 14 00:22:21.846986 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 14 00:22:21.847008 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 14 00:22:21.847028 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 14 00:22:21.847080 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 14 00:22:21.847101 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 14 00:22:21.847120 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 14 00:22:21.847138 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 14 00:22:21.847158 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 14 00:22:21.847186 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 14 00:22:21.847205 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 14 00:22:21.849354 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Mar 14 00:22:21.849397 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Mar 14 00:22:21.849420 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 14 00:22:21.849440 kernel: fuse: init (API version 7.39)
Mar 14 00:22:21.849462 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 14 00:22:21.849484 kernel: loop: module loaded
Mar 14 00:22:21.849503 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 14 00:22:21.849741 systemd-journald[1165]: Collecting audit messages is disabled.
Mar 14 00:22:21.849778 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 14 00:22:21.849798 systemd-journald[1165]: Journal started
Mar 14 00:22:21.849828 systemd-journald[1165]: Runtime Journal (/run/log/journal/57c79be31fcc42b7bbb0502db6a512c1) is 6.0M, max 48.4M, 42.3M free.
Mar 14 00:22:21.867942 kernel: ACPI: bus type drm_connector registered
Mar 14 00:22:21.868008 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 14 00:22:21.883964 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 14 00:22:21.906255 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 14 00:22:21.942022 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 14 00:22:21.950679 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 14 00:22:21.956535 systemd[1]: Mounted media.mount - External Media Directory.
Mar 14 00:22:21.961054 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 14 00:22:21.969103 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 14 00:22:21.979205 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 14 00:22:21.999217 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 14 00:22:22.022977 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 14 00:22:22.033980 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 14 00:22:22.036328 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 14 00:22:22.044735 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 14 00:22:22.045403 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 14 00:22:22.058189 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 14 00:22:22.066457 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 14 00:22:22.105211 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 14 00:22:22.105978 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 14 00:22:22.127185 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 14 00:22:22.127638 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 14 00:22:22.135733 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 14 00:22:22.136196 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 14 00:22:22.142533 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 14 00:22:22.147659 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 14 00:22:22.153002 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 14 00:22:22.182926 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 14 00:22:22.203128 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 14 00:22:22.221786 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 14 00:22:22.226414 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 14 00:22:22.234074 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 14 00:22:22.256599 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 14 00:22:22.271166 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 14 00:22:22.280294 systemd-journald[1165]: Time spent on flushing to /var/log/journal/57c79be31fcc42b7bbb0502db6a512c1 is 88.107ms for 934 entries.
Mar 14 00:22:22.280294 systemd-journald[1165]: System Journal (/var/log/journal/57c79be31fcc42b7bbb0502db6a512c1) is 8.0M, max 195.6M, 187.6M free.
Mar 14 00:22:22.406830 systemd-journald[1165]: Received client request to flush runtime journal.
Mar 14 00:22:22.295702 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 14 00:22:22.300807 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 14 00:22:22.306016 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 14 00:22:22.363160 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 14 00:22:22.374636 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 14 00:22:22.383140 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 14 00:22:22.394052 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 14 00:22:22.401506 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 14 00:22:22.413475 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 14 00:22:22.441309 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 14 00:22:22.651466 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Mar 14 00:22:22.685076 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 14 00:22:22.695293 systemd-tmpfiles[1211]: ACLs are not supported, ignoring.
Mar 14 00:22:22.695967 systemd-tmpfiles[1211]: ACLs are not supported, ignoring.
Mar 14 00:22:22.697086 udevadm[1221]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Mar 14 00:22:22.707779 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 14 00:22:22.734568 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 14 00:22:22.948410 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 14 00:22:22.965268 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 14 00:22:23.008317 systemd-tmpfiles[1233]: ACLs are not supported, ignoring.
Mar 14 00:22:23.008368 systemd-tmpfiles[1233]: ACLs are not supported, ignoring.
Mar 14 00:22:23.029641 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 14 00:22:25.186376 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 14 00:22:25.242001 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 14 00:22:25.513377 systemd-udevd[1239]: Using default interface naming scheme 'v255'.
Mar 14 00:22:25.575505 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 14 00:22:25.592129 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 14 00:22:25.628112 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 14 00:22:25.663629 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1249)
Mar 14 00:22:25.895766 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0.
Mar 14 00:22:25.937032 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Mar 14 00:22:26.104949 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Mar 14 00:22:26.129959 kernel: ACPI: button: Power Button [PWRF]
Mar 14 00:22:26.361966 systemd-networkd[1248]: lo: Link UP
Mar 14 00:22:26.364574 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Mar 14 00:22:26.361998 systemd-networkd[1248]: lo: Gained carrier
Mar 14 00:22:26.362311 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 14 00:22:26.365384 systemd-networkd[1248]: Enumeration completed
Mar 14 00:22:26.366465 systemd-networkd[1248]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 14 00:22:26.366498 systemd-networkd[1248]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 14 00:22:26.369566 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 14 00:22:26.370701 systemd-networkd[1248]: eth0: Link UP
Mar 14 00:22:26.370710 systemd-networkd[1248]: eth0: Gained carrier
Mar 14 00:22:26.370772 systemd-networkd[1248]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 14 00:22:26.379435 systemd-networkd[1248]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 14 00:22:26.392987 systemd-networkd[1248]: eth0: DHCPv4 address 10.0.0.71/16, gateway 10.0.0.1 acquired from 10.0.0.1
Mar 14 00:22:26.393164 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Mar 14 00:22:26.426858 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Mar 14 00:22:26.427622 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Mar 14 00:22:26.428043 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Mar 14 00:22:26.689961 kernel: mousedev: PS/2 mouse device common for all mice
Mar 14 00:22:26.705431 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 14 00:22:27.288995 kernel: kvm_amd: TSC scaling supported
Mar 14 00:22:27.289377 kernel: kvm_amd: Nested Virtualization enabled
Mar 14 00:22:27.289404 kernel: kvm_amd: Nested Paging enabled
Mar 14 00:22:27.289423 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Mar 14 00:22:27.289442 kernel: kvm_amd: PMU virtualization is disabled
Mar 14 00:22:27.399945 kernel: EDAC MC: Ver: 3.0.0
Mar 14 00:22:27.478624 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Mar 14 00:22:27.549968 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Mar 14 00:22:27.555152 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 14 00:22:27.576656 lvm[1284]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 14 00:22:27.631010 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Mar 14 00:22:27.638039 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 14 00:22:27.660773 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Mar 14 00:22:27.670464 lvm[1288]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 14 00:22:27.776932 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Mar 14 00:22:27.784337 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 14 00:22:27.789206 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 14 00:22:27.789353 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 14 00:22:27.793963 systemd[1]: Reached target machines.target - Containers.
Mar 14 00:22:27.800679 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Mar 14 00:22:27.825406 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 14 00:22:27.835530 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 14 00:22:27.839841 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 14 00:22:27.842035 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Mar 14 00:22:27.853024 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Mar 14 00:22:27.862484 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 14 00:22:27.869209 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 14 00:22:27.884454 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Mar 14 00:22:27.903064 kernel: loop0: detected capacity change from 0 to 140768
Mar 14 00:22:27.923800 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 14 00:22:27.925571 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Mar 14 00:22:28.074112 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 14 00:22:28.100293 systemd-networkd[1248]: eth0: Gained IPv6LL
Mar 14 00:22:28.109523 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Mar 14 00:22:28.123606 kernel: loop1: detected capacity change from 0 to 228704
Mar 14 00:22:28.171003 kernel: loop2: detected capacity change from 0 to 142488
Mar 14 00:22:28.354953 kernel: loop3: detected capacity change from 0 to 140768
Mar 14 00:22:28.502056 kernel: loop4: detected capacity change from 0 to 228704
Mar 14 00:22:28.644930 kernel: loop5: detected capacity change from 0 to 142488
Mar 14 00:22:28.664817 (sd-merge)[1311]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Mar 14 00:22:28.666057 (sd-merge)[1311]: Merged extensions into '/usr'.
Mar 14 00:22:28.672783 systemd[1]: Reloading requested from client PID 1297 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 14 00:22:28.672823 systemd[1]: Reloading...
Mar 14 00:22:29.049014 zram_generator::config[1341]: No configuration found.
Mar 14 00:22:29.680595 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 14 00:22:29.730358 ldconfig[1293]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 14 00:22:29.806968 systemd[1]: Reloading finished in 1133 ms.
Mar 14 00:22:29.846812 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 14 00:22:29.852738 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 14 00:22:29.883692 systemd[1]: Starting ensure-sysext.service...
Mar 14 00:22:29.892098 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 14 00:22:29.903079 systemd[1]: Reloading requested from client PID 1382 ('systemctl') (unit ensure-sysext.service)...
Mar 14 00:22:29.903102 systemd[1]: Reloading...
Mar 14 00:22:29.976249 systemd-tmpfiles[1383]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 14 00:22:29.977087 systemd-tmpfiles[1383]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 14 00:22:29.979671 systemd-tmpfiles[1383]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 14 00:22:29.980256 systemd-tmpfiles[1383]: ACLs are not supported, ignoring.
Mar 14 00:22:29.980631 systemd-tmpfiles[1383]: ACLs are not supported, ignoring.
Mar 14 00:22:29.994200 systemd-tmpfiles[1383]: Detected autofs mount point /boot during canonicalization of boot.
Mar 14 00:22:29.994224 systemd-tmpfiles[1383]: Skipping /boot
Mar 14 00:22:30.060379 systemd-tmpfiles[1383]: Detected autofs mount point /boot during canonicalization of boot.
Mar 14 00:22:30.060806 systemd-tmpfiles[1383]: Skipping /boot
Mar 14 00:22:30.061981 zram_generator::config[1409]: No configuration found.
Mar 14 00:22:30.651411 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 14 00:22:30.786739 systemd[1]: Reloading finished in 880 ms.
Mar 14 00:22:30.843711 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 14 00:22:30.891222 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Mar 14 00:22:30.900643 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 14 00:22:30.909355 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 14 00:22:30.935182 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 14 00:22:30.949176 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 14 00:22:30.961386 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 14 00:22:30.961757 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 14 00:22:30.966417 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 14 00:22:30.975241 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 14 00:22:30.985824 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 14 00:22:30.990615 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 14 00:22:30.991155 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 14 00:22:30.992602 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 14 00:22:30.996810 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 14 00:22:31.014539 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 14 00:22:31.015175 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 14 00:22:31.032497 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 14 00:22:31.073070 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 14 00:22:31.076096 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 14 00:22:31.138475 systemd[1]: Finished ensure-sysext.service.
Mar 14 00:22:31.162991 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 14 00:22:31.184155 augenrules[1490]: No rules
Mar 14 00:22:31.192033 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Mar 14 00:22:31.209859 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 14 00:22:31.211420 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 14 00:22:31.257525 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 14 00:22:31.283333 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 14 00:22:31.291961 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 14 00:22:31.310120 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 14 00:22:31.322091 systemd-resolved[1460]: Positive Trust Anchors:
Mar 14 00:22:31.322107 systemd-resolved[1460]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 14 00:22:31.322155 systemd-resolved[1460]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 14 00:22:31.322167 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 14 00:22:31.327713 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Mar 14 00:22:31.329075 systemd-resolved[1460]: Defaulting to hostname 'linux'.
Mar 14 00:22:31.333652 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 14 00:22:31.337124 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 14 00:22:31.338114 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 14 00:22:31.342344 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 14 00:22:31.347388 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 14 00:22:31.347744 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 14 00:22:31.352636 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 14 00:22:31.353366 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 14 00:22:31.358592 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 14 00:22:31.359010 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 14 00:22:31.364021 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 14 00:22:31.364689 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 14 00:22:31.374466 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 14 00:22:31.382633 systemd[1]: Reached target network.target - Network.
Mar 14 00:22:31.386462 systemd[1]: Reached target network-online.target - Network is Online.
Mar 14 00:22:31.391362 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 14 00:22:31.396526 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 14 00:22:31.396699 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 14 00:22:31.396755 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 14 00:22:31.832605 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Mar 14 00:22:31.836657 systemd-timesyncd[1509]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Mar 14 00:22:31.836759 systemd-timesyncd[1509]: Initial clock synchronization to Sat 2026-03-14 00:22:31.861888 UTC.
Mar 14 00:22:31.839949 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 14 00:22:31.848005 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Mar 14 00:22:31.855135 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Mar 14 00:22:31.863315 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Mar 14 00:22:31.869076 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 14 00:22:31.869130 systemd[1]: Reached target paths.target - Path Units.
Mar 14 00:22:31.871614 systemd[1]: Reached target time-set.target - System Time Set.
Mar 14 00:22:31.874722 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Mar 14 00:22:31.877788 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Mar 14 00:22:31.881421 systemd[1]: Reached target timers.target - Timer Units.
Mar 14 00:22:31.886157 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Mar 14 00:22:31.892563 systemd[1]: Starting docker.socket - Docker Socket for the API...
Mar 14 00:22:31.897640 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Mar 14 00:22:31.904108 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Mar 14 00:22:31.907023 systemd[1]: Reached target sockets.target - Socket Units.
Mar 14 00:22:31.909657 systemd[1]: Reached target basic.target - Basic System.
Mar 14 00:22:31.916562 systemd[1]: System is tainted: cgroupsv1
Mar 14 00:22:31.917359 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Mar 14 00:22:31.917416 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Mar 14 00:22:31.920025 systemd[1]: Starting containerd.service - containerd container runtime...
Mar 14 00:22:31.926506 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Mar 14 00:22:31.932622 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Mar 14 00:22:31.981367 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Mar 14 00:22:32.043248 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Mar 14 00:22:32.056812 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Mar 14 00:22:32.063740 jq[1527]: false
Mar 14 00:22:32.081215 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 14 00:22:32.129835 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Mar 14 00:22:32.161331 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Mar 14 00:22:32.169132 extend-filesystems[1529]: Found loop3
Mar 14 00:22:32.169132 extend-filesystems[1529]: Found loop4
Mar 14 00:22:32.169132 extend-filesystems[1529]: Found loop5
Mar 14 00:22:32.169132 extend-filesystems[1529]: Found sr0
Mar 14 00:22:32.169132 extend-filesystems[1529]: Found vda
Mar 14 00:22:32.169132 extend-filesystems[1529]: Found vda1
Mar 14 00:22:32.169132 extend-filesystems[1529]: Found vda2
Mar 14 00:22:32.169132 extend-filesystems[1529]: Found vda3
Mar 14 00:22:32.169132 extend-filesystems[1529]: Found usr
Mar 14 00:22:32.169132 extend-filesystems[1529]: Found vda4
Mar 14 00:22:32.169132 extend-filesystems[1529]: Found vda6
Mar 14 00:22:32.169132 extend-filesystems[1529]: Found vda7
Mar 14 00:22:32.266867 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Mar 14 00:22:32.175252 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Mar 14 00:22:32.189323 dbus-daemon[1525]: [system] SELinux support is enabled
Mar 14 00:22:32.275204 extend-filesystems[1529]: Found vda9
Mar 14 00:22:32.275204 extend-filesystems[1529]: Checking size of /dev/vda9
Mar 14 00:22:32.275204 extend-filesystems[1529]: Resized partition /dev/vda9
Mar 14 00:22:32.185438 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Mar 14 00:22:32.316260 extend-filesystems[1548]: resize2fs 1.47.1 (20-May-2024)
Mar 14 00:22:32.199177 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Mar 14 00:22:32.231195 systemd[1]: Starting systemd-logind.service - User Login Management...
Mar 14 00:22:32.235855 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Mar 14 00:22:32.245215 systemd[1]: Starting update-engine.service - Update Engine...
Mar 14 00:22:32.256147 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Mar 14 00:22:32.264275 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Mar 14 00:22:32.292547 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Mar 14 00:22:32.370436 jq[1554]: true
Mar 14 00:22:32.293150 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Mar 14 00:22:32.298666 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Mar 14 00:22:32.317630 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Mar 14 00:22:32.318437 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Mar 14 00:22:32.371955 jq[1571]: true
Mar 14 00:22:32.386463 systemd[1]: motdgen.service: Deactivated successfully.
Mar 14 00:22:32.387093 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Mar 14 00:22:32.387972 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1578)
Mar 14 00:22:32.389582 (ntainerd)[1572]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Mar 14 00:22:32.425577 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Mar 14 00:22:32.495041 extend-filesystems[1548]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Mar 14 00:22:32.495041 extend-filesystems[1548]: old_desc_blocks = 1, new_desc_blocks = 1
Mar 14 00:22:32.495041 extend-filesystems[1548]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Mar 14 00:22:32.543063 update_engine[1551]: I20260314 00:22:32.458269 1551 main.cc:92] Flatcar Update Engine starting
Mar 14 00:22:32.543063 update_engine[1551]: I20260314 00:22:32.475697 1551 update_check_scheduler.cc:74] Next update check in 5m40s
Mar 14 00:22:32.544370 tar[1569]: linux-amd64/LICENSE
Mar 14 00:22:32.544370 tar[1569]: linux-amd64/helm
Mar 14 00:22:32.459360 systemd[1]: coreos-metadata.service: Deactivated successfully.
Mar 14 00:22:32.545110 extend-filesystems[1529]: Resized filesystem in /dev/vda9
Mar 14 00:22:32.459840 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Mar 14 00:22:32.487644 systemd[1]: Started update-engine.service - Update Engine.
Mar 14 00:22:32.493444 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Mar 14 00:22:32.493571 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Mar 14 00:22:32.493612 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Mar 14 00:22:32.504322 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Mar 14 00:22:32.504413 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Mar 14 00:22:32.532732 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Mar 14 00:22:32.544233 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Mar 14 00:22:32.546354 systemd-logind[1547]: Watching system buttons on /dev/input/event1 (Power Button)
Mar 14 00:22:32.546397 systemd-logind[1547]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Mar 14 00:22:32.552110 systemd[1]: extend-filesystems.service: Deactivated successfully.
Mar 14 00:22:32.552398 systemd-logind[1547]: New seat seat0.
Mar 14 00:22:32.562124 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Mar 14 00:22:32.566925 systemd[1]: Started systemd-logind.service - User Login Management.
Mar 14 00:22:32.661812 bash[1618]: Updated "/home/core/.ssh/authorized_keys"
Mar 14 00:22:32.824160 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Mar 14 00:22:32.832819 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Mar 14 00:22:32.864419 locksmithd[1607]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Mar 14 00:22:33.231027 sshd_keygen[1558]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Mar 14 00:22:33.584719 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Mar 14 00:22:33.601359 systemd[1]: Starting issuegen.service - Generate /run/issue...
Mar 14 00:22:33.632217 systemd[1]: issuegen.service: Deactivated successfully.
Mar 14 00:22:33.632693 systemd[1]: Finished issuegen.service - Generate /run/issue.
Mar 14 00:22:33.891191 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Mar 14 00:22:33.963304 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Mar 14 00:22:34.136393 systemd[1]: Started getty@tty1.service - Getty on tty1.
Mar 14 00:22:34.164429 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Mar 14 00:22:34.173580 systemd[1]: Reached target getty.target - Login Prompts.
Mar 14 00:22:34.650207 containerd[1572]: time="2026-03-14T00:22:34.648552268Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Mar 14 00:22:34.785527 containerd[1572]: time="2026-03-14T00:22:34.784471797Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Mar 14 00:22:34.791362 containerd[1572]: time="2026-03-14T00:22:34.789540095Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Mar 14 00:22:34.791362 containerd[1572]: time="2026-03-14T00:22:34.789678717Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Mar 14 00:22:34.791362 containerd[1572]: time="2026-03-14T00:22:34.789702746Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Mar 14 00:22:34.791362 containerd[1572]: time="2026-03-14T00:22:34.790139712Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Mar 14 00:22:34.791362 containerd[1572]: time="2026-03-14T00:22:34.790186254Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Mar 14 00:22:34.791362 containerd[1572]: time="2026-03-14T00:22:34.790284615Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Mar 14 00:22:34.791362 containerd[1572]: time="2026-03-14T00:22:34.790306247Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Mar 14 00:22:34.791362 containerd[1572]: time="2026-03-14T00:22:34.790704979Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 14 00:22:34.791362 containerd[1572]: time="2026-03-14T00:22:34.790734275Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Mar 14 00:22:34.791362 containerd[1572]: time="2026-03-14T00:22:34.790756317Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Mar 14 00:22:34.791362 containerd[1572]: time="2026-03-14T00:22:34.790773211Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Mar 14 00:22:34.792990 containerd[1572]: time="2026-03-14T00:22:34.790988826Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Mar 14 00:22:34.792990 containerd[1572]: time="2026-03-14T00:22:34.791432766Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Mar 14 00:22:34.792990 containerd[1572]: time="2026-03-14T00:22:34.791727188Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 14 00:22:34.792990 containerd[1572]: time="2026-03-14T00:22:34.791751125Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Mar 14 00:22:34.792990 containerd[1572]: time="2026-03-14T00:22:34.791941949Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Mar 14 00:22:34.792990 containerd[1572]: time="2026-03-14T00:22:34.792056764Z" level=info msg="metadata content store policy set" policy=shared
Mar 14 00:22:34.813599 containerd[1572]: time="2026-03-14T00:22:34.812260306Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Mar 14 00:22:34.813599 containerd[1572]: time="2026-03-14T00:22:34.812576639Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Mar 14 00:22:34.817101 containerd[1572]: time="2026-03-14T00:22:34.814194282Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Mar 14 00:22:34.817101 containerd[1572]: time="2026-03-14T00:22:34.815844493Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Mar 14 00:22:34.817101 containerd[1572]: time="2026-03-14T00:22:34.816082100Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Mar 14 00:22:34.817101 containerd[1572]: time="2026-03-14T00:22:34.816489028Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Mar 14 00:22:34.823285 containerd[1572]: time="2026-03-14T00:22:34.822755597Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Mar 14 00:22:34.823639 containerd[1572]: time="2026-03-14T00:22:34.823595289Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Mar 14 00:22:34.823639 containerd[1572]: time="2026-03-14T00:22:34.823636805Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Mar 14 00:22:34.823639 containerd[1572]: time="2026-03-14T00:22:34.823657382Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Mar 14 00:22:34.823639 containerd[1572]: time="2026-03-14T00:22:34.823677267Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Mar 14 00:22:34.823852 containerd[1572]: time="2026-03-14T00:22:34.823717760Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Mar 14 00:22:34.823852 containerd[1572]: time="2026-03-14T00:22:34.823756326Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Mar 14 00:22:34.823852 containerd[1572]: time="2026-03-14T00:22:34.823778398Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Mar 14 00:22:34.823852 containerd[1572]: time="2026-03-14T00:22:34.823799788Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Mar 14 00:22:34.823852 containerd[1572]: time="2026-03-14T00:22:34.823820455Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Mar 14 00:22:34.823852 containerd[1572]: time="2026-03-14T00:22:34.823839688Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Mar 14 00:22:34.824150 containerd[1572]: time="2026-03-14T00:22:34.823858751Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Mar 14 00:22:34.824150 containerd[1572]: time="2026-03-14T00:22:34.823933133Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Mar 14 00:22:34.824150 containerd[1572]: time="2026-03-14T00:22:34.823955416Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Mar 14 00:22:34.824150 containerd[1572]: time="2026-03-14T00:22:34.823974559Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Mar 14 00:22:34.824150 containerd[1572]: time="2026-03-14T00:22:34.823992417Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Mar 14 00:22:34.824150 containerd[1572]: time="2026-03-14T00:22:34.824009643Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Mar 14 00:22:34.824150 containerd[1572]: time="2026-03-14T00:22:34.824028385Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Mar 14 00:22:34.824150 containerd[1572]: time="2026-03-14T00:22:34.824044637Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Mar 14 00:22:34.824150 containerd[1572]: time="2026-03-14T00:22:34.824064201Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Mar 14 00:22:34.824150 containerd[1572]: time="2026-03-14T00:22:34.824121810Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Mar 14 00:22:34.825092 containerd[1572]: time="2026-03-14T00:22:34.824177512Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Mar 14 00:22:34.825092 containerd[1572]: time="2026-03-14T00:22:34.824199794Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Mar 14 00:22:34.825092 containerd[1572]: time="2026-03-14T00:22:34.824217843Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Mar 14 00:22:34.825092 containerd[1572]: time="2026-03-14T00:22:34.824258636Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Mar 14 00:22:34.825092 containerd[1572]: time="2026-03-14T00:22:34.824286286Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Mar 14 00:22:34.825092 containerd[1572]: time="2026-03-14T00:22:34.824351700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Mar 14 00:22:34.825092 containerd[1572]: time="2026-03-14T00:22:34.824372378Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Mar 14 00:22:34.825092 containerd[1572]: time="2026-03-14T00:22:34.824422221Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Mar 14 00:22:34.825092 containerd[1572]: time="2026-03-14T00:22:34.824679150Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Mar 14 00:22:34.825352 containerd[1572]: time="2026-03-14T00:22:34.825159579Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Mar 14 00:22:34.825352 containerd[1572]: time="2026-03-14T00:22:34.825185102Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Mar 14 00:22:34.825352 containerd[1572]: time="2026-03-14T00:22:34.825207516Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Mar 14 00:22:34.825352 containerd[1572]: time="2026-03-14T00:22:34.825223097Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Mar 14 00:22:34.825352 containerd[1572]: time="2026-03-14T00:22:34.825244607Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Mar 14 00:22:34.825352 containerd[1572]: time="2026-03-14T00:22:34.825308425Z" level=info msg="NRI interface is disabled by configuration."
Mar 14 00:22:34.825352 containerd[1572]: time="2026-03-14T00:22:34.825328912Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Mar 14 00:22:34.827317 containerd[1572]: time="2026-03-14T00:22:34.825955469Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Mar 14 00:22:34.827317 containerd[1572]: time="2026-03-14T00:22:34.826039353Z" level=info msg="Connect containerd service"
Mar 14 00:22:34.827317 containerd[1572]: time="2026-03-14T00:22:34.826146493Z" level=info msg="using legacy CRI server"
Mar 14 00:22:34.827317 containerd[1572]: time="2026-03-14T00:22:34.826159916Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Mar 14 00:22:34.827317 containerd[1572]: time="2026-03-14T00:22:34.826503900Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Mar 14 00:22:34.830717 containerd[1572]: time="2026-03-14T00:22:34.829808635Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 14 00:22:34.830717 containerd[1572]: time="2026-03-14T00:22:34.830218925Z" level=info msg="Start subscribing containerd event"
Mar 14 00:22:34.850941 containerd[1572]: time="2026-03-14T00:22:34.849270770Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Mar 14 00:22:34.850941 containerd[1572]: time="2026-03-14T00:22:34.849415834Z" level=info msg=serving... address=/run/containerd/containerd.sock
Mar 14 00:22:34.850941 containerd[1572]: time="2026-03-14T00:22:34.850267752Z" level=info msg="Start recovering state"
Mar 14 00:22:34.850941 containerd[1572]: time="2026-03-14T00:22:34.850731255Z" level=info msg="Start event monitor"
Mar 14 00:22:34.850941 containerd[1572]: time="2026-03-14T00:22:34.850852090Z" level=info msg="Start snapshots syncer"
Mar 14 00:22:34.851234 containerd[1572]: time="2026-03-14T00:22:34.850977409Z" level=info msg="Start cni network conf syncer for default"
Mar 14 00:22:34.851234 containerd[1572]: time="2026-03-14T00:22:34.851016547Z" level=info msg="Start streaming server"
Mar 14 00:22:34.853631 systemd[1]: Started containerd.service - containerd container runtime.
Mar 14 00:22:34.855487 containerd[1572]: time="2026-03-14T00:22:34.855385116Z" level=info msg="containerd successfully booted in 0.213194s"
Mar 14 00:22:35.600353 tar[1569]: linux-amd64/README.md
Mar 14 00:22:35.644741 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Mar 14 00:22:37.642120 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Mar 14 00:22:37.656387 systemd[1]: Started sshd@0-10.0.0.71:22-10.0.0.1:43362.service - OpenSSH per-connection server daemon (10.0.0.1:43362).
Mar 14 00:22:38.039853 sshd[1661]: Accepted publickey for core from 10.0.0.1 port 43362 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4
Mar 14 00:22:38.047291 sshd[1661]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:22:38.072346 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Mar 14 00:22:38.097596 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Mar 14 00:22:38.105222 systemd-logind[1547]: New session 1 of user core.
Mar 14 00:22:38.454762 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Mar 14 00:22:38.479203 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 14 00:22:38.582265 (systemd)[1667]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 14 00:22:39.306999 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 14 00:22:39.308012 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 14 00:22:39.326560 (kubelet)[1682]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 14 00:22:39.357757 systemd[1667]: Queued start job for default target default.target. Mar 14 00:22:39.359608 systemd[1667]: Created slice app.slice - User Application Slice. Mar 14 00:22:39.359671 systemd[1667]: Reached target paths.target - Paths. Mar 14 00:22:39.359690 systemd[1667]: Reached target timers.target - Timers. Mar 14 00:22:39.366095 systemd[1667]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 14 00:22:39.449027 systemd[1667]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 14 00:22:39.449162 systemd[1667]: Reached target sockets.target - Sockets. Mar 14 00:22:39.449187 systemd[1667]: Reached target basic.target - Basic System. Mar 14 00:22:39.449266 systemd[1667]: Reached target default.target - Main User Target. Mar 14 00:22:39.449328 systemd[1667]: Startup finished in 837ms. Mar 14 00:22:39.449537 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 14 00:22:39.464669 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 14 00:22:39.465546 systemd[1]: Startup finished in 21.245s (kernel) + 20.242s (userspace) = 41.487s. Mar 14 00:22:39.553477 systemd[1]: Started sshd@1-10.0.0.71:22-10.0.0.1:43392.service - OpenSSH per-connection server daemon (10.0.0.1:43392). 
Mar 14 00:22:39.685968 sshd[1692]: Accepted publickey for core from 10.0.0.1 port 43392 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4 Mar 14 00:22:39.689668 sshd[1692]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:22:39.699191 systemd-logind[1547]: New session 2 of user core. Mar 14 00:22:39.710703 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 14 00:22:39.793147 sshd[1692]: pam_unix(sshd:session): session closed for user core Mar 14 00:22:39.804294 systemd[1]: Started sshd@2-10.0.0.71:22-10.0.0.1:43402.service - OpenSSH per-connection server daemon (10.0.0.1:43402). Mar 14 00:22:39.805129 systemd[1]: sshd@1-10.0.0.71:22-10.0.0.1:43392.service: Deactivated successfully. Mar 14 00:22:39.825160 systemd[1]: session-2.scope: Deactivated successfully. Mar 14 00:22:39.833394 systemd-logind[1547]: Session 2 logged out. Waiting for processes to exit. Mar 14 00:22:39.903858 systemd-logind[1547]: Removed session 2. Mar 14 00:22:39.954693 sshd[1702]: Accepted publickey for core from 10.0.0.1 port 43402 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4 Mar 14 00:22:39.958819 sshd[1702]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:22:39.970844 systemd-logind[1547]: New session 3 of user core. Mar 14 00:22:39.981360 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 14 00:22:40.264620 sshd[1702]: pam_unix(sshd:session): session closed for user core Mar 14 00:22:40.281745 systemd[1]: Started sshd@3-10.0.0.71:22-10.0.0.1:50826.service - OpenSSH per-connection server daemon (10.0.0.1:50826). Mar 14 00:22:40.282751 systemd[1]: sshd@2-10.0.0.71:22-10.0.0.1:43402.service: Deactivated successfully. Mar 14 00:22:40.286610 systemd[1]: session-3.scope: Deactivated successfully. Mar 14 00:22:40.290027 systemd-logind[1547]: Session 3 logged out. Waiting for processes to exit. Mar 14 00:22:40.293305 systemd-logind[1547]: Removed session 3. 
Mar 14 00:22:40.336290 sshd[1710]: Accepted publickey for core from 10.0.0.1 port 50826 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4 Mar 14 00:22:40.340526 sshd[1710]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:22:40.351260 systemd-logind[1547]: New session 4 of user core. Mar 14 00:22:40.377571 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 14 00:22:40.511335 sshd[1710]: pam_unix(sshd:session): session closed for user core Mar 14 00:22:40.536549 systemd[1]: Started sshd@4-10.0.0.71:22-10.0.0.1:50840.service - OpenSSH per-connection server daemon (10.0.0.1:50840). Mar 14 00:22:40.537625 systemd[1]: sshd@3-10.0.0.71:22-10.0.0.1:50826.service: Deactivated successfully. Mar 14 00:22:40.543690 systemd[1]: session-4.scope: Deactivated successfully. Mar 14 00:22:40.546391 systemd-logind[1547]: Session 4 logged out. Waiting for processes to exit. Mar 14 00:22:40.549522 systemd-logind[1547]: Removed session 4. Mar 14 00:22:40.616149 sshd[1720]: Accepted publickey for core from 10.0.0.1 port 50840 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4 Mar 14 00:22:40.618749 sshd[1720]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:22:40.637645 systemd-logind[1547]: New session 5 of user core. Mar 14 00:22:40.650596 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 14 00:22:40.866428 sudo[1726]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 14 00:22:40.867137 sudo[1726]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 14 00:22:40.916583 sudo[1726]: pam_unix(sudo:session): session closed for user root Mar 14 00:22:40.964157 sshd[1720]: pam_unix(sshd:session): session closed for user core Mar 14 00:22:41.053053 systemd[1]: Started sshd@5-10.0.0.71:22-10.0.0.1:50856.service - OpenSSH per-connection server daemon (10.0.0.1:50856). 
Mar 14 00:22:41.054047 systemd[1]: sshd@4-10.0.0.71:22-10.0.0.1:50840.service: Deactivated successfully. Mar 14 00:22:41.063384 systemd[1]: session-5.scope: Deactivated successfully. Mar 14 00:22:41.064744 systemd-logind[1547]: Session 5 logged out. Waiting for processes to exit. Mar 14 00:22:41.068819 systemd-logind[1547]: Removed session 5. Mar 14 00:22:41.110308 sshd[1728]: Accepted publickey for core from 10.0.0.1 port 50856 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4 Mar 14 00:22:41.112995 sshd[1728]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:22:41.132651 systemd-logind[1547]: New session 6 of user core. Mar 14 00:22:41.137412 systemd[1]: Started session-6.scope - Session 6 of User core. Mar 14 00:22:41.254474 sudo[1737]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 14 00:22:41.255278 sudo[1737]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 14 00:22:41.265464 sudo[1737]: pam_unix(sudo:session): session closed for user root Mar 14 00:22:41.279734 sudo[1736]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Mar 14 00:22:41.280444 sudo[1736]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 14 00:22:41.312651 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Mar 14 00:22:41.360721 kubelet[1682]: E0314 00:22:41.359492 1682 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 14 00:22:41.362143 auditctl[1740]: No rules Mar 14 00:22:41.362593 systemd[1]: audit-rules.service: Deactivated successfully. 
Mar 14 00:22:41.363724 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Mar 14 00:22:41.366536 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 14 00:22:41.367355 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 14 00:22:41.389852 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Mar 14 00:22:41.462509 augenrules[1763]: No rules Mar 14 00:22:41.464679 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Mar 14 00:22:41.470728 sudo[1736]: pam_unix(sudo:session): session closed for user root Mar 14 00:22:41.479225 sshd[1728]: pam_unix(sshd:session): session closed for user core Mar 14 00:22:41.484714 systemd[1]: Started sshd@6-10.0.0.71:22-10.0.0.1:50880.service - OpenSSH per-connection server daemon (10.0.0.1:50880). Mar 14 00:22:41.492375 systemd[1]: sshd@5-10.0.0.71:22-10.0.0.1:50856.service: Deactivated successfully. Mar 14 00:22:41.496129 systemd[1]: session-6.scope: Deactivated successfully. Mar 14 00:22:41.498360 systemd-logind[1547]: Session 6 logged out. Waiting for processes to exit. Mar 14 00:22:41.507507 systemd-logind[1547]: Removed session 6. Mar 14 00:22:41.578393 sshd[1769]: Accepted publickey for core from 10.0.0.1 port 50880 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4 Mar 14 00:22:41.581205 sshd[1769]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:22:41.594074 systemd-logind[1547]: New session 7 of user core. Mar 14 00:22:41.604620 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 14 00:22:41.705045 sudo[1776]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 14 00:22:41.705483 sudo[1776]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 14 00:22:43.550851 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Mar 14 00:22:43.551405 (dockerd)[1794]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 14 00:22:44.105738 dockerd[1794]: time="2026-03-14T00:22:44.105605311Z" level=info msg="Starting up" Mar 14 00:22:45.690604 dockerd[1794]: time="2026-03-14T00:22:45.689778774Z" level=info msg="Loading containers: start." Mar 14 00:22:45.940983 kernel: Initializing XFRM netlink socket Mar 14 00:22:46.173309 systemd-networkd[1248]: docker0: Link UP Mar 14 00:22:46.210363 dockerd[1794]: time="2026-03-14T00:22:46.210228209Z" level=info msg="Loading containers: done." Mar 14 00:22:46.239068 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck134251690-merged.mount: Deactivated successfully. Mar 14 00:22:46.242198 dockerd[1794]: time="2026-03-14T00:22:46.242078213Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 14 00:22:46.242346 dockerd[1794]: time="2026-03-14T00:22:46.242234254Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Mar 14 00:22:46.242489 dockerd[1794]: time="2026-03-14T00:22:46.242446899Z" level=info msg="Daemon has completed initialization" Mar 14 00:22:46.324429 dockerd[1794]: time="2026-03-14T00:22:46.323573972Z" level=info msg="API listen on /run/docker.sock" Mar 14 00:22:46.325343 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 14 00:22:47.238670 containerd[1572]: time="2026-03-14T00:22:47.238557923Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\"" Mar 14 00:22:47.935314 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2655842994.mount: Deactivated successfully. 
Mar 14 00:22:50.460606 containerd[1572]: time="2026-03-14T00:22:50.459616053Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:22:50.463030 containerd[1572]: time="2026-03-14T00:22:50.462939899Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.9: active requests=0, bytes read=30116186" Mar 14 00:22:50.467112 containerd[1572]: time="2026-03-14T00:22:50.466716082Z" level=info msg="ImageCreate event name:\"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:22:50.474491 containerd[1572]: time="2026-03-14T00:22:50.474327166Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:22:50.476755 containerd[1572]: time="2026-03-14T00:22:50.476662712Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.9\" with image id \"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\", size \"30112785\" in 3.237983952s" Mar 14 00:22:50.476943 containerd[1572]: time="2026-03-14T00:22:50.476792740Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\" returns image reference \"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\"" Mar 14 00:22:50.478198 containerd[1572]: time="2026-03-14T00:22:50.478046098Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\"" Mar 14 00:22:51.579572 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
Mar 14 00:22:51.654543 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 14 00:22:56.216804 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 14 00:22:56.306503 (kubelet)[2017]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 14 00:23:01.410266 kubelet[2017]: E0314 00:23:01.409603 2017 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 14 00:23:01.419164 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 14 00:23:01.419695 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 14 00:23:05.082725 containerd[1572]: time="2026-03-14T00:23:05.082348403Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:23:05.086275 containerd[1572]: time="2026-03-14T00:23:05.083250059Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.9: active requests=0, bytes read=26021810" Mar 14 00:23:05.089193 containerd[1572]: time="2026-03-14T00:23:05.089108155Z" level=info msg="ImageCreate event name:\"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:23:05.096350 containerd[1572]: time="2026-03-14T00:23:05.096260696Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:23:05.100641 containerd[1572]: time="2026-03-14T00:23:05.100179903Z" 
level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.9\" with image id \"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\", size \"27678758\" in 14.622043181s" Mar 14 00:23:05.100641 containerd[1572]: time="2026-03-14T00:23:05.100519672Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\" returns image reference \"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\"" Mar 14 00:23:05.106124 containerd[1572]: time="2026-03-14T00:23:05.106033189Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\"" Mar 14 00:23:10.018371 containerd[1572]: time="2026-03-14T00:23:10.017628634Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:23:10.021307 containerd[1572]: time="2026-03-14T00:23:10.018923799Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.9: active requests=0, bytes read=20162746" Mar 14 00:23:10.021307 containerd[1572]: time="2026-03-14T00:23:10.021097794Z" level=info msg="ImageCreate event name:\"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:23:10.027098 containerd[1572]: time="2026-03-14T00:23:10.027004794Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:23:10.029206 containerd[1572]: time="2026-03-14T00:23:10.028949477Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.9\" with image id 
\"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\", size \"21819712\" in 4.922817836s" Mar 14 00:23:10.029206 containerd[1572]: time="2026-03-14T00:23:10.029016762Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\" returns image reference \"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\"" Mar 14 00:23:10.033206 containerd[1572]: time="2026-03-14T00:23:10.033000140Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\"" Mar 14 00:23:11.571538 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 14 00:23:11.612155 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 14 00:23:13.238451 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 14 00:23:13.282737 (kubelet)[2047]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 14 00:23:13.874501 kubelet[2047]: E0314 00:23:13.874039 2047 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 14 00:23:13.881664 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 14 00:23:13.882477 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 14 00:23:14.723202 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1438470592.mount: Deactivated successfully. 
Mar 14 00:23:17.810580 containerd[1572]: time="2026-03-14T00:23:17.810367053Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:23:17.814647 containerd[1572]: time="2026-03-14T00:23:17.814509988Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.9: active requests=0, bytes read=31828647" Mar 14 00:23:17.816729 containerd[1572]: time="2026-03-14T00:23:17.816617852Z" level=info msg="ImageCreate event name:\"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:23:17.821781 containerd[1572]: time="2026-03-14T00:23:17.821488392Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:23:17.824839 containerd[1572]: time="2026-03-14T00:23:17.823140275Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.9\" with image id \"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\", repo tag \"registry.k8s.io/kube-proxy:v1.33.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\", size \"31827666\" in 7.790084604s" Mar 14 00:23:17.825706 containerd[1572]: time="2026-03-14T00:23:17.825421253Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\" returns image reference \"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\"" Mar 14 00:23:17.833999 containerd[1572]: time="2026-03-14T00:23:17.833866547Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Mar 14 00:23:18.014685 update_engine[1551]: I20260314 00:23:18.010509 1551 update_attempter.cc:509] Updating boot flags... 
Mar 14 00:23:18.286942 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2069) Mar 14 00:23:18.390964 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2070) Mar 14 00:23:19.020061 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount807963615.mount: Deactivated successfully. Mar 14 00:23:20.845979 containerd[1572]: time="2026-03-14T00:23:20.845812527Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:23:20.848394 containerd[1572]: time="2026-03-14T00:23:20.848278419Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Mar 14 00:23:20.850146 containerd[1572]: time="2026-03-14T00:23:20.850075132Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:23:20.858839 containerd[1572]: time="2026-03-14T00:23:20.858741489Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:23:20.860739 containerd[1572]: time="2026-03-14T00:23:20.860651061Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 3.026631476s" Mar 14 00:23:20.860739 containerd[1572]: time="2026-03-14T00:23:20.860713984Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference 
\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Mar 14 00:23:20.864218 containerd[1572]: time="2026-03-14T00:23:20.864166035Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Mar 14 00:23:21.781397 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1181991288.mount: Deactivated successfully. Mar 14 00:23:21.827627 containerd[1572]: time="2026-03-14T00:23:21.826510380Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:23:21.831827 containerd[1572]: time="2026-03-14T00:23:21.831477476Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Mar 14 00:23:21.836819 containerd[1572]: time="2026-03-14T00:23:21.835689566Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:23:21.868566 containerd[1572]: time="2026-03-14T00:23:21.868090011Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:23:21.875397 containerd[1572]: time="2026-03-14T00:23:21.870375427Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.006148644s" Mar 14 00:23:21.875397 containerd[1572]: time="2026-03-14T00:23:21.870417448Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Mar 14 00:23:21.875397 containerd[1572]: time="2026-03-14T00:23:21.873325579Z" 
level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\"" Mar 14 00:23:22.569582 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount569355893.mount: Deactivated successfully. Mar 14 00:23:24.064795 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Mar 14 00:23:24.077468 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 14 00:23:24.750745 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 14 00:23:24.778660 (kubelet)[2192]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 14 00:23:25.047679 kubelet[2192]: E0314 00:23:25.046414 2192 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 14 00:23:25.056487 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 14 00:23:25.057095 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Mar 14 00:23:26.154864 containerd[1572]: time="2026-03-14T00:23:26.154658419Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:23:26.158025 containerd[1572]: time="2026-03-14T00:23:26.157855238Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=23718840" Mar 14 00:23:26.159997 containerd[1572]: time="2026-03-14T00:23:26.159791277Z" level=info msg="ImageCreate event name:\"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:23:26.166513 containerd[1572]: time="2026-03-14T00:23:26.166242205Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:23:26.170013 containerd[1572]: time="2026-03-14T00:23:26.169431160Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"23716032\" in 4.296065487s" Mar 14 00:23:26.170013 containerd[1572]: time="2026-03-14T00:23:26.169489162Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\"" Mar 14 00:23:30.514443 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 14 00:23:30.529194 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 14 00:23:30.604316 systemd[1]: Reloading requested from client PID 2250 ('systemctl') (unit session-7.scope)... Mar 14 00:23:30.604376 systemd[1]: Reloading... 
Mar 14 00:23:30.830032 zram_generator::config[2289]: No configuration found. Mar 14 00:23:31.117669 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 14 00:23:31.300002 systemd[1]: Reloading finished in 694 ms. Mar 14 00:23:31.370462 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Mar 14 00:23:31.370721 systemd[1]: kubelet.service: Failed with result 'signal'. Mar 14 00:23:31.372004 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 14 00:23:31.388932 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 14 00:23:31.699982 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 14 00:23:31.728671 (kubelet)[2349]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 14 00:23:31.867715 kubelet[2349]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 14 00:23:31.867715 kubelet[2349]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 14 00:23:31.867715 kubelet[2349]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 14 00:23:31.867715 kubelet[2349]: I0314 00:23:31.867786 2349 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 14 00:23:32.911199 kubelet[2349]: I0314 00:23:32.910859 2349 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Mar 14 00:23:32.914616 kubelet[2349]: I0314 00:23:32.912974 2349 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 14 00:23:32.916712 kubelet[2349]: I0314 00:23:32.916646 2349 server.go:956] "Client rotation is on, will bootstrap in background" Mar 14 00:23:32.969182 kubelet[2349]: E0314 00:23:32.969062 2349 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.71:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.71:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 14 00:23:32.972135 kubelet[2349]: I0314 00:23:32.972055 2349 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 14 00:23:32.990389 kubelet[2349]: E0314 00:23:32.990234 2349 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 14 00:23:32.990389 kubelet[2349]: I0314 00:23:32.990334 2349 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Mar 14 00:23:33.002499 kubelet[2349]: I0314 00:23:33.001844 2349 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 14 00:23:33.005754 kubelet[2349]: I0314 00:23:33.004238 2349 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 14 00:23:33.005754 kubelet[2349]: I0314 00:23:33.004325 2349 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Mar 14 00:23:33.005754 kubelet[2349]: I0314 00:23:33.004940 2349 topology_manager.go:138] "Creating topology manager with none policy" Mar 14 00:23:33.005754 
kubelet[2349]: I0314 00:23:33.004954 2349 container_manager_linux.go:303] "Creating device plugin manager" Mar 14 00:23:33.005754 kubelet[2349]: I0314 00:23:33.005382 2349 state_mem.go:36] "Initialized new in-memory state store" Mar 14 00:23:33.122109 kubelet[2349]: I0314 00:23:33.120360 2349 kubelet.go:480] "Attempting to sync node with API server" Mar 14 00:23:33.122109 kubelet[2349]: I0314 00:23:33.121251 2349 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 14 00:23:33.125010 kubelet[2349]: I0314 00:23:33.124751 2349 kubelet.go:386] "Adding apiserver pod source" Mar 14 00:23:33.132052 kubelet[2349]: I0314 00:23:33.130952 2349 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 14 00:23:33.154327 kubelet[2349]: E0314 00:23:33.153759 2349 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.71:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.71:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 14 00:23:33.154327 kubelet[2349]: E0314 00:23:33.153745 2349 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.71:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.71:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 14 00:23:33.160484 kubelet[2349]: I0314 00:23:33.160376 2349 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 14 00:23:33.161991 kubelet[2349]: I0314 00:23:33.161727 2349 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 14 00:23:33.165656 kubelet[2349]: W0314 
00:23:33.164391 2349 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Mar 14 00:23:33.182027 kubelet[2349]: I0314 00:23:33.181971 2349 watchdog_linux.go:99] "Systemd watchdog is not enabled" Mar 14 00:23:33.182164 kubelet[2349]: I0314 00:23:33.182112 2349 server.go:1289] "Started kubelet" Mar 14 00:23:33.183060 kubelet[2349]: I0314 00:23:33.182812 2349 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 14 00:23:33.186429 kubelet[2349]: I0314 00:23:33.185799 2349 server.go:317] "Adding debug handlers to kubelet server" Mar 14 00:23:33.189536 kubelet[2349]: I0314 00:23:33.189438 2349 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 14 00:23:33.191693 kubelet[2349]: I0314 00:23:33.190215 2349 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 14 00:23:33.191693 kubelet[2349]: I0314 00:23:33.190702 2349 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 14 00:23:33.194958 kubelet[2349]: I0314 00:23:33.194822 2349 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 14 00:23:33.197093 kubelet[2349]: I0314 00:23:33.197012 2349 volume_manager.go:297] "Starting Kubelet Volume Manager" Mar 14 00:23:33.201971 kubelet[2349]: E0314 00:23:33.197548 2349 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 14 00:23:33.201971 kubelet[2349]: E0314 00:23:33.191418 2349 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.71:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.71:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.189c8d63d4688c38 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-14 00:23:33.182032952 +0000 UTC m=+1.442888160,LastTimestamp:2026-03-14 00:23:33.182032952 +0000 UTC m=+1.442888160,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 14 00:23:33.201971 kubelet[2349]: I0314 00:23:33.201432 2349 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Mar 14 00:23:33.202629 kubelet[2349]: E0314 00:23:33.202274 2349 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.71:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.71:6443: connect: connection refused" interval="200ms" Mar 14 00:23:33.203262 kubelet[2349]: I0314 00:23:33.202694 2349 reconciler.go:26] "Reconciler: start to sync state" Mar 14 00:23:33.203614 kubelet[2349]: E0314 00:23:33.203551 2349 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.71:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.71:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 14 00:23:33.204382 kubelet[2349]: I0314 00:23:33.204328 2349 factory.go:223] Registration of the systemd container factory successfully Mar 14 00:23:33.204523 kubelet[2349]: I0314 00:23:33.204491 2349 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 14 00:23:33.206955 kubelet[2349]: E0314 00:23:33.206860 2349 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 14 00:23:33.208148 kubelet[2349]: I0314 00:23:33.208130 2349 factory.go:223] Registration of the containerd container factory successfully Mar 14 00:23:33.315087 kubelet[2349]: E0314 00:23:33.312195 2349 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 14 00:23:33.348201 kubelet[2349]: I0314 00:23:33.348127 2349 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 14 00:23:33.348201 kubelet[2349]: I0314 00:23:33.348172 2349 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 14 00:23:33.348380 kubelet[2349]: I0314 00:23:33.348242 2349 state_mem.go:36] "Initialized new in-memory state store" Mar 14 00:23:33.350683 kubelet[2349]: I0314 00:23:33.350652 2349 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Mar 14 00:23:33.365967 kubelet[2349]: I0314 00:23:33.365799 2349 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Mar 14 00:23:33.366124 kubelet[2349]: I0314 00:23:33.366009 2349 status_manager.go:230] "Starting to sync pod status with apiserver" Mar 14 00:23:33.366124 kubelet[2349]: I0314 00:23:33.366072 2349 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Mar 14 00:23:33.366280 kubelet[2349]: I0314 00:23:33.366132 2349 kubelet.go:2436] "Starting kubelet main sync loop" Mar 14 00:23:33.366326 kubelet[2349]: E0314 00:23:33.366270 2349 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 14 00:23:33.368684 kubelet[2349]: E0314 00:23:33.368539 2349 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.71:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.71:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 14 00:23:33.405207 kubelet[2349]: E0314 00:23:33.404754 2349 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.71:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.71:6443: connect: connection refused" interval="400ms" Mar 14 00:23:33.413836 kubelet[2349]: E0314 00:23:33.413551 2349 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 14 00:23:33.421015 kubelet[2349]: I0314 00:23:33.420058 2349 policy_none.go:49] "None policy: Start" Mar 14 00:23:33.421015 kubelet[2349]: I0314 00:23:33.420164 2349 memory_manager.go:186] "Starting memorymanager" policy="None" Mar 14 00:23:33.421015 kubelet[2349]: I0314 00:23:33.420219 2349 state_mem.go:35] "Initializing new in-memory state store" Mar 14 00:23:33.436708 kubelet[2349]: E0314 00:23:33.434654 2349 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 14 00:23:33.436708 kubelet[2349]: I0314 00:23:33.435153 2349 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 14 00:23:33.436708 kubelet[2349]: I0314 00:23:33.435197 2349 
container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 14 00:23:33.440067 kubelet[2349]: I0314 00:23:33.439793 2349 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 14 00:23:33.445727 kubelet[2349]: E0314 00:23:33.445442 2349 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 14 00:23:33.445727 kubelet[2349]: E0314 00:23:33.445660 2349 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Mar 14 00:23:33.485628 kubelet[2349]: E0314 00:23:33.485466 2349 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 14 00:23:33.523542 kubelet[2349]: I0314 00:23:33.518956 2349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 14 00:23:33.523542 kubelet[2349]: I0314 00:23:33.523379 2349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 14 00:23:33.528353 kubelet[2349]: I0314 00:23:33.527714 2349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: 
\"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 14 00:23:33.528353 kubelet[2349]: I0314 00:23:33.527787 2349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 14 00:23:33.528353 kubelet[2349]: I0314 00:23:33.527817 2349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e944e4cb17af904786c3a2e01e298498-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"e944e4cb17af904786c3a2e01e298498\") " pod="kube-system/kube-scheduler-localhost" Mar 14 00:23:33.528353 kubelet[2349]: I0314 00:23:33.527850 2349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6a122388f93222152a312baa29641fcd-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"6a122388f93222152a312baa29641fcd\") " pod="kube-system/kube-apiserver-localhost" Mar 14 00:23:33.528353 kubelet[2349]: I0314 00:23:33.527873 2349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6a122388f93222152a312baa29641fcd-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"6a122388f93222152a312baa29641fcd\") " pod="kube-system/kube-apiserver-localhost" Mar 14 00:23:33.528979 kubelet[2349]: I0314 00:23:33.527965 2349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6a122388f93222152a312baa29641fcd-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: 
\"6a122388f93222152a312baa29641fcd\") " pod="kube-system/kube-apiserver-localhost" Mar 14 00:23:33.528979 kubelet[2349]: I0314 00:23:33.528108 2349 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 14 00:23:33.529201 kubelet[2349]: E0314 00:23:33.529131 2349 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 14 00:23:33.530847 kubelet[2349]: E0314 00:23:33.530508 2349 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 14 00:23:33.549687 kubelet[2349]: I0314 00:23:33.549177 2349 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 14 00:23:33.550513 kubelet[2349]: E0314 00:23:33.550279 2349 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.71:6443/api/v1/nodes\": dial tcp 10.0.0.71:6443: connect: connection refused" node="localhost" Mar 14 00:23:33.766726 kubelet[2349]: I0314 00:23:33.765104 2349 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 14 00:23:33.766726 kubelet[2349]: E0314 00:23:33.765841 2349 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.71:6443/api/v1/nodes\": dial tcp 10.0.0.71:6443: connect: connection refused" node="localhost" Mar 14 00:23:33.789221 kubelet[2349]: E0314 00:23:33.788924 2349 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:23:33.790689 
containerd[1572]: time="2026-03-14T00:23:33.790562570Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:6a122388f93222152a312baa29641fcd,Namespace:kube-system,Attempt:0,}" Mar 14 00:23:33.807356 kubelet[2349]: E0314 00:23:33.807272 2349 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.71:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.71:6443: connect: connection refused" interval="800ms" Mar 14 00:23:33.831848 kubelet[2349]: E0314 00:23:33.831675 2349 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:23:33.835815 containerd[1572]: time="2026-03-14T00:23:33.834150384Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8747e1f8a49a618fbc1324a8fe2d3754,Namespace:kube-system,Attempt:0,}" Mar 14 00:23:33.841967 kubelet[2349]: E0314 00:23:33.836817 2349 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:23:33.843288 containerd[1572]: time="2026-03-14T00:23:33.843168147Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:e944e4cb17af904786c3a2e01e298498,Namespace:kube-system,Attempt:0,}" Mar 14 00:23:34.176216 kubelet[2349]: I0314 00:23:34.176031 2349 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 14 00:23:34.178757 kubelet[2349]: E0314 00:23:34.177814 2349 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.71:6443/api/v1/nodes\": dial tcp 10.0.0.71:6443: connect: connection refused" node="localhost" Mar 14 00:23:34.259778 kubelet[2349]: E0314 00:23:34.259425 2349 reflector.go:200] "Failed to watch" err="failed to 
list *v1.Node: Get \"https://10.0.0.71:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.71:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 14 00:23:34.275097 kubelet[2349]: E0314 00:23:34.275016 2349 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.71:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.71:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 14 00:23:34.394988 kubelet[2349]: E0314 00:23:34.394805 2349 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.71:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.71:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 14 00:23:34.448753 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount521197739.mount: Deactivated successfully. 
Mar 14 00:23:34.558743 containerd[1572]: time="2026-03-14T00:23:34.558240543Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 14 00:23:34.575728 containerd[1572]: time="2026-03-14T00:23:34.573721696Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Mar 14 00:23:34.579789 containerd[1572]: time="2026-03-14T00:23:34.579327669Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 14 00:23:34.582947 containerd[1572]: time="2026-03-14T00:23:34.582442014Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 14 00:23:34.591205 containerd[1572]: time="2026-03-14T00:23:34.590778065Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 14 00:23:34.591205 containerd[1572]: time="2026-03-14T00:23:34.591070050Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 14 00:23:34.594486 containerd[1572]: time="2026-03-14T00:23:34.594421137Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 14 00:23:34.604283 containerd[1572]: time="2026-03-14T00:23:34.604194746Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 813.311407ms" Mar 14 00:23:34.607789 containerd[1572]: time="2026-03-14T00:23:34.606275681Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 14 00:23:34.612116 containerd[1572]: time="2026-03-14T00:23:34.610956865Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 767.134828ms" Mar 14 00:23:34.615040 kubelet[2349]: E0314 00:23:34.614319 2349 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.71:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.71:6443: connect: connection refused" interval="1.6s" Mar 14 00:23:34.618870 containerd[1572]: time="2026-03-14T00:23:34.618739534Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 782.919815ms" Mar 14 00:23:34.857029 kubelet[2349]: E0314 00:23:34.856390 2349 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.71:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.71:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" 
type="*v1.RuntimeClass" Mar 14 00:23:35.023997 kubelet[2349]: I0314 00:23:35.023235 2349 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 14 00:23:35.023997 kubelet[2349]: E0314 00:23:35.024091 2349 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.71:6443/api/v1/nodes\": dial tcp 10.0.0.71:6443: connect: connection refused" node="localhost" Mar 14 00:23:35.308756 kubelet[2349]: E0314 00:23:35.307164 2349 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.71:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.71:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 14 00:23:35.827777 containerd[1572]: time="2026-03-14T00:23:35.827056194Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:23:35.827777 containerd[1572]: time="2026-03-14T00:23:35.827194025Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:23:35.827777 containerd[1572]: time="2026-03-14T00:23:35.827204956Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:23:35.827777 containerd[1572]: time="2026-03-14T00:23:35.827388976Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:23:35.833400 containerd[1572]: time="2026-03-14T00:23:35.829051354Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 14 00:23:35.833400 containerd[1572]: time="2026-03-14T00:23:35.829188364Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 14 00:23:35.833400 containerd[1572]: time="2026-03-14T00:23:35.829216307Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:23:35.833400 containerd[1572]: time="2026-03-14T00:23:35.829494514Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:23:35.856731 containerd[1572]: time="2026-03-14T00:23:35.856531320Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 14 00:23:35.862989 containerd[1572]: time="2026-03-14T00:23:35.861198912Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 14 00:23:35.862989 containerd[1572]: time="2026-03-14T00:23:35.861337003Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:23:35.862989 containerd[1572]: time="2026-03-14T00:23:35.861492658Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:23:36.261157 kubelet[2349]: E0314 00:23:36.259619 2349 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.71:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.71:6443: connect: connection refused" interval="3.2s"
Mar 14 00:23:36.611751 kubelet[2349]: E0314 00:23:36.603080 2349 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.71:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.71:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Mar 14 00:23:36.614508 kubelet[2349]: E0314 00:23:36.603158 2349 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.71:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.71:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Mar 14 00:23:36.627626 kubelet[2349]: I0314 00:23:36.627569 2349 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Mar 14 00:23:36.629086 kubelet[2349]: E0314 00:23:36.628201 2349 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.71:6443/api/v1/nodes\": dial tcp 10.0.0.71:6443: connect: connection refused" node="localhost"
Mar 14 00:23:36.646419 containerd[1572]: time="2026-03-14T00:23:36.646359337Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8747e1f8a49a618fbc1324a8fe2d3754,Namespace:kube-system,Attempt:0,} returns sandbox id \"1147e23acbae02ab918731386ce66b1091f81d1766d1b432dad09547a3838b96\""
Mar 14 00:23:36.649091 containerd[1572]: time="2026-03-14T00:23:36.648940717Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:6a122388f93222152a312baa29641fcd,Namespace:kube-system,Attempt:0,} returns sandbox id \"c195c2b64b07b995f150ce0ecaec6ef9e013a9cae8aa0c7adb90557196eb267c\""
Mar 14 00:23:36.654531 kubelet[2349]: E0314 00:23:36.653744 2349 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:23:36.657323 kubelet[2349]: E0314 00:23:36.655581 2349 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:23:36.666624 containerd[1572]: time="2026-03-14T00:23:36.666536706Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:e944e4cb17af904786c3a2e01e298498,Namespace:kube-system,Attempt:0,} returns sandbox id \"c8960012085e1727fc11abfbd2ab6cfa25284a21e10b24409e63a8c44a2013c8\""
Mar 14 00:23:36.668274 kubelet[2349]: E0314 00:23:36.668199 2349 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:23:36.669938 containerd[1572]: time="2026-03-14T00:23:36.669782661Z" level=info msg="CreateContainer within sandbox \"c195c2b64b07b995f150ce0ecaec6ef9e013a9cae8aa0c7adb90557196eb267c\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Mar 14 00:23:36.675870 containerd[1572]: time="2026-03-14T00:23:36.675675959Z" level=info msg="CreateContainer within sandbox \"1147e23acbae02ab918731386ce66b1091f81d1766d1b432dad09547a3838b96\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Mar 14 00:23:36.685614 containerd[1572]: time="2026-03-14T00:23:36.685468383Z" level=info msg="CreateContainer within sandbox \"c8960012085e1727fc11abfbd2ab6cfa25284a21e10b24409e63a8c44a2013c8\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Mar 14 00:23:36.734369 containerd[1572]: time="2026-03-14T00:23:36.733450237Z" level=info msg="CreateContainer within sandbox \"1147e23acbae02ab918731386ce66b1091f81d1766d1b432dad09547a3838b96\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"7430030447cdfcc0616153020e26cb9bbb33970c022f57aaafe175e93405aacb\""
Mar 14 00:23:36.859236 containerd[1572]: time="2026-03-14T00:23:36.858945251Z" level=info msg="StartContainer for \"7430030447cdfcc0616153020e26cb9bbb33970c022f57aaafe175e93405aacb\""
Mar 14 00:23:36.860440 systemd[1]: run-containerd-runc-k8s.io-c195c2b64b07b995f150ce0ecaec6ef9e013a9cae8aa0c7adb90557196eb267c-runc.FW2gTa.mount: Deactivated successfully.
Mar 14 00:23:36.866520 containerd[1572]: time="2026-03-14T00:23:36.866475165Z" level=info msg="CreateContainer within sandbox \"c8960012085e1727fc11abfbd2ab6cfa25284a21e10b24409e63a8c44a2013c8\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"5ce98d349d60b67ec92469438da31394a0323a6913d68c3a8ef0006ce8b1da15\""
Mar 14 00:23:36.869825 containerd[1572]: time="2026-03-14T00:23:36.869784408Z" level=info msg="StartContainer for \"5ce98d349d60b67ec92469438da31394a0323a6913d68c3a8ef0006ce8b1da15\""
Mar 14 00:23:36.875113 containerd[1572]: time="2026-03-14T00:23:36.875069937Z" level=info msg="CreateContainer within sandbox \"c195c2b64b07b995f150ce0ecaec6ef9e013a9cae8aa0c7adb90557196eb267c\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"36a00d9ac605b65ddee794ad365c29f4f2f2083a0c4e3d1a62d71451040a5211\""
Mar 14 00:23:36.877758 containerd[1572]: time="2026-03-14T00:23:36.877693951Z" level=info msg="StartContainer for \"36a00d9ac605b65ddee794ad365c29f4f2f2083a0c4e3d1a62d71451040a5211\""
Mar 14 00:23:36.930455 kubelet[2349]: E0314 00:23:36.930300 2349 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.71:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.71:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Mar 14 00:23:37.193206 containerd[1572]: time="2026-03-14T00:23:37.191820581Z" level=info msg="StartContainer for \"7430030447cdfcc0616153020e26cb9bbb33970c022f57aaafe175e93405aacb\" returns successfully"
Mar 14 00:23:37.445978 kubelet[2349]: E0314 00:23:37.444686 2349 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.71:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.71:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Mar 14 00:23:37.459186 containerd[1572]: time="2026-03-14T00:23:37.456600995Z" level=info msg="StartContainer for \"5ce98d349d60b67ec92469438da31394a0323a6913d68c3a8ef0006ce8b1da15\" returns successfully"
Mar 14 00:23:37.486753 containerd[1572]: time="2026-03-14T00:23:37.486470247Z" level=info msg="StartContainer for \"36a00d9ac605b65ddee794ad365c29f4f2f2083a0c4e3d1a62d71451040a5211\" returns successfully"
Mar 14 00:23:37.639321 kubelet[2349]: E0314 00:23:37.638450 2349 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 14 00:23:37.662008 kubelet[2349]: E0314 00:23:37.661946 2349 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 14 00:23:37.662225 kubelet[2349]: E0314 00:23:37.662160 2349 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:23:37.664627 kubelet[2349]: E0314 00:23:37.662633 2349 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:23:37.668499 kubelet[2349]: E0314 00:23:37.668432 2349 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 14 00:23:37.668954 kubelet[2349]: E0314 00:23:37.668852 2349 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:23:38.878351 kubelet[2349]: E0314 00:23:38.877954 2349 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 14 00:23:38.888991 kubelet[2349]: E0314 00:23:38.884362 2349 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:23:38.888991 kubelet[2349]: E0314 00:23:38.887521 2349 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 14 00:23:38.888991 kubelet[2349]: E0314 00:23:38.887696 2349 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:23:39.867007 kubelet[2349]: I0314 00:23:39.866599 2349 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Mar 14 00:23:40.233761 kubelet[2349]: E0314 00:23:40.229375 2349 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 14 00:23:40.252964 kubelet[2349]: E0314 00:23:40.250992 2349 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:23:40.265014 kubelet[2349]: E0314 00:23:40.262448 2349 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 14 00:23:40.265014 kubelet[2349]: E0314 00:23:40.263632 2349 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:23:43.586361 kubelet[2349]: E0314 00:23:43.471643 2349 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Mar 14 00:23:44.331088 kubelet[2349]: E0314 00:23:44.328083 2349 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 14 00:23:44.331088 kubelet[2349]: E0314 00:23:44.328602 2349 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:23:45.197962 kubelet[2349]: E0314 00:23:45.196507 2349 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Mar 14 00:23:45.264075 kubelet[2349]: I0314 00:23:45.261321 2349 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Mar 14 00:23:45.264075 kubelet[2349]: E0314 00:23:45.261378 2349 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
Mar 14 00:23:45.347390 kubelet[2349]: I0314 00:23:45.340863 2349 apiserver.go:52] "Watching apiserver"
Mar 14 00:23:45.351009 kubelet[2349]: I0314 00:23:45.348196 2349 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Mar 14 00:23:45.374231 kubelet[2349]: E0314 00:23:45.374159 2349 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Mar 14 00:23:45.374231 kubelet[2349]: I0314 00:23:45.374205 2349 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Mar 14 00:23:45.377643 kubelet[2349]: E0314 00:23:45.377121 2349 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost"
Mar 14 00:23:45.377643 kubelet[2349]: I0314 00:23:45.377165 2349 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Mar 14 00:23:45.382457 kubelet[2349]: E0314 00:23:45.382384 2349 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Mar 14 00:23:45.403521 kubelet[2349]: I0314 00:23:45.402312 2349 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Mar 14 00:23:48.795543 kubelet[2349]: I0314 00:23:48.795271 2349 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Mar 14 00:23:48.825461 kubelet[2349]: E0314 00:23:48.825370 2349 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:23:49.280638 kubelet[2349]: E0314 00:23:49.280278 2349 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:23:49.852082 systemd[1]: Reloading requested from client PID 2636 ('systemctl') (unit session-7.scope)...
Mar 14 00:23:49.852124 systemd[1]: Reloading...
Mar 14 00:23:50.064019 zram_generator::config[2678]: No configuration found.
Mar 14 00:23:50.167043 kubelet[2349]: I0314 00:23:50.166653 2349 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Mar 14 00:23:50.184961 kubelet[2349]: E0314 00:23:50.184330 2349 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:23:50.225144 kubelet[2349]: I0314 00:23:50.224975 2349 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.224863556 podStartE2EDuration="2.224863556s" podCreationTimestamp="2026-03-14 00:23:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:23:50.224565651 +0000 UTC m=+18.485420829" watchObservedRunningTime="2026-03-14 00:23:50.224863556 +0000 UTC m=+18.485718755"
Mar 14 00:23:50.282686 kubelet[2349]: E0314 00:23:50.282652 2349 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:23:50.292313 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 14 00:23:50.481521 systemd[1]: Reloading finished in 628 ms.
Mar 14 00:23:50.553991 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 14 00:23:50.575367 systemd[1]: kubelet.service: Deactivated successfully.
Mar 14 00:23:50.576102 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 14 00:23:50.591662 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 14 00:23:51.075537 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 14 00:23:51.085251 (kubelet)[2729]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 14 00:23:51.584644 kubelet[2729]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 14 00:23:51.590349 kubelet[2729]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Mar 14 00:23:51.590349 kubelet[2729]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 14 00:23:51.590349 kubelet[2729]: I0314 00:23:51.585163 2729 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 14 00:23:51.613292 kubelet[2729]: I0314 00:23:51.613136 2729 server.go:530] "Kubelet version" kubeletVersion="v1.33.8"
Mar 14 00:23:51.613292 kubelet[2729]: I0314 00:23:51.613176 2729 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 14 00:23:51.622370 kubelet[2729]: I0314 00:23:51.613432 2729 server.go:956] "Client rotation is on, will bootstrap in background"
Mar 14 00:23:51.625934 kubelet[2729]: I0314 00:23:51.625626 2729 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Mar 14 00:23:51.667273 kubelet[2729]: I0314 00:23:51.666009 2729 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 14 00:23:51.679982 kubelet[2729]: E0314 00:23:51.678602 2729 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Mar 14 00:23:51.679982 kubelet[2729]: I0314 00:23:51.678647 2729 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Mar 14 00:23:51.694547 kubelet[2729]: I0314 00:23:51.694451 2729 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Mar 14 00:23:51.695668 kubelet[2729]: I0314 00:23:51.695562 2729 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 14 00:23:51.696088 kubelet[2729]: I0314 00:23:51.695617 2729 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1}
Mar 14 00:23:51.696521 kubelet[2729]: I0314 00:23:51.696189 2729 topology_manager.go:138] "Creating topology manager with none policy"
Mar 14 00:23:51.696521 kubelet[2729]: I0314 00:23:51.696206 2729 container_manager_linux.go:303] "Creating device plugin manager"
Mar 14 00:23:51.696521 kubelet[2729]: I0314 00:23:51.696324 2729 state_mem.go:36] "Initialized new in-memory state store"
Mar 14 00:23:51.696776 kubelet[2729]: I0314 00:23:51.696746 2729 kubelet.go:480] "Attempting to sync node with API server"
Mar 14 00:23:51.696776 kubelet[2729]: I0314 00:23:51.696777 2729 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 14 00:23:51.696864 kubelet[2729]: I0314 00:23:51.696819 2729 kubelet.go:386] "Adding apiserver pod source"
Mar 14 00:23:51.696864 kubelet[2729]: I0314 00:23:51.696842 2729 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 14 00:23:51.730962 kubelet[2729]: I0314 00:23:51.730661 2729 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Mar 14 00:23:51.734161 kubelet[2729]: I0314 00:23:51.734023 2729 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Mar 14 00:23:51.774272 kubelet[2729]: I0314 00:23:51.774162 2729 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Mar 14 00:23:51.774272 kubelet[2729]: I0314 00:23:51.774248 2729 server.go:1289] "Started kubelet"
Mar 14 00:23:51.775370 kubelet[2729]: I0314 00:23:51.775280 2729 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Mar 14 00:23:51.776364 kubelet[2729]: I0314 00:23:51.775425 2729 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 14 00:23:51.776364 kubelet[2729]: I0314 00:23:51.776219 2729 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 14 00:23:51.781163 kubelet[2729]: E0314 00:23:51.781081 2729 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 14 00:23:51.782754 sudo[2746]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Mar 14 00:23:51.783608 sudo[2746]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Mar 14 00:23:51.787598 kubelet[2729]: I0314 00:23:51.785469 2729 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 14 00:23:51.787598 kubelet[2729]: I0314 00:23:51.787459 2729 server.go:317] "Adding debug handlers to kubelet server"
Mar 14 00:23:51.789843 kubelet[2729]: I0314 00:23:51.789787 2729 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 14 00:23:51.794384 kubelet[2729]: I0314 00:23:51.793738 2729 volume_manager.go:297] "Starting Kubelet Volume Manager"
Mar 14 00:23:51.794384 kubelet[2729]: I0314 00:23:51.793976 2729 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Mar 14 00:23:51.794384 kubelet[2729]: I0314 00:23:51.794186 2729 reconciler.go:26] "Reconciler: start to sync state"
Mar 14 00:23:51.795390 kubelet[2729]: I0314 00:23:51.795365 2729 factory.go:223] Registration of the systemd container factory successfully
Mar 14 00:23:51.795937 kubelet[2729]: I0314 00:23:51.795871 2729 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 14 00:23:51.798446 kubelet[2729]: I0314 00:23:51.798374 2729 factory.go:223] Registration of the containerd container factory successfully
Mar 14 00:23:51.830148 kubelet[2729]: I0314 00:23:51.830019 2729 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Mar 14 00:23:51.849187 kubelet[2729]: I0314 00:23:51.841072 2729 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Mar 14 00:23:51.849187 kubelet[2729]: I0314 00:23:51.841565 2729 status_manager.go:230] "Starting to sync pod status with apiserver"
Mar 14 00:23:51.849187 kubelet[2729]: I0314 00:23:51.847991 2729 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Mar 14 00:23:51.849187 kubelet[2729]: I0314 00:23:51.848016 2729 kubelet.go:2436] "Starting kubelet main sync loop"
Mar 14 00:23:51.849187 kubelet[2729]: E0314 00:23:51.848191 2729 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 14 00:23:51.930132 kubelet[2729]: I0314 00:23:51.930031 2729 cpu_manager.go:221] "Starting CPU manager" policy="none"
Mar 14 00:23:51.930132 kubelet[2729]: I0314 00:23:51.930078 2729 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Mar 14 00:23:51.930132 kubelet[2729]: I0314 00:23:51.930140 2729 state_mem.go:36] "Initialized new in-memory state store"
Mar 14 00:23:51.930789 kubelet[2729]: I0314 00:23:51.930732 2729 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Mar 14 00:23:51.930983 kubelet[2729]: I0314 00:23:51.930828 2729 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Mar 14 00:23:51.930983 kubelet[2729]: I0314 00:23:51.930979 2729 policy_none.go:49] "None policy: Start"
Mar 14 00:23:51.931071 kubelet[2729]: I0314 00:23:51.931005 2729 memory_manager.go:186] "Starting memorymanager" policy="None"
Mar 14 00:23:51.931156 kubelet[2729]: I0314 00:23:51.931096 2729 state_mem.go:35] "Initializing new in-memory state store"
Mar 14 00:23:51.931637 kubelet[2729]: I0314 00:23:51.931594 2729 state_mem.go:75] "Updated machine memory state"
Mar 14 00:23:51.934627 kubelet[2729]: E0314 00:23:51.934519 2729 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Mar 14 00:23:51.935074 kubelet[2729]: I0314 00:23:51.934988 2729 eviction_manager.go:189] "Eviction manager: starting control loop"
Mar 14 00:23:51.937641 kubelet[2729]: I0314 00:23:51.935077 2729 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 14 00:23:51.937641 kubelet[2729]: I0314 00:23:51.936379 2729 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 14 00:23:51.946483 kubelet[2729]: E0314 00:23:51.945593 2729 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Mar 14 00:23:51.957163 kubelet[2729]: I0314 00:23:51.957138 2729 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Mar 14 00:23:51.957745 kubelet[2729]: I0314 00:23:51.957728 2729 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Mar 14 00:23:51.958617 kubelet[2729]: I0314 00:23:51.958600 2729 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Mar 14 00:23:51.984342 kubelet[2729]: E0314 00:23:51.984246 2729 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Mar 14 00:23:51.985506 kubelet[2729]: E0314 00:23:51.985471 2729 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Mar 14 00:23:52.071304 kubelet[2729]: I0314 00:23:52.070994 2729 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Mar 14 00:23:52.095234 kubelet[2729]: I0314 00:23:52.095169 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6a122388f93222152a312baa29641fcd-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"6a122388f93222152a312baa29641fcd\") " pod="kube-system/kube-apiserver-localhost"
Mar 14 00:23:52.095234 kubelet[2729]: I0314 00:23:52.095247 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6a122388f93222152a312baa29641fcd-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"6a122388f93222152a312baa29641fcd\") " pod="kube-system/kube-apiserver-localhost"
Mar 14 00:23:52.095435 kubelet[2729]: I0314 00:23:52.095277 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6a122388f93222152a312baa29641fcd-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"6a122388f93222152a312baa29641fcd\") " pod="kube-system/kube-apiserver-localhost"
Mar 14 00:23:52.095435 kubelet[2729]: I0314 00:23:52.095308 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost"
Mar 14 00:23:52.095435 kubelet[2729]: I0314 00:23:52.095338 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost"
Mar 14 00:23:52.095435 kubelet[2729]: I0314 00:23:52.095367 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost"
Mar 14 00:23:52.095435 kubelet[2729]: I0314 00:23:52.095395 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost"
Mar 14 00:23:52.095548 kubelet[2729]: I0314 00:23:52.095420 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost"
Mar 14 00:23:52.095548 kubelet[2729]: I0314 00:23:52.095446 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e944e4cb17af904786c3a2e01e298498-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"e944e4cb17af904786c3a2e01e298498\") " pod="kube-system/kube-scheduler-localhost"
Mar 14 00:23:52.100022 kubelet[2729]: I0314 00:23:52.099565 2729 kubelet_node_status.go:124] "Node was previously registered" node="localhost"
Mar 14 00:23:52.100022 kubelet[2729]: I0314 00:23:52.099665 2729 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Mar 14 00:23:52.345502 kubelet[2729]: E0314 00:23:52.302135 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:23:52.345502 kubelet[2729]: E0314 00:23:52.345722 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:23:52.349397 kubelet[2729]: E0314 00:23:52.349276 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:23:52.700467 kubelet[2729]: I0314 00:23:52.700057 2729 apiserver.go:52] "Watching apiserver"
Mar 14 00:23:52.794474 kubelet[2729]: I0314 00:23:52.794408 2729 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Mar 14 00:23:52.881593 kubelet[2729]: E0314 00:23:52.881193 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:23:52.882794 kubelet[2729]: I0314 00:23:52.882354 2729 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Mar 14 00:23:52.882984 kubelet[2729]: E0314 00:23:52.882871 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:23:53.010087 kubelet[2729]: E0314 00:23:53.003762 2729 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Mar 14 00:23:53.010087 kubelet[2729]: E0314 00:23:53.007459 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:23:53.030376 kubelet[2729]: I0314 00:23:53.029634 2729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.029607167 podStartE2EDuration="2.029607167s" podCreationTimestamp="2026-03-14 00:23:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:23:52.819693363 +0000 UTC m=+1.713062132" watchObservedRunningTime="2026-03-14 00:23:53.029607167 +0000 UTC m=+1.922975927"
Mar 14 00:23:53.630217 sudo[2746]: pam_unix(sudo:session): session closed for user root
Mar 14 00:23:53.883000 kubelet[2729]: E0314 00:23:53.882606 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:23:53.883000 kubelet[2729]: E0314 00:23:53.882622 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:23:55.062501 kubelet[2729]: I0314 00:23:55.061702 2729 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Mar 14 00:23:55.063526 kubelet[2729]: I0314 00:23:55.062771 2729 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Mar 14 00:23:55.063582 containerd[1572]: time="2026-03-14T00:23:55.062410009Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Mar 14 00:23:55.538957 kubelet[2729]: E0314 00:23:55.536523 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:23:55.963579 kubelet[2729]: E0314 00:23:55.963494 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:23:56.355399 kubelet[2729]: E0314 00:23:56.352497 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:23:56.405561 kubelet[2729]: I0314 00:23:56.405424 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1e079af1-1818-42c7-a3ae-a18e69f43681-xtables-lock\") pod \"cilium-5mz8g\" (UID: \"1e079af1-1818-42c7-a3ae-a18e69f43681\") " pod="kube-system/cilium-5mz8g"
Mar 14 00:23:56.405561 kubelet[2729]: I0314 00:23:56.405565 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/059172d6-24cc-4084-81a2-afa42bd19317-xtables-lock\") pod \"kube-proxy-zg9tn\" (UID: \"059172d6-24cc-4084-81a2-afa42bd19317\") " pod="kube-system/kube-proxy-zg9tn"
Mar 14 00:23:56.405561 kubelet[2729]: I0314 00:23:56.405599 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/059172d6-24cc-4084-81a2-afa42bd19317-lib-modules\") pod \"kube-proxy-zg9tn\" (UID: \"059172d6-24cc-4084-81a2-afa42bd19317\") " pod="kube-system/kube-proxy-zg9tn"
Mar 14 00:23:56.406730 kubelet[2729]: I0314 00:23:56.405629 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume
\"cni-path\" (UniqueName: \"kubernetes.io/host-path/1e079af1-1818-42c7-a3ae-a18e69f43681-cni-path\") pod \"cilium-5mz8g\" (UID: \"1e079af1-1818-42c7-a3ae-a18e69f43681\") " pod="kube-system/cilium-5mz8g" Mar 14 00:23:56.406730 kubelet[2729]: I0314 00:23:56.405674 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1e079af1-1818-42c7-a3ae-a18e69f43681-clustermesh-secrets\") pod \"cilium-5mz8g\" (UID: \"1e079af1-1818-42c7-a3ae-a18e69f43681\") " pod="kube-system/cilium-5mz8g" Mar 14 00:23:56.406730 kubelet[2729]: I0314 00:23:56.405705 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wn7tl\" (UniqueName: \"kubernetes.io/projected/059172d6-24cc-4084-81a2-afa42bd19317-kube-api-access-wn7tl\") pod \"kube-proxy-zg9tn\" (UID: \"059172d6-24cc-4084-81a2-afa42bd19317\") " pod="kube-system/kube-proxy-zg9tn" Mar 14 00:23:56.406730 kubelet[2729]: I0314 00:23:56.405731 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1e079af1-1818-42c7-a3ae-a18e69f43681-cilium-run\") pod \"cilium-5mz8g\" (UID: \"1e079af1-1818-42c7-a3ae-a18e69f43681\") " pod="kube-system/cilium-5mz8g" Mar 14 00:23:56.406730 kubelet[2729]: I0314 00:23:56.405760 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/059172d6-24cc-4084-81a2-afa42bd19317-kube-proxy\") pod \"kube-proxy-zg9tn\" (UID: \"059172d6-24cc-4084-81a2-afa42bd19317\") " pod="kube-system/kube-proxy-zg9tn" Mar 14 00:23:56.406730 kubelet[2729]: I0314 00:23:56.405784 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1e079af1-1818-42c7-a3ae-a18e69f43681-cilium-cgroup\") pod 
\"cilium-5mz8g\" (UID: \"1e079af1-1818-42c7-a3ae-a18e69f43681\") " pod="kube-system/cilium-5mz8g" Mar 14 00:23:56.407032 kubelet[2729]: I0314 00:23:56.405809 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1e079af1-1818-42c7-a3ae-a18e69f43681-etc-cni-netd\") pod \"cilium-5mz8g\" (UID: \"1e079af1-1818-42c7-a3ae-a18e69f43681\") " pod="kube-system/cilium-5mz8g" Mar 14 00:23:56.407032 kubelet[2729]: I0314 00:23:56.405841 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1e079af1-1818-42c7-a3ae-a18e69f43681-bpf-maps\") pod \"cilium-5mz8g\" (UID: \"1e079af1-1818-42c7-a3ae-a18e69f43681\") " pod="kube-system/cilium-5mz8g" Mar 14 00:23:56.407032 kubelet[2729]: I0314 00:23:56.405943 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1e079af1-1818-42c7-a3ae-a18e69f43681-cilium-config-path\") pod \"cilium-5mz8g\" (UID: \"1e079af1-1818-42c7-a3ae-a18e69f43681\") " pod="kube-system/cilium-5mz8g" Mar 14 00:23:56.407032 kubelet[2729]: I0314 00:23:56.405980 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1e079af1-1818-42c7-a3ae-a18e69f43681-hostproc\") pod \"cilium-5mz8g\" (UID: \"1e079af1-1818-42c7-a3ae-a18e69f43681\") " pod="kube-system/cilium-5mz8g" Mar 14 00:23:56.407032 kubelet[2729]: I0314 00:23:56.406007 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1e079af1-1818-42c7-a3ae-a18e69f43681-lib-modules\") pod \"cilium-5mz8g\" (UID: \"1e079af1-1818-42c7-a3ae-a18e69f43681\") " pod="kube-system/cilium-5mz8g" Mar 14 00:23:56.507956 kubelet[2729]: I0314 
00:23:56.507793 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/88d7e42a-234e-4a2c-9bd2-7502a7baa60e-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-d6nhm\" (UID: \"88d7e42a-234e-4a2c-9bd2-7502a7baa60e\") " pod="kube-system/cilium-operator-6c4d7847fc-d6nhm" Mar 14 00:23:56.507956 kubelet[2729]: I0314 00:23:56.507962 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1e079af1-1818-42c7-a3ae-a18e69f43681-host-proc-sys-kernel\") pod \"cilium-5mz8g\" (UID: \"1e079af1-1818-42c7-a3ae-a18e69f43681\") " pod="kube-system/cilium-5mz8g" Mar 14 00:23:56.508240 kubelet[2729]: I0314 00:23:56.507998 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1e079af1-1818-42c7-a3ae-a18e69f43681-hubble-tls\") pod \"cilium-5mz8g\" (UID: \"1e079af1-1818-42c7-a3ae-a18e69f43681\") " pod="kube-system/cilium-5mz8g" Mar 14 00:23:56.573779 kubelet[2729]: I0314 00:23:56.573660 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1e079af1-1818-42c7-a3ae-a18e69f43681-host-proc-sys-net\") pod \"cilium-5mz8g\" (UID: \"1e079af1-1818-42c7-a3ae-a18e69f43681\") " pod="kube-system/cilium-5mz8g" Mar 14 00:23:56.573779 kubelet[2729]: I0314 00:23:56.573785 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4g6g\" (UniqueName: \"kubernetes.io/projected/88d7e42a-234e-4a2c-9bd2-7502a7baa60e-kube-api-access-m4g6g\") pod \"cilium-operator-6c4d7847fc-d6nhm\" (UID: \"88d7e42a-234e-4a2c-9bd2-7502a7baa60e\") " pod="kube-system/cilium-operator-6c4d7847fc-d6nhm" Mar 14 00:23:56.578597 kubelet[2729]: I0314 
00:23:56.573845 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vf9mn\" (UniqueName: \"kubernetes.io/projected/1e079af1-1818-42c7-a3ae-a18e69f43681-kube-api-access-vf9mn\") pod \"cilium-5mz8g\" (UID: \"1e079af1-1818-42c7-a3ae-a18e69f43681\") " pod="kube-system/cilium-5mz8g" Mar 14 00:23:56.840577 sudo[1776]: pam_unix(sudo:session): session closed for user root Mar 14 00:23:56.852733 sshd[1769]: pam_unix(sshd:session): session closed for user core Mar 14 00:23:56.860762 systemd[1]: sshd@6-10.0.0.71:22-10.0.0.1:50880.service: Deactivated successfully. Mar 14 00:23:56.871141 systemd-logind[1547]: Session 7 logged out. Waiting for processes to exit. Mar 14 00:23:56.871750 systemd[1]: session-7.scope: Deactivated successfully. Mar 14 00:23:56.879603 systemd-logind[1547]: Removed session 7. Mar 14 00:23:56.881265 kubelet[2729]: E0314 00:23:56.880487 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:23:56.882770 containerd[1572]: time="2026-03-14T00:23:56.882536711Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zg9tn,Uid:059172d6-24cc-4084-81a2-afa42bd19317,Namespace:kube-system,Attempt:0,}" Mar 14 00:23:56.983113 kubelet[2729]: E0314 00:23:56.982381 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:23:57.009766 kubelet[2729]: E0314 00:23:57.005513 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:23:57.009766 kubelet[2729]: E0314 00:23:57.007541 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:23:57.013063 containerd[1572]: time="2026-03-14T00:23:57.009759950Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5mz8g,Uid:1e079af1-1818-42c7-a3ae-a18e69f43681,Namespace:kube-system,Attempt:0,}" Mar 14 00:23:57.013063 containerd[1572]: time="2026-03-14T00:23:57.010033661Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-d6nhm,Uid:88d7e42a-234e-4a2c-9bd2-7502a7baa60e,Namespace:kube-system,Attempt:0,}" Mar 14 00:23:57.016156 kubelet[2729]: E0314 00:23:57.013525 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:23:57.230529 containerd[1572]: time="2026-03-14T00:23:57.227787725Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:23:57.231422 containerd[1572]: time="2026-03-14T00:23:57.230998189Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:23:57.231422 containerd[1572]: time="2026-03-14T00:23:57.231065686Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:23:57.231754 containerd[1572]: time="2026-03-14T00:23:57.231705341Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:23:57.250782 containerd[1572]: time="2026-03-14T00:23:57.250612457Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:23:57.252966 containerd[1572]: time="2026-03-14T00:23:57.251132427Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:23:57.253127 containerd[1572]: time="2026-03-14T00:23:57.252974915Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:23:57.253331 containerd[1572]: time="2026-03-14T00:23:57.253165544Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:23:57.277607 containerd[1572]: time="2026-03-14T00:23:57.276983282Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:23:57.277607 containerd[1572]: time="2026-03-14T00:23:57.277061189Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:23:57.277607 containerd[1572]: time="2026-03-14T00:23:57.277117645Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:23:57.279593 containerd[1572]: time="2026-03-14T00:23:57.278688893Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:23:57.356355 containerd[1572]: time="2026-03-14T00:23:57.356298894Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5mz8g,Uid:1e079af1-1818-42c7-a3ae-a18e69f43681,Namespace:kube-system,Attempt:0,} returns sandbox id \"fb48592e017fe914e8f6e4087fb67b2ee055576739be4153b26ebba1b04a8dec\"" Mar 14 00:23:57.364773 kubelet[2729]: E0314 00:23:57.362324 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:23:57.367979 containerd[1572]: time="2026-03-14T00:23:57.367933747Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Mar 14 00:23:57.547828 containerd[1572]: time="2026-03-14T00:23:57.547547626Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zg9tn,Uid:059172d6-24cc-4084-81a2-afa42bd19317,Namespace:kube-system,Attempt:0,} returns sandbox id \"c0ddda759e37c6339b48c73e5bae9ef6000dd6252ec0696c36031b46e6e88f9b\"" Mar 14 00:23:57.549014 kubelet[2729]: E0314 00:23:57.548867 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:23:57.559172 containerd[1572]: time="2026-03-14T00:23:57.559069712Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-d6nhm,Uid:88d7e42a-234e-4a2c-9bd2-7502a7baa60e,Namespace:kube-system,Attempt:0,} returns sandbox id \"850cbe24bfd5a3502b957ef8d909dd2a23436d84573f43bd6cb196df7621fab4\"" Mar 14 00:23:57.560299 kubelet[2729]: E0314 00:23:57.560207 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:23:57.562005 containerd[1572]: 
time="2026-03-14T00:23:57.561958098Z" level=info msg="CreateContainer within sandbox \"c0ddda759e37c6339b48c73e5bae9ef6000dd6252ec0696c36031b46e6e88f9b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 14 00:23:57.630834 containerd[1572]: time="2026-03-14T00:23:57.630291881Z" level=info msg="CreateContainer within sandbox \"c0ddda759e37c6339b48c73e5bae9ef6000dd6252ec0696c36031b46e6e88f9b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"5de4ed1c06d0906ab9530b9c1ed23abeda86750bba9d0319b8133a6f19e02d2f\"" Mar 14 00:23:57.633960 containerd[1572]: time="2026-03-14T00:23:57.631289600Z" level=info msg="StartContainer for \"5de4ed1c06d0906ab9530b9c1ed23abeda86750bba9d0319b8133a6f19e02d2f\"" Mar 14 00:23:57.856792 containerd[1572]: time="2026-03-14T00:23:57.856712665Z" level=info msg="StartContainer for \"5de4ed1c06d0906ab9530b9c1ed23abeda86750bba9d0319b8133a6f19e02d2f\" returns successfully" Mar 14 00:23:57.998480 kubelet[2729]: E0314 00:23:57.998395 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:23:58.020521 kubelet[2729]: E0314 00:23:58.020469 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:23:58.020521 kubelet[2729]: E0314 00:23:58.020473 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:24:06.666146 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2729896489.mount: Deactivated successfully. 
Mar 14 00:24:11.363785 containerd[1572]: time="2026-03-14T00:24:11.363438444Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:24:11.365859 containerd[1572]: time="2026-03-14T00:24:11.365431769Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Mar 14 00:24:11.367347 containerd[1572]: time="2026-03-14T00:24:11.367273166Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:24:11.369431 containerd[1572]: time="2026-03-14T00:24:11.369352532Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 14.001245651s" Mar 14 00:24:11.369431 containerd[1572]: time="2026-03-14T00:24:11.369417624Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Mar 14 00:24:11.371534 containerd[1572]: time="2026-03-14T00:24:11.371423692Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Mar 14 00:24:11.380025 containerd[1572]: time="2026-03-14T00:24:11.379953592Z" level=info msg="CreateContainer within sandbox \"fb48592e017fe914e8f6e4087fb67b2ee055576739be4153b26ebba1b04a8dec\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 14 00:24:11.401927 containerd[1572]: time="2026-03-14T00:24:11.401713690Z" level=info msg="CreateContainer within sandbox \"fb48592e017fe914e8f6e4087fb67b2ee055576739be4153b26ebba1b04a8dec\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"954e6daeefef115116689516b9fde4babfe95ede4700dd872d39831feee7bdc6\"" Mar 14 00:24:11.402592 containerd[1572]: time="2026-03-14T00:24:11.402548158Z" level=info msg="StartContainer for \"954e6daeefef115116689516b9fde4babfe95ede4700dd872d39831feee7bdc6\"" Mar 14 00:24:11.503858 containerd[1572]: time="2026-03-14T00:24:11.503749747Z" level=info msg="StartContainer for \"954e6daeefef115116689516b9fde4babfe95ede4700dd872d39831feee7bdc6\" returns successfully" Mar 14 00:24:11.545658 kubelet[2729]: E0314 00:24:11.544745 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:24:11.581420 kubelet[2729]: I0314 00:24:11.581363 2729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-zg9tn" podStartSLOduration=15.581347166 podStartE2EDuration="15.581347166s" podCreationTimestamp="2026-03-14 00:23:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:23:58.092018319 +0000 UTC m=+6.985387099" watchObservedRunningTime="2026-03-14 00:24:11.581347166 +0000 UTC m=+20.474715926" Mar 14 00:24:11.797867 containerd[1572]: time="2026-03-14T00:24:11.795230351Z" level=info msg="shim disconnected" id=954e6daeefef115116689516b9fde4babfe95ede4700dd872d39831feee7bdc6 namespace=k8s.io Mar 14 00:24:11.797867 containerd[1572]: time="2026-03-14T00:24:11.797716350Z" level=warning msg="cleaning up after shim disconnected" id=954e6daeefef115116689516b9fde4babfe95ede4700dd872d39831feee7bdc6 namespace=k8s.io Mar 14 
00:24:11.797867 containerd[1572]: time="2026-03-14T00:24:11.797740705Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 14 00:24:12.398284 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-954e6daeefef115116689516b9fde4babfe95ede4700dd872d39831feee7bdc6-rootfs.mount: Deactivated successfully. Mar 14 00:24:12.554579 kubelet[2729]: E0314 00:24:12.554323 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:24:12.568066 containerd[1572]: time="2026-03-14T00:24:12.567838936Z" level=info msg="CreateContainer within sandbox \"fb48592e017fe914e8f6e4087fb67b2ee055576739be4153b26ebba1b04a8dec\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 14 00:24:12.654437 containerd[1572]: time="2026-03-14T00:24:12.652475037Z" level=info msg="CreateContainer within sandbox \"fb48592e017fe914e8f6e4087fb67b2ee055576739be4153b26ebba1b04a8dec\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"5b8a9a5f7be7c2fd322bf016c092b769a87ada914fcafafdf53c5e76295e580d\"" Mar 14 00:24:12.655791 containerd[1572]: time="2026-03-14T00:24:12.655395085Z" level=info msg="StartContainer for \"5b8a9a5f7be7c2fd322bf016c092b769a87ada914fcafafdf53c5e76295e580d\"" Mar 14 00:24:12.778157 containerd[1572]: time="2026-03-14T00:24:12.778081545Z" level=info msg="StartContainer for \"5b8a9a5f7be7c2fd322bf016c092b769a87ada914fcafafdf53c5e76295e580d\" returns successfully" Mar 14 00:24:12.801049 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 14 00:24:12.801495 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 14 00:24:12.801690 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Mar 14 00:24:12.815856 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Mar 14 00:24:12.852092 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 14 00:24:12.894472 containerd[1572]: time="2026-03-14T00:24:12.894354197Z" level=info msg="shim disconnected" id=5b8a9a5f7be7c2fd322bf016c092b769a87ada914fcafafdf53c5e76295e580d namespace=k8s.io Mar 14 00:24:12.894817 containerd[1572]: time="2026-03-14T00:24:12.894454565Z" level=warning msg="cleaning up after shim disconnected" id=5b8a9a5f7be7c2fd322bf016c092b769a87ada914fcafafdf53c5e76295e580d namespace=k8s.io Mar 14 00:24:12.894817 containerd[1572]: time="2026-03-14T00:24:12.894501564Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 14 00:24:13.139367 containerd[1572]: time="2026-03-14T00:24:13.137677283Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:24:13.141690 containerd[1572]: time="2026-03-14T00:24:13.141467653Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Mar 14 00:24:13.145581 containerd[1572]: time="2026-03-14T00:24:13.144093183Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:24:13.146687 containerd[1572]: time="2026-03-14T00:24:13.146406136Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 1.774936257s" Mar 14 00:24:13.146687 containerd[1572]: 
time="2026-03-14T00:24:13.146461930Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Mar 14 00:24:13.159654 containerd[1572]: time="2026-03-14T00:24:13.159295427Z" level=info msg="CreateContainer within sandbox \"850cbe24bfd5a3502b957ef8d909dd2a23436d84573f43bd6cb196df7621fab4\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Mar 14 00:24:13.214486 containerd[1572]: time="2026-03-14T00:24:13.213965063Z" level=info msg="CreateContainer within sandbox \"850cbe24bfd5a3502b957ef8d909dd2a23436d84573f43bd6cb196df7621fab4\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"6b48b67b3502a0ecd1a2cf404538c7cab1b87ddb19164da5cdcfe8a12ac9fe11\"" Mar 14 00:24:13.218563 containerd[1572]: time="2026-03-14T00:24:13.216928147Z" level=info msg="StartContainer for \"6b48b67b3502a0ecd1a2cf404538c7cab1b87ddb19164da5cdcfe8a12ac9fe11\"" Mar 14 00:24:13.401392 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5b8a9a5f7be7c2fd322bf016c092b769a87ada914fcafafdf53c5e76295e580d-rootfs.mount: Deactivated successfully. 
Mar 14 00:24:13.407604 containerd[1572]: time="2026-03-14T00:24:13.402209956Z" level=info msg="StartContainer for \"6b48b67b3502a0ecd1a2cf404538c7cab1b87ddb19164da5cdcfe8a12ac9fe11\" returns successfully" Mar 14 00:24:13.565730 kubelet[2729]: E0314 00:24:13.564792 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:24:13.577459 kubelet[2729]: E0314 00:24:13.575043 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:24:13.587669 containerd[1572]: time="2026-03-14T00:24:13.587487510Z" level=info msg="CreateContainer within sandbox \"fb48592e017fe914e8f6e4087fb67b2ee055576739be4153b26ebba1b04a8dec\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 14 00:24:13.769120 kubelet[2729]: I0314 00:24:13.768679 2729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-d6nhm" podStartSLOduration=2.180686521 podStartE2EDuration="17.768621949s" podCreationTimestamp="2026-03-14 00:23:56 +0000 UTC" firstStartedPulling="2026-03-14 00:23:57.56134944 +0000 UTC m=+6.454718199" lastFinishedPulling="2026-03-14 00:24:13.149284868 +0000 UTC m=+22.042653627" observedRunningTime="2026-03-14 00:24:13.614710126 +0000 UTC m=+22.508078905" watchObservedRunningTime="2026-03-14 00:24:13.768621949 +0000 UTC m=+22.661990708" Mar 14 00:24:13.773619 containerd[1572]: time="2026-03-14T00:24:13.772841111Z" level=info msg="CreateContainer within sandbox \"fb48592e017fe914e8f6e4087fb67b2ee055576739be4153b26ebba1b04a8dec\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"98a09621a3f348a271c9b08ebbb836446bdee2563939cbd05df9077a778fee8d\"" Mar 14 00:24:13.775841 containerd[1572]: time="2026-03-14T00:24:13.775783576Z" level=info 
msg="StartContainer for \"98a09621a3f348a271c9b08ebbb836446bdee2563939cbd05df9077a778fee8d\""
Mar 14 00:24:14.230663 containerd[1572]: time="2026-03-14T00:24:14.230300266Z" level=info msg="StartContainer for \"98a09621a3f348a271c9b08ebbb836446bdee2563939cbd05df9077a778fee8d\" returns successfully"
Mar 14 00:24:14.723668 kubelet[2729]: E0314 00:24:14.722965 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:24:14.727395 kubelet[2729]: E0314 00:24:14.726073 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:24:14.797226 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-98a09621a3f348a271c9b08ebbb836446bdee2563939cbd05df9077a778fee8d-rootfs.mount: Deactivated successfully.
Mar 14 00:24:14.962983 containerd[1572]: time="2026-03-14T00:24:14.955989360Z" level=info msg="shim disconnected" id=98a09621a3f348a271c9b08ebbb836446bdee2563939cbd05df9077a778fee8d namespace=k8s.io
Mar 14 00:24:14.964210 containerd[1572]: time="2026-03-14T00:24:14.962175965Z" level=warning msg="cleaning up after shim disconnected" id=98a09621a3f348a271c9b08ebbb836446bdee2563939cbd05df9077a778fee8d namespace=k8s.io
Mar 14 00:24:14.964210 containerd[1572]: time="2026-03-14T00:24:14.963474823Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 14 00:24:15.774089 kubelet[2729]: E0314 00:24:15.773169 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:24:15.803706 containerd[1572]: time="2026-03-14T00:24:15.803360333Z" level=info msg="CreateContainer within sandbox \"fb48592e017fe914e8f6e4087fb67b2ee055576739be4153b26ebba1b04a8dec\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Mar 14 00:24:15.847132 containerd[1572]: time="2026-03-14T00:24:15.847059713Z" level=info msg="CreateContainer within sandbox \"fb48592e017fe914e8f6e4087fb67b2ee055576739be4153b26ebba1b04a8dec\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"b7c03a7c8254d60d7596b7139f2e32b5e9af3c3122e8c8a23cbc44d591a9ad6a\""
Mar 14 00:24:15.848304 containerd[1572]: time="2026-03-14T00:24:15.848180366Z" level=info msg="StartContainer for \"b7c03a7c8254d60d7596b7139f2e32b5e9af3c3122e8c8a23cbc44d591a9ad6a\""
Mar 14 00:24:16.000512 containerd[1572]: time="2026-03-14T00:24:16.000396434Z" level=info msg="StartContainer for \"b7c03a7c8254d60d7596b7139f2e32b5e9af3c3122e8c8a23cbc44d591a9ad6a\" returns successfully"
Mar 14 00:24:16.064743 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b7c03a7c8254d60d7596b7139f2e32b5e9af3c3122e8c8a23cbc44d591a9ad6a-rootfs.mount: Deactivated successfully.
Mar 14 00:24:16.088998 containerd[1572]: time="2026-03-14T00:24:16.088091441Z" level=info msg="shim disconnected" id=b7c03a7c8254d60d7596b7139f2e32b5e9af3c3122e8c8a23cbc44d591a9ad6a namespace=k8s.io
Mar 14 00:24:16.088998 containerd[1572]: time="2026-03-14T00:24:16.088161993Z" level=warning msg="cleaning up after shim disconnected" id=b7c03a7c8254d60d7596b7139f2e32b5e9af3c3122e8c8a23cbc44d591a9ad6a namespace=k8s.io
Mar 14 00:24:16.088998 containerd[1572]: time="2026-03-14T00:24:16.088176200Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 14 00:24:16.791621 kubelet[2729]: E0314 00:24:16.787118 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:24:16.801845 containerd[1572]: time="2026-03-14T00:24:16.801787693Z" level=info msg="CreateContainer within sandbox \"fb48592e017fe914e8f6e4087fb67b2ee055576739be4153b26ebba1b04a8dec\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 14 00:24:16.847657 containerd[1572]: time="2026-03-14T00:24:16.847243354Z" level=info msg="CreateContainer within sandbox \"fb48592e017fe914e8f6e4087fb67b2ee055576739be4153b26ebba1b04a8dec\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"7c63f985bdf7b313ab193f956c5ae1a9dd9114e280cd6788f2ac799f94df2d91\""
Mar 14 00:24:16.849517 containerd[1572]: time="2026-03-14T00:24:16.849429249Z" level=info msg="StartContainer for \"7c63f985bdf7b313ab193f956c5ae1a9dd9114e280cd6788f2ac799f94df2d91\""
Mar 14 00:24:16.976448 containerd[1572]: time="2026-03-14T00:24:16.976371139Z" level=info msg="StartContainer for \"7c63f985bdf7b313ab193f956c5ae1a9dd9114e280cd6788f2ac799f94df2d91\" returns successfully"
Mar 14 00:24:17.314245 kubelet[2729]: I0314 00:24:17.313456 2729 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Mar 14 00:24:17.576796 kubelet[2729]: I0314 00:24:17.576550 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6x9jz\" (UniqueName: \"kubernetes.io/projected/cf83dc11-6572-4a69-b26e-351abb3be7fd-kube-api-access-6x9jz\") pod \"coredns-674b8bbfcf-9qx9f\" (UID: \"cf83dc11-6572-4a69-b26e-351abb3be7fd\") " pod="kube-system/coredns-674b8bbfcf-9qx9f"
Mar 14 00:24:17.576796 kubelet[2729]: I0314 00:24:17.576666 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cf83dc11-6572-4a69-b26e-351abb3be7fd-config-volume\") pod \"coredns-674b8bbfcf-9qx9f\" (UID: \"cf83dc11-6572-4a69-b26e-351abb3be7fd\") " pod="kube-system/coredns-674b8bbfcf-9qx9f"
Mar 14 00:24:17.576796 kubelet[2729]: I0314 00:24:17.576718 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4ac8e924-91ac-4213-8f8a-b2cb44534689-config-volume\") pod \"coredns-674b8bbfcf-lnd4h\" (UID: \"4ac8e924-91ac-4213-8f8a-b2cb44534689\") " pod="kube-system/coredns-674b8bbfcf-lnd4h"
Mar 14 00:24:17.576796 kubelet[2729]: I0314 00:24:17.576751 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8lsg\" (UniqueName: \"kubernetes.io/projected/4ac8e924-91ac-4213-8f8a-b2cb44534689-kube-api-access-l8lsg\") pod \"coredns-674b8bbfcf-lnd4h\" (UID: \"4ac8e924-91ac-4213-8f8a-b2cb44534689\") " pod="kube-system/coredns-674b8bbfcf-lnd4h"
Mar 14 00:24:17.793093 kubelet[2729]: E0314 00:24:17.786821 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:24:17.797416 kubelet[2729]: E0314 00:24:17.794281 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:24:17.801004 containerd[1572]: time="2026-03-14T00:24:17.799237540Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-lnd4h,Uid:4ac8e924-91ac-4213-8f8a-b2cb44534689,Namespace:kube-system,Attempt:0,}"
Mar 14 00:24:17.801004 containerd[1572]: time="2026-03-14T00:24:17.799823013Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-9qx9f,Uid:cf83dc11-6572-4a69-b26e-351abb3be7fd,Namespace:kube-system,Attempt:0,}"
Mar 14 00:24:17.812466 kubelet[2729]: E0314 00:24:17.812405 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:24:17.875853 kubelet[2729]: I0314 00:24:17.875280 2729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-5mz8g" podStartSLOduration=7.871113178 podStartE2EDuration="21.875254952s" podCreationTimestamp="2026-03-14 00:23:56 +0000 UTC" firstStartedPulling="2026-03-14 00:23:57.367054841 +0000 UTC m=+6.260423610" lastFinishedPulling="2026-03-14 00:24:11.371196615 +0000 UTC m=+20.264565384" observedRunningTime="2026-03-14 00:24:17.874366805 +0000 UTC m=+26.767735595" watchObservedRunningTime="2026-03-14 00:24:17.875254952 +0000 UTC m=+26.768623712"
Mar 14 00:24:18.823183 kubelet[2729]: E0314 00:24:18.822519 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:24:19.830066 kubelet[2729]: E0314 00:24:19.829684 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:24:19.918657 systemd-networkd[1248]: cilium_host: Link UP
Mar 14 00:24:19.925359 systemd-networkd[1248]: cilium_net: Link UP
Mar 14 00:24:19.926163 systemd-networkd[1248]: cilium_net: Gained carrier
Mar 14 00:24:19.926761 systemd-networkd[1248]: cilium_host: Gained carrier
Mar 14 00:24:19.929941 systemd-networkd[1248]: cilium_net: Gained IPv6LL
Mar 14 00:24:19.930395 systemd-networkd[1248]: cilium_host: Gained IPv6LL
Mar 14 00:24:20.894275 kubelet[2729]: E0314 00:24:20.894198 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:24:20.902395 systemd-networkd[1248]: cilium_vxlan: Link UP
Mar 14 00:24:20.902405 systemd-networkd[1248]: cilium_vxlan: Gained carrier
Mar 14 00:24:21.674064 kernel: NET: Registered PF_ALG protocol family
Mar 14 00:24:22.920316 systemd-networkd[1248]: cilium_vxlan: Gained IPv6LL
Mar 14 00:24:23.664014 systemd-networkd[1248]: lxc_health: Link UP
Mar 14 00:24:23.699789 systemd-networkd[1248]: lxc_health: Gained carrier
Mar 14 00:24:24.273235 systemd-networkd[1248]: lxccdd006af274f: Link UP
Mar 14 00:24:24.286978 kernel: eth0: renamed from tmp4d479
Mar 14 00:24:24.304023 systemd-networkd[1248]: lxccdd006af274f: Gained carrier
Mar 14 00:24:24.388077 systemd-networkd[1248]: lxc98b77023fcec: Link UP
Mar 14 00:24:24.394073 kernel: eth0: renamed from tmp46818
Mar 14 00:24:24.403048 systemd-networkd[1248]: lxc98b77023fcec: Gained carrier
Mar 14 00:24:24.838640 systemd-networkd[1248]: lxc_health: Gained IPv6LL
Mar 14 00:24:25.019552 kubelet[2729]: E0314 00:24:25.019331 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:24:25.606229 systemd-networkd[1248]: lxccdd006af274f: Gained IPv6LL
Mar 14 00:24:25.732729 systemd-networkd[1248]: lxc98b77023fcec: Gained IPv6LL
Mar 14 00:24:25.953390 kubelet[2729]: E0314 00:24:25.953176 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:24:33.713818 containerd[1572]: time="2026-03-14T00:24:33.713506205Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 14 00:24:33.713818 containerd[1572]: time="2026-03-14T00:24:33.713699316Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 14 00:24:33.718347 containerd[1572]: time="2026-03-14T00:24:33.713789696Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:24:33.718347 containerd[1572]: time="2026-03-14T00:24:33.718146431Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:24:33.765136 containerd[1572]: time="2026-03-14T00:24:33.760041475Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 14 00:24:33.765136 containerd[1572]: time="2026-03-14T00:24:33.760140351Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 14 00:24:33.765136 containerd[1572]: time="2026-03-14T00:24:33.760162523Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:24:33.765136 containerd[1572]: time="2026-03-14T00:24:33.760296102Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:24:33.772726 systemd[1]: run-containerd-runc-k8s.io-4d47962e1f8cbfceb961ed4e1dd8130f57a6b00a6e6b795c39e071655dd5f435-runc.6swK1D.mount: Deactivated successfully.
Mar 14 00:24:33.795364 systemd-resolved[1460]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Mar 14 00:24:33.858183 systemd-resolved[1460]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Mar 14 00:24:33.874018 containerd[1572]: time="2026-03-14T00:24:33.873936431Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-lnd4h,Uid:4ac8e924-91ac-4213-8f8a-b2cb44534689,Namespace:kube-system,Attempt:0,} returns sandbox id \"4d47962e1f8cbfceb961ed4e1dd8130f57a6b00a6e6b795c39e071655dd5f435\""
Mar 14 00:24:33.878136 kubelet[2729]: E0314 00:24:33.876619 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:24:33.893721 containerd[1572]: time="2026-03-14T00:24:33.892230211Z" level=info msg="CreateContainer within sandbox \"4d47962e1f8cbfceb961ed4e1dd8130f57a6b00a6e6b795c39e071655dd5f435\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 14 00:24:33.931956 containerd[1572]: time="2026-03-14T00:24:33.929787904Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-9qx9f,Uid:cf83dc11-6572-4a69-b26e-351abb3be7fd,Namespace:kube-system,Attempt:0,} returns sandbox id \"468186793c886c042489e3132b07350d2cd71f0c23f094a45c4ba30a361a0b5b\""
Mar 14 00:24:33.933270 kubelet[2729]: E0314 00:24:33.931024 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:24:33.960051 containerd[1572]: time="2026-03-14T00:24:33.957402523Z" level=info msg="CreateContainer within sandbox \"468186793c886c042489e3132b07350d2cd71f0c23f094a45c4ba30a361a0b5b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 14 00:24:34.000832 containerd[1572]: time="2026-03-14T00:24:34.000528071Z" level=info msg="CreateContainer within sandbox \"4d47962e1f8cbfceb961ed4e1dd8130f57a6b00a6e6b795c39e071655dd5f435\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f19bcee7fbf2b0502be74365baee8ba5cff9250525a19084e61e0a01636897e1\""
Mar 14 00:24:34.001605 containerd[1572]: time="2026-03-14T00:24:34.001571036Z" level=info msg="StartContainer for \"f19bcee7fbf2b0502be74365baee8ba5cff9250525a19084e61e0a01636897e1\""
Mar 14 00:24:34.021101 containerd[1572]: time="2026-03-14T00:24:34.020705764Z" level=info msg="CreateContainer within sandbox \"468186793c886c042489e3132b07350d2cd71f0c23f094a45c4ba30a361a0b5b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1d5f57c6ff9cce50587b7f73eabd5403aa00a6395a8dd4ebcf1bf4c1727f395b\""
Mar 14 00:24:34.025810 containerd[1572]: time="2026-03-14T00:24:34.022343031Z" level=info msg="StartContainer for \"1d5f57c6ff9cce50587b7f73eabd5403aa00a6395a8dd4ebcf1bf4c1727f395b\""
Mar 14 00:24:34.205189 containerd[1572]: time="2026-03-14T00:24:34.205010200Z" level=info msg="StartContainer for \"1d5f57c6ff9cce50587b7f73eabd5403aa00a6395a8dd4ebcf1bf4c1727f395b\" returns successfully"
Mar 14 00:24:34.206125 containerd[1572]: time="2026-03-14T00:24:34.206004924Z" level=info msg="StartContainer for \"f19bcee7fbf2b0502be74365baee8ba5cff9250525a19084e61e0a01636897e1\" returns successfully"
Mar 14 00:24:35.066002 kubelet[2729]: E0314 00:24:35.064610 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:24:35.077065 kubelet[2729]: E0314 00:24:35.076793 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:24:35.208441 kubelet[2729]: I0314 00:24:35.208245 2729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-9qx9f" podStartSLOduration=39.208219397 podStartE2EDuration="39.208219397s" podCreationTimestamp="2026-03-14 00:23:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:24:35.129789587 +0000 UTC m=+44.023158366" watchObservedRunningTime="2026-03-14 00:24:35.208219397 +0000 UTC m=+44.101588185"
Mar 14 00:24:35.277120 kubelet[2729]: I0314 00:24:35.276831 2729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-lnd4h" podStartSLOduration=39.276698149 podStartE2EDuration="39.276698149s" podCreationTimestamp="2026-03-14 00:23:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:24:35.276425218 +0000 UTC m=+44.169794016" watchObservedRunningTime="2026-03-14 00:24:35.276698149 +0000 UTC m=+44.170066908"
Mar 14 00:24:36.098113 kubelet[2729]: E0314 00:24:36.096025 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:24:36.102720 kubelet[2729]: E0314 00:24:36.102635 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:24:37.102328 kubelet[2729]: E0314 00:24:37.099360 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:24:51.218663 systemd[1]: Started sshd@7-10.0.0.71:22-10.0.0.1:55204.service - OpenSSH per-connection server daemon (10.0.0.1:55204).
Mar 14 00:24:51.417536 sshd[4139]: Accepted publickey for core from 10.0.0.1 port 55204 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4
Mar 14 00:24:51.421607 sshd[4139]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:24:51.445448 systemd-logind[1547]: New session 8 of user core.
Mar 14 00:24:51.459551 systemd[1]: Started session-8.scope - Session 8 of User core.
Mar 14 00:24:52.454458 sshd[4139]: pam_unix(sshd:session): session closed for user core
Mar 14 00:24:52.477214 systemd[1]: sshd@7-10.0.0.71:22-10.0.0.1:55204.service: Deactivated successfully.
Mar 14 00:24:52.493341 systemd[1]: session-8.scope: Deactivated successfully.
Mar 14 00:24:52.502023 systemd-logind[1547]: Session 8 logged out. Waiting for processes to exit.
Mar 14 00:24:52.512726 systemd-logind[1547]: Removed session 8.
Mar 14 00:24:57.488464 systemd[1]: Started sshd@8-10.0.0.71:22-10.0.0.1:55214.service - OpenSSH per-connection server daemon (10.0.0.1:55214).
Mar 14 00:24:57.651287 sshd[4164]: Accepted publickey for core from 10.0.0.1 port 55214 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4
Mar 14 00:24:57.663657 sshd[4164]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:24:57.688495 systemd-logind[1547]: New session 9 of user core.
Mar 14 00:24:57.708612 systemd[1]: Started session-9.scope - Session 9 of User core.
Mar 14 00:24:58.011308 sshd[4164]: pam_unix(sshd:session): session closed for user core
Mar 14 00:24:58.016432 systemd[1]: sshd@8-10.0.0.71:22-10.0.0.1:55214.service: Deactivated successfully.
Mar 14 00:24:58.021094 systemd-logind[1547]: Session 9 logged out. Waiting for processes to exit.
Mar 14 00:24:58.021534 systemd[1]: session-9.scope: Deactivated successfully.
Mar 14 00:24:58.023484 systemd-logind[1547]: Removed session 9.
Mar 14 00:25:03.040488 systemd[1]: Started sshd@9-10.0.0.71:22-10.0.0.1:59762.service - OpenSSH per-connection server daemon (10.0.0.1:59762).
Mar 14 00:25:03.122174 sshd[4183]: Accepted publickey for core from 10.0.0.1 port 59762 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4
Mar 14 00:25:03.125358 sshd[4183]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:25:03.162014 systemd-logind[1547]: New session 10 of user core.
Mar 14 00:25:03.180559 systemd[1]: Started session-10.scope - Session 10 of User core.
Mar 14 00:25:03.495795 sshd[4183]: pam_unix(sshd:session): session closed for user core
Mar 14 00:25:03.506257 systemd[1]: sshd@9-10.0.0.71:22-10.0.0.1:59762.service: Deactivated successfully.
Mar 14 00:25:03.516426 systemd[1]: session-10.scope: Deactivated successfully.
Mar 14 00:25:03.516747 systemd-logind[1547]: Session 10 logged out. Waiting for processes to exit.
Mar 14 00:25:03.519807 systemd-logind[1547]: Removed session 10.
Mar 14 00:25:05.855299 kubelet[2729]: E0314 00:25:05.853342 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:25:08.527413 systemd[1]: Started sshd@10-10.0.0.71:22-10.0.0.1:59778.service - OpenSSH per-connection server daemon (10.0.0.1:59778).
Mar 14 00:25:08.836964 sshd[4199]: Accepted publickey for core from 10.0.0.1 port 59778 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4
Mar 14 00:25:08.852012 sshd[4199]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:25:08.882043 systemd-logind[1547]: New session 11 of user core.
Mar 14 00:25:08.890614 systemd[1]: Started session-11.scope - Session 11 of User core.
Mar 14 00:25:09.173052 sshd[4199]: pam_unix(sshd:session): session closed for user core
Mar 14 00:25:09.182201 systemd[1]: sshd@10-10.0.0.71:22-10.0.0.1:59778.service: Deactivated successfully.
Mar 14 00:25:09.187720 systemd-logind[1547]: Session 11 logged out. Waiting for processes to exit.
Mar 14 00:25:09.187858 systemd[1]: session-11.scope: Deactivated successfully.
Mar 14 00:25:09.191591 systemd-logind[1547]: Removed session 11.
Mar 14 00:25:10.850021 kubelet[2729]: E0314 00:25:10.849760 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:25:14.206616 systemd[1]: Started sshd@11-10.0.0.71:22-10.0.0.1:46546.service - OpenSSH per-connection server daemon (10.0.0.1:46546).
Mar 14 00:25:14.292500 sshd[4216]: Accepted publickey for core from 10.0.0.1 port 46546 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4
Mar 14 00:25:14.296497 sshd[4216]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:25:14.315082 systemd-logind[1547]: New session 12 of user core.
Mar 14 00:25:14.323416 systemd[1]: Started session-12.scope - Session 12 of User core.
Mar 14 00:25:14.696550 sshd[4216]: pam_unix(sshd:session): session closed for user core
Mar 14 00:25:14.705819 systemd[1]: sshd@11-10.0.0.71:22-10.0.0.1:46546.service: Deactivated successfully.
Mar 14 00:25:14.716125 systemd[1]: session-12.scope: Deactivated successfully.
Mar 14 00:25:14.718116 systemd-logind[1547]: Session 12 logged out. Waiting for processes to exit.
Mar 14 00:25:14.725102 systemd-logind[1547]: Removed session 12.
Mar 14 00:25:17.931446 kubelet[2729]: E0314 00:25:17.931236 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:25:21.428758 systemd[1]: Started sshd@12-10.0.0.71:22-10.0.0.1:46562.service - OpenSSH per-connection server daemon (10.0.0.1:46562).
Mar 14 00:25:21.593664 kubelet[2729]: E0314 00:25:21.589168 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:25:21.631156 sshd[4233]: Accepted publickey for core from 10.0.0.1 port 46562 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4
Mar 14 00:25:21.634368 sshd[4233]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:25:21.685669 systemd-logind[1547]: New session 13 of user core.
Mar 14 00:25:21.694747 systemd[1]: Started session-13.scope - Session 13 of User core.
Mar 14 00:25:22.071353 sshd[4233]: pam_unix(sshd:session): session closed for user core
Mar 14 00:25:22.082664 systemd[1]: sshd@12-10.0.0.71:22-10.0.0.1:46562.service: Deactivated successfully.
Mar 14 00:25:22.091115 systemd[1]: session-13.scope: Deactivated successfully.
Mar 14 00:25:22.096441 systemd-logind[1547]: Session 13 logged out. Waiting for processes to exit.
Mar 14 00:25:22.099730 systemd-logind[1547]: Removed session 13.
Mar 14 00:25:24.851092 kubelet[2729]: E0314 00:25:24.850557 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:25:27.089660 systemd[1]: Started sshd@13-10.0.0.71:22-10.0.0.1:58012.service - OpenSSH per-connection server daemon (10.0.0.1:58012).
Mar 14 00:25:27.224632 sshd[4251]: Accepted publickey for core from 10.0.0.1 port 58012 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4
Mar 14 00:25:27.231293 sshd[4251]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:25:27.252780 systemd-logind[1547]: New session 14 of user core.
Mar 14 00:25:27.266030 systemd[1]: Started session-14.scope - Session 14 of User core.
Mar 14 00:25:27.642557 sshd[4251]: pam_unix(sshd:session): session closed for user core
Mar 14 00:25:27.679469 systemd[1]: sshd@13-10.0.0.71:22-10.0.0.1:58012.service: Deactivated successfully.
Mar 14 00:25:27.687517 systemd-logind[1547]: Session 14 logged out. Waiting for processes to exit.
Mar 14 00:25:27.689711 systemd[1]: session-14.scope: Deactivated successfully.
Mar 14 00:25:27.695295 systemd-logind[1547]: Removed session 14.
Mar 14 00:25:32.664793 systemd[1]: Started sshd@14-10.0.0.71:22-10.0.0.1:55138.service - OpenSSH per-connection server daemon (10.0.0.1:55138).
Mar 14 00:25:32.711428 sshd[4270]: Accepted publickey for core from 10.0.0.1 port 55138 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4
Mar 14 00:25:32.713673 sshd[4270]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:25:32.726770 systemd-logind[1547]: New session 15 of user core.
Mar 14 00:25:32.739749 systemd[1]: Started session-15.scope - Session 15 of User core.
Mar 14 00:25:32.989235 sshd[4270]: pam_unix(sshd:session): session closed for user core
Mar 14 00:25:33.000779 systemd[1]: Started sshd@15-10.0.0.71:22-10.0.0.1:55152.service - OpenSSH per-connection server daemon (10.0.0.1:55152).
Mar 14 00:25:33.003430 systemd[1]: sshd@14-10.0.0.71:22-10.0.0.1:55138.service: Deactivated successfully.
Mar 14 00:25:33.014175 systemd[1]: session-15.scope: Deactivated successfully.
Mar 14 00:25:33.021349 systemd-logind[1547]: Session 15 logged out. Waiting for processes to exit.
Mar 14 00:25:33.027811 systemd-logind[1547]: Removed session 15.
Mar 14 00:25:33.049419 sshd[4284]: Accepted publickey for core from 10.0.0.1 port 55152 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4
Mar 14 00:25:33.055629 sshd[4284]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:25:33.066794 systemd-logind[1547]: New session 16 of user core.
Mar 14 00:25:33.077704 systemd[1]: Started session-16.scope - Session 16 of User core.
Mar 14 00:25:33.419548 sshd[4284]: pam_unix(sshd:session): session closed for user core
Mar 14 00:25:33.468273 systemd[1]: Started sshd@16-10.0.0.71:22-10.0.0.1:55154.service - OpenSSH per-connection server daemon (10.0.0.1:55154).
Mar 14 00:25:33.469218 systemd[1]: sshd@15-10.0.0.71:22-10.0.0.1:55152.service: Deactivated successfully.
Mar 14 00:25:33.491013 systemd[1]: session-16.scope: Deactivated successfully.
Mar 14 00:25:33.500862 systemd-logind[1547]: Session 16 logged out. Waiting for processes to exit.
Mar 14 00:25:33.503189 systemd-logind[1547]: Removed session 16.
Mar 14 00:25:33.569491 sshd[4298]: Accepted publickey for core from 10.0.0.1 port 55154 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4
Mar 14 00:25:33.574855 sshd[4298]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:25:33.588745 systemd-logind[1547]: New session 17 of user core.
Mar 14 00:25:33.597675 systemd[1]: Started session-17.scope - Session 17 of User core.
Mar 14 00:25:33.926016 sshd[4298]: pam_unix(sshd:session): session closed for user core
Mar 14 00:25:33.942555 systemd[1]: sshd@16-10.0.0.71:22-10.0.0.1:55154.service: Deactivated successfully.
Mar 14 00:25:33.960529 systemd[1]: session-17.scope: Deactivated successfully.
Mar 14 00:25:33.961387 systemd-logind[1547]: Session 17 logged out. Waiting for processes to exit.
Mar 14 00:25:33.967567 systemd-logind[1547]: Removed session 17.
Mar 14 00:25:38.984641 systemd[1]: Started sshd@17-10.0.0.71:22-10.0.0.1:55170.service - OpenSSH per-connection server daemon (10.0.0.1:55170).
Mar 14 00:25:39.062569 sshd[4318]: Accepted publickey for core from 10.0.0.1 port 55170 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4
Mar 14 00:25:39.069433 sshd[4318]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:25:39.084778 systemd-logind[1547]: New session 18 of user core.
Mar 14 00:25:39.093419 systemd[1]: Started session-18.scope - Session 18 of User core.
Mar 14 00:25:39.580839 sshd[4318]: pam_unix(sshd:session): session closed for user core
Mar 14 00:25:39.587554 systemd[1]: sshd@17-10.0.0.71:22-10.0.0.1:55170.service: Deactivated successfully.
Mar 14 00:25:39.599489 systemd[1]: session-18.scope: Deactivated successfully.
Mar 14 00:25:39.601641 systemd-logind[1547]: Session 18 logged out. Waiting for processes to exit.
Mar 14 00:25:39.603469 systemd-logind[1547]: Removed session 18.
Mar 14 00:25:40.856585 kubelet[2729]: E0314 00:25:40.855183 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:25:44.593425 systemd[1]: Started sshd@18-10.0.0.71:22-10.0.0.1:38592.service - OpenSSH per-connection server daemon (10.0.0.1:38592).
Mar 14 00:25:44.678424 sshd[4333]: Accepted publickey for core from 10.0.0.1 port 38592 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4
Mar 14 00:25:44.680979 sshd[4333]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:25:44.693080 systemd-logind[1547]: New session 19 of user core.
Mar 14 00:25:44.698636 systemd[1]: Started session-19.scope - Session 19 of User core.
Mar 14 00:25:45.032281 sshd[4333]: pam_unix(sshd:session): session closed for user core
Mar 14 00:25:45.039741 systemd[1]: sshd@18-10.0.0.71:22-10.0.0.1:38592.service: Deactivated successfully.
Mar 14 00:25:45.045596 systemd[1]: session-19.scope: Deactivated successfully.
Mar 14 00:25:45.046840 systemd-logind[1547]: Session 19 logged out. Waiting for processes to exit.
Mar 14 00:25:45.053711 systemd-logind[1547]: Removed session 19.
Mar 14 00:25:48.854623 kubelet[2729]: E0314 00:25:48.852245 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:25:50.090743 systemd[1]: Started sshd@19-10.0.0.71:22-10.0.0.1:45092.service - OpenSSH per-connection server daemon (10.0.0.1:45092).
Mar 14 00:25:50.210092 sshd[4349]: Accepted publickey for core from 10.0.0.1 port 45092 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4
Mar 14 00:25:50.227758 sshd[4349]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:25:50.288305 systemd-logind[1547]: New session 20 of user core.
Mar 14 00:25:50.296497 systemd[1]: Started session-20.scope - Session 20 of User core.
Mar 14 00:25:50.665224 sshd[4349]: pam_unix(sshd:session): session closed for user core
Mar 14 00:25:50.678688 systemd[1]: sshd@19-10.0.0.71:22-10.0.0.1:45092.service: Deactivated successfully.
Mar 14 00:25:50.694619 systemd[1]: session-20.scope: Deactivated successfully.
Mar 14 00:25:50.702300 systemd-logind[1547]: Session 20 logged out. Waiting for processes to exit.
Mar 14 00:25:50.707944 systemd-logind[1547]: Removed session 20.
Mar 14 00:25:55.692766 systemd[1]: Started sshd@20-10.0.0.71:22-10.0.0.1:45096.service - OpenSSH per-connection server daemon (10.0.0.1:45096).
Mar 14 00:25:55.780398 sshd[4366]: Accepted publickey for core from 10.0.0.1 port 45096 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4
Mar 14 00:25:55.783185 sshd[4366]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:25:55.802662 systemd-logind[1547]: New session 21 of user core.
Mar 14 00:25:55.810779 systemd[1]: Started session-21.scope - Session 21 of User core.
Mar 14 00:25:56.115614 sshd[4366]: pam_unix(sshd:session): session closed for user core
Mar 14 00:25:56.125662 systemd[1]: sshd@20-10.0.0.71:22-10.0.0.1:45096.service: Deactivated successfully.
Mar 14 00:25:56.163132 systemd-logind[1547]: Session 21 logged out. Waiting for processes to exit.
Mar 14 00:25:56.164206 systemd[1]: session-21.scope: Deactivated successfully.
Mar 14 00:25:56.172713 systemd-logind[1547]: Removed session 21.
Mar 14 00:25:56.850508 kubelet[2729]: E0314 00:25:56.850392 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:26:01.128343 systemd[1]: Started sshd@21-10.0.0.71:22-10.0.0.1:39964.service - OpenSSH per-connection server daemon (10.0.0.1:39964).
Mar 14 00:26:01.168462 sshd[4384]: Accepted publickey for core from 10.0.0.1 port 39964 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4
Mar 14 00:26:01.171968 sshd[4384]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:26:01.187620 systemd-logind[1547]: New session 22 of user core.
Mar 14 00:26:01.207799 systemd[1]: Started session-22.scope - Session 22 of User core.
Mar 14 00:26:01.355014 sshd[4384]: pam_unix(sshd:session): session closed for user core
Mar 14 00:26:01.364523 systemd[1]: Started sshd@22-10.0.0.71:22-10.0.0.1:39978.service - OpenSSH per-connection server daemon (10.0.0.1:39978).
Mar 14 00:26:01.365689 systemd[1]: sshd@21-10.0.0.71:22-10.0.0.1:39964.service: Deactivated successfully.
Mar 14 00:26:01.384414 systemd[1]: session-22.scope: Deactivated successfully.
Mar 14 00:26:01.387679 systemd-logind[1547]: Session 22 logged out. Waiting for processes to exit.
Mar 14 00:26:01.390853 systemd-logind[1547]: Removed session 22.
Mar 14 00:26:01.428470 sshd[4398]: Accepted publickey for core from 10.0.0.1 port 39978 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4
Mar 14 00:26:01.431203 sshd[4398]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:26:01.440739 systemd-logind[1547]: New session 23 of user core.
Mar 14 00:26:01.453533 systemd[1]: Started session-23.scope - Session 23 of User core.
Mar 14 00:26:01.991468 sshd[4398]: pam_unix(sshd:session): session closed for user core
Mar 14 00:26:02.004377 systemd[1]: Started sshd@23-10.0.0.71:22-10.0.0.1:39994.service - OpenSSH per-connection server daemon (10.0.0.1:39994).
Mar 14 00:26:02.005128 systemd[1]: sshd@22-10.0.0.71:22-10.0.0.1:39978.service: Deactivated successfully.
Mar 14 00:26:02.010829 systemd[1]: session-23.scope: Deactivated successfully.
Mar 14 00:26:02.012762 systemd-logind[1547]: Session 23 logged out. Waiting for processes to exit.
Mar 14 00:26:02.015705 systemd-logind[1547]: Removed session 23.
Mar 14 00:26:02.078607 sshd[4411]: Accepted publickey for core from 10.0.0.1 port 39994 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4
Mar 14 00:26:02.082297 sshd[4411]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:26:02.104313 systemd-logind[1547]: New session 24 of user core.
Mar 14 00:26:02.119515 systemd[1]: Started session-24.scope - Session 24 of User core.
Mar 14 00:26:03.007203 sshd[4411]: pam_unix(sshd:session): session closed for user core
Mar 14 00:26:03.030777 systemd[1]: Started sshd@24-10.0.0.71:22-10.0.0.1:39996.service - OpenSSH per-connection server daemon (10.0.0.1:39996).
Mar 14 00:26:03.031700 systemd[1]: sshd@23-10.0.0.71:22-10.0.0.1:39994.service: Deactivated successfully.
Mar 14 00:26:03.037453 systemd[1]: session-24.scope: Deactivated successfully.
Mar 14 00:26:03.041436 systemd-logind[1547]: Session 24 logged out. Waiting for processes to exit.
Mar 14 00:26:03.043968 systemd-logind[1547]: Removed session 24.
Mar 14 00:26:03.079133 sshd[4431]: Accepted publickey for core from 10.0.0.1 port 39996 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4
Mar 14 00:26:03.081729 sshd[4431]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:26:03.096440 systemd-logind[1547]: New session 25 of user core.
Mar 14 00:26:03.110813 systemd[1]: Started session-25.scope - Session 25 of User core.
Mar 14 00:26:03.697838 sshd[4431]: pam_unix(sshd:session): session closed for user core
Mar 14 00:26:03.710509 systemd[1]: Started sshd@25-10.0.0.71:22-10.0.0.1:39998.service - OpenSSH per-connection server daemon (10.0.0.1:39998).
Mar 14 00:26:03.711416 systemd[1]: sshd@24-10.0.0.71:22-10.0.0.1:39996.service: Deactivated successfully.
Mar 14 00:26:03.717571 systemd[1]: session-25.scope: Deactivated successfully.
Mar 14 00:26:03.719079 systemd-logind[1547]: Session 25 logged out. Waiting for processes to exit.
Mar 14 00:26:03.722232 systemd-logind[1547]: Removed session 25.
Mar 14 00:26:03.763809 sshd[4444]: Accepted publickey for core from 10.0.0.1 port 39998 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4
Mar 14 00:26:03.766597 sshd[4444]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:26:03.780206 systemd-logind[1547]: New session 26 of user core.
Mar 14 00:26:03.790480 systemd[1]: Started session-26.scope - Session 26 of User core.
Mar 14 00:26:03.968401 sshd[4444]: pam_unix(sshd:session): session closed for user core
Mar 14 00:26:03.977667 systemd[1]: sshd@25-10.0.0.71:22-10.0.0.1:39998.service: Deactivated successfully.
Mar 14 00:26:03.984344 systemd[1]: session-26.scope: Deactivated successfully.
Mar 14 00:26:03.984521 systemd-logind[1547]: Session 26 logged out. Waiting for processes to exit.
Mar 14 00:26:03.989822 systemd-logind[1547]: Removed session 26.
Mar 14 00:26:08.983674 systemd[1]: Started sshd@26-10.0.0.71:22-10.0.0.1:40006.service - OpenSSH per-connection server daemon (10.0.0.1:40006).
Mar 14 00:26:09.189207 sshd[4463]: Accepted publickey for core from 10.0.0.1 port 40006 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4
Mar 14 00:26:09.191861 sshd[4463]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:26:09.216457 systemd-logind[1547]: New session 27 of user core.
Mar 14 00:26:09.231825 systemd[1]: Started session-27.scope - Session 27 of User core.
Mar 14 00:26:09.483711 sshd[4463]: pam_unix(sshd:session): session closed for user core
Mar 14 00:26:09.491554 systemd[1]: sshd@26-10.0.0.71:22-10.0.0.1:40006.service: Deactivated successfully.
Mar 14 00:26:09.496151 systemd-logind[1547]: Session 27 logged out. Waiting for processes to exit.
Mar 14 00:26:09.498709 systemd[1]: session-27.scope: Deactivated successfully.
Mar 14 00:26:09.500708 systemd-logind[1547]: Removed session 27.
Mar 14 00:26:14.508246 systemd[1]: Started sshd@27-10.0.0.71:22-10.0.0.1:55776.service - OpenSSH per-connection server daemon (10.0.0.1:55776).
Mar 14 00:26:14.585285 sshd[4482]: Accepted publickey for core from 10.0.0.1 port 55776 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4
Mar 14 00:26:14.587573 sshd[4482]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:26:14.617990 systemd-logind[1547]: New session 28 of user core.
Mar 14 00:26:14.628388 systemd[1]: Started session-28.scope - Session 28 of User core.
Mar 14 00:26:14.860169 sshd[4482]: pam_unix(sshd:session): session closed for user core
Mar 14 00:26:14.867802 systemd[1]: sshd@27-10.0.0.71:22-10.0.0.1:55776.service: Deactivated successfully.
Mar 14 00:26:14.872517 systemd-logind[1547]: Session 28 logged out. Waiting for processes to exit.
Mar 14 00:26:14.872537 systemd[1]: session-28.scope: Deactivated successfully.
Mar 14 00:26:14.874565 systemd-logind[1547]: Removed session 28.
Mar 14 00:26:19.872370 systemd[1]: Started sshd@28-10.0.0.71:22-10.0.0.1:55792.service - OpenSSH per-connection server daemon (10.0.0.1:55792).
Mar 14 00:26:19.915739 sshd[4500]: Accepted publickey for core from 10.0.0.1 port 55792 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4
Mar 14 00:26:19.918241 sshd[4500]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:26:19.925387 systemd-logind[1547]: New session 29 of user core.
Mar 14 00:26:19.936536 systemd[1]: Started session-29.scope - Session 29 of User core.
Mar 14 00:26:20.100108 sshd[4500]: pam_unix(sshd:session): session closed for user core
Mar 14 00:26:20.106687 systemd[1]: sshd@28-10.0.0.71:22-10.0.0.1:55792.service: Deactivated successfully.
Mar 14 00:26:20.110409 systemd-logind[1547]: Session 29 logged out. Waiting for processes to exit.
Mar 14 00:26:20.110434 systemd[1]: session-29.scope: Deactivated successfully.
Mar 14 00:26:20.112722 systemd-logind[1547]: Removed session 29.
Mar 14 00:26:22.850000 kubelet[2729]: E0314 00:26:22.849581 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:26:24.849558 kubelet[2729]: E0314 00:26:24.849428 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:26:25.119443 systemd[1]: Started sshd@29-10.0.0.71:22-10.0.0.1:53264.service - OpenSSH per-connection server daemon (10.0.0.1:53264).
Mar 14 00:26:25.202097 sshd[4515]: Accepted publickey for core from 10.0.0.1 port 53264 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4
Mar 14 00:26:25.205246 sshd[4515]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:26:25.215655 systemd-logind[1547]: New session 30 of user core.
Mar 14 00:26:25.227733 systemd[1]: Started session-30.scope - Session 30 of User core.
Mar 14 00:26:25.396616 sshd[4515]: pam_unix(sshd:session): session closed for user core
Mar 14 00:26:25.405135 systemd[1]: sshd@29-10.0.0.71:22-10.0.0.1:53264.service: Deactivated successfully.
Mar 14 00:26:25.409212 systemd-logind[1547]: Session 30 logged out. Waiting for processes to exit.
Mar 14 00:26:25.409547 systemd[1]: session-30.scope: Deactivated successfully.
Mar 14 00:26:25.411837 systemd-logind[1547]: Removed session 30.
Mar 14 00:26:30.413379 systemd[1]: Started sshd@30-10.0.0.71:22-10.0.0.1:53108.service - OpenSSH per-connection server daemon (10.0.0.1:53108).
Mar 14 00:26:30.470449 sshd[4533]: Accepted publickey for core from 10.0.0.1 port 53108 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4
Mar 14 00:26:30.473261 sshd[4533]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:26:30.484865 systemd-logind[1547]: New session 31 of user core.
Mar 14 00:26:30.491561 systemd[1]: Started session-31.scope - Session 31 of User core.
Mar 14 00:26:30.644422 sshd[4533]: pam_unix(sshd:session): session closed for user core
Mar 14 00:26:30.652378 systemd[1]: Started sshd@31-10.0.0.71:22-10.0.0.1:53110.service - OpenSSH per-connection server daemon (10.0.0.1:53110).
Mar 14 00:26:30.653292 systemd[1]: sshd@30-10.0.0.71:22-10.0.0.1:53108.service: Deactivated successfully.
Mar 14 00:26:30.660271 systemd[1]: session-31.scope: Deactivated successfully.
Mar 14 00:26:30.662399 systemd-logind[1547]: Session 31 logged out. Waiting for processes to exit.
Mar 14 00:26:30.665112 systemd-logind[1547]: Removed session 31.
Mar 14 00:26:30.724231 sshd[4546]: Accepted publickey for core from 10.0.0.1 port 53110 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4
Mar 14 00:26:30.727564 sshd[4546]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:26:30.735852 systemd-logind[1547]: New session 32 of user core.
Mar 14 00:26:30.749006 systemd[1]: Started session-32.scope - Session 32 of User core.
Mar 14 00:26:32.330656 containerd[1572]: time="2026-03-14T00:26:32.330274840Z" level=info msg="StopContainer for \"6b48b67b3502a0ecd1a2cf404538c7cab1b87ddb19164da5cdcfe8a12ac9fe11\" with timeout 30 (s)"
Mar 14 00:26:32.333375 containerd[1572]: time="2026-03-14T00:26:32.333257081Z" level=info msg="Stop container \"6b48b67b3502a0ecd1a2cf404538c7cab1b87ddb19164da5cdcfe8a12ac9fe11\" with signal terminated"
Mar 14 00:26:32.351665 systemd[1]: run-containerd-runc-k8s.io-7c63f985bdf7b313ab193f956c5ae1a9dd9114e280cd6788f2ac799f94df2d91-runc.B05l7N.mount: Deactivated successfully.
Mar 14 00:26:32.400681 containerd[1572]: time="2026-03-14T00:26:32.400218530Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 14 00:26:32.417373 containerd[1572]: time="2026-03-14T00:26:32.415816120Z" level=info msg="StopContainer for \"7c63f985bdf7b313ab193f956c5ae1a9dd9114e280cd6788f2ac799f94df2d91\" with timeout 2 (s)"
Mar 14 00:26:32.418577 containerd[1572]: time="2026-03-14T00:26:32.418501018Z" level=info msg="Stop container \"7c63f985bdf7b313ab193f956c5ae1a9dd9114e280cd6788f2ac799f94df2d91\" with signal terminated"
Mar 14 00:26:32.421224 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6b48b67b3502a0ecd1a2cf404538c7cab1b87ddb19164da5cdcfe8a12ac9fe11-rootfs.mount: Deactivated successfully.
Mar 14 00:26:32.451211 systemd-networkd[1248]: lxc_health: Link DOWN
Mar 14 00:26:32.451793 systemd-networkd[1248]: lxc_health: Lost carrier
Mar 14 00:26:32.468840 containerd[1572]: time="2026-03-14T00:26:32.468667435Z" level=info msg="shim disconnected" id=6b48b67b3502a0ecd1a2cf404538c7cab1b87ddb19164da5cdcfe8a12ac9fe11 namespace=k8s.io
Mar 14 00:26:32.468840 containerd[1572]: time="2026-03-14T00:26:32.468744058Z" level=warning msg="cleaning up after shim disconnected" id=6b48b67b3502a0ecd1a2cf404538c7cab1b87ddb19164da5cdcfe8a12ac9fe11 namespace=k8s.io
Mar 14 00:26:32.468840 containerd[1572]: time="2026-03-14T00:26:32.468760388Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 14 00:26:32.525088 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7c63f985bdf7b313ab193f956c5ae1a9dd9114e280cd6788f2ac799f94df2d91-rootfs.mount: Deactivated successfully.
Mar 14 00:26:32.529654 containerd[1572]: time="2026-03-14T00:26:32.526663275Z" level=info msg="StopContainer for \"6b48b67b3502a0ecd1a2cf404538c7cab1b87ddb19164da5cdcfe8a12ac9fe11\" returns successfully"
Mar 14 00:26:32.533327 containerd[1572]: time="2026-03-14T00:26:32.533149943Z" level=info msg="StopPodSandbox for \"850cbe24bfd5a3502b957ef8d909dd2a23436d84573f43bd6cb196df7621fab4\""
Mar 14 00:26:32.533327 containerd[1572]: time="2026-03-14T00:26:32.533285988Z" level=info msg="Container to stop \"6b48b67b3502a0ecd1a2cf404538c7cab1b87ddb19164da5cdcfe8a12ac9fe11\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 14 00:26:32.548699 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-850cbe24bfd5a3502b957ef8d909dd2a23436d84573f43bd6cb196df7621fab4-shm.mount: Deactivated successfully.
Mar 14 00:26:32.550124 containerd[1572]: time="2026-03-14T00:26:32.548992668Z" level=info msg="shim disconnected" id=7c63f985bdf7b313ab193f956c5ae1a9dd9114e280cd6788f2ac799f94df2d91 namespace=k8s.io
Mar 14 00:26:32.550124 containerd[1572]: time="2026-03-14T00:26:32.549050857Z" level=warning msg="cleaning up after shim disconnected" id=7c63f985bdf7b313ab193f956c5ae1a9dd9114e280cd6788f2ac799f94df2d91 namespace=k8s.io
Mar 14 00:26:32.550124 containerd[1572]: time="2026-03-14T00:26:32.549062128Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 14 00:26:32.583858 containerd[1572]: time="2026-03-14T00:26:32.583633477Z" level=info msg="StopContainer for \"7c63f985bdf7b313ab193f956c5ae1a9dd9114e280cd6788f2ac799f94df2d91\" returns successfully"
Mar 14 00:26:32.586739 containerd[1572]: time="2026-03-14T00:26:32.586644341Z" level=info msg="StopPodSandbox for \"fb48592e017fe914e8f6e4087fb67b2ee055576739be4153b26ebba1b04a8dec\""
Mar 14 00:26:32.586836 containerd[1572]: time="2026-03-14T00:26:32.586732726Z" level=info msg="Container to stop \"b7c03a7c8254d60d7596b7139f2e32b5e9af3c3122e8c8a23cbc44d591a9ad6a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 14 00:26:32.586946 containerd[1572]: time="2026-03-14T00:26:32.586841107Z" level=info msg="Container to stop \"7c63f985bdf7b313ab193f956c5ae1a9dd9114e280cd6788f2ac799f94df2d91\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 14 00:26:32.587015 containerd[1572]: time="2026-03-14T00:26:32.586864591Z" level=info msg="Container to stop \"954e6daeefef115116689516b9fde4babfe95ede4700dd872d39831feee7bdc6\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 14 00:26:32.587015 containerd[1572]: time="2026-03-14T00:26:32.586963184Z" level=info msg="Container to stop \"5b8a9a5f7be7c2fd322bf016c092b769a87ada914fcafafdf53c5e76295e580d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 14 00:26:32.587095 containerd[1572]: time="2026-03-14T00:26:32.586983331Z" level=info msg="Container to stop \"98a09621a3f348a271c9b08ebbb836446bdee2563939cbd05df9077a778fee8d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 14 00:26:32.616956 containerd[1572]: time="2026-03-14T00:26:32.616686383Z" level=info msg="shim disconnected" id=850cbe24bfd5a3502b957ef8d909dd2a23436d84573f43bd6cb196df7621fab4 namespace=k8s.io
Mar 14 00:26:32.616956 containerd[1572]: time="2026-03-14T00:26:32.616854526Z" level=warning msg="cleaning up after shim disconnected" id=850cbe24bfd5a3502b957ef8d909dd2a23436d84573f43bd6cb196df7621fab4 namespace=k8s.io
Mar 14 00:26:32.616956 containerd[1572]: time="2026-03-14T00:26:32.616871447Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 14 00:26:32.663640 containerd[1572]: time="2026-03-14T00:26:32.663322690Z" level=info msg="shim disconnected" id=fb48592e017fe914e8f6e4087fb67b2ee055576739be4153b26ebba1b04a8dec namespace=k8s.io
Mar 14 00:26:32.663640 containerd[1572]: time="2026-03-14T00:26:32.663397278Z" level=warning msg="cleaning up after shim disconnected" id=fb48592e017fe914e8f6e4087fb67b2ee055576739be4153b26ebba1b04a8dec namespace=k8s.io
Mar 14 00:26:32.663640 containerd[1572]: time="2026-03-14T00:26:32.663413528Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 14 00:26:32.664122 containerd[1572]: time="2026-03-14T00:26:32.663750551Z" level=info msg="TearDown network for sandbox \"850cbe24bfd5a3502b957ef8d909dd2a23436d84573f43bd6cb196df7621fab4\" successfully"
Mar 14 00:26:32.664122 containerd[1572]: time="2026-03-14T00:26:32.663793300Z" level=info msg="StopPodSandbox for \"850cbe24bfd5a3502b957ef8d909dd2a23436d84573f43bd6cb196df7621fab4\" returns successfully"
Mar 14 00:26:32.698667 containerd[1572]: time="2026-03-14T00:26:32.698541981Z" level=info msg="TearDown network for sandbox \"fb48592e017fe914e8f6e4087fb67b2ee055576739be4153b26ebba1b04a8dec\" successfully"
Mar 14 00:26:32.698667 containerd[1572]: time="2026-03-14T00:26:32.698599097Z" level=info msg="StopPodSandbox for \"fb48592e017fe914e8f6e4087fb67b2ee055576739be4153b26ebba1b04a8dec\" returns successfully"
Mar 14 00:26:32.758625 kubelet[2729]: I0314 00:26:32.758261 2729 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1e079af1-1818-42c7-a3ae-a18e69f43681-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "1e079af1-1818-42c7-a3ae-a18e69f43681" (UID: "1e079af1-1818-42c7-a3ae-a18e69f43681"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 14 00:26:32.758625 kubelet[2729]: I0314 00:26:32.758294 2729 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1e079af1-1818-42c7-a3ae-a18e69f43681-host-proc-sys-net\") pod \"1e079af1-1818-42c7-a3ae-a18e69f43681\" (UID: \"1e079af1-1818-42c7-a3ae-a18e69f43681\") "
Mar 14 00:26:32.758625 kubelet[2729]: I0314 00:26:32.758337 2729 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1e079af1-1818-42c7-a3ae-a18e69f43681-cilium-cgroup\") pod \"1e079af1-1818-42c7-a3ae-a18e69f43681\" (UID: \"1e079af1-1818-42c7-a3ae-a18e69f43681\") "
Mar 14 00:26:32.758625 kubelet[2729]: I0314 00:26:32.758365 2729 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1e079af1-1818-42c7-a3ae-a18e69f43681-hostproc\") pod \"1e079af1-1818-42c7-a3ae-a18e69f43681\" (UID: \"1e079af1-1818-42c7-a3ae-a18e69f43681\") "
Mar 14 00:26:32.758625 kubelet[2729]: I0314 00:26:32.758382 2729 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1e079af1-1818-42c7-a3ae-a18e69f43681-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "1e079af1-1818-42c7-a3ae-a18e69f43681" (UID: "1e079af1-1818-42c7-a3ae-a18e69f43681"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 14 00:26:32.764812 kubelet[2729]: I0314 00:26:32.758392 2729 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1e079af1-1818-42c7-a3ae-a18e69f43681-hostproc" (OuterVolumeSpecName: "hostproc") pod "1e079af1-1818-42c7-a3ae-a18e69f43681" (UID: "1e079af1-1818-42c7-a3ae-a18e69f43681"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 14 00:26:32.764812 kubelet[2729]: I0314 00:26:32.758387 2729 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1e079af1-1818-42c7-a3ae-a18e69f43681-cilium-run\") pod \"1e079af1-1818-42c7-a3ae-a18e69f43681\" (UID: \"1e079af1-1818-42c7-a3ae-a18e69f43681\") "
Mar 14 00:26:32.764812 kubelet[2729]: I0314 00:26:32.758408 2729 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1e079af1-1818-42c7-a3ae-a18e69f43681-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "1e079af1-1818-42c7-a3ae-a18e69f43681" (UID: "1e079af1-1818-42c7-a3ae-a18e69f43681"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 14 00:26:32.764812 kubelet[2729]: I0314 00:26:32.758425 2729 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1e079af1-1818-42c7-a3ae-a18e69f43681-lib-modules\") pod \"1e079af1-1818-42c7-a3ae-a18e69f43681\" (UID: \"1e079af1-1818-42c7-a3ae-a18e69f43681\") "
Mar 14 00:26:32.764812 kubelet[2729]: I0314 00:26:32.758442 2729 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1e079af1-1818-42c7-a3ae-a18e69f43681-xtables-lock\") pod \"1e079af1-1818-42c7-a3ae-a18e69f43681\" (UID: \"1e079af1-1818-42c7-a3ae-a18e69f43681\") "
Mar 14 00:26:32.765119 kubelet[2729]: I0314 00:26:32.758473 2729 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1e079af1-1818-42c7-a3ae-a18e69f43681-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "1e079af1-1818-42c7-a3ae-a18e69f43681" (UID: "1e079af1-1818-42c7-a3ae-a18e69f43681"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 14 00:26:32.765119 kubelet[2729]: I0314 00:26:32.758512 2729 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1e079af1-1818-42c7-a3ae-a18e69f43681-clustermesh-secrets\") pod \"1e079af1-1818-42c7-a3ae-a18e69f43681\" (UID: \"1e079af1-1818-42c7-a3ae-a18e69f43681\") "
Mar 14 00:26:32.765119 kubelet[2729]: I0314 00:26:32.758547 2729 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1e079af1-1818-42c7-a3ae-a18e69f43681-hubble-tls\") pod \"1e079af1-1818-42c7-a3ae-a18e69f43681\" (UID: \"1e079af1-1818-42c7-a3ae-a18e69f43681\") "
Mar 14 00:26:32.765119 kubelet[2729]: I0314 00:26:32.758575 2729 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vf9mn\" (UniqueName: \"kubernetes.io/projected/1e079af1-1818-42c7-a3ae-a18e69f43681-kube-api-access-vf9mn\") pod \"1e079af1-1818-42c7-a3ae-a18e69f43681\" (UID: \"1e079af1-1818-42c7-a3ae-a18e69f43681\") "
Mar 14 00:26:32.765119 kubelet[2729]: I0314 00:26:32.758607 2729 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1e079af1-1818-42c7-a3ae-a18e69f43681-host-proc-sys-kernel\") pod \"1e079af1-1818-42c7-a3ae-a18e69f43681\" (UID: \"1e079af1-1818-42c7-a3ae-a18e69f43681\") "
Mar 14 00:26:32.765119 kubelet[2729]: I0314 00:26:32.758626 2729 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1e079af1-1818-42c7-a3ae-a18e69f43681-bpf-maps\") pod \"1e079af1-1818-42c7-a3ae-a18e69f43681\" (UID: \"1e079af1-1818-42c7-a3ae-a18e69f43681\") "
Mar 14 00:26:32.765504 kubelet[2729]: I0314 00:26:32.758643 2729 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1e079af1-1818-42c7-a3ae-a18e69f43681-cilium-config-path\") pod \"1e079af1-1818-42c7-a3ae-a18e69f43681\" (UID: \"1e079af1-1818-42c7-a3ae-a18e69f43681\") "
Mar 14 00:26:32.765504 kubelet[2729]: I0314 00:26:32.758655 2729 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1e079af1-1818-42c7-a3ae-a18e69f43681-cni-path\") pod \"1e079af1-1818-42c7-a3ae-a18e69f43681\" (UID: \"1e079af1-1818-42c7-a3ae-a18e69f43681\") "
Mar 14 00:26:32.765504 kubelet[2729]: I0314 00:26:32.758670 2729 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/88d7e42a-234e-4a2c-9bd2-7502a7baa60e-cilium-config-path\") pod \"88d7e42a-234e-4a2c-9bd2-7502a7baa60e\" (UID: \"88d7e42a-234e-4a2c-9bd2-7502a7baa60e\") "
Mar 14 00:26:32.765504 kubelet[2729]: I0314 00:26:32.758685 2729 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m4g6g\" (UniqueName: \"kubernetes.io/projected/88d7e42a-234e-4a2c-9bd2-7502a7baa60e-kube-api-access-m4g6g\") pod \"88d7e42a-234e-4a2c-9bd2-7502a7baa60e\" (UID: \"88d7e42a-234e-4a2c-9bd2-7502a7baa60e\") "
Mar 14 00:26:32.765504 kubelet[2729]: I0314 00:26:32.758698 2729 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1e079af1-1818-42c7-a3ae-a18e69f43681-etc-cni-netd\") pod \"1e079af1-1818-42c7-a3ae-a18e69f43681\" (UID: \"1e079af1-1818-42c7-a3ae-a18e69f43681\") "
Mar 14 00:26:32.765504 kubelet[2729]: I0314 00:26:32.758728 2729 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1e079af1-1818-42c7-a3ae-a18e69f43681-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Mar 14 00:26:32.765504 kubelet[2729]: I0314 00:26:32.758738 2729 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1e079af1-1818-42c7-a3ae-a18e69f43681-hostproc\") on node \"localhost\" DevicePath \"\""
Mar 14 00:26:32.765857 kubelet[2729]: I0314 00:26:32.758746 2729 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1e079af1-1818-42c7-a3ae-a18e69f43681-cilium-run\") on node \"localhost\" DevicePath \"\""
Mar 14 00:26:32.765857 kubelet[2729]: I0314 00:26:32.758755 2729 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1e079af1-1818-42c7-a3ae-a18e69f43681-lib-modules\") on node \"localhost\" DevicePath \"\""
Mar 14 00:26:32.765857 kubelet[2729]: I0314 00:26:32.758763 2729 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1e079af1-1818-42c7-a3ae-a18e69f43681-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Mar 14 00:26:32.765857 kubelet[2729]: I0314 00:26:32.758790 2729 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1e079af1-1818-42c7-a3ae-a18e69f43681-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "1e079af1-1818-42c7-a3ae-a18e69f43681" (UID: "1e079af1-1818-42c7-a3ae-a18e69f43681"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 14 00:26:32.765857 kubelet[2729]: I0314 00:26:32.759321 2729 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1e079af1-1818-42c7-a3ae-a18e69f43681-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "1e079af1-1818-42c7-a3ae-a18e69f43681" (UID: "1e079af1-1818-42c7-a3ae-a18e69f43681"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 14 00:26:32.765857 kubelet[2729]: I0314 00:26:32.764207 2729 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1e079af1-1818-42c7-a3ae-a18e69f43681-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "1e079af1-1818-42c7-a3ae-a18e69f43681" (UID: "1e079af1-1818-42c7-a3ae-a18e69f43681"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 14 00:26:32.767119 kubelet[2729]: I0314 00:26:32.766990 2729 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e079af1-1818-42c7-a3ae-a18e69f43681-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "1e079af1-1818-42c7-a3ae-a18e69f43681" (UID: "1e079af1-1818-42c7-a3ae-a18e69f43681"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Mar 14 00:26:32.767119 kubelet[2729]: I0314 00:26:32.767057 2729 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1e079af1-1818-42c7-a3ae-a18e69f43681-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "1e079af1-1818-42c7-a3ae-a18e69f43681" (UID: "1e079af1-1818-42c7-a3ae-a18e69f43681"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 14 00:26:32.767119 kubelet[2729]: I0314 00:26:32.767090 2729 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1e079af1-1818-42c7-a3ae-a18e69f43681-cni-path" (OuterVolumeSpecName: "cni-path") pod "1e079af1-1818-42c7-a3ae-a18e69f43681" (UID: "1e079af1-1818-42c7-a3ae-a18e69f43681"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 14 00:26:32.773285 kubelet[2729]: I0314 00:26:32.773209 2729 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e079af1-1818-42c7-a3ae-a18e69f43681-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "1e079af1-1818-42c7-a3ae-a18e69f43681" (UID: "1e079af1-1818-42c7-a3ae-a18e69f43681"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Mar 14 00:26:32.773552 kubelet[2729]: I0314 00:26:32.773401 2729 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e079af1-1818-42c7-a3ae-a18e69f43681-kube-api-access-vf9mn" (OuterVolumeSpecName: "kube-api-access-vf9mn") pod "1e079af1-1818-42c7-a3ae-a18e69f43681" (UID: "1e079af1-1818-42c7-a3ae-a18e69f43681"). InnerVolumeSpecName "kube-api-access-vf9mn". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 14 00:26:32.774082 kubelet[2729]: I0314 00:26:32.774053 2729 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88d7e42a-234e-4a2c-9bd2-7502a7baa60e-kube-api-access-m4g6g" (OuterVolumeSpecName: "kube-api-access-m4g6g") pod "88d7e42a-234e-4a2c-9bd2-7502a7baa60e" (UID: "88d7e42a-234e-4a2c-9bd2-7502a7baa60e"). InnerVolumeSpecName "kube-api-access-m4g6g". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 14 00:26:32.774480 kubelet[2729]: I0314 00:26:32.774456 2729 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88d7e42a-234e-4a2c-9bd2-7502a7baa60e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "88d7e42a-234e-4a2c-9bd2-7502a7baa60e" (UID: "88d7e42a-234e-4a2c-9bd2-7502a7baa60e"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Mar 14 00:26:32.775762 kubelet[2729]: I0314 00:26:32.775461 2729 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e079af1-1818-42c7-a3ae-a18e69f43681-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "1e079af1-1818-42c7-a3ae-a18e69f43681" (UID: "1e079af1-1818-42c7-a3ae-a18e69f43681"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 14 00:26:32.860121 kubelet[2729]: I0314 00:26:32.859772 2729 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1e079af1-1818-42c7-a3ae-a18e69f43681-xtables-lock\") on node \"localhost\" DevicePath \"\""
Mar 14 00:26:32.860121 kubelet[2729]: I0314 00:26:32.859873 2729 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1e079af1-1818-42c7-a3ae-a18e69f43681-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Mar 14 00:26:32.860121 kubelet[2729]: I0314 00:26:32.859968 2729 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1e079af1-1818-42c7-a3ae-a18e69f43681-hubble-tls\") on node \"localhost\" DevicePath \"\""
Mar 14 00:26:32.860121 kubelet[2729]: I0314 00:26:32.859985 2729 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vf9mn\" (UniqueName: \"kubernetes.io/projected/1e079af1-1818-42c7-a3ae-a18e69f43681-kube-api-access-vf9mn\") on node \"localhost\" DevicePath \"\""
Mar 14 00:26:32.860121 kubelet[2729]: I0314 00:26:32.860005 2729 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1e079af1-1818-42c7-a3ae-a18e69f43681-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Mar 14 00:26:32.860121 kubelet[2729]: I0314 00:26:32.860022 2729 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1e079af1-1818-42c7-a3ae-a18e69f43681-bpf-maps\") on node \"localhost\" DevicePath \"\""
Mar 14 00:26:32.860121 kubelet[2729]: I0314 00:26:32.860038 2729 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1e079af1-1818-42c7-a3ae-a18e69f43681-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Mar 14 00:26:32.860121 kubelet[2729]: I0314 00:26:32.860052 2729 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1e079af1-1818-42c7-a3ae-a18e69f43681-cni-path\") on node \"localhost\" DevicePath \"\""
Mar 14 00:26:32.860703 kubelet[2729]: I0314 00:26:32.860068 2729 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/88d7e42a-234e-4a2c-9bd2-7502a7baa60e-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Mar 14 00:26:32.860703 kubelet[2729]: I0314 00:26:32.860083 2729 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-m4g6g\" (UniqueName: \"kubernetes.io/projected/88d7e42a-234e-4a2c-9bd2-7502a7baa60e-kube-api-access-m4g6g\") on node \"localhost\" DevicePath \"\""
Mar 14 00:26:32.860703 kubelet[2729]: I0314 00:26:32.860096 2729 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1e079af1-1818-42c7-a3ae-a18e69f43681-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Mar 14 00:26:32.974859 kubelet[2729]: I0314 00:26:32.974749 2729 scope.go:117] "RemoveContainer" containerID="6b48b67b3502a0ecd1a2cf404538c7cab1b87ddb19164da5cdcfe8a12ac9fe11"
Mar 14 00:26:32.977303 containerd[1572]: time="2026-03-14T00:26:32.977205082Z" level=info msg="RemoveContainer for \"6b48b67b3502a0ecd1a2cf404538c7cab1b87ddb19164da5cdcfe8a12ac9fe11\""
Mar 14 00:26:33.008436 containerd[1572]: time="2026-03-14T00:26:33.007554745Z" level=info msg="RemoveContainer for \"6b48b67b3502a0ecd1a2cf404538c7cab1b87ddb19164da5cdcfe8a12ac9fe11\" returns successfully"
Mar 14 00:26:33.008993 kubelet[2729]: I0314 00:26:33.008191 2729 scope.go:117] "RemoveContainer" containerID="6b48b67b3502a0ecd1a2cf404538c7cab1b87ddb19164da5cdcfe8a12ac9fe11"
Mar 14 00:26:33.009356 containerd[1572]: time="2026-03-14T00:26:33.008658469Z" level=error msg="ContainerStatus for \"6b48b67b3502a0ecd1a2cf404538c7cab1b87ddb19164da5cdcfe8a12ac9fe11\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6b48b67b3502a0ecd1a2cf404538c7cab1b87ddb19164da5cdcfe8a12ac9fe11\": not found"
Mar 14 00:26:33.024284 kubelet[2729]: E0314 00:26:33.024229 2729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6b48b67b3502a0ecd1a2cf404538c7cab1b87ddb19164da5cdcfe8a12ac9fe11\": not found" containerID="6b48b67b3502a0ecd1a2cf404538c7cab1b87ddb19164da5cdcfe8a12ac9fe11"
Mar 14 00:26:33.024480 kubelet[2729]: I0314 00:26:33.024289 2729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6b48b67b3502a0ecd1a2cf404538c7cab1b87ddb19164da5cdcfe8a12ac9fe11"} err="failed to get container status \"6b48b67b3502a0ecd1a2cf404538c7cab1b87ddb19164da5cdcfe8a12ac9fe11\": rpc error: code = NotFound desc = an error occurred when try to find container \"6b48b67b3502a0ecd1a2cf404538c7cab1b87ddb19164da5cdcfe8a12ac9fe11\": not found"
Mar 14 00:26:33.024480 kubelet[2729]: I0314 00:26:33.024339 2729 scope.go:117] "RemoveContainer" containerID="7c63f985bdf7b313ab193f956c5ae1a9dd9114e280cd6788f2ac799f94df2d91"
Mar 14 00:26:33.027213 containerd[1572]: time="2026-03-14T00:26:33.027015783Z" level=info msg="RemoveContainer for \"7c63f985bdf7b313ab193f956c5ae1a9dd9114e280cd6788f2ac799f94df2d91\""
Mar 14 00:26:33.035951 containerd[1572]: time="2026-03-14T00:26:33.035750590Z" level=info msg="RemoveContainer for \"7c63f985bdf7b313ab193f956c5ae1a9dd9114e280cd6788f2ac799f94df2d91\" returns successfully"
Mar 14 00:26:33.036388 kubelet[2729]: I0314 00:26:33.036344 2729 scope.go:117] "RemoveContainer" containerID="b7c03a7c8254d60d7596b7139f2e32b5e9af3c3122e8c8a23cbc44d591a9ad6a"
Mar 14 00:26:33.042350 containerd[1572]: time="2026-03-14T00:26:33.042196760Z" level=info msg="RemoveContainer for \"b7c03a7c8254d60d7596b7139f2e32b5e9af3c3122e8c8a23cbc44d591a9ad6a\""
Mar 14 00:26:33.051702 containerd[1572]: time="2026-03-14T00:26:33.051581414Z" level=info msg="RemoveContainer for \"b7c03a7c8254d60d7596b7139f2e32b5e9af3c3122e8c8a23cbc44d591a9ad6a\" returns successfully"
Mar 14 00:26:33.052588 kubelet[2729]: I0314 00:26:33.052147 2729 scope.go:117] "RemoveContainer" containerID="98a09621a3f348a271c9b08ebbb836446bdee2563939cbd05df9077a778fee8d"
Mar 14 00:26:33.054821 containerd[1572]: time="2026-03-14T00:26:33.054742058Z" level=info msg="RemoveContainer for \"98a09621a3f348a271c9b08ebbb836446bdee2563939cbd05df9077a778fee8d\""
Mar 14 00:26:33.061726 containerd[1572]: time="2026-03-14T00:26:33.061603806Z" level=info msg="RemoveContainer for \"98a09621a3f348a271c9b08ebbb836446bdee2563939cbd05df9077a778fee8d\" returns successfully"
Mar 14 00:26:33.062145 kubelet[2729]: I0314 00:26:33.062028 2729 scope.go:117] "RemoveContainer" containerID="5b8a9a5f7be7c2fd322bf016c092b769a87ada914fcafafdf53c5e76295e580d"
Mar 14 00:26:33.064379 containerd[1572]: time="2026-03-14T00:26:33.064308571Z" level=info msg="RemoveContainer for \"5b8a9a5f7be7c2fd322bf016c092b769a87ada914fcafafdf53c5e76295e580d\""
Mar 14 00:26:33.075731 containerd[1572]: time="2026-03-14T00:26:33.075608491Z" level=info msg="RemoveContainer for \"5b8a9a5f7be7c2fd322bf016c092b769a87ada914fcafafdf53c5e76295e580d\" returns successfully"
Mar 14 00:26:33.076451 kubelet[2729]: I0314 00:26:33.076335 2729 scope.go:117] "RemoveContainer" containerID="954e6daeefef115116689516b9fde4babfe95ede4700dd872d39831feee7bdc6"
Mar 14 00:26:33.078336
containerd[1572]: time="2026-03-14T00:26:33.078288590Z" level=info msg="RemoveContainer for \"954e6daeefef115116689516b9fde4babfe95ede4700dd872d39831feee7bdc6\"" Mar 14 00:26:33.086123 containerd[1572]: time="2026-03-14T00:26:33.085754678Z" level=info msg="RemoveContainer for \"954e6daeefef115116689516b9fde4babfe95ede4700dd872d39831feee7bdc6\" returns successfully" Mar 14 00:26:33.089066 kubelet[2729]: I0314 00:26:33.088323 2729 scope.go:117] "RemoveContainer" containerID="7c63f985bdf7b313ab193f956c5ae1a9dd9114e280cd6788f2ac799f94df2d91" Mar 14 00:26:33.090406 containerd[1572]: time="2026-03-14T00:26:33.089213955Z" level=error msg="ContainerStatus for \"7c63f985bdf7b313ab193f956c5ae1a9dd9114e280cd6788f2ac799f94df2d91\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7c63f985bdf7b313ab193f956c5ae1a9dd9114e280cd6788f2ac799f94df2d91\": not found" Mar 14 00:26:33.091449 kubelet[2729]: E0314 00:26:33.090688 2729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7c63f985bdf7b313ab193f956c5ae1a9dd9114e280cd6788f2ac799f94df2d91\": not found" containerID="7c63f985bdf7b313ab193f956c5ae1a9dd9114e280cd6788f2ac799f94df2d91" Mar 14 00:26:33.091449 kubelet[2729]: I0314 00:26:33.090733 2729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7c63f985bdf7b313ab193f956c5ae1a9dd9114e280cd6788f2ac799f94df2d91"} err="failed to get container status \"7c63f985bdf7b313ab193f956c5ae1a9dd9114e280cd6788f2ac799f94df2d91\": rpc error: code = NotFound desc = an error occurred when try to find container \"7c63f985bdf7b313ab193f956c5ae1a9dd9114e280cd6788f2ac799f94df2d91\": not found" Mar 14 00:26:33.091449 kubelet[2729]: I0314 00:26:33.090757 2729 scope.go:117] "RemoveContainer" containerID="b7c03a7c8254d60d7596b7139f2e32b5e9af3c3122e8c8a23cbc44d591a9ad6a" Mar 14 00:26:33.092289 containerd[1572]: 
time="2026-03-14T00:26:33.092194201Z" level=error msg="ContainerStatus for \"b7c03a7c8254d60d7596b7139f2e32b5e9af3c3122e8c8a23cbc44d591a9ad6a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b7c03a7c8254d60d7596b7139f2e32b5e9af3c3122e8c8a23cbc44d591a9ad6a\": not found" Mar 14 00:26:33.093011 kubelet[2729]: E0314 00:26:33.092947 2729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b7c03a7c8254d60d7596b7139f2e32b5e9af3c3122e8c8a23cbc44d591a9ad6a\": not found" containerID="b7c03a7c8254d60d7596b7139f2e32b5e9af3c3122e8c8a23cbc44d591a9ad6a" Mar 14 00:26:33.093142 kubelet[2729]: I0314 00:26:33.093025 2729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b7c03a7c8254d60d7596b7139f2e32b5e9af3c3122e8c8a23cbc44d591a9ad6a"} err="failed to get container status \"b7c03a7c8254d60d7596b7139f2e32b5e9af3c3122e8c8a23cbc44d591a9ad6a\": rpc error: code = NotFound desc = an error occurred when try to find container \"b7c03a7c8254d60d7596b7139f2e32b5e9af3c3122e8c8a23cbc44d591a9ad6a\": not found" Mar 14 00:26:33.093142 kubelet[2729]: I0314 00:26:33.093065 2729 scope.go:117] "RemoveContainer" containerID="98a09621a3f348a271c9b08ebbb836446bdee2563939cbd05df9077a778fee8d" Mar 14 00:26:33.093551 containerd[1572]: time="2026-03-14T00:26:33.093493160Z" level=error msg="ContainerStatus for \"98a09621a3f348a271c9b08ebbb836446bdee2563939cbd05df9077a778fee8d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"98a09621a3f348a271c9b08ebbb836446bdee2563939cbd05df9077a778fee8d\": not found" Mar 14 00:26:33.094529 kubelet[2729]: E0314 00:26:33.093848 2729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"98a09621a3f348a271c9b08ebbb836446bdee2563939cbd05df9077a778fee8d\": not 
found" containerID="98a09621a3f348a271c9b08ebbb836446bdee2563939cbd05df9077a778fee8d" Mar 14 00:26:33.094529 kubelet[2729]: I0314 00:26:33.093989 2729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"98a09621a3f348a271c9b08ebbb836446bdee2563939cbd05df9077a778fee8d"} err="failed to get container status \"98a09621a3f348a271c9b08ebbb836446bdee2563939cbd05df9077a778fee8d\": rpc error: code = NotFound desc = an error occurred when try to find container \"98a09621a3f348a271c9b08ebbb836446bdee2563939cbd05df9077a778fee8d\": not found" Mar 14 00:26:33.094529 kubelet[2729]: I0314 00:26:33.094023 2729 scope.go:117] "RemoveContainer" containerID="5b8a9a5f7be7c2fd322bf016c092b769a87ada914fcafafdf53c5e76295e580d" Mar 14 00:26:33.094697 containerd[1572]: time="2026-03-14T00:26:33.094354975Z" level=error msg="ContainerStatus for \"5b8a9a5f7be7c2fd322bf016c092b769a87ada914fcafafdf53c5e76295e580d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5b8a9a5f7be7c2fd322bf016c092b769a87ada914fcafafdf53c5e76295e580d\": not found" Mar 14 00:26:33.094748 kubelet[2729]: E0314 00:26:33.094726 2729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5b8a9a5f7be7c2fd322bf016c092b769a87ada914fcafafdf53c5e76295e580d\": not found" containerID="5b8a9a5f7be7c2fd322bf016c092b769a87ada914fcafafdf53c5e76295e580d" Mar 14 00:26:33.094785 kubelet[2729]: I0314 00:26:33.094756 2729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5b8a9a5f7be7c2fd322bf016c092b769a87ada914fcafafdf53c5e76295e580d"} err="failed to get container status \"5b8a9a5f7be7c2fd322bf016c092b769a87ada914fcafafdf53c5e76295e580d\": rpc error: code = NotFound desc = an error occurred when try to find container \"5b8a9a5f7be7c2fd322bf016c092b769a87ada914fcafafdf53c5e76295e580d\": not found" Mar 14 
00:26:33.094785 kubelet[2729]: I0314 00:26:33.094782 2729 scope.go:117] "RemoveContainer" containerID="954e6daeefef115116689516b9fde4babfe95ede4700dd872d39831feee7bdc6" Mar 14 00:26:33.095267 containerd[1572]: time="2026-03-14T00:26:33.095100693Z" level=error msg="ContainerStatus for \"954e6daeefef115116689516b9fde4babfe95ede4700dd872d39831feee7bdc6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"954e6daeefef115116689516b9fde4babfe95ede4700dd872d39831feee7bdc6\": not found" Mar 14 00:26:33.095535 kubelet[2729]: E0314 00:26:33.095493 2729 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"954e6daeefef115116689516b9fde4babfe95ede4700dd872d39831feee7bdc6\": not found" containerID="954e6daeefef115116689516b9fde4babfe95ede4700dd872d39831feee7bdc6" Mar 14 00:26:33.095600 kubelet[2729]: I0314 00:26:33.095543 2729 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"954e6daeefef115116689516b9fde4babfe95ede4700dd872d39831feee7bdc6"} err="failed to get container status \"954e6daeefef115116689516b9fde4babfe95ede4700dd872d39831feee7bdc6\": rpc error: code = NotFound desc = an error occurred when try to find container \"954e6daeefef115116689516b9fde4babfe95ede4700dd872d39831feee7bdc6\": not found" Mar 14 00:26:33.343325 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-850cbe24bfd5a3502b957ef8d909dd2a23436d84573f43bd6cb196df7621fab4-rootfs.mount: Deactivated successfully. Mar 14 00:26:33.343670 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fb48592e017fe914e8f6e4087fb67b2ee055576739be4153b26ebba1b04a8dec-rootfs.mount: Deactivated successfully. Mar 14 00:26:33.343950 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fb48592e017fe914e8f6e4087fb67b2ee055576739be4153b26ebba1b04a8dec-shm.mount: Deactivated successfully. 
Mar 14 00:26:33.344201 systemd[1]: var-lib-kubelet-pods-88d7e42a\x2d234e\x2d4a2c\x2d9bd2\x2d7502a7baa60e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dm4g6g.mount: Deactivated successfully. Mar 14 00:26:33.344421 systemd[1]: var-lib-kubelet-pods-1e079af1\x2d1818\x2d42c7\x2da3ae\x2da18e69f43681-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvf9mn.mount: Deactivated successfully. Mar 14 00:26:33.344769 systemd[1]: var-lib-kubelet-pods-1e079af1\x2d1818\x2d42c7\x2da3ae\x2da18e69f43681-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Mar 14 00:26:33.345087 systemd[1]: var-lib-kubelet-pods-1e079af1\x2d1818\x2d42c7\x2da3ae\x2da18e69f43681-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Mar 14 00:26:33.854841 kubelet[2729]: I0314 00:26:33.852942 2729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e079af1-1818-42c7-a3ae-a18e69f43681" path="/var/lib/kubelet/pods/1e079af1-1818-42c7-a3ae-a18e69f43681/volumes" Mar 14 00:26:33.854841 kubelet[2729]: I0314 00:26:33.854276 2729 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88d7e42a-234e-4a2c-9bd2-7502a7baa60e" path="/var/lib/kubelet/pods/88d7e42a-234e-4a2c-9bd2-7502a7baa60e/volumes" Mar 14 00:26:34.140343 sshd[4546]: pam_unix(sshd:session): session closed for user core Mar 14 00:26:34.145772 systemd[1]: sshd@31-10.0.0.71:22-10.0.0.1:53110.service: Deactivated successfully. Mar 14 00:26:34.154510 systemd-logind[1547]: Session 32 logged out. Waiting for processes to exit. Mar 14 00:26:34.162783 systemd[1]: Started sshd@32-10.0.0.71:22-10.0.0.1:53124.service - OpenSSH per-connection server daemon (10.0.0.1:53124). Mar 14 00:26:34.163445 systemd[1]: session-32.scope: Deactivated successfully. Mar 14 00:26:34.168256 systemd-logind[1547]: Removed session 32. 
Mar 14 00:26:34.227526 sshd[4718]: Accepted publickey for core from 10.0.0.1 port 53124 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4 Mar 14 00:26:34.230840 sshd[4718]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:26:34.245288 systemd-logind[1547]: New session 33 of user core. Mar 14 00:26:34.255588 systemd[1]: Started session-33.scope - Session 33 of User core. Mar 14 00:26:35.776069 sshd[4718]: pam_unix(sshd:session): session closed for user core Mar 14 00:26:35.803931 systemd[1]: Started sshd@33-10.0.0.71:22-10.0.0.1:53130.service - OpenSSH per-connection server daemon (10.0.0.1:53130). Mar 14 00:26:35.809484 systemd[1]: sshd@32-10.0.0.71:22-10.0.0.1:53124.service: Deactivated successfully. Mar 14 00:26:35.813508 systemd[1]: session-33.scope: Deactivated successfully. Mar 14 00:26:35.816714 systemd-logind[1547]: Session 33 logged out. Waiting for processes to exit. Mar 14 00:26:35.819643 systemd-logind[1547]: Removed session 33. Mar 14 00:26:35.857658 sshd[4730]: Accepted publickey for core from 10.0.0.1 port 53130 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4 Mar 14 00:26:35.860815 sshd[4730]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:26:35.875327 systemd-logind[1547]: New session 34 of user core. Mar 14 00:26:35.884663 systemd[1]: Started session-34.scope - Session 34 of User core. Mar 14 00:26:35.956540 sshd[4730]: pam_unix(sshd:session): session closed for user core Mar 14 00:26:35.977294 systemd[1]: Started sshd@34-10.0.0.71:22-10.0.0.1:53132.service - OpenSSH per-connection server daemon (10.0.0.1:53132). Mar 14 00:26:35.979936 systemd[1]: sshd@33-10.0.0.71:22-10.0.0.1:53130.service: Deactivated successfully. Mar 14 00:26:35.990753 systemd[1]: session-34.scope: Deactivated successfully. Mar 14 00:26:35.994043 systemd-logind[1547]: Session 34 logged out. Waiting for processes to exit. 
Mar 14 00:26:35.997309 systemd-logind[1547]: Removed session 34. Mar 14 00:26:36.036310 sshd[4738]: Accepted publickey for core from 10.0.0.1 port 53132 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4 Mar 14 00:26:36.046444 sshd[4738]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:26:36.064011 systemd-logind[1547]: New session 35 of user core. Mar 14 00:26:36.077295 systemd[1]: Started session-35.scope - Session 35 of User core. Mar 14 00:26:36.196436 kubelet[2729]: I0314 00:26:36.196248 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9ef6f4e1-ab34-4c98-ab3f-c48f83deada6-cilium-cgroup\") pod \"cilium-xw7j7\" (UID: \"9ef6f4e1-ab34-4c98-ab3f-c48f83deada6\") " pod="kube-system/cilium-xw7j7" Mar 14 00:26:36.197187 kubelet[2729]: I0314 00:26:36.196360 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9ef6f4e1-ab34-4c98-ab3f-c48f83deada6-cilium-run\") pod \"cilium-xw7j7\" (UID: \"9ef6f4e1-ab34-4c98-ab3f-c48f83deada6\") " pod="kube-system/cilium-xw7j7" Mar 14 00:26:36.197187 kubelet[2729]: I0314 00:26:36.196597 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/9ef6f4e1-ab34-4c98-ab3f-c48f83deada6-cilium-ipsec-secrets\") pod \"cilium-xw7j7\" (UID: \"9ef6f4e1-ab34-4c98-ab3f-c48f83deada6\") " pod="kube-system/cilium-xw7j7" Mar 14 00:26:36.197187 kubelet[2729]: I0314 00:26:36.196771 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9ef6f4e1-ab34-4c98-ab3f-c48f83deada6-host-proc-sys-kernel\") pod \"cilium-xw7j7\" (UID: \"9ef6f4e1-ab34-4c98-ab3f-c48f83deada6\") " pod="kube-system/cilium-xw7j7" 
Mar 14 00:26:36.197187 kubelet[2729]: I0314 00:26:36.196948 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9ef6f4e1-ab34-4c98-ab3f-c48f83deada6-bpf-maps\") pod \"cilium-xw7j7\" (UID: \"9ef6f4e1-ab34-4c98-ab3f-c48f83deada6\") " pod="kube-system/cilium-xw7j7" Mar 14 00:26:36.197187 kubelet[2729]: I0314 00:26:36.197074 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9ef6f4e1-ab34-4c98-ab3f-c48f83deada6-clustermesh-secrets\") pod \"cilium-xw7j7\" (UID: \"9ef6f4e1-ab34-4c98-ab3f-c48f83deada6\") " pod="kube-system/cilium-xw7j7" Mar 14 00:26:36.197516 kubelet[2729]: I0314 00:26:36.197221 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9ef6f4e1-ab34-4c98-ab3f-c48f83deada6-host-proc-sys-net\") pod \"cilium-xw7j7\" (UID: \"9ef6f4e1-ab34-4c98-ab3f-c48f83deada6\") " pod="kube-system/cilium-xw7j7" Mar 14 00:26:36.197516 kubelet[2729]: I0314 00:26:36.197375 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9ef6f4e1-ab34-4c98-ab3f-c48f83deada6-cilium-config-path\") pod \"cilium-xw7j7\" (UID: \"9ef6f4e1-ab34-4c98-ab3f-c48f83deada6\") " pod="kube-system/cilium-xw7j7" Mar 14 00:26:36.197516 kubelet[2729]: I0314 00:26:36.197504 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srvgw\" (UniqueName: \"kubernetes.io/projected/9ef6f4e1-ab34-4c98-ab3f-c48f83deada6-kube-api-access-srvgw\") pod \"cilium-xw7j7\" (UID: \"9ef6f4e1-ab34-4c98-ab3f-c48f83deada6\") " pod="kube-system/cilium-xw7j7" Mar 14 00:26:36.200927 kubelet[2729]: I0314 00:26:36.197632 2729 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9ef6f4e1-ab34-4c98-ab3f-c48f83deada6-lib-modules\") pod \"cilium-xw7j7\" (UID: \"9ef6f4e1-ab34-4c98-ab3f-c48f83deada6\") " pod="kube-system/cilium-xw7j7" Mar 14 00:26:36.200927 kubelet[2729]: I0314 00:26:36.197804 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9ef6f4e1-ab34-4c98-ab3f-c48f83deada6-cni-path\") pod \"cilium-xw7j7\" (UID: \"9ef6f4e1-ab34-4c98-ab3f-c48f83deada6\") " pod="kube-system/cilium-xw7j7" Mar 14 00:26:36.200927 kubelet[2729]: I0314 00:26:36.198008 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9ef6f4e1-ab34-4c98-ab3f-c48f83deada6-etc-cni-netd\") pod \"cilium-xw7j7\" (UID: \"9ef6f4e1-ab34-4c98-ab3f-c48f83deada6\") " pod="kube-system/cilium-xw7j7" Mar 14 00:26:36.200927 kubelet[2729]: I0314 00:26:36.200674 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9ef6f4e1-ab34-4c98-ab3f-c48f83deada6-xtables-lock\") pod \"cilium-xw7j7\" (UID: \"9ef6f4e1-ab34-4c98-ab3f-c48f83deada6\") " pod="kube-system/cilium-xw7j7" Mar 14 00:26:36.200927 kubelet[2729]: I0314 00:26:36.200767 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9ef6f4e1-ab34-4c98-ab3f-c48f83deada6-hubble-tls\") pod \"cilium-xw7j7\" (UID: \"9ef6f4e1-ab34-4c98-ab3f-c48f83deada6\") " pod="kube-system/cilium-xw7j7" Mar 14 00:26:36.200927 kubelet[2729]: I0314 00:26:36.200793 2729 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: 
\"kubernetes.io/host-path/9ef6f4e1-ab34-4c98-ab3f-c48f83deada6-hostproc\") pod \"cilium-xw7j7\" (UID: \"9ef6f4e1-ab34-4c98-ab3f-c48f83deada6\") " pod="kube-system/cilium-xw7j7" Mar 14 00:26:36.424936 kubelet[2729]: E0314 00:26:36.424814 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:26:36.429282 containerd[1572]: time="2026-03-14T00:26:36.425637487Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xw7j7,Uid:9ef6f4e1-ab34-4c98-ab3f-c48f83deada6,Namespace:kube-system,Attempt:0,}" Mar 14 00:26:36.518265 containerd[1572]: time="2026-03-14T00:26:36.515547234Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:26:36.518265 containerd[1572]: time="2026-03-14T00:26:36.515764348Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:26:36.518265 containerd[1572]: time="2026-03-14T00:26:36.515801067Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:26:36.518265 containerd[1572]: time="2026-03-14T00:26:36.516054688Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:26:36.638795 containerd[1572]: time="2026-03-14T00:26:36.637797951Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xw7j7,Uid:9ef6f4e1-ab34-4c98-ab3f-c48f83deada6,Namespace:kube-system,Attempt:0,} returns sandbox id \"20d93d4b8480f8fd9bfaa3779a92c0e01edffc13832c4513669c47ff05c4998a\"" Mar 14 00:26:36.642653 kubelet[2729]: E0314 00:26:36.641004 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:26:36.660546 containerd[1572]: time="2026-03-14T00:26:36.660424644Z" level=info msg="CreateContainer within sandbox \"20d93d4b8480f8fd9bfaa3779a92c0e01edffc13832c4513669c47ff05c4998a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 14 00:26:36.696821 containerd[1572]: time="2026-03-14T00:26:36.696533422Z" level=info msg="CreateContainer within sandbox \"20d93d4b8480f8fd9bfaa3779a92c0e01edffc13832c4513669c47ff05c4998a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"0056837d31543e8bba4edbda6c755f88ddea04a4adcbd996bcdd9371b70d1d94\"" Mar 14 00:26:36.701122 containerd[1572]: time="2026-03-14T00:26:36.698728329Z" level=info msg="StartContainer for \"0056837d31543e8bba4edbda6c755f88ddea04a4adcbd996bcdd9371b70d1d94\"" Mar 14 00:26:36.870551 containerd[1572]: time="2026-03-14T00:26:36.870373691Z" level=info msg="StartContainer for \"0056837d31543e8bba4edbda6c755f88ddea04a4adcbd996bcdd9371b70d1d94\" returns successfully" Mar 14 00:26:37.016647 kubelet[2729]: E0314 00:26:37.016469 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:26:37.020953 containerd[1572]: time="2026-03-14T00:26:37.020710250Z" level=info msg="shim disconnected" 
id=0056837d31543e8bba4edbda6c755f88ddea04a4adcbd996bcdd9371b70d1d94 namespace=k8s.io Mar 14 00:26:37.020953 containerd[1572]: time="2026-03-14T00:26:37.020771694Z" level=warning msg="cleaning up after shim disconnected" id=0056837d31543e8bba4edbda6c755f88ddea04a4adcbd996bcdd9371b70d1d94 namespace=k8s.io Mar 14 00:26:37.020953 containerd[1572]: time="2026-03-14T00:26:37.020786842Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 14 00:26:37.279341 kubelet[2729]: E0314 00:26:37.278763 2729 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 14 00:26:38.038621 kubelet[2729]: E0314 00:26:38.034579 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:26:38.081613 containerd[1572]: time="2026-03-14T00:26:38.081355121Z" level=info msg="CreateContainer within sandbox \"20d93d4b8480f8fd9bfaa3779a92c0e01edffc13832c4513669c47ff05c4998a\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 14 00:26:38.138820 containerd[1572]: time="2026-03-14T00:26:38.138558383Z" level=info msg="CreateContainer within sandbox \"20d93d4b8480f8fd9bfaa3779a92c0e01edffc13832c4513669c47ff05c4998a\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"c6c87ab9511aee0b21b967f0f581c92febbded17dcecea3151b728cd17abe678\"" Mar 14 00:26:38.152037 containerd[1572]: time="2026-03-14T00:26:38.151973562Z" level=info msg="StartContainer for \"c6c87ab9511aee0b21b967f0f581c92febbded17dcecea3151b728cd17abe678\"" Mar 14 00:26:38.276789 containerd[1572]: time="2026-03-14T00:26:38.276727925Z" level=info msg="StartContainer for \"c6c87ab9511aee0b21b967f0f581c92febbded17dcecea3151b728cd17abe678\" returns successfully" Mar 14 00:26:38.335298 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-c6c87ab9511aee0b21b967f0f581c92febbded17dcecea3151b728cd17abe678-rootfs.mount: Deactivated successfully. Mar 14 00:26:38.359635 containerd[1572]: time="2026-03-14T00:26:38.359548054Z" level=info msg="shim disconnected" id=c6c87ab9511aee0b21b967f0f581c92febbded17dcecea3151b728cd17abe678 namespace=k8s.io Mar 14 00:26:38.359635 containerd[1572]: time="2026-03-14T00:26:38.359630087Z" level=warning msg="cleaning up after shim disconnected" id=c6c87ab9511aee0b21b967f0f581c92febbded17dcecea3151b728cd17abe678 namespace=k8s.io Mar 14 00:26:38.359635 containerd[1572]: time="2026-03-14T00:26:38.359651607Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 14 00:26:38.404050 containerd[1572]: time="2026-03-14T00:26:38.403853586Z" level=warning msg="cleanup warnings time=\"2026-03-14T00:26:38Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Mar 14 00:26:39.053194 kubelet[2729]: E0314 00:26:39.053073 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:26:39.067588 containerd[1572]: time="2026-03-14T00:26:39.067391229Z" level=info msg="CreateContainer within sandbox \"20d93d4b8480f8fd9bfaa3779a92c0e01edffc13832c4513669c47ff05c4998a\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 14 00:26:39.109015 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1016633280.mount: Deactivated successfully. 
Mar 14 00:26:39.111417 containerd[1572]: time="2026-03-14T00:26:39.111043065Z" level=info msg="CreateContainer within sandbox \"20d93d4b8480f8fd9bfaa3779a92c0e01edffc13832c4513669c47ff05c4998a\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b6a49139cfb67e42538fafa6b5b896f54b88a0bd81c99013ff65ae49024bc940\"" Mar 14 00:26:39.114956 containerd[1572]: time="2026-03-14T00:26:39.113758031Z" level=info msg="StartContainer for \"b6a49139cfb67e42538fafa6b5b896f54b88a0bd81c99013ff65ae49024bc940\"" Mar 14 00:26:39.349293 containerd[1572]: time="2026-03-14T00:26:39.346723618Z" level=info msg="StartContainer for \"b6a49139cfb67e42538fafa6b5b896f54b88a0bd81c99013ff65ae49024bc940\" returns successfully" Mar 14 00:26:39.436135 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b6a49139cfb67e42538fafa6b5b896f54b88a0bd81c99013ff65ae49024bc940-rootfs.mount: Deactivated successfully. Mar 14 00:26:39.473348 containerd[1572]: time="2026-03-14T00:26:39.472988455Z" level=info msg="shim disconnected" id=b6a49139cfb67e42538fafa6b5b896f54b88a0bd81c99013ff65ae49024bc940 namespace=k8s.io Mar 14 00:26:39.473348 containerd[1572]: time="2026-03-14T00:26:39.473054297Z" level=warning msg="cleaning up after shim disconnected" id=b6a49139cfb67e42538fafa6b5b896f54b88a0bd81c99013ff65ae49024bc940 namespace=k8s.io Mar 14 00:26:39.473348 containerd[1572]: time="2026-03-14T00:26:39.473065818Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 14 00:26:39.563256 containerd[1572]: time="2026-03-14T00:26:39.563183847Z" level=warning msg="cleanup warnings time=\"2026-03-14T00:26:39Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Mar 14 00:26:40.063696 kubelet[2729]: E0314 00:26:40.063177 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:26:40.090087 containerd[1572]: time="2026-03-14T00:26:40.089417463Z" level=info msg="CreateContainer within sandbox \"20d93d4b8480f8fd9bfaa3779a92c0e01edffc13832c4513669c47ff05c4998a\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Mar 14 00:26:40.148504 containerd[1572]: time="2026-03-14T00:26:40.148408071Z" level=info msg="CreateContainer within sandbox \"20d93d4b8480f8fd9bfaa3779a92c0e01edffc13832c4513669c47ff05c4998a\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"c9d8478654c7d8b035c1d074d41177e59d10ef31f5354502a8bd0c6abee335ec\""
Mar 14 00:26:40.151107 containerd[1572]: time="2026-03-14T00:26:40.150993306Z" level=info msg="StartContainer for \"c9d8478654c7d8b035c1d074d41177e59d10ef31f5354502a8bd0c6abee335ec\""
Mar 14 00:26:40.288213 containerd[1572]: time="2026-03-14T00:26:40.287664498Z" level=info msg="StartContainer for \"c9d8478654c7d8b035c1d074d41177e59d10ef31f5354502a8bd0c6abee335ec\" returns successfully"
Mar 14 00:26:40.342194 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c9d8478654c7d8b035c1d074d41177e59d10ef31f5354502a8bd0c6abee335ec-rootfs.mount: Deactivated successfully.
Mar 14 00:26:40.364432 containerd[1572]: time="2026-03-14T00:26:40.364279038Z" level=info msg="shim disconnected" id=c9d8478654c7d8b035c1d074d41177e59d10ef31f5354502a8bd0c6abee335ec namespace=k8s.io
Mar 14 00:26:40.364432 containerd[1572]: time="2026-03-14T00:26:40.364380777Z" level=warning msg="cleaning up after shim disconnected" id=c9d8478654c7d8b035c1d074d41177e59d10ef31f5354502a8bd0c6abee335ec namespace=k8s.io
Mar 14 00:26:40.364432 containerd[1572]: time="2026-03-14T00:26:40.364402327Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 14 00:26:40.398136 containerd[1572]: time="2026-03-14T00:26:40.398024786Z" level=warning msg="cleanup warnings time=\"2026-03-14T00:26:40Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Mar 14 00:26:41.079011 kubelet[2729]: E0314 00:26:41.078971 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:26:41.107043 containerd[1572]: time="2026-03-14T00:26:41.106931128Z" level=info msg="CreateContainer within sandbox \"20d93d4b8480f8fd9bfaa3779a92c0e01edffc13832c4513669c47ff05c4998a\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 14 00:26:41.161658 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1009273870.mount: Deactivated successfully.
Mar 14 00:26:41.203499 containerd[1572]: time="2026-03-14T00:26:41.203375554Z" level=info msg="CreateContainer within sandbox \"20d93d4b8480f8fd9bfaa3779a92c0e01edffc13832c4513669c47ff05c4998a\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"1a1ba6f150c8611e1510839c5ab3a591b3562242e7daa11860ab8e8b5dc8322d\""
Mar 14 00:26:41.205639 containerd[1572]: time="2026-03-14T00:26:41.205462171Z" level=info msg="StartContainer for \"1a1ba6f150c8611e1510839c5ab3a591b3562242e7daa11860ab8e8b5dc8322d\""
Mar 14 00:26:41.371018 containerd[1572]: time="2026-03-14T00:26:41.368600671Z" level=info msg="StartContainer for \"1a1ba6f150c8611e1510839c5ab3a591b3562242e7daa11860ab8e8b5dc8322d\" returns successfully"
Mar 14 00:26:42.093922 kubelet[2729]: E0314 00:26:42.092735 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:26:42.134845 kubelet[2729]: I0314 00:26:42.134583 2729 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-xw7j7" podStartSLOduration=7.1345631449999996 podStartE2EDuration="7.134563145s" podCreationTimestamp="2026-03-14 00:26:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:26:42.132224039 +0000 UTC m=+171.025592808" watchObservedRunningTime="2026-03-14 00:26:42.134563145 +0000 UTC m=+171.027931903"
Mar 14 00:26:42.372052 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Mar 14 00:26:42.854786 kubelet[2729]: E0314 00:26:42.852194 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:26:43.097057 kubelet[2729]: E0314 00:26:43.096821 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:26:47.383099 systemd[1]: run-containerd-runc-k8s.io-1a1ba6f150c8611e1510839c5ab3a591b3562242e7daa11860ab8e8b5dc8322d-runc.pPOUem.mount: Deactivated successfully.
Mar 14 00:26:48.223508 systemd-networkd[1248]: lxc_health: Link UP
Mar 14 00:26:48.283825 systemd-networkd[1248]: lxc_health: Gained carrier
Mar 14 00:26:48.434667 kubelet[2729]: E0314 00:26:48.434467 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:26:49.150268 kubelet[2729]: E0314 00:26:49.145066 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:26:49.927671 systemd-networkd[1248]: lxc_health: Gained IPv6LL
Mar 14 00:26:50.073443 kubelet[2729]: E0314 00:26:50.071113 2729 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 127.0.0.1:34304->127.0.0.1:37905: read tcp 127.0.0.1:34304->127.0.0.1:37905: read: connection reset by peer
Mar 14 00:26:50.074632 kubelet[2729]: E0314 00:26:50.073282 2729 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:34304->127.0.0.1:37905: write tcp 127.0.0.1:34304->127.0.0.1:37905: write: broken pipe
Mar 14 00:26:50.145296 kubelet[2729]: E0314 00:26:50.145225 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:26:51.840568 containerd[1572]: time="2026-03-14T00:26:51.840455631Z" level=info msg="StopPodSandbox for \"fb48592e017fe914e8f6e4087fb67b2ee055576739be4153b26ebba1b04a8dec\""
Mar 14 00:26:51.841702 containerd[1572]: time="2026-03-14T00:26:51.841674011Z" level=info msg="TearDown network for sandbox \"fb48592e017fe914e8f6e4087fb67b2ee055576739be4153b26ebba1b04a8dec\" successfully"
Mar 14 00:26:51.841849 containerd[1572]: time="2026-03-14T00:26:51.841768678Z" level=info msg="StopPodSandbox for \"fb48592e017fe914e8f6e4087fb67b2ee055576739be4153b26ebba1b04a8dec\" returns successfully"
Mar 14 00:26:51.847394 containerd[1572]: time="2026-03-14T00:26:51.842646674Z" level=info msg="RemovePodSandbox for \"fb48592e017fe914e8f6e4087fb67b2ee055576739be4153b26ebba1b04a8dec\""
Mar 14 00:26:51.847394 containerd[1572]: time="2026-03-14T00:26:51.842706114Z" level=info msg="Forcibly stopping sandbox \"fb48592e017fe914e8f6e4087fb67b2ee055576739be4153b26ebba1b04a8dec\""
Mar 14 00:26:51.847394 containerd[1572]: time="2026-03-14T00:26:51.842791413Z" level=info msg="TearDown network for sandbox \"fb48592e017fe914e8f6e4087fb67b2ee055576739be4153b26ebba1b04a8dec\" successfully"
Mar 14 00:26:51.918485 containerd[1572]: time="2026-03-14T00:26:51.913610289Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fb48592e017fe914e8f6e4087fb67b2ee055576739be4153b26ebba1b04a8dec\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 14 00:26:51.918485 containerd[1572]: time="2026-03-14T00:26:51.913704063Z" level=info msg="RemovePodSandbox \"fb48592e017fe914e8f6e4087fb67b2ee055576739be4153b26ebba1b04a8dec\" returns successfully"
Mar 14 00:26:51.918485 containerd[1572]: time="2026-03-14T00:26:51.914492853Z" level=info msg="StopPodSandbox for \"850cbe24bfd5a3502b957ef8d909dd2a23436d84573f43bd6cb196df7621fab4\""
Mar 14 00:26:51.918485 containerd[1572]: time="2026-03-14T00:26:51.914598650Z" level=info msg="TearDown network for sandbox \"850cbe24bfd5a3502b957ef8d909dd2a23436d84573f43bd6cb196df7621fab4\" successfully"
Mar 14 00:26:51.918485 containerd[1572]: time="2026-03-14T00:26:51.914615352Z" level=info msg="StopPodSandbox for \"850cbe24bfd5a3502b957ef8d909dd2a23436d84573f43bd6cb196df7621fab4\" returns successfully"
Mar 14 00:26:51.918485 containerd[1572]: time="2026-03-14T00:26:51.915277795Z" level=info msg="RemovePodSandbox for \"850cbe24bfd5a3502b957ef8d909dd2a23436d84573f43bd6cb196df7621fab4\""
Mar 14 00:26:51.918485 containerd[1572]: time="2026-03-14T00:26:51.915307120Z" level=info msg="Forcibly stopping sandbox \"850cbe24bfd5a3502b957ef8d909dd2a23436d84573f43bd6cb196df7621fab4\""
Mar 14 00:26:51.918485 containerd[1572]: time="2026-03-14T00:26:51.915380918Z" level=info msg="TearDown network for sandbox \"850cbe24bfd5a3502b957ef8d909dd2a23436d84573f43bd6cb196df7621fab4\" successfully"
Mar 14 00:26:51.939176 containerd[1572]: time="2026-03-14T00:26:51.937080759Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"850cbe24bfd5a3502b957ef8d909dd2a23436d84573f43bd6cb196df7621fab4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 14 00:26:51.945640 containerd[1572]: time="2026-03-14T00:26:51.940821898Z" level=info msg="RemovePodSandbox \"850cbe24bfd5a3502b957ef8d909dd2a23436d84573f43bd6cb196df7621fab4\" returns successfully"
Mar 14 00:26:52.857303 kubelet[2729]: E0314 00:26:52.855837 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:26:54.839298 systemd[1]: run-containerd-runc-k8s.io-1a1ba6f150c8611e1510839c5ab3a591b3562242e7daa11860ab8e8b5dc8322d-runc.YvYKKL.mount: Deactivated successfully.
Mar 14 00:26:54.851482 kubelet[2729]: E0314 00:26:54.850129 2729 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:26:54.954394 sshd[4738]: pam_unix(sshd:session): session closed for user core
Mar 14 00:26:54.964542 systemd[1]: sshd@34-10.0.0.71:22-10.0.0.1:53132.service: Deactivated successfully.
Mar 14 00:26:54.973982 systemd[1]: session-35.scope: Deactivated successfully.
Mar 14 00:26:54.974419 systemd-logind[1547]: Session 35 logged out. Waiting for processes to exit.
Mar 14 00:26:54.977520 systemd-logind[1547]: Removed session 35.