Mar 10 01:00:04.223575 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Mar 9 22:55:40 -00 2026
Mar 10 01:00:04.223605 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=2de2345ba8612ade61882513e7d9ebf4aad52996b6d7f4c567d9970e886b17cc
Mar 10 01:00:04.223622 kernel: BIOS-provided physical RAM map:
Mar 10 01:00:04.223630 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Mar 10 01:00:04.223638 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Mar 10 01:00:04.223646 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Mar 10 01:00:04.223657 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Mar 10 01:00:04.223666 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Mar 10 01:00:04.223674 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Mar 10 01:00:04.223687 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Mar 10 01:00:04.223696 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Mar 10 01:00:04.223704 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Mar 10 01:00:04.223836 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Mar 10 01:00:04.223847 kernel: NX (Execute Disable) protection: active
Mar 10 01:00:04.223857 kernel: APIC: Static calls initialized
Mar 10 01:00:04.223982 kernel: SMBIOS 2.8 present.
Mar 10 01:00:04.223994 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Mar 10 01:00:04.224003 kernel: Hypervisor detected: KVM
Mar 10 01:00:04.224012 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 10 01:00:04.224021 kernel: kvm-clock: using sched offset of 17184412044 cycles
Mar 10 01:00:04.224031 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 10 01:00:04.224041 kernel: tsc: Detected 2445.424 MHz processor
Mar 10 01:00:04.224051 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 10 01:00:04.224060 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 10 01:00:04.224421 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Mar 10 01:00:04.224431 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Mar 10 01:00:04.224440 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 10 01:00:04.224449 kernel: Using GB pages for direct mapping
Mar 10 01:00:04.224458 kernel: ACPI: Early table checksum verification disabled
Mar 10 01:00:04.224467 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Mar 10 01:00:04.224476 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 10 01:00:04.224485 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 10 01:00:04.224494 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 10 01:00:04.224509 kernel: ACPI: FACS 0x000000009CFE0000 000040
Mar 10 01:00:04.224518 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 10 01:00:04.224526 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 10 01:00:04.224535 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 10 01:00:04.224544 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 10 01:00:04.224552 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Mar 10 01:00:04.224561 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Mar 10 01:00:04.224575 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Mar 10 01:00:04.224588 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Mar 10 01:00:04.224597 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Mar 10 01:00:04.224607 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Mar 10 01:00:04.224617 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Mar 10 01:00:04.224627 kernel: No NUMA configuration found
Mar 10 01:00:04.224636 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Mar 10 01:00:04.224649 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Mar 10 01:00:04.224658 kernel: Zone ranges:
Mar 10 01:00:04.224668 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 10 01:00:04.224677 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Mar 10 01:00:04.224686 kernel: Normal empty
Mar 10 01:00:04.224695 kernel: Movable zone start for each node
Mar 10 01:00:04.224704 kernel: Early memory node ranges
Mar 10 01:00:04.224713 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Mar 10 01:00:04.224723 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Mar 10 01:00:04.224737 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Mar 10 01:00:04.224746 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 10 01:00:04.224864 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Mar 10 01:00:04.224876 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Mar 10 01:00:04.224885 kernel: ACPI: PM-Timer IO Port: 0x608
Mar 10 01:00:04.224895 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 10 01:00:04.224904 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Mar 10 01:00:04.224913 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Mar 10 01:00:04.224922 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 10 01:00:04.224936 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 10 01:00:04.224946 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 10 01:00:04.224956 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 10 01:00:04.224965 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 10 01:00:04.224974 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Mar 10 01:00:04.224983 kernel: TSC deadline timer available
Mar 10 01:00:04.224992 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Mar 10 01:00:04.225001 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Mar 10 01:00:04.225010 kernel: kvm-guest: KVM setup pv remote TLB flush
Mar 10 01:00:04.225636 kernel: kvm-guest: setup PV sched yield
Mar 10 01:00:04.225650 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Mar 10 01:00:04.225660 kernel: Booting paravirtualized kernel on KVM
Mar 10 01:00:04.225669 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 10 01:00:04.225678 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Mar 10 01:00:04.225687 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288
Mar 10 01:00:04.225696 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152
Mar 10 01:00:04.225705 kernel: pcpu-alloc: [0] 0 1 2 3
Mar 10 01:00:04.225714 kernel: kvm-guest: PV spinlocks enabled
Mar 10 01:00:04.225729 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Mar 10 01:00:04.225740 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=2de2345ba8612ade61882513e7d9ebf4aad52996b6d7f4c567d9970e886b17cc
Mar 10 01:00:04.225750 kernel: random: crng init done
Mar 10 01:00:04.225760 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 10 01:00:04.225769 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 10 01:00:04.225778 kernel: Fallback order for Node 0: 0
Mar 10 01:00:04.225787 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Mar 10 01:00:04.225796 kernel: Policy zone: DMA32
Mar 10 01:00:04.225805 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 10 01:00:04.225819 kernel: Memory: 2434608K/2571752K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42896K init, 2300K bss, 136884K reserved, 0K cma-reserved)
Mar 10 01:00:04.225828 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Mar 10 01:00:04.225837 kernel: ftrace: allocating 37996 entries in 149 pages
Mar 10 01:00:04.225847 kernel: ftrace: allocated 149 pages with 4 groups
Mar 10 01:00:04.225856 kernel: Dynamic Preempt: voluntary
Mar 10 01:00:04.225867 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 10 01:00:04.225878 kernel: rcu: RCU event tracing is enabled.
Mar 10 01:00:04.225890 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Mar 10 01:00:04.225901 kernel: Trampoline variant of Tasks RCU enabled.
Mar 10 01:00:04.225918 kernel: Rude variant of Tasks RCU enabled.
Mar 10 01:00:04.225928 kernel: Tracing variant of Tasks RCU enabled.
Mar 10 01:00:04.225937 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 10 01:00:04.225946 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Mar 10 01:00:04.226928 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Mar 10 01:00:04.227049 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 10 01:00:04.227061 kernel: Console: colour VGA+ 80x25
Mar 10 01:00:04.228528 kernel: printk: console [ttyS0] enabled
Mar 10 01:00:04.228637 kernel: ACPI: Core revision 20230628
Mar 10 01:00:04.228695 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Mar 10 01:00:04.228706 kernel: APIC: Switch to symmetric I/O mode setup
Mar 10 01:00:04.228716 kernel: x2apic enabled
Mar 10 01:00:04.228726 kernel: APIC: Switched APIC routing to: physical x2apic
Mar 10 01:00:04.228736 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Mar 10 01:00:04.228746 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Mar 10 01:00:04.228756 kernel: kvm-guest: setup PV IPIs
Mar 10 01:00:04.228766 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Mar 10 01:00:04.228791 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Mar 10 01:00:04.228803 kernel: Calibrating delay loop (skipped) preset value.. 4890.84 BogoMIPS (lpj=2445424)
Mar 10 01:00:04.228812 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Mar 10 01:00:04.228822 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Mar 10 01:00:04.228835 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Mar 10 01:00:04.228845 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 10 01:00:04.228855 kernel: Spectre V2 : Mitigation: Retpolines
Mar 10 01:00:04.228866 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Mar 10 01:00:04.228880 kernel: Speculative Store Bypass: Vulnerable
Mar 10 01:00:04.228890 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Mar 10 01:00:04.229029 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Mar 10 01:00:04.229043 kernel: active return thunk: srso_alias_return_thunk
Mar 10 01:00:04.229054 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Mar 10 01:00:04.229064 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Mar 10 01:00:04.230755 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Mar 10 01:00:04.231853 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 10 01:00:04.232022 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 10 01:00:04.232280 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 10 01:00:04.232413 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 10 01:00:04.232427 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Mar 10 01:00:04.232437 kernel: Freeing SMP alternatives memory: 32K
Mar 10 01:00:04.232446 kernel: pid_max: default: 32768 minimum: 301
Mar 10 01:00:04.232456 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 10 01:00:04.232466 kernel: landlock: Up and running.
Mar 10 01:00:04.232476 kernel: SELinux: Initializing.
Mar 10 01:00:04.232486 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 10 01:00:04.232502 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 10 01:00:04.232512 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Mar 10 01:00:04.232523 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 10 01:00:04.232534 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 10 01:00:04.232544 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 10 01:00:04.232555 kernel: Performance Events: PMU not available due to virtualization, using software events only.
Mar 10 01:00:04.232565 kernel: signal: max sigframe size: 1776
Mar 10 01:00:04.232688 kernel: rcu: Hierarchical SRCU implementation.
Mar 10 01:00:04.232701 kernel: rcu: Max phase no-delay instances is 400.
Mar 10 01:00:04.232717 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Mar 10 01:00:04.232727 kernel: smp: Bringing up secondary CPUs ...
Mar 10 01:00:04.232737 kernel: smpboot: x86: Booting SMP configuration:
Mar 10 01:00:04.232746 kernel: .... node #0, CPUs: #1 #2 #3
Mar 10 01:00:04.232756 kernel: smp: Brought up 1 node, 4 CPUs
Mar 10 01:00:04.232766 kernel: smpboot: Max logical packages: 1
Mar 10 01:00:04.232775 kernel: smpboot: Total of 4 processors activated (19563.39 BogoMIPS)
Mar 10 01:00:04.232785 kernel: devtmpfs: initialized
Mar 10 01:00:04.232796 kernel: x86/mm: Memory block size: 128MB
Mar 10 01:00:04.232810 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 10 01:00:04.232820 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Mar 10 01:00:04.232829 kernel: pinctrl core: initialized pinctrl subsystem
Mar 10 01:00:04.232839 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 10 01:00:04.232849 kernel: audit: initializing netlink subsys (disabled)
Mar 10 01:00:04.232859 kernel: audit: type=2000 audit(1773104381.335:1): state=initialized audit_enabled=0 res=1
Mar 10 01:00:04.232870 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 10 01:00:04.232879 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 10 01:00:04.232889 kernel: cpuidle: using governor menu
Mar 10 01:00:04.232903 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 10 01:00:04.232913 kernel: dca service started, version 1.12.1
Mar 10 01:00:04.232922 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Mar 10 01:00:04.232934 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Mar 10 01:00:04.232943 kernel: PCI: Using configuration type 1 for base access
Mar 10 01:00:04.232953 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 10 01:00:04.232963 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 10 01:00:04.232973 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Mar 10 01:00:04.232982 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 10 01:00:04.232997 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Mar 10 01:00:04.233007 kernel: ACPI: Added _OSI(Module Device)
Mar 10 01:00:04.233017 kernel: ACPI: Added _OSI(Processor Device)
Mar 10 01:00:04.233027 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 10 01:00:04.233037 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 10 01:00:04.233047 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Mar 10 01:00:04.233056 kernel: ACPI: Interpreter enabled
Mar 10 01:00:04.233067 kernel: ACPI: PM: (supports S0 S3 S5)
Mar 10 01:00:04.233523 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 10 01:00:04.233540 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 10 01:00:04.233551 kernel: PCI: Using E820 reservations for host bridge windows
Mar 10 01:00:04.233561 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Mar 10 01:00:04.233571 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 10 01:00:04.246827 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 10 01:00:04.247055 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Mar 10 01:00:04.247627 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Mar 10 01:00:04.247652 kernel: PCI host bridge to bus 0000:00
Mar 10 01:00:04.250703 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 10 01:00:04.250887 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 10 01:00:04.255777 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 10 01:00:04.264048 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Mar 10 01:00:04.264715 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Mar 10 01:00:04.265010 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Mar 10 01:00:04.265665 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 10 01:00:04.266814 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Mar 10 01:00:04.267844 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Mar 10 01:00:04.268041 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Mar 10 01:00:04.268576 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Mar 10 01:00:04.268774 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Mar 10 01:00:04.268966 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Mar 10 01:00:04.270605 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Mar 10 01:00:04.270806 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Mar 10 01:00:04.271006 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Mar 10 01:00:04.271790 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Mar 10 01:00:04.272681 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Mar 10 01:00:04.272878 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Mar 10 01:00:04.274444 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Mar 10 01:00:04.274642 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Mar 10 01:00:04.274961 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Mar 10 01:00:04.275540 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Mar 10 01:00:04.275726 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Mar 10 01:00:04.275909 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Mar 10 01:00:04.278037 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Mar 10 01:00:04.278906 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Mar 10 01:00:04.286912 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Mar 10 01:00:04.287592 kernel: pci 0000:00:1f.0: quirk_ich7_lpc+0x0/0x180 took 16601 usecs
Mar 10 01:00:04.290490 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Mar 10 01:00:04.290689 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Mar 10 01:00:04.291738 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Mar 10 01:00:04.294436 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Mar 10 01:00:04.298497 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Mar 10 01:00:04.298517 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 10 01:00:04.298528 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 10 01:00:04.298538 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 10 01:00:04.298549 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 10 01:00:04.298921 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Mar 10 01:00:04.298965 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Mar 10 01:00:04.298975 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Mar 10 01:00:04.299009 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Mar 10 01:00:04.299019 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Mar 10 01:00:04.299028 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Mar 10 01:00:04.299039 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Mar 10 01:00:04.299049 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Mar 10 01:00:04.299059 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Mar 10 01:00:04.299394 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Mar 10 01:00:04.299410 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Mar 10 01:00:04.299421 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Mar 10 01:00:04.299436 kernel: iommu: Default domain type: Translated
Mar 10 01:00:04.299446 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 10 01:00:04.299456 kernel: PCI: Using ACPI for IRQ routing
Mar 10 01:00:04.299466 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 10 01:00:04.299476 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Mar 10 01:00:04.299486 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Mar 10 01:00:04.299681 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Mar 10 01:00:04.299867 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Mar 10 01:00:04.300746 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Mar 10 01:00:04.300770 kernel: vgaarb: loaded
Mar 10 01:00:04.300781 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Mar 10 01:00:04.300791 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Mar 10 01:00:04.300801 kernel: clocksource: Switched to clocksource kvm-clock
Mar 10 01:00:04.300811 kernel: VFS: Disk quotas dquot_6.6.0
Mar 10 01:00:04.300822 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 10 01:00:04.300832 kernel: pnp: PnP ACPI init
Mar 10 01:00:04.303972 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Mar 10 01:00:04.304049 kernel: pnp: PnP ACPI: found 6 devices
Mar 10 01:00:04.304062 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 10 01:00:04.304405 kernel: NET: Registered PF_INET protocol family
Mar 10 01:00:04.304419 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 10 01:00:04.304430 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 10 01:00:04.304441 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 10 01:00:04.304451 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 10 01:00:04.304462 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 10 01:00:04.304472 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 10 01:00:04.304488 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 10 01:00:04.304498 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 10 01:00:04.304508 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 10 01:00:04.304517 kernel: NET: Registered PF_XDP protocol family
Mar 10 01:00:04.304700 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 10 01:00:04.304873 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 10 01:00:04.305043 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 10 01:00:04.313560 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Mar 10 01:00:04.313820 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Mar 10 01:00:04.313988 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Mar 10 01:00:04.314004 kernel: PCI: CLS 0 bytes, default 64
Mar 10 01:00:04.314015 kernel: Initialise system trusted keyrings
Mar 10 01:00:04.314026 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 10 01:00:04.314037 kernel: Key type asymmetric registered
Mar 10 01:00:04.314048 kernel: Asymmetric key parser 'x509' registered
Mar 10 01:00:04.314058 kernel: hrtimer: interrupt took 15221143 ns
Mar 10 01:00:04.315524 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Mar 10 01:00:04.315704 kernel: io scheduler mq-deadline registered
Mar 10 01:00:04.315719 kernel: io scheduler kyber registered
Mar 10 01:00:04.315729 kernel: io scheduler bfq registered
Mar 10 01:00:04.315739 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 10 01:00:04.315750 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Mar 10 01:00:04.315760 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Mar 10 01:00:04.315769 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Mar 10 01:00:04.315780 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 10 01:00:04.315790 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Mar 10 01:00:04.315805 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Mar 10 01:00:04.315816 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Mar 10 01:00:04.315827 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Mar 10 01:00:04.326955 kernel: rtc_cmos 00:04: RTC can wake from S4
Mar 10 01:00:04.327013 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Mar 10 01:00:04.327575 kernel: rtc_cmos 00:04: registered as rtc0
Mar 10 01:00:04.327595 kernel: hpet: Lost 1 RTC interrupts
Mar 10 01:00:04.327789 kernel: rtc_cmos 00:04: setting system clock to 2026-03-10T01:00:00 UTC (1773104400)
Mar 10 01:00:04.332648 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Mar 10 01:00:04.332707 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Mar 10 01:00:04.332720 kernel: NET: Registered PF_INET6 protocol family
Mar 10 01:00:04.332732 kernel: Segment Routing with IPv6
Mar 10 01:00:04.332744 kernel: In-situ OAM (IOAM) with IPv6
Mar 10 01:00:04.332756 kernel: NET: Registered PF_PACKET protocol family
Mar 10 01:00:04.332769 kernel: Key type dns_resolver registered
Mar 10 01:00:04.332782 kernel: IPI shorthand broadcast: enabled
Mar 10 01:00:04.332792 kernel: sched_clock: Marking stable (15739065998, 1530094996)->(20451238785, -3182077791)
Mar 10 01:00:04.332839 kernel: registered taskstats version 1
Mar 10 01:00:04.332853 kernel: Loading compiled-in X.509 certificates
Mar 10 01:00:04.332864 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 611e035accba842cc9fafb5ced2ca41a603067aa'
Mar 10 01:00:04.332876 kernel: Key type .fscrypt registered
Mar 10 01:00:04.332888 kernel: Key type fscrypt-provisioning registered
Mar 10 01:00:04.332901 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 10 01:00:04.332914 kernel: ima: Allocated hash algorithm: sha1
Mar 10 01:00:04.332927 kernel: ima: No architecture policies found
Mar 10 01:00:04.332940 kernel: clk: Disabling unused clocks
Mar 10 01:00:04.332956 kernel: Freeing unused kernel image (initmem) memory: 42896K
Mar 10 01:00:04.332966 kernel: Write protecting the kernel read-only data: 36864k
Mar 10 01:00:04.332977 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Mar 10 01:00:04.332986 kernel: Run /init as init process
Mar 10 01:00:04.332996 kernel: with arguments:
Mar 10 01:00:04.333008 kernel: /init
Mar 10 01:00:04.333021 kernel: with environment:
Mar 10 01:00:04.333033 kernel: HOME=/
Mar 10 01:00:04.333043 kernel: TERM=linux
Mar 10 01:00:04.333065 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 10 01:00:04.333412 systemd[1]: Detected virtualization kvm.
Mar 10 01:00:04.333429 systemd[1]: Detected architecture x86-64.
Mar 10 01:00:04.333442 systemd[1]: Running in initrd.
Mar 10 01:00:04.333455 systemd[1]: No hostname configured, using default hostname.
Mar 10 01:00:04.333468 systemd[1]: Hostname set to .
Mar 10 01:00:04.333483 systemd[1]: Initializing machine ID from VM UUID.
Mar 10 01:00:04.333503 systemd[1]: Queued start job for default target initrd.target.
Mar 10 01:00:04.333517 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 10 01:00:04.333530 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 10 01:00:04.333546 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 10 01:00:04.333560 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 10 01:00:04.333574 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 10 01:00:04.333585 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 10 01:00:04.333603 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 10 01:00:04.333613 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 10 01:00:04.333627 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 10 01:00:04.333642 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 10 01:00:04.333677 systemd[1]: Reached target paths.target - Path Units.
Mar 10 01:00:04.333696 systemd[1]: Reached target slices.target - Slice Units.
Mar 10 01:00:04.333714 systemd[1]: Reached target swap.target - Swaps.
Mar 10 01:00:04.333728 systemd[1]: Reached target timers.target - Timer Units.
Mar 10 01:00:04.333741 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 10 01:00:04.333755 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 10 01:00:04.333769 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 10 01:00:04.333780 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Mar 10 01:00:04.333791 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 10 01:00:04.333802 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 10 01:00:04.333815 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 10 01:00:04.333834 systemd[1]: Reached target sockets.target - Socket Units.
Mar 10 01:00:04.333845 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 10 01:00:04.333856 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 10 01:00:04.333866 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 10 01:00:04.333879 systemd[1]: Starting systemd-fsck-usr.service...
Mar 10 01:00:04.333891 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 10 01:00:04.333904 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 10 01:00:04.333956 systemd-journald[194]: Collecting audit messages is disabled.
Mar 10 01:00:04.333995 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 10 01:00:04.334009 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 10 01:00:04.334023 systemd-journald[194]: Journal started
Mar 10 01:00:04.334053 systemd-journald[194]: Runtime Journal (/run/log/journal/5cc7ca1b9b524019906b539021b3cad5) is 6.0M, max 48.4M, 42.3M free.
Mar 10 01:00:04.274666 systemd-modules-load[195]: Inserted module 'overlay'
Mar 10 01:00:04.388676 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 10 01:00:04.420906 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 10 01:00:04.465698 systemd[1]: Finished systemd-fsck-usr.service.
Mar 10 01:00:04.601007 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 10 01:00:04.607645 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 10 01:00:06.026982 kernel: Bridge firewalling registered
Mar 10 01:00:04.610856 systemd-modules-load[195]: Inserted module 'br_netfilter'
Mar 10 01:00:06.071671 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 10 01:00:06.104946 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 10 01:00:06.167605 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 10 01:00:06.190408 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 10 01:00:06.211056 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 10 01:00:06.318468 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 10 01:00:06.378988 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 10 01:00:06.409965 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 10 01:00:06.445454 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 10 01:00:06.494728 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 10 01:00:06.585830 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 10 01:00:06.661620 dracut-cmdline[228]: dracut-dracut-053
Mar 10 01:00:06.616631 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 10 01:00:06.698700 dracut-cmdline[228]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=2de2345ba8612ade61882513e7d9ebf4aad52996b6d7f4c567d9970e886b17cc
Mar 10 01:00:06.849754 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 10 01:00:07.077688 systemd-resolved[265]: Positive Trust Anchors:
Mar 10 01:00:07.077824 systemd-resolved[265]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 10 01:00:07.077870 systemd-resolved[265]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 10 01:00:07.277680 systemd-resolved[265]: Defaulting to hostname 'linux'.
Mar 10 01:00:07.289876 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 10 01:00:07.319783 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 10 01:00:07.411511 kernel: SCSI subsystem initialized
Mar 10 01:00:07.515743 kernel: Loading iSCSI transport class v2.0-870.
Mar 10 01:00:07.614614 kernel: iscsi: registered transport (tcp)
Mar 10 01:00:07.703710 kernel: iscsi: registered transport (qla4xxx)
Mar 10 01:00:07.703839 kernel: QLogic iSCSI HBA Driver
Mar 10 01:00:07.890067 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 10 01:00:07.931891 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 10 01:00:08.084620 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 10 01:00:08.085838 kernel: device-mapper: uevent: version 1.0.3
Mar 10 01:00:08.104993 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 10 01:00:08.297871 kernel: raid6: avx2x4 gen() 17197 MB/s
Mar 10 01:00:08.322867 kernel: raid6: avx2x2 gen() 15645 MB/s
Mar 10 01:00:08.357678 kernel: raid6: avx2x1 gen() 7617 MB/s
Mar 10 01:00:08.357767 kernel: raid6: using algorithm avx2x4 gen() 17197 MB/s
Mar 10 01:00:08.394433 kernel: raid6: .... xor() 3715 MB/s, rmw enabled
Mar 10 01:00:08.394513 kernel: raid6: using avx2x2 recovery algorithm
Mar 10 01:00:08.491536 kernel: xor: automatically using best checksumming function avx
Mar 10 01:00:10.102916 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 10 01:00:10.200952 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 10 01:00:10.285572 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 10 01:00:10.324698 systemd-udevd[418]: Using default interface naming scheme 'v255'.
Mar 10 01:00:10.356865 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 10 01:00:10.393700 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 10 01:00:10.517685 dracut-pre-trigger[428]: rd.md=0: removing MD RAID activation
Mar 10 01:00:10.697901 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 10 01:00:10.755820 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 10 01:00:11.081702 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 10 01:00:11.150539 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 10 01:00:11.251055 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 10 01:00:11.280523 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 10 01:00:11.344786 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 10 01:00:11.398976 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 10 01:00:11.452973 kernel: cryptd: max_cpu_qlen set to 1000
Mar 10 01:00:11.511637 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Mar 10 01:00:11.587986 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 10 01:00:11.676673 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Mar 10 01:00:11.677004 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 10 01:00:11.677025 kernel: GPT:9289727 != 19775487
Mar 10 01:00:11.641991 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 10 01:00:11.767548 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 10 01:00:11.767588 kernel: GPT:9289727 != 19775487
Mar 10 01:00:11.767608 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 10 01:00:11.767626 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 10 01:00:11.755550 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 10 01:00:11.755744 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 10 01:00:11.805490 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 10 01:00:11.816599 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 10 01:00:11.816931 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 10 01:00:11.884512 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 10 01:00:12.102960 kernel: libata version 3.00 loaded.
Mar 10 01:00:12.104572 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 10 01:00:12.242747 kernel: ahci 0000:00:1f.2: version 3.0
Mar 10 01:00:12.243740 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Mar 10 01:00:12.268994 kernel: AVX2 version of gcm_enc/dec engaged.
Mar 10 01:00:12.295933 kernel: AES CTR mode by8 optimization enabled
Mar 10 01:00:12.424691 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Mar 10 01:00:12.425596 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Mar 10 01:00:12.659999 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Mar 10 01:00:15.086426 kernel: BTRFS: device fsid a7ce059b-f34b-4785-93b9-44632d452486 devid 1 transid 33 /dev/vda3 scanned by (udev-worker) (479)
Mar 10 01:00:15.086472 kernel: scsi host0: ahci
Mar 10 01:00:15.087722 kernel: scsi host1: ahci
Mar 10 01:00:15.088038 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (470)
Mar 10 01:00:15.088051 kernel: scsi host2: ahci
Mar 10 01:00:15.088646 kernel: scsi host3: ahci
Mar 10 01:00:15.089001 kernel: scsi host4: ahci
Mar 10 01:00:15.089821 kernel: scsi host5: ahci
Mar 10 01:00:15.090712 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
Mar 10 01:00:15.090734 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
Mar 10 01:00:15.090749 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
Mar 10 01:00:15.090763 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
Mar 10 01:00:15.090778 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
Mar 10 01:00:15.090793 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
Mar 10 01:00:15.090808 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Mar 10 01:00:15.090822 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Mar 10 01:00:15.090838 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Mar 10 01:00:15.090853 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Mar 10 01:00:15.090875 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Mar 10 01:00:15.090892 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Mar 10 01:00:15.090906 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Mar 10 01:00:15.090920 kernel: ata3.00: applying bridge limits
Mar 10 01:00:15.090934 kernel: ata3.00: configured for UDMA/100
Mar 10 01:00:15.090947 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Mar 10 01:00:15.091757 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Mar 10 01:00:15.092753 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Mar 10 01:00:15.092780 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Mar 10 01:00:14.996936 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Mar 10 01:00:15.033567 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 10 01:00:15.121864 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 10 01:00:15.182021 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Mar 10 01:00:15.207605 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Mar 10 01:00:15.347644 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 10 01:00:15.415513 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 10 01:00:15.481978 disk-uuid[572]: Primary Header is updated.
Mar 10 01:00:15.481978 disk-uuid[572]: Secondary Entries is updated.
Mar 10 01:00:15.481978 disk-uuid[572]: Secondary Header is updated.
Mar 10 01:00:15.561714 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 10 01:00:15.586895 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 10 01:00:15.628712 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 10 01:00:15.655991 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 10 01:00:16.647501 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 10 01:00:16.656551 disk-uuid[573]: The operation has completed successfully.
Mar 10 01:00:16.850711 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 10 01:00:16.851470 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 10 01:00:16.912526 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 10 01:00:16.947439 sh[594]: Success
Mar 10 01:00:17.070653 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Mar 10 01:00:17.386567 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 10 01:00:17.519984 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 10 01:00:17.565040 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 10 01:00:17.723600 kernel: BTRFS info (device dm-0): first mount of filesystem a7ce059b-f34b-4785-93b9-44632d452486
Mar 10 01:00:17.724557 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Mar 10 01:00:17.724575 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Mar 10 01:00:17.748010 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Mar 10 01:00:17.758950 kernel: BTRFS info (device dm-0): using free space tree
Mar 10 01:00:17.885783 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 10 01:00:17.918475 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 10 01:00:17.986032 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 10 01:00:18.020878 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 10 01:00:18.110573 kernel: BTRFS info (device vda6): first mount of filesystem 3e73d814-00c9-411d-8220-21b9b3666124
Mar 10 01:00:18.110628 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 10 01:00:18.110654 kernel: BTRFS info (device vda6): using free space tree
Mar 10 01:00:18.170766 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 10 01:00:18.249826 systemd[1]: mnt-oem.mount: Deactivated successfully.
Mar 10 01:00:18.290561 kernel: BTRFS info (device vda6): last unmount of filesystem 3e73d814-00c9-411d-8220-21b9b3666124
Mar 10 01:00:18.318735 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 10 01:00:18.389640 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 10 01:00:20.185982 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 10 01:00:20.246684 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 10 01:00:20.443559 systemd-networkd[782]: lo: Link UP
Mar 10 01:00:20.443695 systemd-networkd[782]: lo: Gained carrier
Mar 10 01:00:20.448730 systemd-networkd[782]: Enumeration completed
Mar 10 01:00:20.449737 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 10 01:00:20.464544 systemd-networkd[782]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 10 01:00:20.464551 systemd-networkd[782]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 10 01:00:20.470474 systemd-networkd[782]: eth0: Link UP
Mar 10 01:00:20.470483 systemd-networkd[782]: eth0: Gained carrier
Mar 10 01:00:20.470497 systemd-networkd[782]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 10 01:00:20.495049 systemd[1]: Reached target network.target - Network.
Mar 10 01:00:20.669813 systemd-networkd[782]: eth0: DHCPv4 address 10.0.0.67/16, gateway 10.0.0.1 acquired from 10.0.0.1
Mar 10 01:00:20.995062 ignition[678]: Ignition 2.19.0
Mar 10 01:00:20.995578 ignition[678]: Stage: fetch-offline
Mar 10 01:00:20.996605 ignition[678]: no configs at "/usr/lib/ignition/base.d"
Mar 10 01:00:20.996738 ignition[678]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 10 01:00:20.998939 ignition[678]: parsed url from cmdline: ""
Mar 10 01:00:20.998949 ignition[678]: no config URL provided
Mar 10 01:00:20.998958 ignition[678]: reading system config file "/usr/lib/ignition/user.ign"
Mar 10 01:00:20.998974 ignition[678]: no config at "/usr/lib/ignition/user.ign"
Mar 10 01:00:20.999059 ignition[678]: op(1): [started] loading QEMU firmware config module
Mar 10 01:00:20.999318 ignition[678]: op(1): executing: "modprobe" "qemu_fw_cfg"
Mar 10 01:00:21.152796 ignition[678]: op(1): [finished] loading QEMU firmware config module
Mar 10 01:00:22.209922 systemd-networkd[782]: eth0: Gained IPv6LL
Mar 10 01:00:23.761518 ignition[678]: parsing config with SHA512: a48f8a59e7e3186011e07c77b32a57e31835f63adf2cca66aed43bb714be513822011d01ec603beceb10f19470077cab4b5c96b705721a17cd06c4628c71d9b9
Mar 10 01:00:23.879005 unknown[678]: fetched base config from "system"
Mar 10 01:00:23.879507 unknown[678]: fetched user config from "qemu"
Mar 10 01:00:23.906892 ignition[678]: fetch-offline: fetch-offline passed
Mar 10 01:00:23.922012 ignition[678]: Ignition finished successfully
Mar 10 01:00:23.950831 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 10 01:00:23.976914 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Mar 10 01:00:24.040612 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 10 01:00:24.373859 ignition[788]: Ignition 2.19.0
Mar 10 01:00:24.373995 ignition[788]: Stage: kargs
Mar 10 01:00:24.394846 ignition[788]: no configs at "/usr/lib/ignition/base.d"
Mar 10 01:00:24.394864 ignition[788]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 10 01:00:24.440966 ignition[788]: kargs: kargs passed
Mar 10 01:00:24.445786 ignition[788]: Ignition finished successfully
Mar 10 01:00:24.474318 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 10 01:00:24.526642 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 10 01:00:24.857815 ignition[797]: Ignition 2.19.0
Mar 10 01:00:24.857834 ignition[797]: Stage: disks
Mar 10 01:00:24.858846 ignition[797]: no configs at "/usr/lib/ignition/base.d"
Mar 10 01:00:24.858867 ignition[797]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 10 01:00:24.873628 ignition[797]: disks: disks passed
Mar 10 01:00:24.873717 ignition[797]: Ignition finished successfully
Mar 10 01:00:24.944634 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 10 01:00:24.958017 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 10 01:00:24.998744 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 10 01:00:24.999732 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 10 01:00:24.999782 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 10 01:00:24.999813 systemd[1]: Reached target basic.target - Basic System.
Mar 10 01:00:25.210000 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 10 01:00:25.361553 systemd-fsck[809]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Mar 10 01:00:25.404756 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 10 01:00:25.498818 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 10 01:00:27.386804 kernel: EXT4-fs (vda9): mounted filesystem 8ab7565f-94b4-4514-a19e-abd5bcc78da1 r/w with ordered data mode. Quota mode: none.
Mar 10 01:00:27.396551 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 10 01:00:27.421963 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 10 01:00:27.546984 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 10 01:00:27.598040 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 10 01:00:27.633961 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (818)
Mar 10 01:00:27.706825 kernel: BTRFS info (device vda6): first mount of filesystem 3e73d814-00c9-411d-8220-21b9b3666124
Mar 10 01:00:27.706867 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 10 01:00:27.706900 kernel: BTRFS info (device vda6): using free space tree
Mar 10 01:00:27.709899 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Mar 10 01:00:27.711853 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 10 01:00:27.711902 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 10 01:00:27.735833 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 10 01:00:27.762912 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 10 01:00:28.112866 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 10 01:00:28.141862 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 10 01:00:28.580937 initrd-setup-root[843]: cut: /sysroot/etc/passwd: No such file or directory
Mar 10 01:00:28.719958 initrd-setup-root[850]: cut: /sysroot/etc/group: No such file or directory
Mar 10 01:00:28.812057 initrd-setup-root[857]: cut: /sysroot/etc/shadow: No such file or directory
Mar 10 01:00:28.876870 initrd-setup-root[864]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 10 01:00:30.113971 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 10 01:00:30.184328 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 10 01:00:30.205551 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 10 01:00:30.277836 kernel: BTRFS info (device vda6): last unmount of filesystem 3e73d814-00c9-411d-8220-21b9b3666124
Mar 10 01:00:30.242784 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 10 01:00:30.427994 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 10 01:00:31.004882 ignition[932]: INFO : Ignition 2.19.0
Mar 10 01:00:31.004882 ignition[932]: INFO : Stage: mount
Mar 10 01:00:31.035782 ignition[932]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 10 01:00:31.035782 ignition[932]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 10 01:00:31.106601 ignition[932]: INFO : mount: mount passed
Mar 10 01:00:31.106601 ignition[932]: INFO : Ignition finished successfully
Mar 10 01:00:31.119534 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 10 01:00:31.227293 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 10 01:00:31.323869 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 10 01:00:31.472757 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (945)
Mar 10 01:00:31.500317 kernel: BTRFS info (device vda6): first mount of filesystem 3e73d814-00c9-411d-8220-21b9b3666124
Mar 10 01:00:31.528809 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 10 01:00:31.528893 kernel: BTRFS info (device vda6): using free space tree
Mar 10 01:00:31.667560 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 10 01:00:31.688014 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 10 01:00:32.163592 ignition[962]: INFO : Ignition 2.19.0
Mar 10 01:00:32.163592 ignition[962]: INFO : Stage: files
Mar 10 01:00:32.203023 ignition[962]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 10 01:00:32.203023 ignition[962]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 10 01:00:32.203023 ignition[962]: DEBUG : files: compiled without relabeling support, skipping
Mar 10 01:00:32.203023 ignition[962]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 10 01:00:32.203023 ignition[962]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 10 01:00:32.322827 ignition[962]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 10 01:00:32.322827 ignition[962]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 10 01:00:32.322827 ignition[962]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 10 01:00:32.322827 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Mar 10 01:00:32.322827 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Mar 10 01:00:32.322827 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 10 01:00:32.322827 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Mar 10 01:00:32.273592 unknown[962]: wrote ssh authorized keys file for user: core
Mar 10 01:00:32.729019 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Mar 10 01:00:34.492698 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 10 01:00:34.492698 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 10 01:00:34.681762 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Mar 10 01:00:34.833534 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Mar 10 01:00:37.345644 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 10 01:00:37.345644 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh"
Mar 10 01:00:37.406542 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh"
Mar 10 01:00:37.406542 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 10 01:00:37.406542 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 10 01:00:37.406542 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 10 01:00:37.406542 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 10 01:00:37.406542 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 10 01:00:37.406542 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 10 01:00:37.406542 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 10 01:00:37.406542 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 10 01:00:37.406542 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Mar 10 01:00:37.406542 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Mar 10 01:00:37.406542 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Mar 10 01:00:37.406542 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-x86-64.raw: attempt #1
Mar 10 01:00:37.979868 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK
Mar 10 01:00:51.663835 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Mar 10 01:00:51.663835 ignition[962]: INFO : files: op(d): [started] processing unit "containerd.service"
Mar 10 01:00:51.744341 ignition[962]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Mar 10 01:00:51.744341 ignition[962]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Mar 10 01:00:51.744341 ignition[962]: INFO : files: op(d): [finished] processing unit "containerd.service"
Mar 10 01:00:51.744341 ignition[962]: INFO : files: op(f): [started] processing unit "prepare-helm.service"
Mar 10 01:00:51.744341 ignition[962]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 10 01:00:51.744341 ignition[962]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 10 01:00:51.744341 ignition[962]: INFO : files: op(f): [finished] processing unit "prepare-helm.service"
Mar 10 01:00:51.744341 ignition[962]: INFO : files: op(11): [started] processing unit "coreos-metadata.service"
Mar 10 01:00:51.744341 ignition[962]: INFO : files: op(11): op(12): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 10 01:00:51.744341 ignition[962]: INFO : files: op(11): op(12): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 10 01:00:51.744341 ignition[962]: INFO : files: op(11): [finished] processing unit "coreos-metadata.service"
Mar 10 01:00:51.744341 ignition[962]: INFO : files: op(13): [started] setting preset to disabled for "coreos-metadata.service"
Mar 10 01:00:52.359051 ignition[962]: INFO : files: op(13): op(14): [started] removing enablement symlink(s) for "coreos-metadata.service"
Mar 10 01:00:52.412049 ignition[962]: INFO : files: op(13): op(14): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Mar 10 01:00:52.467562 ignition[962]: INFO : files: op(13): [finished] setting preset to disabled for "coreos-metadata.service"
Mar 10 01:00:52.467562 ignition[962]: INFO : files: op(15): [started] setting preset to enabled for "prepare-helm.service"
Mar 10 01:00:52.467562 ignition[962]: INFO : files: op(15): [finished] setting preset to enabled for "prepare-helm.service"
Mar 10 01:00:52.467562 ignition[962]: INFO : files: createResultFile: createFiles: op(16): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 10 01:00:52.467562 ignition[962]: INFO : files: createResultFile: createFiles: op(16): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 10 01:00:52.467562 ignition[962]: INFO : files: files passed
Mar 10 01:00:52.467562 ignition[962]: INFO : Ignition finished successfully
Mar 10 01:00:52.446059 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 10 01:00:52.648770 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 10 01:00:52.684064 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 10 01:00:52.723852 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 10 01:00:52.724788 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 10 01:00:52.907564 initrd-setup-root-after-ignition[990]: grep: /sysroot/oem/oem-release: No such file or directory
Mar 10 01:00:52.942701 initrd-setup-root-after-ignition[992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 10 01:00:52.942701 initrd-setup-root-after-ignition[992]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 10 01:00:53.032061 initrd-setup-root-after-ignition[996]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 10 01:00:53.005891 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 10 01:00:53.043033 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 10 01:00:53.176760 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 10 01:00:54.111305 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 10 01:00:54.111793 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 10 01:00:54.187773 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 10 01:00:54.214697 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 10 01:00:54.215556 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 10 01:00:54.340016 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 10 01:00:54.824694 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 10 01:00:54.999950 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 10 01:00:55.504844 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 10 01:00:55.596035 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 10 01:00:55.728645 systemd[1]: Stopped target timers.target - Timer Units.
Mar 10 01:00:55.819891 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 10 01:00:55.823017 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 10 01:00:55.914006 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 10 01:00:55.984911 systemd[1]: Stopped target basic.target - Basic System.
Mar 10 01:00:56.020535 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 10 01:00:56.119970 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 10 01:00:56.164726 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 10 01:00:56.208992 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 10 01:00:56.215888 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 10 01:00:56.216055 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 10 01:00:56.304942 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 10 01:00:56.320902 systemd[1]: Stopped target swap.target - Swaps.
Mar 10 01:00:56.377383 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 10 01:00:56.381567 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 10 01:00:56.519981 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 10 01:00:56.583993 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 10 01:00:56.628979 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 10 01:00:56.633679 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 10 01:00:56.771869 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 10 01:00:56.782712 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 10 01:00:57.113593 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 10 01:00:57.151060 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 10 01:00:57.271348 systemd[1]: Stopped target paths.target - Path Units.
Mar 10 01:00:57.359952 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 10 01:00:57.365701 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 10 01:00:57.627817 systemd[1]: Stopped target slices.target - Slice Units.
Mar 10 01:00:57.681546 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 10 01:00:57.705995 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 10 01:00:57.706682 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 10 01:00:57.827713 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 10 01:00:57.829054 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 10 01:00:57.886029 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 10 01:00:57.886696 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 10 01:00:57.969374 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 10 01:00:57.969788 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 10 01:00:58.179047 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 10 01:00:58.180035 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 10 01:00:58.180818 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 10 01:00:58.369782 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 10 01:00:58.395825 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 10 01:00:58.396902 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 10 01:00:58.457971 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 10 01:00:58.458570 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 10 01:00:58.772017 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 10 01:00:58.772947 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 10 01:00:59.003067 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 10 01:00:59.086063 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 10 01:00:59.088641 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 10 01:00:59.303769 ignition[1016]: INFO : Ignition 2.19.0
Mar 10 01:00:59.303769 ignition[1016]: INFO : Stage: umount
Mar 10 01:00:59.400648 ignition[1016]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 10 01:00:59.400648 ignition[1016]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 10 01:00:59.400648 ignition[1016]: INFO : umount: umount passed
Mar 10 01:00:59.400648 ignition[1016]: INFO : Ignition finished successfully
Mar 10 01:00:59.325041 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 10 01:00:59.361821 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 10 01:00:59.407582 systemd[1]: Stopped target network.target - Network.
Mar 10 01:00:59.461558 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 10 01:00:59.462885 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 10 01:00:59.510312 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 10 01:00:59.510558 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 10 01:00:59.594998 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 10 01:00:59.596546 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 10 01:00:59.637870 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 10 01:00:59.639061 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 10 01:00:59.696005 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 10 01:00:59.697054 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 10 01:00:59.895748 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 10 01:01:00.073615 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 10 01:01:00.112907 systemd-networkd[782]: eth0: DHCPv6 lease lost
Mar 10 01:01:00.358929 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 10 01:01:00.366581 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 10 01:01:00.672952 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 10 01:01:00.673825 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 10 01:01:00.827758 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 10 01:01:00.827883 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 10 01:01:01.279364 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 10 01:01:01.341027 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 10 01:01:01.341687 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 10 01:01:01.343309 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 10 01:01:01.343387 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 10 01:01:01.450364 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 10 01:01:01.450753 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 10 01:01:01.539789 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 10 01:01:01.540023 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 10 01:01:01.609861 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 10 01:01:01.683826 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 10 01:01:01.684633 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 10 01:01:01.730936 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 10 01:01:01.731061 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 10 01:01:01.749551 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 10 01:01:01.749630 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 10 01:01:01.827774 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 10 01:01:01.827874 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 10 01:01:01.834627 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 10 01:01:01.834704 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 10 01:01:01.849618 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 10 01:01:01.849705 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 10 01:01:01.853995 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 10 01:01:01.877619 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 10 01:01:01.878036 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 10 01:01:01.896063 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 10 01:01:01.896403 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 10 01:01:01.915635 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 10 01:01:01.916638 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 10 01:01:02.446618 systemd-journald[194]: Received SIGTERM from PID 1 (systemd).
Mar 10 01:01:02.027982 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 10 01:01:02.028768 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 10 01:01:02.076858 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 10 01:01:02.120911 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 10 01:01:02.196649 systemd[1]: Switching root.
Mar 10 01:01:02.644699 systemd-journald[194]: Journal stopped
Mar 10 01:01:13.578044 kernel: SELinux: policy capability network_peer_controls=1
Mar 10 01:01:13.578317 kernel: SELinux: policy capability open_perms=1
Mar 10 01:01:13.578344 kernel: SELinux: policy capability extended_socket_class=1
Mar 10 01:01:13.578364 kernel: SELinux: policy capability always_check_network=0
Mar 10 01:01:13.578385 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 10 01:01:13.578404 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 10 01:01:13.578424 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 10 01:01:13.578527 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 10 01:01:13.578550 kernel: audit: type=1403 audit(1773104466.425:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 10 01:01:13.578573 systemd[1]: Successfully loaded SELinux policy in 399.052ms.
Mar 10 01:01:13.578620 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 165.958ms.
Mar 10 01:01:13.578642 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 10 01:01:13.578663 systemd[1]: Detected virtualization kvm.
Mar 10 01:01:13.578780 systemd[1]: Detected architecture x86-64.
Mar 10 01:01:13.578803 systemd[1]: Detected first boot.
Mar 10 01:01:13.578824 systemd[1]: Initializing machine ID from VM UUID.
Mar 10 01:01:13.578844 zram_generator::config[1077]: No configuration found.
Mar 10 01:01:13.578874 systemd[1]: Populated /etc with preset unit settings.
Mar 10 01:01:13.578895 systemd[1]: Queued start job for default target multi-user.target.
Mar 10 01:01:13.578916 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Mar 10 01:01:13.578947 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 10 01:01:13.578969 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 10 01:01:13.578990 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 10 01:01:13.579021 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 10 01:01:13.579042 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 10 01:01:13.579067 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 10 01:01:13.579241 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 10 01:01:13.579261 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 10 01:01:13.579281 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 10 01:01:13.579303 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 10 01:01:13.579324 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 10 01:01:13.579344 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 10 01:01:13.579365 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 10 01:01:13.579385 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 10 01:01:13.579575 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Mar 10 01:01:13.579598 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 10 01:01:13.579619 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 10 01:01:13.579640 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 10 01:01:13.579660 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 10 01:01:13.579681 systemd[1]: Reached target slices.target - Slice Units.
Mar 10 01:01:13.579701 systemd[1]: Reached target swap.target - Swaps.
Mar 10 01:01:13.579721 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 10 01:01:13.579748 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 10 01:01:13.579769 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 10 01:01:13.579788 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Mar 10 01:01:13.579809 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 10 01:01:13.579829 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 10 01:01:13.579849 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 10 01:01:13.579871 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 10 01:01:13.579891 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 10 01:01:13.579911 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 10 01:01:13.579936 systemd[1]: Mounting media.mount - External Media Directory...
Mar 10 01:01:13.579957 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 10 01:01:13.579977 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 10 01:01:13.579997 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 10 01:01:13.580233 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 10 01:01:13.580256 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 10 01:01:13.580278 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 10 01:01:13.580297 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 10 01:01:13.580319 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 10 01:01:13.580346 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 10 01:01:13.580368 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 10 01:01:13.580388 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 10 01:01:13.580408 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 10 01:01:13.580429 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 10 01:01:13.580530 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 10 01:01:13.580554 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Mar 10 01:01:13.580575 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Mar 10 01:01:13.580603 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 10 01:01:13.580624 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 10 01:01:13.580644 kernel: ACPI: bus type drm_connector registered
Mar 10 01:01:13.580664 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 10 01:01:13.580686 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 10 01:01:13.580705 kernel: loop: module loaded
Mar 10 01:01:13.580725 kernel: fuse: init (API version 7.39)
Mar 10 01:01:13.580745 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 10 01:01:13.580852 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 10 01:01:13.580914 systemd-journald[1177]: Collecting audit messages is disabled.
Mar 10 01:01:13.580953 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 10 01:01:13.580974 systemd-journald[1177]: Journal started
Mar 10 01:01:13.581005 systemd-journald[1177]: Runtime Journal (/run/log/journal/5cc7ca1b9b524019906b539021b3cad5) is 6.0M, max 48.4M, 42.3M free.
Mar 10 01:01:13.603394 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 10 01:01:13.616676 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 10 01:01:13.627426 systemd[1]: Mounted media.mount - External Media Directory.
Mar 10 01:01:13.635535 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 10 01:01:13.649221 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 10 01:01:13.659539 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 10 01:01:13.669620 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 10 01:01:13.682785 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 10 01:01:13.694923 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 10 01:01:13.695704 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 10 01:01:13.704858 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 10 01:01:13.705421 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 10 01:01:13.713840 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 10 01:01:13.714567 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 10 01:01:13.725313 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 10 01:01:13.725823 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 10 01:01:13.735797 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 10 01:01:13.736587 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 10 01:01:13.745626 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 10 01:01:13.746034 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 10 01:01:13.757275 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 10 01:01:13.765982 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 10 01:01:13.780291 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 10 01:01:13.824947 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 10 01:01:14.005270 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 10 01:01:14.027538 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 10 01:01:14.042260 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 10 01:01:14.047644 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 10 01:01:14.065385 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 10 01:01:14.081966 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 10 01:01:14.089416 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 10 01:01:14.099024 systemd-journald[1177]: Time spent on flushing to /var/log/journal/5cc7ca1b9b524019906b539021b3cad5 is 1.845228s for 934 entries.
Mar 10 01:01:14.099024 systemd-journald[1177]: System Journal (/var/log/journal/5cc7ca1b9b524019906b539021b3cad5) is 8.0M, max 195.6M, 187.6M free.
Mar 10 01:01:15.981551 systemd-journald[1177]: Received client request to flush runtime journal.
Mar 10 01:01:14.099606 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 10 01:01:14.128648 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 10 01:01:14.173742 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 10 01:01:14.188979 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 10 01:01:14.199909 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 10 01:01:14.245425 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Mar 10 01:01:15.960683 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 10 01:01:15.990870 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 10 01:01:16.022908 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 10 01:01:16.052720 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 10 01:01:16.139988 udevadm[1217]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Mar 10 01:01:17.424012 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 10 01:01:17.581859 systemd-tmpfiles[1214]: ACLs are not supported, ignoring.
Mar 10 01:01:17.581951 systemd-tmpfiles[1214]: ACLs are not supported, ignoring.
Mar 10 01:01:17.601237 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 10 01:01:17.639674 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 10 01:01:17.815275 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 10 01:01:17.854861 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 10 01:01:18.039460 systemd-tmpfiles[1236]: ACLs are not supported, ignoring.
Mar 10 01:01:18.039549 systemd-tmpfiles[1236]: ACLs are not supported, ignoring.
Mar 10 01:01:18.061068 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 10 01:01:21.401495 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 10 01:01:21.473613 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 10 01:01:21.625289 systemd-udevd[1243]: Using default interface naming scheme 'v255'.
Mar 10 01:01:21.761507 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 10 01:01:21.797365 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 10 01:01:21.879848 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 10 01:01:22.135523 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0.
Mar 10 01:01:22.164487 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1246)
Mar 10 01:01:22.183812 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Mar 10 01:01:22.998223 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Mar 10 01:01:23.073834 systemd-networkd[1247]: lo: Link UP
Mar 10 01:01:23.073921 systemd-networkd[1247]: lo: Gained carrier
Mar 10 01:01:23.079607 systemd-networkd[1247]: Enumeration completed
Mar 10 01:01:23.079885 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 10 01:01:23.130742 kernel: ACPI: button: Power Button [PWRF]
Mar 10 01:01:23.084976 systemd-networkd[1247]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 10 01:01:23.084983 systemd-networkd[1247]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 10 01:01:23.096832 systemd-networkd[1247]: eth0: Link UP
Mar 10 01:01:23.096841 systemd-networkd[1247]: eth0: Gained carrier
Mar 10 01:01:23.096907 systemd-networkd[1247]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 10 01:01:23.103779 systemd-networkd[1247]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 10 01:01:23.117684 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Mar 10 01:01:23.137694 systemd-networkd[1247]: eth0: DHCPv4 address 10.0.0.67/16, gateway 10.0.0.1 acquired from 10.0.0.1
Mar 10 01:01:23.768376 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Mar 10 01:01:23.789365 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Mar 10 01:01:23.789951 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Mar 10 01:01:23.790505 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Mar 10 01:01:23.873889 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 10 01:01:23.901685 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 10 01:01:23.908268 kernel: mousedev: PS/2 mouse device common for all mice
Mar 10 01:01:24.686917 systemd-networkd[1247]: eth0: Gained IPv6LL
Mar 10 01:01:24.711839 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Mar 10 01:01:26.112473 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 10 01:01:26.564592 kernel: kvm_amd: TSC scaling supported
Mar 10 01:01:26.565891 kernel: kvm_amd: Nested Virtualization enabled
Mar 10 01:01:26.565926 kernel: kvm_amd: Nested Paging enabled
Mar 10 01:01:26.576612 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Mar 10 01:01:26.591714 kernel: kvm_amd: PMU virtualization is disabled
Mar 10 01:01:27.807338 kernel: EDAC MC: Ver: 3.0.0
Mar 10 01:01:27.893065 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Mar 10 01:01:27.927789 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Mar 10 01:01:28.012940 lvm[1292]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 10 01:01:28.075773 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Mar 10 01:01:28.097899 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 10 01:01:28.123899 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Mar 10 01:01:28.142719 lvm[1295]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 10 01:01:28.198033 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Mar 10 01:01:28.220783 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 10 01:01:28.236606 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 10 01:01:28.236818 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 10 01:01:28.248259 systemd[1]: Reached target machines.target - Containers.
Mar 10 01:01:28.278917 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Mar 10 01:01:28.324652 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 10 01:01:28.370763 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 10 01:01:28.403481 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 10 01:01:28.413738 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Mar 10 01:01:28.434731 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Mar 10 01:01:28.452724 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 10 01:01:28.468854 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 10 01:01:28.517956 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Mar 10 01:01:28.554608 kernel: loop0: detected capacity change from 0 to 142488
Mar 10 01:01:28.573326 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 10 01:01:28.576038 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Mar 10 01:01:28.706815 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 10 01:01:28.958609 kernel: loop1: detected capacity change from 0 to 140768
Mar 10 01:01:29.209825 kernel: loop2: detected capacity change from 0 to 228704
Mar 10 01:01:29.420960 kernel: loop3: detected capacity change from 0 to 142488
Mar 10 01:01:29.663808 kernel: loop4: detected capacity change from 0 to 140768
Mar 10 01:01:29.861706 kernel: loop5: detected capacity change from 0 to 228704
Mar 10 01:01:30.119648 (sd-merge)[1315]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Mar 10 01:01:30.121592 (sd-merge)[1315]: Merged extensions into '/usr'.
Mar 10 01:01:30.177511 systemd[1]: Reloading requested from client PID 1303 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 10 01:01:30.177682 systemd[1]: Reloading...
Mar 10 01:01:30.556316 zram_generator::config[1352]: No configuration found.
Mar 10 01:01:33.369788 ldconfig[1299]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 10 01:01:33.968765 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 10 01:01:37.177372 systemd[1]: Reloading finished in 6996 ms.
Mar 10 01:01:37.461367 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 10 01:01:37.582697 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 10 01:01:37.708678 systemd[1]: Starting ensure-sysext.service...
Mar 10 01:01:37.779947 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 10 01:01:37.835053 systemd[1]: Reloading requested from client PID 1386 ('systemctl') (unit ensure-sysext.service)...
Mar 10 01:01:37.836848 systemd[1]: Reloading...
Mar 10 01:01:38.023562 zram_generator::config[1412]: No configuration found.
Mar 10 01:01:38.026542 systemd-tmpfiles[1387]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 10 01:01:38.027915 systemd-tmpfiles[1387]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 10 01:01:38.030665 systemd-tmpfiles[1387]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 10 01:01:38.031849 systemd-tmpfiles[1387]: ACLs are not supported, ignoring.
Mar 10 01:01:38.031975 systemd-tmpfiles[1387]: ACLs are not supported, ignoring.
Mar 10 01:01:38.057835 systemd-tmpfiles[1387]: Detected autofs mount point /boot during canonicalization of boot.
Mar 10 01:01:38.057923 systemd-tmpfiles[1387]: Skipping /boot
Mar 10 01:01:38.105938 systemd-tmpfiles[1387]: Detected autofs mount point /boot during canonicalization of boot.
Mar 10 01:01:38.106038 systemd-tmpfiles[1387]: Skipping /boot
Mar 10 01:01:38.525534 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 10 01:01:38.687591 systemd[1]: Reloading finished in 847 ms.
Mar 10 01:01:38.776655 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 10 01:01:38.834676 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Mar 10 01:01:38.876685 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 10 01:01:38.916322 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 10 01:01:39.015046 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 10 01:01:39.059006 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 10 01:01:39.092811 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 10 01:01:39.096945 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 10 01:01:39.117699 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 10 01:01:39.149764 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 10 01:01:39.182572 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 10 01:01:39.195988 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 10 01:01:39.196851 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 10 01:01:39.200003 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 10 01:01:39.210798 augenrules[1484]: No rules
Mar 10 01:01:39.219789 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Mar 10 01:01:39.236956 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 10 01:01:39.237588 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 10 01:01:39.255042 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 10 01:01:39.255754 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 10 01:01:39.292034 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 10 01:01:39.294001 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 10 01:01:39.311503 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 10 01:01:39.353567 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 10 01:01:39.354394 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 10 01:01:39.378269 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 10 01:01:39.440395 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 10 01:01:39.477752 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 10 01:01:39.485798 systemd-resolved[1470]: Positive Trust Anchors:
Mar 10 01:01:39.486373 systemd-resolved[1470]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 10 01:01:39.486579 systemd-resolved[1470]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 10 01:01:39.496946 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 10 01:01:39.507642 systemd-resolved[1470]: Defaulting to hostname 'linux'.
Mar 10 01:01:39.509832 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 10 01:01:39.531858 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 10 01:01:39.533981 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 10 01:01:39.539335 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 10 01:01:39.565044 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 10 01:01:39.583928 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 10 01:01:39.587289 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 10 01:01:39.607030 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 10 01:01:39.607580 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 10 01:01:39.630235 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 10 01:01:39.633957 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 10 01:01:39.654538 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 10 01:01:39.707844 systemd[1]: Reached target network.target - Network.
Mar 10 01:01:39.741362 systemd[1]: Reached target network-online.target - Network is Online.
Mar 10 01:01:39.762321 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 10 01:01:39.777730 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 10 01:01:39.778524 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 10 01:01:39.796686 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 10 01:01:39.813391 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 10 01:01:39.828241 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 10 01:01:39.860323 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 10 01:01:39.872837 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 10 01:01:39.877784 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 10 01:01:39.877902 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 10 01:01:39.882618 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 10 01:01:39.882892 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 10 01:01:39.896664 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 10 01:01:39.897271 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 10 01:01:39.927991 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 10 01:01:39.962011 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 10 01:01:39.985956 systemd[1]: Finished ensure-sysext.service.
Mar 10 01:01:40.009323 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 10 01:01:40.010900 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 10 01:01:40.073797 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 10 01:01:40.074014 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 10 01:01:40.090540 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Mar 10 01:01:40.386930 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Mar 10 01:01:40.420057 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 10 01:01:40.430821 systemd-timesyncd[1532]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Mar 10 01:01:40.431652 systemd-timesyncd[1532]: Initial clock synchronization to Tue 2026-03-10 01:01:40.426883 UTC.
Mar 10 01:01:40.448362 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Mar 10 01:01:40.472260 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Mar 10 01:01:40.501576 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Mar 10 01:01:40.524710 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 10 01:01:40.524757 systemd[1]: Reached target paths.target - Path Units.
Mar 10 01:01:40.548054 systemd[1]: Reached target time-set.target - System Time Set.
Mar 10 01:01:40.562992 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Mar 10 01:01:40.580378 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Mar 10 01:01:40.609225 systemd[1]: Reached target timers.target - Timer Units.
Mar 10 01:01:40.627276 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Mar 10 01:01:40.665929 systemd[1]: Starting docker.socket - Docker Socket for the API...
Mar 10 01:01:40.694498 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Mar 10 01:01:40.724329 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Mar 10 01:01:40.759601 systemd[1]: Reached target sockets.target - Socket Units.
Mar 10 01:01:40.786937 systemd[1]: Reached target basic.target - Basic System.
Mar 10 01:01:40.803342 systemd[1]: System is tainted: cgroupsv1
Mar 10 01:01:40.803600 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Mar 10 01:01:40.803644 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Mar 10 01:01:40.809246 systemd[1]: Starting containerd.service - containerd container runtime...
Mar 10 01:01:40.853858 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Mar 10 01:01:40.894923 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Mar 10 01:01:40.919048 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Mar 10 01:01:40.966520 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Mar 10 01:01:40.985322 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Mar 10 01:01:41.004343 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 10 01:01:41.039673 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Mar 10 01:01:41.071317 jq[1541]: false
Mar 10 01:01:41.071892 extend-filesystems[1542]: Found loop3
Mar 10 01:01:41.071892 extend-filesystems[1542]: Found loop4
Mar 10 01:01:41.071892 extend-filesystems[1542]: Found loop5
Mar 10 01:01:41.071892 extend-filesystems[1542]: Found sr0
Mar 10 01:01:41.071892 extend-filesystems[1542]: Found vda
Mar 10 01:01:41.071892 extend-filesystems[1542]: Found vda1
Mar 10 01:01:41.071892 extend-filesystems[1542]: Found vda2
Mar 10 01:01:41.071892 extend-filesystems[1542]: Found vda3
Mar 10 01:01:41.071892 extend-filesystems[1542]: Found usr
Mar 10 01:01:41.071892 extend-filesystems[1542]: Found vda4
Mar 10 01:01:41.071892 extend-filesystems[1542]: Found vda6
Mar 10 01:01:41.071892 extend-filesystems[1542]: Found vda7
Mar 10 01:01:41.071892 extend-filesystems[1542]: Found vda9
Mar 10 01:01:41.071892 extend-filesystems[1542]: Checking size of /dev/vda9
Mar 10 01:01:41.112959 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Mar 10 01:01:41.138969 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Mar 10 01:01:41.156537 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Mar 10 01:01:41.250603 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Mar 10 01:01:41.268971 extend-filesystems[1542]: Resized partition /dev/vda9
Mar 10 01:01:41.316304 extend-filesystems[1573]: resize2fs 1.47.1 (20-May-2024)
Mar 10 01:01:41.360597 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Mar 10 01:01:41.294246 dbus-daemon[1539]: [system] SELinux support is enabled
Mar 10 01:01:41.378509 systemd[1]: Starting systemd-logind.service - User Login Management...
Mar 10 01:01:41.394473 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1564)
Mar 10 01:01:41.421948 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Mar 10 01:01:41.457561 systemd[1]: Starting update-engine.service - Update Engine...
Mar 10 01:01:41.476510 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Mar 10 01:01:41.495584 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Mar 10 01:01:41.508990 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Mar 10 01:01:41.556421 extend-filesystems[1573]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Mar 10 01:01:41.556421 extend-filesystems[1573]: old_desc_blocks = 1, new_desc_blocks = 1
Mar 10 01:01:41.556421 extend-filesystems[1573]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Mar 10 01:01:41.555856 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Mar 10 01:01:41.619654 jq[1584]: true
Mar 10 01:01:41.619944 extend-filesystems[1542]: Resized filesystem in /dev/vda9
Mar 10 01:01:41.640905 update_engine[1583]: I20260310 01:01:41.590615 1583 main.cc:92] Flatcar Update Engine starting
Mar 10 01:01:41.640905 update_engine[1583]: I20260310 01:01:41.595739 1583 update_check_scheduler.cc:74] Next update check in 2m2s
Mar 10 01:01:41.570553 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Mar 10 01:01:41.571435 systemd[1]: extend-filesystems.service: Deactivated successfully.
Mar 10 01:01:41.571882 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Mar 10 01:01:41.651054 systemd[1]: motdgen.service: Deactivated successfully.
Mar 10 01:01:41.651792 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Mar 10 01:01:41.678931 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Mar 10 01:01:41.716468 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Mar 10 01:01:41.717727 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Mar 10 01:01:41.812413 systemd-logind[1578]: Watching system buttons on /dev/input/event1 (Power Button)
Mar 10 01:01:41.813603 systemd-logind[1578]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Mar 10 01:01:41.821646 systemd-logind[1578]: New seat seat0.
Mar 10 01:01:41.830477 systemd[1]: coreos-metadata.service: Deactivated successfully.
Mar 10 01:01:41.850797 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Mar 10 01:01:41.852519 (ntainerd)[1595]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Mar 10 01:01:41.874445 systemd[1]: Started systemd-logind.service - User Login Management.
Mar 10 01:01:41.903696 jq[1594]: true
Mar 10 01:01:42.188748 tar[1593]: linux-amd64/LICENSE
Mar 10 01:01:42.196744 tar[1593]: linux-amd64/helm
Mar 10 01:01:42.196541 dbus-daemon[1539]: [system] Successfully activated service 'org.freedesktop.systemd1'
Mar 10 01:01:42.218851 systemd[1]: Started update-engine.service - Update Engine.
Mar 10 01:01:42.260667 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Mar 10 01:01:42.261697 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Mar 10 01:01:42.261901 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Mar 10 01:01:42.288027 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Mar 10 01:01:42.288686 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Mar 10 01:01:42.317527 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Mar 10 01:01:42.336755 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Mar 10 01:01:42.957792 bash[1636]: Updated "/home/core/.ssh/authorized_keys"
Mar 10 01:01:42.972853 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Mar 10 01:01:43.056684 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Mar 10 01:01:43.159407 sshd_keygen[1582]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Mar 10 01:01:44.385932 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Mar 10 01:01:44.486386 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Mar 10 01:01:44.521585 systemd[1]: Starting issuegen.service - Generate /run/issue...
Mar 10 01:01:44.576521 systemd[1]: Started sshd@0-10.0.0.67:22-10.0.0.1:36706.service - OpenSSH per-connection server daemon (10.0.0.1:36706).
Mar 10 01:01:45.901958 systemd[1]: issuegen.service: Deactivated successfully.
Mar 10 01:01:45.902747 systemd[1]: Finished issuegen.service - Generate /run/issue.
Mar 10 01:01:45.978843 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Mar 10 01:01:45.985916 locksmithd[1626]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Mar 10 01:01:46.735997 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Mar 10 01:01:46.789571 sshd[1649]: Accepted publickey for core from 10.0.0.1 port 36706 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:01:46.777046 sshd[1649]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:01:46.813462 systemd[1]: Started getty@tty1.service - Getty on tty1.
Mar 10 01:01:46.880846 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Mar 10 01:01:46.905934 systemd[1]: Reached target getty.target - Login Prompts.
Mar 10 01:01:47.086634 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Mar 10 01:01:47.089916 systemd-logind[1578]: New session 1 of user core.
Mar 10 01:01:47.270671 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Mar 10 01:01:47.603695 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Mar 10 01:01:47.650783 systemd[1]: Starting user@500.service - User Manager for UID 500...
Mar 10 01:01:48.117558 (systemd)[1673]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Mar 10 01:01:48.780272 systemd[1673]: Queued start job for default target default.target.
Mar 10 01:01:48.781761 systemd[1673]: Created slice app.slice - User Application Slice.
Mar 10 01:01:48.781791 systemd[1673]: Reached target paths.target - Paths.
Mar 10 01:01:48.781813 systemd[1673]: Reached target timers.target - Timers.
Mar 10 01:01:48.791362 systemd[1673]: Starting dbus.socket - D-Bus User Message Bus Socket...
Mar 10 01:01:48.816286 containerd[1595]: time="2026-03-10T01:01:48.814641907Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Mar 10 01:01:48.848806 systemd[1673]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Mar 10 01:01:48.851410 systemd[1673]: Reached target sockets.target - Sockets.
Mar 10 01:01:48.851530 systemd[1673]: Reached target basic.target - Basic System.
Mar 10 01:01:48.851692 systemd[1673]: Reached target default.target - Main User Target.
Mar 10 01:01:48.851753 systemd[1673]: Startup finished in 352ms.
Mar 10 01:01:48.852296 systemd[1]: Started user@500.service - User Manager for UID 500.
Mar 10 01:01:48.887518 systemd[1]: Started session-1.scope - Session 1 of User core.
Mar 10 01:01:48.920835 containerd[1595]: time="2026-03-10T01:01:48.919875512Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Mar 10 01:01:48.939479 containerd[1595]: time="2026-03-10T01:01:48.939412279Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Mar 10 01:01:48.939641 containerd[1595]: time="2026-03-10T01:01:48.939617424Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Mar 10 01:01:48.939723 containerd[1595]: time="2026-03-10T01:01:48.939700818Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Mar 10 01:01:48.940855 containerd[1595]: time="2026-03-10T01:01:48.940826617Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Mar 10 01:01:48.941821 containerd[1595]: time="2026-03-10T01:01:48.941528162Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Mar 10 01:01:48.943311 containerd[1595]: time="2026-03-10T01:01:48.943279606Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Mar 10 01:01:48.943394 containerd[1595]: time="2026-03-10T01:01:48.943371215Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Mar 10 01:01:48.943824 containerd[1595]: time="2026-03-10T01:01:48.943792804Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 10 01:01:48.943909 containerd[1595]: time="2026-03-10T01:01:48.943889061Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Mar 10 01:01:48.944014 containerd[1595]: time="2026-03-10T01:01:48.943991177Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Mar 10 01:01:48.944321 containerd[1595]: time="2026-03-10T01:01:48.944298509Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Mar 10 01:01:48.944534 containerd[1595]: time="2026-03-10T01:01:48.944505829Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Mar 10 01:01:48.945328 containerd[1595]: time="2026-03-10T01:01:48.945303290Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Mar 10 01:01:48.945624 containerd[1595]: time="2026-03-10T01:01:48.945592932Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 10 01:01:48.945701 containerd[1595]: time="2026-03-10T01:01:48.945682756Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Mar 10 01:01:48.945907 containerd[1595]: time="2026-03-10T01:01:48.945882763Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Mar 10 01:01:48.946630 containerd[1595]: time="2026-03-10T01:01:48.946605766Z" level=info msg="metadata content store policy set" policy=shared
Mar 10 01:01:48.987679 containerd[1595]: time="2026-03-10T01:01:48.987616049Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Mar 10 01:01:48.988381 containerd[1595]: time="2026-03-10T01:01:48.988351573Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Mar 10 01:01:48.988526 containerd[1595]: time="2026-03-10T01:01:48.988501533Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Mar 10 01:01:48.988630 containerd[1595]: time="2026-03-10T01:01:48.988606154Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Mar 10 01:01:48.988733 containerd[1595]: time="2026-03-10T01:01:48.988713700Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Mar 10 01:01:48.991321 containerd[1595]: time="2026-03-10T01:01:48.991293958Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Mar 10 01:01:48.993498 containerd[1595]: time="2026-03-10T01:01:48.993473031Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Mar 10 01:01:48.993780 containerd[1595]: time="2026-03-10T01:01:48.993751515Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Mar 10 01:01:48.993886 containerd[1595]: time="2026-03-10T01:01:48.993861014Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Mar 10 01:01:48.993983 containerd[1595]: time="2026-03-10T01:01:48.993958693Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Mar 10 01:01:48.996971 containerd[1595]: time="2026-03-10T01:01:48.996937502Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Mar 10 01:01:48.997411 containerd[1595]: time="2026-03-10T01:01:48.997383574Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Mar 10 01:01:48.997505 containerd[1595]: time="2026-03-10T01:01:48.997482245Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Mar 10 01:01:48.997592 containerd[1595]: time="2026-03-10T01:01:48.997571569Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Mar 10 01:01:48.997670 containerd[1595]: time="2026-03-10T01:01:48.997651468Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Mar 10 01:01:48.997743 containerd[1595]: time="2026-03-10T01:01:48.997725386Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Mar 10 01:01:48.997820 containerd[1595]: time="2026-03-10T01:01:48.997800085Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Mar 10 01:01:48.997907 containerd[1595]: time="2026-03-10T01:01:48.997885674Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Mar 10 01:01:48.998340 containerd[1595]: time="2026-03-10T01:01:48.998309477Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Mar 10 01:01:48.998450 containerd[1595]: time="2026-03-10T01:01:48.998413278Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Mar 10 01:01:48.998550 containerd[1595]: time="2026-03-10T01:01:48.998529158Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Mar 10 01:01:48.998650 containerd[1595]: time="2026-03-10T01:01:48.998630093Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Mar 10 01:01:48.998756 containerd[1595]: time="2026-03-10T01:01:48.998730226Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Mar 10 01:01:48.998824 containerd[1595]: time="2026-03-10T01:01:48.998809804Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Mar 10 01:01:48.998869 containerd[1595]: time="2026-03-10T01:01:48.998857787Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Mar 10 01:01:48.998912 containerd[1595]: time="2026-03-10T01:01:48.998901343Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Mar 10 01:01:48.998956 containerd[1595]: time="2026-03-10T01:01:48.998944728Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Mar 10 01:01:48.999002 containerd[1595]: time="2026-03-10T01:01:48.998989535Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Mar 10 01:01:48.999373 containerd[1595]: time="2026-03-10T01:01:48.999348828Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Mar 10 01:01:48.999463 containerd[1595]: time="2026-03-10T01:01:48.999440908Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Mar 10 01:01:48.999543 containerd[1595]: time="2026-03-10T01:01:48.999524952Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Mar 10 01:01:48.999629 containerd[1595]: time="2026-03-10T01:01:48.999610281Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Mar 10 01:01:48.999833 containerd[1595]: time="2026-03-10T01:01:48.999809685Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Mar 10 01:01:48.999911 containerd[1595]: time="2026-03-10T01:01:48.999893882Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Mar 10 01:01:48.999979 containerd[1595]: time="2026-03-10T01:01:48.999961619Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Mar 10 01:01:49.001709 containerd[1595]: time="2026-03-10T01:01:49.001684525Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Mar 10 01:01:49.001918 containerd[1595]: time="2026-03-10T01:01:49.001889372Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Mar 10 01:01:49.001997 containerd[1595]: time="2026-03-10T01:01:49.001978708Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Mar 10 01:01:49.002552 containerd[1595]: time="2026-03-10T01:01:49.002525489Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Mar 10 01:01:49.002645 containerd[1595]: time="2026-03-10T01:01:49.002623270Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Mar 10 01:01:49.002854 containerd[1595]: time="2026-03-10T01:01:49.002823397Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Mar 10 01:01:49.002965 containerd[1595]: time="2026-03-10T01:01:49.002941944Z" level=info msg="NRI interface is disabled by configuration."
Mar 10 01:01:49.003314 containerd[1595]: time="2026-03-10T01:01:49.003026781Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..."
type=io.containerd.grpc.v1 Mar 10 01:01:49.006509 containerd[1595]: time="2026-03-10T01:01:49.006418964Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 
DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Mar 10 01:01:49.007419 systemd[1]: Started sshd@1-10.0.0.67:22-10.0.0.1:36722.service - OpenSSH per-connection server daemon (10.0.0.1:36722). Mar 10 01:01:49.040969 containerd[1595]: time="2026-03-10T01:01:49.027845740Z" level=info msg="Connect containerd service" Mar 10 01:01:49.040969 containerd[1595]: time="2026-03-10T01:01:49.028608777Z" level=info msg="using legacy CRI server" Mar 10 01:01:49.040969 containerd[1595]: time="2026-03-10T01:01:49.028637457Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 10 01:01:49.040969 containerd[1595]: time="2026-03-10T01:01:49.033694348Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Mar 10 01:01:49.046006 containerd[1595]: time="2026-03-10T01:01:49.045607128Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 10 01:01:49.050933 containerd[1595]: time="2026-03-10T01:01:49.046646938Z" level=info msg="Start subscribing containerd event" Mar 10 01:01:49.050933 containerd[1595]: time="2026-03-10T01:01:49.047008998Z" level=info msg="Start recovering state" Mar 10 01:01:49.063634 containerd[1595]: 
time="2026-03-10T01:01:49.062999974Z" level=info msg="Start event monitor" Mar 10 01:01:49.064681 containerd[1595]: time="2026-03-10T01:01:49.064650727Z" level=info msg="Start snapshots syncer" Mar 10 01:01:49.064893 containerd[1595]: time="2026-03-10T01:01:49.064866463Z" level=info msg="Start cni network conf syncer for default" Mar 10 01:01:49.064990 containerd[1595]: time="2026-03-10T01:01:49.064966988Z" level=info msg="Start streaming server" Mar 10 01:01:49.074634 containerd[1595]: time="2026-03-10T01:01:49.074603479Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 10 01:01:49.074796 containerd[1595]: time="2026-03-10T01:01:49.074774717Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 10 01:01:49.075959 systemd[1]: Started containerd.service - containerd container runtime. Mar 10 01:01:49.109701 containerd[1595]: time="2026-03-10T01:01:49.109650884Z" level=info msg="containerd successfully booted in 0.304298s" Mar 10 01:01:49.203981 sshd[1690]: Accepted publickey for core from 10.0.0.1 port 36722 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc Mar 10 01:01:49.208806 sshd[1690]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:01:49.232590 systemd-logind[1578]: New session 2 of user core. Mar 10 01:01:49.243486 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 10 01:01:49.347702 tar[1593]: linux-amd64/README.md Mar 10 01:01:49.394544 sshd[1690]: pam_unix(sshd:session): session closed for user core Mar 10 01:01:49.406634 systemd[1]: Started sshd@2-10.0.0.67:22-10.0.0.1:36728.service - OpenSSH per-connection server daemon (10.0.0.1:36728). Mar 10 01:01:49.431301 systemd[1]: sshd@1-10.0.0.67:22-10.0.0.1:36722.service: Deactivated successfully. Mar 10 01:01:49.436932 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 10 01:01:49.454427 systemd[1]: session-2.scope: Deactivated successfully. 
Mar 10 01:01:49.461746 systemd-logind[1578]: Session 2 logged out. Waiting for processes to exit.
Mar 10 01:01:49.475774 systemd-logind[1578]: Removed session 2.
Mar 10 01:01:49.549939 sshd[1700]: Accepted publickey for core from 10.0.0.1 port 36728 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:01:49.554531 sshd[1700]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:01:49.576473 systemd-logind[1578]: New session 3 of user core.
Mar 10 01:01:49.583596 systemd[1]: Started session-3.scope - Session 3 of User core.
Mar 10 01:01:49.727843 sshd[1700]: pam_unix(sshd:session): session closed for user core
Mar 10 01:01:49.737457 systemd[1]: sshd@2-10.0.0.67:22-10.0.0.1:36728.service: Deactivated successfully.
Mar 10 01:01:49.746777 systemd-logind[1578]: Session 3 logged out. Waiting for processes to exit.
Mar 10 01:01:49.750382 systemd[1]: session-3.scope: Deactivated successfully.
Mar 10 01:01:49.756681 systemd-logind[1578]: Removed session 3.
Mar 10 01:01:50.625055 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 10 01:01:50.632996 (kubelet)[1720]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 10 01:01:50.644652 systemd[1]: Reached target multi-user.target - Multi-User System.
Mar 10 01:01:50.663765 systemd[1]: Startup finished in 1min 20.757s (kernel) + 44.611s (userspace) = 2min 5.368s.
Mar 10 01:01:59.329785 kubelet[1720]: E0310 01:01:59.324901 1720 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 10 01:01:59.385934 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 10 01:01:59.386794 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 10 01:01:59.920937 systemd[1]: Started sshd@3-10.0.0.67:22-10.0.0.1:54214.service - OpenSSH per-connection server daemon (10.0.0.1:54214).
Mar 10 01:02:00.370981 sshd[1735]: Accepted publickey for core from 10.0.0.1 port 54214 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:02:00.388775 sshd[1735]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:02:00.521312 systemd-logind[1578]: New session 4 of user core.
Mar 10 01:02:00.538496 systemd[1]: Started session-4.scope - Session 4 of User core.
Mar 10 01:02:00.718261 sshd[1735]: pam_unix(sshd:session): session closed for user core
Mar 10 01:02:00.756830 systemd[1]: Started sshd@4-10.0.0.67:22-10.0.0.1:54218.service - OpenSSH per-connection server daemon (10.0.0.1:54218).
Mar 10 01:02:00.764799 systemd[1]: sshd@3-10.0.0.67:22-10.0.0.1:54214.service: Deactivated successfully.
Mar 10 01:02:00.774777 systemd-logind[1578]: Session 4 logged out. Waiting for processes to exit.
Mar 10 01:02:00.788505 systemd[1]: session-4.scope: Deactivated successfully.
Mar 10 01:02:00.791677 systemd-logind[1578]: Removed session 4.
Mar 10 01:02:00.906659 sshd[1740]: Accepted publickey for core from 10.0.0.1 port 54218 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:02:00.920269 sshd[1740]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:02:01.026435 systemd-logind[1578]: New session 5 of user core.
Mar 10 01:02:01.046656 systemd[1]: Started session-5.scope - Session 5 of User core.
Mar 10 01:02:01.171925 sshd[1740]: pam_unix(sshd:session): session closed for user core
Mar 10 01:02:01.197429 systemd[1]: Started sshd@5-10.0.0.67:22-10.0.0.1:54230.service - OpenSSH per-connection server daemon (10.0.0.1:54230).
Mar 10 01:02:01.198597 systemd[1]: sshd@4-10.0.0.67:22-10.0.0.1:54218.service: Deactivated successfully.
Mar 10 01:02:01.201907 systemd[1]: session-5.scope: Deactivated successfully.
Mar 10 01:02:01.205268 systemd-logind[1578]: Session 5 logged out. Waiting for processes to exit.
Mar 10 01:02:01.212714 systemd-logind[1578]: Removed session 5.
Mar 10 01:02:01.388565 sshd[1748]: Accepted publickey for core from 10.0.0.1 port 54230 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:02:01.394848 sshd[1748]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:02:01.468715 systemd-logind[1578]: New session 6 of user core.
Mar 10 01:02:01.511436 systemd[1]: Started session-6.scope - Session 6 of User core.
Mar 10 01:02:01.645883 sshd[1748]: pam_unix(sshd:session): session closed for user core
Mar 10 01:02:01.661608 systemd[1]: Started sshd@6-10.0.0.67:22-10.0.0.1:54242.service - OpenSSH per-connection server daemon (10.0.0.1:54242).
Mar 10 01:02:01.662769 systemd[1]: sshd@5-10.0.0.67:22-10.0.0.1:54230.service: Deactivated successfully.
Mar 10 01:02:01.690893 systemd[1]: session-6.scope: Deactivated successfully.
Mar 10 01:02:01.697370 systemd-logind[1578]: Session 6 logged out. Waiting for processes to exit.
Mar 10 01:02:01.704807 systemd-logind[1578]: Removed session 6.
Mar 10 01:02:01.776623 sshd[1756]: Accepted publickey for core from 10.0.0.1 port 54242 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:02:01.785033 sshd[1756]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:02:01.845705 systemd-logind[1578]: New session 7 of user core.
Mar 10 01:02:01.877471 systemd[1]: Started session-7.scope - Session 7 of User core.
Mar 10 01:02:02.271717 sudo[1763]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Mar 10 01:02:02.274043 sudo[1763]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 10 01:02:02.360587 sudo[1763]: pam_unix(sudo:session): session closed for user root
Mar 10 01:02:02.386328 sshd[1756]: pam_unix(sshd:session): session closed for user core
Mar 10 01:02:02.409403 systemd[1]: Started sshd@7-10.0.0.67:22-10.0.0.1:37772.service - OpenSSH per-connection server daemon (10.0.0.1:37772).
Mar 10 01:02:02.411598 systemd[1]: sshd@6-10.0.0.67:22-10.0.0.1:54242.service: Deactivated successfully.
Mar 10 01:02:02.427546 systemd[1]: session-7.scope: Deactivated successfully.
Mar 10 01:02:02.444276 systemd-logind[1578]: Session 7 logged out. Waiting for processes to exit.
Mar 10 01:02:02.459528 systemd-logind[1578]: Removed session 7.
Mar 10 01:02:02.639271 sshd[1765]: Accepted publickey for core from 10.0.0.1 port 37772 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:02:02.643536 sshd[1765]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:02:02.692386 systemd-logind[1578]: New session 8 of user core.
Mar 10 01:02:02.701043 systemd[1]: Started session-8.scope - Session 8 of User core.
Mar 10 01:02:02.927587 sudo[1773]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Mar 10 01:02:02.937305 sudo[1773]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 10 01:02:02.962632 sudo[1773]: pam_unix(sudo:session): session closed for user root
Mar 10 01:02:02.988790 sudo[1772]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Mar 10 01:02:02.989738 sudo[1772]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 10 01:02:03.085447 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Mar 10 01:02:03.103714 auditctl[1776]: No rules
Mar 10 01:02:03.107537 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 10 01:02:03.109443 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Mar 10 01:02:03.119743 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Mar 10 01:02:03.454501 augenrules[1795]: No rules
Mar 10 01:02:03.461334 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Mar 10 01:02:03.464625 sudo[1772]: pam_unix(sudo:session): session closed for user root
Mar 10 01:02:03.473754 sshd[1765]: pam_unix(sshd:session): session closed for user core
Mar 10 01:02:03.494955 systemd[1]: Started sshd@8-10.0.0.67:22-10.0.0.1:37778.service - OpenSSH per-connection server daemon (10.0.0.1:37778).
Mar 10 01:02:03.501426 systemd[1]: sshd@7-10.0.0.67:22-10.0.0.1:37772.service: Deactivated successfully.
Mar 10 01:02:03.529420 systemd[1]: session-8.scope: Deactivated successfully.
Mar 10 01:02:03.535470 systemd-logind[1578]: Session 8 logged out. Waiting for processes to exit.
Mar 10 01:02:03.556677 systemd-logind[1578]: Removed session 8.
Mar 10 01:02:03.632967 sshd[1801]: Accepted publickey for core from 10.0.0.1 port 37778 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:02:03.662716 sshd[1801]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:02:03.751036 systemd-logind[1578]: New session 9 of user core.
Mar 10 01:02:03.764589 systemd[1]: Started session-9.scope - Session 9 of User core.
Mar 10 01:02:03.874571 sudo[1808]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Mar 10 01:02:03.875541 sudo[1808]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 10 01:02:09.510546 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Mar 10 01:02:09.554457 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 10 01:02:13.012866 systemd[1]: Starting docker.service - Docker Application Container Engine...
Mar 10 01:02:13.030740 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 10 01:02:13.073052 (kubelet)[1839]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 10 01:02:13.076926 (dockerd)[1837]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Mar 10 01:02:15.897016 kubelet[1839]: E0310 01:02:15.887824 1839 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 10 01:02:15.920337 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 10 01:02:15.921052 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 10 01:02:22.627383 dockerd[1837]: time="2026-03-10T01:02:22.620982935Z" level=info msg="Starting up"
Mar 10 01:02:24.289043 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1093496921-merged.mount: Deactivated successfully.
Mar 10 01:02:24.372987 systemd[1]: var-lib-docker-metacopy\x2dcheck718860723-merged.mount: Deactivated successfully.
Mar 10 01:02:24.596703 dockerd[1837]: time="2026-03-10T01:02:24.590734673Z" level=info msg="Loading containers: start."
Mar 10 01:02:26.272810 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Mar 10 01:02:26.610012 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 10 01:02:27.311610 update_engine[1583]: I20260310 01:02:27.305891 1583 update_attempter.cc:509] Updating boot flags...
Mar 10 01:02:27.702683 kernel: Initializing XFRM netlink socket
Mar 10 01:02:28.607027 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1927)
Mar 10 01:02:29.316842 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1925)
Mar 10 01:02:29.324462 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 10 01:02:29.381037 (kubelet)[1948]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 10 01:02:30.838561 systemd-networkd[1247]: docker0: Link UP
Mar 10 01:02:31.124499 dockerd[1837]: time="2026-03-10T01:02:31.123652398Z" level=info msg="Loading containers: done."
Mar 10 01:02:31.131605 kubelet[1948]: E0310 01:02:31.130944 1948 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 10 01:02:31.144726 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 10 01:02:31.145052 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 10 01:02:32.595750 dockerd[1837]: time="2026-03-10T01:02:32.591804441Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Mar 10 01:02:32.617840 dockerd[1837]: time="2026-03-10T01:02:32.604931886Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Mar 10 01:02:32.617840 dockerd[1837]: time="2026-03-10T01:02:32.606973413Z" level=info msg="Daemon has completed initialization"
Mar 10 01:02:34.448992 dockerd[1837]: time="2026-03-10T01:02:34.447880582Z" level=info msg="API listen on /run/docker.sock"
Mar 10 01:02:34.476457 systemd[1]: Started docker.service - Docker Application Container Engine.
Mar 10 01:02:41.378523 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Mar 10 01:02:41.438871 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 10 01:02:44.522810 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 10 01:02:44.568792 (kubelet)[2040]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 10 01:02:44.668534 containerd[1595]: time="2026-03-10T01:02:44.665064249Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\""
Mar 10 01:02:45.844026 kubelet[2040]: E0310 01:02:45.826479 2040 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 10 01:02:45.859610 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 10 01:02:45.860031 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 10 01:02:47.819043 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2868316386.mount: Deactivated successfully.
Mar 10 01:02:56.020418 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Mar 10 01:02:56.093511 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 10 01:02:57.845555 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 10 01:02:57.887661 (kubelet)[2120]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 10 01:02:59.036543 kubelet[2120]: E0310 01:02:59.033058 2120 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 10 01:02:59.044730 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 10 01:02:59.046613 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 10 01:03:08.218465 containerd[1595]: time="2026-03-10T01:03:08.217788456Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.9: active requests=0, bytes read=30116186"
Mar 10 01:03:08.226955 containerd[1595]: time="2026-03-10T01:03:08.215556183Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 01:03:08.231947 containerd[1595]: time="2026-03-10T01:03:08.231891309Z" level=info msg="ImageCreate event name:\"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 01:03:08.257752 containerd[1595]: time="2026-03-10T01:03:08.257510953Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 01:03:08.267215 containerd[1595]: time="2026-03-10T01:03:08.265808287Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.9\" with image id \"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\", size \"30112785\" in 23.596229184s"
Mar 10 01:03:08.267786 containerd[1595]: time="2026-03-10T01:03:08.267409658Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\" returns image reference \"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\""
Mar 10 01:03:08.306411 containerd[1595]: time="2026-03-10T01:03:08.305972126Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\""
Mar 10 01:03:09.255011 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Mar 10 01:03:09.287912 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 10 01:03:12.296584 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 10 01:03:12.811426 (kubelet)[2147]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 10 01:03:15.231708 kubelet[2147]: E0310 01:03:15.229572 2147 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 10 01:03:15.237669 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 10 01:03:15.238550 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 10 01:03:25.287520 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
Mar 10 01:03:25.421854 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 10 01:03:27.787805 containerd[1595]: time="2026-03-10T01:03:27.780438647Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.9: active requests=0, bytes read=26021810"
Mar 10 01:03:27.787805 containerd[1595]: time="2026-03-10T01:03:27.787455051Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 01:03:27.904867 containerd[1595]: time="2026-03-10T01:03:27.903752478Z" level=info msg="ImageCreate event name:\"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 01:03:28.084459 containerd[1595]: time="2026-03-10T01:03:28.083366929Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 01:03:28.084706 containerd[1595]: time="2026-03-10T01:03:28.083793496Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.9\" with image id \"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\", size \"27678758\" in 19.777112371s"
Mar 10 01:03:28.084706 containerd[1595]: time="2026-03-10T01:03:28.084505793Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\" returns image reference \"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\""
Mar 10 01:03:28.111316 containerd[1595]: time="2026-03-10T01:03:28.109395192Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\""
Mar 10 01:03:28.660012 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 10 01:03:28.706794 (kubelet)[2166]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 10 01:03:29.718816 kubelet[2166]: E0310 01:03:29.716909 2166 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 10 01:03:29.730038 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 10 01:03:29.730783 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 10 01:03:39.345689 containerd[1595]: time="2026-03-10T01:03:39.344656216Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 01:03:39.349704 containerd[1595]: time="2026-03-10T01:03:39.348981590Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.9: active requests=0, bytes read=20162746"
Mar 10 01:03:39.352904 containerd[1595]: time="2026-03-10T01:03:39.352856963Z" level=info msg="ImageCreate event name:\"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 01:03:39.366892 containerd[1595]: time="2026-03-10T01:03:39.366268350Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 01:03:39.370678 containerd[1595]: time="2026-03-10T01:03:39.370637752Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.9\" with image id \"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\", size \"21819712\" in 11.261156378s"
Mar 10 01:03:39.372056 containerd[1595]: time="2026-03-10T01:03:39.371593214Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\" returns image reference \"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\""
Mar 10 01:03:39.456500 containerd[1595]: time="2026-03-10T01:03:39.455556818Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\""
Mar 10 01:03:39.764756 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7.
Mar 10 01:03:39.798852 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 10 01:03:46.324053 update_engine[1583]: I20260310 01:03:46.319031 1583 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Mar 10 01:03:46.324053 update_engine[1583]: I20260310 01:03:46.320708 1583 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Mar 10 01:03:46.529601 update_engine[1583]: I20260310 01:03:46.390944 1583 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Mar 10 01:03:48.635402 update_engine[1583]: I20260310 01:03:48.321993 1583 omaha_request_params.cc:62] Current group set to lts
Mar 10 01:03:48.635402 update_engine[1583]: I20260310 01:03:48.888844 1583 update_attempter.cc:499] Already updated boot flags. Skipping.
Mar 10 01:03:48.635402 update_engine[1583]: I20260310 01:03:48.889269 1583 update_attempter.cc:643] Scheduling an action processor start.
Mar 10 01:03:48.635402 update_engine[1583]: I20260310 01:03:48.889302 1583 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Mar 10 01:03:48.635402 update_engine[1583]: I20260310 01:03:48.890810 1583 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Mar 10 01:03:48.635402 update_engine[1583]: I20260310 01:03:48.891446 1583 omaha_request_action.cc:271] Posting an Omaha request to disabled
Mar 10 01:03:48.635402 update_engine[1583]: I20260310 01:03:48.895674 1583 omaha_request_action.cc:272] Request:
Mar 10 01:03:48.635402 update_engine[1583]:
Mar 10 01:03:48.635402 update_engine[1583]:
Mar 10 01:03:48.635402 update_engine[1583]:
Mar 10 01:03:48.635402 update_engine[1583]:
Mar 10 01:03:48.635402 update_engine[1583]:
Mar 10 01:03:48.635402 update_engine[1583]:
Mar 10 01:03:48.635402 update_engine[1583]:
Mar 10 01:03:48.635402 update_engine[1583]:
Mar 10 01:03:48.635402 update_engine[1583]: I20260310 01:03:48.895697 1583 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Mar 10 01:03:49.373890 update_engine[1583]: I20260310 01:03:49.370301 1583 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Mar 10 01:03:49.481636 update_engine[1583]: I20260310 01:03:49.401376 1583 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Mar 10 01:03:50.615427 update_engine[1583]: E20260310 01:03:50.590905 1583 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Mar 10 01:03:50.705821 update_engine[1583]: I20260310 01:03:50.669485 1583 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Mar 10 01:03:51.183848 locksmithd[1626]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Mar 10 01:03:52.502878 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 10 01:03:52.713846 (kubelet)[2198]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 10 01:03:54.883338 kubelet[2198]: E0310 01:03:54.882840 2198 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 10 01:03:54.892393 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 10 01:03:54.892953 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 10 01:04:00.865395 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount591682691.mount: Deactivated successfully.
Mar 10 01:04:01.309620 update_engine[1583]: I20260310 01:04:01.308597 1583 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Mar 10 01:04:01.316423 update_engine[1583]: I20260310 01:04:01.315933 1583 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Mar 10 01:04:01.318869 update_engine[1583]: I20260310 01:04:01.318794 1583 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Mar 10 01:04:01.349557 update_engine[1583]: E20260310 01:04:01.348610 1583 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Mar 10 01:04:01.349557 update_engine[1583]: I20260310 01:04:01.348968 1583 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Mar 10 01:04:05.018945 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8.
Mar 10 01:04:05.116350 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 10 01:04:11.319698 update_engine[1583]: I20260310 01:04:11.309727 1583 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Mar 10 01:04:11.348651 update_engine[1583]: I20260310 01:04:11.320396 1583 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Mar 10 01:04:11.348651 update_engine[1583]: I20260310 01:04:11.348016 1583 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Mar 10 01:04:11.364787 update_engine[1583]: E20260310 01:04:11.363648 1583 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Mar 10 01:04:11.364787 update_engine[1583]: I20260310 01:04:11.364734 1583 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Mar 10 01:04:14.629610 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 10 01:04:14.750633 (kubelet)[2225]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 10 01:04:19.361759 kubelet[2225]: E0310 01:04:19.360896 2225 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 10 01:04:19.380587 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 10 01:04:19.381757 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 10 01:04:20.300517 containerd[1595]: time="2026-03-10T01:04:20.299346170Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 01:04:20.305713 containerd[1595]: time="2026-03-10T01:04:20.303908882Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.9: active requests=0, bytes read=31828647"
Mar 10 01:04:20.316338 containerd[1595]: time="2026-03-10T01:04:20.315549012Z" level=info msg="ImageCreate event name:\"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 01:04:20.379857 containerd[1595]: time="2026-03-10T01:04:20.377511900Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 01:04:20.460715 containerd[1595]: time="2026-03-10T01:04:20.458594823Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.9\" with image id \"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\", repo tag \"registry.k8s.io/kube-proxy:v1.33.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\", size \"31827666\" in 41.002733311s"
Mar 10 01:04:20.460715 containerd[1595]: time="2026-03-10T01:04:20.459887771Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\" returns image reference \"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\""
Mar 10 01:04:20.563901 containerd[1595]: time="2026-03-10T01:04:20.549878466Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Mar 10 01:04:21.318587 update_engine[1583]: I20260310 01:04:21.311729 1583 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Mar 10 01:04:21.318587 update_engine[1583]: I20260310 01:04:21.317804 1583 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Mar 10 01:04:21.326596 update_engine[1583]: I20260310 01:04:21.325363 1583 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Mar 10 01:04:21.427626 update_engine[1583]: E20260310 01:04:21.410611 1583 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Mar 10 01:04:21.427626 update_engine[1583]: I20260310 01:04:21.426757 1583 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Mar 10 01:04:21.427626 update_engine[1583]: I20260310 01:04:21.426780 1583 omaha_request_action.cc:617] Omaha request response:
Mar 10 01:04:21.427626 update_engine[1583]: E20260310 01:04:21.428318 1583 omaha_request_action.cc:636] Omaha request network transfer failed.
Mar 10 01:04:21.456466 update_engine[1583]: I20260310 01:04:21.433679 1583 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Mar 10 01:04:21.456466 update_engine[1583]: I20260310 01:04:21.433849 1583 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Mar 10 01:04:21.456466 update_engine[1583]: I20260310 01:04:21.433866 1583 update_attempter.cc:306] Processing Done.
Mar 10 01:04:21.456466 update_engine[1583]: E20260310 01:04:21.433889 1583 update_attempter.cc:619] Update failed.
Mar 10 01:04:21.456466 update_engine[1583]: I20260310 01:04:21.433900 1583 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Mar 10 01:04:21.456466 update_engine[1583]: I20260310 01:04:21.433912 1583 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Mar 10 01:04:21.456466 update_engine[1583]: I20260310 01:04:21.433923 1583 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Mar 10 01:04:21.456466 update_engine[1583]: I20260310 01:04:21.434451 1583 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Mar 10 01:04:21.456466 update_engine[1583]: I20260310 01:04:21.434549 1583 omaha_request_action.cc:271] Posting an Omaha request to disabled
Mar 10 01:04:21.456466 update_engine[1583]: I20260310 01:04:21.434567 1583 omaha_request_action.cc:272] Request:
Mar 10 01:04:21.456466 update_engine[1583]:
Mar 10 01:04:21.456466 update_engine[1583]:
Mar 10 01:04:21.456466 update_engine[1583]:
Mar 10 01:04:21.456466 update_engine[1583]:
Mar 10 01:04:21.456466 update_engine[1583]:
Mar 10 01:04:21.456466 update_engine[1583]:
Mar 10 01:04:21.456466 update_engine[1583]: I20260310 01:04:21.434583 1583 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Mar 10 01:04:21.486500 update_engine[1583]: I20260310 01:04:21.462457 1583 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Mar 10 01:04:21.486500 update_engine[1583]: I20260310 01:04:21.472927 1583 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Mar 10 01:04:21.486575 locksmithd[1626]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Mar 10 01:04:21.514772 update_engine[1583]: E20260310 01:04:21.511315 1583 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Mar 10 01:04:21.514772 update_engine[1583]: I20260310 01:04:21.514596 1583 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Mar 10 01:04:21.514772 update_engine[1583]: I20260310 01:04:21.514623 1583 omaha_request_action.cc:617] Omaha request response:
Mar 10 01:04:21.514772 update_engine[1583]: I20260310 01:04:21.514642 1583 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Mar 10 01:04:21.514772 update_engine[1583]: I20260310 01:04:21.514652 1583 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Mar 10 01:04:21.514772 update_engine[1583]: I20260310 01:04:21.514660 1583 update_attempter.cc:306] Processing Done.
Mar 10 01:04:21.514772 update_engine[1583]: I20260310 01:04:21.514672 1583 update_attempter.cc:310] Error event sent.
Mar 10 01:04:21.518684 update_engine[1583]: I20260310 01:04:21.514901 1583 update_check_scheduler.cc:74] Next update check in 46m41s
Mar 10 01:04:21.583485 locksmithd[1626]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Mar 10 01:04:23.916765 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2347036157.mount: Deactivated successfully.
Mar 10 01:04:29.514922 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9.
Mar 10 01:04:29.572722 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 10 01:04:32.088649 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 10 01:04:32.221674 (kubelet)[2298]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 10 01:04:34.426557 kubelet[2298]: E0310 01:04:34.424621 2298 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 10 01:04:34.437745 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 10 01:04:34.443459 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 10 01:04:40.291903 containerd[1595]: time="2026-03-10T01:04:40.290868543Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 01:04:40.323868 containerd[1595]: time="2026-03-10T01:04:40.313863429Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238"
Mar 10 01:04:40.335049 containerd[1595]: time="2026-03-10T01:04:40.331866119Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 01:04:40.366568 containerd[1595]: time="2026-03-10T01:04:40.362916245Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 01:04:40.393840 containerd[1595]: time="2026-03-10T01:04:40.388826886Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 19.838154359s"
Mar 10 01:04:40.393840 containerd[1595]: time="2026-03-10T01:04:40.391850281Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\""
Mar 10 01:04:40.413875 containerd[1595]: time="2026-03-10T01:04:40.413691652Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Mar 10 01:04:42.484856 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4098997188.mount: Deactivated successfully.
Mar 10 01:04:42.532895 containerd[1595]: time="2026-03-10T01:04:42.529717020Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 01:04:42.544729 containerd[1595]: time="2026-03-10T01:04:42.544651056Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Mar 10 01:04:42.552033 containerd[1595]: time="2026-03-10T01:04:42.551964561Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 01:04:42.572464 containerd[1595]: time="2026-03-10T01:04:42.571778383Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 01:04:42.574929 containerd[1595]: time="2026-03-10T01:04:42.574892609Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 2.16111243s"
Mar 10 01:04:42.579506 containerd[1595]: time="2026-03-10T01:04:42.577973218Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Mar 10 01:04:42.662810 containerd[1595]: time="2026-03-10T01:04:42.659707534Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\""
Mar 10 01:04:44.519611 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10.
Mar 10 01:04:44.587761 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 10 01:04:44.927710 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3152569270.mount: Deactivated successfully.
Mar 10 01:04:47.705056 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 10 01:04:48.051523 (kubelet)[2338]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 10 01:04:48.987739 kubelet[2338]: E0310 01:04:48.987675 2338 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 10 01:04:49.003773 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 10 01:04:49.010699 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 10 01:04:59.597446 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11.
Mar 10 01:04:59.701690 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 10 01:05:02.010878 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 10 01:05:02.355485 (kubelet)[2400]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 10 01:05:04.318626 kubelet[2400]: E0310 01:05:04.317901 2400 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 10 01:05:04.354804 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 10 01:05:04.368819 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 10 01:05:11.510763 containerd[1595]: time="2026-03-10T01:05:11.505853388Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 01:05:11.520905 containerd[1595]: time="2026-03-10T01:05:11.518776328Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=23718840"
Mar 10 01:05:11.555658 containerd[1595]: time="2026-03-10T01:05:11.552809449Z" level=info msg="ImageCreate event name:\"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 01:05:11.710780 containerd[1595]: time="2026-03-10T01:05:11.709940742Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 01:05:11.753835 containerd[1595]: time="2026-03-10T01:05:11.742901515Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"23716032\" in 29.082196822s"
Mar 10 01:05:11.753835 containerd[1595]: time="2026-03-10T01:05:11.742974952Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\""
Mar 10 01:05:14.507970 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12.
Mar 10 01:05:14.545305 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 10 01:05:15.695564 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 10 01:05:15.701033 (kubelet)[2457]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 10 01:05:16.119711 kubelet[2457]: E0310 01:05:16.118620 2457 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 10 01:05:16.127510 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 10 01:05:16.132588 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 10 01:05:26.260615 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 13.
Mar 10 01:05:26.283858 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 10 01:05:26.885855 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 10 01:05:26.901539 (kubelet)[2479]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 10 01:05:27.189474 kubelet[2479]: E0310 01:05:27.188326 2479 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 10 01:05:27.195530 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 10 01:05:27.195946 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 10 01:05:29.300688 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 10 01:05:29.322623 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 10 01:05:29.472564 systemd[1]: Reloading requested from client PID 2498 ('systemctl') (unit session-9.scope)...
Mar 10 01:05:29.472675 systemd[1]: Reloading...
Mar 10 01:05:29.679333 zram_generator::config[2534]: No configuration found.
Mar 10 01:05:30.164516 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 10 01:05:30.454855 systemd[1]: Reloading finished in 981 ms.
Mar 10 01:05:30.630733 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Mar 10 01:05:30.631047 systemd[1]: kubelet.service: Failed with result 'signal'.
Mar 10 01:05:30.631900 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 10 01:05:30.658585 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 10 01:05:31.387692 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 10 01:05:31.453044 (kubelet)[2597]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 10 01:05:32.029789 kubelet[2597]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 10 01:05:32.029789 kubelet[2597]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Mar 10 01:05:32.043409 kubelet[2597]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 10 01:05:32.043409 kubelet[2597]: I0310 01:05:32.032796 2597 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 10 01:05:33.835486 kubelet[2597]: I0310 01:05:33.832058 2597 server.go:530] "Kubelet version" kubeletVersion="v1.33.8"
Mar 10 01:05:33.835486 kubelet[2597]: I0310 01:05:33.833598 2597 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 10 01:05:33.849528 kubelet[2597]: I0310 01:05:33.845715 2597 server.go:956] "Client rotation is on, will bootstrap in background"
Mar 10 01:05:34.645445 kubelet[2597]: E0310 01:05:34.645375 2597 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.67:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.67:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Mar 10 01:05:34.682489 kubelet[2597]: I0310 01:05:34.681867 2597 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 10 01:05:34.718778 kubelet[2597]: E0310 01:05:34.718560 2597 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Mar 10 01:05:34.718778 kubelet[2597]: I0310 01:05:34.718680 2597 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Mar 10 01:05:34.773971 kubelet[2597]: I0310 01:05:34.773849 2597 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Mar 10 01:05:34.779625 kubelet[2597]: I0310 01:05:34.777885 2597 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 10 01:05:34.782399 kubelet[2597]: I0310 01:05:34.780339 2597 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1}
Mar 10 01:05:34.782399 kubelet[2597]: I0310 01:05:34.780928 2597 topology_manager.go:138] "Creating topology manager with none policy"
Mar 10 01:05:34.782399 kubelet[2597]: I0310 01:05:34.780947 2597 container_manager_linux.go:303] "Creating device plugin manager"
Mar 10 01:05:34.782399 kubelet[2597]: I0310 01:05:34.781543 2597 state_mem.go:36] "Initialized new in-memory state store"
Mar 10 01:05:34.805886 kubelet[2597]: I0310 01:05:34.804684 2597 kubelet.go:480] "Attempting to sync node with API server"
Mar 10 01:05:34.805886 kubelet[2597]: I0310 01:05:34.804740 2597 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 10 01:05:34.805886 kubelet[2597]: I0310 01:05:34.804783 2597 kubelet.go:386] "Adding apiserver pod source"
Mar 10 01:05:34.811365 kubelet[2597]: I0310 01:05:34.808649 2597 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 10 01:05:34.871516 kubelet[2597]: E0310 01:05:34.869845 2597 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.67:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.67:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Mar 10 01:05:34.879572 kubelet[2597]: E0310 01:05:34.873580 2597 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.67:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.67:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Mar 10 01:05:34.913860 kubelet[2597]: I0310 01:05:34.908874 2597 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Mar 10 01:05:34.927385 kubelet[2597]: I0310 01:05:34.920799 2597 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Mar 10 01:05:34.935703 kubelet[2597]: W0310 01:05:34.931493 2597 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Mar 10 01:05:35.002654 kubelet[2597]: I0310 01:05:34.997471 2597 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Mar 10 01:05:35.009955 kubelet[2597]: I0310 01:05:35.005455 2597 server.go:1289] "Started kubelet"
Mar 10 01:05:35.009955 kubelet[2597]: I0310 01:05:35.005727 2597 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Mar 10 01:05:35.009955 kubelet[2597]: I0310 01:05:35.007450 2597 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 10 01:05:35.017379 kubelet[2597]: I0310 01:05:35.016912 2597 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 10 01:05:35.019803 kubelet[2597]: I0310 01:05:35.019652 2597 server.go:317] "Adding debug handlers to kubelet server"
Mar 10 01:05:35.028820 kubelet[2597]: E0310 01:05:35.021961 2597 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.67:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.67:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.189b555cb716f034 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-10 01:05:34.997508148 +0000 UTC m=+3.506885486,LastTimestamp:2026-03-10 01:05:34.997508148 +0000 UTC m=+3.506885486,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Mar 10 01:05:35.043849 kubelet[2597]: E0310 01:05:35.038047 2597 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 10 01:05:35.059009 kubelet[2597]: I0310 01:05:35.057884 2597 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 10 01:05:35.059009 kubelet[2597]: I0310 01:05:35.058446 2597 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 10 01:05:35.061546 kubelet[2597]: I0310 01:05:35.060520 2597 volume_manager.go:297] "Starting Kubelet Volume Manager"
Mar 10 01:05:35.070312 kubelet[2597]: E0310 01:05:35.063834 2597 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 10 01:05:35.070312 kubelet[2597]: I0310 01:05:35.064990 2597 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Mar 10 01:05:35.071334 kubelet[2597]: I0310 01:05:35.071313 2597 reconciler.go:26] "Reconciler: start to sync state"
Mar 10 01:05:35.072386 kubelet[2597]: E0310 01:05:35.071960 2597 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.67:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.67:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Mar 10 01:05:35.076912 kubelet[2597]: E0310 01:05:35.072999 2597 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.67:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.67:6443: connect: connection refused" interval="200ms"
Mar 10 01:05:35.081486 kubelet[2597]: I0310 01:05:35.080038 2597 factory.go:223] Registration of the systemd container factory successfully
Mar 10 01:05:35.081486 kubelet[2597]: I0310 01:05:35.080634 2597 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 10 01:05:35.094438 kubelet[2597]: I0310 01:05:35.091618 2597 factory.go:223] Registration of the containerd container factory successfully
Mar 10 01:05:35.213479 kubelet[2597]: E0310 01:05:35.207749 2597 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 10 01:05:35.312612 kubelet[2597]: E0310 01:05:35.311780 2597 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.67:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.67:6443: connect: connection refused" interval="400ms"
Mar 10 01:05:35.316779 kubelet[2597]: E0310 01:05:35.316733 2597 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 10 01:05:35.418815 kubelet[2597]: E0310 01:05:35.418657 2597 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 10 01:05:35.469628 kubelet[2597]: I0310 01:05:35.467867 2597 cpu_manager.go:221] "Starting CPU manager" policy="none"
Mar 10 01:05:35.469628 kubelet[2597]: I0310 01:05:35.467966 2597 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Mar 10 01:05:35.469628 kubelet[2597]: I0310 01:05:35.467998 2597 state_mem.go:36] "Initialized new in-memory state store"
Mar 10 01:05:35.499513 kubelet[2597]: I0310 01:05:35.494054 2597 policy_none.go:49] "None policy: Start"
Mar 10 01:05:35.499513 kubelet[2597]: I0310 01:05:35.494806 2597 memory_manager.go:186] "Starting memorymanager" policy="None"
Mar 10 01:05:35.499513 kubelet[2597]: I0310 01:05:35.494977 2597 state_mem.go:35] "Initializing new in-memory state store"
Mar 10 01:05:35.519327 kubelet[2597]: E0310 01:05:35.518965 2597 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 10 01:05:35.539041 kubelet[2597]: E0310 01:05:35.538987 2597 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Mar 10 01:05:35.561859 kubelet[2597]: I0310 01:05:35.561528 2597 eviction_manager.go:189] "Eviction manager: starting control loop"
Mar 10 01:05:35.561859 kubelet[2597]: I0310 01:05:35.561568 2597 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 10 01:05:35.564408 kubelet[2597]: I0310 01:05:35.563816 2597 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 10 01:05:35.582576 kubelet[2597]: E0310 01:05:35.569594 2597 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Mar 10 01:05:35.582576 kubelet[2597]: E0310 01:05:35.570303 2597 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Mar 10 01:05:35.594462 kubelet[2597]: I0310 01:05:35.593825 2597 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Mar 10 01:05:35.607668 kubelet[2597]: I0310 01:05:35.606844 2597 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Mar 10 01:05:35.607668 kubelet[2597]: I0310 01:05:35.607029 2597 status_manager.go:230] "Starting to sync pod status with apiserver"
Mar 10 01:05:35.612355 kubelet[2597]: I0310 01:05:35.608432 2597 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Mar 10 01:05:35.612355 kubelet[2597]: I0310 01:05:35.608614 2597 kubelet.go:2436] "Starting kubelet main sync loop"
Mar 10 01:05:35.612355 kubelet[2597]: E0310 01:05:35.608688 2597 kubelet.go:2460] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Mar 10 01:05:35.613547 kubelet[2597]: E0310 01:05:35.613491 2597 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.67:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.67:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Mar 10 01:05:35.782665 kubelet[2597]: E0310 01:05:35.761356 2597 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.67:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.67:6443: connect: connection refused" interval="800ms"
Mar 10 01:05:35.795040 kubelet[2597]: E0310 01:05:35.792850 2597 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.67:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.67:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Mar 10 01:05:35.828987 kubelet[2597]: I0310 01:05:35.826673 2597 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Mar 10 01:05:35.870715 kubelet[2597]: E0310 01:05:35.869700 2597 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.67:6443/api/v1/nodes\": dial tcp 10.0.0.67:6443: connect: connection refused" node="localhost"
Mar 10 01:05:35.874448 kubelet[2597]: I0310 01:05:35.873465 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3e18b7dc583c7c86532322e8b716630c-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"3e18b7dc583c7c86532322e8b716630c\") " pod="kube-system/kube-apiserver-localhost"
Mar 10 01:05:35.874448 kubelet[2597]: I0310 01:05:35.873538 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3e18b7dc583c7c86532322e8b716630c-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"3e18b7dc583c7c86532322e8b716630c\") " pod="kube-system/kube-apiserver-localhost"
Mar 10 01:05:35.874448 kubelet[2597]: I0310 01:05:35.873673 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3e18b7dc583c7c86532322e8b716630c-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"3e18b7dc583c7c86532322e8b716630c\") " pod="kube-system/kube-apiserver-localhost"
Mar 10 01:05:35.897368 kubelet[2597]: E0310 01:05:35.896501 2597 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 10 01:05:35.900358 kubelet[2597]: E0310 01:05:35.900003 2597 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 10 01:05:35.932041 kubelet[2597]: E0310 01:05:35.930820 2597 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.67:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.67:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Mar 10 01:05:35.941524 kubelet[2597]: E0310 01:05:35.940724 2597 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 10 01:05:35.976613 kubelet[2597]: I0310 01:05:35.975894 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost"
Mar 10 01:05:35.976613 kubelet[2597]: I0310 01:05:35.976343 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e944e4cb17af904786c3a2e01e298498-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"e944e4cb17af904786c3a2e01e298498\") " pod="kube-system/kube-scheduler-localhost"
Mar 10 01:05:35.976613 kubelet[2597]: I0310 01:05:35.976374 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost"
Mar 10 01:05:35.976613 kubelet[2597]: I0310 01:05:35.976395 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost"
Mar 10 01:05:35.976613 kubelet[2597]: I0310 01:05:35.976414 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost"
Mar 10 01:05:35.977043 kubelet[2597]: I0310 01:05:35.976573 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost"
Mar 10 01:05:36.078528 kubelet[2597]: I0310 01:05:36.075485 2597 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Mar 10 01:05:36.078528 kubelet[2597]: E0310 01:05:36.076573 2597 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.67:6443/api/v1/nodes\": dial tcp 10.0.0.67:6443: connect: connection refused" node="localhost"
Mar 10 01:05:36.216694 kubelet[2597]: E0310 01:05:36.216027 2597 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.67:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.67:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Mar 10 01:05:36.221879 kubelet[2597]: E0310 01:05:36.217026 2597 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:05:36.221879 kubelet[2597]: E0310 01:05:36.220468 2597 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:05:36.235433 containerd[1595]: time="2026-03-10T01:05:36.234842096Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:3e18b7dc583c7c86532322e8b716630c,Namespace:kube-system,Attempt:0,}"
Mar 10 01:05:36.246870 containerd[1595]: time="2026-03-10T01:05:36.232818947Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8747e1f8a49a618fbc1324a8fe2d3754,Namespace:kube-system,Attempt:0,}"
Mar 10 01:05:36.246936 kubelet[2597]: E0310 01:05:36.242456 2597 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:05:36.247673 containerd[1595]: time="2026-03-10T01:05:36.247536622Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:e944e4cb17af904786c3a2e01e298498,Namespace:kube-system,Attempt:0,}"
Mar 10 01:05:36.520533 kubelet[2597]: I0310 01:05:36.517807 2597 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Mar 10 01:05:36.521396 kubelet[2597]: E0310 01:05:36.521354 2597 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.67:6443/api/v1/nodes\": dial tcp 10.0.0.67:6443: connect: connection refused" node="localhost"
Mar 10 01:05:36.573558 kubelet[2597]: E0310 01:05:36.570009 2597 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.67:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.67:6443: connect: connection refused" interval="1.6s"
Mar 10 01:05:36.773808 kubelet[2597]: E0310 01:05:36.772875 2597 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.67:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.67:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Mar 10 01:05:37.032441 kubelet[2597]: E0310 01:05:37.030769 2597 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.67:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.67:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Mar 10 01:05:37.393948 kubelet[2597]: I0310 01:05:37.382668 2597 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Mar 10 01:05:37.422508 kubelet[2597]: E0310 01:05:37.418546 2597 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.67:6443/api/v1/nodes\": dial tcp 10.0.0.67:6443: connect: connection refused" node="localhost"
Mar 10 01:05:37.554857 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount734197561.mount: Deactivated successfully.
Mar 10 01:05:37.596575 containerd[1595]: time="2026-03-10T01:05:37.592781922Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 10 01:05:37.660372 containerd[1595]: time="2026-03-10T01:05:37.630544167Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Mar 10 01:05:37.672629 containerd[1595]: time="2026-03-10T01:05:37.670480657Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 10 01:05:37.680763 containerd[1595]: time="2026-03-10T01:05:37.680693422Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Mar 10 01:05:37.684518 containerd[1595]: time="2026-03-10T01:05:37.683714730Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 10 01:05:37.693346
containerd[1595]: time="2026-03-10T01:05:37.690726059Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Mar 10 01:05:37.693346 containerd[1595]: time="2026-03-10T01:05:37.691638757Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 10 01:05:37.742784 containerd[1595]: time="2026-03-10T01:05:37.742714967Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.495085601s"
Mar 10 01:05:37.749904 containerd[1595]: time="2026-03-10T01:05:37.746914918Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 10 01:05:37.749904 containerd[1595]: time="2026-03-10T01:05:37.748995401Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.511300516s"
Mar 10 01:05:37.756665 containerd[1595]: time="2026-03-10T01:05:37.754655902Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.516945794s"
Mar 10 01:05:38.175051 kubelet[2597]: E0310 01:05:38.174596 2597 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.67:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.67:6443: connect: connection refused" interval="3.2s"
Mar 10 01:05:38.306476 kubelet[2597]: E0310 01:05:38.293351 2597 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.67:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.67:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Mar 10 01:05:38.307757 kubelet[2597]: E0310 01:05:38.307605 2597 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.67:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.67:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Mar 10 01:05:38.790830 kubelet[2597]: E0310 01:05:38.787957 2597 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.67:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.67:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Mar 10 01:05:39.189592 kubelet[2597]: E0310 01:05:39.183899 2597 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.67:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.67:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Mar 10 01:05:39.230779 kubelet[2597]: I0310 01:05:39.229793 2597 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Mar 10 01:05:39.231669 kubelet[2597]: E0310 01:05:39.231635 2597 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.67:6443/api/v1/nodes\": dial tcp 10.0.0.67:6443: connect: connection refused" node="localhost"
Mar 10 01:05:40.625952 containerd[1595]: time="2026-03-10T01:05:40.625010772Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 10 01:05:40.639014 containerd[1595]: time="2026-03-10T01:05:40.632920927Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 10 01:05:40.639496 containerd[1595]: time="2026-03-10T01:05:40.638976340Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 10 01:05:40.642908 containerd[1595]: time="2026-03-10T01:05:40.642863105Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 10 01:05:40.714470 containerd[1595]: time="2026-03-10T01:05:40.711520754Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 10 01:05:40.714470 containerd[1595]: time="2026-03-10T01:05:40.712014515Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 10 01:05:40.714470 containerd[1595]: time="2026-03-10T01:05:40.712435521Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 10 01:05:40.759844 containerd[1595]: time="2026-03-10T01:05:40.757691689Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 10 01:05:40.788701 containerd[1595]: time="2026-03-10T01:05:40.787763688Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 10 01:05:40.879800 containerd[1595]: time="2026-03-10T01:05:40.875608868Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 10 01:05:40.901731 containerd[1595]: time="2026-03-10T01:05:40.896327430Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 10 01:05:40.922849 containerd[1595]: time="2026-03-10T01:05:40.920527707Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 10 01:05:41.225901 kubelet[2597]: E0310 01:05:41.222990 2597 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.67:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.67:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Mar 10 01:05:41.381397 kubelet[2597]: E0310 01:05:41.379814 2597 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.67:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.67:6443: connect: connection refused" interval="6.4s"
Mar 10 01:05:42.060521 kubelet[2597]: E0310 01:05:42.059566 2597 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.67:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.67:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.189b555cb716f034 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-10 01:05:34.997508148 +0000 UTC m=+3.506885486,LastTimestamp:2026-03-10 01:05:34.997508148 +0000 UTC m=+3.506885486,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Mar 10 01:05:42.116438 containerd[1595]: time="2026-03-10T01:05:42.115974424Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8747e1f8a49a618fbc1324a8fe2d3754,Namespace:kube-system,Attempt:0,} returns sandbox id \"5dca99b1fa5367c9fdafd876e61ee01d8ea0dddc4179322f234eb13848408fbc\""
Mar 10 01:05:42.156020 kubelet[2597]: E0310 01:05:42.155721 2597 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:05:42.464928 containerd[1595]: time="2026-03-10T01:05:42.457480699Z" level=info msg="CreateContainer within sandbox \"5dca99b1fa5367c9fdafd876e61ee01d8ea0dddc4179322f234eb13848408fbc\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Mar 10 01:05:42.465807 kubelet[2597]: I0310 01:05:42.461625 2597 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Mar 10 01:05:42.473392 kubelet[2597]: E0310 01:05:42.472461 2597 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.67:6443/api/v1/nodes\": dial tcp 10.0.0.67:6443: connect: connection refused" node="localhost"
Mar 10 01:05:42.478930 containerd[1595]: time="2026-03-10T01:05:42.478693174Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:e944e4cb17af904786c3a2e01e298498,Namespace:kube-system,Attempt:0,} returns sandbox id \"993c8c71747433d2bba5416067a6cf97e9dd610a4ff36250434c5c229d5ff3b0\""
Mar 10 01:05:42.483009 kubelet[2597]: E0310 01:05:42.482807 2597 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:05:42.524403 containerd[1595]: time="2026-03-10T01:05:42.521950576Z" level=info msg="CreateContainer within sandbox \"993c8c71747433d2bba5416067a6cf97e9dd610a4ff36250434c5c229d5ff3b0\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Mar 10 01:05:42.587018 containerd[1595]: time="2026-03-10T01:05:42.585812536Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:3e18b7dc583c7c86532322e8b716630c,Namespace:kube-system,Attempt:0,} returns sandbox id \"159b4e11a9701425485633d5cdc2212b4aa64bff08588a8050dca126b00ae00e\""
Mar 10 01:05:42.611578 kubelet[2597]: E0310 01:05:42.610781 2597 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:05:42.650938 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3024669339.mount: Deactivated successfully.
Mar 10 01:05:42.658481 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2363269767.mount: Deactivated successfully.
Mar 10 01:05:42.681387 containerd[1595]: time="2026-03-10T01:05:42.680837767Z" level=info msg="CreateContainer within sandbox \"5dca99b1fa5367c9fdafd876e61ee01d8ea0dddc4179322f234eb13848408fbc\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"65ac10bf1814224ef9efa1f6e1835aaa17bfdecf20afc0a0b3c234b97d708727\""
Mar 10 01:05:42.692025 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2190265870.mount: Deactivated successfully.
Mar 10 01:05:42.693618 containerd[1595]: time="2026-03-10T01:05:42.692610834Z" level=info msg="StartContainer for \"65ac10bf1814224ef9efa1f6e1835aaa17bfdecf20afc0a0b3c234b97d708727\""
Mar 10 01:05:42.756630 containerd[1595]: time="2026-03-10T01:05:42.729379727Z" level=info msg="CreateContainer within sandbox \"159b4e11a9701425485633d5cdc2212b4aa64bff08588a8050dca126b00ae00e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Mar 10 01:05:42.765044 containerd[1595]: time="2026-03-10T01:05:42.764997035Z" level=info msg="CreateContainer within sandbox \"993c8c71747433d2bba5416067a6cf97e9dd610a4ff36250434c5c229d5ff3b0\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"f86fcbb3bdebb6422af514ff8d5613fb000771618615fc7dabccb68f95c4438f\""
Mar 10 01:05:42.772918 containerd[1595]: time="2026-03-10T01:05:42.772883827Z" level=info msg="StartContainer for \"f86fcbb3bdebb6422af514ff8d5613fb000771618615fc7dabccb68f95c4438f\""
Mar 10 01:05:42.845685 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4007025972.mount: Deactivated successfully.
Mar 10 01:05:42.871967 containerd[1595]: time="2026-03-10T01:05:42.871621772Z" level=info msg="CreateContainer within sandbox \"159b4e11a9701425485633d5cdc2212b4aa64bff08588a8050dca126b00ae00e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"5af92abde938a96b7f9b42f50f95a71375a0e9c61f89f1d29d2558530a75f366\""
Mar 10 01:05:42.888705 containerd[1595]: time="2026-03-10T01:05:42.881650210Z" level=info msg="StartContainer for \"5af92abde938a96b7f9b42f50f95a71375a0e9c61f89f1d29d2558530a75f366\""
Mar 10 01:05:42.930888 kubelet[2597]: E0310 01:05:42.930050 2597 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.67:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.67:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Mar 10 01:05:43.491017 kubelet[2597]: E0310 01:05:43.490581 2597 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.67:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.67:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Mar 10 01:05:43.584875 containerd[1595]: time="2026-03-10T01:05:43.578734261Z" level=info msg="StartContainer for \"65ac10bf1814224ef9efa1f6e1835aaa17bfdecf20afc0a0b3c234b97d708727\" returns successfully"
Mar 10 01:05:43.729880 kubelet[2597]: E0310 01:05:43.729786 2597 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.67:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.67:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Mar 10 01:05:43.920033 kubelet[2597]: E0310 01:05:43.919794 2597 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 10 01:05:43.926769 kubelet[2597]: E0310 01:05:43.926641 2597 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:05:44.320983 kubelet[2597]: E0310 01:05:44.320438 2597 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.67:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.67:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Mar 10 01:05:44.366420 containerd[1595]: time="2026-03-10T01:05:44.362545705Z" level=info msg="StartContainer for \"f86fcbb3bdebb6422af514ff8d5613fb000771618615fc7dabccb68f95c4438f\" returns successfully"
Mar 10 01:05:44.391329 containerd[1595]: time="2026-03-10T01:05:44.387067638Z" level=info msg="StartContainer for \"5af92abde938a96b7f9b42f50f95a71375a0e9c61f89f1d29d2558530a75f366\" returns successfully"
Mar 10 01:05:45.073001 kubelet[2597]: E0310 01:05:45.071844 2597 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 10 01:05:45.073001 kubelet[2597]: E0310 01:05:45.072521 2597 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:05:45.086681 kubelet[2597]: E0310 01:05:45.085839 2597 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 10 01:05:45.087836 kubelet[2597]: E0310 01:05:45.086065 2597 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:05:45.095581 kubelet[2597]: E0310 01:05:45.093063 2597 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 10 01:05:45.095581 kubelet[2597]: E0310 01:05:45.094703 2597 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:05:45.799654 kubelet[2597]: E0310 01:05:45.779009 2597 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Mar 10 01:05:46.096494 kubelet[2597]: E0310 01:05:46.095938 2597 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 10 01:05:46.097563 kubelet[2597]: E0310 01:05:46.097494 2597 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 10 01:05:46.101992 kubelet[2597]: E0310 01:05:46.101453 2597 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:05:46.101992 kubelet[2597]: E0310 01:05:46.102044 2597 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:05:47.140535 kubelet[2597]: E0310 01:05:47.138779 2597 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 10 01:05:47.178383 kubelet[2597]: E0310 01:05:47.161636 2597 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers
have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:05:47.178383 kubelet[2597]: E0310 01:05:47.161038 2597 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 10 01:05:47.178383 kubelet[2597]: E0310 01:05:47.162018 2597 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:05:47.484527 kubelet[2597]: E0310 01:05:47.479636 2597 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 10 01:05:47.484527 kubelet[2597]: E0310 01:05:47.480615 2597 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:05:48.922827 kubelet[2597]: I0310 01:05:48.920875 2597 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 10 01:05:49.995731 kubelet[2597]: E0310 01:05:49.993831 2597 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 10 01:05:49.995731 kubelet[2597]: E0310 01:05:49.995063 2597 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:05:52.005455 kubelet[2597]: E0310 01:05:52.004716 2597 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 10 01:05:52.005455 kubelet[2597]: E0310 01:05:52.005521 2597 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:05:55.792670 kubelet[2597]: E0310 01:05:55.791023 2597 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Mar 10 01:05:58.791919 kubelet[2597]: E0310 01:05:58.791067 2597 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.67:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s" Mar 10 01:06:01.102046 kubelet[2597]: E0310 01:06:01.101029 2597 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.67:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost" Mar 10 01:06:01.170557 kubelet[2597]: E0310 01:06:01.109579 2597 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.67:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 10 01:06:01.170557 kubelet[2597]: E0310 01:06:01.118496 2597 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.67:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 10 01:06:01.244836 kubelet[2597]: E0310 01:06:01.244775 2597 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 10 01:06:01.251579 kubelet[2597]: E0310 01:06:01.247508 2597 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:06:01.304653 
kubelet[2597]: E0310 01:06:01.302875 2597 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.67:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 10 01:06:02.077976 kubelet[2597]: E0310 01:06:02.066715 2597 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.67:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.189b555cb716f034 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-10 01:05:34.997508148 +0000 UTC m=+3.506885486,LastTimestamp:2026-03-10 01:05:34.997508148 +0000 UTC m=+3.506885486,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 10 01:06:02.385865 kubelet[2597]: E0310 01:06:02.375938 2597 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.67:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 10 01:06:06.141923 kubelet[2597]: E0310 01:06:06.126541 2597 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Mar 10 01:06:06.183738 kubelet[2597]: E0310 01:06:06.143711 2597 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get 
\"https://10.0.0.67:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 10 01:06:08.214668 kubelet[2597]: I0310 01:06:08.210410 2597 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 10 01:06:10.536050 kubelet[2597]: E0310 01:06:10.523888 2597 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 10 01:06:10.536050 kubelet[2597]: E0310 01:06:10.524771 2597 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:06:12.302740 kubelet[2597]: E0310 01:06:12.300663 2597 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Mar 10 01:06:12.692886 kubelet[2597]: E0310 01:06:12.671017 2597 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.189b555cb716f034 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-10 01:05:34.997508148 +0000 UTC m=+3.506885486,LastTimestamp:2026-03-10 01:05:34.997508148 +0000 UTC m=+3.506885486,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 10 01:06:12.782000 kubelet[2597]: I0310 01:06:12.771994 2597 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Mar 10 01:06:12.782000 kubelet[2597]: 
E0310 01:06:12.772466 2597 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Mar 10 01:06:13.112991 kubelet[2597]: E0310 01:06:13.112579 2597 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 10 01:06:13.247319 kubelet[2597]: E0310 01:06:13.220673 2597 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 10 01:06:13.326546 kubelet[2597]: E0310 01:06:13.321004 2597 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 10 01:06:13.428030 kubelet[2597]: E0310 01:06:13.427629 2597 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 10 01:06:14.035651 kubelet[2597]: E0310 01:06:14.030968 2597 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 10 01:06:14.417635 kubelet[2597]: E0310 01:06:14.329829 2597 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 10 01:06:14.780476 kubelet[2597]: E0310 01:06:14.588005 2597 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 10 01:06:14.905484 kubelet[2597]: E0310 01:06:14.904900 2597 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 10 01:06:15.017750 kubelet[2597]: E0310 01:06:15.011870 2597 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 10 01:06:15.116425 kubelet[2597]: E0310 01:06:15.114629 2597 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 10 01:06:15.215940 kubelet[2597]: E0310 01:06:15.215689 2597 kubelet_node_status.go:466] "Error 
getting the current node from lister" err="node \"localhost\" not found" Mar 10 01:06:15.331695 kubelet[2597]: E0310 01:06:15.324953 2597 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 10 01:06:15.431966 kubelet[2597]: E0310 01:06:15.431447 2597 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 10 01:06:15.553394 kubelet[2597]: E0310 01:06:15.552737 2597 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 10 01:06:15.658357 kubelet[2597]: E0310 01:06:15.653865 2597 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 10 01:06:15.767806 kubelet[2597]: I0310 01:06:15.764992 2597 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 10 01:06:16.049444 kubelet[2597]: I0310 01:06:16.040813 2597 apiserver.go:52] "Watching apiserver" Mar 10 01:06:16.203724 kubelet[2597]: I0310 01:06:16.203352 2597 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 10 01:06:16.450754 kubelet[2597]: E0310 01:06:16.206010 2597 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:06:16.483927 kubelet[2597]: I0310 01:06:16.211884 2597 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Mar 10 01:06:16.603060 kubelet[2597]: I0310 01:06:16.600552 2597 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 10 01:06:16.667328 kubelet[2597]: E0310 01:06:16.666004 2597 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" Mar 10 01:06:16.972647 kubelet[2597]: E0310 01:06:16.969698 2597 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:06:24.966612 kubelet[2597]: E0310 01:06:24.965959 2597 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.15s" Mar 10 01:06:31.091856 kubelet[2597]: I0310 01:06:31.045835 2597 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=15.045793241 podStartE2EDuration="15.045793241s" podCreationTimestamp="2026-03-10 01:06:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 01:06:28.630377055 +0000 UTC m=+57.139754403" watchObservedRunningTime="2026-03-10 01:06:31.045793241 +0000 UTC m=+59.555170558" Mar 10 01:06:31.208484 kubelet[2597]: E0310 01:06:31.206646 2597 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.375s" Mar 10 01:06:31.219424 kubelet[2597]: E0310 01:06:31.219387 2597 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:06:31.228064 kubelet[2597]: I0310 01:06:31.173040 2597 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=15.173018882 podStartE2EDuration="15.173018882s" podCreationTimestamp="2026-03-10 01:06:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 01:06:31.107535393 +0000 UTC m=+59.616912711" watchObservedRunningTime="2026-03-10 01:06:31.173018882 +0000 UTC m=+59.682396210" Mar 10 
01:06:31.489416 kubelet[2597]: I0310 01:06:31.480869 2597 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=15.480847687 podStartE2EDuration="15.480847687s" podCreationTimestamp="2026-03-10 01:06:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 01:06:31.383614888 +0000 UTC m=+59.892992236" watchObservedRunningTime="2026-03-10 01:06:31.480847687 +0000 UTC m=+59.990225006" Mar 10 01:06:32.014399 kubelet[2597]: E0310 01:06:32.013008 2597 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:06:32.259701 kubelet[2597]: E0310 01:06:32.256830 2597 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:06:33.511531 systemd[1]: Reloading requested from client PID 2894 ('systemctl') (unit session-9.scope)... Mar 10 01:06:33.511564 systemd[1]: Reloading... Mar 10 01:06:35.312431 zram_generator::config[2936]: No configuration found. Mar 10 01:06:37.250416 kubelet[2597]: E0310 01:06:37.236661 2597 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.588s" Mar 10 01:06:39.181487 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 10 01:06:40.366491 systemd[1]: Reloading finished in 6852 ms. Mar 10 01:06:40.974028 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... 
Mar 10 01:06:41.028004 kubelet[2597]: I0310 01:06:40.989412 2597 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 10 01:06:41.198492 systemd[1]: kubelet.service: Deactivated successfully.
Mar 10 01:06:41.206925 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 10 01:06:41.291811 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 10 01:06:44.105264 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 10 01:06:44.194565 (kubelet)[2997]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 10 01:06:45.336055 kubelet[2997]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 10 01:06:45.336055 kubelet[2997]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Mar 10 01:06:45.336055 kubelet[2997]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 10 01:06:45.336055 kubelet[2997]: I0310 01:06:45.328849 2997 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 10 01:06:45.591827 kubelet[2997]: I0310 01:06:45.591778 2997 server.go:530] "Kubelet version" kubeletVersion="v1.33.8"
Mar 10 01:06:45.593439 kubelet[2997]: I0310 01:06:45.593057 2997 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 10 01:06:45.593813 kubelet[2997]: I0310 01:06:45.593794 2997 server.go:956] "Client rotation is on, will bootstrap in background"
Mar 10 01:06:45.610720 kubelet[2997]: I0310 01:06:45.607686 2997 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Mar 10 01:06:45.626636 kubelet[2997]: I0310 01:06:45.626590 2997 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 10 01:06:45.662027 kubelet[2997]: E0310 01:06:45.657953 2997 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Mar 10 01:06:45.662027 kubelet[2997]: I0310 01:06:45.658016 2997 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Mar 10 01:06:45.666709 sudo[3014]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Mar 10 01:06:45.670858 sudo[3014]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Mar 10 01:06:46.403049 kubelet[2997]: I0310 01:06:46.401278 2997 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Mar 10 01:06:46.406696 kubelet[2997]: I0310 01:06:46.406520 2997 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 10 01:06:46.416448 kubelet[2997]: I0310 01:06:46.406695 2997 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1}
Mar 10 01:06:46.416448 kubelet[2997]: I0310 01:06:46.413775 2997 topology_manager.go:138] "Creating topology manager with none policy"
Mar 10 01:06:46.416448 kubelet[2997]: I0310 01:06:46.413805 2997 container_manager_linux.go:303] "Creating device plugin manager"
Mar 10 01:06:46.416448 kubelet[2997]: I0310 01:06:46.413979 2997 state_mem.go:36] "Initialized new in-memory state store"
Mar 10 01:06:46.416448 kubelet[2997]: I0310 01:06:46.414977 2997 kubelet.go:480] "Attempting to sync node with API server"
Mar 10 01:06:46.418026 kubelet[2997]: I0310 01:06:46.415002 2997 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 10 01:06:46.423313 kubelet[2997]: I0310 01:06:46.422311 2997 kubelet.go:386] "Adding apiserver pod source"
Mar 10 01:06:46.423313 kubelet[2997]: I0310 01:06:46.422448 2997 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 10 01:06:46.466040 kubelet[2997]: I0310 01:06:46.462621 2997 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Mar 10 01:06:46.468732 kubelet[2997]: I0310 01:06:46.466888 2997 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Mar 10 01:06:46.531768 kubelet[2997]: I0310 01:06:46.531646 2997 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Mar 10 01:06:46.531768 kubelet[2997]: I0310 01:06:46.531701 2997 server.go:1289] "Started kubelet"
Mar 10 01:06:46.571956 kubelet[2997]: I0310 01:06:46.569787 2997 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 10 01:06:46.575927 kubelet[2997]: I0310 01:06:46.575774 2997 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Mar 10 01:06:46.576710 kubelet[2997]: I0310 01:06:46.576687 2997 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 10 01:06:46.592270 kubelet[2997]: I0310 01:06:46.591978 2997 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 10 01:06:46.609930 kubelet[2997]: I0310 01:06:46.609760 2997 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 10 01:06:46.620803 kubelet[2997]: I0310 01:06:46.615953 2997 server.go:317] "Adding debug handlers to kubelet server"
Mar 10 01:06:46.992992 kubelet[2997]: I0310 01:06:46.991593 2997 volume_manager.go:297] "Starting Kubelet Volume Manager"
Mar 10 01:06:46.994494 kubelet[2997]: I0310 01:06:46.994472 2997 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Mar 10 01:06:46.995064 kubelet[2997]: I0310 01:06:46.995047 2997 reconciler.go:26] "Reconciler: start to sync state"
Mar 10 01:06:46.998709 kubelet[2997]: I0310 01:06:46.997826 2997 factory.go:223] Registration of the systemd container factory successfully
Mar 10 01:06:46.998709 kubelet[2997]: I0310 01:06:46.998030 2997 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 10 01:06:47.015864 kubelet[2997]: E0310 01:06:47.015824 2997 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 10 01:06:47.029717 kubelet[2997]: I0310 01:06:47.029675 2997 factory.go:223] Registration of the containerd container factory successfully
Mar 10 01:06:47.448790 kubelet[2997]: I0310 01:06:47.428678 2997 apiserver.go:52] "Watching apiserver"
Mar 10 01:06:47.483727 kubelet[2997]: I0310 01:06:47.480886 2997 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Mar 10 01:06:47.518042 kubelet[2997]: I0310 01:06:47.517877 2997 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Mar 10 01:06:47.544709 kubelet[2997]: I0310 01:06:47.540792 2997 status_manager.go:230] "Starting to sync pod status with apiserver"
Mar 10 01:06:47.548753 kubelet[2997]: I0310 01:06:47.545782 2997 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Mar 10 01:06:47.548753 kubelet[2997]: I0310 01:06:47.545809 2997 kubelet.go:2436] "Starting kubelet main sync loop"
Mar 10 01:06:47.548753 kubelet[2997]: E0310 01:06:47.546779 2997 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 10 01:06:47.657795 kubelet[2997]: E0310 01:06:47.657750 2997 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Mar 10 01:06:47.861623 kubelet[2997]: E0310 01:06:47.861018 2997 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Mar 10 01:06:47.862960 kubelet[2997]: I0310 01:06:47.862831 2997 cpu_manager.go:221] "Starting CPU manager" policy="none"
Mar 10 01:06:47.863018 kubelet[2997]: I0310 01:06:47.862960 2997 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Mar 10 01:06:47.863018 kubelet[2997]: I0310 01:06:47.862987 2997 state_mem.go:36] "Initialized new in-memory state store"
Mar 10 01:06:47.864574 kubelet[2997]: I0310 01:06:47.864058 2997 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Mar 10 01:06:47.864574 kubelet[2997]: I0310 01:06:47.864486 2997 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Mar 10 01:06:47.864574 kubelet[2997]: I0310 01:06:47.864514 2997 policy_none.go:49] "None policy: Start"
Mar 10 01:06:47.864574 kubelet[2997]: I0310 01:06:47.864527 2997 memory_manager.go:186] "Starting memorymanager" policy="None"
Mar 10 01:06:47.864574 kubelet[2997]: I0310 01:06:47.864543 2997 state_mem.go:35] "Initializing new in-memory state store"
Mar 10 01:06:47.865527 kubelet[2997]: I0310 01:06:47.864781 2997 state_mem.go:75] "Updated machine memory state"
Mar 10 01:06:47.871289 kubelet[2997]: E0310 01:06:47.870038 2997 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Mar 10 01:06:47.871289 kubelet[2997]: I0310 01:06:47.870697 2997 eviction_manager.go:189] "Eviction manager: starting control loop"
Mar 10 01:06:47.871289 kubelet[2997]: I0310 01:06:47.870713 2997 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 10 01:06:47.879329 kubelet[2997]: I0310 01:06:47.878790 2997 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 10 01:06:47.883516 kubelet[2997]: I0310 01:06:47.882570 2997 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Mar 10 01:06:47.894928 kubelet[2997]: E0310 01:06:47.893980 2997 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Mar 10 01:06:47.898648 containerd[1595]: time="2026-03-10T01:06:47.892064293Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Mar 10 01:06:47.906007 kubelet[2997]: I0310 01:06:47.900847 2997 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 10 01:06:48.209942 kubelet[2997]: I0310 01:06:48.199047 2997 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 10 01:06:48.463777 kubelet[2997]: I0310 01:06:48.459318 2997 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Mar 10 01:06:48.463777 kubelet[2997]: I0310 01:06:48.461622 2997 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Mar 10 01:06:48.503317 kubelet[2997]: I0310 01:06:48.503067 2997 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Mar 10 01:06:48.516283 kubelet[2997]: I0310 01:06:48.515761 2997 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 10 01:06:48.516283 kubelet[2997]: I0310 01:06:48.515814 2997 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 10 01:06:48.516283 kubelet[2997]: I0310 01:06:48.515850 2997 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ef818ba5-dfe0-4da2-b864-6c0b2ee4f2dc-xtables-lock\") pod \"kube-proxy-wzxvs\" (UID: \"ef818ba5-dfe0-4da2-b864-6c0b2ee4f2dc\") " pod="kube-system/kube-proxy-wzxvs" Mar 10 01:06:48.516283 kubelet[2997]: I0310 
01:06:48.515880 2997 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzt8r\" (UniqueName: \"kubernetes.io/projected/ef818ba5-dfe0-4da2-b864-6c0b2ee4f2dc-kube-api-access-dzt8r\") pod \"kube-proxy-wzxvs\" (UID: \"ef818ba5-dfe0-4da2-b864-6c0b2ee4f2dc\") " pod="kube-system/kube-proxy-wzxvs" Mar 10 01:06:48.516283 kubelet[2997]: I0310 01:06:48.515927 2997 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e944e4cb17af904786c3a2e01e298498-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"e944e4cb17af904786c3a2e01e298498\") " pod="kube-system/kube-scheduler-localhost" Mar 10 01:06:48.516708 kubelet[2997]: I0310 01:06:48.515959 2997 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3e18b7dc583c7c86532322e8b716630c-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"3e18b7dc583c7c86532322e8b716630c\") " pod="kube-system/kube-apiserver-localhost" Mar 10 01:06:48.516708 kubelet[2997]: I0310 01:06:48.515985 2997 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3e18b7dc583c7c86532322e8b716630c-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"3e18b7dc583c7c86532322e8b716630c\") " pod="kube-system/kube-apiserver-localhost" Mar 10 01:06:48.516708 kubelet[2997]: I0310 01:06:48.516019 2997 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3e18b7dc583c7c86532322e8b716630c-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"3e18b7dc583c7c86532322e8b716630c\") " pod="kube-system/kube-apiserver-localhost" Mar 10 01:06:48.517866 kubelet[2997]: I0310 01:06:48.517842 2997 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 10 01:06:48.517983 kubelet[2997]: I0310 01:06:48.517963 2997 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 10 01:06:48.518311 kubelet[2997]: I0310 01:06:48.518288 2997 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ef818ba5-dfe0-4da2-b864-6c0b2ee4f2dc-kube-proxy\") pod \"kube-proxy-wzxvs\" (UID: \"ef818ba5-dfe0-4da2-b864-6c0b2ee4f2dc\") " pod="kube-system/kube-proxy-wzxvs" Mar 10 01:06:48.523274 kubelet[2997]: I0310 01:06:48.523060 2997 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ef818ba5-dfe0-4da2-b864-6c0b2ee4f2dc-lib-modules\") pod \"kube-proxy-wzxvs\" (UID: \"ef818ba5-dfe0-4da2-b864-6c0b2ee4f2dc\") " pod="kube-system/kube-proxy-wzxvs" Mar 10 01:06:48.523555 kubelet[2997]: I0310 01:06:48.523525 2997 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 10 01:06:48.948596 kubelet[2997]: E0310 01:06:48.942034 2997 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits 
were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:06:48.978065 kubelet[2997]: E0310 01:06:48.976769 2997 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:06:49.172745 kubelet[2997]: E0310 01:06:49.171911 2997 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:06:49.601669 kubelet[2997]: E0310 01:06:49.601453 2997 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:06:49.650499 containerd[1595]: time="2026-03-10T01:06:49.647970168Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wzxvs,Uid:ef818ba5-dfe0-4da2-b864-6c0b2ee4f2dc,Namespace:kube-system,Attempt:0,}" Mar 10 01:06:49.976898 kubelet[2997]: E0310 01:06:49.975911 2997 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:06:49.976898 kubelet[2997]: E0310 01:06:49.976785 2997 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:06:50.829229 containerd[1595]: time="2026-03-10T01:06:50.821852479Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 10 01:06:50.829229 containerd[1595]: time="2026-03-10T01:06:50.822809969Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 10 01:06:50.829229 containerd[1595]: time="2026-03-10T01:06:50.822827291Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 10 01:06:50.829229 containerd[1595]: time="2026-03-10T01:06:50.828754032Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 10 01:06:50.999809 kubelet[2997]: E0310 01:06:50.998804 2997 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:06:51.003278 kubelet[2997]: E0310 01:06:51.001772 2997 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:06:51.438860 systemd[1]: run-containerd-runc-k8s.io-ec3bffd8c10d14e0904fc243f6b5e19d3c807b112b139583f0667608155ec08c-runc.sxEbuI.mount: Deactivated successfully. 
Mar 10 01:06:51.531329 kubelet[2997]: E0310 01:06:51.529322 2997 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:06:52.490337 containerd[1595]: time="2026-03-10T01:06:52.490033694Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wzxvs,Uid:ef818ba5-dfe0-4da2-b864-6c0b2ee4f2dc,Namespace:kube-system,Attempt:0,} returns sandbox id \"ec3bffd8c10d14e0904fc243f6b5e19d3c807b112b139583f0667608155ec08c\""
Mar 10 01:06:52.688345 kubelet[2997]: E0310 01:06:52.687735 2997 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:06:52.754967 containerd[1595]: time="2026-03-10T01:06:52.746944547Z" level=info msg="CreateContainer within sandbox \"ec3bffd8c10d14e0904fc243f6b5e19d3c807b112b139583f0667608155ec08c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Mar 10 01:06:52.899773 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1401561948.mount: Deactivated successfully.
Mar 10 01:06:52.962820 containerd[1595]: time="2026-03-10T01:06:52.950287316Z" level=info msg="CreateContainer within sandbox \"ec3bffd8c10d14e0904fc243f6b5e19d3c807b112b139583f0667608155ec08c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"1b24b592c61584b1df927557734bb3d4250594625b0ac9a7e2fc59f103aded3e\""
Mar 10 01:06:52.980799 containerd[1595]: time="2026-03-10T01:06:52.980743321Z" level=info msg="StartContainer for \"1b24b592c61584b1df927557734bb3d4250594625b0ac9a7e2fc59f103aded3e\""
Mar 10 01:06:53.046981 kubelet[2997]: E0310 01:06:53.036557 2997 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:06:53.076970 sudo[3014]: pam_unix(sudo:session): session closed for user root
Mar 10 01:06:53.162513 kubelet[2997]: E0310 01:06:53.159640 2997 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:06:54.963514 containerd[1595]: time="2026-03-10T01:06:54.959713229Z" level=info msg="StartContainer for \"1b24b592c61584b1df927557734bb3d4250594625b0ac9a7e2fc59f103aded3e\" returns successfully"
Mar 10 01:06:55.322913 kubelet[2997]: E0310 01:06:55.320975 2997 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:06:57.588958 kubelet[2997]: E0310 01:06:57.588758 2997 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:06:58.189373 kubelet[2997]: E0310 01:06:58.184702 2997 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:06:58.403035 kubelet[2997]: I0310 01:06:58.401763 2997 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-wzxvs" podStartSLOduration=13.401742143 podStartE2EDuration="13.401742143s" podCreationTimestamp="2026-03-10 01:06:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 01:06:55.450945832 +0000 UTC m=+11.217396635" watchObservedRunningTime="2026-03-10 01:06:58.401742143 +0000 UTC m=+14.168192946"
Mar 10 01:06:58.961942 kubelet[2997]: E0310 01:06:58.960817 2997 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:06:59.704323 kubelet[2997]: I0310 01:06:59.703872 2997 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a81274c7-0f7f-4307-8b97-678613572cf8-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-w54w9\" (UID: \"a81274c7-0f7f-4307-8b97-678613572cf8\") " pod="kube-system/cilium-operator-6c4d7847fc-w54w9"
Mar 10 01:06:59.704323 kubelet[2997]: I0310 01:06:59.704032 2997 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxd9x\" (UniqueName: \"kubernetes.io/projected/a81274c7-0f7f-4307-8b97-678613572cf8-kube-api-access-wxd9x\") pod \"cilium-operator-6c4d7847fc-w54w9\" (UID: \"a81274c7-0f7f-4307-8b97-678613572cf8\") " pod="kube-system/cilium-operator-6c4d7847fc-w54w9"
Mar 10 01:07:00.287344 kubelet[2997]: E0310 01:07:00.281937 2997 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:07:00.314047 containerd[1595]: time="2026-03-10T01:07:00.312370703Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-w54w9,Uid:a81274c7-0f7f-4307-8b97-678613572cf8,Namespace:kube-system,Attempt:0,}"
Mar 10 01:07:00.438610 kubelet[2997]: I0310 01:07:00.438563 2997 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d1506dd7-6edf-4834-8a2b-060079dd93ad-clustermesh-secrets\") pod \"cilium-4q54t\" (UID: \"d1506dd7-6edf-4834-8a2b-060079dd93ad\") " pod="kube-system/cilium-4q54t"
Mar 10 01:07:00.438841 kubelet[2997]: I0310 01:07:00.438821 2997 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d1506dd7-6edf-4834-8a2b-060079dd93ad-host-proc-sys-net\") pod \"cilium-4q54t\" (UID: \"d1506dd7-6edf-4834-8a2b-060079dd93ad\") " pod="kube-system/cilium-4q54t"
Mar 10 01:07:00.438937 kubelet[2997]: I0310 01:07:00.438920 2997 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d1506dd7-6edf-4834-8a2b-060079dd93ad-host-proc-sys-kernel\") pod \"cilium-4q54t\" (UID: \"d1506dd7-6edf-4834-8a2b-060079dd93ad\") " pod="kube-system/cilium-4q54t"
Mar 10 01:07:00.439549 kubelet[2997]: I0310 01:07:00.439323 2997 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d1506dd7-6edf-4834-8a2b-060079dd93ad-cilium-run\") pod \"cilium-4q54t\" (UID: \"d1506dd7-6edf-4834-8a2b-060079dd93ad\") " pod="kube-system/cilium-4q54t"
Mar 10 01:07:00.439653 kubelet[2997]: I0310 01:07:00.439636 2997 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d1506dd7-6edf-4834-8a2b-060079dd93ad-hostproc\") pod \"cilium-4q54t\" (UID: \"d1506dd7-6edf-4834-8a2b-060079dd93ad\") " pod="kube-system/cilium-4q54t"
Mar 10 01:07:00.439739 kubelet[2997]: I0310 01:07:00.439724 2997 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d1506dd7-6edf-4834-8a2b-060079dd93ad-etc-cni-netd\") pod \"cilium-4q54t\" (UID: \"d1506dd7-6edf-4834-8a2b-060079dd93ad\") " pod="kube-system/cilium-4q54t"
Mar 10 01:07:00.441847 kubelet[2997]: I0310 01:07:00.439806 2997 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zcvr4\" (UniqueName: \"kubernetes.io/projected/d1506dd7-6edf-4834-8a2b-060079dd93ad-kube-api-access-zcvr4\") pod \"cilium-4q54t\" (UID: \"d1506dd7-6edf-4834-8a2b-060079dd93ad\") " pod="kube-system/cilium-4q54t"
Mar 10 01:07:00.443295 kubelet[2997]: I0310 01:07:00.441960 2997 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d1506dd7-6edf-4834-8a2b-060079dd93ad-lib-modules\") pod \"cilium-4q54t\" (UID: \"d1506dd7-6edf-4834-8a2b-060079dd93ad\") " pod="kube-system/cilium-4q54t"
Mar 10 01:07:00.443295 kubelet[2997]: I0310 01:07:00.441993 2997 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d1506dd7-6edf-4834-8a2b-060079dd93ad-cilium-config-path\") pod \"cilium-4q54t\" (UID: \"d1506dd7-6edf-4834-8a2b-060079dd93ad\") " pod="kube-system/cilium-4q54t"
Mar 10 01:07:00.443295 kubelet[2997]: I0310 01:07:00.442024 2997 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d1506dd7-6edf-4834-8a2b-060079dd93ad-xtables-lock\") pod \"cilium-4q54t\" (UID: \"d1506dd7-6edf-4834-8a2b-060079dd93ad\") " pod="kube-system/cilium-4q54t"
Mar 10 01:07:00.443295 kubelet[2997]: I0310 01:07:00.442048 2997 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d1506dd7-6edf-4834-8a2b-060079dd93ad-cni-path\") pod \"cilium-4q54t\" (UID: \"d1506dd7-6edf-4834-8a2b-060079dd93ad\") " pod="kube-system/cilium-4q54t"
Mar 10 01:07:00.470349 kubelet[2997]: I0310 01:07:00.464324 2997 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d1506dd7-6edf-4834-8a2b-060079dd93ad-cilium-cgroup\") pod \"cilium-4q54t\" (UID: \"d1506dd7-6edf-4834-8a2b-060079dd93ad\") " pod="kube-system/cilium-4q54t"
Mar 10 01:07:00.470349 kubelet[2997]: I0310 01:07:00.464536 2997 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d1506dd7-6edf-4834-8a2b-060079dd93ad-bpf-maps\") pod \"cilium-4q54t\" (UID: \"d1506dd7-6edf-4834-8a2b-060079dd93ad\") " pod="kube-system/cilium-4q54t"
Mar 10 01:07:00.470349 kubelet[2997]: I0310 01:07:00.464647 2997 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d1506dd7-6edf-4834-8a2b-060079dd93ad-hubble-tls\") pod \"cilium-4q54t\" (UID: \"d1506dd7-6edf-4834-8a2b-060079dd93ad\") " pod="kube-system/cilium-4q54t"
Mar 10 01:07:00.962010 containerd[1595]: time="2026-03-10T01:07:00.951745862Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 10 01:07:00.982880 containerd[1595]: time="2026-03-10T01:07:00.964007098Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 10 01:07:00.982880 containerd[1595]: time="2026-03-10T01:07:00.964065988Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 10 01:07:00.982880 containerd[1595]: time="2026-03-10T01:07:00.965705316Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 10 01:07:01.587369 kubelet[2997]: E0310 01:07:01.585842 2997 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:07:01.610715 containerd[1595]: time="2026-03-10T01:07:01.602962875Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4q54t,Uid:d1506dd7-6edf-4834-8a2b-060079dd93ad,Namespace:kube-system,Attempt:0,}"
Mar 10 01:07:01.669378 kubelet[2997]: E0310 01:07:01.656567 2997 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:07:02.758036 containerd[1595]: time="2026-03-10T01:07:02.751061551Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 10 01:07:02.758036 containerd[1595]: time="2026-03-10T01:07:02.751542281Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 10 01:07:02.758036 containerd[1595]: time="2026-03-10T01:07:02.751591323Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 10 01:07:02.758036 containerd[1595]: time="2026-03-10T01:07:02.751883270Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 10 01:07:03.012546 systemd[1]: run-containerd-runc-k8s.io-e54f9c2305af9cd64ac521e24d7af7de6fd68f8849d28502fc3b512cd44d19e1-runc.06tnO0.mount: Deactivated successfully.
Mar 10 01:07:06.292344 kubelet[2997]: E0310 01:07:06.291388 2997 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.094s"
Mar 10 01:07:06.469784 containerd[1595]: time="2026-03-10T01:07:06.469048187Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4q54t,Uid:d1506dd7-6edf-4834-8a2b-060079dd93ad,Namespace:kube-system,Attempt:0,} returns sandbox id \"e54f9c2305af9cd64ac521e24d7af7de6fd68f8849d28502fc3b512cd44d19e1\""
Mar 10 01:07:06.495365 kubelet[2997]: E0310 01:07:06.493597 2997 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:07:06.509808 containerd[1595]: time="2026-03-10T01:07:06.508977831Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Mar 10 01:07:06.586649 containerd[1595]: time="2026-03-10T01:07:06.586030039Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-w54w9,Uid:a81274c7-0f7f-4307-8b97-678613572cf8,Namespace:kube-system,Attempt:0,} returns sandbox id \"4175b0afa68209f322a07cd65c801a46980a71abd638fc0dc8c94760d79dfe5a\""
Mar 10 01:07:06.623583 kubelet[2997]: E0310 01:07:06.603853 2997 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:07:30.708811 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3361641113.mount: Deactivated successfully.
Mar 10 01:07:57.558304 containerd[1595]: time="2026-03-10T01:07:57.557245689Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 01:07:57.561512 containerd[1595]: time="2026-03-10T01:07:57.560988664Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503"
Mar 10 01:07:57.563905 containerd[1595]: time="2026-03-10T01:07:57.563712463Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 01:07:57.569873 containerd[1595]: time="2026-03-10T01:07:57.569602235Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 51.059852821s"
Mar 10 01:07:57.569873 containerd[1595]: time="2026-03-10T01:07:57.569817969Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Mar 10 01:07:57.576061 containerd[1595]: time="2026-03-10T01:07:57.575307993Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Mar 10 01:07:57.590483 containerd[1595]: time="2026-03-10T01:07:57.589544701Z" level=info msg="CreateContainer within sandbox \"e54f9c2305af9cd64ac521e24d7af7de6fd68f8849d28502fc3b512cd44d19e1\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Mar 10 01:07:57.632357 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3154672723.mount: Deactivated successfully.
Mar 10 01:07:57.645726 containerd[1595]: time="2026-03-10T01:07:57.644591343Z" level=info msg="CreateContainer within sandbox \"e54f9c2305af9cd64ac521e24d7af7de6fd68f8849d28502fc3b512cd44d19e1\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"df9520f66b54b6d34b2673cacc61814c08b5b9b057f1466614a55a50ef9177e3\""
Mar 10 01:07:57.655495 containerd[1595]: time="2026-03-10T01:07:57.652950472Z" level=info msg="StartContainer for \"df9520f66b54b6d34b2673cacc61814c08b5b9b057f1466614a55a50ef9177e3\""
Mar 10 01:07:57.873451 systemd[1]: run-containerd-runc-k8s.io-df9520f66b54b6d34b2673cacc61814c08b5b9b057f1466614a55a50ef9177e3-runc.aZUHtD.mount: Deactivated successfully.
Mar 10 01:07:58.189283 containerd[1595]: time="2026-03-10T01:07:58.188942259Z" level=info msg="StartContainer for \"df9520f66b54b6d34b2673cacc61814c08b5b9b057f1466614a55a50ef9177e3\" returns successfully"
Mar 10 01:07:58.600645 containerd[1595]: time="2026-03-10T01:07:58.599848649Z" level=info msg="shim disconnected" id=df9520f66b54b6d34b2673cacc61814c08b5b9b057f1466614a55a50ef9177e3 namespace=k8s.io
Mar 10 01:07:58.600645 containerd[1595]: time="2026-03-10T01:07:58.600282251Z" level=warning msg="cleaning up after shim disconnected" id=df9520f66b54b6d34b2673cacc61814c08b5b9b057f1466614a55a50ef9177e3 namespace=k8s.io
Mar 10 01:07:58.600645 containerd[1595]: time="2026-03-10T01:07:58.600372069Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 10 01:07:58.625289 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-df9520f66b54b6d34b2673cacc61814c08b5b9b057f1466614a55a50ef9177e3-rootfs.mount: Deactivated successfully.
Mar 10 01:07:58.652447 containerd[1595]: time="2026-03-10T01:07:58.651952150Z" level=warning msg="cleanup warnings time=\"2026-03-10T01:07:58Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Mar 10 01:07:58.819262 kubelet[2997]: E0310 01:07:58.816691 2997 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:07:58.853360 containerd[1595]: time="2026-03-10T01:07:58.852532596Z" level=info msg="CreateContainer within sandbox \"e54f9c2305af9cd64ac521e24d7af7de6fd68f8849d28502fc3b512cd44d19e1\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Mar 10 01:07:59.035525 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1806458264.mount: Deactivated successfully.
Mar 10 01:07:59.097488 containerd[1595]: time="2026-03-10T01:07:59.097285197Z" level=info msg="CreateContainer within sandbox \"e54f9c2305af9cd64ac521e24d7af7de6fd68f8849d28502fc3b512cd44d19e1\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"bceeb42c9f8e2a6870e4c4a229ce7c19aeda2ef7bccb7b3cc0f2ca1fcdaf8edd\""
Mar 10 01:07:59.102860 containerd[1595]: time="2026-03-10T01:07:59.098937730Z" level=info msg="StartContainer for \"bceeb42c9f8e2a6870e4c4a229ce7c19aeda2ef7bccb7b3cc0f2ca1fcdaf8edd\""
Mar 10 01:07:59.422687 containerd[1595]: time="2026-03-10T01:07:59.422636136Z" level=info msg="StartContainer for \"bceeb42c9f8e2a6870e4c4a229ce7c19aeda2ef7bccb7b3cc0f2ca1fcdaf8edd\" returns successfully"
Mar 10 01:07:59.472263 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 10 01:07:59.472653 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 10 01:07:59.472735 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Mar 10 01:07:59.488010 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 10 01:07:59.643295 containerd[1595]: time="2026-03-10T01:07:59.641372661Z" level=info msg="shim disconnected" id=bceeb42c9f8e2a6870e4c4a229ce7c19aeda2ef7bccb7b3cc0f2ca1fcdaf8edd namespace=k8s.io
Mar 10 01:07:59.643295 containerd[1595]: time="2026-03-10T01:07:59.642836823Z" level=warning msg="cleaning up after shim disconnected" id=bceeb42c9f8e2a6870e4c4a229ce7c19aeda2ef7bccb7b3cc0f2ca1fcdaf8edd namespace=k8s.io
Mar 10 01:07:59.643295 containerd[1595]: time="2026-03-10T01:07:59.642861189Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 10 01:07:59.658229 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 10 01:07:59.664693 containerd[1595]: time="2026-03-10T01:07:59.664067861Z" level=error msg="collecting metrics for bceeb42c9f8e2a6870e4c4a229ce7c19aeda2ef7bccb7b3cc0f2ca1fcdaf8edd" error="ttrpc: closed: unknown"
Mar 10 01:07:59.825369 kubelet[2997]: E0310 01:07:59.824653 2997 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:07:59.858897 containerd[1595]: time="2026-03-10T01:07:59.855484812Z" level=info msg="CreateContainer within sandbox \"e54f9c2305af9cd64ac521e24d7af7de6fd68f8849d28502fc3b512cd44d19e1\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Mar 10 01:07:59.931525 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount295564971.mount: Deactivated successfully.
Mar 10 01:07:59.955225 containerd[1595]: time="2026-03-10T01:07:59.953351598Z" level=info msg="CreateContainer within sandbox \"e54f9c2305af9cd64ac521e24d7af7de6fd68f8849d28502fc3b512cd44d19e1\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"9652c777a5b3feb3c7d93ee076ba3b9ead6919c2cdbc2c72b7d8576a9b8fa15b\""
Mar 10 01:07:59.961923 containerd[1595]: time="2026-03-10T01:07:59.960424607Z" level=info msg="StartContainer for \"9652c777a5b3feb3c7d93ee076ba3b9ead6919c2cdbc2c72b7d8576a9b8fa15b\""
Mar 10 01:08:00.267919 containerd[1595]: time="2026-03-10T01:08:00.267493564Z" level=info msg="StartContainer for \"9652c777a5b3feb3c7d93ee076ba3b9ead6919c2cdbc2c72b7d8576a9b8fa15b\" returns successfully"
Mar 10 01:08:00.540930 containerd[1595]: time="2026-03-10T01:08:00.540471685Z" level=info msg="shim disconnected" id=9652c777a5b3feb3c7d93ee076ba3b9ead6919c2cdbc2c72b7d8576a9b8fa15b namespace=k8s.io
Mar 10 01:08:00.541728 containerd[1595]: time="2026-03-10T01:08:00.541698191Z" level=warning msg="cleaning up after shim disconnected" id=9652c777a5b3feb3c7d93ee076ba3b9ead6919c2cdbc2c72b7d8576a9b8fa15b namespace=k8s.io
Mar 10 01:08:00.542578 containerd[1595]: time="2026-03-10T01:08:00.542265153Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 10 01:08:00.842693 kubelet[2997]: E0310 01:08:00.840891 2997 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:08:00.871040 containerd[1595]: time="2026-03-10T01:08:00.868016335Z" level=info msg="CreateContainer within sandbox \"e54f9c2305af9cd64ac521e24d7af7de6fd68f8849d28502fc3b512cd44d19e1\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Mar 10 01:08:00.944354 containerd[1595]: time="2026-03-10T01:08:00.943624566Z" level=info msg="CreateContainer within sandbox \"e54f9c2305af9cd64ac521e24d7af7de6fd68f8849d28502fc3b512cd44d19e1\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"c6717b53959ad1e440e3c168de673069bcc68d00f59d97407b140f69023766c7\""
Mar 10 01:08:00.946735 containerd[1595]: time="2026-03-10T01:08:00.946700211Z" level=info msg="StartContainer for \"c6717b53959ad1e440e3c168de673069bcc68d00f59d97407b140f69023766c7\""
Mar 10 01:08:01.232903 containerd[1595]: time="2026-03-10T01:08:01.232666990Z" level=info msg="StartContainer for \"c6717b53959ad1e440e3c168de673069bcc68d00f59d97407b140f69023766c7\" returns successfully"
Mar 10 01:08:01.378422 containerd[1595]: time="2026-03-10T01:08:01.377446500Z" level=info msg="shim disconnected" id=c6717b53959ad1e440e3c168de673069bcc68d00f59d97407b140f69023766c7 namespace=k8s.io
Mar 10 01:08:01.378422 containerd[1595]: time="2026-03-10T01:08:01.377614935Z" level=warning msg="cleaning up after shim disconnected" id=c6717b53959ad1e440e3c168de673069bcc68d00f59d97407b140f69023766c7 namespace=k8s.io
Mar 10 01:08:01.378422 containerd[1595]: time="2026-03-10T01:08:01.377631997Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 10 01:08:01.625593 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c6717b53959ad1e440e3c168de673069bcc68d00f59d97407b140f69023766c7-rootfs.mount: Deactivated successfully.
Mar 10 01:08:01.874063 kubelet[2997]: E0310 01:08:01.873036 2997 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:08:01.900314 containerd[1595]: time="2026-03-10T01:08:01.896633272Z" level=info msg="CreateContainer within sandbox \"e54f9c2305af9cd64ac521e24d7af7de6fd68f8849d28502fc3b512cd44d19e1\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 10 01:08:01.997253 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2777999108.mount: Deactivated successfully.
Mar 10 01:08:02.023306 containerd[1595]: time="2026-03-10T01:08:02.021321387Z" level=info msg="CreateContainer within sandbox \"e54f9c2305af9cd64ac521e24d7af7de6fd68f8849d28502fc3b512cd44d19e1\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"88e17f08ef113316d64abd82f625f41d535c15a6d8efc4b989a4bcf9366c2c3a\""
Mar 10 01:08:02.023496 containerd[1595]: time="2026-03-10T01:08:02.023470201Z" level=info msg="StartContainer for \"88e17f08ef113316d64abd82f625f41d535c15a6d8efc4b989a4bcf9366c2c3a\""
Mar 10 01:08:02.127596 containerd[1595]: time="2026-03-10T01:08:02.127338620Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 01:08:02.133637 containerd[1595]: time="2026-03-10T01:08:02.132252752Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
Mar 10 01:08:02.140933 containerd[1595]: time="2026-03-10T01:08:02.140899712Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 01:08:02.146358 containerd[1595]: time="2026-03-10T01:08:02.144336195Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 4.568883732s"
Mar 10 01:08:02.146358 containerd[1595]: time="2026-03-10T01:08:02.144390106Z" level=info msg="PullImage 
\"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Mar 10 01:08:02.164875 containerd[1595]: time="2026-03-10T01:08:02.164568209Z" level=info msg="CreateContainer within sandbox \"4175b0afa68209f322a07cd65c801a46980a71abd638fc0dc8c94760d79dfe5a\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Mar 10 01:08:02.245458 containerd[1595]: time="2026-03-10T01:08:02.244691605Z" level=info msg="CreateContainer within sandbox \"4175b0afa68209f322a07cd65c801a46980a71abd638fc0dc8c94760d79dfe5a\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"25db3d9d2739304901a5410ec91e395a7ed9f4476fb33df4b68eba8e5cebb404\"" Mar 10 01:08:02.253454 containerd[1595]: time="2026-03-10T01:08:02.251951748Z" level=info msg="StartContainer for \"25db3d9d2739304901a5410ec91e395a7ed9f4476fb33df4b68eba8e5cebb404\"" Mar 10 01:08:02.356248 containerd[1595]: time="2026-03-10T01:08:02.355472904Z" level=info msg="StartContainer for \"88e17f08ef113316d64abd82f625f41d535c15a6d8efc4b989a4bcf9366c2c3a\" returns successfully" Mar 10 01:08:02.623232 containerd[1595]: time="2026-03-10T01:08:02.622615658Z" level=info msg="StartContainer for \"25db3d9d2739304901a5410ec91e395a7ed9f4476fb33df4b68eba8e5cebb404\" returns successfully" Mar 10 01:08:02.946689 kubelet[2997]: E0310 01:08:02.944266 2997 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:08:03.000973 kubelet[2997]: I0310 01:08:02.998464 2997 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Mar 10 01:08:03.134301 kubelet[2997]: I0310 01:08:03.132713 2997 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-w54w9" 
podStartSLOduration=8.632863689 podStartE2EDuration="1m4.132687518s" podCreationTimestamp="2026-03-10 01:06:59 +0000 UTC" firstStartedPulling="2026-03-10 01:07:06.647757609 +0000 UTC m=+22.414208412" lastFinishedPulling="2026-03-10 01:08:02.147581437 +0000 UTC m=+77.914032241" observedRunningTime="2026-03-10 01:08:03.032421486 +0000 UTC m=+78.798872319" watchObservedRunningTime="2026-03-10 01:08:03.132687518 +0000 UTC m=+78.899138321" Mar 10 01:08:03.268627 kubelet[2997]: I0310 01:08:03.268582 2997 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9d3b320e-0069-447f-8443-3f05868c734f-config-volume\") pod \"coredns-674b8bbfcf-dhsf4\" (UID: \"9d3b320e-0069-447f-8443-3f05868c734f\") " pod="kube-system/coredns-674b8bbfcf-dhsf4" Mar 10 01:08:03.273295 kubelet[2997]: I0310 01:08:03.273269 2997 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gcqpp\" (UniqueName: \"kubernetes.io/projected/9d3b320e-0069-447f-8443-3f05868c734f-kube-api-access-gcqpp\") pod \"coredns-674b8bbfcf-dhsf4\" (UID: \"9d3b320e-0069-447f-8443-3f05868c734f\") " pod="kube-system/coredns-674b8bbfcf-dhsf4" Mar 10 01:08:03.377451 kubelet[2997]: I0310 01:08:03.377015 2997 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcnrc\" (UniqueName: \"kubernetes.io/projected/a369754a-2877-486c-8651-cd3628999a5b-kube-api-access-pcnrc\") pod \"coredns-674b8bbfcf-6qdmd\" (UID: \"a369754a-2877-486c-8651-cd3628999a5b\") " pod="kube-system/coredns-674b8bbfcf-6qdmd" Mar 10 01:08:03.377451 kubelet[2997]: I0310 01:08:03.377322 2997 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a369754a-2877-486c-8651-cd3628999a5b-config-volume\") pod \"coredns-674b8bbfcf-6qdmd\" (UID: 
\"a369754a-2877-486c-8651-cd3628999a5b\") " pod="kube-system/coredns-674b8bbfcf-6qdmd" Mar 10 01:08:03.512218 kubelet[2997]: E0310 01:08:03.508871 2997 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:08:03.512640 containerd[1595]: time="2026-03-10T01:08:03.512605674Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-dhsf4,Uid:9d3b320e-0069-447f-8443-3f05868c734f,Namespace:kube-system,Attempt:0,}" Mar 10 01:08:03.842342 kubelet[2997]: E0310 01:08:03.837458 2997 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:08:03.842498 containerd[1595]: time="2026-03-10T01:08:03.839946299Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6qdmd,Uid:a369754a-2877-486c-8651-cd3628999a5b,Namespace:kube-system,Attempt:0,}" Mar 10 01:08:03.943257 kubelet[2997]: E0310 01:08:03.940743 2997 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:08:03.950000 kubelet[2997]: E0310 01:08:03.949965 2997 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:08:04.031608 kubelet[2997]: I0310 01:08:04.030678 2997 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-4q54t" podStartSLOduration=13.964792343 podStartE2EDuration="1m5.030658266s" podCreationTimestamp="2026-03-10 01:06:59 +0000 UTC" firstStartedPulling="2026-03-10 01:07:06.506715571 +0000 UTC m=+22.273166374" lastFinishedPulling="2026-03-10 01:07:57.572581494 +0000 UTC m=+73.339032297" observedRunningTime="2026-03-10 
01:08:04.009558173 +0000 UTC m=+79.776008987" watchObservedRunningTime="2026-03-10 01:08:04.030658266 +0000 UTC m=+79.797109069" Mar 10 01:08:04.565719 kubelet[2997]: E0310 01:08:04.565577 2997 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:08:04.851388 systemd[1]: run-containerd-runc-k8s.io-88e17f08ef113316d64abd82f625f41d535c15a6d8efc4b989a4bcf9366c2c3a-runc.8wzrIc.mount: Deactivated successfully. Mar 10 01:08:05.471003 kubelet[2997]: E0310 01:08:05.470662 2997 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:08:07.270583 systemd-networkd[1247]: cilium_host: Link UP Mar 10 01:08:07.270947 systemd-networkd[1247]: cilium_net: Link UP Mar 10 01:08:07.271451 systemd-networkd[1247]: cilium_net: Gained carrier Mar 10 01:08:07.271715 systemd-networkd[1247]: cilium_host: Gained carrier Mar 10 01:08:07.670316 systemd-networkd[1247]: cilium_host: Gained IPv6LL Mar 10 01:08:07.733587 systemd-networkd[1247]: cilium_vxlan: Link UP Mar 10 01:08:07.733697 systemd-networkd[1247]: cilium_vxlan: Gained carrier Mar 10 01:08:07.805280 systemd-networkd[1247]: cilium_net: Gained IPv6LL Mar 10 01:08:08.238342 kernel: NET: Registered PF_ALG protocol family Mar 10 01:08:09.724630 systemd-networkd[1247]: cilium_vxlan: Gained IPv6LL Mar 10 01:08:11.077463 systemd-networkd[1247]: lxc_health: Link UP Mar 10 01:08:11.093388 systemd-networkd[1247]: lxc_health: Gained carrier Mar 10 01:08:11.469270 kubelet[2997]: E0310 01:08:11.463640 2997 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:08:11.560313 systemd-networkd[1247]: lxc9e96aeaa2dd4: Link UP Mar 10 01:08:11.607389 kernel: eth0: renamed from 
tmp50e08 Mar 10 01:08:11.683434 systemd-networkd[1247]: lxc9e96aeaa2dd4: Gained carrier Mar 10 01:08:11.791588 systemd-networkd[1247]: lxc0089ccb78ca6: Link UP Mar 10 01:08:11.821278 kernel: eth0: renamed from tmpfbc58 Mar 10 01:08:11.855744 systemd-networkd[1247]: lxc0089ccb78ca6: Gained carrier Mar 10 01:08:12.000739 kubelet[2997]: E0310 01:08:12.000700 2997 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:08:12.733338 systemd-networkd[1247]: lxc_health: Gained IPv6LL Mar 10 01:08:12.991612 systemd-networkd[1247]: lxc9e96aeaa2dd4: Gained IPv6LL Mar 10 01:08:12.992614 systemd-networkd[1247]: lxc0089ccb78ca6: Gained IPv6LL Mar 10 01:08:13.564766 kubelet[2997]: E0310 01:08:13.564653 2997 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:08:29.777442 kubelet[2997]: E0310 01:08:29.775837 2997 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:08:29.812930 kubelet[2997]: E0310 01:08:29.812380 2997 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:08:36.164743 sudo[1808]: pam_unix(sudo:session): session closed for user root Mar 10 01:08:36.201679 sshd[1801]: pam_unix(sshd:session): session closed for user core Mar 10 01:08:36.232518 systemd[1]: sshd@8-10.0.0.67:22-10.0.0.1:37778.service: Deactivated successfully. Mar 10 01:08:36.269451 systemd-logind[1578]: Session 9 logged out. Waiting for processes to exit. Mar 10 01:08:36.275481 systemd[1]: session-9.scope: Deactivated successfully. 
Mar 10 01:08:36.276427 containerd[1595]: time="2026-03-10T01:08:36.275478328Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 10 01:08:36.284658 containerd[1595]: time="2026-03-10T01:08:36.278494708Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 10 01:08:36.284658 containerd[1595]: time="2026-03-10T01:08:36.278568105Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 10 01:08:36.284658 containerd[1595]: time="2026-03-10T01:08:36.278876392Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 10 01:08:36.297764 containerd[1595]: time="2026-03-10T01:08:36.296513176Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 10 01:08:36.297764 containerd[1595]: time="2026-03-10T01:08:36.297522977Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 10 01:08:36.297764 containerd[1595]: time="2026-03-10T01:08:36.297543506Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 10 01:08:36.299581 containerd[1595]: time="2026-03-10T01:08:36.297647820Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 10 01:08:36.330883 systemd-logind[1578]: Removed session 9. 
Mar 10 01:08:36.534862 systemd-resolved[1470]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 10 01:08:36.556873 systemd-resolved[1470]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 10 01:08:36.759366 containerd[1595]: time="2026-03-10T01:08:36.757907701Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-dhsf4,Uid:9d3b320e-0069-447f-8443-3f05868c734f,Namespace:kube-system,Attempt:0,} returns sandbox id \"50e081785c0108e465615aa98433857a4509176112472601f7950da2dee6a51f\"" Mar 10 01:08:36.780597 kubelet[2997]: E0310 01:08:36.779927 2997 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:08:36.786488 containerd[1595]: time="2026-03-10T01:08:36.785716593Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6qdmd,Uid:a369754a-2877-486c-8651-cd3628999a5b,Namespace:kube-system,Attempt:0,} returns sandbox id \"fbc58b1ac38fb950ceac94f54a444d093db49978251cb2e0a4cc751539b06f43\"" Mar 10 01:08:36.787415 kubelet[2997]: E0310 01:08:36.786702 2997 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:08:36.810763 containerd[1595]: time="2026-03-10T01:08:36.809739076Z" level=info msg="CreateContainer within sandbox \"50e081785c0108e465615aa98433857a4509176112472601f7950da2dee6a51f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 10 01:08:36.842442 containerd[1595]: time="2026-03-10T01:08:36.841621886Z" level=info msg="CreateContainer within sandbox \"fbc58b1ac38fb950ceac94f54a444d093db49978251cb2e0a4cc751539b06f43\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 10 01:08:36.985812 containerd[1595]: 
time="2026-03-10T01:08:36.983967028Z" level=info msg="CreateContainer within sandbox \"50e081785c0108e465615aa98433857a4509176112472601f7950da2dee6a51f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b20b22d49ad19856554643b89259e95d1bff11b3d3bd41843bb361439466fefc\"" Mar 10 01:08:36.987395 containerd[1595]: time="2026-03-10T01:08:36.986613754Z" level=info msg="CreateContainer within sandbox \"fbc58b1ac38fb950ceac94f54a444d093db49978251cb2e0a4cc751539b06f43\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e381300b60934278b1db9cbd204a07147e4a41d0d31bf7c6e9bedc261abae532\"" Mar 10 01:08:36.992393 containerd[1595]: time="2026-03-10T01:08:36.989585588Z" level=info msg="StartContainer for \"b20b22d49ad19856554643b89259e95d1bff11b3d3bd41843bb361439466fefc\"" Mar 10 01:08:36.992713 containerd[1595]: time="2026-03-10T01:08:36.992687449Z" level=info msg="StartContainer for \"e381300b60934278b1db9cbd204a07147e4a41d0d31bf7c6e9bedc261abae532\"" Mar 10 01:08:37.329357 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1138026773.mount: Deactivated successfully. 
Mar 10 01:08:37.466855 containerd[1595]: time="2026-03-10T01:08:37.465663096Z" level=info msg="StartContainer for \"b20b22d49ad19856554643b89259e95d1bff11b3d3bd41843bb361439466fefc\" returns successfully" Mar 10 01:08:37.483552 containerd[1595]: time="2026-03-10T01:08:37.479423699Z" level=info msg="StartContainer for \"e381300b60934278b1db9cbd204a07147e4a41d0d31bf7c6e9bedc261abae532\" returns successfully" Mar 10 01:08:38.078485 kubelet[2997]: E0310 01:08:38.077947 2997 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:08:38.203384 kubelet[2997]: E0310 01:08:38.196697 2997 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:08:38.283372 kubelet[2997]: I0310 01:08:38.280508 2997 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-6qdmd" podStartSLOduration=98.280489544 podStartE2EDuration="1m38.280489544s" podCreationTimestamp="2026-03-10 01:07:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 01:08:38.204562448 +0000 UTC m=+113.971013271" watchObservedRunningTime="2026-03-10 01:08:38.280489544 +0000 UTC m=+114.046940378" Mar 10 01:08:39.205918 kubelet[2997]: E0310 01:08:39.205686 2997 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:08:39.205918 kubelet[2997]: E0310 01:08:39.205847 2997 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:08:39.338646 kubelet[2997]: I0310 01:08:39.338573 2997 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-dhsf4" podStartSLOduration=100.338551645 podStartE2EDuration="1m40.338551645s" podCreationTimestamp="2026-03-10 01:06:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 01:08:38.307981269 +0000 UTC m=+114.074432072" watchObservedRunningTime="2026-03-10 01:08:39.338551645 +0000 UTC m=+115.105002449" Mar 10 01:08:40.214745 kubelet[2997]: E0310 01:08:40.214707 2997 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:08:40.227718 kubelet[2997]: E0310 01:08:40.217859 2997 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:08:41.221492 kubelet[2997]: E0310 01:08:41.218665 2997 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:09:26.557616 kubelet[2997]: E0310 01:09:26.556801 2997 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:09:32.562393 kubelet[2997]: E0310 01:09:32.561851 2997 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:09:36.798936 kubelet[2997]: E0310 01:09:36.758954 2997 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:09:37.134292 kubelet[2997]: E0310 01:09:37.131050 2997 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:09:37.285840 kubelet[2997]: E0310 01:09:37.285390 2997 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.085s" Mar 10 01:09:44.551274 kubelet[2997]: E0310 01:09:44.549343 2997 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:09:45.549305 kubelet[2997]: E0310 01:09:45.548291 2997 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:09:58.565433 kubelet[2997]: E0310 01:09:58.564787 2997 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:10:01.558950 kubelet[2997]: E0310 01:10:01.558606 2997 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:10:45.560985 kubelet[2997]: E0310 01:10:45.559652 2997 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:10:49.562433 kubelet[2997]: E0310 01:10:49.558975 2997 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:10:53.551823 kubelet[2997]: E0310 01:10:53.550680 2997 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 
1.0.0.1 8.8.8.8" Mar 10 01:10:56.557402 kubelet[2997]: E0310 01:10:56.554981 2997 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:10:58.549573 kubelet[2997]: E0310 01:10:58.548995 2997 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:10:59.243875 systemd[1]: Started sshd@9-10.0.0.67:22-10.0.0.1:41488.service - OpenSSH per-connection server daemon (10.0.0.1:41488). Mar 10 01:10:59.582587 sshd[4562]: Accepted publickey for core from 10.0.0.1 port 41488 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc Mar 10 01:10:59.592505 sshd[4562]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:10:59.653374 systemd-logind[1578]: New session 10 of user core. Mar 10 01:10:59.680503 systemd[1]: Started session-10.scope - Session 10 of User core. Mar 10 01:11:00.432394 sshd[4562]: pam_unix(sshd:session): session closed for user core Mar 10 01:11:00.462625 systemd[1]: sshd@9-10.0.0.67:22-10.0.0.1:41488.service: Deactivated successfully. Mar 10 01:11:00.486012 systemd[1]: session-10.scope: Deactivated successfully. Mar 10 01:11:00.488839 systemd-logind[1578]: Session 10 logged out. Waiting for processes to exit. Mar 10 01:11:00.504011 systemd-logind[1578]: Removed session 10. 
Mar 10 01:11:01.551988 kubelet[2997]: E0310 01:11:01.549598 2997 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:11:01.554937 kubelet[2997]: E0310 01:11:01.552342 2997 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:11:05.455558 systemd[1]: Started sshd@10-10.0.0.67:22-10.0.0.1:57118.service - OpenSSH per-connection server daemon (10.0.0.1:57118). Mar 10 01:11:05.573997 sshd[4587]: Accepted publickey for core from 10.0.0.1 port 57118 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc Mar 10 01:11:05.582537 sshd[4587]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:11:05.606531 systemd-logind[1578]: New session 11 of user core. Mar 10 01:11:05.616967 systemd[1]: Started session-11.scope - Session 11 of User core. Mar 10 01:11:05.993961 sshd[4587]: pam_unix(sshd:session): session closed for user core Mar 10 01:11:06.001875 systemd[1]: sshd@10-10.0.0.67:22-10.0.0.1:57118.service: Deactivated successfully. Mar 10 01:11:06.014975 systemd[1]: session-11.scope: Deactivated successfully. Mar 10 01:11:06.015455 systemd-logind[1578]: Session 11 logged out. Waiting for processes to exit. Mar 10 01:11:06.033980 systemd-logind[1578]: Removed session 11. Mar 10 01:11:11.019911 systemd[1]: Started sshd@11-10.0.0.67:22-10.0.0.1:57122.service - OpenSSH per-connection server daemon (10.0.0.1:57122). Mar 10 01:11:11.227031 sshd[4604]: Accepted publickey for core from 10.0.0.1 port 57122 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc Mar 10 01:11:11.240909 sshd[4604]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:11:11.291431 systemd-logind[1578]: New session 12 of user core. 
Mar 10 01:11:11.318914 systemd[1]: Started session-12.scope - Session 12 of User core. Mar 10 01:11:12.057413 sshd[4604]: pam_unix(sshd:session): session closed for user core Mar 10 01:11:12.072015 systemd[1]: sshd@11-10.0.0.67:22-10.0.0.1:57122.service: Deactivated successfully. Mar 10 01:11:12.092574 systemd[1]: session-12.scope: Deactivated successfully. Mar 10 01:11:12.093032 systemd-logind[1578]: Session 12 logged out. Waiting for processes to exit. Mar 10 01:11:12.130540 systemd-logind[1578]: Removed session 12. Mar 10 01:11:17.062815 systemd[1]: Started sshd@12-10.0.0.67:22-10.0.0.1:48258.service - OpenSSH per-connection server daemon (10.0.0.1:48258). Mar 10 01:11:17.117636 sshd[4624]: Accepted publickey for core from 10.0.0.1 port 48258 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc Mar 10 01:11:17.121248 sshd[4624]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:11:17.133051 systemd-logind[1578]: New session 13 of user core. Mar 10 01:11:17.152258 systemd[1]: Started session-13.scope - Session 13 of User core. Mar 10 01:11:17.421746 sshd[4624]: pam_unix(sshd:session): session closed for user core Mar 10 01:11:17.430454 systemd[1]: sshd@12-10.0.0.67:22-10.0.0.1:48258.service: Deactivated successfully. Mar 10 01:11:17.437540 systemd[1]: session-13.scope: Deactivated successfully. Mar 10 01:11:17.438466 systemd-logind[1578]: Session 13 logged out. Waiting for processes to exit. Mar 10 01:11:17.453327 systemd-logind[1578]: Removed session 13. Mar 10 01:11:22.446018 systemd[1]: Started sshd@13-10.0.0.67:22-10.0.0.1:59238.service - OpenSSH per-connection server daemon (10.0.0.1:59238). 
Mar 10 01:11:22.632841 sshd[4644]: Accepted publickey for core from 10.0.0.1 port 59238 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc Mar 10 01:11:22.636466 sshd[4644]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:11:22.684231 systemd-logind[1578]: New session 14 of user core. Mar 10 01:11:22.690250 systemd[1]: Started session-14.scope - Session 14 of User core. Mar 10 01:11:23.074491 sshd[4644]: pam_unix(sshd:session): session closed for user core Mar 10 01:11:23.084884 systemd[1]: sshd@13-10.0.0.67:22-10.0.0.1:59238.service: Deactivated successfully. Mar 10 01:11:23.090875 systemd-logind[1578]: Session 14 logged out. Waiting for processes to exit. Mar 10 01:11:23.096726 systemd[1]: session-14.scope: Deactivated successfully. Mar 10 01:11:23.102787 systemd-logind[1578]: Removed session 14. Mar 10 01:11:24.553827 kubelet[2997]: E0310 01:11:24.553366 2997 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:11:28.086650 systemd[1]: Started sshd@14-10.0.0.67:22-10.0.0.1:59240.service - OpenSSH per-connection server daemon (10.0.0.1:59240). Mar 10 01:11:28.148778 sshd[4661]: Accepted publickey for core from 10.0.0.1 port 59240 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc Mar 10 01:11:28.154780 sshd[4661]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:11:28.169934 systemd-logind[1578]: New session 15 of user core. Mar 10 01:11:28.180748 systemd[1]: Started session-15.scope - Session 15 of User core. Mar 10 01:11:28.434365 sshd[4661]: pam_unix(sshd:session): session closed for user core Mar 10 01:11:28.448699 systemd[1]: sshd@14-10.0.0.67:22-10.0.0.1:59240.service: Deactivated successfully. Mar 10 01:11:28.457920 systemd-logind[1578]: Session 15 logged out. Waiting for processes to exit. 
Mar 10 01:11:28.459787 systemd[1]: session-15.scope: Deactivated successfully.
Mar 10 01:11:28.464591 systemd-logind[1578]: Removed session 15.
Mar 10 01:11:33.463356 systemd[1]: Started sshd@15-10.0.0.67:22-10.0.0.1:50668.service - OpenSSH per-connection server daemon (10.0.0.1:50668).
Mar 10 01:11:33.526526 sshd[4680]: Accepted publickey for core from 10.0.0.1 port 50668 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:11:33.530703 sshd[4680]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:11:33.558962 systemd-logind[1578]: New session 16 of user core.
Mar 10 01:11:33.567762 systemd[1]: Started session-16.scope - Session 16 of User core.
Mar 10 01:11:33.875704 sshd[4680]: pam_unix(sshd:session): session closed for user core
Mar 10 01:11:33.883759 systemd[1]: sshd@15-10.0.0.67:22-10.0.0.1:50668.service: Deactivated successfully.
Mar 10 01:11:33.890739 systemd-logind[1578]: Session 16 logged out. Waiting for processes to exit.
Mar 10 01:11:33.891646 systemd[1]: session-16.scope: Deactivated successfully.
Mar 10 01:11:33.895950 systemd-logind[1578]: Removed session 16.
Mar 10 01:11:38.896067 systemd[1]: Started sshd@16-10.0.0.67:22-10.0.0.1:50674.service - OpenSSH per-connection server daemon (10.0.0.1:50674).
Mar 10 01:11:38.958990 sshd[4696]: Accepted publickey for core from 10.0.0.1 port 50674 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:11:38.963955 sshd[4696]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:11:38.978393 systemd-logind[1578]: New session 17 of user core.
Mar 10 01:11:38.996612 systemd[1]: Started session-17.scope - Session 17 of User core.
Mar 10 01:11:39.269886 sshd[4696]: pam_unix(sshd:session): session closed for user core
Mar 10 01:11:39.279642 systemd[1]: sshd@16-10.0.0.67:22-10.0.0.1:50674.service: Deactivated successfully.
Mar 10 01:11:39.284295 systemd-logind[1578]: Session 17 logged out. Waiting for processes to exit.
Mar 10 01:11:39.284534 systemd[1]: session-17.scope: Deactivated successfully.
Mar 10 01:11:39.288360 systemd-logind[1578]: Removed session 17.
Mar 10 01:11:44.287009 systemd[1]: Started sshd@17-10.0.0.67:22-10.0.0.1:57112.service - OpenSSH per-connection server daemon (10.0.0.1:57112).
Mar 10 01:11:44.363855 sshd[4713]: Accepted publickey for core from 10.0.0.1 port 57112 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:11:44.368757 sshd[4713]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:11:44.386708 systemd-logind[1578]: New session 18 of user core.
Mar 10 01:11:44.391812 systemd[1]: Started session-18.scope - Session 18 of User core.
Mar 10 01:11:44.755777 sshd[4713]: pam_unix(sshd:session): session closed for user core
Mar 10 01:11:44.764616 systemd[1]: sshd@17-10.0.0.67:22-10.0.0.1:57112.service: Deactivated successfully.
Mar 10 01:11:44.774779 systemd-logind[1578]: Session 18 logged out. Waiting for processes to exit.
Mar 10 01:11:44.777506 systemd[1]: session-18.scope: Deactivated successfully.
Mar 10 01:11:44.782066 systemd-logind[1578]: Removed session 18.
Mar 10 01:11:49.572318 kubelet[2997]: E0310 01:11:49.571716    2997 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:11:49.788052 systemd[1]: Started sshd@18-10.0.0.67:22-10.0.0.1:57124.service - OpenSSH per-connection server daemon (10.0.0.1:57124).
Mar 10 01:11:49.864552 sshd[4734]: Accepted publickey for core from 10.0.0.1 port 57124 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:11:49.867763 sshd[4734]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:11:49.884304 systemd-logind[1578]: New session 19 of user core.
Mar 10 01:11:49.899908 systemd[1]: Started session-19.scope - Session 19 of User core.
Mar 10 01:11:50.288825 sshd[4734]: pam_unix(sshd:session): session closed for user core
Mar 10 01:11:50.299689 systemd[1]: sshd@18-10.0.0.67:22-10.0.0.1:57124.service: Deactivated successfully.
Mar 10 01:11:50.308974 systemd-logind[1578]: Session 19 logged out. Waiting for processes to exit.
Mar 10 01:11:50.309811 systemd[1]: session-19.scope: Deactivated successfully.
Mar 10 01:11:50.313626 systemd-logind[1578]: Removed session 19.
Mar 10 01:11:55.311680 systemd[1]: Started sshd@19-10.0.0.67:22-10.0.0.1:59496.service - OpenSSH per-connection server daemon (10.0.0.1:59496).
Mar 10 01:11:55.387276 sshd[4750]: Accepted publickey for core from 10.0.0.1 port 59496 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:11:55.391781 sshd[4750]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:11:55.413258 systemd-logind[1578]: New session 20 of user core.
Mar 10 01:11:55.441300 systemd[1]: Started session-20.scope - Session 20 of User core.
Mar 10 01:11:55.789933 sshd[4750]: pam_unix(sshd:session): session closed for user core
Mar 10 01:11:55.800780 systemd[1]: sshd@19-10.0.0.67:22-10.0.0.1:59496.service: Deactivated successfully.
Mar 10 01:11:55.808250 systemd-logind[1578]: Session 20 logged out. Waiting for processes to exit.
Mar 10 01:11:55.808282 systemd[1]: session-20.scope: Deactivated successfully.
Mar 10 01:11:55.814695 systemd-logind[1578]: Removed session 20.
Mar 10 01:12:00.814674 systemd[1]: Started sshd@20-10.0.0.67:22-10.0.0.1:59504.service - OpenSSH per-connection server daemon (10.0.0.1:59504).
Mar 10 01:12:00.942965 sshd[4767]: Accepted publickey for core from 10.0.0.1 port 59504 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:12:00.951969 sshd[4767]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:12:00.972546 systemd-logind[1578]: New session 21 of user core.
Mar 10 01:12:00.990585 systemd[1]: Started session-21.scope - Session 21 of User core.
Mar 10 01:12:01.304946 sshd[4767]: pam_unix(sshd:session): session closed for user core
Mar 10 01:12:01.314886 systemd[1]: sshd@20-10.0.0.67:22-10.0.0.1:59504.service: Deactivated successfully.
Mar 10 01:12:01.324468 systemd-logind[1578]: Session 21 logged out. Waiting for processes to exit.
Mar 10 01:12:01.324583 systemd[1]: session-21.scope: Deactivated successfully.
Mar 10 01:12:01.328949 systemd-logind[1578]: Removed session 21.
Mar 10 01:12:06.338741 systemd[1]: Started sshd@21-10.0.0.67:22-10.0.0.1:48696.service - OpenSSH per-connection server daemon (10.0.0.1:48696).
Mar 10 01:12:06.414836 sshd[4787]: Accepted publickey for core from 10.0.0.1 port 48696 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:12:06.423054 sshd[4787]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:12:06.450620 systemd-logind[1578]: New session 22 of user core.
Mar 10 01:12:06.466891 systemd[1]: Started session-22.scope - Session 22 of User core.
Mar 10 01:12:06.558892 kubelet[2997]: E0310 01:12:06.556869    2997 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:12:06.842542 sshd[4787]: pam_unix(sshd:session): session closed for user core
Mar 10 01:12:06.870905 systemd[1]: Started sshd@22-10.0.0.67:22-10.0.0.1:48710.service - OpenSSH per-connection server daemon (10.0.0.1:48710).
Mar 10 01:12:06.871965 systemd[1]: sshd@21-10.0.0.67:22-10.0.0.1:48696.service: Deactivated successfully.
Mar 10 01:12:06.881451 systemd-logind[1578]: Session 22 logged out. Waiting for processes to exit.
Mar 10 01:12:06.881599 systemd[1]: session-22.scope: Deactivated successfully.
Mar 10 01:12:06.891992 systemd-logind[1578]: Removed session 22.
Mar 10 01:12:06.955866 sshd[4800]: Accepted publickey for core from 10.0.0.1 port 48710 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:12:06.958966 sshd[4800]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:12:06.983514 systemd-logind[1578]: New session 23 of user core.
Mar 10 01:12:07.000288 systemd[1]: Started session-23.scope - Session 23 of User core.
Mar 10 01:12:07.562306 sshd[4800]: pam_unix(sshd:session): session closed for user core
Mar 10 01:12:07.577706 systemd[1]: Started sshd@23-10.0.0.67:22-10.0.0.1:48716.service - OpenSSH per-connection server daemon (10.0.0.1:48716).
Mar 10 01:12:07.580296 systemd-logind[1578]: Session 23 logged out. Waiting for processes to exit.
Mar 10 01:12:07.585871 systemd[1]: sshd@22-10.0.0.67:22-10.0.0.1:48710.service: Deactivated successfully.
Mar 10 01:12:07.614670 systemd[1]: session-23.scope: Deactivated successfully.
Mar 10 01:12:07.625630 systemd-logind[1578]: Removed session 23.
Mar 10 01:12:07.711485 sshd[4814]: Accepted publickey for core from 10.0.0.1 port 48716 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:12:07.719689 sshd[4814]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:12:07.731546 systemd-logind[1578]: New session 24 of user core.
Mar 10 01:12:07.755658 systemd[1]: Started session-24.scope - Session 24 of User core.
Mar 10 01:12:08.103738 sshd[4814]: pam_unix(sshd:session): session closed for user core
Mar 10 01:12:08.119592 systemd[1]: sshd@23-10.0.0.67:22-10.0.0.1:48716.service: Deactivated successfully.
Mar 10 01:12:08.128537 systemd-logind[1578]: Session 24 logged out. Waiting for processes to exit.
Mar 10 01:12:08.129877 systemd[1]: session-24.scope: Deactivated successfully.
Mar 10 01:12:08.135583 systemd-logind[1578]: Removed session 24.
Mar 10 01:12:09.569517 kubelet[2997]: E0310 01:12:09.568486    2997 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:12:12.565832 kubelet[2997]: E0310 01:12:12.565491    2997 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:12:13.131036 systemd[1]: Started sshd@24-10.0.0.67:22-10.0.0.1:42666.service - OpenSSH per-connection server daemon (10.0.0.1:42666).
Mar 10 01:12:13.220773 sshd[4833]: Accepted publickey for core from 10.0.0.1 port 42666 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:12:13.223875 sshd[4833]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:12:13.244936 systemd-logind[1578]: New session 25 of user core.
Mar 10 01:12:13.269774 systemd[1]: Started session-25.scope - Session 25 of User core.
Mar 10 01:12:13.654442 sshd[4833]: pam_unix(sshd:session): session closed for user core
Mar 10 01:12:13.670953 systemd[1]: sshd@24-10.0.0.67:22-10.0.0.1:42666.service: Deactivated successfully.
Mar 10 01:12:13.678812 systemd-logind[1578]: Session 25 logged out. Waiting for processes to exit.
Mar 10 01:12:13.681674 systemd[1]: session-25.scope: Deactivated successfully.
Mar 10 01:12:13.687407 systemd-logind[1578]: Removed session 25.
Mar 10 01:12:14.565978 kubelet[2997]: E0310 01:12:14.549902    2997 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:12:15.572971 kubelet[2997]: E0310 01:12:15.572745    2997 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:12:18.683970 systemd[1]: Started sshd@25-10.0.0.67:22-10.0.0.1:42672.service - OpenSSH per-connection server daemon (10.0.0.1:42672).
Mar 10 01:12:18.743016 sshd[4849]: Accepted publickey for core from 10.0.0.1 port 42672 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:12:18.747695 sshd[4849]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:12:18.761435 systemd-logind[1578]: New session 26 of user core.
Mar 10 01:12:18.774904 systemd[1]: Started session-26.scope - Session 26 of User core.
Mar 10 01:12:19.212768 sshd[4849]: pam_unix(sshd:session): session closed for user core
Mar 10 01:12:19.223991 systemd[1]: sshd@25-10.0.0.67:22-10.0.0.1:42672.service: Deactivated successfully.
Mar 10 01:12:19.240916 systemd[1]: session-26.scope: Deactivated successfully.
Mar 10 01:12:19.241860 systemd-logind[1578]: Session 26 logged out. Waiting for processes to exit.
Mar 10 01:12:19.257665 systemd-logind[1578]: Removed session 26.
Mar 10 01:12:20.555739 kubelet[2997]: E0310 01:12:20.555565    2997 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:12:24.233743 systemd[1]: Started sshd@26-10.0.0.67:22-10.0.0.1:45692.service - OpenSSH per-connection server daemon (10.0.0.1:45692).
Mar 10 01:12:24.335816 sshd[4865]: Accepted publickey for core from 10.0.0.1 port 45692 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:12:24.360630 sshd[4865]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:12:24.386000 systemd-logind[1578]: New session 27 of user core.
Mar 10 01:12:24.396615 systemd[1]: Started session-27.scope - Session 27 of User core.
Mar 10 01:12:24.760608 sshd[4865]: pam_unix(sshd:session): session closed for user core
Mar 10 01:12:24.773512 systemd[1]: sshd@26-10.0.0.67:22-10.0.0.1:45692.service: Deactivated successfully.
Mar 10 01:12:24.780768 systemd-logind[1578]: Session 27 logged out. Waiting for processes to exit.
Mar 10 01:12:24.783429 systemd[1]: session-27.scope: Deactivated successfully.
Mar 10 01:12:24.788817 systemd-logind[1578]: Removed session 27.
Mar 10 01:12:29.771444 systemd[1]: Started sshd@27-10.0.0.67:22-10.0.0.1:45708.service - OpenSSH per-connection server daemon (10.0.0.1:45708).
Mar 10 01:12:29.841764 sshd[4880]: Accepted publickey for core from 10.0.0.1 port 45708 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:12:29.847017 sshd[4880]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:12:29.866878 systemd-logind[1578]: New session 28 of user core.
Mar 10 01:12:29.874921 systemd[1]: Started session-28.scope - Session 28 of User core.
Mar 10 01:12:30.194622 sshd[4880]: pam_unix(sshd:session): session closed for user core
Mar 10 01:12:30.205784 systemd[1]: sshd@27-10.0.0.67:22-10.0.0.1:45708.service: Deactivated successfully.
Mar 10 01:12:30.217855 systemd[1]: session-28.scope: Deactivated successfully.
Mar 10 01:12:30.219002 systemd-logind[1578]: Session 28 logged out. Waiting for processes to exit.
Mar 10 01:12:30.226002 systemd-logind[1578]: Removed session 28.
Mar 10 01:12:35.217987 systemd[1]: Started sshd@28-10.0.0.67:22-10.0.0.1:42962.service - OpenSSH per-connection server daemon (10.0.0.1:42962).
Mar 10 01:12:35.345929 sshd[4898]: Accepted publickey for core from 10.0.0.1 port 42962 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:12:35.368385 sshd[4898]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:12:35.392709 systemd-logind[1578]: New session 29 of user core.
Mar 10 01:12:35.414030 systemd[1]: Started session-29.scope - Session 29 of User core.
Mar 10 01:12:35.768939 sshd[4898]: pam_unix(sshd:session): session closed for user core
Mar 10 01:12:35.784055 systemd[1]: sshd@28-10.0.0.67:22-10.0.0.1:42962.service: Deactivated successfully.
Mar 10 01:12:35.788804 systemd-logind[1578]: Session 29 logged out. Waiting for processes to exit.
Mar 10 01:12:35.790731 systemd[1]: session-29.scope: Deactivated successfully.
Mar 10 01:12:35.795712 systemd-logind[1578]: Removed session 29.
Mar 10 01:12:40.792745 systemd[1]: Started sshd@29-10.0.0.67:22-10.0.0.1:42972.service - OpenSSH per-connection server daemon (10.0.0.1:42972).
Mar 10 01:12:40.967488 sshd[4914]: Accepted publickey for core from 10.0.0.1 port 42972 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:12:40.969956 sshd[4914]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:12:41.010704 systemd-logind[1578]: New session 30 of user core.
Mar 10 01:12:41.029945 systemd[1]: Started session-30.scope - Session 30 of User core.
Mar 10 01:12:41.571516 sshd[4914]: pam_unix(sshd:session): session closed for user core
Mar 10 01:12:41.588378 systemd[1]: sshd@29-10.0.0.67:22-10.0.0.1:42972.service: Deactivated successfully.
Mar 10 01:12:41.599396 systemd[1]: session-30.scope: Deactivated successfully.
Mar 10 01:12:41.599769 systemd-logind[1578]: Session 30 logged out. Waiting for processes to exit.
Mar 10 01:12:41.607742 systemd-logind[1578]: Removed session 30.
Mar 10 01:12:44.567843 kubelet[2997]: E0310 01:12:44.550798    2997 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:12:46.600840 systemd[1]: Started sshd@30-10.0.0.67:22-10.0.0.1:52666.service - OpenSSH per-connection server daemon (10.0.0.1:52666).
Mar 10 01:12:46.700513 sshd[4930]: Accepted publickey for core from 10.0.0.1 port 52666 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:12:46.703047 sshd[4930]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:12:46.720418 systemd-logind[1578]: New session 31 of user core.
Mar 10 01:12:46.735637 systemd[1]: Started session-31.scope - Session 31 of User core.
Mar 10 01:12:47.273616 sshd[4930]: pam_unix(sshd:session): session closed for user core
Mar 10 01:12:47.285797 systemd[1]: sshd@30-10.0.0.67:22-10.0.0.1:52666.service: Deactivated successfully.
Mar 10 01:12:47.299233 systemd-logind[1578]: Session 31 logged out. Waiting for processes to exit.
Mar 10 01:12:47.299413 systemd[1]: session-31.scope: Deactivated successfully.
Mar 10 01:12:47.306048 systemd-logind[1578]: Removed session 31.
Mar 10 01:12:51.554690 kubelet[2997]: E0310 01:12:51.553952    2997 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:12:52.296887 systemd[1]: Started sshd@31-10.0.0.67:22-10.0.0.1:35878.service - OpenSSH per-connection server daemon (10.0.0.1:35878).
Mar 10 01:12:52.476849 sshd[4947]: Accepted publickey for core from 10.0.0.1 port 35878 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:12:52.480635 sshd[4947]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:12:52.507025 systemd-logind[1578]: New session 32 of user core.
Mar 10 01:12:52.533641 systemd[1]: Started session-32.scope - Session 32 of User core.
Mar 10 01:12:52.974718 sshd[4947]: pam_unix(sshd:session): session closed for user core
Mar 10 01:12:52.991711 systemd[1]: sshd@31-10.0.0.67:22-10.0.0.1:35878.service: Deactivated successfully.
Mar 10 01:12:53.006803 systemd[1]: session-32.scope: Deactivated successfully.
Mar 10 01:12:53.017729 systemd-logind[1578]: Session 32 logged out. Waiting for processes to exit.
Mar 10 01:12:53.024056 systemd-logind[1578]: Removed session 32.
Mar 10 01:12:57.999412 systemd[1]: Started sshd@32-10.0.0.67:22-10.0.0.1:35892.service - OpenSSH per-connection server daemon (10.0.0.1:35892).
Mar 10 01:12:58.119816 sshd[4962]: Accepted publickey for core from 10.0.0.1 port 35892 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:12:58.126710 sshd[4962]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:12:58.162473 systemd-logind[1578]: New session 33 of user core.
Mar 10 01:12:58.175768 systemd[1]: Started session-33.scope - Session 33 of User core.
Mar 10 01:12:58.678868 sshd[4962]: pam_unix(sshd:session): session closed for user core
Mar 10 01:12:58.699879 systemd[1]: sshd@32-10.0.0.67:22-10.0.0.1:35892.service: Deactivated successfully.
Mar 10 01:12:58.711526 systemd[1]: session-33.scope: Deactivated successfully.
Mar 10 01:12:58.717868 systemd-logind[1578]: Session 33 logged out. Waiting for processes to exit.
Mar 10 01:12:58.729724 systemd-logind[1578]: Removed session 33.
Mar 10 01:13:03.717995 systemd[1]: Started sshd@33-10.0.0.67:22-10.0.0.1:50272.service - OpenSSH per-connection server daemon (10.0.0.1:50272).
Mar 10 01:13:03.875957 sshd[4980]: Accepted publickey for core from 10.0.0.1 port 50272 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:13:03.888663 sshd[4980]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:13:03.908579 systemd-logind[1578]: New session 34 of user core.
Mar 10 01:13:03.916897 systemd[1]: Started session-34.scope - Session 34 of User core.
Mar 10 01:13:04.738068 sshd[4980]: pam_unix(sshd:session): session closed for user core
Mar 10 01:13:04.772723 systemd[1]: sshd@33-10.0.0.67:22-10.0.0.1:50272.service: Deactivated successfully.
Mar 10 01:13:04.779373 systemd-logind[1578]: Session 34 logged out. Waiting for processes to exit.
Mar 10 01:13:04.781696 systemd[1]: session-34.scope: Deactivated successfully.
Mar 10 01:13:04.787880 systemd-logind[1578]: Removed session 34.
Mar 10 01:13:09.777939 systemd[1]: Started sshd@34-10.0.0.67:22-10.0.0.1:50282.service - OpenSSH per-connection server daemon (10.0.0.1:50282).
Mar 10 01:13:09.854599 sshd[4996]: Accepted publickey for core from 10.0.0.1 port 50282 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:13:09.859846 sshd[4996]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:13:09.880701 systemd-logind[1578]: New session 35 of user core.
Mar 10 01:13:09.900758 systemd[1]: Started session-35.scope - Session 35 of User core.
Mar 10 01:13:10.298693 sshd[4996]: pam_unix(sshd:session): session closed for user core
Mar 10 01:13:10.310906 systemd[1]: Started sshd@35-10.0.0.67:22-10.0.0.1:50286.service - OpenSSH per-connection server daemon (10.0.0.1:50286).
Mar 10 01:13:10.318040 systemd[1]: sshd@34-10.0.0.67:22-10.0.0.1:50282.service: Deactivated successfully.
Mar 10 01:13:10.328723 systemd-logind[1578]: Session 35 logged out. Waiting for processes to exit.
Mar 10 01:13:10.332910 systemd[1]: session-35.scope: Deactivated successfully.
Mar 10 01:13:10.349545 systemd-logind[1578]: Removed session 35.
Mar 10 01:13:10.461930 sshd[5009]: Accepted publickey for core from 10.0.0.1 port 50286 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:13:10.466750 sshd[5009]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:13:10.488950 systemd-logind[1578]: New session 36 of user core.
Mar 10 01:13:10.503778 systemd[1]: Started session-36.scope - Session 36 of User core.
Mar 10 01:13:11.557555 kubelet[2997]: E0310 01:13:11.557498    2997 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:13:11.932709 sshd[5009]: pam_unix(sshd:session): session closed for user core
Mar 10 01:13:11.965920 systemd[1]: Started sshd@36-10.0.0.67:22-10.0.0.1:50298.service - OpenSSH per-connection server daemon (10.0.0.1:50298).
Mar 10 01:13:11.967817 systemd[1]: sshd@35-10.0.0.67:22-10.0.0.1:50286.service: Deactivated successfully.
Mar 10 01:13:11.975614 systemd[1]: session-36.scope: Deactivated successfully.
Mar 10 01:13:11.990493 systemd-logind[1578]: Session 36 logged out. Waiting for processes to exit.
Mar 10 01:13:12.004692 systemd-logind[1578]: Removed session 36.
Mar 10 01:13:12.072877 sshd[5023]: Accepted publickey for core from 10.0.0.1 port 50298 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:13:12.077833 sshd[5023]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:13:12.099884 systemd-logind[1578]: New session 37 of user core.
Mar 10 01:13:12.120988 systemd[1]: Started session-37.scope - Session 37 of User core.
Mar 10 01:13:13.778604 sshd[5023]: pam_unix(sshd:session): session closed for user core
Mar 10 01:13:13.801479 systemd[1]: Started sshd@37-10.0.0.67:22-10.0.0.1:52876.service - OpenSSH per-connection server daemon (10.0.0.1:52876).
Mar 10 01:13:13.803788 systemd[1]: sshd@36-10.0.0.67:22-10.0.0.1:50298.service: Deactivated successfully.
Mar 10 01:13:13.810045 systemd[1]: session-37.scope: Deactivated successfully.
Mar 10 01:13:13.821516 systemd-logind[1578]: Session 37 logged out. Waiting for processes to exit.
Mar 10 01:13:13.835509 systemd-logind[1578]: Removed session 37.
Mar 10 01:13:13.914976 sshd[5047]: Accepted publickey for core from 10.0.0.1 port 52876 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:13:13.918235 sshd[5047]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:13:13.934802 systemd-logind[1578]: New session 38 of user core.
Mar 10 01:13:13.952830 systemd[1]: Started session-38.scope - Session 38 of User core.
Mar 10 01:13:14.523684 sshd[5047]: pam_unix(sshd:session): session closed for user core
Mar 10 01:13:14.527999 systemd[1]: Started sshd@38-10.0.0.67:22-10.0.0.1:52884.service - OpenSSH per-connection server daemon (10.0.0.1:52884).
Mar 10 01:13:14.548496 systemd[1]: sshd@37-10.0.0.67:22-10.0.0.1:52876.service: Deactivated successfully.
Mar 10 01:13:14.569007 systemd[1]: session-38.scope: Deactivated successfully.
Mar 10 01:13:14.570220 systemd-logind[1578]: Session 38 logged out. Waiting for processes to exit.
Mar 10 01:13:14.574663 systemd-logind[1578]: Removed session 38.
Mar 10 01:13:14.610983 sshd[5062]: Accepted publickey for core from 10.0.0.1 port 52884 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:13:14.614660 sshd[5062]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:13:14.628217 systemd-logind[1578]: New session 39 of user core.
Mar 10 01:13:14.656681 systemd[1]: Started session-39.scope - Session 39 of User core.
Mar 10 01:13:14.932968 sshd[5062]: pam_unix(sshd:session): session closed for user core
Mar 10 01:13:14.942659 systemd[1]: sshd@38-10.0.0.67:22-10.0.0.1:52884.service: Deactivated successfully.
Mar 10 01:13:14.953868 systemd[1]: session-39.scope: Deactivated successfully.
Mar 10 01:13:14.953879 systemd-logind[1578]: Session 39 logged out. Waiting for processes to exit.
Mar 10 01:13:14.962509 systemd-logind[1578]: Removed session 39.
Mar 10 01:13:19.974916 systemd[1]: Started sshd@39-10.0.0.67:22-10.0.0.1:52898.service - OpenSSH per-connection server daemon (10.0.0.1:52898).
Mar 10 01:13:20.068472 sshd[5080]: Accepted publickey for core from 10.0.0.1 port 52898 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:13:20.071565 sshd[5080]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:13:20.086561 systemd-logind[1578]: New session 40 of user core.
Mar 10 01:13:20.095352 systemd[1]: Started session-40.scope - Session 40 of User core.
Mar 10 01:13:20.401826 sshd[5080]: pam_unix(sshd:session): session closed for user core
Mar 10 01:13:20.414385 systemd[1]: sshd@39-10.0.0.67:22-10.0.0.1:52898.service: Deactivated successfully.
Mar 10 01:13:20.421857 systemd[1]: session-40.scope: Deactivated successfully.
Mar 10 01:13:20.422435 systemd-logind[1578]: Session 40 logged out. Waiting for processes to exit.
Mar 10 01:13:20.427418 systemd-logind[1578]: Removed session 40.
Mar 10 01:13:21.559596 kubelet[2997]: E0310 01:13:21.558705    2997 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:13:24.556568 kubelet[2997]: E0310 01:13:24.555976    2997 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:13:25.428971 systemd[1]: Started sshd@40-10.0.0.67:22-10.0.0.1:52852.service - OpenSSH per-connection server daemon (10.0.0.1:52852).
Mar 10 01:13:25.513850 sshd[5095]: Accepted publickey for core from 10.0.0.1 port 52852 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:13:25.518333 sshd[5095]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:13:25.558879 systemd-logind[1578]: New session 41 of user core.
Mar 10 01:13:25.566476 systemd[1]: Started session-41.scope - Session 41 of User core.
Mar 10 01:13:25.898887 sshd[5095]: pam_unix(sshd:session): session closed for user core
Mar 10 01:13:25.915976 systemd-logind[1578]: Session 41 logged out. Waiting for processes to exit.
Mar 10 01:13:25.916418 systemd[1]: sshd@40-10.0.0.67:22-10.0.0.1:52852.service: Deactivated successfully.
Mar 10 01:13:25.926634 systemd[1]: session-41.scope: Deactivated successfully.
Mar 10 01:13:25.929508 systemd-logind[1578]: Removed session 41.
Mar 10 01:13:27.557361 kubelet[2997]: E0310 01:13:27.556903    2997 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:13:28.564498 kubelet[2997]: E0310 01:13:28.564374    2997 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:13:30.934679 systemd[1]: Started sshd@41-10.0.0.67:22-10.0.0.1:52862.service - OpenSSH per-connection server daemon (10.0.0.1:52862).
Mar 10 01:13:31.029707 sshd[5110]: Accepted publickey for core from 10.0.0.1 port 52862 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:13:31.037732 sshd[5110]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:13:31.063572 systemd-logind[1578]: New session 42 of user core.
Mar 10 01:13:31.074443 systemd[1]: Started session-42.scope - Session 42 of User core.
Mar 10 01:13:31.518919 sshd[5110]: pam_unix(sshd:session): session closed for user core
Mar 10 01:13:31.529610 systemd[1]: sshd@41-10.0.0.67:22-10.0.0.1:52862.service: Deactivated successfully.
Mar 10 01:13:31.545474 systemd[1]: session-42.scope: Deactivated successfully.
Mar 10 01:13:31.547730 systemd-logind[1578]: Session 42 logged out. Waiting for processes to exit.
Mar 10 01:13:31.571826 systemd-logind[1578]: Removed session 42.
Mar 10 01:13:36.550870 systemd[1]: Started sshd@42-10.0.0.67:22-10.0.0.1:36186.service - OpenSSH per-connection server daemon (10.0.0.1:36186).
Mar 10 01:13:36.625573 sshd[5128]: Accepted publickey for core from 10.0.0.1 port 36186 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:13:36.629491 sshd[5128]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:13:36.659682 systemd-logind[1578]: New session 43 of user core.
Mar 10 01:13:36.670231 systemd[1]: Started session-43.scope - Session 43 of User core.
Mar 10 01:13:37.037926 sshd[5128]: pam_unix(sshd:session): session closed for user core
Mar 10 01:13:37.053382 systemd[1]: sshd@42-10.0.0.67:22-10.0.0.1:36186.service: Deactivated successfully.
Mar 10 01:13:37.068680 systemd[1]: session-43.scope: Deactivated successfully.
Mar 10 01:13:37.072596 systemd-logind[1578]: Session 43 logged out. Waiting for processes to exit.
Mar 10 01:13:37.077484 systemd-logind[1578]: Removed session 43.
Mar 10 01:13:38.552865 kubelet[2997]: E0310 01:13:38.552717    2997 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:13:42.080707 systemd[1]: Started sshd@43-10.0.0.67:22-10.0.0.1:50830.service - OpenSSH per-connection server daemon (10.0.0.1:50830).
Mar 10 01:13:42.202248 sshd[5144]: Accepted publickey for core from 10.0.0.1 port 50830 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:13:42.210488 sshd[5144]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:13:42.236360 systemd-logind[1578]: New session 44 of user core.
Mar 10 01:13:42.274773 systemd[1]: Started session-44.scope - Session 44 of User core.
Mar 10 01:13:42.579484 sshd[5144]: pam_unix(sshd:session): session closed for user core
Mar 10 01:13:42.588853 systemd[1]: sshd@43-10.0.0.67:22-10.0.0.1:50830.service: Deactivated successfully.
Mar 10 01:13:42.596389 systemd-logind[1578]: Session 44 logged out. Waiting for processes to exit.
Mar 10 01:13:42.596675 systemd[1]: session-44.scope: Deactivated successfully.
Mar 10 01:13:42.601658 systemd-logind[1578]: Removed session 44.
Mar 10 01:13:47.616505 systemd[1]: Started sshd@44-10.0.0.67:22-10.0.0.1:50840.service - OpenSSH per-connection server daemon (10.0.0.1:50840).
Mar 10 01:13:47.692409 sshd[5162]: Accepted publickey for core from 10.0.0.1 port 50840 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:13:47.697633 sshd[5162]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:13:47.720784 systemd-logind[1578]: New session 45 of user core.
Mar 10 01:13:47.737419 systemd[1]: Started session-45.scope - Session 45 of User core.
Mar 10 01:13:48.137617 sshd[5162]: pam_unix(sshd:session): session closed for user core
Mar 10 01:13:48.180479 systemd[1]: sshd@44-10.0.0.67:22-10.0.0.1:50840.service: Deactivated successfully.
Mar 10 01:13:48.188871 systemd-logind[1578]: Session 45 logged out. Waiting for processes to exit.
Mar 10 01:13:48.193595 systemd[1]: session-45.scope: Deactivated successfully.
Mar 10 01:13:48.199471 systemd-logind[1578]: Removed session 45.
Mar 10 01:13:52.552656 kubelet[2997]: E0310 01:13:52.551487    2997 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:13:53.157896 systemd[1]: Started sshd@45-10.0.0.67:22-10.0.0.1:50516.service - OpenSSH per-connection server daemon (10.0.0.1:50516).
Mar 10 01:13:53.234772 sshd[5179]: Accepted publickey for core from 10.0.0.1 port 50516 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:13:53.237950 sshd[5179]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:13:53.272784 systemd-logind[1578]: New session 46 of user core.
Mar 10 01:13:53.293891 systemd[1]: Started session-46.scope - Session 46 of User core.
Mar 10 01:13:53.629932 sshd[5179]: pam_unix(sshd:session): session closed for user core
Mar 10 01:13:53.651039 systemd[1]: Started sshd@46-10.0.0.67:22-10.0.0.1:50522.service - OpenSSH per-connection server daemon (10.0.0.1:50522).
Mar 10 01:13:53.669875 systemd[1]: sshd@45-10.0.0.67:22-10.0.0.1:50516.service: Deactivated successfully.
Mar 10 01:13:53.676949 systemd[1]: session-46.scope: Deactivated successfully.
Mar 10 01:13:53.684027 systemd-logind[1578]: Session 46 logged out. Waiting for processes to exit.
Mar 10 01:13:53.691440 systemd-logind[1578]: Removed session 46.
Mar 10 01:13:53.774745 sshd[5191]: Accepted publickey for core from 10.0.0.1 port 50522 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:13:53.779682 sshd[5191]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:13:53.793461 systemd-logind[1578]: New session 47 of user core.
Mar 10 01:13:53.804579 systemd[1]: Started session-47.scope - Session 47 of User core.
Mar 10 01:13:56.136734 containerd[1595]: time="2026-03-10T01:13:56.135587472Z" level=info msg="StopContainer for \"25db3d9d2739304901a5410ec91e395a7ed9f4476fb33df4b68eba8e5cebb404\" with timeout 30 (s)"
Mar 10 01:13:56.174775 containerd[1595]: time="2026-03-10T01:13:56.163390703Z" level=info msg="Stop container \"25db3d9d2739304901a5410ec91e395a7ed9f4476fb33df4b68eba8e5cebb404\" with signal terminated"
Mar 10 01:13:56.428940 containerd[1595]: time="2026-03-10T01:13:56.428523868Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 10 01:13:56.483446 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-25db3d9d2739304901a5410ec91e395a7ed9f4476fb33df4b68eba8e5cebb404-rootfs.mount: Deactivated successfully.
Mar 10 01:13:56.491733 containerd[1595]: time="2026-03-10T01:13:56.491397676Z" level=info msg="StopContainer for \"88e17f08ef113316d64abd82f625f41d535c15a6d8efc4b989a4bcf9366c2c3a\" with timeout 2 (s)"
Mar 10 01:13:56.504005 containerd[1595]: time="2026-03-10T01:13:56.492533362Z" level=info msg="Stop container \"88e17f08ef113316d64abd82f625f41d535c15a6d8efc4b989a4bcf9366c2c3a\" with signal terminated"
Mar 10 01:13:56.519618 containerd[1595]: time="2026-03-10T01:13:56.518740599Z" level=info msg="shim disconnected" id=25db3d9d2739304901a5410ec91e395a7ed9f4476fb33df4b68eba8e5cebb404 namespace=k8s.io
Mar 10 01:13:56.519618 containerd[1595]: time="2026-03-10T01:13:56.518818325Z" level=warning msg="cleaning up after shim disconnected" id=25db3d9d2739304901a5410ec91e395a7ed9f4476fb33df4b68eba8e5cebb404 namespace=k8s.io
Mar 10 01:13:56.519618 containerd[1595]: time="2026-03-10T01:13:56.518837821Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 10 01:13:56.565831 systemd-networkd[1247]: lxc_health: Link DOWN
Mar 10 01:13:56.565848 systemd-networkd[1247]: lxc_health: Lost carrier
Mar 10 01:13:56.715806 containerd[1595]: time="2026-03-10T01:13:56.715524378Z" level=info msg="StopContainer for \"25db3d9d2739304901a5410ec91e395a7ed9f4476fb33df4b68eba8e5cebb404\" returns successfully"
Mar 10 01:13:56.733634 containerd[1595]: time="2026-03-10T01:13:56.732386397Z" level=info msg="StopPodSandbox for \"4175b0afa68209f322a07cd65c801a46980a71abd638fc0dc8c94760d79dfe5a\""
Mar 10 01:13:56.733634 containerd[1595]: time="2026-03-10T01:13:56.732604524Z" level=info msg="Container to stop \"25db3d9d2739304901a5410ec91e395a7ed9f4476fb33df4b68eba8e5cebb404\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 10 01:13:56.737064 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4175b0afa68209f322a07cd65c801a46980a71abd638fc0dc8c94760d79dfe5a-shm.mount: Deactivated successfully.
Mar 10 01:13:56.743664 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-88e17f08ef113316d64abd82f625f41d535c15a6d8efc4b989a4bcf9366c2c3a-rootfs.mount: Deactivated successfully.
Mar 10 01:13:56.776027 containerd[1595]: time="2026-03-10T01:13:56.775602657Z" level=info msg="shim disconnected" id=88e17f08ef113316d64abd82f625f41d535c15a6d8efc4b989a4bcf9366c2c3a namespace=k8s.io
Mar 10 01:13:56.776027 containerd[1595]: time="2026-03-10T01:13:56.775997655Z" level=warning msg="cleaning up after shim disconnected" id=88e17f08ef113316d64abd82f625f41d535c15a6d8efc4b989a4bcf9366c2c3a namespace=k8s.io
Mar 10 01:13:56.776027 containerd[1595]: time="2026-03-10T01:13:56.776018383Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 10 01:13:56.869574 containerd[1595]: time="2026-03-10T01:13:56.868828513Z" level=warning msg="cleanup warnings time=\"2026-03-10T01:13:56Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Mar 10 01:13:56.887533 containerd[1595]: time="2026-03-10T01:13:56.887378819Z" level=info msg="StopContainer for \"88e17f08ef113316d64abd82f625f41d535c15a6d8efc4b989a4bcf9366c2c3a\" returns successfully"
Mar 10 01:13:56.891259 containerd[1595]: time="2026-03-10T01:13:56.888709896Z" level=info msg="StopPodSandbox for \"e54f9c2305af9cd64ac521e24d7af7de6fd68f8849d28502fc3b512cd44d19e1\""
Mar 10 01:13:56.891259 containerd[1595]: time="2026-03-10T01:13:56.888757655Z" level=info msg="Container to stop \"df9520f66b54b6d34b2673cacc61814c08b5b9b057f1466614a55a50ef9177e3\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 10 01:13:56.891259 containerd[1595]: time="2026-03-10T01:13:56.888775919Z" level=info msg="Container to stop \"9652c777a5b3feb3c7d93ee076ba3b9ead6919c2cdbc2c72b7d8576a9b8fa15b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 10 01:13:56.891259 containerd[1595]: time="2026-03-10T01:13:56.888788502Z" level=info msg="Container to stop \"c6717b53959ad1e440e3c168de673069bcc68d00f59d97407b140f69023766c7\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 10 01:13:56.891259 containerd[1595]: time="2026-03-10T01:13:56.888803711Z" level=info msg="Container to stop \"bceeb42c9f8e2a6870e4c4a229ce7c19aeda2ef7bccb7b3cc0f2ca1fcdaf8edd\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 10 01:13:56.891259 containerd[1595]: time="2026-03-10T01:13:56.888818980Z" level=info msg="Container to stop \"88e17f08ef113316d64abd82f625f41d535c15a6d8efc4b989a4bcf9366c2c3a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 10 01:13:56.895630 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e54f9c2305af9cd64ac521e24d7af7de6fd68f8849d28502fc3b512cd44d19e1-shm.mount: Deactivated successfully.
Mar 10 01:13:56.999006 containerd[1595]: time="2026-03-10T01:13:56.998699982Z" level=info msg="shim disconnected" id=4175b0afa68209f322a07cd65c801a46980a71abd638fc0dc8c94760d79dfe5a namespace=k8s.io
Mar 10 01:13:56.999006 containerd[1595]: time="2026-03-10T01:13:56.998778719Z" level=warning msg="cleaning up after shim disconnected" id=4175b0afa68209f322a07cd65c801a46980a71abd638fc0dc8c94760d79dfe5a namespace=k8s.io
Mar 10 01:13:56.999006 containerd[1595]: time="2026-03-10T01:13:56.998790000Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 10 01:13:57.108937 containerd[1595]: time="2026-03-10T01:13:57.108503050Z" level=info msg="TearDown network for sandbox \"4175b0afa68209f322a07cd65c801a46980a71abd638fc0dc8c94760d79dfe5a\" successfully"
Mar 10 01:13:57.108937 containerd[1595]: time="2026-03-10T01:13:57.108547514Z" level=info msg="StopPodSandbox for \"4175b0afa68209f322a07cd65c801a46980a71abd638fc0dc8c94760d79dfe5a\" returns successfully"
Mar 10 01:13:57.154833 kubelet[2997]: I0310 01:13:57.154656 2997 scope.go:117] "RemoveContainer" containerID="25db3d9d2739304901a5410ec91e395a7ed9f4476fb33df4b68eba8e5cebb404"
Mar 10 01:13:57.161573 containerd[1595]: time="2026-03-10T01:13:57.160410854Z" level=info msg="RemoveContainer for \"25db3d9d2739304901a5410ec91e395a7ed9f4476fb33df4b68eba8e5cebb404\""
Mar 10 01:13:57.173400 containerd[1595]: time="2026-03-10T01:13:57.173269340Z" level=info msg="RemoveContainer for \"25db3d9d2739304901a5410ec91e395a7ed9f4476fb33df4b68eba8e5cebb404\" returns successfully"
Mar 10 01:13:57.179396 containerd[1595]: time="2026-03-10T01:13:57.177673340Z" level=info msg="shim disconnected" id=e54f9c2305af9cd64ac521e24d7af7de6fd68f8849d28502fc3b512cd44d19e1 namespace=k8s.io
Mar 10 01:13:57.179396 containerd[1595]: time="2026-03-10T01:13:57.177737249Z" level=warning msg="cleaning up after shim disconnected" id=e54f9c2305af9cd64ac521e24d7af7de6fd68f8849d28502fc3b512cd44d19e1 namespace=k8s.io
Mar 10 01:13:57.179396 containerd[1595]: time="2026-03-10T01:13:57.177750334Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 10 01:13:57.273716 kubelet[2997]: I0310 01:13:57.273551 2997 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxd9x\" (UniqueName: \"kubernetes.io/projected/a81274c7-0f7f-4307-8b97-678613572cf8-kube-api-access-wxd9x\") pod \"a81274c7-0f7f-4307-8b97-678613572cf8\" (UID: \"a81274c7-0f7f-4307-8b97-678613572cf8\") "
Mar 10 01:13:57.276035 kubelet[2997]: I0310 01:13:57.275281 2997 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a81274c7-0f7f-4307-8b97-678613572cf8-cilium-config-path\") pod \"a81274c7-0f7f-4307-8b97-678613572cf8\" (UID: \"a81274c7-0f7f-4307-8b97-678613572cf8\") "
Mar 10 01:13:57.284052 kubelet[2997]: I0310 01:13:57.283904 2997 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a81274c7-0f7f-4307-8b97-678613572cf8-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "a81274c7-0f7f-4307-8b97-678613572cf8" (UID: "a81274c7-0f7f-4307-8b97-678613572cf8"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Mar 10 01:13:57.294707 kubelet[2997]: I0310 01:13:57.294561 2997 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a81274c7-0f7f-4307-8b97-678613572cf8-kube-api-access-wxd9x" (OuterVolumeSpecName: "kube-api-access-wxd9x") pod "a81274c7-0f7f-4307-8b97-678613572cf8" (UID: "a81274c7-0f7f-4307-8b97-678613572cf8"). InnerVolumeSpecName "kube-api-access-wxd9x". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 10 01:13:57.306409 containerd[1595]: time="2026-03-10T01:13:57.305434171Z" level=info msg="TearDown network for sandbox \"e54f9c2305af9cd64ac521e24d7af7de6fd68f8849d28502fc3b512cd44d19e1\" successfully"
Mar 10 01:13:57.306409 containerd[1595]: time="2026-03-10T01:13:57.305480577Z" level=info msg="StopPodSandbox for \"e54f9c2305af9cd64ac521e24d7af7de6fd68f8849d28502fc3b512cd44d19e1\" returns successfully"
Mar 10 01:13:57.312018 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e54f9c2305af9cd64ac521e24d7af7de6fd68f8849d28502fc3b512cd44d19e1-rootfs.mount: Deactivated successfully.
Mar 10 01:13:57.312715 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4175b0afa68209f322a07cd65c801a46980a71abd638fc0dc8c94760d79dfe5a-rootfs.mount: Deactivated successfully.
Mar 10 01:13:57.312941 systemd[1]: var-lib-kubelet-pods-a81274c7\x2d0f7f\x2d4307\x2d8b97\x2d678613572cf8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwxd9x.mount: Deactivated successfully.
Mar 10 01:13:57.378500 kubelet[2997]: I0310 01:13:57.377503 2997 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wxd9x\" (UniqueName: \"kubernetes.io/projected/a81274c7-0f7f-4307-8b97-678613572cf8-kube-api-access-wxd9x\") on node \"localhost\" DevicePath \"\""
Mar 10 01:13:57.378500 kubelet[2997]: I0310 01:13:57.377644 2997 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a81274c7-0f7f-4307-8b97-678613572cf8-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Mar 10 01:13:57.479046 kubelet[2997]: I0310 01:13:57.478799 2997 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d1506dd7-6edf-4834-8a2b-060079dd93ad-hubble-tls\") pod \"d1506dd7-6edf-4834-8a2b-060079dd93ad\" (UID: \"d1506dd7-6edf-4834-8a2b-060079dd93ad\") "
Mar 10 01:13:57.479046 kubelet[2997]: I0310 01:13:57.478878 2997 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d1506dd7-6edf-4834-8a2b-060079dd93ad-cilium-cgroup\") pod \"d1506dd7-6edf-4834-8a2b-060079dd93ad\" (UID: \"d1506dd7-6edf-4834-8a2b-060079dd93ad\") "
Mar 10 01:13:57.479046 kubelet[2997]: I0310 01:13:57.478913 2997 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zcvr4\" (UniqueName: \"kubernetes.io/projected/d1506dd7-6edf-4834-8a2b-060079dd93ad-kube-api-access-zcvr4\") pod \"d1506dd7-6edf-4834-8a2b-060079dd93ad\" (UID: \"d1506dd7-6edf-4834-8a2b-060079dd93ad\") "
Mar 10 01:13:57.479046 kubelet[2997]: I0310 01:13:57.478942 2997 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d1506dd7-6edf-4834-8a2b-060079dd93ad-xtables-lock\") pod \"d1506dd7-6edf-4834-8a2b-060079dd93ad\" (UID: \"d1506dd7-6edf-4834-8a2b-060079dd93ad\") "
Mar 10 01:13:57.479046 kubelet[2997]: I0310 01:13:57.478974 2997 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d1506dd7-6edf-4834-8a2b-060079dd93ad-hostproc\") pod \"d1506dd7-6edf-4834-8a2b-060079dd93ad\" (UID: \"d1506dd7-6edf-4834-8a2b-060079dd93ad\") "
Mar 10 01:13:57.479046 kubelet[2997]: I0310 01:13:57.479003 2997 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d1506dd7-6edf-4834-8a2b-060079dd93ad-cilium-config-path\") pod \"d1506dd7-6edf-4834-8a2b-060079dd93ad\" (UID: \"d1506dd7-6edf-4834-8a2b-060079dd93ad\") "
Mar 10 01:13:57.480824 kubelet[2997]: I0310 01:13:57.479026 2997 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d1506dd7-6edf-4834-8a2b-060079dd93ad-cilium-run\") pod \"d1506dd7-6edf-4834-8a2b-060079dd93ad\" (UID: \"d1506dd7-6edf-4834-8a2b-060079dd93ad\") "
Mar 10 01:13:57.480824 kubelet[2997]: I0310 01:13:57.479056 2997 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d1506dd7-6edf-4834-8a2b-060079dd93ad-host-proc-sys-net\") pod \"d1506dd7-6edf-4834-8a2b-060079dd93ad\" (UID: \"d1506dd7-6edf-4834-8a2b-060079dd93ad\") "
Mar 10 01:13:57.480824 kubelet[2997]: I0310 01:13:57.479402 2997 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d1506dd7-6edf-4834-8a2b-060079dd93ad-host-proc-sys-kernel\") pod \"d1506dd7-6edf-4834-8a2b-060079dd93ad\" (UID: \"d1506dd7-6edf-4834-8a2b-060079dd93ad\") "
Mar 10 01:13:57.483661 kubelet[2997]: I0310 01:13:57.483637 2997 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d1506dd7-6edf-4834-8a2b-060079dd93ad-etc-cni-netd\") pod \"d1506dd7-6edf-4834-8a2b-060079dd93ad\" (UID: \"d1506dd7-6edf-4834-8a2b-060079dd93ad\") "
Mar 10 01:13:57.484858 kubelet[2997]: I0310 01:13:57.483747 2997 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d1506dd7-6edf-4834-8a2b-060079dd93ad-cni-path\") pod \"d1506dd7-6edf-4834-8a2b-060079dd93ad\" (UID: \"d1506dd7-6edf-4834-8a2b-060079dd93ad\") "
Mar 10 01:13:57.484858 kubelet[2997]: I0310 01:13:57.483853 2997 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d1506dd7-6edf-4834-8a2b-060079dd93ad-clustermesh-secrets\") pod \"d1506dd7-6edf-4834-8a2b-060079dd93ad\" (UID: \"d1506dd7-6edf-4834-8a2b-060079dd93ad\") "
Mar 10 01:13:57.484858 kubelet[2997]: I0310 01:13:57.483875 2997 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d1506dd7-6edf-4834-8a2b-060079dd93ad-lib-modules\") pod \"d1506dd7-6edf-4834-8a2b-060079dd93ad\" (UID: \"d1506dd7-6edf-4834-8a2b-060079dd93ad\") "
Mar 10 01:13:57.484858 kubelet[2997]: I0310 01:13:57.483889 2997 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d1506dd7-6edf-4834-8a2b-060079dd93ad-bpf-maps\") pod \"d1506dd7-6edf-4834-8a2b-060079dd93ad\" (UID: \"d1506dd7-6edf-4834-8a2b-060079dd93ad\") "
Mar 10 01:13:57.484858 kubelet[2997]: I0310 01:13:57.479615 2997 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d1506dd7-6edf-4834-8a2b-060079dd93ad-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "d1506dd7-6edf-4834-8a2b-060079dd93ad" (UID: "d1506dd7-6edf-4834-8a2b-060079dd93ad"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 10 01:13:57.484858 kubelet[2997]: I0310 01:13:57.483947 2997 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d1506dd7-6edf-4834-8a2b-060079dd93ad-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "d1506dd7-6edf-4834-8a2b-060079dd93ad" (UID: "d1506dd7-6edf-4834-8a2b-060079dd93ad"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 10 01:13:57.485494 kubelet[2997]: I0310 01:13:57.484718 2997 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d1506dd7-6edf-4834-8a2b-060079dd93ad-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "d1506dd7-6edf-4834-8a2b-060079dd93ad" (UID: "d1506dd7-6edf-4834-8a2b-060079dd93ad"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 10 01:13:57.485494 kubelet[2997]: I0310 01:13:57.484750 2997 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d1506dd7-6edf-4834-8a2b-060079dd93ad-cni-path" (OuterVolumeSpecName: "cni-path") pod "d1506dd7-6edf-4834-8a2b-060079dd93ad" (UID: "d1506dd7-6edf-4834-8a2b-060079dd93ad"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 10 01:13:57.486682 kubelet[2997]: I0310 01:13:57.485904 2997 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d1506dd7-6edf-4834-8a2b-060079dd93ad-hostproc" (OuterVolumeSpecName: "hostproc") pod "d1506dd7-6edf-4834-8a2b-060079dd93ad" (UID: "d1506dd7-6edf-4834-8a2b-060079dd93ad"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 10 01:13:57.486682 kubelet[2997]: I0310 01:13:57.485956 2997 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d1506dd7-6edf-4834-8a2b-060079dd93ad-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "d1506dd7-6edf-4834-8a2b-060079dd93ad" (UID: "d1506dd7-6edf-4834-8a2b-060079dd93ad"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 10 01:13:57.486682 kubelet[2997]: I0310 01:13:57.485985 2997 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d1506dd7-6edf-4834-8a2b-060079dd93ad-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "d1506dd7-6edf-4834-8a2b-060079dd93ad" (UID: "d1506dd7-6edf-4834-8a2b-060079dd93ad"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 10 01:13:57.486682 kubelet[2997]: I0310 01:13:57.486231 2997 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d1506dd7-6edf-4834-8a2b-060079dd93ad-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "d1506dd7-6edf-4834-8a2b-060079dd93ad" (UID: "d1506dd7-6edf-4834-8a2b-060079dd93ad"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 10 01:13:57.486682 kubelet[2997]: I0310 01:13:57.486272 2997 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d1506dd7-6edf-4834-8a2b-060079dd93ad-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "d1506dd7-6edf-4834-8a2b-060079dd93ad" (UID: "d1506dd7-6edf-4834-8a2b-060079dd93ad"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 10 01:13:57.487043 kubelet[2997]: I0310 01:13:57.486409 2997 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d1506dd7-6edf-4834-8a2b-060079dd93ad-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "d1506dd7-6edf-4834-8a2b-060079dd93ad" (UID: "d1506dd7-6edf-4834-8a2b-060079dd93ad"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 10 01:13:57.492678 kubelet[2997]: I0310 01:13:57.492629 2997 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1506dd7-6edf-4834-8a2b-060079dd93ad-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "d1506dd7-6edf-4834-8a2b-060079dd93ad" (UID: "d1506dd7-6edf-4834-8a2b-060079dd93ad"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Mar 10 01:13:57.502643 systemd[1]: var-lib-kubelet-pods-d1506dd7\x2d6edf\x2d4834\x2d8a2b\x2d060079dd93ad-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Mar 10 01:13:57.505651 kubelet[2997]: I0310 01:13:57.503732 2997 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1506dd7-6edf-4834-8a2b-060079dd93ad-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "d1506dd7-6edf-4834-8a2b-060079dd93ad" (UID: "d1506dd7-6edf-4834-8a2b-060079dd93ad"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Mar 10 01:13:57.506265 kubelet[2997]: I0310 01:13:57.506052 2997 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1506dd7-6edf-4834-8a2b-060079dd93ad-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "d1506dd7-6edf-4834-8a2b-060079dd93ad" (UID: "d1506dd7-6edf-4834-8a2b-060079dd93ad"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 10 01:13:57.512280 systemd[1]: var-lib-kubelet-pods-d1506dd7\x2d6edf\x2d4834\x2d8a2b\x2d060079dd93ad-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Mar 10 01:13:57.512498 kubelet[2997]: I0310 01:13:57.512251 2997 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1506dd7-6edf-4834-8a2b-060079dd93ad-kube-api-access-zcvr4" (OuterVolumeSpecName: "kube-api-access-zcvr4") pod "d1506dd7-6edf-4834-8a2b-060079dd93ad" (UID: "d1506dd7-6edf-4834-8a2b-060079dd93ad"). InnerVolumeSpecName "kube-api-access-zcvr4". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 10 01:13:57.518494 systemd[1]: var-lib-kubelet-pods-d1506dd7\x2d6edf\x2d4834\x2d8a2b\x2d060079dd93ad-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dzcvr4.mount: Deactivated successfully.
Mar 10 01:13:57.590460 kubelet[2997]: I0310 01:13:57.589874 2997 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d1506dd7-6edf-4834-8a2b-060079dd93ad-lib-modules\") on node \"localhost\" DevicePath \"\""
Mar 10 01:13:57.590460 kubelet[2997]: I0310 01:13:57.589915 2997 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d1506dd7-6edf-4834-8a2b-060079dd93ad-bpf-maps\") on node \"localhost\" DevicePath \"\""
Mar 10 01:13:57.590460 kubelet[2997]: I0310 01:13:57.589934 2997 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d1506dd7-6edf-4834-8a2b-060079dd93ad-hubble-tls\") on node \"localhost\" DevicePath \"\""
Mar 10 01:13:57.590460 kubelet[2997]: I0310 01:13:57.589946 2997 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d1506dd7-6edf-4834-8a2b-060079dd93ad-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Mar 10 01:13:57.590460 kubelet[2997]: I0310 01:13:57.589964 2997 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zcvr4\" (UniqueName: \"kubernetes.io/projected/d1506dd7-6edf-4834-8a2b-060079dd93ad-kube-api-access-zcvr4\") on node \"localhost\" DevicePath \"\""
Mar 10 01:13:57.590460 kubelet[2997]: I0310 01:13:57.589982 2997 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d1506dd7-6edf-4834-8a2b-060079dd93ad-xtables-lock\") on node \"localhost\" DevicePath \"\""
Mar 10 01:13:57.590460 kubelet[2997]: I0310 01:13:57.589995 2997 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d1506dd7-6edf-4834-8a2b-060079dd93ad-hostproc\") on node \"localhost\" DevicePath \"\""
Mar 10 01:13:57.590460 kubelet[2997]: I0310 01:13:57.590007 2997 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d1506dd7-6edf-4834-8a2b-060079dd93ad-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Mar 10 01:13:57.590836 kubelet[2997]: I0310 01:13:57.590018 2997 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d1506dd7-6edf-4834-8a2b-060079dd93ad-cilium-run\") on node \"localhost\" DevicePath \"\""
Mar 10 01:13:57.590836 kubelet[2997]: I0310 01:13:57.590030 2997 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d1506dd7-6edf-4834-8a2b-060079dd93ad-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Mar 10 01:13:57.621563 kubelet[2997]: I0310 01:13:57.620438 2997 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d1506dd7-6edf-4834-8a2b-060079dd93ad-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Mar 10 01:13:57.621563 kubelet[2997]: I0310 01:13:57.620499 2997 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d1506dd7-6edf-4834-8a2b-060079dd93ad-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Mar 10 01:13:57.621563 kubelet[2997]: I0310 01:13:57.620513 2997 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d1506dd7-6edf-4834-8a2b-060079dd93ad-cni-path\") on node \"localhost\" DevicePath \"\""
Mar 10 01:13:57.621563 kubelet[2997]: I0310 01:13:57.620526 2997 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d1506dd7-6edf-4834-8a2b-060079dd93ad-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Mar 10 01:13:57.883742 sshd[5191]: pam_unix(sshd:session): session closed for user core
Mar 10 01:13:57.904623 systemd[1]: Started sshd@47-10.0.0.67:22-10.0.0.1:50532.service - OpenSSH per-connection server daemon (10.0.0.1:50532).
Mar 10 01:13:57.905986 systemd[1]: sshd@46-10.0.0.67:22-10.0.0.1:50522.service: Deactivated successfully.
Mar 10 01:13:57.918955 systemd-logind[1578]: Session 47 logged out. Waiting for processes to exit.
Mar 10 01:13:57.921758 systemd[1]: session-47.scope: Deactivated successfully.
Mar 10 01:13:57.932571 systemd-logind[1578]: Removed session 47.
Mar 10 01:13:58.026251 kubelet[2997]: E0310 01:13:58.026020 2997 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Mar 10 01:13:58.030576 sshd[5358]: Accepted publickey for core from 10.0.0.1 port 50532 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:13:58.036594 sshd[5358]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:13:58.068004 systemd-logind[1578]: New session 48 of user core.
Mar 10 01:13:58.082633 systemd[1]: Started session-48.scope - Session 48 of User core.
Mar 10 01:13:58.175667 kubelet[2997]: I0310 01:13:58.174925 2997 scope.go:117] "RemoveContainer" containerID="88e17f08ef113316d64abd82f625f41d535c15a6d8efc4b989a4bcf9366c2c3a"
Mar 10 01:13:58.190716 containerd[1595]: time="2026-03-10T01:13:58.189589884Z" level=info msg="RemoveContainer for \"88e17f08ef113316d64abd82f625f41d535c15a6d8efc4b989a4bcf9366c2c3a\""
Mar 10 01:13:58.201772 containerd[1595]: time="2026-03-10T01:13:58.201635499Z" level=info msg="RemoveContainer for \"88e17f08ef113316d64abd82f625f41d535c15a6d8efc4b989a4bcf9366c2c3a\" returns successfully"
Mar 10 01:13:58.202883 kubelet[2997]: I0310 01:13:58.202754 2997 scope.go:117] "RemoveContainer" containerID="c6717b53959ad1e440e3c168de673069bcc68d00f59d97407b140f69023766c7"
Mar 10 01:13:58.210624 containerd[1595]: time="2026-03-10T01:13:58.209657146Z" level=info msg="RemoveContainer for \"c6717b53959ad1e440e3c168de673069bcc68d00f59d97407b140f69023766c7\""
Mar 10 01:13:58.242030 containerd[1595]: time="2026-03-10T01:13:58.239695165Z" level=info msg="RemoveContainer for \"c6717b53959ad1e440e3c168de673069bcc68d00f59d97407b140f69023766c7\" returns successfully"
Mar 10 01:13:58.256840 kubelet[2997]: I0310 01:13:58.256658 2997 scope.go:117] "RemoveContainer" containerID="9652c777a5b3feb3c7d93ee076ba3b9ead6919c2cdbc2c72b7d8576a9b8fa15b"
Mar 10 01:13:58.266064 containerd[1595]: time="2026-03-10T01:13:58.265745598Z" level=info msg="RemoveContainer for \"9652c777a5b3feb3c7d93ee076ba3b9ead6919c2cdbc2c72b7d8576a9b8fa15b\""
Mar 10 01:13:58.291496 containerd[1595]: time="2026-03-10T01:13:58.290860270Z" level=info msg="RemoveContainer for \"9652c777a5b3feb3c7d93ee076ba3b9ead6919c2cdbc2c72b7d8576a9b8fa15b\" returns successfully"
Mar 10 01:13:58.303401 kubelet[2997]: I0310 01:13:58.297927 2997 scope.go:117] "RemoveContainer" containerID="bceeb42c9f8e2a6870e4c4a229ce7c19aeda2ef7bccb7b3cc0f2ca1fcdaf8edd"
Mar 10 01:13:58.319949 containerd[1595]: time="2026-03-10T01:13:58.319494255Z" level=info msg="RemoveContainer for \"bceeb42c9f8e2a6870e4c4a229ce7c19aeda2ef7bccb7b3cc0f2ca1fcdaf8edd\""
Mar 10 01:13:58.357862 containerd[1595]: time="2026-03-10T01:13:58.357816104Z" level=info msg="RemoveContainer for \"bceeb42c9f8e2a6870e4c4a229ce7c19aeda2ef7bccb7b3cc0f2ca1fcdaf8edd\" returns successfully"
Mar 10 01:13:58.360479 kubelet[2997]: I0310 01:13:58.360252 2997 scope.go:117] "RemoveContainer" containerID="df9520f66b54b6d34b2673cacc61814c08b5b9b057f1466614a55a50ef9177e3"
Mar 10 01:13:58.365685 containerd[1595]: time="2026-03-10T01:13:58.365007055Z" level=info msg="RemoveContainer for \"df9520f66b54b6d34b2673cacc61814c08b5b9b057f1466614a55a50ef9177e3\""
Mar 10 01:13:58.377698 containerd[1595]: time="2026-03-10T01:13:58.377499309Z" level=info msg="RemoveContainer for \"df9520f66b54b6d34b2673cacc61814c08b5b9b057f1466614a55a50ef9177e3\" returns successfully"
Mar 10 01:13:59.387677 sshd[5358]: pam_unix(sshd:session): session closed for user core
Mar 10 01:13:59.406877 systemd[1]: Started sshd@48-10.0.0.67:22-10.0.0.1:50540.service - OpenSSH per-connection server daemon (10.0.0.1:50540).
Mar 10 01:13:59.408008 systemd[1]: sshd@47-10.0.0.67:22-10.0.0.1:50532.service: Deactivated successfully.
Mar 10 01:13:59.433582 systemd[1]: session-48.scope: Deactivated successfully.
Mar 10 01:13:59.449695 systemd-logind[1578]: Session 48 logged out. Waiting for processes to exit.
Mar 10 01:13:59.472406 systemd-logind[1578]: Removed session 48.
Mar 10 01:13:59.587241 kubelet[2997]: I0310 01:13:59.580603 2997 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/af22080c-3d85-418c-9eb0-eb3d5eacc547-cilium-run\") pod \"cilium-nnmc8\" (UID: \"af22080c-3d85-418c-9eb0-eb3d5eacc547\") " pod="kube-system/cilium-nnmc8"
Mar 10 01:13:59.587241 kubelet[2997]: I0310 01:13:59.580660 2997 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/af22080c-3d85-418c-9eb0-eb3d5eacc547-bpf-maps\") pod \"cilium-nnmc8\" (UID: \"af22080c-3d85-418c-9eb0-eb3d5eacc547\") " pod="kube-system/cilium-nnmc8"
Mar 10 01:13:59.587241 kubelet[2997]: I0310 01:13:59.580697 2997 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/af22080c-3d85-418c-9eb0-eb3d5eacc547-cilium-config-path\") pod \"cilium-nnmc8\" (UID: \"af22080c-3d85-418c-9eb0-eb3d5eacc547\") " pod="kube-system/cilium-nnmc8"
Mar 10 01:13:59.587241 kubelet[2997]: I0310 01:13:59.580726 2997 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/af22080c-3d85-418c-9eb0-eb3d5eacc547-cilium-cgroup\") pod \"cilium-nnmc8\" (UID: \"af22080c-3d85-418c-9eb0-eb3d5eacc547\") " pod="kube-system/cilium-nnmc8"
Mar 10 01:13:59.587241 kubelet[2997]: I0310 01:13:59.580752 2997 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/af22080c-3d85-418c-9eb0-eb3d5eacc547-lib-modules\") pod \"cilium-nnmc8\" (UID: \"af22080c-3d85-418c-9eb0-eb3d5eacc547\") " pod="kube-system/cilium-nnmc8"
Mar 10 01:13:59.587241 kubelet[2997]: I0310 01:13:59.580778 2997 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/af22080c-3d85-418c-9eb0-eb3d5eacc547-host-proc-sys-net\") pod \"cilium-nnmc8\" (UID: \"af22080c-3d85-418c-9eb0-eb3d5eacc547\") " pod="kube-system/cilium-nnmc8"
Mar 10 01:13:59.590553 kubelet[2997]: I0310 01:13:59.580803 2997 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/af22080c-3d85-418c-9eb0-eb3d5eacc547-hostproc\") pod \"cilium-nnmc8\" (UID: \"af22080c-3d85-418c-9eb0-eb3d5eacc547\") " pod="kube-system/cilium-nnmc8"
Mar 10 01:13:59.590553 kubelet[2997]: I0310 01:13:59.580831 2997 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/af22080c-3d85-418c-9eb0-eb3d5eacc547-xtables-lock\") pod \"cilium-nnmc8\" (UID: \"af22080c-3d85-418c-9eb0-eb3d5eacc547\") " pod="kube-system/cilium-nnmc8"
Mar 10 01:13:59.590553 kubelet[2997]: I0310 01:13:59.580853 2997 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/af22080c-3d85-418c-9eb0-eb3d5eacc547-clustermesh-secrets\") pod \"cilium-nnmc8\" (UID: \"af22080c-3d85-418c-9eb0-eb3d5eacc547\") " pod="kube-system/cilium-nnmc8"
Mar 10 01:13:59.590553 kubelet[2997]: I0310 01:13:59.580880 2997 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/af22080c-3d85-418c-9eb0-eb3d5eacc547-host-proc-sys-kernel\") pod \"cilium-nnmc8\" (UID: \"af22080c-3d85-418c-9eb0-eb3d5eacc547\") " pod="kube-system/cilium-nnmc8"
Mar 10 01:13:59.590553 kubelet[2997]: I0310 01:13:59.580906 2997 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/af22080c-3d85-418c-9eb0-eb3d5eacc547-hubble-tls\") pod \"cilium-nnmc8\" (UID: \"af22080c-3d85-418c-9eb0-eb3d5eacc547\") " pod="kube-system/cilium-nnmc8"
Mar 10 01:13:59.590553 kubelet[2997]: I0310 01:13:59.580935 2997 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/af22080c-3d85-418c-9eb0-eb3d5eacc547-cni-path\") pod \"cilium-nnmc8\" (UID: \"af22080c-3d85-418c-9eb0-eb3d5eacc547\") " pod="kube-system/cilium-nnmc8"
Mar 10 01:13:59.590788 kubelet[2997]: I0310 01:13:59.580960 2997 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/af22080c-3d85-418c-9eb0-eb3d5eacc547-etc-cni-netd\") pod \"cilium-nnmc8\" (UID: \"af22080c-3d85-418c-9eb0-eb3d5eacc547\") " pod="kube-system/cilium-nnmc8"
Mar 10 01:13:59.590788 kubelet[2997]: I0310 01:13:59.580986 2997 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/af22080c-3d85-418c-9eb0-eb3d5eacc547-cilium-ipsec-secrets\") pod \"cilium-nnmc8\" (UID: \"af22080c-3d85-418c-9eb0-eb3d5eacc547\") " pod="kube-system/cilium-nnmc8"
Mar 10 01:13:59.590788 kubelet[2997]: I0310 01:13:59.581507 2997 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a81274c7-0f7f-4307-8b97-678613572cf8" path="/var/lib/kubelet/pods/a81274c7-0f7f-4307-8b97-678613572cf8/volumes"
Mar 10 01:13:59.590788 kubelet[2997]: I0310 01:13:59.585755 2997 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1506dd7-6edf-4834-8a2b-060079dd93ad" path="/var/lib/kubelet/pods/d1506dd7-6edf-4834-8a2b-060079dd93ad/volumes"
Mar 10 01:13:59.636027 sshd[5378]: Accepted publickey for core from 10.0.0.1 port 50540 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:13:59.659260 sshd[5378]: pam_unix(sshd:session): 
session opened for user core(uid=500) by core(uid=0) Mar 10 01:13:59.677014 systemd-logind[1578]: New session 49 of user core. Mar 10 01:13:59.683017 kubelet[2997]: I0310 01:13:59.682969 2997 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9jmt\" (UniqueName: \"kubernetes.io/projected/af22080c-3d85-418c-9eb0-eb3d5eacc547-kube-api-access-n9jmt\") pod \"cilium-nnmc8\" (UID: \"af22080c-3d85-418c-9eb0-eb3d5eacc547\") " pod="kube-system/cilium-nnmc8" Mar 10 01:13:59.689838 systemd[1]: Started session-49.scope - Session 49 of User core. Mar 10 01:13:59.797615 sshd[5378]: pam_unix(sshd:session): session closed for user core Mar 10 01:13:59.812623 systemd[1]: Started sshd@49-10.0.0.67:22-10.0.0.1:50556.service - OpenSSH per-connection server daemon (10.0.0.1:50556). Mar 10 01:13:59.813660 systemd[1]: sshd@48-10.0.0.67:22-10.0.0.1:50540.service: Deactivated successfully. Mar 10 01:13:59.862934 systemd[1]: session-49.scope: Deactivated successfully. Mar 10 01:13:59.873241 systemd-logind[1578]: Session 49 logged out. Waiting for processes to exit. Mar 10 01:13:59.885963 systemd-logind[1578]: Removed session 49. Mar 10 01:13:59.920415 sshd[5391]: Accepted publickey for core from 10.0.0.1 port 50556 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc Mar 10 01:13:59.925771 sshd[5391]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:13:59.971283 systemd-logind[1578]: New session 50 of user core. Mar 10 01:13:59.992859 systemd[1]: Started session-50.scope - Session 50 of User core. 
Mar 10 01:14:00.098194 kubelet[2997]: E0310 01:14:00.097799 2997 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:14:00.100520 containerd[1595]: time="2026-03-10T01:14:00.100462729Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-nnmc8,Uid:af22080c-3d85-418c-9eb0-eb3d5eacc547,Namespace:kube-system,Attempt:0,}" Mar 10 01:14:00.366777 containerd[1595]: time="2026-03-10T01:14:00.309637725Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 10 01:14:00.366777 containerd[1595]: time="2026-03-10T01:14:00.336631070Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 10 01:14:00.366777 containerd[1595]: time="2026-03-10T01:14:00.336663610Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 10 01:14:00.366777 containerd[1595]: time="2026-03-10T01:14:00.351008581Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 10 01:14:00.609460 containerd[1595]: time="2026-03-10T01:14:00.609053331Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-nnmc8,Uid:af22080c-3d85-418c-9eb0-eb3d5eacc547,Namespace:kube-system,Attempt:0,} returns sandbox id \"0f3cfcd31d965ad0c9316f8e8c408af6da9c0bbb3c595ba310d8174ebe05045c\"" Mar 10 01:14:00.612236 kubelet[2997]: E0310 01:14:00.611664 2997 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:14:00.656511 containerd[1595]: time="2026-03-10T01:14:00.655938220Z" level=info msg="CreateContainer within sandbox \"0f3cfcd31d965ad0c9316f8e8c408af6da9c0bbb3c595ba310d8174ebe05045c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 10 01:14:00.706960 containerd[1595]: time="2026-03-10T01:14:00.706747749Z" level=info msg="CreateContainer within sandbox \"0f3cfcd31d965ad0c9316f8e8c408af6da9c0bbb3c595ba310d8174ebe05045c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"1be93f0b6fc5969b03fbe7d696c7f80c436755087c824d4c95c474d7f932eaad\"" Mar 10 01:14:00.711991 containerd[1595]: time="2026-03-10T01:14:00.711952301Z" level=info msg="StartContainer for \"1be93f0b6fc5969b03fbe7d696c7f80c436755087c824d4c95c474d7f932eaad\"" Mar 10 01:14:00.975615 containerd[1595]: time="2026-03-10T01:14:00.974860192Z" level=info msg="StartContainer for \"1be93f0b6fc5969b03fbe7d696c7f80c436755087c824d4c95c474d7f932eaad\" returns successfully" Mar 10 01:14:01.177515 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1be93f0b6fc5969b03fbe7d696c7f80c436755087c824d4c95c474d7f932eaad-rootfs.mount: Deactivated successfully. 
Mar 10 01:14:01.228793 kubelet[2997]: E0310 01:14:01.224828 2997 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:14:01.229237 containerd[1595]: time="2026-03-10T01:14:01.227445224Z" level=info msg="shim disconnected" id=1be93f0b6fc5969b03fbe7d696c7f80c436755087c824d4c95c474d7f932eaad namespace=k8s.io Mar 10 01:14:01.229237 containerd[1595]: time="2026-03-10T01:14:01.227506268Z" level=warning msg="cleaning up after shim disconnected" id=1be93f0b6fc5969b03fbe7d696c7f80c436755087c824d4c95c474d7f932eaad namespace=k8s.io Mar 10 01:14:01.229237 containerd[1595]: time="2026-03-10T01:14:01.227518390Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 10 01:14:01.317830 containerd[1595]: time="2026-03-10T01:14:01.317767319Z" level=warning msg="cleanup warnings time=\"2026-03-10T01:14:01Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Mar 10 01:14:02.255026 kubelet[2997]: E0310 01:14:02.254489 2997 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:14:02.276025 containerd[1595]: time="2026-03-10T01:14:02.275559932Z" level=info msg="CreateContainer within sandbox \"0f3cfcd31d965ad0c9316f8e8c408af6da9c0bbb3c595ba310d8174ebe05045c\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 10 01:14:02.381222 containerd[1595]: time="2026-03-10T01:14:02.378755698Z" level=info msg="CreateContainer within sandbox \"0f3cfcd31d965ad0c9316f8e8c408af6da9c0bbb3c595ba310d8174ebe05045c\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"4078285cce3de99d504d60858b2ee14570edf2684d72a48d0c41722ecb0d0a87\"" Mar 10 01:14:02.393563 
containerd[1595]: time="2026-03-10T01:14:02.392846118Z" level=info msg="StartContainer for \"4078285cce3de99d504d60858b2ee14570edf2684d72a48d0c41722ecb0d0a87\"" Mar 10 01:14:02.821003 containerd[1595]: time="2026-03-10T01:14:02.820937971Z" level=info msg="StartContainer for \"4078285cce3de99d504d60858b2ee14570edf2684d72a48d0c41722ecb0d0a87\" returns successfully" Mar 10 01:14:03.042679 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4078285cce3de99d504d60858b2ee14570edf2684d72a48d0c41722ecb0d0a87-rootfs.mount: Deactivated successfully. Mar 10 01:14:03.066566 kubelet[2997]: E0310 01:14:03.066510 2997 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 10 01:14:03.081474 containerd[1595]: time="2026-03-10T01:14:03.080663577Z" level=info msg="shim disconnected" id=4078285cce3de99d504d60858b2ee14570edf2684d72a48d0c41722ecb0d0a87 namespace=k8s.io Mar 10 01:14:03.081474 containerd[1595]: time="2026-03-10T01:14:03.080743395Z" level=warning msg="cleaning up after shim disconnected" id=4078285cce3de99d504d60858b2ee14570edf2684d72a48d0c41722ecb0d0a87 namespace=k8s.io Mar 10 01:14:03.081474 containerd[1595]: time="2026-03-10T01:14:03.080758333Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 10 01:14:03.310063 kubelet[2997]: E0310 01:14:03.309730 2997 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:14:03.367572 containerd[1595]: time="2026-03-10T01:14:03.366610255Z" level=info msg="CreateContainer within sandbox \"0f3cfcd31d965ad0c9316f8e8c408af6da9c0bbb3c595ba310d8174ebe05045c\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 10 01:14:03.484257 containerd[1595]: time="2026-03-10T01:14:03.482979243Z" level=info msg="CreateContainer within sandbox 
\"0f3cfcd31d965ad0c9316f8e8c408af6da9c0bbb3c595ba310d8174ebe05045c\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"2ee59d0cd5cbeeb6eed9e0780e0a41e4cef2ac359c71639faea53e6056221661\"" Mar 10 01:14:03.486531 containerd[1595]: time="2026-03-10T01:14:03.486023774Z" level=info msg="StartContainer for \"2ee59d0cd5cbeeb6eed9e0780e0a41e4cef2ac359c71639faea53e6056221661\"" Mar 10 01:14:03.885720 containerd[1595]: time="2026-03-10T01:14:03.884681204Z" level=info msg="StartContainer for \"2ee59d0cd5cbeeb6eed9e0780e0a41e4cef2ac359c71639faea53e6056221661\" returns successfully" Mar 10 01:14:04.105899 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2ee59d0cd5cbeeb6eed9e0780e0a41e4cef2ac359c71639faea53e6056221661-rootfs.mount: Deactivated successfully. Mar 10 01:14:04.166025 containerd[1595]: time="2026-03-10T01:14:04.164908632Z" level=info msg="shim disconnected" id=2ee59d0cd5cbeeb6eed9e0780e0a41e4cef2ac359c71639faea53e6056221661 namespace=k8s.io Mar 10 01:14:04.166025 containerd[1595]: time="2026-03-10T01:14:04.164986868Z" level=warning msg="cleaning up after shim disconnected" id=2ee59d0cd5cbeeb6eed9e0780e0a41e4cef2ac359c71639faea53e6056221661 namespace=k8s.io Mar 10 01:14:04.166025 containerd[1595]: time="2026-03-10T01:14:04.165001214Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 10 01:14:04.316245 kubelet[2997]: E0310 01:14:04.316031 2997 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:14:04.356792 containerd[1595]: time="2026-03-10T01:14:04.355981341Z" level=info msg="CreateContainer within sandbox \"0f3cfcd31d965ad0c9316f8e8c408af6da9c0bbb3c595ba310d8174ebe05045c\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Mar 10 01:14:04.431990 containerd[1595]: time="2026-03-10T01:14:04.416808446Z" level=info msg="CreateContainer within sandbox 
\"0f3cfcd31d965ad0c9316f8e8c408af6da9c0bbb3c595ba310d8174ebe05045c\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"ce9db783d723d354f8b12b2245dc6bf7f3d883efb4c3f5025ae83eb58fe85736\"" Mar 10 01:14:04.431990 containerd[1595]: time="2026-03-10T01:14:04.418920401Z" level=info msg="StartContainer for \"ce9db783d723d354f8b12b2245dc6bf7f3d883efb4c3f5025ae83eb58fe85736\"" Mar 10 01:14:04.684015 containerd[1595]: time="2026-03-10T01:14:04.682418743Z" level=info msg="StartContainer for \"ce9db783d723d354f8b12b2245dc6bf7f3d883efb4c3f5025ae83eb58fe85736\" returns successfully" Mar 10 01:14:04.878017 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ce9db783d723d354f8b12b2245dc6bf7f3d883efb4c3f5025ae83eb58fe85736-rootfs.mount: Deactivated successfully. Mar 10 01:14:04.899676 containerd[1595]: time="2026-03-10T01:14:04.899058818Z" level=info msg="shim disconnected" id=ce9db783d723d354f8b12b2245dc6bf7f3d883efb4c3f5025ae83eb58fe85736 namespace=k8s.io Mar 10 01:14:04.899676 containerd[1595]: time="2026-03-10T01:14:04.899396599Z" level=warning msg="cleaning up after shim disconnected" id=ce9db783d723d354f8b12b2245dc6bf7f3d883efb4c3f5025ae83eb58fe85736 namespace=k8s.io Mar 10 01:14:04.899676 containerd[1595]: time="2026-03-10T01:14:04.899417497Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 10 01:14:05.332660 kubelet[2997]: E0310 01:14:05.332455 2997 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:14:05.360259 containerd[1595]: time="2026-03-10T01:14:05.358678831Z" level=info msg="CreateContainer within sandbox \"0f3cfcd31d965ad0c9316f8e8c408af6da9c0bbb3c595ba310d8174ebe05045c\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Mar 10 01:14:05.451171 containerd[1595]: time="2026-03-10T01:14:05.450860319Z" level=info msg="CreateContainer within sandbox 
\"0f3cfcd31d965ad0c9316f8e8c408af6da9c0bbb3c595ba310d8174ebe05045c\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"50fa6a3ffa77a2f3019bf42bb0120ec16738d3f4aa6c52c78aad7091b26b5cd9\"" Mar 10 01:14:05.458939 containerd[1595]: time="2026-03-10T01:14:05.457744608Z" level=info msg="StartContainer for \"50fa6a3ffa77a2f3019bf42bb0120ec16738d3f4aa6c52c78aad7091b26b5cd9\"" Mar 10 01:14:05.709479 containerd[1595]: time="2026-03-10T01:14:05.708968702Z" level=info msg="StartContainer for \"50fa6a3ffa77a2f3019bf42bb0120ec16738d3f4aa6c52c78aad7091b26b5cd9\" returns successfully" Mar 10 01:14:05.799560 systemd[1]: run-containerd-runc-k8s.io-50fa6a3ffa77a2f3019bf42bb0120ec16738d3f4aa6c52c78aad7091b26b5cd9-runc.gGrAdO.mount: Deactivated successfully. Mar 10 01:14:06.327225 kubelet[2997]: I0310 01:14:06.323462 2997 setters.go:618] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T01:14:06Z","lastTransitionTime":"2026-03-10T01:14:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Mar 10 01:14:06.397738 kubelet[2997]: E0310 01:14:06.397586 2997 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:14:07.380472 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Mar 10 01:14:08.099856 kubelet[2997]: E0310 01:14:08.099724 2997 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:14:12.264612 systemd[1]: run-containerd-runc-k8s.io-50fa6a3ffa77a2f3019bf42bb0120ec16738d3f4aa6c52c78aad7091b26b5cd9-runc.x8TFY3.mount: Deactivated successfully. 
Mar 10 01:14:15.575845 systemd-networkd[1247]: lxc_health: Link UP Mar 10 01:14:15.592001 systemd-networkd[1247]: lxc_health: Gained carrier Mar 10 01:14:16.103262 kubelet[2997]: E0310 01:14:16.102937 2997 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:14:16.193042 kubelet[2997]: I0310 01:14:16.184690 2997 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-nnmc8" podStartSLOduration=17.184568856 podStartE2EDuration="17.184568856s" podCreationTimestamp="2026-03-10 01:13:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 01:14:06.517518678 +0000 UTC m=+442.283969622" watchObservedRunningTime="2026-03-10 01:14:16.184568856 +0000 UTC m=+451.951019679" Mar 10 01:14:16.479803 kubelet[2997]: E0310 01:14:16.478844 2997 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:14:16.831942 systemd-networkd[1247]: lxc_health: Gained IPv6LL Mar 10 01:14:17.500449 kubelet[2997]: E0310 01:14:17.499752 2997 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:14:19.789544 systemd[1]: run-containerd-runc-k8s.io-50fa6a3ffa77a2f3019bf42bb0120ec16738d3f4aa6c52c78aad7091b26b5cd9-runc.NHWVvC.mount: Deactivated successfully. Mar 10 01:14:22.557830 sshd[5391]: pam_unix(sshd:session): session closed for user core Mar 10 01:14:22.574940 systemd[1]: sshd@49-10.0.0.67:22-10.0.0.1:50556.service: Deactivated successfully. Mar 10 01:14:22.586575 systemd-logind[1578]: Session 50 logged out. Waiting for processes to exit. 
Mar 10 01:14:22.590624 systemd[1]: session-50.scope: Deactivated successfully. Mar 10 01:14:22.597042 systemd-logind[1578]: Removed session 50. Mar 10 01:14:23.559264 kubelet[2997]: E0310 01:14:23.556905 2997 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"