Jan 23 01:16:24.703930 kernel: Linux version 6.12.66-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Thu Jan 22 22:22:03 -00 2026
Jan 23 01:16:24.703966 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=e8d7116310bea9a494780b8becdce41e7cc03ed509d8e2363e08981a47b3edc6
Jan 23 01:16:24.703984 kernel: BIOS-provided physical RAM map:
Jan 23 01:16:24.703994 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 23 01:16:24.704002 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 23 01:16:24.704010 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 23 01:16:24.704022 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Jan 23 01:16:24.704032 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Jan 23 01:16:24.704042 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jan 23 01:16:24.704051 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jan 23 01:16:24.704059 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 23 01:16:24.704074 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 23 01:16:24.704084 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 23 01:16:24.704202 kernel: NX (Execute Disable) protection: active
Jan 23 01:16:24.704216 kernel: APIC: Static calls initialized
Jan 23 01:16:24.704228 kernel: SMBIOS 2.8 present.
Jan 23 01:16:24.705921 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Jan 23 01:16:24.705933 kernel: DMI: Memory slots populated: 1/1
Jan 23 01:16:24.705941 kernel: Hypervisor detected: KVM
Jan 23 01:16:24.705950 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Jan 23 01:16:24.705961 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 23 01:16:24.705971 kernel: kvm-clock: using sched offset of 20645565180 cycles
Jan 23 01:16:24.705983 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 23 01:16:24.705992 kernel: tsc: Detected 2445.424 MHz processor
Jan 23 01:16:24.706001 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 23 01:16:24.706013 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 23 01:16:24.706028 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Jan 23 01:16:24.706037 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 23 01:16:24.706048 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 23 01:16:24.706059 kernel: Using GB pages for direct mapping
Jan 23 01:16:24.706071 kernel: ACPI: Early table checksum verification disabled
Jan 23 01:16:24.706080 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Jan 23 01:16:24.706089 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 01:16:24.706192 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 01:16:24.706204 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 01:16:24.706220 kernel: ACPI: FACS 0x000000009CFE0000 000040
Jan 23 01:16:24.706396 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 01:16:24.706409 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 01:16:24.706420 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 01:16:24.706431 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 01:16:24.706448 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Jan 23 01:16:24.706460 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Jan 23 01:16:24.706472 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Jan 23 01:16:24.706483 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Jan 23 01:16:24.706493 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Jan 23 01:16:24.706504 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Jan 23 01:16:24.706515 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Jan 23 01:16:24.706526 kernel: No NUMA configuration found
Jan 23 01:16:24.706537 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Jan 23 01:16:24.706550 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff]
Jan 23 01:16:24.706563 kernel: Zone ranges:
Jan 23 01:16:24.706573 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 23 01:16:24.706582 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Jan 23 01:16:24.706594 kernel: Normal empty
Jan 23 01:16:24.706604 kernel: Device empty
Jan 23 01:16:24.706616 kernel: Movable zone start for each node
Jan 23 01:16:24.706625 kernel: Early memory node ranges
Jan 23 01:16:24.706635 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 23 01:16:24.706651 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Jan 23 01:16:24.706660 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Jan 23 01:16:24.706671 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 23 01:16:24.706683 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 23 01:16:24.706694 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Jan 23 01:16:24.706704 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 23 01:16:24.706713 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 23 01:16:24.706725 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 23 01:16:24.706736 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 23 01:16:24.706749 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 23 01:16:24.706761 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 23 01:16:24.706771 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 23 01:16:24.706783 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 23 01:16:24.706792 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 23 01:16:24.706802 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 23 01:16:24.706814 kernel: TSC deadline timer available
Jan 23 01:16:24.706824 kernel: CPU topo: Max. logical packages: 1
Jan 23 01:16:24.706834 kernel: CPU topo: Max. logical dies: 1
Jan 23 01:16:24.706850 kernel: CPU topo: Max. dies per package: 1
Jan 23 01:16:24.706860 kernel: CPU topo: Max. threads per core: 1
Jan 23 01:16:24.706872 kernel: CPU topo: Num. cores per package: 4
Jan 23 01:16:24.706881 kernel: CPU topo: Num. threads per package: 4
Jan 23 01:16:24.706891 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Jan 23 01:16:24.706903 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 23 01:16:24.706913 kernel: kvm-guest: KVM setup pv remote TLB flush
Jan 23 01:16:24.706922 kernel: kvm-guest: setup PV sched yield
Jan 23 01:16:24.706934 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jan 23 01:16:24.706945 kernel: Booting paravirtualized kernel on KVM
Jan 23 01:16:24.706960 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 23 01:16:24.706969 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jan 23 01:16:24.706981 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Jan 23 01:16:24.706992 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Jan 23 01:16:24.707001 kernel: pcpu-alloc: [0] 0 1 2 3
Jan 23 01:16:24.707011 kernel: kvm-guest: PV spinlocks enabled
Jan 23 01:16:24.707023 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 23 01:16:24.707036 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=e8d7116310bea9a494780b8becdce41e7cc03ed509d8e2363e08981a47b3edc6
Jan 23 01:16:24.707051 kernel: random: crng init done
Jan 23 01:16:24.707061 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 23 01:16:24.707073 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 23 01:16:24.707083 kernel: Fallback order for Node 0: 0
Jan 23 01:16:24.707186 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938
Jan 23 01:16:24.707200 kernel: Policy zone: DMA32
Jan 23 01:16:24.707210 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 23 01:16:24.707221 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 23 01:16:24.707396 kernel: ftrace: allocating 40097 entries in 157 pages
Jan 23 01:16:24.707412 kernel: ftrace: allocated 157 pages with 5 groups
Jan 23 01:16:24.707423 kernel: Dynamic Preempt: voluntary
Jan 23 01:16:24.707435 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 23 01:16:24.707451 kernel: rcu: RCU event tracing is enabled.
Jan 23 01:16:24.707463 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 23 01:16:24.707474 kernel: Trampoline variant of Tasks RCU enabled.
Jan 23 01:16:24.707485 kernel: Rude variant of Tasks RCU enabled.
Jan 23 01:16:24.707495 kernel: Tracing variant of Tasks RCU enabled.
Jan 23 01:16:24.707504 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 23 01:16:24.707520 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 23 01:16:24.707532 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 23 01:16:24.707541 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 23 01:16:24.707553 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 23 01:16:24.707565 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jan 23 01:16:24.707577 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 23 01:16:24.707599 kernel: Console: colour VGA+ 80x25
Jan 23 01:16:24.707612 kernel: printk: legacy console [ttyS0] enabled
Jan 23 01:16:24.707622 kernel: ACPI: Core revision 20240827
Jan 23 01:16:24.707632 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 23 01:16:24.707644 kernel: APIC: Switch to symmetric I/O mode setup
Jan 23 01:16:24.710010 kernel: x2apic enabled
Jan 23 01:16:24.710030 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 23 01:16:24.710043 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jan 23 01:16:24.710055 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jan 23 01:16:24.710065 kernel: kvm-guest: setup PV IPIs
Jan 23 01:16:24.710082 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 23 01:16:24.710191 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x233fd5e8294, max_idle_ns: 440795237246 ns
Jan 23 01:16:24.710203 kernel: Calibrating delay loop (skipped) preset value.. 4890.84 BogoMIPS (lpj=2445424)
Jan 23 01:16:24.710213 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 23 01:16:24.710225 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 23 01:16:24.710396 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 23 01:16:24.710418 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 23 01:16:24.710428 kernel: Spectre V2 : Mitigation: Retpolines
Jan 23 01:16:24.710439 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 23 01:16:24.710455 kernel: Speculative Store Bypass: Vulnerable
Jan 23 01:16:24.710465 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 23 01:16:24.710478 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 23 01:16:24.710490 kernel: active return thunk: srso_alias_return_thunk
Jan 23 01:16:24.710502 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 23 01:16:24.710512 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Jan 23 01:16:24.710522 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 23 01:16:24.710534 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 23 01:16:24.710551 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 23 01:16:24.710561 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 23 01:16:24.710573 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 23 01:16:24.710584 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 23 01:16:24.710596 kernel: Freeing SMP alternatives memory: 32K
Jan 23 01:16:24.710606 kernel: pid_max: default: 32768 minimum: 301
Jan 23 01:16:24.710616 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jan 23 01:16:24.710629 kernel: landlock: Up and running.
Jan 23 01:16:24.710640 kernel: SELinux: Initializing.
Jan 23 01:16:24.710649 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 23 01:16:24.710666 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 23 01:16:24.710679 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Jan 23 01:16:24.710689 kernel: Performance Events: PMU not available due to virtualization, using software events only.
Jan 23 01:16:24.710699 kernel: signal: max sigframe size: 1776
Jan 23 01:16:24.710712 kernel: rcu: Hierarchical SRCU implementation.
Jan 23 01:16:24.710723 kernel: rcu: Max phase no-delay instances is 400.
Jan 23 01:16:24.710733 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jan 23 01:16:24.710746 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 23 01:16:24.710761 kernel: smp: Bringing up secondary CPUs ...
Jan 23 01:16:24.710772 kernel: smpboot: x86: Booting SMP configuration:
Jan 23 01:16:24.710782 kernel: .... node #0, CPUs: #1 #2 #3
Jan 23 01:16:24.710794 kernel: smp: Brought up 1 node, 4 CPUs
Jan 23 01:16:24.710806 kernel: smpboot: Total of 4 processors activated (19563.39 BogoMIPS)
Jan 23 01:16:24.710816 kernel: Memory: 2420712K/2571752K available (14336K kernel code, 2445K rwdata, 26064K rodata, 46196K init, 2564K bss, 145100K reserved, 0K cma-reserved)
Jan 23 01:16:24.710828 kernel: devtmpfs: initialized
Jan 23 01:16:24.710840 kernel: x86/mm: Memory block size: 128MB
Jan 23 01:16:24.710852 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 23 01:16:24.710866 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 23 01:16:24.710878 kernel: pinctrl core: initialized pinctrl subsystem
Jan 23 01:16:24.710890 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 23 01:16:24.710900 kernel: audit: initializing netlink subsys (disabled)
Jan 23 01:16:24.710913 kernel: audit: type=2000 audit(1769130970.542:1): state=initialized audit_enabled=0 res=1
Jan 23 01:16:24.710924 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 23 01:16:24.710936 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 23 01:16:24.710946 kernel: cpuidle: using governor menu
Jan 23 01:16:24.710956 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 23 01:16:24.710972 kernel: dca service started, version 1.12.1
Jan 23 01:16:24.710983 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Jan 23 01:16:24.710992 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jan 23 01:16:24.711005 kernel: PCI: Using configuration type 1 for base access
Jan 23 01:16:24.711017 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 23 01:16:24.711029 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 23 01:16:24.711038 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 23 01:16:24.711050 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 23 01:16:24.711062 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 23 01:16:24.711076 kernel: ACPI: Added _OSI(Module Device)
Jan 23 01:16:24.711088 kernel: ACPI: Added _OSI(Processor Device)
Jan 23 01:16:24.711199 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 23 01:16:24.711212 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 23 01:16:24.711223 kernel: ACPI: Interpreter enabled
Jan 23 01:16:24.711406 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 23 01:16:24.711420 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 23 01:16:24.711433 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 23 01:16:24.711527 kernel: PCI: Using E820 reservations for host bridge windows
Jan 23 01:16:24.711545 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 23 01:16:24.711556 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 23 01:16:24.712042 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 23 01:16:24.712639 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jan 23 01:16:24.712911 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jan 23 01:16:24.712928 kernel: PCI host bridge to bus 0000:00
Jan 23 01:16:24.713681 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 23 01:16:24.713855 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 23 01:16:24.714012 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 23 01:16:24.714778 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Jan 23 01:16:24.715222 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 23 01:16:24.715568 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Jan 23 01:16:24.715865 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 23 01:16:24.716712 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Jan 23 01:16:24.717034 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Jan 23 01:16:24.717521 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Jan 23 01:16:24.717706 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Jan 23 01:16:24.717887 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Jan 23 01:16:24.718063 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 23 01:16:24.718512 kernel: pci 0000:00:01.0: pci_fixup_video+0x0/0x100 took 12695 usecs
Jan 23 01:16:24.718716 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jan 23 01:16:24.718896 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df]
Jan 23 01:16:24.719073 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Jan 23 01:16:24.719521 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Jan 23 01:16:24.719930 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jan 23 01:16:24.720216 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f]
Jan 23 01:16:24.720837 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Jan 23 01:16:24.721027 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Jan 23 01:16:24.721753 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 23 01:16:24.722832 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff]
Jan 23 01:16:24.723017 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff]
Jan 23 01:16:24.723780 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref]
Jan 23 01:16:24.723963 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Jan 23 01:16:24.724470 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Jan 23 01:16:24.724660 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 23 01:16:24.724839 kernel: pci 0000:00:1f.0: quirk_ich7_lpc+0x0/0xc0 took 10742 usecs
Jan 23 01:16:24.725036 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Jan 23 01:16:24.725488 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f]
Jan 23 01:16:24.725666 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff]
Jan 23 01:16:24.726409 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Jan 23 01:16:24.726691 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Jan 23 01:16:24.726709 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 23 01:16:24.726722 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 23 01:16:24.726734 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 23 01:16:24.726744 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 23 01:16:24.726756 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 23 01:16:24.726767 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 23 01:16:24.726779 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 23 01:16:24.726790 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 23 01:16:24.726804 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 23 01:16:24.726818 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 23 01:16:24.726829 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 23 01:16:24.726838 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 23 01:16:24.726851 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 23 01:16:24.726862 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 23 01:16:24.726874 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 23 01:16:24.726884 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 23 01:16:24.726895 kernel: iommu: Default domain type: Translated
Jan 23 01:16:24.726912 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 23 01:16:24.726921 kernel: PCI: Using ACPI for IRQ routing
Jan 23 01:16:24.726934 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 23 01:16:24.726945 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 23 01:16:24.726958 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Jan 23 01:16:24.727609 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 23 01:16:24.727790 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 23 01:16:24.727965 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 23 01:16:24.727983 kernel: vgaarb: loaded
Jan 23 01:16:24.728001 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 23 01:16:24.728014 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 23 01:16:24.728024 kernel: clocksource: Switched to clocksource kvm-clock
Jan 23 01:16:24.728034 kernel: VFS: Disk quotas dquot_6.6.0
Jan 23 01:16:24.728047 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 23 01:16:24.728059 kernel: pnp: PnP ACPI init
Jan 23 01:16:24.729018 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Jan 23 01:16:24.729040 kernel: pnp: PnP ACPI: found 6 devices
Jan 23 01:16:24.729058 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 23 01:16:24.729072 kernel: NET: Registered PF_INET protocol family
Jan 23 01:16:24.729082 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 23 01:16:24.729188 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 23 01:16:24.729202 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 23 01:16:24.729212 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 23 01:16:24.729223 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 23 01:16:24.729403 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 23 01:16:24.729415 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 23 01:16:24.729433 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 23 01:16:24.729445 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 23 01:16:24.729455 kernel: NET: Registered PF_XDP protocol family
Jan 23 01:16:24.730026 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 23 01:16:24.730517 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 23 01:16:24.730689 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 23 01:16:24.730997 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Jan 23 01:16:24.731436 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jan 23 01:16:24.731610 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Jan 23 01:16:24.731629 kernel: PCI: CLS 0 bytes, default 64
Jan 23 01:16:24.731640 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x233fd5e8294, max_idle_ns: 440795237246 ns
Jan 23 01:16:24.731652 kernel: Initialise system trusted keyrings
Jan 23 01:16:24.731664 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 23 01:16:24.731676 kernel: Key type asymmetric registered
Jan 23 01:16:24.731687 kernel: Asymmetric key parser 'x509' registered
Jan 23 01:16:24.731697 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 23 01:16:24.731709 kernel: io scheduler mq-deadline registered
Jan 23 01:16:24.731725 kernel: io scheduler kyber registered
Jan 23 01:16:24.731736 kernel: io scheduler bfq registered
Jan 23 01:16:24.731748 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 23 01:16:24.731761 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 23 01:16:24.731773 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 23 01:16:24.731783 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jan 23 01:16:24.731795 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 23 01:16:24.731806 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 23 01:16:24.731816 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 23 01:16:24.731833 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 23 01:16:24.731845 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 23 01:16:24.732625 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 23 01:16:24.732648 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 23 01:16:24.732818 kernel: rtc_cmos 00:04: registered as rtc0
Jan 23 01:16:24.732989 kernel: rtc_cmos 00:04: setting system clock to 2026-01-23T01:16:22 UTC (1769130982)
Jan 23 01:16:24.733490 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jan 23 01:16:24.733510 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 23 01:16:24.733530 kernel: NET: Registered PF_INET6 protocol family
Jan 23 01:16:24.733540 kernel: Segment Routing with IPv6
Jan 23 01:16:24.733550 kernel: In-situ OAM (IOAM) with IPv6
Jan 23 01:16:24.733562 kernel: NET: Registered PF_PACKET protocol family
Jan 23 01:16:24.733574 kernel: Key type dns_resolver registered
Jan 23 01:16:24.733584 kernel: IPI shorthand broadcast: enabled
Jan 23 01:16:24.733596 kernel: sched_clock: Marking stable (9248060700, 2796418602)->(13119545686, -1075066384)
Jan 23 01:16:24.733608 kernel: registered taskstats version 1
Jan 23 01:16:24.733620 kernel: Loading compiled-in X.509 certificates
Jan 23 01:16:24.733635 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.66-flatcar: ed54f39d0282729985c39b8ffa9938cacff38d8a'
Jan 23 01:16:24.733645 kernel: Demotion targets for Node 0: null
Jan 23 01:16:24.733658 kernel: Key type .fscrypt registered
Jan 23 01:16:24.733668 kernel: Key type fscrypt-provisioning registered
Jan 23 01:16:24.733678 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 23 01:16:24.733691 kernel: ima: Allocated hash algorithm: sha1
Jan 23 01:16:24.733702 kernel: ima: No architecture policies found
Jan 23 01:16:24.733714 kernel: clk: Disabling unused clocks
Jan 23 01:16:24.733724 kernel: Warning: unable to open an initial console.
Jan 23 01:16:24.733741 kernel: Freeing unused kernel image (initmem) memory: 46196K
Jan 23 01:16:24.733752 kernel: Write protecting the kernel read-only data: 40960k
Jan 23 01:16:24.733762 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K
Jan 23 01:16:24.733774 kernel: Run /init as init process
Jan 23 01:16:24.733786 kernel: with arguments:
Jan 23 01:16:24.733798 kernel: /init
Jan 23 01:16:24.733807 kernel: with environment:
Jan 23 01:16:24.733818 kernel: HOME=/
Jan 23 01:16:24.733830 kernel: TERM=linux
Jan 23 01:16:24.733845 systemd[1]: Successfully made /usr/ read-only.
Jan 23 01:16:24.733863 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 23 01:16:24.733876 systemd[1]: Detected virtualization kvm.
Jan 23 01:16:24.733888 systemd[1]: Detected architecture x86-64.
Jan 23 01:16:24.733899 systemd[1]: Running in initrd.
Jan 23 01:16:24.733912 systemd[1]: No hostname configured, using default hostname.
Jan 23 01:16:24.733922 systemd[1]: Hostname set to .
Jan 23 01:16:24.733940 systemd[1]: Initializing machine ID from VM UUID.
Jan 23 01:16:24.733968 systemd[1]: Queued start job for default target initrd.target.
Jan 23 01:16:24.733982 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 23 01:16:24.733997 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 23 01:16:24.734008 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 23 01:16:24.734020 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 23 01:16:24.734037 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 23 01:16:24.734052 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 23 01:16:24.734064 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 23 01:16:24.734077 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 23 01:16:24.734090 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 23 01:16:24.734201 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 23 01:16:24.734212 systemd[1]: Reached target paths.target - Path Units.
Jan 23 01:16:24.734393 systemd[1]: Reached target slices.target - Slice Units.
Jan 23 01:16:24.734408 systemd[1]: Reached target swap.target - Swaps.
Jan 23 01:16:24.734419 systemd[1]: Reached target timers.target - Timer Units.
Jan 23 01:16:24.734433 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 23 01:16:24.734445 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 23 01:16:24.734456 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 23 01:16:24.734470 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jan 23 01:16:24.734482 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 23 01:16:24.734500 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 23 01:16:24.734512 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 23 01:16:24.734798 systemd[1]: Reached target sockets.target - Socket Units.
Jan 23 01:16:24.734826 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 23 01:16:24.734838 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 23 01:16:24.734851 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 23 01:16:24.734864 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jan 23 01:16:24.734875 systemd[1]: Starting systemd-fsck-usr.service...
Jan 23 01:16:24.734888 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 23 01:16:24.734905 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 23 01:16:24.734917 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 01:16:24.734931 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 23 01:16:24.735085 systemd-journald[203]: Collecting audit messages is disabled.
Jan 23 01:16:24.735216 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 23 01:16:24.735404 systemd[1]: Finished systemd-fsck-usr.service.
Jan 23 01:16:24.735420 systemd-journald[203]: Journal started
Jan 23 01:16:24.735445 systemd-journald[203]: Runtime Journal (/run/log/journal/bb668c3b1e7142a2ae652acccc983a5b) is 6M, max 48.3M, 42.2M free.
Jan 23 01:16:24.744488 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 23 01:16:24.738829 systemd-modules-load[204]: Inserted module 'overlay'
Jan 23 01:16:24.802571 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 23 01:16:24.810847 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 23 01:16:24.898058 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 23 01:16:24.905497 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 23 01:16:24.990022 systemd-tmpfiles[215]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jan 23 01:16:25.923579 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 23 01:16:25.923935 kernel: Bridge firewalling registered
Jan 23 01:16:25.013927 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 23 01:16:25.017929 systemd-modules-load[204]: Inserted module 'br_netfilter'
Jan 23 01:16:25.993733 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 23 01:16:26.009036 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 01:16:26.034227 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 23 01:16:26.098546 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 23 01:16:26.132552 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 23 01:16:26.299430 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 23 01:16:26.310878 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 23 01:16:26.351971 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 23 01:16:26.383593 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 23 01:16:26.533809 dracut-cmdline[244]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=e8d7116310bea9a494780b8becdce41e7cc03ed509d8e2363e08981a47b3edc6
Jan 23 01:16:26.534838 systemd-resolved[240]: Positive Trust Anchors:
Jan 23 01:16:26.534856 systemd-resolved[240]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 23 01:16:26.534892 systemd-resolved[240]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 23 01:16:26.544204 systemd-resolved[240]: Defaulting to hostname 'linux'.
Jan 23 01:16:26.548212 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 23 01:16:26.616474 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 23 01:16:27.149819 kernel: hrtimer: interrupt took 4228676 ns
Jan 23 01:16:27.566814 kernel: SCSI subsystem initialized
Jan 23 01:16:27.616041 kernel: Loading iSCSI transport class v2.0-870.
Jan 23 01:16:27.708627 kernel: iscsi: registered transport (tcp)
Jan 23 01:16:27.802602 kernel: iscsi: registered transport (qla4xxx)
Jan 23 01:16:27.802690 kernel: QLogic iSCSI HBA Driver
Jan 23 01:16:27.952822 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 23 01:16:28.054105 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 23 01:16:28.106491 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 23 01:16:28.465998 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 23 01:16:28.509378 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 23 01:16:28.784754 kernel: raid6: avx2x4 gen() 12820 MB/s
Jan 23 01:16:28.805820 kernel: raid6: avx2x2 gen() 18109 MB/s
Jan 23 01:16:28.835042 kernel: raid6: avx2x1 gen() 7265 MB/s
Jan 23 01:16:28.835218 kernel: raid6: using algorithm avx2x2 gen() 18109 MB/s
Jan 23 01:16:28.864873 kernel: raid6: .... xor() 13447 MB/s, rmw enabled
Jan 23 01:16:28.864962 kernel: raid6: using avx2x2 recovery algorithm
Jan 23 01:16:28.985450 kernel: xor: automatically using best checksumming function avx
Jan 23 01:16:30.112753 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 23 01:16:30.157512 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 23 01:16:30.187217 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 23 01:16:30.315561 systemd-udevd[453]: Using default interface naming scheme 'v255'.
Jan 23 01:16:30.328659 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 23 01:16:30.348484 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 23 01:16:30.492945 dracut-pre-trigger[456]: rd.md=0: removing MD RAID activation
Jan 23 01:16:30.694742 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 23 01:16:30.724429 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 23 01:16:30.959077 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 23 01:16:31.000827 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 23 01:16:31.130954 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Jan 23 01:16:31.175848 kernel: cryptd: max_cpu_qlen set to 1000
Jan 23 01:16:31.202541 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 23 01:16:31.227999 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jan 23 01:16:31.202705 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 01:16:31.240798 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 01:16:31.316878 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 23 01:16:31.316918 kernel: GPT:9289727 != 19775487
Jan 23 01:16:31.316955 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 23 01:16:31.316970 kernel: GPT:9289727 != 19775487
Jan 23 01:16:31.316982 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 23 01:16:31.316998 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 23 01:16:31.257874 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 01:16:31.329650 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jan 23 01:16:31.439753 kernel: libata version 3.00 loaded.
Jan 23 01:16:31.482869 kernel: ahci 0000:00:1f.2: version 3.0
Jan 23 01:16:31.484097 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jan 23 01:16:31.533089 kernel: AES CTR mode by8 optimization enabled
Jan 23 01:16:31.533383 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Jan 23 01:16:31.533656 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Jan 23 01:16:31.533860 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jan 23 01:16:31.562578 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Jan 23 01:16:31.603636 kernel: scsi host0: ahci
Jan 23 01:16:31.605628 kernel: scsi host1: ahci
Jan 23 01:16:31.606812 kernel: scsi host2: ahci
Jan 23 01:16:31.610606 kernel: scsi host3: ahci
Jan 23 01:16:31.614401 kernel: scsi host4: ahci
Jan 23 01:16:31.635574 kernel: scsi host5: ahci
Jan 23 01:16:31.635852 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 lpm-pol 1
Jan 23 01:16:31.635871 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 lpm-pol 1
Jan 23 01:16:31.635884 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 lpm-pol 1
Jan 23 01:16:31.635900 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 lpm-pol 1
Jan 23 01:16:31.635915 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 lpm-pol 1
Jan 23 01:16:31.635928 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 lpm-pol 1
Jan 23 01:16:31.661081 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 23 01:16:32.698738 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jan 23 01:16:32.698791 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jan 23 01:16:32.698810 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jan 23 01:16:32.698828 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jan 23 01:16:32.698844 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jan 23 01:16:32.698858 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Jan 23 01:16:32.698874 kernel: ata3.00: LPM support broken, forcing max_power
Jan 23 01:16:32.698893 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jan 23 01:16:32.698915 kernel: ata3.00: applying bridge limits
Jan 23 01:16:32.698934 kernel: ata3.00: LPM support broken, forcing max_power
Jan 23 01:16:32.698951 kernel: ata3.00: configured for UDMA/100
Jan 23 01:16:32.698969 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Jan 23 01:16:32.700052 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jan 23 01:16:32.700973 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 23 01:16:32.700990 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Jan 23 01:16:32.736613 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 23 01:16:32.754015 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 01:16:32.806841 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 23 01:16:32.840840 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 23 01:16:32.904628 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 23 01:16:32.940120 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 23 01:16:33.024670 disk-uuid[623]: Primary Header is updated.
Jan 23 01:16:33.024670 disk-uuid[623]: Secondary Entries is updated.
Jan 23 01:16:33.024670 disk-uuid[623]: Secondary Header is updated.
Jan 23 01:16:33.066715 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 23 01:16:33.103772 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 23 01:16:33.380225 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 23 01:16:33.428797 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 23 01:16:33.445557 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 23 01:16:33.476099 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 23 01:16:33.515657 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 23 01:16:33.617480 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 23 01:16:34.122498 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 23 01:16:34.159483 disk-uuid[624]: The operation has completed successfully.
Jan 23 01:16:34.302749 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 23 01:16:34.303058 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 23 01:16:34.347868 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 23 01:16:34.433481 sh[648]: Success
Jan 23 01:16:34.541943 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 23 01:16:34.542034 kernel: device-mapper: uevent: version 1.0.3
Jan 23 01:16:34.542660 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Jan 23 01:16:34.625695 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Jan 23 01:16:34.766071 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 23 01:16:34.792577 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 23 01:16:34.843757 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 23 01:16:34.887640 kernel: BTRFS: device fsid f8eb2396-46b8-49a3-a8e7-cd8ad10a3ce4 devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (660)
Jan 23 01:16:34.904006 kernel: BTRFS info (device dm-0): first mount of filesystem f8eb2396-46b8-49a3-a8e7-cd8ad10a3ce4
Jan 23 01:16:34.904066 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 23 01:16:34.989858 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 23 01:16:34.989956 kernel: BTRFS info (device dm-0): enabling free space tree
Jan 23 01:16:35.007529 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 23 01:16:35.027089 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Jan 23 01:16:35.049635 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 23 01:16:35.076805 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 23 01:16:35.114497 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 23 01:16:35.235587 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (693)
Jan 23 01:16:35.262472 kernel: BTRFS info (device vda6): first mount of filesystem a3ccc207-e674-4ba2-b6d8-404b4581ba01
Jan 23 01:16:35.262541 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 23 01:16:35.317622 kernel: BTRFS info (device vda6): turning on async discard
Jan 23 01:16:35.317706 kernel: BTRFS info (device vda6): enabling free space tree
Jan 23 01:16:35.350762 kernel: BTRFS info (device vda6): last unmount of filesystem a3ccc207-e674-4ba2-b6d8-404b4581ba01
Jan 23 01:16:35.365671 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 23 01:16:35.401963 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 23 01:16:36.507895 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 23 01:16:36.591644 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 23 01:16:37.067727 systemd-networkd[829]: lo: Link UP
Jan 23 01:16:37.082637 systemd-networkd[829]: lo: Gained carrier
Jan 23 01:16:37.085834 systemd-networkd[829]: Enumeration completed
Jan 23 01:16:37.085991 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 23 01:16:37.091507 systemd-networkd[829]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 23 01:16:37.091515 systemd-networkd[829]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 23 01:16:37.120051 systemd-networkd[829]: eth0: Link UP
Jan 23 01:16:37.123981 systemd-networkd[829]: eth0: Gained carrier
Jan 23 01:16:37.124005 systemd-networkd[829]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 23 01:16:37.182736 systemd[1]: Reached target network.target - Network.
Jan 23 01:16:37.322467 systemd-networkd[829]: eth0: DHCPv4 address 10.0.0.71/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 23 01:16:38.331809 ignition[758]: Ignition 2.22.0
Jan 23 01:16:38.331902 ignition[758]: Stage: fetch-offline
Jan 23 01:16:38.332508 ignition[758]: no configs at "/usr/lib/ignition/base.d"
Jan 23 01:16:38.332526 ignition[758]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 23 01:16:38.333574 ignition[758]: parsed url from cmdline: ""
Jan 23 01:16:38.333581 ignition[758]: no config URL provided
Jan 23 01:16:38.333670 ignition[758]: reading system config file "/usr/lib/ignition/user.ign"
Jan 23 01:16:38.333685 ignition[758]: no config at "/usr/lib/ignition/user.ign"
Jan 23 01:16:38.387550 ignition[758]: op(1): [started] loading QEMU firmware config module
Jan 23 01:16:38.387560 ignition[758]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jan 23 01:16:39.018046 systemd-networkd[829]: eth0: Gained IPv6LL
Jan 23 01:16:39.060860 ignition[758]: op(1): [finished] loading QEMU firmware config module
Jan 23 01:16:40.413048 ignition[758]: parsing config with SHA512: 98e62d09a88d48ba2eaa08ce98fefc92380e5ed4857d9818a2a200dc878bc1be68809014a924e2dd289b2ae808356a7f67332b17749d90eed764358aee6c1e1c
Jan 23 01:16:40.456883 unknown[758]: fetched base config from "system"
Jan 23 01:16:40.456902 unknown[758]: fetched user config from "qemu"
Jan 23 01:16:40.461895 ignition[758]: fetch-offline: fetch-offline passed
Jan 23 01:16:40.482575 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 23 01:16:40.462054 ignition[758]: Ignition finished successfully
Jan 23 01:16:40.492774 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jan 23 01:16:40.495414 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 23 01:16:40.832128 ignition[842]: Ignition 2.22.0
Jan 23 01:16:40.832659 ignition[842]: Stage: kargs
Jan 23 01:16:40.833824 ignition[842]: no configs at "/usr/lib/ignition/base.d"
Jan 23 01:16:40.851935 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 23 01:16:40.833841 ignition[842]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 23 01:16:40.885507 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 23 01:16:40.841388 ignition[842]: kargs: kargs passed
Jan 23 01:16:40.841468 ignition[842]: Ignition finished successfully
Jan 23 01:16:41.028767 ignition[851]: Ignition 2.22.0
Jan 23 01:16:41.028866 ignition[851]: Stage: disks
Jan 23 01:16:41.029062 ignition[851]: no configs at "/usr/lib/ignition/base.d"
Jan 23 01:16:41.029078 ignition[851]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 23 01:16:41.068601 ignition[851]: disks: disks passed
Jan 23 01:16:41.068692 ignition[851]: Ignition finished successfully
Jan 23 01:16:41.097480 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 23 01:16:41.108694 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 23 01:16:41.127848 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 23 01:16:41.128052 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 23 01:16:41.190542 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 23 01:16:41.201698 systemd[1]: Reached target basic.target - Basic System.
Jan 23 01:16:41.231513 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 23 01:16:41.357827 systemd-fsck[861]: ROOT: clean, 15/553520 files, 52789/553472 blocks Jan 23 01:16:41.382714 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 23 01:16:41.420064 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 23 01:16:42.394222 kernel: EXT4-fs (vda9): mounted filesystem 2036722e-4586-420e-8dc7-a3b65e840c36 r/w with ordered data mode. Quota mode: none. Jan 23 01:16:42.407498 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 23 01:16:42.420736 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 23 01:16:42.459874 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 23 01:16:42.481733 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 23 01:16:42.482719 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 23 01:16:42.482788 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 23 01:16:42.482827 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 23 01:16:42.588444 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 23 01:16:42.616891 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 23 01:16:42.646505 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (870) Jan 23 01:16:42.673387 kernel: BTRFS info (device vda6): first mount of filesystem a3ccc207-e674-4ba2-b6d8-404b4581ba01 Jan 23 01:16:42.673468 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 23 01:16:42.732127 kernel: BTRFS info (device vda6): turning on async discard Jan 23 01:16:42.732424 kernel: BTRFS info (device vda6): enabling free space tree Jan 23 01:16:42.740016 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 23 01:16:43.039034 initrd-setup-root[895]: cut: /sysroot/etc/passwd: No such file or directory Jan 23 01:16:43.082474 initrd-setup-root[902]: cut: /sysroot/etc/group: No such file or directory Jan 23 01:16:43.125063 initrd-setup-root[909]: cut: /sysroot/etc/shadow: No such file or directory Jan 23 01:16:43.166209 initrd-setup-root[916]: cut: /sysroot/etc/gshadow: No such file or directory Jan 23 01:16:44.016487 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 23 01:16:44.040846 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 23 01:16:44.077406 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 23 01:16:44.157129 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 23 01:16:44.198097 kernel: BTRFS info (device vda6): last unmount of filesystem a3ccc207-e674-4ba2-b6d8-404b4581ba01 Jan 23 01:16:44.300956 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Jan 23 01:16:44.495993 ignition[986]: INFO : Ignition 2.22.0
Jan 23 01:16:44.516105 ignition[986]: INFO : Stage: mount
Jan 23 01:16:44.516105 ignition[986]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 23 01:16:44.516105 ignition[986]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 23 01:16:44.562526 ignition[986]: INFO : mount: mount passed
Jan 23 01:16:44.562526 ignition[986]: INFO : Ignition finished successfully
Jan 23 01:16:44.569913 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 23 01:16:44.616089 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 23 01:16:44.814918 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 23 01:16:44.906531 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (998)
Jan 23 01:16:44.921053 kernel: BTRFS info (device vda6): first mount of filesystem a3ccc207-e674-4ba2-b6d8-404b4581ba01
Jan 23 01:16:44.921115 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 23 01:16:45.000574 kernel: BTRFS info (device vda6): turning on async discard
Jan 23 01:16:45.000656 kernel: BTRFS info (device vda6): enabling free space tree
Jan 23 01:16:45.005088 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 23 01:16:45.239084 ignition[1015]: INFO : Ignition 2.22.0
Jan 23 01:16:45.239084 ignition[1015]: INFO : Stage: files
Jan 23 01:16:45.239084 ignition[1015]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 23 01:16:45.239084 ignition[1015]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 23 01:16:45.351379 ignition[1015]: DEBUG : files: compiled without relabeling support, skipping
Jan 23 01:16:45.389006 ignition[1015]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 23 01:16:45.389006 ignition[1015]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 23 01:16:45.419408 ignition[1015]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 23 01:16:45.419408 ignition[1015]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 23 01:16:45.454944 unknown[1015]: wrote ssh authorized keys file for user: core
Jan 23 01:16:45.483970 ignition[1015]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 23 01:16:45.508785 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Jan 23 01:16:45.508785 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Jan 23 01:16:45.819834 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 23 01:16:46.726978 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Jan 23 01:16:46.751880 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 23 01:16:46.751880 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Jan 23 01:16:47.182055 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jan 23 01:16:49.925483 kernel: clocksource: Long readout interval, skipping watchdog check: cs_nsec: 1905897670 wd_nsec: 1905896962
Jan 23 01:16:50.695030 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 23 01:16:50.746479 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jan 23 01:16:50.746479 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jan 23 01:16:50.746479 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 23 01:16:50.746479 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 23 01:16:50.746479 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 23 01:16:50.746479 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 23 01:16:50.746479 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 23 01:16:50.746479 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 23 01:16:50.746479 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 23 01:16:50.746479 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 23 01:16:50.746479 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jan 23 01:16:51.157021 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jan 23 01:16:51.157021 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jan 23 01:16:51.157021 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1
Jan 23 01:16:51.279531 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jan 23 01:17:00.350997 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jan 23 01:17:00.393986 ignition[1015]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jan 23 01:17:00.393986 ignition[1015]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 23 01:17:00.393986 ignition[1015]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 23 01:17:00.393986 ignition[1015]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jan 23 01:17:00.393986 ignition[1015]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Jan 23 01:17:00.393986 ignition[1015]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 23 01:17:00.393986 ignition[1015]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 23 01:17:00.393986 ignition[1015]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Jan 23 01:17:00.393986 ignition[1015]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Jan 23 01:17:00.596385 ignition[1015]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jan 23 01:17:00.623908 ignition[1015]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jan 23 01:17:00.623908 ignition[1015]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Jan 23 01:17:00.623908 ignition[1015]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Jan 23 01:17:00.623908 ignition[1015]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Jan 23 01:17:00.740449 ignition[1015]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 23 01:17:00.740449 ignition[1015]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 23 01:17:00.740449 ignition[1015]: INFO : files: files passed
Jan 23 01:17:00.740449 ignition[1015]: INFO : Ignition finished successfully
Jan 23 01:17:00.634474 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 23 01:17:00.667959 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 23 01:17:00.711156 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 23 01:17:00.858593 initrd-setup-root-after-ignition[1042]: grep: /sysroot/oem/oem-release: No such file or directory
Jan 23 01:17:00.821436 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 23 01:17:00.904597 initrd-setup-root-after-ignition[1044]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 23 01:17:00.904597 initrd-setup-root-after-ignition[1044]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 23 01:17:00.841823 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 23 01:17:00.905000 initrd-setup-root-after-ignition[1048]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 23 01:17:00.904715 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 23 01:17:00.961442 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 23 01:17:00.961685 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 23 01:17:01.110480 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 23 01:17:01.110791 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 23 01:17:01.136925 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 23 01:17:01.148968 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 23 01:17:01.172742 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
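The whole files stage above is driven by the user config fetched from qemu_fw_cfg earlier. A hypothetical reconstruction of a fragment of that config, written as a Python dict and serialized with the stdlib json module; only the paths, URLs, and enable/disable decisions visible in the log are real, and every other field (spec version, unit contents) is an assumption:

    import json

    config = {
        "ignition": {"version": "3.4.0"},  # assumed spec version
        "storage": {
            "files": [
                {
                    "path": "/opt/helm-v3.17.3-linux-amd64.tar.gz",
                    "contents": {
                        "source": "https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz"
                    },
                }
            ],
            "links": [
                {
                    "path": "/etc/extensions/kubernetes.raw",
                    "target": "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw",
                }
            ],
        },
        "systemd": {
            "units": [
                # op(12) set prepare-helm.service's preset to enabled.
                {"name": "prepare-helm.service", "enabled": True},
                # op(10)/op(11) removed the enablement symlinks for this one.
                {"name": "coreos-metadata.service", "enabled": False},
            ]
        },
    }

    print(json.dumps(config, indent=2))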
Jan 23 01:17:01.209580 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 23 01:17:01.290927 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 23 01:17:01.321085 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 23 01:17:01.402722 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 23 01:17:01.418539 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 23 01:17:01.431054 systemd[1]: Stopped target timers.target - Timer Units.
Jan 23 01:17:01.450664 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 23 01:17:01.450936 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 23 01:17:01.517103 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 23 01:17:01.561124 systemd[1]: Stopped target basic.target - Basic System.
Jan 23 01:17:01.594851 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 23 01:17:01.640958 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 23 01:17:01.680780 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 23 01:17:01.700173 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Jan 23 01:17:01.743656 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 23 01:17:01.758923 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 23 01:17:01.786942 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 23 01:17:01.816539 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 23 01:17:01.844468 systemd[1]: Stopped target swap.target - Swaps.
Jan 23 01:17:01.887047 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 23 01:17:01.895959 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 23 01:17:01.944162 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 23 01:17:01.986110 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 23 01:17:02.018593 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 23 01:17:02.023554 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 23 01:17:02.061497 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 23 01:17:02.061905 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 23 01:17:02.118512 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 23 01:17:02.119024 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 23 01:17:02.139578 systemd[1]: Stopped target paths.target - Path Units.
Jan 23 01:17:02.197571 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 23 01:17:02.202066 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 23 01:17:02.205106 systemd[1]: Stopped target slices.target - Slice Units.
Jan 23 01:17:02.248023 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 23 01:17:02.264618 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 23 01:17:02.264754 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 23 01:17:02.279043 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 23 01:17:02.279170 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 23 01:17:02.309166 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 23 01:17:02.309803 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 23 01:17:02.331531 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 23 01:17:02.332000 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 23 01:17:02.356960 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 23 01:17:02.453767 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 23 01:17:02.462439 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 23 01:17:02.462617 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 23 01:17:02.492764 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 23 01:17:02.492935 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 23 01:17:02.555964 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 23 01:17:02.557459 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 23 01:17:02.637109 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 23 01:17:02.661012 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 23 01:17:02.661662 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 23 01:17:02.702721 ignition[1070]: INFO : Ignition 2.22.0
Jan 23 01:17:02.702721 ignition[1070]: INFO : Stage: umount
Jan 23 01:17:02.702721 ignition[1070]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 23 01:17:02.702721 ignition[1070]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 23 01:17:02.702721 ignition[1070]: INFO : umount: umount passed
Jan 23 01:17:02.702721 ignition[1070]: INFO : Ignition finished successfully
Jan 23 01:17:02.715028 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 23 01:17:02.715633 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 23 01:17:02.745907 systemd[1]: Stopped target network.target - Network.
Jan 23 01:17:02.795693 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 23 01:17:02.796590 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 23 01:17:02.819137 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 23 01:17:02.822740 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 23 01:17:02.848534 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 23 01:17:02.848720 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 23 01:17:02.872876 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 23 01:17:02.874880 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 23 01:17:02.918495 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 23 01:17:02.918633 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 23 01:17:02.931007 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 23 01:17:02.956777 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 23 01:17:03.028511 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 23 01:17:03.028741 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 23 01:17:03.092425 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Jan 23 01:17:03.092999 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 23 01:17:03.094139 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 23 01:17:03.133024 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Jan 23 01:17:03.135563 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Jan 23 01:17:03.147494 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 23 01:17:03.147577 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 23 01:17:03.196103 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 23 01:17:03.204969 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 23 01:17:03.205091 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 23 01:17:03.237955 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 23 01:17:03.238056 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 23 01:17:03.300077 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 23 01:17:03.300548 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 23 01:17:03.308865 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 23 01:17:03.308950 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 23 01:17:03.352644 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 23 01:17:03.363693 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jan 23 01:17:03.363769 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jan 23 01:17:03.515845 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 23 01:17:03.530855 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 23 01:17:03.556692 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 23 01:17:03.556840 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 23 01:17:03.585877 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 23 01:17:03.585948 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 23 01:17:03.613812 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 23 01:17:03.613915 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 23 01:17:03.641615 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 23 01:17:03.641722 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 23 01:17:03.667978 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 23 01:17:03.668082 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 23 01:17:03.709492 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 23 01:17:03.715107 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Jan 23 01:17:03.715178 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Jan 23 01:17:03.760595 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 23 01:17:03.760709 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 23 01:17:03.787599 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jan 23 01:17:03.787696 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 23 01:17:03.812983 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 23 01:17:03.813072 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 23 01:17:03.841871 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 23 01:17:03.841998 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 01:17:03.887014 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Jan 23 01:17:03.887106 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully.
Jan 23 01:17:03.887166 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jan 23 01:17:03.888014 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jan 23 01:17:03.888713 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 23 01:17:03.888842 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 23 01:17:03.896500 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 23 01:17:03.896658 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 23 01:17:03.937724 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 23 01:17:03.988940 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 23 01:17:04.089021 systemd[1]: Switching root.
Jan 23 01:17:04.167017 systemd-journald[203]: Journal stopped
Jan 23 01:17:09.590080 systemd-journald[203]: Received SIGTERM from PID 1 (systemd).
Jan 23 01:17:09.590163 kernel: SELinux: policy capability network_peer_controls=1
Jan 23 01:17:09.590186 kernel: SELinux: policy capability open_perms=1
Jan 23 01:17:09.590712 kernel: SELinux: policy capability extended_socket_class=1
Jan 23 01:17:09.590731 kernel: SELinux: policy capability always_check_network=0
Jan 23 01:17:09.590746 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 23 01:17:09.590761 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 23 01:17:09.590779 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 23 01:17:09.590792 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 23 01:17:09.590811 kernel: SELinux: policy capability userspace_initial_context=0
Jan 23 01:17:09.590826 kernel: audit: type=1403 audit(1769131024.714:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 23 01:17:09.590852 systemd[1]: Successfully loaded SELinux policy in 231.131ms.
Jan 23 01:17:09.590881 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 22.769ms.
Jan 23 01:17:09.590898 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 23 01:17:09.590915 systemd[1]: Detected virtualization kvm.
Jan 23 01:17:09.590931 systemd[1]: Detected architecture x86-64.
Jan 23 01:17:09.590947 systemd[1]: Detected first boot.
Jan 23 01:17:09.590964 systemd[1]: Initializing machine ID from VM UUID.
Jan 23 01:17:09.590987 zram_generator::config[1114]: No configuration found.
Jan 23 01:17:09.591434 kernel: Guest personality initialized and is inactive
Jan 23 01:17:09.591457 kernel: VMCI host device registered (name=vmci, major=10, minor=258)
Jan 23 01:17:09.591473 kernel: Initialized host personality
Jan 23 01:17:09.591600 kernel: NET: Registered PF_VSOCK protocol family
Jan 23 01:17:09.591621 systemd[1]: Populated /etc with preset unit settings.
Jan 23 01:17:09.591638 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Jan 23 01:17:09.591656 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 23 01:17:09.591677 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 23 01:17:09.591696 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 23 01:17:09.591712 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 23 01:17:09.591728 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 23 01:17:09.591747 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 23 01:17:09.591762 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 23 01:17:09.591778 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 23 01:17:09.591793 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 23 01:17:09.591811 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 23 01:17:09.591830 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 23 01:17:09.591845 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 23 01:17:09.591860 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 23 01:17:09.591877 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 23 01:17:09.591894 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 23 01:17:09.591913 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 23 01:17:09.591929 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 23 01:17:09.591949 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 23 01:17:09.591966 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 23 01:17:09.591983 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 23 01:17:09.591999 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 23 01:17:09.592016 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 23 01:17:09.592033 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 23 01:17:09.592051 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 23 01:17:09.592067 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 23 01:17:09.592084 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 23 01:17:09.594488 systemd[1]: Reached target slices.target - Slice Units.
Jan 23 01:17:09.594507 systemd[1]: Reached target swap.target - Swaps.
Jan 23 01:17:09.594518 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 23 01:17:09.594529 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 23 01:17:09.594540 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jan 23 01:17:09.594551 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 23 01:17:09.594562 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 23 01:17:09.594572 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 23 01:17:09.594589 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 23 01:17:09.594601 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 23 01:17:09.594615 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 23 01:17:09.594626 systemd[1]: Mounting media.mount - External Media Directory...
Jan 23 01:17:09.594637 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 23 01:17:09.594647 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 23 01:17:09.594658 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 23 01:17:09.594669 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 23 01:17:09.594680 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 23 01:17:09.594691 systemd[1]: Reached target machines.target - Containers.
Jan 23 01:17:09.594706 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 23 01:17:09.594717 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 23 01:17:09.594727 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 23 01:17:09.594738 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 23 01:17:09.594748 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 23 01:17:09.594759 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 23 01:17:09.594769 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 23 01:17:09.594780 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 23 01:17:09.594791 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 23 01:17:09.594880 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 23 01:17:09.594891 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 23 01:17:09.594902 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 23 01:17:09.594913 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 23 01:17:09.594924 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 23 01:17:09.594935 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 23 01:17:09.594945 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 23 01:17:09.594956 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 23 01:17:09.594969 kernel: loop: module loaded
Jan 23 01:17:09.594980 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 23 01:17:09.594990 kernel: fuse: init (API version 7.41)
Jan 23 01:17:09.595000 kernel: ACPI: bus type drm_connector registered
Jan 23 01:17:09.595012 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 23 01:17:09.595023 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jan 23 01:17:09.595034 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 23 01:17:09.595073 systemd-journald[1199]: Collecting audit messages is disabled.
Jan 23 01:17:09.595101 systemd-journald[1199]: Journal started
Jan 23 01:17:09.595127 systemd-journald[1199]: Runtime Journal (/run/log/journal/bb668c3b1e7142a2ae652acccc983a5b) is 6M, max 48.3M, 42.2M free.
Jan 23 01:17:09.633529 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 23 01:17:09.633607 systemd[1]: Stopped verity-setup.service.
Jan 23 01:17:09.633642 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 23 01:17:07.261820 systemd[1]: Queued start job for default target multi-user.target.
Jan 23 01:17:07.318983 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jan 23 01:17:07.325171 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 23 01:17:07.326817 systemd[1]: systemd-journald.service: Consumed 3.017s CPU time.
Jan 23 01:17:09.669559 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 23 01:17:09.695588 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 23 01:17:09.709163 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 23 01:17:09.727805 systemd[1]: Mounted media.mount - External Media Directory.
Jan 23 01:17:09.746947 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 23 01:17:09.760630 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 23 01:17:09.778077 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 23 01:17:09.797137 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 23 01:17:09.811713 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 23 01:17:09.827174 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 23 01:17:09.827883 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 23 01:17:09.848066 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 23 01:17:09.849057 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 23 01:17:09.862434 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 23 01:17:09.862769 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 23 01:17:09.878080 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 23 01:17:09.880799 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 23 01:17:09.895014 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 23 01:17:09.896002 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 23 01:17:09.911108 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 23 01:17:09.921478 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 23 01:17:09.935047 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 23 01:17:09.950058 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 23 01:17:09.965819 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 23 01:17:09.986529 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jan 23 01:17:10.004017 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 23 01:17:10.039568 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 23 01:17:10.055011 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 23 01:17:10.078677 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 23 01:17:10.088653 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 23 01:17:10.088802 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 23 01:17:10.100951 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jan 23 01:17:10.140759 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 23 01:17:10.158004 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 23 01:17:10.161810 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 23 01:17:10.199578 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 23 01:17:10.222013 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 23 01:17:10.227574 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 23 01:17:10.241018 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 23 01:17:10.253577 systemd-journald[1199]: Time spent on flushing to /var/log/journal/bb668c3b1e7142a2ae652acccc983a5b is 72.689ms for 984 entries.
Jan 23 01:17:10.253577 systemd-journald[1199]: System Journal (/var/log/journal/bb668c3b1e7142a2ae652acccc983a5b) is 8M, max 195.6M, 187.6M free.
Jan 23 01:17:10.406745 systemd-journald[1199]: Received client request to flush runtime journal.
Jan 23 01:17:10.246713 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 23 01:17:10.298782 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 23 01:17:10.318646 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 23 01:17:10.356977 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 23 01:17:10.377925 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 23 01:17:10.396996 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 23 01:17:10.415480 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 23 01:17:10.439996 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 23 01:17:10.477952 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jan 23 01:17:10.513950 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 23 01:17:10.561670 kernel: loop0: detected capacity change from 0 to 128560
Jan 23 01:17:10.592545 systemd-tmpfiles[1235]: ACLs are not supported, ignoring.
Jan 23 01:17:10.592736 systemd-tmpfiles[1235]: ACLs are not supported, ignoring.
Jan 23 01:17:10.602645 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 23 01:17:10.604148 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jan 23 01:17:10.622732 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 23 01:17:10.656613 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 23 01:17:10.722471 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 23 01:17:10.784473 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 23 01:17:10.803874 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 23 01:17:10.821095 kernel: loop1: detected capacity change from 0 to 229808
Jan 23 01:17:10.883963 systemd-tmpfiles[1255]: ACLs are not supported, ignoring.
Jan 23 01:17:10.883992 systemd-tmpfiles[1255]: ACLs are not supported, ignoring.
Jan 23 01:17:10.898114 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 23 01:17:10.928506 kernel: loop2: detected capacity change from 0 to 110984
Jan 23 01:17:11.124467 kernel: loop3: detected capacity change from 0 to 128560
Jan 23 01:17:11.251951 kernel: loop4: detected capacity change from 0 to 229808
Jan 23 01:17:11.426422 kernel: loop5: detected capacity change from 0 to 110984
Jan 23 01:17:11.551653 (sd-merge)[1261]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jan 23 01:17:11.553011 (sd-merge)[1261]: Merged extensions into '/usr'.
Jan 23 01:17:11.570778 systemd[1]: Reload requested from client PID 1234 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 23 01:17:11.570926 systemd[1]: Reloading...
Jan 23 01:17:11.768068 zram_generator::config[1287]: No configuration found.
Jan 23 01:17:12.200607 ldconfig[1229]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 23 01:17:12.348486 systemd[1]: Reloading finished in 776 ms.
Jan 23 01:17:12.424187 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 23 01:17:12.442804 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 23 01:17:12.488140 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 23 01:17:12.584768 systemd[1]: Starting ensure-sysext.service...
Jan 23 01:17:12.626465 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 23 01:17:12.659954 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 23 01:17:12.750105 systemd[1]: Reload requested from client PID 1325 ('systemctl') (unit ensure-sysext.service)...
Jan 23 01:17:12.750464 systemd[1]: Reloading...
Jan 23 01:17:13.070917 systemd-tmpfiles[1326]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
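The sd-merge lines above show systemd-sysext layering three extension images over /usr. Conceptually this is an overlayfs stack with one lowerdir per extension in front of the base tree; a rough sketch of that idea under assumed paths (the real sd-merge validates extension-release metadata and manages the hierarchies itself, none of which this single mount call does):

    import subprocess

    # Illustrative mount points for the attached extension images; these
    # are not the paths systemd-sysext actually uses internally.
    extensions = [
        "/run/extensions/containerd-flatcar/usr",
        "/run/extensions/docker-flatcar/usr",
        "/run/extensions/kubernetes/usr",
    ]

    # overlayfs resolves lowerdir entries left to right, first match wins,
    # so the extensions are listed ahead of the base /usr.
    lowerdir = ":".join(extensions + ["/usr"])
    subprocess.run(
        ["mount", "-t", "overlay", "overlay", "-o", f"lowerdir={lowerdir}", "/usr"],
        check=True,
    )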
Jan 23 01:17:13.071697 systemd-tmpfiles[1326]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Jan 23 01:17:13.072193 systemd-tmpfiles[1326]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 23 01:17:13.073102 systemd-tmpfiles[1326]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 23 01:17:13.098646 systemd-tmpfiles[1326]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 23 01:17:13.099124 systemd-tmpfiles[1326]: ACLs are not supported, ignoring.
Jan 23 01:17:13.099589 systemd-tmpfiles[1326]: ACLs are not supported, ignoring.
Jan 23 01:17:13.109553 systemd-tmpfiles[1326]: Detected autofs mount point /boot during canonicalization of boot.
Jan 23 01:17:13.109657 systemd-tmpfiles[1326]: Skipping /boot
Jan 23 01:17:13.124497 zram_generator::config[1353]: No configuration found.
Jan 23 01:17:13.134470 systemd-tmpfiles[1326]: Detected autofs mount point /boot during canonicalization of boot.
Jan 23 01:17:13.134492 systemd-tmpfiles[1326]: Skipping /boot
Jan 23 01:17:13.165805 systemd-udevd[1327]: Using default interface naming scheme 'v255'.
Jan 23 01:17:13.741662 systemd[1]: Reloading finished in 990 ms.
Jan 23 01:17:13.767044 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 23 01:17:13.820938 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 23 01:17:13.891640 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Jan 23 01:17:13.909403 kernel: mousedev: PS/2 mouse device common for all mice
Jan 23 01:17:13.909498 kernel: ACPI: button: Power Button [PWRF]
Jan 23 01:17:13.912767 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jan 23 01:17:13.925869 systemd[1]: Finished ensure-sysext.service.
Jan 23 01:17:13.973847 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Jan 23 01:17:13.985156 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 23 01:17:13.994957 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jan 23 01:17:13.991603 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 23 01:17:14.029145 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 23 01:17:14.042770 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 23 01:17:14.047863 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 23 01:17:14.083808 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 23 01:17:14.099550 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 23 01:17:14.118800 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 23 01:17:14.145926 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 23 01:17:14.146090 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 23 01:17:14.150509 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 23 01:17:14.188697 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 23 01:17:14.212022 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 23 01:17:14.238980 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 23 01:17:14.257927 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 23 01:17:14.283146 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 23 01:17:14.286062 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 23 01:17:14.288741 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 23 01:17:14.304488 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 23 01:17:14.305676 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 23 01:17:14.324788 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 23 01:17:14.325182 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 23 01:17:14.358780 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 23 01:17:14.389741 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 23 01:17:14.391531 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 23 01:17:14.437663 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 23 01:17:14.455784 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 23 01:17:14.456153 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 23 01:17:14.461636 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 23 01:17:14.492952 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 23 01:17:14.505644 augenrules[1479]: No rules
Jan 23 01:17:14.524042 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 23 01:17:14.524997 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 23 01:17:14.540874 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 23 01:17:14.566011 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 23 01:17:14.597046 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 23 01:17:14.645534 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 23 01:17:14.699054 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 01:17:14.711831 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 23 01:17:14.720657 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 23 01:17:14.837018 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 23 01:17:15.187706 systemd-networkd[1454]: lo: Link UP
Jan 23 01:17:15.187724 systemd-networkd[1454]: lo: Gained carrier
Jan 23 01:17:15.190859 systemd-networkd[1454]: Enumeration completed
Jan 23 01:17:15.191873 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 23 01:17:15.195674 systemd-networkd[1454]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 23 01:17:15.195777 systemd-networkd[1454]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 23 01:17:15.199508 systemd-networkd[1454]: eth0: Link UP
Jan 23 01:17:15.199730 systemd-networkd[1454]: eth0: Gained carrier
Jan 23 01:17:15.199756 systemd-networkd[1454]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 23 01:17:15.227600 systemd-networkd[1454]: eth0: DHCPv4 address 10.0.0.71/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 23 01:17:16.017860 systemd-timesyncd[1459]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jan 23 01:17:16.018743 systemd-timesyncd[1459]: Initial clock synchronization to Fri 2026-01-23 01:17:16.015529 UTC.
Jan 23 01:17:16.089898 systemd-resolved[1456]: Positive Trust Anchors:
Jan 23 01:17:16.090477 systemd-resolved[1456]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 23 01:17:16.090516 systemd-resolved[1456]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 23 01:17:16.125465 systemd-resolved[1456]: Defaulting to hostname 'linux'.
Jan 23 01:17:16.223429 kernel: kvm_amd: TSC scaling supported
Jan 23 01:17:16.223538 kernel: kvm_amd: Nested Virtualization enabled
Jan 23 01:17:16.223589 kernel: kvm_amd: Nested Paging enabled
Jan 23 01:17:16.223606 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Jan 23 01:17:16.223626 kernel: kvm_amd: PMU virtualization is disabled
Jan 23 01:17:16.687654 kernel: EDAC MC: Ver: 3.0.0
Jan 23 01:17:17.059710 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jan 23 01:17:17.072650 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 23 01:17:17.086848 systemd[1]: Reached target network.target - Network.
Jan 23 01:17:17.098771 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 23 01:17:17.100886 systemd[1]: Reached target time-set.target - System Time Set.
Jan 23 01:17:17.116826 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Jan 23 01:17:17.136565 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 23 01:17:17.170788 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 01:17:17.186303 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 23 01:17:17.204721 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
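The resolved trust-anchor line above carries the root zone's DS record (key tag 20326, the 2017 root KSK) in DNS presentation format. A small stdlib-only sketch of splitting that string into the RFC 4034 DS fields; the parsing is an assumption about this one line's layout:

    from typing import NamedTuple

    class DSRecord(NamedTuple):
        key_tag: int      # identifies the DNSKEY this DS refers to
        algorithm: int    # 8 = RSA/SHA-256
        digest_type: int  # 2 = SHA-256
        digest: str       # hash over owner name + DNSKEY RDATA

    def parse_ds(text: str) -> DSRecord:
        # Presentation format: "<key-tag> <algorithm> <digest-type> <digest>"
        tag, alg, dtype, digest = text.split(maxsplit=3)
        return DSRecord(int(tag), int(alg), int(dtype), digest)

    anchor = parse_ds(
        "20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d"
    )
    print(anchor.key_tag, anchor.algorithm, anchor.digest_type)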
Jan 23 01:17:17.219678 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 23 01:17:17.233741 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Jan 23 01:17:17.248443 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 23 01:17:17.261833 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 23 01:17:17.277424 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 23 01:17:17.294463 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 23 01:17:17.294603 systemd[1]: Reached target paths.target - Path Units.
Jan 23 01:17:17.306447 systemd[1]: Reached target timers.target - Timer Units.
Jan 23 01:17:17.323505 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 23 01:17:17.342788 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 23 01:17:17.360858 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Jan 23 01:17:17.381753 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Jan 23 01:17:17.387192 systemd-networkd[1454]: eth0: Gained IPv6LL
Jan 23 01:17:17.396835 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Jan 23 01:17:17.437531 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 23 01:17:17.452495 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Jan 23 01:17:17.474328 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Jan 23 01:17:17.499232 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 23 01:17:17.522409 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 23 01:17:17.543466 systemd[1]: Reached target network-online.target - Network is Online.
Jan 23 01:17:17.557546 systemd[1]: Reached target sockets.target - Socket Units.
Jan 23 01:17:17.570903 systemd[1]: Reached target basic.target - Basic System.
Jan 23 01:17:17.583773 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 23 01:17:17.583909 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 23 01:17:17.588599 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 23 01:17:17.625272 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Jan 23 01:17:17.664931 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 23 01:17:17.684282 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 23 01:17:17.705419 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 23 01:17:17.734219 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 23 01:17:17.752436 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 23 01:17:17.754816 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Jan 23 01:17:17.770612 jq[1519]: false
Jan 23 01:17:17.775801 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 23 01:17:17.796242 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 23 01:17:17.816278 extend-filesystems[1520]: Found /dev/vda6
Jan 23 01:17:17.824144 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jan 23 01:17:17.848912 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jan 23 01:17:17.869419 extend-filesystems[1520]: Found /dev/vda9
Jan 23 01:17:17.869419 extend-filesystems[1520]: Checking size of /dev/vda9
Jan 23 01:17:17.939713 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Jan 23 01:17:17.882403 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 23 01:17:17.886188 oslogin_cache_refresh[1521]: Refreshing passwd entry cache
Jan 23 01:17:17.940542 extend-filesystems[1520]: Resized partition /dev/vda9
Jan 23 01:17:17.954427 google_oslogin_nss_cache[1521]: oslogin_cache_refresh[1521]: Refreshing passwd entry cache
Jan 23 01:17:17.954427 google_oslogin_nss_cache[1521]: oslogin_cache_refresh[1521]: Failure getting users, quitting
Jan 23 01:17:17.954427 google_oslogin_nss_cache[1521]: oslogin_cache_refresh[1521]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Jan 23 01:17:17.954427 google_oslogin_nss_cache[1521]: oslogin_cache_refresh[1521]: Refreshing group entry cache
Jan 23 01:17:17.907248 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 23 01:17:17.954911 extend-filesystems[1545]: resize2fs 1.47.3 (8-Jul-2025)
Jan 23 01:17:17.942162 oslogin_cache_refresh[1521]: Failure getting users, quitting
Jan 23 01:17:17.970729 google_oslogin_nss_cache[1521]: oslogin_cache_refresh[1521]: Failure getting groups, quitting
Jan 23 01:17:17.970729 google_oslogin_nss_cache[1521]: oslogin_cache_refresh[1521]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Jan 23 01:17:17.942204 oslogin_cache_refresh[1521]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Jan 23 01:17:17.942294 oslogin_cache_refresh[1521]: Refreshing group entry cache
Jan 23 01:17:17.962912 oslogin_cache_refresh[1521]: Failure getting groups, quitting
Jan 23 01:17:17.962939 oslogin_cache_refresh[1521]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Jan 23 01:17:17.981301 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 23 01:17:18.002197 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 23 01:17:18.003718 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 23 01:17:18.005293 systemd[1]: Starting update-engine.service - Update Engine...
Jan 23 01:17:18.023439 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 23 01:17:18.056169 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 23 01:17:18.081545 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 23 01:17:18.205230 jq[1554]: true Jan 23 01:17:18.205369 update_engine[1551]: I20260123 01:17:18.109741 1551 main.cc:92] Flatcar Update Engine starting Jan 23 01:17:18.084912 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 23 01:17:18.085658 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jan 23 01:17:18.086191 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jan 23 01:17:18.103794 systemd[1]: motdgen.service: Deactivated successfully. Jan 23 01:17:18.104459 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 23 01:17:18.117935 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 23 01:17:18.143930 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 23 01:17:18.145323 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 23 01:17:18.202413 (ntainerd)[1563]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 23 01:17:18.218402 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 23 01:17:18.219336 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 23 01:17:18.231640 jq[1562]: true Jan 23 01:17:18.271200 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 23 01:17:18.305844 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 23 01:17:18.308437 tar[1560]: linux-amd64/LICENSE Jan 23 01:17:18.310837 systemd-logind[1549]: Watching system buttons on /dev/input/event2 (Power Button) Jan 23 01:17:18.311168 systemd-logind[1549]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 23 01:17:18.311843 extend-filesystems[1545]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 23 01:17:18.311843 extend-filesystems[1545]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 23 01:17:18.311843 extend-filesystems[1545]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jan 23 01:17:18.351746 extend-filesystems[1520]: Resized filesystem in /dev/vda9 Jan 23 01:17:18.316466 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 23 01:17:18.351888 tar[1560]: linux-amd64/helm Jan 23 01:17:18.316492 systemd-logind[1549]: New seat seat0. Jan 23 01:17:18.317190 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 23 01:17:18.335685 systemd[1]: Started systemd-logind.service - User Login Management. Jan 23 01:17:18.386840 dbus-daemon[1517]: [system] SELinux support is enabled Jan 23 01:17:18.411420 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 23 01:17:18.429853 update_engine[1551]: I20260123 01:17:18.428802 1551 update_check_scheduler.cc:74] Next update check in 11m0s Jan 23 01:17:18.431895 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 23 01:17:18.432248 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
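The extend-filesystems lines above show an online ext4 grow: resize2fs takes the mounted root from 553472 to 1864699 4 KiB blocks (roughly 2.1 GiB to 7.1 GiB). A sketch of the same operation by hand, assuming /dev/vda9 is the mounted ROOT partition as in this log:

    sudo resize2fs /dev/vda9    # with no size argument, grows the mounted ext4 fs to fill its partition
    df -h /                     # confirm the new size: 1864699 * 4096 B ≈ 7.1 GiB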
Jan 23 01:17:18.435673 dbus-daemon[1517]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 23 01:17:18.451193 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 23 01:17:18.451247 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 23 01:17:18.470843 systemd[1]: Started update-engine.service - Update Engine. Jan 23 01:17:18.503937 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 23 01:17:18.528602 bash[1596]: Updated "/home/core/.ssh/authorized_keys" Jan 23 01:17:18.531297 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 23 01:17:18.560365 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 23 01:17:18.606358 sshd_keygen[1556]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 23 01:17:18.647178 locksmithd[1598]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 23 01:17:18.693772 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 23 01:17:18.718469 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 23 01:17:18.761588 systemd[1]: issuegen.service: Deactivated successfully. Jan 23 01:17:18.761927 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 23 01:17:18.782884 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 23 01:17:18.803751 containerd[1563]: time="2026-01-23T01:17:18Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 23 01:17:18.821593 containerd[1563]: time="2026-01-23T01:17:18.809683098Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Jan 23 01:17:18.840370 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. 
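sshd-keygen.service above generates the RSA, ECDSA, and ED25519 host keys on first boot. The plain-CLI equivalent is ssh-keygen's -A mode, which creates any missing default host keys under /etc/ssh; a sketch:

    sudo ssh-keygen -A                    # generate missing host keys of all default types
    ls -l /etc/ssh/ssh_host_*_key.pub     # rsa, ecdsa, ed25519 public halves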
Jan 23 01:17:18.843255 containerd[1563]: time="2026-01-23T01:17:18.842189928Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t=2.429767ms Jan 23 01:17:18.843255 containerd[1563]: time="2026-01-23T01:17:18.842340208Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 23 01:17:18.843255 containerd[1563]: time="2026-01-23T01:17:18.842373991Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 23 01:17:18.843255 containerd[1563]: time="2026-01-23T01:17:18.842599002Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 23 01:17:18.843255 containerd[1563]: time="2026-01-23T01:17:18.842623728Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 23 01:17:18.843255 containerd[1563]: time="2026-01-23T01:17:18.842657752Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 23 01:17:18.843255 containerd[1563]: time="2026-01-23T01:17:18.842730617Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 23 01:17:18.843255 containerd[1563]: time="2026-01-23T01:17:18.842748771Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 23 01:17:18.843421 containerd[1563]: time="2026-01-23T01:17:18.843256129Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 23 01:17:18.843421 containerd[1563]: time="2026-01-23T01:17:18.843279022Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 23 01:17:18.843421 containerd[1563]: time="2026-01-23T01:17:18.843302626Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 23 01:17:18.843421 containerd[1563]: time="2026-01-23T01:17:18.843311913Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 23 01:17:18.843421 containerd[1563]: time="2026-01-23T01:17:18.843411710Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 23 01:17:18.844655 containerd[1563]: time="2026-01-23T01:17:18.843681233Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 23 01:17:18.844655 containerd[1563]: time="2026-01-23T01:17:18.843746725Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 23 01:17:18.844655 containerd[1563]: time="2026-01-23T01:17:18.843766172Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 23 01:17:18.844655 containerd[1563]: time="2026-01-23T01:17:18.843799543Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 23 01:17:18.845541 containerd[1563]: 
time="2026-01-23T01:17:18.845409099Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 23 01:17:18.848419 containerd[1563]: time="2026-01-23T01:17:18.845824875Z" level=info msg="metadata content store policy set" policy=shared Jan 23 01:17:18.864464 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 23 01:17:18.866842 containerd[1563]: time="2026-01-23T01:17:18.866404720Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 23 01:17:18.867327 containerd[1563]: time="2026-01-23T01:17:18.867295203Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 23 01:17:18.868487 containerd[1563]: time="2026-01-23T01:17:18.868463515Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 23 01:17:18.868932 containerd[1563]: time="2026-01-23T01:17:18.868908966Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 23 01:17:18.869281 containerd[1563]: time="2026-01-23T01:17:18.869247949Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 23 01:17:18.869370 containerd[1563]: time="2026-01-23T01:17:18.869350401Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 23 01:17:18.869449 containerd[1563]: time="2026-01-23T01:17:18.869430841Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 23 01:17:18.870280 containerd[1563]: time="2026-01-23T01:17:18.869508466Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 23 01:17:18.873388 containerd[1563]: time="2026-01-23T01:17:18.870353604Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 23 01:17:18.873388 containerd[1563]: time="2026-01-23T01:17:18.870378921Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 23 01:17:18.873388 containerd[1563]: time="2026-01-23T01:17:18.870396474Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 23 01:17:18.873388 containerd[1563]: time="2026-01-23T01:17:18.870415399Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 23 01:17:18.873388 containerd[1563]: time="2026-01-23T01:17:18.870627886Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 23 01:17:18.873388 containerd[1563]: time="2026-01-23T01:17:18.872361443Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 23 01:17:18.873388 containerd[1563]: time="2026-01-23T01:17:18.872391800Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 23 01:17:18.873388 containerd[1563]: time="2026-01-23T01:17:18.872410464Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 23 01:17:18.873388 containerd[1563]: time="2026-01-23T01:17:18.872427567Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 23 01:17:18.873388 containerd[1563]: time="2026-01-23T01:17:18.872443717Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images 
type=io.containerd.grpc.v1 Jan 23 01:17:18.873388 containerd[1563]: time="2026-01-23T01:17:18.872461941Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 23 01:17:18.873388 containerd[1563]: time="2026-01-23T01:17:18.872477019Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 23 01:17:18.873388 containerd[1563]: time="2026-01-23T01:17:18.872493640Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 23 01:17:18.873388 containerd[1563]: time="2026-01-23T01:17:18.872509540Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 23 01:17:18.873388 containerd[1563]: time="2026-01-23T01:17:18.872523586Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 23 01:17:18.873785 containerd[1563]: time="2026-01-23T01:17:18.873266483Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 23 01:17:18.873785 containerd[1563]: time="2026-01-23T01:17:18.873291340Z" level=info msg="Start snapshots syncer" Jan 23 01:17:18.873785 containerd[1563]: time="2026-01-23T01:17:18.873436150Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 23 01:17:18.877267 containerd[1563]: time="2026-01-23T01:17:18.876334989Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 23 01:17:18.877267 containerd[1563]: time="2026-01-23T01:17:18.876756776Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 23 01:17:18.880782 containerd[1563]: time="2026-01-23T01:17:18.880236364Z" level=info msg="loading plugin" 
id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 23 01:17:18.880782 containerd[1563]: time="2026-01-23T01:17:18.880572211Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 23 01:17:18.880782 containerd[1563]: time="2026-01-23T01:17:18.880608278Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 23 01:17:18.880782 containerd[1563]: time="2026-01-23T01:17:18.880629979Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 23 01:17:18.880782 containerd[1563]: time="2026-01-23T01:17:18.880644356Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 23 01:17:18.880782 containerd[1563]: time="2026-01-23T01:17:18.880661197Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 23 01:17:18.880782 containerd[1563]: time="2026-01-23T01:17:18.880676817Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 23 01:17:18.880782 containerd[1563]: time="2026-01-23T01:17:18.880694049Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 23 01:17:18.880782 containerd[1563]: time="2026-01-23T01:17:18.880729023Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 23 01:17:18.880782 containerd[1563]: time="2026-01-23T01:17:18.880744663Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 23 01:17:18.880782 containerd[1563]: time="2026-01-23T01:17:18.880760823Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 23 01:17:18.881395 containerd[1563]: time="2026-01-23T01:17:18.880802251Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 23 01:17:18.881395 containerd[1563]: time="2026-01-23T01:17:18.880825344Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 23 01:17:18.881395 containerd[1563]: time="2026-01-23T01:17:18.880840702Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 23 01:17:18.881395 containerd[1563]: time="2026-01-23T01:17:18.880853566Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 23 01:17:18.881395 containerd[1563]: time="2026-01-23T01:17:18.880864476Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 23 01:17:18.881395 containerd[1563]: time="2026-01-23T01:17:18.880877431Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 23 01:17:18.881395 containerd[1563]: time="2026-01-23T01:17:18.880902147Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 23 01:17:18.881395 containerd[1563]: time="2026-01-23T01:17:18.880923667Z" level=info msg="runtime interface created" Jan 23 01:17:18.881395 containerd[1563]: time="2026-01-23T01:17:18.880939667Z" level=info msg="created NRI interface" Jan 23 01:17:18.881610 containerd[1563]: time="2026-01-23T01:17:18.881563732Z" 
level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 23 01:17:18.881610 containerd[1563]: time="2026-01-23T01:17:18.881590803Z" level=info msg="Connect containerd service" Jan 23 01:17:18.881659 containerd[1563]: time="2026-01-23T01:17:18.881617513Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 23 01:17:18.883559 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 23 01:17:18.886496 containerd[1563]: time="2026-01-23T01:17:18.885919947Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 23 01:17:18.897377 systemd[1]: Reached target getty.target - Login Prompts. Jan 23 01:17:19.088736 containerd[1563]: time="2026-01-23T01:17:19.088682580Z" level=info msg="Start subscribing containerd event" Jan 23 01:17:19.088923 containerd[1563]: time="2026-01-23T01:17:19.088896309Z" level=info msg="Start recovering state" Jan 23 01:17:19.089241 containerd[1563]: time="2026-01-23T01:17:19.089226736Z" level=info msg="Start event monitor" Jan 23 01:17:19.089295 containerd[1563]: time="2026-01-23T01:17:19.089284704Z" level=info msg="Start cni network conf syncer for default" Jan 23 01:17:19.089336 containerd[1563]: time="2026-01-23T01:17:19.089326533Z" level=info msg="Start streaming server" Jan 23 01:17:19.089382 containerd[1563]: time="2026-01-23T01:17:19.089367058Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 23 01:17:19.089438 containerd[1563]: time="2026-01-23T01:17:19.089424315Z" level=info msg="runtime interface starting up..." Jan 23 01:17:19.089520 containerd[1563]: time="2026-01-23T01:17:19.089504444Z" level=info msg="starting plugins..." Jan 23 01:17:19.089591 containerd[1563]: time="2026-01-23T01:17:19.089578132Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 23 01:17:19.098522 containerd[1563]: time="2026-01-23T01:17:19.098492035Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 23 01:17:19.098649 containerd[1563]: time="2026-01-23T01:17:19.098634461Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 23 01:17:19.099357 containerd[1563]: time="2026-01-23T01:17:19.099336452Z" level=info msg="containerd successfully booted in 0.296870s" Jan 23 01:17:19.099465 systemd[1]: Started containerd.service - containerd container runtime. Jan 23 01:17:19.592830 tar[1560]: linux-amd64/README.md Jan 23 01:17:19.753271 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 23 01:17:23.138573 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 01:17:23.163701 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 23 01:17:23.223400 (kubelet)[1652]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 01:17:23.249784 systemd[1]: Startup finished in 9.620s (kernel) + 41.482s (initrd) + 18.036s (userspace) = 1min 9.139s. Jan 23 01:17:27.249531 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 23 01:17:27.253432 systemd[1]: Started sshd@0-10.0.0.71:22-10.0.0.1:34864.service - OpenSSH per-connection server daemon (10.0.0.1:34864). 
Jan 23 01:17:28.153295 sshd[1664]: Accepted publickey for core from 10.0.0.1 port 34864 ssh2: RSA SHA256:DbK6l1/8QoFpoD2mnerYulPPe3j8Bedze0KTmxI1z+4 Jan 23 01:17:28.171697 sshd-session[1664]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:17:28.222788 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 23 01:17:28.232941 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 23 01:17:28.258190 systemd-logind[1549]: New session 1 of user core. Jan 23 01:17:28.535514 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 23 01:17:28.549598 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 23 01:17:28.654470 (systemd)[1670]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 23 01:17:28.702793 systemd-logind[1549]: New session c1 of user core. Jan 23 01:17:30.566290 systemd[1670]: Queued start job for default target default.target. Jan 23 01:17:30.635692 systemd[1670]: Created slice app.slice - User Application Slice. Jan 23 01:17:30.636253 systemd[1670]: Reached target paths.target - Paths. Jan 23 01:17:30.636492 systemd[1670]: Reached target timers.target - Timers. Jan 23 01:17:30.658543 systemd[1670]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 23 01:17:31.053604 systemd[1670]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 23 01:17:31.059355 systemd[1670]: Reached target sockets.target - Sockets. Jan 23 01:17:31.059471 systemd[1670]: Reached target basic.target - Basic System. Jan 23 01:17:31.059539 systemd[1670]: Reached target default.target - Main User Target. Jan 23 01:17:31.059626 systemd[1670]: Startup finished in 2.127s. Jan 23 01:17:31.059677 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 23 01:17:31.113799 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 23 01:17:31.607318 systemd[1]: Started sshd@1-10.0.0.71:22-10.0.0.1:34908.service - OpenSSH per-connection server daemon (10.0.0.1:34908). Jan 23 01:17:32.120263 kubelet[1652]: E0123 01:17:32.117626 1652 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 01:17:32.169456 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 01:17:32.170201 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 01:17:32.171836 systemd[1]: kubelet.service: Consumed 6.506s CPU time, 271.5M memory peak. Jan 23 01:17:32.552411 sshd[1682]: Accepted publickey for core from 10.0.0.1 port 34908 ssh2: RSA SHA256:DbK6l1/8QoFpoD2mnerYulPPe3j8Bedze0KTmxI1z+4 Jan 23 01:17:32.568928 sshd-session[1682]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:17:32.653792 systemd-logind[1549]: New session 2 of user core. Jan 23 01:17:32.665770 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 23 01:17:32.941206 sshd[1686]: Connection closed by 10.0.0.1 port 34908 Jan 23 01:17:32.944541 sshd-session[1682]: pam_unix(sshd:session): session closed for user core Jan 23 01:17:32.973881 systemd[1]: Started sshd@2-10.0.0.71:22-10.0.0.1:32822.service - OpenSSH per-connection server daemon (10.0.0.1:32822). 
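The kubelet failure above (open /var/lib/kubelet/config.yaml: no such file or directory) is the normal pre-bootstrap state on a kubeadm-style node: that file is only written when the node is initialized or joined. Two ways out, sketched with placeholder values; the token and hash are hypothetical, and the API endpoint is taken from the connection attempts later in this log:

    # either bootstrap the node, which writes the real config.yaml ...
    sudo kubeadm join 10.0.0.71:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>
    # ... or drop a minimal KubeletConfiguration by hand (illustrative, far smaller
    # than what kubeadm generates):
    cat <<'EOF' | sudo tee /var/lib/kubelet/config.yaml
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    EOF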
Jan 23 01:17:32.989826 systemd[1]: sshd@1-10.0.0.71:22-10.0.0.1:34908.service: Deactivated successfully. Jan 23 01:17:33.001635 systemd[1]: session-2.scope: Deactivated successfully. Jan 23 01:17:33.006847 systemd-logind[1549]: Session 2 logged out. Waiting for processes to exit. Jan 23 01:17:33.024861 systemd-logind[1549]: Removed session 2. Jan 23 01:17:33.243293 sshd[1689]: Accepted publickey for core from 10.0.0.1 port 32822 ssh2: RSA SHA256:DbK6l1/8QoFpoD2mnerYulPPe3j8Bedze0KTmxI1z+4 Jan 23 01:17:33.252765 sshd-session[1689]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:17:33.422330 systemd-logind[1549]: New session 3 of user core. Jan 23 01:17:33.454850 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 23 01:17:33.637357 sshd[1695]: Connection closed by 10.0.0.1 port 32822 Jan 23 01:17:33.633651 sshd-session[1689]: pam_unix(sshd:session): session closed for user core Jan 23 01:17:33.658199 systemd[1]: sshd@2-10.0.0.71:22-10.0.0.1:32822.service: Deactivated successfully. Jan 23 01:17:33.670450 systemd[1]: session-3.scope: Deactivated successfully. Jan 23 01:17:33.672736 systemd-logind[1549]: Session 3 logged out. Waiting for processes to exit. Jan 23 01:17:33.695601 systemd[1]: Started sshd@3-10.0.0.71:22-10.0.0.1:32826.service - OpenSSH per-connection server daemon (10.0.0.1:32826). Jan 23 01:17:33.707521 systemd-logind[1549]: Removed session 3. Jan 23 01:17:33.905826 sshd[1701]: Accepted publickey for core from 10.0.0.1 port 32826 ssh2: RSA SHA256:DbK6l1/8QoFpoD2mnerYulPPe3j8Bedze0KTmxI1z+4 Jan 23 01:17:33.914749 sshd-session[1701]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:17:34.003523 systemd-logind[1549]: New session 4 of user core. Jan 23 01:17:34.025657 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 23 01:17:34.158577 sshd[1704]: Connection closed by 10.0.0.1 port 32826 Jan 23 01:17:34.160402 sshd-session[1701]: pam_unix(sshd:session): session closed for user core Jan 23 01:17:34.177458 systemd[1]: sshd@3-10.0.0.71:22-10.0.0.1:32826.service: Deactivated successfully. Jan 23 01:17:34.185396 systemd[1]: session-4.scope: Deactivated successfully. Jan 23 01:17:34.189755 systemd-logind[1549]: Session 4 logged out. Waiting for processes to exit. Jan 23 01:17:34.199368 systemd[1]: Started sshd@4-10.0.0.71:22-10.0.0.1:32840.service - OpenSSH per-connection server daemon (10.0.0.1:32840). Jan 23 01:17:34.203575 systemd-logind[1549]: Removed session 4. Jan 23 01:17:34.352675 sshd[1710]: Accepted publickey for core from 10.0.0.1 port 32840 ssh2: RSA SHA256:DbK6l1/8QoFpoD2mnerYulPPe3j8Bedze0KTmxI1z+4 Jan 23 01:17:34.358472 sshd-session[1710]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:17:34.382812 systemd-logind[1549]: New session 5 of user core. Jan 23 01:17:34.408830 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 23 01:17:34.562523 sudo[1714]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 23 01:17:34.563287 sudo[1714]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 01:17:34.638839 sudo[1714]: pam_unix(sudo:session): session closed for user root Jan 23 01:17:34.644567 sshd[1713]: Connection closed by 10.0.0.1 port 32840 Jan 23 01:17:34.645190 sshd-session[1710]: pam_unix(sshd:session): session closed for user core Jan 23 01:17:34.688724 systemd[1]: sshd@4-10.0.0.71:22-10.0.0.1:32840.service: Deactivated successfully. 
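The sudo entry above runs setenforce 1, switching SELinux to enforcing for the running system only; the change does not persist across a reboot unless the policy configuration also says so:

    sudo setenforce 1    # runtime toggle: 1 = enforcing, 0 = permissive
    getenforce           # print the current mode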
Jan 23 01:17:34.698648 systemd[1]: session-5.scope: Deactivated successfully. Jan 23 01:17:34.704391 systemd-logind[1549]: Session 5 logged out. Waiting for processes to exit. Jan 23 01:17:34.725332 systemd[1]: Started sshd@5-10.0.0.71:22-10.0.0.1:32842.service - OpenSSH per-connection server daemon (10.0.0.1:32842). Jan 23 01:17:34.733498 systemd-logind[1549]: Removed session 5. Jan 23 01:17:34.888803 sshd[1720]: Accepted publickey for core from 10.0.0.1 port 32842 ssh2: RSA SHA256:DbK6l1/8QoFpoD2mnerYulPPe3j8Bedze0KTmxI1z+4 Jan 23 01:17:34.893252 sshd-session[1720]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:17:34.918446 systemd-logind[1549]: New session 6 of user core. Jan 23 01:17:34.938850 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 23 01:17:35.062643 sudo[1725]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 23 01:17:35.072735 sudo[1725]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 01:17:35.150859 sudo[1725]: pam_unix(sudo:session): session closed for user root Jan 23 01:17:35.205280 sudo[1724]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 23 01:17:35.206241 sudo[1724]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 01:17:35.313185 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 23 01:17:35.618547 augenrules[1747]: No rules Jan 23 01:17:35.623706 systemd[1]: audit-rules.service: Deactivated successfully. Jan 23 01:17:35.624596 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 23 01:17:35.633336 sudo[1724]: pam_unix(sudo:session): session closed for user root Jan 23 01:17:35.641966 sshd[1723]: Connection closed by 10.0.0.1 port 32842 Jan 23 01:17:35.642953 sshd-session[1720]: pam_unix(sshd:session): session closed for user core Jan 23 01:17:35.677567 systemd[1]: sshd@5-10.0.0.71:22-10.0.0.1:32842.service: Deactivated successfully. Jan 23 01:17:35.722310 systemd[1]: session-6.scope: Deactivated successfully. Jan 23 01:17:35.728944 systemd-logind[1549]: Session 6 logged out. Waiting for processes to exit. Jan 23 01:17:35.765862 systemd[1]: Started sshd@6-10.0.0.71:22-10.0.0.1:32852.service - OpenSSH per-connection server daemon (10.0.0.1:32852). Jan 23 01:17:35.770776 systemd-logind[1549]: Removed session 6. Jan 23 01:17:36.375696 sshd[1756]: Accepted publickey for core from 10.0.0.1 port 32852 ssh2: RSA SHA256:DbK6l1/8QoFpoD2mnerYulPPe3j8Bedze0KTmxI1z+4 Jan 23 01:17:36.405547 sshd-session[1756]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:17:36.615287 systemd-logind[1549]: New session 7 of user core. Jan 23 01:17:36.661711 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 23 01:17:37.127839 sudo[1760]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 23 01:17:37.134344 sudo[1760]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 01:17:42.273925 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 23 01:17:42.286603 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 01:17:44.306524 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
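Earlier in this stretch the session deletes both files under /etc/audit/rules.d and restarts audit-rules.service; the augenrules "No rules" message above is the direct result. augenrules is what the service wraps, so the same state can be reproduced or inspected by hand:

    sudo augenrules --load    # merge /etc/audit/rules.d/*.rules and load them into the kernel
    sudo auditctl -l          # list loaded rules; prints "No rules" after the deletion above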
Jan 23 01:17:44.382619 (kubelet)[1788]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 01:17:45.158632 kubelet[1788]: E0123 01:17:45.156960 1788 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 01:17:45.192534 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 01:17:45.193504 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 01:17:45.204477 systemd[1]: kubelet.service: Consumed 1.692s CPU time, 109M memory peak. Jan 23 01:17:46.341861 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 23 01:17:46.383538 (dockerd)[1798]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 23 01:17:47.323405 dockerd[1798]: time="2026-01-23T01:17:47.321535100Z" level=info msg="Starting up" Jan 23 01:17:47.326700 dockerd[1798]: time="2026-01-23T01:17:47.326575773Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 23 01:17:47.398791 dockerd[1798]: time="2026-01-23T01:17:47.398589979Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 23 01:17:47.609615 dockerd[1798]: time="2026-01-23T01:17:47.607901694Z" level=info msg="Loading containers: start." Jan 23 01:17:47.670876 kernel: Initializing XFRM netlink socket Jan 23 01:17:57.748326 kernel: clocksource: Long readout interval, skipping watchdog check: cs_nsec: 6579540692 wd_nsec: 6579538252 Jan 23 01:17:57.828583 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 23 01:17:57.952396 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 01:18:00.638798 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 01:18:00.669600 (kubelet)[1961]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 01:18:00.871287 systemd-networkd[1454]: docker0: Link UP Jan 23 01:18:00.915276 dockerd[1798]: time="2026-01-23T01:18:00.907124912Z" level=info msg="Loading containers: done." 
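By this point dockerd has created its containerd client, initialized the XFRM netlink socket, and brought up the docker0 bridge. Once docker.service finishes starting, the daemon's view can be checked from the CLI; the expected values here are read off the log lines that follow, not assumed:

    docker info --format '{{.Driver}} {{.ServerVersion}}'                       # overlay2 28.0.4
    docker network inspect bridge --format '{{(index .IPAM.Config 0).Subnet}}'  # docker0 subnet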
Jan 23 01:18:01.229812 dockerd[1798]: time="2026-01-23T01:18:01.228657208Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 23 01:18:01.231464 dockerd[1798]: time="2026-01-23T01:18:01.230507136Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 23 01:18:01.231464 dockerd[1798]: time="2026-01-23T01:18:01.230647985Z" level=info msg="Initializing buildkit" Jan 23 01:18:01.436404 dockerd[1798]: time="2026-01-23T01:18:01.435569890Z" level=info msg="Completed buildkit initialization" Jan 23 01:18:01.510382 dockerd[1798]: time="2026-01-23T01:18:01.504919572Z" level=info msg="Daemon has completed initialization" Jan 23 01:18:01.510382 dockerd[1798]: time="2026-01-23T01:18:01.505432240Z" level=info msg="API listen on /run/docker.sock" Jan 23 01:18:01.512491 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 23 01:18:01.913515 kubelet[1961]: E0123 01:18:01.912414 1961 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 01:18:01.927232 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 01:18:01.927541 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 01:18:01.929246 systemd[1]: kubelet.service: Consumed 2.496s CPU time, 108.8M memory peak. Jan 23 01:18:04.133455 update_engine[1551]: I20260123 01:18:04.129750 1551 update_attempter.cc:509] Updating boot flags... Jan 23 01:18:10.774398 containerd[1563]: time="2026-01-23T01:18:10.773464647Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\"" Jan 23 01:18:12.026521 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 23 01:18:12.035228 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 01:18:13.027652 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount765003513.mount: Deactivated successfully. Jan 23 01:18:14.666281 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 01:18:14.767778 (kubelet)[2068]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 01:18:15.739680 kubelet[2068]: E0123 01:18:15.737896 2068 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 01:18:15.747609 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 01:18:15.748287 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 01:18:15.749362 systemd[1]: kubelet.service: Consumed 2.654s CPU time, 108.3M memory peak. Jan 23 01:18:25.873358 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 23 01:18:25.890390 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
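The PullImage lines that follow come from containerd's CRI plugin, driven by a client on the CRI socket. The same pull can be issued by hand with crictl, assuming the default containerd socket from the cri config dumped earlier:

    sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock \
        pull registry.k8s.io/kube-apiserver:v1.33.7
    sudo crictl images | grep kube-apiserver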
Jan 23 01:18:26.226658 containerd[1563]: time="2026-01-23T01:18:26.225159199Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:18:26.231823 containerd[1563]: time="2026-01-23T01:18:26.231785278Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.7: active requests=0, bytes read=30114712" Jan 23 01:18:26.236102 containerd[1563]: time="2026-01-23T01:18:26.235333429Z" level=info msg="ImageCreate event name:\"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:18:26.245174 containerd[1563]: time="2026-01-23T01:18:26.239967943Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:18:26.245672 containerd[1563]: time="2026-01-23T01:18:26.245451913Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.7\" with image id \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\", size \"30111311\" in 15.470152998s" Jan 23 01:18:26.245672 containerd[1563]: time="2026-01-23T01:18:26.245581725Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\" returns image reference \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\"" Jan 23 01:18:26.258209 containerd[1563]: time="2026-01-23T01:18:26.258150342Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\"" Jan 23 01:18:27.322442 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 01:18:27.365328 (kubelet)[2129]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 01:18:27.958869 kubelet[2129]: E0123 01:18:27.957950 2129 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 01:18:27.966439 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 01:18:27.966812 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 01:18:27.972920 systemd[1]: kubelet.service: Consumed 1.197s CPU time, 110.1M memory peak. 
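The pull above reports 30114712 bytes read in 15.470152998s, i.e. roughly 1.9 MB/s, consistent with the generally slow timings in this virtualized boot; a quick check:

    echo 'scale=2; 30114712 / 15.470152998 / 1000000' | bc    # ≈ 1.94 (MB/s)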
Jan 23 01:18:36.617192 containerd[1563]: time="2026-01-23T01:18:36.616616696Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:18:36.621340 containerd[1563]: time="2026-01-23T01:18:36.621089037Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.7: active requests=0, bytes read=26016781" Jan 23 01:18:36.624780 containerd[1563]: time="2026-01-23T01:18:36.624409872Z" level=info msg="ImageCreate event name:\"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:18:36.642733 containerd[1563]: time="2026-01-23T01:18:36.642384095Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:18:36.645307 containerd[1563]: time="2026-01-23T01:18:36.645149014Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.7\" with image id \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\", size \"27673815\" in 10.383000113s" Jan 23 01:18:36.645307 containerd[1563]: time="2026-01-23T01:18:36.645199819Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\" returns image reference \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\"" Jan 23 01:18:36.651369 containerd[1563]: time="2026-01-23T01:18:36.651243229Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\"" Jan 23 01:18:38.037945 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jan 23 01:18:38.055441 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 01:18:39.706315 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 01:18:40.017296 (kubelet)[2154]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 01:18:40.720383 kubelet[2154]: E0123 01:18:40.719648 2154 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 01:18:40.726401 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 01:18:40.726797 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 01:18:40.727752 systemd[1]: kubelet.service: Consumed 1.865s CPU time, 108.3M memory peak. 
Jan 23 01:18:44.249952 containerd[1563]: time="2026-01-23T01:18:44.248911018Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:18:44.283227 containerd[1563]: time="2026-01-23T01:18:44.253331067Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.7: active requests=0, bytes read=20158102" Jan 23 01:18:44.283227 containerd[1563]: time="2026-01-23T01:18:44.262946299Z" level=info msg="ImageCreate event name:\"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:18:44.323134 containerd[1563]: time="2026-01-23T01:18:44.322706676Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:18:44.325868 containerd[1563]: time="2026-01-23T01:18:44.325652904Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.7\" with image id \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\", size \"21815154\" in 7.674366313s" Jan 23 01:18:44.325868 containerd[1563]: time="2026-01-23T01:18:44.325739115Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\" returns image reference \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\"" Jan 23 01:18:44.333274 containerd[1563]: time="2026-01-23T01:18:44.330345702Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\"" Jan 23 01:18:47.034939 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2726852855.mount: Deactivated successfully. 
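Unit names like var-lib-containerd-tmpmounts-containerd\x2dmount2726852855.mount above are systemd's path escaping: "/" becomes "-" and a literal "-" becomes "\x2d". systemd-escape reproduces the mapping:

    systemd-escape -p --suffix=mount /var/lib/containerd/tmpmounts/containerd-mount2726852855
    # -> var-lib-containerd-tmpmounts-containerd\x2dmount2726852855.mount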
Jan 23 01:18:48.784169 containerd[1563]: time="2026-01-23T01:18:48.783915120Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:18:48.787913 containerd[1563]: time="2026-01-23T01:18:48.786850551Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.7: active requests=0, bytes read=31930096" Jan 23 01:18:48.789182 containerd[1563]: time="2026-01-23T01:18:48.789124557Z" level=info msg="ImageCreate event name:\"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:18:48.793714 containerd[1563]: time="2026-01-23T01:18:48.793587307Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:18:48.794221 containerd[1563]: time="2026-01-23T01:18:48.793875512Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.7\" with image id \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\", repo tag \"registry.k8s.io/kube-proxy:v1.33.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\", size \"31929115\" in 4.462306027s" Jan 23 01:18:48.794221 containerd[1563]: time="2026-01-23T01:18:48.793903494Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\" returns image reference \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\"" Jan 23 01:18:48.797269 containerd[1563]: time="2026-01-23T01:18:48.796796277Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Jan 23 01:18:49.526241 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount221487867.mount: Deactivated successfully. Jan 23 01:18:50.771629 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Jan 23 01:18:50.775733 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 01:18:51.227393 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 01:18:51.240656 (kubelet)[2229]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 01:18:51.411817 kubelet[2229]: E0123 01:18:51.411603 2229 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 01:18:51.419668 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 01:18:51.420693 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 01:18:51.421416 systemd[1]: kubelet.service: Consumed 552ms CPU time, 109.9M memory peak. 
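The repeating "Scheduled restart job, restart counter is at N" lines are systemd's Restart= logic relaunching the failing kubelet. The effective settings and the counter can be read without guessing at the unit file:

    systemctl show kubelet -p Restart -p RestartUSec -p NRestarts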
Jan 23 01:18:52.154747 containerd[1563]: time="2026-01-23T01:18:52.153963376Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:18:52.159766 containerd[1563]: time="2026-01-23T01:18:52.157595556Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Jan 23 01:18:52.159766 containerd[1563]: time="2026-01-23T01:18:52.159158014Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:18:52.168407 containerd[1563]: time="2026-01-23T01:18:52.167796271Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:18:52.169778 containerd[1563]: time="2026-01-23T01:18:52.169397029Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 3.372477151s" Jan 23 01:18:52.169778 containerd[1563]: time="2026-01-23T01:18:52.169557498Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Jan 23 01:18:52.174691 containerd[1563]: time="2026-01-23T01:18:52.174177360Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 23 01:18:57.316917 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount575097148.mount: Deactivated successfully. 
Jan 23 01:18:57.349696 containerd[1563]: time="2026-01-23T01:18:57.349288323Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 01:18:57.354666 containerd[1563]: time="2026-01-23T01:18:57.354317613Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jan 23 01:18:57.359705 containerd[1563]: time="2026-01-23T01:18:57.357405862Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 01:18:57.363642 containerd[1563]: time="2026-01-23T01:18:57.363290668Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 01:18:57.364437 containerd[1563]: time="2026-01-23T01:18:57.364184797Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 5.1897732s" Jan 23 01:18:57.364437 containerd[1563]: time="2026-01-23T01:18:57.364301945Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 23 01:18:57.387221 containerd[1563]: time="2026-01-23T01:18:57.385744286Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Jan 23 01:18:58.618645 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1891718946.mount: Deactivated successfully. Jan 23 01:19:01.523348 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Jan 23 01:19:01.532382 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 01:19:03.948192 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 01:19:04.026594 (kubelet)[2304]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 01:19:05.446951 kubelet[2304]: E0123 01:19:05.441583 2304 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 01:19:05.529914 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 01:19:05.531916 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 01:19:05.537170 systemd[1]: kubelet.service: Consumed 3.321s CPU time, 108.6M memory peak. Jan 23 01:19:15.532348 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Jan 23 01:19:15.552709 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 23 01:19:16.738569 containerd[1563]: time="2026-01-23T01:19:16.736168161Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:19:16.741309 containerd[1563]: time="2026-01-23T01:19:16.739177992Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58926227" Jan 23 01:19:16.747391 containerd[1563]: time="2026-01-23T01:19:16.747331262Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:19:16.765144 containerd[1563]: time="2026-01-23T01:19:16.763693080Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:19:16.765669 containerd[1563]: time="2026-01-23T01:19:16.765632472Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 19.379834957s" Jan 23 01:19:16.765781 containerd[1563]: time="2026-01-23T01:19:16.765757135Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Jan 23 01:19:17.197904 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 01:19:17.255707 (kubelet)[2330]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 01:19:17.779717 kubelet[2330]: E0123 01:19:17.778264 2330 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 01:19:17.804578 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 01:19:17.804907 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 01:19:17.806851 systemd[1]: kubelet.service: Consumed 1.276s CPU time, 111.1M memory peak. Jan 23 01:19:25.456273 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 01:19:25.456672 systemd[1]: kubelet.service: Consumed 1.276s CPU time, 111.1M memory peak. Jan 23 01:19:25.464755 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 01:19:25.583923 systemd[1]: Reload requested from client PID 2365 ('systemctl') (unit session-7.scope)... Jan 23 01:19:25.585624 systemd[1]: Reloading... Jan 23 01:19:25.769321 zram_generator::config[2404]: No configuration found. Jan 23 01:19:26.365261 systemd[1]: Reloading finished in 775 ms. Jan 23 01:19:26.677958 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 01:19:26.690421 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 01:19:26.693924 systemd[1]: kubelet.service: Deactivated successfully. Jan 23 01:19:26.694822 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
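The stop/reload/start sequence above is what a daemon-reload plus unit restart looks like from the journal side: the reload was requested by PID 2365 ('systemctl') in session-7, zram_generator ran as part of regenerating units, and kubelet was then stopped and started with the freshly parsed unit. The equivalent commands from that session, sketched:

    sudo systemctl daemon-reload      # "Reload requested from client PID ..." / "Reloading..."
    sudo systemctl restart kubelet    # stop, then start under the re-read unit definition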
Jan 23 01:19:26.695396 systemd[1]: kubelet.service: Consumed 331ms CPU time, 98.4M memory peak. Jan 23 01:19:26.701861 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 01:19:27.373172 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 01:19:27.407763 (kubelet)[2457]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 23 01:19:28.040787 kubelet[2457]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 01:19:28.040787 kubelet[2457]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 23 01:19:28.040787 kubelet[2457]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 01:19:28.045765 kubelet[2457]: I0123 01:19:28.043238 2457 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 23 01:19:28.608394 kubelet[2457]: I0123 01:19:28.606333 2457 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 23 01:19:28.615844 kubelet[2457]: I0123 01:19:28.612168 2457 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 23 01:19:28.615844 kubelet[2457]: I0123 01:19:28.613145 2457 server.go:956] "Client rotation is on, will bootstrap in background" Jan 23 01:19:28.788206 kubelet[2457]: E0123 01:19:28.787731 2457 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.71:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.71:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 23 01:19:28.789727 kubelet[2457]: I0123 01:19:28.789571 2457 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 23 01:19:28.851416 kubelet[2457]: I0123 01:19:28.850950 2457 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 23 01:19:28.911936 kubelet[2457]: I0123 01:19:28.911208 2457 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 23 01:19:28.916839 kubelet[2457]: I0123 01:19:28.916336 2457 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 23 01:19:28.917956 kubelet[2457]: I0123 01:19:28.916673 2457 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 23 01:19:28.921384 kubelet[2457]: I0123 01:19:28.918284 2457 topology_manager.go:138] "Creating topology manager with none policy" Jan 23 01:19:28.921384 kubelet[2457]: I0123 01:19:28.918393 2457 container_manager_linux.go:303] "Creating device plugin manager" Jan 23 01:19:28.921384 kubelet[2457]: I0123 01:19:28.919772 2457 state_mem.go:36] "Initialized new in-memory state store" Jan 23 01:19:28.951639 kubelet[2457]: I0123 01:19:28.950862 2457 kubelet.go:480] "Attempting to sync node with API server" Jan 23 01:19:28.955147 kubelet[2457]: I0123 01:19:28.952401 2457 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 23 01:19:28.955147 kubelet[2457]: I0123 01:19:28.952680 2457 kubelet.go:386] "Adding apiserver pod source" Jan 23 01:19:28.955147 kubelet[2457]: I0123 01:19:28.953251 2457 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 23 01:19:28.982304 kubelet[2457]: E0123 01:19:28.981800 2457 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.71:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.71:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 23 01:19:28.988879 kubelet[2457]: E0123 01:19:28.987811 2457 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.71:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.71:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 23 01:19:29.054546 
kubelet[2457]: I0123 01:19:29.054213 2457 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Jan 23 01:19:29.056865 kubelet[2457]: I0123 01:19:29.056357 2457 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 23 01:19:29.062866 kubelet[2457]: W0123 01:19:29.062644 2457 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 23 01:19:29.152966 kubelet[2457]: I0123 01:19:29.151760 2457 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 23 01:19:29.152966 kubelet[2457]: I0123 01:19:29.152594 2457 server.go:1289] "Started kubelet" Jan 23 01:19:29.171352 kubelet[2457]: I0123 01:19:29.163536 2457 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 23 01:19:29.171352 kubelet[2457]: I0123 01:19:29.166925 2457 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 23 01:19:29.171352 kubelet[2457]: I0123 01:19:29.167431 2457 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 23 01:19:29.178822 kubelet[2457]: I0123 01:19:29.178672 2457 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 23 01:19:29.191132 kubelet[2457]: I0123 01:19:29.188862 2457 server.go:317] "Adding debug handlers to kubelet server" Jan 23 01:19:29.191132 kubelet[2457]: I0123 01:19:29.189160 2457 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 23 01:19:29.203294 kubelet[2457]: I0123 01:19:29.203158 2457 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 23 01:19:29.205523 kubelet[2457]: E0123 01:19:29.204791 2457 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 01:19:29.205640 kubelet[2457]: E0123 01:19:29.203810 2457 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.71:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.71:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188d376cd264ca86 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-23 01:19:29.152289414 +0000 UTC m=+1.727661398,LastTimestamp:2026-01-23 01:19:29.152289414 +0000 UTC m=+1.727661398,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 23 01:19:29.207255 kubelet[2457]: I0123 01:19:29.206902 2457 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 23 01:19:29.223643 kubelet[2457]: E0123 01:19:29.207536 2457 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.71:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.71:6443: connect: connection refused" interval="200ms" Jan 23 01:19:29.223643 kubelet[2457]: I0123 01:19:29.211235 2457 reconciler.go:26] "Reconciler: start to sync state" Jan 23 01:19:29.229740 kubelet[2457]: I0123 
01:19:29.229563 2457 factory.go:223] Registration of the systemd container factory successfully Jan 23 01:19:29.230154 kubelet[2457]: I0123 01:19:29.229836 2457 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 23 01:19:29.230154 kubelet[2457]: E0123 01:19:29.229884 2457 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.71:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.71:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 23 01:19:29.234330 kubelet[2457]: E0123 01:19:29.233806 2457 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 23 01:19:29.238858 kubelet[2457]: I0123 01:19:29.238730 2457 factory.go:223] Registration of the containerd container factory successfully Jan 23 01:19:29.310134 kubelet[2457]: E0123 01:19:29.309571 2457 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 01:19:29.363803 kubelet[2457]: I0123 01:19:29.363559 2457 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 23 01:19:29.363803 kubelet[2457]: I0123 01:19:29.363671 2457 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 23 01:19:29.363803 kubelet[2457]: I0123 01:19:29.363753 2457 state_mem.go:36] "Initialized new in-memory state store" Jan 23 01:19:29.409130 kubelet[2457]: I0123 01:19:29.408783 2457 policy_none.go:49] "None policy: Start" Jan 23 01:19:29.413323 kubelet[2457]: E0123 01:19:29.412877 2457 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 01:19:29.414199 kubelet[2457]: I0123 01:19:29.413781 2457 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 23 01:19:29.414653 kubelet[2457]: I0123 01:19:29.414349 2457 state_mem.go:35] "Initializing new in-memory state store" Jan 23 01:19:29.421941 kubelet[2457]: E0123 01:19:29.420955 2457 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.71:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.71:6443: connect: connection refused" interval="400ms" Jan 23 01:19:29.466653 kubelet[2457]: I0123 01:19:29.465832 2457 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 23 01:19:29.502829 kubelet[2457]: I0123 01:19:29.502344 2457 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jan 23 01:19:29.503278 kubelet[2457]: I0123 01:19:29.502935 2457 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 23 01:19:29.519194 kubelet[2457]: I0123 01:19:29.518602 2457 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
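[Editor's note] The HardEvictionThresholds buried in the single-line container-manager dump a few entries back are easier to read unpacked. The values below are copied from that JSON; the rendering script is mine:

    import json

    thresholds = json.loads('''[
      {"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0}},
      {"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1}},
      {"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05}},
      {"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15}},
      {"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05}}
    ]''')

    for t in thresholds:
        v = t["Value"]
        limit = v["Quantity"] or f'{v["Percentage"]:.0%}'  # quantity, else percent
        print(t["Signal"], "<", limit)   # e.g. memory.available < 100Mi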
Jan 23 01:19:29.519194 kubelet[2457]: I0123 01:19:29.518723 2457 kubelet.go:2436] "Starting kubelet main sync loop" Jan 23 01:19:29.519194 kubelet[2457]: E0123 01:19:29.518795 2457 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 23 01:19:29.525824 kubelet[2457]: E0123 01:19:29.519303 2457 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 01:19:29.557246 kubelet[2457]: E0123 01:19:29.556564 2457 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.71:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.71:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 23 01:19:29.620258 kubelet[2457]: E0123 01:19:29.619873 2457 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 01:19:29.630396 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 23 01:19:29.653243 kubelet[2457]: E0123 01:19:29.620307 2457 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 23 01:19:29.752610 kubelet[2457]: E0123 01:19:29.751527 2457 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 01:19:29.768930 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 23 01:19:29.784768 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 23 01:19:29.824932 kubelet[2457]: E0123 01:19:29.824804 2457 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.71:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.71:6443: connect: connection refused" interval="800ms" Jan 23 01:19:29.838156 kubelet[2457]: E0123 01:19:29.836409 2457 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 23 01:19:29.838262 kubelet[2457]: I0123 01:19:29.838164 2457 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 23 01:19:29.838706 kubelet[2457]: I0123 01:19:29.838422 2457 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 23 01:19:29.840755 kubelet[2457]: I0123 01:19:29.839962 2457 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 23 01:19:29.845811 kubelet[2457]: E0123 01:19:29.845369 2457 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 23 01:19:29.848722 kubelet[2457]: E0123 01:19:29.847138 2457 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 23 01:19:29.890667 systemd[1]: Created slice kubepods-burstable-poda65d16ae0a9d45d13a38460d7150ab03.slice - libcontainer container kubepods-burstable-poda65d16ae0a9d45d13a38460d7150ab03.slice. 
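[Editor's note] Note the "Failed to ensure lease exists, will retry" interval doubling: 200ms, 400ms, then 800ms above, and 1.6s/3.2s/6.4s further down — classic exponential backoff. A sketch of that schedule; the 7s cap is my assumption, since the log never shows an interval beyond 6.4s:

    def backoff(base_s=0.2, cap_s=7.0, attempts=7):
        # Doubling retry interval, capped (cap value assumed).
        interval = base_s
        for _ in range(attempts):
            yield min(interval, cap_s)
            interval *= 2

    print([f"{i:g}s" for i in backoff()])
    # ['0.2s', '0.4s', '0.8s', '1.6s', '3.2s', '6.4s', '7s']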
Jan 23 01:19:29.925751 kubelet[2457]: I0123 01:19:29.925521 2457 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a65d16ae0a9d45d13a38460d7150ab03-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"a65d16ae0a9d45d13a38460d7150ab03\") " pod="kube-system/kube-apiserver-localhost" Jan 23 01:19:29.925905 kubelet[2457]: I0123 01:19:29.925732 2457 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 23 01:19:29.929178 kubelet[2457]: I0123 01:19:29.926636 2457 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 23 01:19:29.929178 kubelet[2457]: I0123 01:19:29.926677 2457 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 23 01:19:29.929178 kubelet[2457]: I0123 01:19:29.926846 2457 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 23 01:19:29.929178 kubelet[2457]: I0123 01:19:29.926877 2457 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6e6cfcfb327385445a9bb0d2bc2fd5d4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6e6cfcfb327385445a9bb0d2bc2fd5d4\") " pod="kube-system/kube-scheduler-localhost" Jan 23 01:19:29.929178 kubelet[2457]: I0123 01:19:29.928312 2457 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a65d16ae0a9d45d13a38460d7150ab03-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"a65d16ae0a9d45d13a38460d7150ab03\") " pod="kube-system/kube-apiserver-localhost" Jan 23 01:19:29.929560 kubelet[2457]: I0123 01:19:29.928347 2457 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a65d16ae0a9d45d13a38460d7150ab03-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"a65d16ae0a9d45d13a38460d7150ab03\") " pod="kube-system/kube-apiserver-localhost" Jan 23 01:19:29.929560 kubelet[2457]: I0123 01:19:29.928369 2457 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " 
pod="kube-system/kube-controller-manager-localhost" Jan 23 01:19:29.931940 kubelet[2457]: E0123 01:19:29.931920 2457 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 01:19:29.946552 systemd[1]: Created slice kubepods-burstable-pod66e26b992bcd7ea6fb75e339cf7a3f7d.slice - libcontainer container kubepods-burstable-pod66e26b992bcd7ea6fb75e339cf7a3f7d.slice. Jan 23 01:19:29.948396 kubelet[2457]: I0123 01:19:29.948359 2457 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 23 01:19:29.951545 kubelet[2457]: E0123 01:19:29.951239 2457 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.71:6443/api/v1/nodes\": dial tcp 10.0.0.71:6443: connect: connection refused" node="localhost" Jan 23 01:19:29.974110 kubelet[2457]: E0123 01:19:29.973216 2457 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 01:19:29.981726 systemd[1]: Created slice kubepods-burstable-pod6e6cfcfb327385445a9bb0d2bc2fd5d4.slice - libcontainer container kubepods-burstable-pod6e6cfcfb327385445a9bb0d2bc2fd5d4.slice. Jan 23 01:19:29.993401 kubelet[2457]: E0123 01:19:29.990653 2457 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 01:19:30.157165 kubelet[2457]: I0123 01:19:30.156823 2457 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 23 01:19:30.159392 kubelet[2457]: E0123 01:19:30.158879 2457 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.71:6443/api/v1/nodes\": dial tcp 10.0.0.71:6443: connect: connection refused" node="localhost" Jan 23 01:19:30.236622 containerd[1563]: time="2026-01-23T01:19:30.236374825Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:a65d16ae0a9d45d13a38460d7150ab03,Namespace:kube-system,Attempt:0,}" Jan 23 01:19:30.275318 containerd[1563]: time="2026-01-23T01:19:30.274756193Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:66e26b992bcd7ea6fb75e339cf7a3f7d,Namespace:kube-system,Attempt:0,}" Jan 23 01:19:30.299415 kubelet[2457]: E0123 01:19:30.299364 2457 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.71:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.71:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 23 01:19:30.308424 containerd[1563]: time="2026-01-23T01:19:30.308271579Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6e6cfcfb327385445a9bb0d2bc2fd5d4,Namespace:kube-system,Attempt:0,}" Jan 23 01:19:30.385553 containerd[1563]: time="2026-01-23T01:19:30.385351945Z" level=info msg="connecting to shim 32526bdebea9ce22d613a01a78909669a4d51055a9cb32bb4418620295b78f34" address="unix:///run/containerd/s/37168291ce97d77872f0ea2ccc334f80407990cb64ea3d28170875b54f7724d6" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:19:30.404118 containerd[1563]: time="2026-01-23T01:19:30.403814455Z" level=info msg="connecting to shim c92e4ff4c272d3b71f6fa92eaf52d02907a1c5f0fdfab40546e08635c7339aa7" 
address="unix:///run/containerd/s/b3169733a257d0412da698b1be4fb8b00557f4492c1dfa80af6ba852fcac4a15" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:19:30.439375 containerd[1563]: time="2026-01-23T01:19:30.438927373Z" level=info msg="connecting to shim 1779de13f860b776dc0cf629ee9c24a88608b53379ef7aebd271112fa937c8a3" address="unix:///run/containerd/s/453dcdfbded842d12bb40a8cf6dbba32f7cafb07373a141a08c5ebac3a436a64" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:19:30.456949 kubelet[2457]: E0123 01:19:30.456898 2457 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.71:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.71:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 23 01:19:30.915623 kubelet[2457]: E0123 01:19:30.914552 2457 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.71:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.71:6443: connect: connection refused" interval="1.6s" Jan 23 01:19:30.915623 kubelet[2457]: E0123 01:19:30.914914 2457 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.71:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.71:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 23 01:19:30.919828 kubelet[2457]: E0123 01:19:30.919369 2457 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.71:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.71:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 23 01:19:30.920617 kubelet[2457]: I0123 01:19:30.920524 2457 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 23 01:19:30.923841 kubelet[2457]: E0123 01:19:30.923811 2457 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.71:6443/api/v1/nodes\": dial tcp 10.0.0.71:6443: connect: connection refused" node="localhost" Jan 23 01:19:30.957429 kubelet[2457]: E0123 01:19:30.956404 2457 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.71:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.71:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 23 01:19:31.076547 systemd[1]: Started cri-containerd-c92e4ff4c272d3b71f6fa92eaf52d02907a1c5f0fdfab40546e08635c7339aa7.scope - libcontainer container c92e4ff4c272d3b71f6fa92eaf52d02907a1c5f0fdfab40546e08635c7339aa7. Jan 23 01:19:31.125391 systemd[1]: Started cri-containerd-1779de13f860b776dc0cf629ee9c24a88608b53379ef7aebd271112fa937c8a3.scope - libcontainer container 1779de13f860b776dc0cf629ee9c24a88608b53379ef7aebd271112fa937c8a3. Jan 23 01:19:31.130325 systemd[1]: Started cri-containerd-32526bdebea9ce22d613a01a78909669a4d51055a9cb32bb4418620295b78f34.scope - libcontainer container 32526bdebea9ce22d613a01a78909669a4d51055a9cb32bb4418620295b78f34. 
Jan 23 01:19:31.417687 containerd[1563]: time="2026-01-23T01:19:31.417339653Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:a65d16ae0a9d45d13a38460d7150ab03,Namespace:kube-system,Attempt:0,} returns sandbox id \"c92e4ff4c272d3b71f6fa92eaf52d02907a1c5f0fdfab40546e08635c7339aa7\"" Jan 23 01:19:31.526359 containerd[1563]: time="2026-01-23T01:19:31.526254832Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6e6cfcfb327385445a9bb0d2bc2fd5d4,Namespace:kube-system,Attempt:0,} returns sandbox id \"1779de13f860b776dc0cf629ee9c24a88608b53379ef7aebd271112fa937c8a3\"" Jan 23 01:19:31.532845 containerd[1563]: time="2026-01-23T01:19:31.532817234Z" level=info msg="CreateContainer within sandbox \"c92e4ff4c272d3b71f6fa92eaf52d02907a1c5f0fdfab40546e08635c7339aa7\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 23 01:19:31.544428 containerd[1563]: time="2026-01-23T01:19:31.544340297Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:66e26b992bcd7ea6fb75e339cf7a3f7d,Namespace:kube-system,Attempt:0,} returns sandbox id \"32526bdebea9ce22d613a01a78909669a4d51055a9cb32bb4418620295b78f34\"" Jan 23 01:19:31.556749 containerd[1563]: time="2026-01-23T01:19:31.556404701Z" level=info msg="CreateContainer within sandbox \"1779de13f860b776dc0cf629ee9c24a88608b53379ef7aebd271112fa937c8a3\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 23 01:19:31.569219 containerd[1563]: time="2026-01-23T01:19:31.566422959Z" level=info msg="CreateContainer within sandbox \"32526bdebea9ce22d613a01a78909669a4d51055a9cb32bb4418620295b78f34\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 23 01:19:31.612728 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2699405826.mount: Deactivated successfully. Jan 23 01:19:31.625299 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2848569166.mount: Deactivated successfully. 
Jan 23 01:19:31.640711 containerd[1563]: time="2026-01-23T01:19:31.639903680Z" level=info msg="Container 575f03fe524bcc6c1fe2664c472a35605694ae592ab51b98a3cf3cdbdf5c3393: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:19:31.652709 containerd[1563]: time="2026-01-23T01:19:31.649897785Z" level=info msg="Container a0a3e7f7d5737d49bc413bf26022500609df33d0b9a4220a7266d81fb0be3f08: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:19:31.687817 containerd[1563]: time="2026-01-23T01:19:31.674168450Z" level=info msg="CreateContainer within sandbox \"c92e4ff4c272d3b71f6fa92eaf52d02907a1c5f0fdfab40546e08635c7339aa7\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"575f03fe524bcc6c1fe2664c472a35605694ae592ab51b98a3cf3cdbdf5c3393\"" Jan 23 01:19:31.687817 containerd[1563]: time="2026-01-23T01:19:31.715751599Z" level=info msg="StartContainer for \"575f03fe524bcc6c1fe2664c472a35605694ae592ab51b98a3cf3cdbdf5c3393\"" Jan 23 01:19:31.687817 containerd[1563]: time="2026-01-23T01:19:31.726622225Z" level=info msg="connecting to shim 575f03fe524bcc6c1fe2664c472a35605694ae592ab51b98a3cf3cdbdf5c3393" address="unix:///run/containerd/s/b3169733a257d0412da698b1be4fb8b00557f4492c1dfa80af6ba852fcac4a15" protocol=ttrpc version=3 Jan 23 01:19:31.740955 kubelet[2457]: I0123 01:19:31.730547 2457 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 23 01:19:31.740955 kubelet[2457]: E0123 01:19:31.735234 2457 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.71:6443/api/v1/nodes\": dial tcp 10.0.0.71:6443: connect: connection refused" node="localhost" Jan 23 01:19:31.750140 containerd[1563]: time="2026-01-23T01:19:31.749211951Z" level=info msg="Container 6f4a98c0dbb0e34ac7d2cca1008fa64ce2198d1d542d665a4994300ba5d1615a: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:19:31.750140 containerd[1563]: time="2026-01-23T01:19:31.749576576Z" level=info msg="CreateContainer within sandbox \"1779de13f860b776dc0cf629ee9c24a88608b53379ef7aebd271112fa937c8a3\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"a0a3e7f7d5737d49bc413bf26022500609df33d0b9a4220a7266d81fb0be3f08\"" Jan 23 01:19:31.751397 containerd[1563]: time="2026-01-23T01:19:31.751278972Z" level=info msg="StartContainer for \"a0a3e7f7d5737d49bc413bf26022500609df33d0b9a4220a7266d81fb0be3f08\"" Jan 23 01:19:31.753590 containerd[1563]: time="2026-01-23T01:19:31.753331160Z" level=info msg="connecting to shim a0a3e7f7d5737d49bc413bf26022500609df33d0b9a4220a7266d81fb0be3f08" address="unix:///run/containerd/s/453dcdfbded842d12bb40a8cf6dbba32f7cafb07373a141a08c5ebac3a436a64" protocol=ttrpc version=3 Jan 23 01:19:31.780572 containerd[1563]: time="2026-01-23T01:19:31.780428943Z" level=info msg="CreateContainer within sandbox \"32526bdebea9ce22d613a01a78909669a4d51055a9cb32bb4418620295b78f34\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"6f4a98c0dbb0e34ac7d2cca1008fa64ce2198d1d542d665a4994300ba5d1615a\"" Jan 23 01:19:31.786187 containerd[1563]: time="2026-01-23T01:19:31.785617201Z" level=info msg="StartContainer for \"6f4a98c0dbb0e34ac7d2cca1008fa64ce2198d1d542d665a4994300ba5d1615a\"" Jan 23 01:19:31.797586 containerd[1563]: time="2026-01-23T01:19:31.796964754Z" level=info msg="connecting to shim 6f4a98c0dbb0e34ac7d2cca1008fa64ce2198d1d542d665a4994300ba5d1615a" address="unix:///run/containerd/s/37168291ce97d77872f0ea2ccc334f80407990cb64ea3d28170875b54f7724d6" protocol=ttrpc version=3 Jan 23 
01:19:31.842751 systemd[1]: Started cri-containerd-575f03fe524bcc6c1fe2664c472a35605694ae592ab51b98a3cf3cdbdf5c3393.scope - libcontainer container 575f03fe524bcc6c1fe2664c472a35605694ae592ab51b98a3cf3cdbdf5c3393. Jan 23 01:19:31.923624 systemd[1]: Started cri-containerd-6f4a98c0dbb0e34ac7d2cca1008fa64ce2198d1d542d665a4994300ba5d1615a.scope - libcontainer container 6f4a98c0dbb0e34ac7d2cca1008fa64ce2198d1d542d665a4994300ba5d1615a. Jan 23 01:19:32.064658 kubelet[2457]: E0123 01:19:32.064599 2457 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.71:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.71:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 23 01:19:32.075612 systemd[1]: Started cri-containerd-a0a3e7f7d5737d49bc413bf26022500609df33d0b9a4220a7266d81fb0be3f08.scope - libcontainer container a0a3e7f7d5737d49bc413bf26022500609df33d0b9a4220a7266d81fb0be3f08. Jan 23 01:19:32.151788 containerd[1563]: time="2026-01-23T01:19:32.151745820Z" level=info msg="StartContainer for \"575f03fe524bcc6c1fe2664c472a35605694ae592ab51b98a3cf3cdbdf5c3393\" returns successfully" Jan 23 01:19:32.201890 containerd[1563]: time="2026-01-23T01:19:32.201687671Z" level=info msg="StartContainer for \"6f4a98c0dbb0e34ac7d2cca1008fa64ce2198d1d542d665a4994300ba5d1615a\" returns successfully" Jan 23 01:19:32.321204 containerd[1563]: time="2026-01-23T01:19:32.320197274Z" level=info msg="StartContainer for \"a0a3e7f7d5737d49bc413bf26022500609df33d0b9a4220a7266d81fb0be3f08\" returns successfully" Jan 23 01:19:32.522284 kubelet[2457]: E0123 01:19:32.521564 2457 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.71:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.71:6443: connect: connection refused" interval="3.2s" Jan 23 01:19:32.940408 kubelet[2457]: E0123 01:19:32.939198 2457 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.71:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.71:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 23 01:19:32.980364 kubelet[2457]: E0123 01:19:32.980299 2457 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.71:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.71:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 23 01:19:33.068718 kubelet[2457]: E0123 01:19:33.066822 2457 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 01:19:33.073667 kubelet[2457]: E0123 01:19:33.073638 2457 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 01:19:33.100812 kubelet[2457]: E0123 01:19:33.100777 2457 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 01:19:33.343278 kubelet[2457]: I0123 01:19:33.342959 2457 kubelet_node_status.go:75] "Attempting to register node" node="localhost" 
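[Editor's note] From the first RunPodSandbox request (01:19:30.236) to the last StartContainer success (01:19:32.320), bringing up all three control-plane containers took about two seconds. Rough arithmetic, with timestamps copied from the log and truncated to microseconds:

    from datetime import datetime

    t0 = datetime.fromisoformat("2026-01-23T01:19:30.236374")  # first RunPodSandbox
    t1 = datetime.fromisoformat("2026-01-23T01:19:32.320197")  # last StartContainer OK
    print(f"{(t1 - t0).total_seconds():.1f}s")  # ~2.1s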
Jan 23 01:19:34.108356 kubelet[2457]: E0123 01:19:34.107897 2457 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 01:19:34.116748 kubelet[2457]: E0123 01:19:34.116623 2457 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 01:19:34.117617 kubelet[2457]: E0123 01:19:34.117329 2457 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 01:19:35.181146 kubelet[2457]: E0123 01:19:35.180245 2457 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 01:19:35.181146 kubelet[2457]: E0123 01:19:35.180538 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:19:35.181146 kubelet[2457]: E0123 01:19:35.180940 2457 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 01:19:35.191201 kubelet[2457]: E0123 01:19:35.188838 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:19:36.259702 kubelet[2457]: E0123 01:19:36.258275 2457 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 01:19:36.263713 kubelet[2457]: E0123 01:19:36.263679 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:19:36.322294 kubelet[2457]: E0123 01:19:36.321901 2457 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 01:19:36.329265 kubelet[2457]: E0123 01:19:36.329160 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:19:39.869784 kubelet[2457]: E0123 01:19:39.867962 2457 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 23 01:19:42.745121 kubelet[2457]: E0123 01:19:42.744853 2457 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 01:19:42.747196 kubelet[2457]: E0123 01:19:42.746835 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:19:43.353340 kubelet[2457]: E0123 01:19:43.352692 2457 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.71:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost" Jan 23 01:19:43.386950 kubelet[2457]: E0123 01:19:43.384889 2457 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.71:6443/api/v1/namespaces/default/events\": net/http: TLS 
handshake timeout" event="&Event{ObjectMeta:{localhost.188d376cd264ca86 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-23 01:19:29.152289414 +0000 UTC m=+1.727661398,LastTimestamp:2026-01-23 01:19:29.152289414 +0000 UTC m=+1.727661398,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 23 01:19:43.523900 kubelet[2457]: E0123 01:19:43.522273 2457 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.71:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 23 01:19:45.201542 kubelet[2457]: E0123 01:19:45.197435 2457 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.71:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 23 01:19:45.726746 kubelet[2457]: E0123 01:19:45.725956 2457 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.71:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" interval="6.4s" Jan 23 01:19:46.824348 kubelet[2457]: I0123 01:19:46.820956 2457 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 23 01:19:47.020818 kubelet[2457]: E0123 01:19:47.012389 2457 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 01:19:47.020818 kubelet[2457]: E0123 01:19:47.014177 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:19:48.346617 kubelet[2457]: E0123 01:19:48.344592 2457 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.71:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 23 01:19:48.855425 kubelet[2457]: E0123 01:19:48.854594 2457 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.71:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 23 01:19:48.913231 kubelet[2457]: E0123 01:19:48.912845 2457 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.71:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 23 01:19:49.407641 kubelet[2457]: E0123 01:19:49.407296 2457 kubelet.go:3305] "No need to 
create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 23 01:19:49.408710 kubelet[2457]: E0123 01:19:49.407909 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:19:49.434283 kubelet[2457]: I0123 01:19:49.433409 2457 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 23 01:19:49.434607 kubelet[2457]: E0123 01:19:49.434584 2457 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jan 23 01:19:50.824217 kubelet[2457]: E0123 01:19:50.824152 2457 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 23 01:19:52.135353 kubelet[2457]: E0123 01:19:52.135214 2457 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 01:19:52.237397 kubelet[2457]: E0123 01:19:52.237255 2457 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 01:19:52.343439 kubelet[2457]: E0123 01:19:52.341635 2457 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 01:19:52.454214 kubelet[2457]: E0123 01:19:52.444778 2457 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 01:19:52.546749 kubelet[2457]: E0123 01:19:52.545593 2457 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 01:19:52.647252 kubelet[2457]: E0123 01:19:52.646660 2457 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 01:19:52.747868 kubelet[2457]: E0123 01:19:52.747664 2457 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 01:19:52.848229 kubelet[2457]: E0123 01:19:52.847908 2457 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 01:19:52.948693 kubelet[2457]: E0123 01:19:52.948653 2457 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 01:19:53.050154 kubelet[2457]: E0123 01:19:53.049845 2457 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 01:19:53.150827 kubelet[2457]: E0123 01:19:53.150770 2457 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 01:19:53.260840 kubelet[2457]: E0123 01:19:53.255760 2457 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 01:19:53.357194 kubelet[2457]: E0123 01:19:53.356719 2457 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 01:19:53.458265 kubelet[2457]: E0123 01:19:53.457333 2457 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 01:19:53.561659 kubelet[2457]: E0123 01:19:53.559227 2457 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 01:19:53.662893 kubelet[2457]: E0123 01:19:53.660968 
2457 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 01:19:53.765442 kubelet[2457]: E0123 01:19:53.764166 2457 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 01:19:53.867574 kubelet[2457]: E0123 01:19:53.866909 2457 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 01:19:53.974863 kubelet[2457]: E0123 01:19:53.972575 2457 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 01:19:54.078624 kubelet[2457]: E0123 01:19:54.077751 2457 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 01:19:54.195420 kubelet[2457]: E0123 01:19:54.195128 2457 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 01:19:54.304273 kubelet[2457]: E0123 01:19:54.299820 2457 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 01:19:54.406312 kubelet[2457]: E0123 01:19:54.404239 2457 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 01:19:54.506333 kubelet[2457]: E0123 01:19:54.505684 2457 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 01:19:54.607237 kubelet[2457]: E0123 01:19:54.606222 2457 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 01:19:54.707810 kubelet[2457]: E0123 01:19:54.707720 2457 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 01:19:54.809303 kubelet[2457]: E0123 01:19:54.809169 2457 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 01:19:54.911911 kubelet[2457]: E0123 01:19:54.911770 2457 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 01:19:55.027963 kubelet[2457]: E0123 01:19:55.025790 2457 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 01:19:55.156911 kubelet[2457]: E0123 01:19:55.152873 2457 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 01:19:55.274622 kubelet[2457]: E0123 01:19:55.266729 2457 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 01:19:55.403616 kubelet[2457]: E0123 01:19:55.400899 2457 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 01:19:55.504687 kubelet[2457]: E0123 01:19:55.502690 2457 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 01:19:55.608418 kubelet[2457]: E0123 01:19:55.604843 2457 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 01:19:55.710918 kubelet[2457]: E0123 01:19:55.710683 2457 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 01:19:55.815154 kubelet[2457]: E0123 01:19:55.813604 2457 kubelet_node_status.go:466] "Error getting the current node from lister" err="node 
\"localhost\" not found" Jan 23 01:19:55.915897 kubelet[2457]: E0123 01:19:55.915101 2457 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 01:19:56.029300 kubelet[2457]: E0123 01:19:56.028604 2457 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 01:19:56.133283 kubelet[2457]: E0123 01:19:56.132415 2457 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 01:19:56.243444 kubelet[2457]: E0123 01:19:56.240914 2457 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 01:19:56.363380 kubelet[2457]: E0123 01:19:56.361302 2457 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 01:19:56.479309 kubelet[2457]: E0123 01:19:56.478252 2457 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 01:19:56.601939 kubelet[2457]: E0123 01:19:56.587290 2457 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 01:19:56.718636 kubelet[2457]: E0123 01:19:56.716627 2457 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 01:19:56.817964 kubelet[2457]: E0123 01:19:56.817643 2457 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 01:19:56.931322 kubelet[2457]: E0123 01:19:56.921345 2457 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 01:19:57.025358 kubelet[2457]: E0123 01:19:57.024614 2457 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 01:19:57.125789 kubelet[2457]: E0123 01:19:57.125400 2457 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 01:19:57.231764 kubelet[2457]: E0123 01:19:57.226248 2457 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 01:19:57.329573 kubelet[2457]: E0123 01:19:57.329272 2457 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 01:19:57.430391 kubelet[2457]: E0123 01:19:57.429956 2457 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 01:19:57.536783 kubelet[2457]: E0123 01:19:57.535738 2457 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 01:19:57.636664 kubelet[2457]: E0123 01:19:57.636141 2457 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 01:19:57.743851 kubelet[2457]: E0123 01:19:57.741334 2457 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 01:19:57.848369 kubelet[2457]: E0123 01:19:57.848222 2457 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 01:19:57.955967 kubelet[2457]: E0123 01:19:57.952602 2457 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 01:19:58.053452 kubelet[2457]: E0123 01:19:58.053406 2457 
kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 01:19:58.154289 kubelet[2457]: E0123 01:19:58.153845 2457 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 01:19:58.255310 kubelet[2457]: E0123 01:19:58.254797 2457 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 01:19:58.333562 systemd[1]: Reload requested from client PID 2748 ('systemctl') (unit session-7.scope)... Jan 23 01:19:58.333672 systemd[1]: Reloading... Jan 23 01:19:58.359194 kubelet[2457]: E0123 01:19:58.358930 2457 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 01:19:58.461858 kubelet[2457]: E0123 01:19:58.460272 2457 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 01:19:58.563918 kubelet[2457]: E0123 01:19:58.563871 2457 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 01:19:58.666707 kubelet[2457]: E0123 01:19:58.666383 2457 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 01:19:58.767813 kubelet[2457]: E0123 01:19:58.767577 2457 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 01:19:58.803209 zram_generator::config[2787]: No configuration found. Jan 23 01:19:58.870953 kubelet[2457]: E0123 01:19:58.869435 2457 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 01:19:58.976873 kubelet[2457]: E0123 01:19:58.976750 2457 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 01:19:59.086749 kubelet[2457]: E0123 01:19:59.085821 2457 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 01:19:59.203841 kubelet[2457]: E0123 01:19:59.202358 2457 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 01:19:59.309205 kubelet[2457]: E0123 01:19:59.308423 2457 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 01:19:59.415749 kubelet[2457]: E0123 01:19:59.408699 2457 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 01:19:59.510847 kubelet[2457]: E0123 01:19:59.509920 2457 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 01:19:59.612819 kubelet[2457]: E0123 01:19:59.612201 2457 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 01:19:59.715740 kubelet[2457]: E0123 01:19:59.714213 2457 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 23 01:19:59.751349 systemd[1]: Reloading finished in 1416 ms. Jan 23 01:19:59.807761 kubelet[2457]: I0123 01:19:59.806170 2457 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 23 01:19:59.847346 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 01:19:59.900830 systemd[1]: kubelet.service: Deactivated successfully. 
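[Editor's note] The recurring dns.go "Nameserver limits exceeded" warnings earlier in this stretch come from the resolv.conf three-nameserver limit: the kubelet applies the first three (1.1.1.1 1.0.0.1 8.8.8.8) and omits the rest. A sketch of that truncation; the fourth server below is hypothetical, since the log only shows the applied line:

    MAX_NAMESERVERS = 3  # resolv.conf limit enforced here

    def apply_limit(servers):
        # Keep the first three nameservers, report the omitted remainder.
        return servers[:MAX_NAMESERVERS], servers[MAX_NAMESERVERS:]

    kept, omitted = apply_limit(["1.1.1.1", "1.0.0.1", "8.8.8.8", "9.9.9.9"])
    print("applied:", " ".join(kept), "| omitted:", omitted)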
Jan 23 01:19:59.902716 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 01:19:59.902920 systemd[1]: kubelet.service: Consumed 8.070s CPU time, 135.2M memory peak. Jan 23 01:19:59.916691 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 01:20:00.863351 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 01:20:00.930865 (kubelet)[2838]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 23 01:20:01.413159 kubelet[2838]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 01:20:01.413159 kubelet[2838]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 23 01:20:01.413159 kubelet[2838]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 01:20:01.413159 kubelet[2838]: I0123 01:20:01.412308 2838 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 23 01:20:01.435327 kubelet[2838]: I0123 01:20:01.435290 2838 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 23 01:20:01.436968 kubelet[2838]: I0123 01:20:01.436953 2838 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 23 01:20:01.437788 kubelet[2838]: I0123 01:20:01.437767 2838 server.go:956] "Client rotation is on, will bootstrap in background" Jan 23 01:20:01.444237 kubelet[2838]: I0123 01:20:01.443756 2838 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 23 01:20:01.460865 kubelet[2838]: I0123 01:20:01.460831 2838 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 23 01:20:01.467577 sudo[2854]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 23 01:20:01.469724 sudo[2854]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 23 01:20:01.506181 kubelet[2838]: I0123 01:20:01.504735 2838 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 23 01:20:01.527869 kubelet[2838]: I0123 01:20:01.527685 2838 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 23 01:20:01.528200 kubelet[2838]: I0123 01:20:01.528174 2838 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 23 01:20:01.528412 kubelet[2838]: I0123 01:20:01.528199 2838 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 23 01:20:01.528412 kubelet[2838]: I0123 01:20:01.528332 2838 topology_manager.go:138] "Creating topology manager with none policy" Jan 23 01:20:01.528412 kubelet[2838]: I0123 01:20:01.528340 2838 container_manager_linux.go:303] "Creating device plugin manager" Jan 23 01:20:01.528412 kubelet[2838]: I0123 01:20:01.528384 2838 state_mem.go:36] "Initialized new in-memory state store" Jan 23 01:20:01.529085 kubelet[2838]: I0123 01:20:01.528891 2838 kubelet.go:480] "Attempting to sync node with API server" Jan 23 01:20:01.529709 kubelet[2838]: I0123 01:20:01.529240 2838 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 23 01:20:01.529709 kubelet[2838]: I0123 01:20:01.529349 2838 kubelet.go:386] "Adding apiserver pod source" Jan 23 01:20:01.529709 kubelet[2838]: I0123 01:20:01.529367 2838 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 23 01:20:01.541144 kubelet[2838]: I0123 01:20:01.538879 2838 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Jan 23 01:20:01.541144 kubelet[2838]: I0123 01:20:01.539645 2838 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 23 01:20:01.619427 kubelet[2838]: I0123 01:20:01.618826 2838 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 23 01:20:01.622114 kubelet[2838]: I0123 01:20:01.621911 2838 server.go:1289] "Started kubelet" Jan 23 01:20:01.626073 kubelet[2838]: I0123 01:20:01.625298 2838 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 23 01:20:01.638774 kubelet[2838]: I0123 
01:20:01.638705 2838 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 23 01:20:01.661798 kubelet[2838]: I0123 01:20:01.660590 2838 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 23 01:20:01.661798 kubelet[2838]: I0123 01:20:01.661220 2838 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 23 01:20:01.669236 kubelet[2838]: I0123 01:20:01.664659 2838 server.go:317] "Adding debug handlers to kubelet server" Jan 23 01:20:01.669236 kubelet[2838]: I0123 01:20:01.665926 2838 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 23 01:20:01.669236 kubelet[2838]: I0123 01:20:01.669211 2838 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 23 01:20:01.676260 kubelet[2838]: I0123 01:20:01.669347 2838 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 23 01:20:01.676260 kubelet[2838]: I0123 01:20:01.669893 2838 reconciler.go:26] "Reconciler: start to sync state" Jan 23 01:20:01.676260 kubelet[2838]: E0123 01:20:01.674757 2838 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 23 01:20:01.698345 kubelet[2838]: I0123 01:20:01.696641 2838 factory.go:223] Registration of the systemd container factory successfully Jan 23 01:20:01.698345 kubelet[2838]: I0123 01:20:01.697834 2838 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 23 01:20:01.711570 kubelet[2838]: I0123 01:20:01.710591 2838 factory.go:223] Registration of the containerd container factory successfully Jan 23 01:20:02.058862 kubelet[2838]: I0123 01:20:02.058797 2838 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 23 01:20:02.080958 kubelet[2838]: I0123 01:20:02.080923 2838 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 23 01:20:02.101939 kubelet[2838]: I0123 01:20:02.083792 2838 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 23 01:20:02.101939 kubelet[2838]: I0123 01:20:02.083823 2838 state_mem.go:36] "Initialized new in-memory state store" Jan 23 01:20:02.102892 kubelet[2838]: I0123 01:20:02.102869 2838 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jan 23 01:20:02.103191 kubelet[2838]: I0123 01:20:02.102966 2838 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 23 01:20:02.103307 kubelet[2838]: I0123 01:20:02.103290 2838 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
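
The ratelimit.go:55 entry above reports a token bucket of qps=100 and burstTokens=10 for the podresources endpoint. A sketch of the same shape with golang.org/x/time/rate (the package this style of limiter comes from; the simulated request loop is illustrative):

```go
package main

import (
	"fmt"

	"golang.org/x/time/rate"
)

func main() {
	// 100 tokens/second refill, bucket depth 10 — the values from the log line.
	limiter := rate.NewLimiter(rate.Limit(100), 10)

	allowed, throttled := 0, 0
	for i := 0; i < 50; i++ { // a burst of 50 back-to-back requests
		if limiter.Allow() {
			allowed++
		} else {
			throttled++
		}
	}
	fmt.Printf("allowed=%d throttled=%d\n", allowed, throttled)
}
```

Roughly the first ten calls drain the burst; the rest are throttled until the 100/s refill catches up.
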
Jan 23 01:20:02.103372 kubelet[2838]: I0123 01:20:02.103361 2838 kubelet.go:2436] "Starting kubelet main sync loop" Jan 23 01:20:02.103609 kubelet[2838]: E0123 01:20:02.103579 2838 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 23 01:20:02.128182 kubelet[2838]: I0123 01:20:02.128145 2838 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 23 01:20:02.128400 kubelet[2838]: I0123 01:20:02.128362 2838 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 23 01:20:02.128586 kubelet[2838]: I0123 01:20:02.128569 2838 policy_none.go:49] "None policy: Start" Jan 23 01:20:02.129195 kubelet[2838]: I0123 01:20:02.129178 2838 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 23 01:20:02.129297 kubelet[2838]: I0123 01:20:02.129282 2838 state_mem.go:35] "Initializing new in-memory state store" Jan 23 01:20:02.129710 kubelet[2838]: I0123 01:20:02.129678 2838 state_mem.go:75] "Updated machine memory state" Jan 23 01:20:02.187414 kubelet[2838]: E0123 01:20:02.187377 2838 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 23 01:20:02.188633 kubelet[2838]: I0123 01:20:02.188454 2838 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 23 01:20:02.209333 kubelet[2838]: I0123 01:20:02.208577 2838 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 23 01:20:02.210596 kubelet[2838]: I0123 01:20:02.210579 2838 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 23 01:20:02.212826 kubelet[2838]: I0123 01:20:02.212806 2838 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 23 01:20:02.220332 kubelet[2838]: I0123 01:20:02.215391 2838 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 23 01:20:02.223641 kubelet[2838]: I0123 01:20:02.215602 2838 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 23 01:20:02.228932 kubelet[2838]: E0123 01:20:02.227282 2838 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 23 01:20:02.279318 kubelet[2838]: I0123 01:20:02.278253 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a65d16ae0a9d45d13a38460d7150ab03-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"a65d16ae0a9d45d13a38460d7150ab03\") " pod="kube-system/kube-apiserver-localhost" Jan 23 01:20:02.279318 kubelet[2838]: I0123 01:20:02.278308 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a65d16ae0a9d45d13a38460d7150ab03-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"a65d16ae0a9d45d13a38460d7150ab03\") " pod="kube-system/kube-apiserver-localhost" Jan 23 01:20:02.279318 kubelet[2838]: I0123 01:20:02.278430 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 23 01:20:02.289129 kubelet[2838]: I0123 01:20:02.278457 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6e6cfcfb327385445a9bb0d2bc2fd5d4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6e6cfcfb327385445a9bb0d2bc2fd5d4\") " pod="kube-system/kube-scheduler-localhost" Jan 23 01:20:02.322204 kubelet[2838]: I0123 01:20:02.314722 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a65d16ae0a9d45d13a38460d7150ab03-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"a65d16ae0a9d45d13a38460d7150ab03\") " pod="kube-system/kube-apiserver-localhost" Jan 23 01:20:02.322204 kubelet[2838]: I0123 01:20:02.314893 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 23 01:20:02.322204 kubelet[2838]: I0123 01:20:02.315168 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 23 01:20:02.322204 kubelet[2838]: I0123 01:20:02.315194 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 23 01:20:02.322204 kubelet[2838]: I0123 01:20:02.315219 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: 
\"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 23 01:20:02.391791 kubelet[2838]: I0123 01:20:02.378676 2838 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 23 01:20:02.478229 kubelet[2838]: I0123 01:20:02.477334 2838 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jan 23 01:20:02.478229 kubelet[2838]: I0123 01:20:02.477737 2838 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 23 01:20:02.531147 kubelet[2838]: I0123 01:20:02.530587 2838 apiserver.go:52] "Watching apiserver" Jan 23 01:20:02.605942 kubelet[2838]: E0123 01:20:02.601661 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:20:02.640289 kubelet[2838]: E0123 01:20:02.639422 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:20:02.640289 kubelet[2838]: E0123 01:20:02.639760 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:20:02.691821 kubelet[2838]: I0123 01:20:02.674924 2838 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 23 01:20:03.244168 kubelet[2838]: E0123 01:20:03.243586 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:20:03.245264 kubelet[2838]: E0123 01:20:03.245239 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:20:03.249793 kubelet[2838]: E0123 01:20:03.249282 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:20:03.459580 kubelet[2838]: I0123 01:20:03.448907 2838 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.448713253 podStartE2EDuration="1.448713253s" podCreationTimestamp="2026-01-23 01:20:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 01:20:03.150948512 +0000 UTC m=+2.125699457" watchObservedRunningTime="2026-01-23 01:20:03.448713253 +0000 UTC m=+2.423464219" Jan 23 01:20:03.779734 sudo[2854]: pam_unix(sudo:session): session closed for user root Jan 23 01:20:03.859215 kubelet[2838]: I0123 01:20:03.857896 2838 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.857872091 podStartE2EDuration="1.857872091s" podCreationTimestamp="2026-01-23 01:20:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 01:20:03.445280555 +0000 UTC m=+2.420031501" watchObservedRunningTime="2026-01-23 01:20:03.857872091 +0000 UTC m=+2.832623016" Jan 23 01:20:03.863580 kubelet[2838]: I0123 01:20:03.863240 2838 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.863225812 podStartE2EDuration="1.863225812s" podCreationTimestamp="2026-01-23 01:20:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 01:20:03.857655737 +0000 UTC m=+2.832406712" watchObservedRunningTime="2026-01-23 01:20:03.863225812 +0000 UTC m=+2.837976737" Jan 23 01:20:04.156575 kubelet[2838]: I0123 01:20:04.148342 2838 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 23 01:20:04.211313 containerd[1563]: time="2026-01-23T01:20:04.210666421Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 23 01:20:04.216359 kubelet[2838]: I0123 01:20:04.215920 2838 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 23 01:20:04.374114 kubelet[2838]: E0123 01:20:04.370636 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:20:04.374448 kubelet[2838]: E0123 01:20:04.370650 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:20:05.239183 kubelet[2838]: I0123 01:20:05.234682 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ada03265-c48d-4bcf-b396-fe0aec6e6075-xtables-lock\") pod \"kube-proxy-g7rqq\" (UID: \"ada03265-c48d-4bcf-b396-fe0aec6e6075\") " pod="kube-system/kube-proxy-g7rqq" Jan 23 01:20:05.239183 kubelet[2838]: I0123 01:20:05.234806 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bs74l\" (UniqueName: \"kubernetes.io/projected/ada03265-c48d-4bcf-b396-fe0aec6e6075-kube-api-access-bs74l\") pod \"kube-proxy-g7rqq\" (UID: \"ada03265-c48d-4bcf-b396-fe0aec6e6075\") " pod="kube-system/kube-proxy-g7rqq" Jan 23 01:20:05.239183 kubelet[2838]: I0123 01:20:05.234838 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ada03265-c48d-4bcf-b396-fe0aec6e6075-kube-proxy\") pod \"kube-proxy-g7rqq\" (UID: \"ada03265-c48d-4bcf-b396-fe0aec6e6075\") " pod="kube-system/kube-proxy-g7rqq" Jan 23 01:20:05.239183 kubelet[2838]: I0123 01:20:05.234860 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ada03265-c48d-4bcf-b396-fe0aec6e6075-lib-modules\") pod \"kube-proxy-g7rqq\" (UID: \"ada03265-c48d-4bcf-b396-fe0aec6e6075\") " pod="kube-system/kube-proxy-g7rqq" Jan 23 01:20:05.240200 systemd[1]: Created slice kubepods-besteffort-podada03265_c48d_4bcf_b396_fe0aec6e6075.slice - libcontainer container kubepods-besteffort-podada03265_c48d_4bcf_b396_fe0aec6e6075.slice. 
Jan 23 01:20:05.387257 kubelet[2838]: E0123 01:20:05.387167 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:20:05.568181 kubelet[2838]: E0123 01:20:05.566238 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:20:05.573379 containerd[1563]: time="2026-01-23T01:20:05.571889608Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-g7rqq,Uid:ada03265-c48d-4bcf-b396-fe0aec6e6075,Namespace:kube-system,Attempt:0,}" Jan 23 01:20:05.778340 containerd[1563]: time="2026-01-23T01:20:05.778193379Z" level=info msg="connecting to shim ea917658ea1f81d56ef20c96e780532ac27807cc525c33c2b352cfd5bf3cf7ee" address="unix:///run/containerd/s/1db9e61c1620a33215e77e7bbc88d88c75c231658663958f78e1ee561b6b2ac1" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:20:06.026906 systemd[1]: Started cri-containerd-ea917658ea1f81d56ef20c96e780532ac27807cc525c33c2b352cfd5bf3cf7ee.scope - libcontainer container ea917658ea1f81d56ef20c96e780532ac27807cc525c33c2b352cfd5bf3cf7ee. Jan 23 01:20:06.250448 containerd[1563]: time="2026-01-23T01:20:06.249920937Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-g7rqq,Uid:ada03265-c48d-4bcf-b396-fe0aec6e6075,Namespace:kube-system,Attempt:0,} returns sandbox id \"ea917658ea1f81d56ef20c96e780532ac27807cc525c33c2b352cfd5bf3cf7ee\"" Jan 23 01:20:06.260149 kubelet[2838]: E0123 01:20:06.257791 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:20:06.290602 containerd[1563]: time="2026-01-23T01:20:06.289193434Z" level=info msg="CreateContainer within sandbox \"ea917658ea1f81d56ef20c96e780532ac27807cc525c33c2b352cfd5bf3cf7ee\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 23 01:20:06.393326 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3242544022.mount: Deactivated successfully. 
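
containerd reaches each shim over the unix socket printed in the "connecting to shim" entry. Leaving the ttrpc handshake aside, confirming such a socket is reachable is plain net.Dial; a sketch (socket path copied from the log, so it is only meaningful on the host that produced it):

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Shim address from the log, minus the unix:// scheme.
	const sock = "/run/containerd/s/1db9e61c1620a33215e77e7bbc88d88c75c231658663958f78e1ee561b6b2ac1"

	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		fmt.Println("shim socket not reachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("connected to shim at", sock)
}
```
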
Jan 23 01:20:06.407938 containerd[1563]: time="2026-01-23T01:20:06.405879428Z" level=info msg="Container c7c70f45a0e820d78358ac9ff4588344ed8afb8ecb1168061e0ae2247dbdff71: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:20:06.413905 kubelet[2838]: E0123 01:20:06.411860 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:20:06.453234 containerd[1563]: time="2026-01-23T01:20:06.452897463Z" level=info msg="CreateContainer within sandbox \"ea917658ea1f81d56ef20c96e780532ac27807cc525c33c2b352cfd5bf3cf7ee\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c7c70f45a0e820d78358ac9ff4588344ed8afb8ecb1168061e0ae2247dbdff71\"" Jan 23 01:20:06.460440 containerd[1563]: time="2026-01-23T01:20:06.460283967Z" level=info msg="StartContainer for \"c7c70f45a0e820d78358ac9ff4588344ed8afb8ecb1168061e0ae2247dbdff71\"" Jan 23 01:20:06.471137 containerd[1563]: time="2026-01-23T01:20:06.470770153Z" level=info msg="connecting to shim c7c70f45a0e820d78358ac9ff4588344ed8afb8ecb1168061e0ae2247dbdff71" address="unix:///run/containerd/s/1db9e61c1620a33215e77e7bbc88d88c75c231658663958f78e1ee561b6b2ac1" protocol=ttrpc version=3 Jan 23 01:20:06.602311 systemd[1]: Started cri-containerd-c7c70f45a0e820d78358ac9ff4588344ed8afb8ecb1168061e0ae2247dbdff71.scope - libcontainer container c7c70f45a0e820d78358ac9ff4588344ed8afb8ecb1168061e0ae2247dbdff71. Jan 23 01:20:06.987784 systemd[1]: Created slice kubepods-burstable-pod8f853f65_8007_42ea_8e4b_f009906b5cc0.slice - libcontainer container kubepods-burstable-pod8f853f65_8007_42ea_8e4b_f009906b5cc0.slice. Jan 23 01:20:07.008385 systemd[1]: Created slice kubepods-besteffort-pode55daa78_d9f7_467a_856a_5c3b45afc015.slice - libcontainer container kubepods-besteffort-pode55daa78_d9f7_467a_856a_5c3b45afc015.slice. 
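
The entries above show the sequence the kubelet drives through the runtime: CreateContainer inside an existing sandbox returns a container id, StartContainer is then called with that id, and systemd places the result in a .scope under the pod's slice. A hedged sketch of that two-step contract; the interface below is illustrative, not the real CRI API:

```go
package main

import "fmt"

// criClient is an illustrative stand-in for a CRI-style runtime client.
type criClient interface {
	CreateContainer(sandboxID, name string) (containerID string, err error)
	StartContainer(containerID string) error
}

type fakeRuntime struct{ n int }

func (f *fakeRuntime) CreateContainer(sandboxID, name string) (string, error) {
	f.n++
	return fmt.Sprintf("ctr-%d-in-%s", f.n, sandboxID[:8]), nil
}

func (f *fakeRuntime) StartContainer(id string) error {
	fmt.Println("StartContainer for", id, "returns successfully")
	return nil
}

func main() {
	var r criClient = &fakeRuntime{}
	// Sandbox id prefix taken from the log's kube-proxy sandbox.
	id, err := r.CreateContainer("ea917658ea1f", "kube-proxy")
	if err != nil {
		panic(err)
	}
	if err := r.StartContainer(id); err != nil { // start only after create returns an id
		panic(err)
	}
}
```
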
Jan 23 01:20:07.068448 kubelet[2838]: I0123 01:20:07.068315 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8f853f65-8007-42ea-8e4b-f009906b5cc0-cilium-run\") pod \"cilium-f4z87\" (UID: \"8f853f65-8007-42ea-8e4b-f009906b5cc0\") " pod="kube-system/cilium-f4z87" Jan 23 01:20:07.068448 kubelet[2838]: I0123 01:20:07.068446 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8f853f65-8007-42ea-8e4b-f009906b5cc0-cni-path\") pod \"cilium-f4z87\" (UID: \"8f853f65-8007-42ea-8e4b-f009906b5cc0\") " pod="kube-system/cilium-f4z87" Jan 23 01:20:07.069755 kubelet[2838]: I0123 01:20:07.068580 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8f853f65-8007-42ea-8e4b-f009906b5cc0-xtables-lock\") pod \"cilium-f4z87\" (UID: \"8f853f65-8007-42ea-8e4b-f009906b5cc0\") " pod="kube-system/cilium-f4z87" Jan 23 01:20:07.069755 kubelet[2838]: I0123 01:20:07.068610 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5d6n\" (UniqueName: \"kubernetes.io/projected/8f853f65-8007-42ea-8e4b-f009906b5cc0-kube-api-access-r5d6n\") pod \"cilium-f4z87\" (UID: \"8f853f65-8007-42ea-8e4b-f009906b5cc0\") " pod="kube-system/cilium-f4z87" Jan 23 01:20:07.069755 kubelet[2838]: I0123 01:20:07.068631 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8f853f65-8007-42ea-8e4b-f009906b5cc0-etc-cni-netd\") pod \"cilium-f4z87\" (UID: \"8f853f65-8007-42ea-8e4b-f009906b5cc0\") " pod="kube-system/cilium-f4z87" Jan 23 01:20:07.069755 kubelet[2838]: I0123 01:20:07.068650 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8f853f65-8007-42ea-8e4b-f009906b5cc0-clustermesh-secrets\") pod \"cilium-f4z87\" (UID: \"8f853f65-8007-42ea-8e4b-f009906b5cc0\") " pod="kube-system/cilium-f4z87" Jan 23 01:20:07.069755 kubelet[2838]: I0123 01:20:07.068668 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8f853f65-8007-42ea-8e4b-f009906b5cc0-cilium-config-path\") pod \"cilium-f4z87\" (UID: \"8f853f65-8007-42ea-8e4b-f009906b5cc0\") " pod="kube-system/cilium-f4z87" Jan 23 01:20:07.073683 kubelet[2838]: I0123 01:20:07.068686 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8f853f65-8007-42ea-8e4b-f009906b5cc0-host-proc-sys-kernel\") pod \"cilium-f4z87\" (UID: \"8f853f65-8007-42ea-8e4b-f009906b5cc0\") " pod="kube-system/cilium-f4z87" Jan 23 01:20:07.073683 kubelet[2838]: I0123 01:20:07.068705 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8f853f65-8007-42ea-8e4b-f009906b5cc0-hostproc\") pod \"cilium-f4z87\" (UID: \"8f853f65-8007-42ea-8e4b-f009906b5cc0\") " pod="kube-system/cilium-f4z87" Jan 23 01:20:07.073683 kubelet[2838]: I0123 01:20:07.068727 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" 
(UniqueName: \"kubernetes.io/host-path/8f853f65-8007-42ea-8e4b-f009906b5cc0-cilium-cgroup\") pod \"cilium-f4z87\" (UID: \"8f853f65-8007-42ea-8e4b-f009906b5cc0\") " pod="kube-system/cilium-f4z87" Jan 23 01:20:07.073683 kubelet[2838]: I0123 01:20:07.068745 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8f853f65-8007-42ea-8e4b-f009906b5cc0-lib-modules\") pod \"cilium-f4z87\" (UID: \"8f853f65-8007-42ea-8e4b-f009906b5cc0\") " pod="kube-system/cilium-f4z87" Jan 23 01:20:07.073683 kubelet[2838]: I0123 01:20:07.068775 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8f853f65-8007-42ea-8e4b-f009906b5cc0-host-proc-sys-net\") pod \"cilium-f4z87\" (UID: \"8f853f65-8007-42ea-8e4b-f009906b5cc0\") " pod="kube-system/cilium-f4z87" Jan 23 01:20:07.073917 kubelet[2838]: I0123 01:20:07.068802 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvfsn\" (UniqueName: \"kubernetes.io/projected/e55daa78-d9f7-467a-856a-5c3b45afc015-kube-api-access-bvfsn\") pod \"cilium-operator-6c4d7847fc-h7k4z\" (UID: \"e55daa78-d9f7-467a-856a-5c3b45afc015\") " pod="kube-system/cilium-operator-6c4d7847fc-h7k4z" Jan 23 01:20:07.073917 kubelet[2838]: I0123 01:20:07.068831 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8f853f65-8007-42ea-8e4b-f009906b5cc0-hubble-tls\") pod \"cilium-f4z87\" (UID: \"8f853f65-8007-42ea-8e4b-f009906b5cc0\") " pod="kube-system/cilium-f4z87" Jan 23 01:20:07.073917 kubelet[2838]: I0123 01:20:07.068856 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e55daa78-d9f7-467a-856a-5c3b45afc015-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-h7k4z\" (UID: \"e55daa78-d9f7-467a-856a-5c3b45afc015\") " pod="kube-system/cilium-operator-6c4d7847fc-h7k4z" Jan 23 01:20:07.073917 kubelet[2838]: I0123 01:20:07.068884 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8f853f65-8007-42ea-8e4b-f009906b5cc0-bpf-maps\") pod \"cilium-f4z87\" (UID: \"8f853f65-8007-42ea-8e4b-f009906b5cc0\") " pod="kube-system/cilium-f4z87" Jan 23 01:20:07.394284 containerd[1563]: time="2026-01-23T01:20:07.393892759Z" level=info msg="StartContainer for \"c7c70f45a0e820d78358ac9ff4588344ed8afb8ecb1168061e0ae2247dbdff71\" returns successfully" Jan 23 01:20:07.439708 kubelet[2838]: E0123 01:20:07.438914 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:20:07.605781 kubelet[2838]: E0123 01:20:07.603942 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:20:07.606212 containerd[1563]: time="2026-01-23T01:20:07.605395567Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-f4z87,Uid:8f853f65-8007-42ea-8e4b-f009906b5cc0,Namespace:kube-system,Attempt:0,}" Jan 23 01:20:07.625824 kubelet[2838]: E0123 01:20:07.625343 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:20:07.630782 containerd[1563]: time="2026-01-23T01:20:07.630275641Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-h7k4z,Uid:e55daa78-d9f7-467a-856a-5c3b45afc015,Namespace:kube-system,Attempt:0,}" Jan 23 01:20:07.799623 containerd[1563]: time="2026-01-23T01:20:07.799456642Z" level=info msg="connecting to shim 05537b0724f6bf4ff9a4c7ecaeb60a5a347a18578f0929eb9c518623fe493d40" address="unix:///run/containerd/s/3278c82c7fe63119d308ba27cd8253841255c2040e5b33932ded57d95079e821" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:20:07.844465 containerd[1563]: time="2026-01-23T01:20:07.843916895Z" level=info msg="connecting to shim 992c1f1cfee7c39bf594c7b4c73c35c343387a9c650071041ff64eed03b972f1" address="unix:///run/containerd/s/e80c3854b0fc2fb3f35275155b65f1306397b8889fb17be4a25668f0011f7376" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:20:07.986400 systemd[1]: Started cri-containerd-05537b0724f6bf4ff9a4c7ecaeb60a5a347a18578f0929eb9c518623fe493d40.scope - libcontainer container 05537b0724f6bf4ff9a4c7ecaeb60a5a347a18578f0929eb9c518623fe493d40. Jan 23 01:20:08.104720 systemd[1]: Started cri-containerd-992c1f1cfee7c39bf594c7b4c73c35c343387a9c650071041ff64eed03b972f1.scope - libcontainer container 992c1f1cfee7c39bf594c7b4c73c35c343387a9c650071041ff64eed03b972f1. Jan 23 01:20:08.404386 containerd[1563]: time="2026-01-23T01:20:08.403819268Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-f4z87,Uid:8f853f65-8007-42ea-8e4b-f009906b5cc0,Namespace:kube-system,Attempt:0,} returns sandbox id \"992c1f1cfee7c39bf594c7b4c73c35c343387a9c650071041ff64eed03b972f1\"" Jan 23 01:20:08.432177 kubelet[2838]: E0123 01:20:08.431867 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:20:08.476075 containerd[1563]: time="2026-01-23T01:20:08.474968574Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 23 01:20:08.499434 kubelet[2838]: E0123 01:20:08.499391 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:20:08.516927 kubelet[2838]: E0123 01:20:08.515652 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:20:08.551728 containerd[1563]: time="2026-01-23T01:20:08.551693011Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-h7k4z,Uid:e55daa78-d9f7-467a-856a-5c3b45afc015,Namespace:kube-system,Attempt:0,} returns sandbox id \"05537b0724f6bf4ff9a4c7ecaeb60a5a347a18578f0929eb9c518623fe493d40\"" Jan 23 01:20:08.557403 kubelet[2838]: I0123 01:20:08.556247 2838 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-g7rqq" podStartSLOduration=4.556225751 podStartE2EDuration="4.556225751s" podCreationTimestamp="2026-01-23 01:20:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 01:20:07.531300121 +0000 UTC m=+6.506051086" watchObservedRunningTime="2026-01-23 01:20:08.556225751 +0000 UTC 
m=+7.530976676" Jan 23 01:20:08.575237 kubelet[2838]: E0123 01:20:08.571836 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:20:08.916570 kubelet[2838]: E0123 01:20:08.915805 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:20:09.555394 kubelet[2838]: E0123 01:20:09.548180 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:20:09.556656 kubelet[2838]: E0123 01:20:09.556431 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:20:27.430390 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount459420080.mount: Deactivated successfully. Jan 23 01:20:32.795303 containerd[1563]: time="2026-01-23T01:20:32.794484983Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:20:32.797335 containerd[1563]: time="2026-01-23T01:20:32.796704328Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jan 23 01:20:32.799948 containerd[1563]: time="2026-01-23T01:20:32.799810105Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:20:32.804671 containerd[1563]: time="2026-01-23T01:20:32.804482865Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 24.316256053s" Jan 23 01:20:32.804671 containerd[1563]: time="2026-01-23T01:20:32.804612616Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jan 23 01:20:32.809431 containerd[1563]: time="2026-01-23T01:20:32.809381421Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 23 01:20:32.850467 containerd[1563]: time="2026-01-23T01:20:32.850158314Z" level=info msg="CreateContainer within sandbox \"992c1f1cfee7c39bf594c7b4c73c35c343387a9c650071041ff64eed03b972f1\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 23 01:20:32.877352 containerd[1563]: time="2026-01-23T01:20:32.876394562Z" level=info msg="Container f481937c2e6e1199f746450b066b5bd882d8d61986188623d635425f649be330: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:20:32.906278 containerd[1563]: time="2026-01-23T01:20:32.905960100Z" level=info msg="CreateContainer within sandbox \"992c1f1cfee7c39bf594c7b4c73c35c343387a9c650071041ff64eed03b972f1\" for 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"f481937c2e6e1199f746450b066b5bd882d8d61986188623d635425f649be330\"" Jan 23 01:20:32.908912 containerd[1563]: time="2026-01-23T01:20:32.908884238Z" level=info msg="StartContainer for \"f481937c2e6e1199f746450b066b5bd882d8d61986188623d635425f649be330\"" Jan 23 01:20:32.911036 containerd[1563]: time="2026-01-23T01:20:32.910796868Z" level=info msg="connecting to shim f481937c2e6e1199f746450b066b5bd882d8d61986188623d635425f649be330" address="unix:///run/containerd/s/e80c3854b0fc2fb3f35275155b65f1306397b8889fb17be4a25668f0011f7376" protocol=ttrpc version=3 Jan 23 01:20:32.990689 systemd[1]: Started cri-containerd-f481937c2e6e1199f746450b066b5bd882d8d61986188623d635425f649be330.scope - libcontainer container f481937c2e6e1199f746450b066b5bd882d8d61986188623d635425f649be330. Jan 23 01:20:33.128596 containerd[1563]: time="2026-01-23T01:20:33.128282678Z" level=info msg="StartContainer for \"f481937c2e6e1199f746450b066b5bd882d8d61986188623d635425f649be330\" returns successfully" Jan 23 01:20:33.161265 systemd[1]: cri-containerd-f481937c2e6e1199f746450b066b5bd882d8d61986188623d635425f649be330.scope: Deactivated successfully. Jan 23 01:20:33.164639 containerd[1563]: time="2026-01-23T01:20:33.164415941Z" level=info msg="received container exit event container_id:\"f481937c2e6e1199f746450b066b5bd882d8d61986188623d635425f649be330\" id:\"f481937c2e6e1199f746450b066b5bd882d8d61986188623d635425f649be330\" pid:3251 exited_at:{seconds:1769131233 nanos:163214734}" Jan 23 01:20:33.248850 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f481937c2e6e1199f746450b066b5bd882d8d61986188623d635425f649be330-rootfs.mount: Deactivated successfully. Jan 23 01:20:33.383821 kubelet[2838]: E0123 01:20:33.382393 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:20:34.391596 kubelet[2838]: E0123 01:20:34.391303 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:20:34.413225 containerd[1563]: time="2026-01-23T01:20:34.412950104Z" level=info msg="CreateContainer within sandbox \"992c1f1cfee7c39bf594c7b4c73c35c343387a9c650071041ff64eed03b972f1\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 23 01:20:34.505210 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1595074981.mount: Deactivated successfully. 
Jan 23 01:20:34.511661 containerd[1563]: time="2026-01-23T01:20:34.509675080Z" level=info msg="Container c48488cae241ca2f96441795f8907a48275d46ecffb9ce6e5594e7cff4f5e3f0: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:20:34.534583 containerd[1563]: time="2026-01-23T01:20:34.534345910Z" level=info msg="CreateContainer within sandbox \"992c1f1cfee7c39bf594c7b4c73c35c343387a9c650071041ff64eed03b972f1\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"c48488cae241ca2f96441795f8907a48275d46ecffb9ce6e5594e7cff4f5e3f0\"" Jan 23 01:20:34.538478 containerd[1563]: time="2026-01-23T01:20:34.536884256Z" level=info msg="StartContainer for \"c48488cae241ca2f96441795f8907a48275d46ecffb9ce6e5594e7cff4f5e3f0\"" Jan 23 01:20:34.538831 containerd[1563]: time="2026-01-23T01:20:34.538733829Z" level=info msg="connecting to shim c48488cae241ca2f96441795f8907a48275d46ecffb9ce6e5594e7cff4f5e3f0" address="unix:///run/containerd/s/e80c3854b0fc2fb3f35275155b65f1306397b8889fb17be4a25668f0011f7376" protocol=ttrpc version=3 Jan 23 01:20:34.628238 systemd[1]: Started cri-containerd-c48488cae241ca2f96441795f8907a48275d46ecffb9ce6e5594e7cff4f5e3f0.scope - libcontainer container c48488cae241ca2f96441795f8907a48275d46ecffb9ce6e5594e7cff4f5e3f0. Jan 23 01:20:34.841447 containerd[1563]: time="2026-01-23T01:20:34.841404066Z" level=info msg="StartContainer for \"c48488cae241ca2f96441795f8907a48275d46ecffb9ce6e5594e7cff4f5e3f0\" returns successfully" Jan 23 01:20:34.900441 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 23 01:20:34.905174 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 23 01:20:34.912116 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 23 01:20:34.919595 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 23 01:20:34.931847 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jan 23 01:20:34.937147 systemd[1]: cri-containerd-c48488cae241ca2f96441795f8907a48275d46ecffb9ce6e5594e7cff4f5e3f0.scope: Deactivated successfully. Jan 23 01:20:34.945465 containerd[1563]: time="2026-01-23T01:20:34.944258080Z" level=info msg="received container exit event container_id:\"c48488cae241ca2f96441795f8907a48275d46ecffb9ce6e5594e7cff4f5e3f0\" id:\"c48488cae241ca2f96441795f8907a48275d46ecffb9ce6e5594e7cff4f5e3f0\" pid:3310 exited_at:{seconds:1769131234 nanos:943436425}" Jan 23 01:20:35.081829 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 23 01:20:35.219819 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c48488cae241ca2f96441795f8907a48275d46ecffb9ce6e5594e7cff4f5e3f0-rootfs.mount: Deactivated successfully. 
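
The apply-sysctl-overwrites step above (and the systemd-sysctl restart it triggers) works against the /proc/sys tree. Reading a value back to confirm an overwrite took effect is plain file I/O; a sketch, with the key chosen for illustration (rp_filter is one of several sysctls Cilium commonly adjusts):

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	// An illustrative sysctl; any /proc/sys path reads the same way.
	const key = "/proc/sys/net/ipv4/conf/all/rp_filter"

	b, err := os.ReadFile(key)
	if err != nil {
		fmt.Println("read failed (non-Linux host?):", err)
		return
	}
	fmt.Printf("%s = %s\n", key, strings.TrimSpace(string(b)))
	// The overwrite itself would be os.WriteFile(key, []byte("0"), 0o644);
	// it needs root and is deliberately skipped in this sketch.
}
```
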
Jan 23 01:20:35.447902 kubelet[2838]: E0123 01:20:35.445648 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:20:35.568415 containerd[1563]: time="2026-01-23T01:20:35.567634469Z" level=info msg="CreateContainer within sandbox \"992c1f1cfee7c39bf594c7b4c73c35c343387a9c650071041ff64eed03b972f1\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 23 01:20:35.632868 containerd[1563]: time="2026-01-23T01:20:35.631394582Z" level=info msg="Container 284fe25df5c560bc0d54c17bf42d447ceaeaef9a9ee994f6c3cd176da13624f8: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:20:35.633893 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount676103188.mount: Deactivated successfully. Jan 23 01:20:35.677714 containerd[1563]: time="2026-01-23T01:20:35.676443756Z" level=info msg="CreateContainer within sandbox \"992c1f1cfee7c39bf594c7b4c73c35c343387a9c650071041ff64eed03b972f1\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"284fe25df5c560bc0d54c17bf42d447ceaeaef9a9ee994f6c3cd176da13624f8\"" Jan 23 01:20:35.681925 containerd[1563]: time="2026-01-23T01:20:35.681299317Z" level=info msg="StartContainer for \"284fe25df5c560bc0d54c17bf42d447ceaeaef9a9ee994f6c3cd176da13624f8\"" Jan 23 01:20:35.687253 containerd[1563]: time="2026-01-23T01:20:35.686906784Z" level=info msg="connecting to shim 284fe25df5c560bc0d54c17bf42d447ceaeaef9a9ee994f6c3cd176da13624f8" address="unix:///run/containerd/s/e80c3854b0fc2fb3f35275155b65f1306397b8889fb17be4a25668f0011f7376" protocol=ttrpc version=3 Jan 23 01:20:35.773430 systemd[1]: Started cri-containerd-284fe25df5c560bc0d54c17bf42d447ceaeaef9a9ee994f6c3cd176da13624f8.scope - libcontainer container 284fe25df5c560bc0d54c17bf42d447ceaeaef9a9ee994f6c3cd176da13624f8. Jan 23 01:20:35.936746 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3311050561.mount: Deactivated successfully. Jan 23 01:20:36.047227 systemd[1]: cri-containerd-284fe25df5c560bc0d54c17bf42d447ceaeaef9a9ee994f6c3cd176da13624f8.scope: Deactivated successfully. Jan 23 01:20:36.050457 containerd[1563]: time="2026-01-23T01:20:36.050190552Z" level=info msg="StartContainer for \"284fe25df5c560bc0d54c17bf42d447ceaeaef9a9ee994f6c3cd176da13624f8\" returns successfully" Jan 23 01:20:36.061206 containerd[1563]: time="2026-01-23T01:20:36.061163841Z" level=info msg="received container exit event container_id:\"284fe25df5c560bc0d54c17bf42d447ceaeaef9a9ee994f6c3cd176da13624f8\" id:\"284fe25df5c560bc0d54c17bf42d447ceaeaef9a9ee994f6c3cd176da13624f8\" pid:3361 exited_at:{seconds:1769131236 nanos:57709086}" Jan 23 01:20:36.202952 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-284fe25df5c560bc0d54c17bf42d447ceaeaef9a9ee994f6c3cd176da13624f8-rootfs.mount: Deactivated successfully. 
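
The mount-bpf-fs init container above ensures a bpf filesystem is mounted at /sys/fs/bpf. A read-only, Linux-only sketch that checks /proc/mounts for it, the same precondition that container verifies before mounting:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("/proc/mounts")
	if err != nil {
		fmt.Println("cannot read mount table:", err)
		return
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	for sc.Scan() {
		// Fields: device mountpoint fstype options dump pass
		fields := strings.Fields(sc.Text())
		if len(fields) >= 3 && fields[1] == "/sys/fs/bpf" && fields[2] == "bpf" {
			fmt.Println("bpf filesystem already mounted at /sys/fs/bpf")
			return
		}
	}
	fmt.Println("no bpf mount found; mount-bpf-fs would mount it")
}
```
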
Jan 23 01:20:36.459269 kubelet[2838]: E0123 01:20:36.458938 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:20:36.471340 containerd[1563]: time="2026-01-23T01:20:36.470355733Z" level=info msg="CreateContainer within sandbox \"992c1f1cfee7c39bf594c7b4c73c35c343387a9c650071041ff64eed03b972f1\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 23 01:20:36.521452 containerd[1563]: time="2026-01-23T01:20:36.521336914Z" level=info msg="Container f6defe9657229661bc140ff650d59319de48459d43f6bf3b4623d9588a93e697: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:20:36.564126 containerd[1563]: time="2026-01-23T01:20:36.563915259Z" level=info msg="CreateContainer within sandbox \"992c1f1cfee7c39bf594c7b4c73c35c343387a9c650071041ff64eed03b972f1\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"f6defe9657229661bc140ff650d59319de48459d43f6bf3b4623d9588a93e697\"" Jan 23 01:20:36.571752 containerd[1563]: time="2026-01-23T01:20:36.571307901Z" level=info msg="StartContainer for \"f6defe9657229661bc140ff650d59319de48459d43f6bf3b4623d9588a93e697\"" Jan 23 01:20:36.576122 containerd[1563]: time="2026-01-23T01:20:36.575326953Z" level=info msg="connecting to shim f6defe9657229661bc140ff650d59319de48459d43f6bf3b4623d9588a93e697" address="unix:///run/containerd/s/e80c3854b0fc2fb3f35275155b65f1306397b8889fb17be4a25668f0011f7376" protocol=ttrpc version=3 Jan 23 01:20:36.678165 systemd[1]: Started cri-containerd-f6defe9657229661bc140ff650d59319de48459d43f6bf3b4623d9588a93e697.scope - libcontainer container f6defe9657229661bc140ff650d59319de48459d43f6bf3b4623d9588a93e697. Jan 23 01:20:36.756608 containerd[1563]: time="2026-01-23T01:20:36.754845985Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:20:36.761622 containerd[1563]: time="2026-01-23T01:20:36.761330870Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Jan 23 01:20:36.765595 containerd[1563]: time="2026-01-23T01:20:36.765335597Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:20:36.773646 containerd[1563]: time="2026-01-23T01:20:36.773604090Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 3.96381076s" Jan 23 01:20:36.773866 containerd[1563]: time="2026-01-23T01:20:36.773747348Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jan 23 01:20:36.799601 containerd[1563]: time="2026-01-23T01:20:36.798962322Z" level=info msg="CreateContainer within sandbox 
\"05537b0724f6bf4ff9a4c7ecaeb60a5a347a18578f0929eb9c518623fe493d40\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 23 01:20:36.853189 containerd[1563]: time="2026-01-23T01:20:36.852220947Z" level=info msg="Container d7422c329ca2ca820e7cc1f2619f5ad40463e80dc3c5d4dd163d38ca8648d0e2: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:20:36.877383 containerd[1563]: time="2026-01-23T01:20:36.877204040Z" level=info msg="CreateContainer within sandbox \"05537b0724f6bf4ff9a4c7ecaeb60a5a347a18578f0929eb9c518623fe493d40\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"d7422c329ca2ca820e7cc1f2619f5ad40463e80dc3c5d4dd163d38ca8648d0e2\"" Jan 23 01:20:36.881657 containerd[1563]: time="2026-01-23T01:20:36.880244816Z" level=info msg="StartContainer for \"d7422c329ca2ca820e7cc1f2619f5ad40463e80dc3c5d4dd163d38ca8648d0e2\"" Jan 23 01:20:36.893303 containerd[1563]: time="2026-01-23T01:20:36.892920596Z" level=info msg="connecting to shim d7422c329ca2ca820e7cc1f2619f5ad40463e80dc3c5d4dd163d38ca8648d0e2" address="unix:///run/containerd/s/3278c82c7fe63119d308ba27cd8253841255c2040e5b33932ded57d95079e821" protocol=ttrpc version=3 Jan 23 01:20:36.901732 systemd[1]: cri-containerd-f6defe9657229661bc140ff650d59319de48459d43f6bf3b4623d9588a93e697.scope: Deactivated successfully. Jan 23 01:20:36.911106 containerd[1563]: time="2026-01-23T01:20:36.910894490Z" level=info msg="StartContainer for \"f6defe9657229661bc140ff650d59319de48459d43f6bf3b4623d9588a93e697\" returns successfully" Jan 23 01:20:36.917581 containerd[1563]: time="2026-01-23T01:20:36.917269616Z" level=info msg="received container exit event container_id:\"f6defe9657229661bc140ff650d59319de48459d43f6bf3b4623d9588a93e697\" id:\"f6defe9657229661bc140ff650d59319de48459d43f6bf3b4623d9588a93e697\" pid:3402 exited_at:{seconds:1769131236 nanos:916483044}" Jan 23 01:20:37.002901 systemd[1]: Started cri-containerd-d7422c329ca2ca820e7cc1f2619f5ad40463e80dc3c5d4dd163d38ca8648d0e2.scope - libcontainer container d7422c329ca2ca820e7cc1f2619f5ad40463e80dc3c5d4dd163d38ca8648d0e2. Jan 23 01:20:37.097823 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f6defe9657229661bc140ff650d59319de48459d43f6bf3b4623d9588a93e697-rootfs.mount: Deactivated successfully. Jan 23 01:20:37.215809 containerd[1563]: time="2026-01-23T01:20:37.215681202Z" level=info msg="StartContainer for \"d7422c329ca2ca820e7cc1f2619f5ad40463e80dc3c5d4dd163d38ca8648d0e2\" returns successfully" Jan 23 01:20:37.478218 kubelet[2838]: E0123 01:20:37.477399 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:20:37.511378 kubelet[2838]: E0123 01:20:37.511332 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:20:37.517838 containerd[1563]: time="2026-01-23T01:20:37.517358748Z" level=info msg="CreateContainer within sandbox \"992c1f1cfee7c39bf594c7b4c73c35c343387a9c650071041ff64eed03b972f1\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 23 01:20:37.654479 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1747003788.mount: Deactivated successfully. 
Jan 23 01:20:37.661244 containerd[1563]: time="2026-01-23T01:20:37.655437750Z" level=info msg="Container 560f70fab0e2f33e556fc9c8ecf893fa9a4384dde9018c9aab4fa78e2c96134e: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:20:37.717924 containerd[1563]: time="2026-01-23T01:20:37.717740315Z" level=info msg="CreateContainer within sandbox \"992c1f1cfee7c39bf594c7b4c73c35c343387a9c650071041ff64eed03b972f1\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"560f70fab0e2f33e556fc9c8ecf893fa9a4384dde9018c9aab4fa78e2c96134e\"" Jan 23 01:20:37.729880 kubelet[2838]: I0123 01:20:37.729444 2838 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-h7k4z" podStartSLOduration=3.554680092 podStartE2EDuration="31.729420755s" podCreationTimestamp="2026-01-23 01:20:06 +0000 UTC" firstStartedPulling="2026-01-23 01:20:08.602903267 +0000 UTC m=+7.577654202" lastFinishedPulling="2026-01-23 01:20:36.777643939 +0000 UTC m=+35.752394865" observedRunningTime="2026-01-23 01:20:37.725964554 +0000 UTC m=+36.700715479" watchObservedRunningTime="2026-01-23 01:20:37.729420755 +0000 UTC m=+36.704171690" Jan 23 01:20:37.741657 containerd[1563]: time="2026-01-23T01:20:37.741349631Z" level=info msg="StartContainer for \"560f70fab0e2f33e556fc9c8ecf893fa9a4384dde9018c9aab4fa78e2c96134e\"" Jan 23 01:20:37.756233 containerd[1563]: time="2026-01-23T01:20:37.754866261Z" level=info msg="connecting to shim 560f70fab0e2f33e556fc9c8ecf893fa9a4384dde9018c9aab4fa78e2c96134e" address="unix:///run/containerd/s/e80c3854b0fc2fb3f35275155b65f1306397b8889fb17be4a25668f0011f7376" protocol=ttrpc version=3 Jan 23 01:20:37.966179 systemd[1]: Started cri-containerd-560f70fab0e2f33e556fc9c8ecf893fa9a4384dde9018c9aab4fa78e2c96134e.scope - libcontainer container 560f70fab0e2f33e556fc9c8ecf893fa9a4384dde9018c9aab4fa78e2c96134e. Jan 23 01:20:38.401967 containerd[1563]: time="2026-01-23T01:20:38.401871355Z" level=info msg="StartContainer for \"560f70fab0e2f33e556fc9c8ecf893fa9a4384dde9018c9aab4fa78e2c96134e\" returns successfully" Jan 23 01:20:38.576594 kubelet[2838]: E0123 01:20:38.576229 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:20:39.068126 kubelet[2838]: I0123 01:20:39.063956 2838 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 23 01:20:39.412492 systemd[1]: Created slice kubepods-burstable-podaf7d7f35_7df8_4733_b7d8_eaa6851ed445.slice - libcontainer container kubepods-burstable-podaf7d7f35_7df8_4733_b7d8_eaa6851ed445.slice. 
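
The pod_startup_latency_tracker entries compute podStartE2EDuration as watchObservedRunningTime minus podCreationTimestamp. A sketch reproducing the operator pod's 31.729420755s figure from the two timestamps printed above (the layout string matches the UTC format the kubelet emits):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

	created, err := time.Parse(layout, "2026-01-23 01:20:06 +0000 UTC")
	if err != nil {
		panic(err)
	}
	running, err := time.Parse(layout, "2026-01-23 01:20:37.729420755 +0000 UTC")
	if err != nil {
		panic(err)
	}
	fmt.Println("podStartE2EDuration:", running.Sub(created)) // 31.729420755s
}
```
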
Jan 23 01:20:39.428421 kubelet[2838]: I0123 01:20:39.428268 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/af7d7f35-7df8-4733-b7d8-eaa6851ed445-config-volume\") pod \"coredns-674b8bbfcf-69f6d\" (UID: \"af7d7f35-7df8-4733-b7d8-eaa6851ed445\") " pod="kube-system/coredns-674b8bbfcf-69f6d" Jan 23 01:20:39.428421 kubelet[2838]: I0123 01:20:39.428412 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pv8j\" (UniqueName: \"kubernetes.io/projected/af7d7f35-7df8-4733-b7d8-eaa6851ed445-kube-api-access-2pv8j\") pod \"coredns-674b8bbfcf-69f6d\" (UID: \"af7d7f35-7df8-4733-b7d8-eaa6851ed445\") " pod="kube-system/coredns-674b8bbfcf-69f6d" Jan 23 01:20:39.488942 systemd[1]: Created slice kubepods-burstable-poda2c0dee7_531b_481d_a401_9c81527e542c.slice - libcontainer container kubepods-burstable-poda2c0dee7_531b_481d_a401_9c81527e542c.slice. Jan 23 01:20:39.530497 kubelet[2838]: I0123 01:20:39.529959 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a2c0dee7-531b-481d-a401-9c81527e542c-config-volume\") pod \"coredns-674b8bbfcf-xp2fr\" (UID: \"a2c0dee7-531b-481d-a401-9c81527e542c\") " pod="kube-system/coredns-674b8bbfcf-xp2fr" Jan 23 01:20:39.530497 kubelet[2838]: I0123 01:20:39.530361 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dp5bv\" (UniqueName: \"kubernetes.io/projected/a2c0dee7-531b-481d-a401-9c81527e542c-kube-api-access-dp5bv\") pod \"coredns-674b8bbfcf-xp2fr\" (UID: \"a2c0dee7-531b-481d-a401-9c81527e542c\") " pod="kube-system/coredns-674b8bbfcf-xp2fr" Jan 23 01:20:39.598925 kubelet[2838]: E0123 01:20:39.598493 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:20:39.723927 kubelet[2838]: E0123 01:20:39.720947 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:20:39.729468 containerd[1563]: time="2026-01-23T01:20:39.729241129Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-69f6d,Uid:af7d7f35-7df8-4733-b7d8-eaa6851ed445,Namespace:kube-system,Attempt:0,}" Jan 23 01:20:39.798213 kubelet[2838]: E0123 01:20:39.797738 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:20:39.799434 containerd[1563]: time="2026-01-23T01:20:39.799150063Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-xp2fr,Uid:a2c0dee7-531b-481d-a401-9c81527e542c,Namespace:kube-system,Attempt:0,}" Jan 23 01:20:39.846198 kubelet[2838]: I0123 01:20:39.845409 2838 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-f4z87" podStartSLOduration=9.486341398 podStartE2EDuration="33.845392808s" podCreationTimestamp="2026-01-23 01:20:06 +0000 UTC" firstStartedPulling="2026-01-23 01:20:08.44917832 +0000 UTC m=+7.423929266" lastFinishedPulling="2026-01-23 01:20:32.808229751 +0000 UTC m=+31.782980676" observedRunningTime="2026-01-23 01:20:39.840798834 +0000 UTC m=+38.815549759" watchObservedRunningTime="2026-01-23 01:20:39.845392808 +0000 UTC m=+38.820143724"
Jan 23 01:20:40.606347 kubelet[2838]: E0123 01:20:40.605466 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:20:42.829341 systemd-networkd[1454]: cilium_host: Link UP Jan 23 01:20:42.830613 systemd-networkd[1454]: cilium_net: Link UP Jan 23 01:20:42.835812 systemd-networkd[1454]: cilium_host: Gained carrier Jan 23 01:20:42.837949 systemd-networkd[1454]: cilium_net: Gained carrier Jan 23 01:20:42.854890 systemd-networkd[1454]: cilium_host: Gained IPv6LL Jan 23 01:20:42.869476 systemd-networkd[1454]: cilium_net: Gained IPv6LL Jan 23 01:20:43.370837 systemd-networkd[1454]: cilium_vxlan: Link UP Jan 23 01:20:43.370851 systemd-networkd[1454]: cilium_vxlan: Gained carrier Jan 23 01:20:44.199292 kernel: NET: Registered PF_ALG protocol family Jan 23 01:20:45.007427 systemd-networkd[1454]: cilium_vxlan: Gained IPv6LL Jan 23 01:20:47.820732 systemd-networkd[1454]: lxc_health: Link UP Jan 23 01:20:47.849165 systemd-networkd[1454]: lxc_health: Gained carrier Jan 23 01:20:48.252916 systemd-networkd[1454]: lxcd12b759930ed: Link UP Jan 23 01:20:48.254941 systemd-networkd[1454]: lxc40f891dafcfc: Link UP Jan 23 01:20:48.285932 kernel: eth0: renamed from tmp31531 Jan 23 01:20:48.307610 kernel: eth0: renamed from tmpa7bf8 Jan 23 01:20:48.333256 systemd-networkd[1454]: lxcd12b759930ed: Gained carrier Jan 23 01:20:48.339153 systemd-networkd[1454]: lxc40f891dafcfc: Gained carrier Jan 23 01:20:49.299808 systemd-networkd[1454]: lxc_health: Gained IPv6LL Jan 23 01:20:49.354841 systemd-networkd[1454]: lxc40f891dafcfc: Gained IPv6LL Jan 23 01:20:49.482866 systemd-networkd[1454]: lxcd12b759930ed: Gained IPv6LL Jan 23 01:20:49.617804 kubelet[2838]: E0123 01:20:49.615508 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:20:49.772103 kubelet[2838]: E0123 01:20:49.771231 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:20:50.771141 kubelet[2838]: E0123 01:20:50.770474 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:20:56.282474 sudo[1760]: pam_unix(sudo:session): session closed for user root Jan 23 01:20:56.306157 sshd[1759]: Connection closed by 10.0.0.1 port 32852 Jan 23 01:20:56.313156 sshd-session[1756]: pam_unix(sshd:session): session closed for user core Jan 23 01:20:56.331491 systemd-logind[1549]: Session 7 logged out. Waiting for processes to exit. Jan 23 01:20:56.335263 systemd[1]: sshd@6-10.0.0.71:22-10.0.0.1:32852.service: Deactivated successfully. Jan 23 01:20:56.344928 systemd[1]: session-7.scope: Deactivated successfully. Jan 23 01:20:56.346510 systemd[1]: session-7.scope: Consumed 25.338s CPU time, 224M memory peak. Jan 23 01:20:56.357252 systemd-logind[1549]: Removed session 7.
Jan 23 01:21:03.975359 containerd[1563]: time="2026-01-23T01:21:03.974957342Z" level=info msg="connecting to shim a7bf8292a6f2864b03d322d540c58a35d02030467bf1d68f69a832f5a857004b" address="unix:///run/containerd/s/fac84011f2dcfe1bd12355d70668062cafb8f4d347665049cb767c3f1a2b8283" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:21:04.025329 containerd[1563]: time="2026-01-23T01:21:04.024356286Z" level=info msg="connecting to shim 315316f1417879912ef5fe185b32c6659a70cc7b8681d8980dcaff1fe95ea467" address="unix:///run/containerd/s/882831d10ce2d642521f12d817e4bbe6efc94520018d1a5451b42cf8999ba03a" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:21:04.097283 systemd[1]: Started cri-containerd-a7bf8292a6f2864b03d322d540c58a35d02030467bf1d68f69a832f5a857004b.scope - libcontainer container a7bf8292a6f2864b03d322d540c58a35d02030467bf1d68f69a832f5a857004b. Jan 23 01:21:04.145819 systemd[1]: Started cri-containerd-315316f1417879912ef5fe185b32c6659a70cc7b8681d8980dcaff1fe95ea467.scope - libcontainer container 315316f1417879912ef5fe185b32c6659a70cc7b8681d8980dcaff1fe95ea467. Jan 23 01:21:04.181682 systemd-resolved[1456]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 23 01:21:04.228291 systemd-resolved[1456]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 23 01:21:04.334906 containerd[1563]: time="2026-01-23T01:21:04.334863624Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-69f6d,Uid:af7d7f35-7df8-4733-b7d8-eaa6851ed445,Namespace:kube-system,Attempt:0,} returns sandbox id \"a7bf8292a6f2864b03d322d540c58a35d02030467bf1d68f69a832f5a857004b\"" Jan 23 01:21:04.371371 kubelet[2838]: E0123 01:21:04.370136 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:21:04.388795 kubelet[2838]: E0123 01:21:04.388748 2838 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda2c0dee7_531b_481d_a401_9c81527e542c.slice/cri-containerd-315316f1417879912ef5fe185b32c6659a70cc7b8681d8980dcaff1fe95ea467.scope\": RecentStats: unable to find data in memory cache]" Jan 23 01:21:04.404298 containerd[1563]: time="2026-01-23T01:21:04.403289569Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-xp2fr,Uid:a2c0dee7-531b-481d-a401-9c81527e542c,Namespace:kube-system,Attempt:0,} returns sandbox id \"315316f1417879912ef5fe185b32c6659a70cc7b8681d8980dcaff1fe95ea467\"" Jan 23 01:21:04.408778 kubelet[2838]: E0123 01:21:04.408275 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:21:04.416864 containerd[1563]: time="2026-01-23T01:21:04.416822248Z" level=info msg="CreateContainer within sandbox \"a7bf8292a6f2864b03d322d540c58a35d02030467bf1d68f69a832f5a857004b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 23 01:21:04.426929 containerd[1563]: time="2026-01-23T01:21:04.426754725Z" level=info msg="CreateContainer within sandbox \"315316f1417879912ef5fe185b32c6659a70cc7b8681d8980dcaff1fe95ea467\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 23 01:21:04.495460 containerd[1563]: time="2026-01-23T01:21:04.494863662Z" level=info msg="Container bd7ceb559ad0dffb6583112392a7b247050ab5393c61f2708550ed25c254abd4: CDI devices from CRI Config.CDIDevices: []"
Jan 23 01:21:04.520240 containerd[1563]: time="2026-01-23T01:21:04.518606842Z" level=info msg="Container 2655881ab4cf8e7de3481f390ebe7b851da2c76e6d02452037fba5da08517630: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:21:04.525349 containerd[1563]: time="2026-01-23T01:21:04.524412115Z" level=info msg="CreateContainer within sandbox \"a7bf8292a6f2864b03d322d540c58a35d02030467bf1d68f69a832f5a857004b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"bd7ceb559ad0dffb6583112392a7b247050ab5393c61f2708550ed25c254abd4\"" Jan 23 01:21:04.533366 containerd[1563]: time="2026-01-23T01:21:04.532846445Z" level=info msg="StartContainer for \"bd7ceb559ad0dffb6583112392a7b247050ab5393c61f2708550ed25c254abd4\"" Jan 23 01:21:04.542598 containerd[1563]: time="2026-01-23T01:21:04.542226793Z" level=info msg="connecting to shim bd7ceb559ad0dffb6583112392a7b247050ab5393c61f2708550ed25c254abd4" address="unix:///run/containerd/s/fac84011f2dcfe1bd12355d70668062cafb8f4d347665049cb767c3f1a2b8283" protocol=ttrpc version=3 Jan 23 01:21:04.564656 containerd[1563]: time="2026-01-23T01:21:04.564225315Z" level=info msg="CreateContainer within sandbox \"315316f1417879912ef5fe185b32c6659a70cc7b8681d8980dcaff1fe95ea467\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2655881ab4cf8e7de3481f390ebe7b851da2c76e6d02452037fba5da08517630\"" Jan 23 01:21:04.585893 containerd[1563]: time="2026-01-23T01:21:04.584687917Z" level=info msg="StartContainer for \"2655881ab4cf8e7de3481f390ebe7b851da2c76e6d02452037fba5da08517630\"" Jan 23 01:21:04.610335 systemd[1]: Started cri-containerd-bd7ceb559ad0dffb6583112392a7b247050ab5393c61f2708550ed25c254abd4.scope - libcontainer container bd7ceb559ad0dffb6583112392a7b247050ab5393c61f2708550ed25c254abd4. Jan 23 01:21:04.610965 containerd[1563]: time="2026-01-23T01:21:04.610838278Z" level=info msg="connecting to shim 2655881ab4cf8e7de3481f390ebe7b851da2c76e6d02452037fba5da08517630" address="unix:///run/containerd/s/882831d10ce2d642521f12d817e4bbe6efc94520018d1a5451b42cf8999ba03a" protocol=ttrpc version=3 Jan 23 01:21:04.708762 systemd[1]: Started cri-containerd-2655881ab4cf8e7de3481f390ebe7b851da2c76e6d02452037fba5da08517630.scope - libcontainer container 2655881ab4cf8e7de3481f390ebe7b851da2c76e6d02452037fba5da08517630. Jan 23 01:21:04.843952 containerd[1563]: time="2026-01-23T01:21:04.843878966Z" level=info msg="StartContainer for \"bd7ceb559ad0dffb6583112392a7b247050ab5393c61f2708550ed25c254abd4\" returns successfully" Jan 23 01:21:04.908244 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4040172353.mount: Deactivated successfully.
Jan 23 01:21:04.916226 containerd[1563]: time="2026-01-23T01:21:04.916185806Z" level=info msg="StartContainer for \"2655881ab4cf8e7de3481f390ebe7b851da2c76e6d02452037fba5da08517630\" returns successfully" Jan 23 01:21:10.157927 kubelet[2838]: E0123 01:21:10.149267 2838 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.023s" Jan 23 01:21:10.710862 kubelet[2838]: E0123 01:21:10.709618 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:21:11.413715 kubelet[2838]: E0123 01:21:11.412896 2838 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.103s" Jan 23 01:21:11.722158 kubelet[2838]: E0123 01:21:11.671633 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:21:14.531900 kubelet[2838]: E0123 01:21:14.241866 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:21:21.389620 systemd[1]: cri-containerd-6f4a98c0dbb0e34ac7d2cca1008fa64ce2198d1d542d665a4994300ba5d1615a.scope: Deactivated successfully. Jan 23 01:21:21.391945 systemd[1]: cri-containerd-6f4a98c0dbb0e34ac7d2cca1008fa64ce2198d1d542d665a4994300ba5d1615a.scope: Consumed 13.708s CPU time, 61M memory peak, 8.3M read from disk. Jan 23 01:21:21.766664 containerd[1563]: time="2026-01-23T01:21:21.764735448Z" level=info msg="received container exit event container_id:\"6f4a98c0dbb0e34ac7d2cca1008fa64ce2198d1d542d665a4994300ba5d1615a\" id:\"6f4a98c0dbb0e34ac7d2cca1008fa64ce2198d1d542d665a4994300ba5d1615a\" pid:2680 exit_status:1 exited_at:{seconds:1769131281 nanos:734928828}" Jan 23 01:21:21.769133 kubelet[2838]: E0123 01:21:21.768428 2838 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="7.99s" Jan 23 01:21:22.132467 systemd[1]: cri-containerd-a0a3e7f7d5737d49bc413bf26022500609df33d0b9a4220a7266d81fb0be3f08.scope: Deactivated successfully. Jan 23 01:21:22.201892 systemd[1]: cri-containerd-a0a3e7f7d5737d49bc413bf26022500609df33d0b9a4220a7266d81fb0be3f08.scope: Consumed 8.244s CPU time, 20.3M memory peak, 200K read from disk. 
Jan 23 01:21:22.306946 kubelet[2838]: E0123 01:21:22.301834 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:21:22.352398 containerd[1563]: time="2026-01-23T01:21:22.352231316Z" level=info msg="received container exit event container_id:\"a0a3e7f7d5737d49bc413bf26022500609df33d0b9a4220a7266d81fb0be3f08\" id:\"a0a3e7f7d5737d49bc413bf26022500609df33d0b9a4220a7266d81fb0be3f08\" pid:2689 exit_status:1 exited_at:{seconds:1769131282 nanos:329772847}" Jan 23 01:21:22.366677 kubelet[2838]: E0123 01:21:22.366645 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:21:22.466876 kubelet[2838]: I0123 01:21:22.466152 2838 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-xp2fr" podStartSLOduration=75.465955899 podStartE2EDuration="1m15.465955899s" podCreationTimestamp="2026-01-23 01:20:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 01:21:22.42623762 +0000 UTC m=+81.400988574" watchObservedRunningTime="2026-01-23 01:21:22.465955899 +0000 UTC m=+81.440706824" Jan 23 01:21:22.611180 kubelet[2838]: I0123 01:21:22.609743 2838 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-69f6d" podStartSLOduration=75.609716328 podStartE2EDuration="1m15.609716328s" podCreationTimestamp="2026-01-23 01:20:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 01:21:22.601870081 +0000 UTC m=+81.576621037" watchObservedRunningTime="2026-01-23 01:21:22.609716328 +0000 UTC m=+81.584467263" Jan 23 01:21:22.814624 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6f4a98c0dbb0e34ac7d2cca1008fa64ce2198d1d542d665a4994300ba5d1615a-rootfs.mount: Deactivated successfully. Jan 23 01:21:22.856956 kubelet[2838]: E0123 01:21:22.856825 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:21:22.872199 kubelet[2838]: E0123 01:21:22.870719 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:21:22.877349 kubelet[2838]: E0123 01:21:22.876499 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:21:22.918748 systemd[1]: cri-containerd-d7422c329ca2ca820e7cc1f2619f5ad40463e80dc3c5d4dd163d38ca8648d0e2.scope: Deactivated successfully. Jan 23 01:21:22.919468 systemd[1]: cri-containerd-d7422c329ca2ca820e7cc1f2619f5ad40463e80dc3c5d4dd163d38ca8648d0e2.scope: Consumed 1.807s CPU time, 28.3M memory peak, 4K written to disk. 
Jan 23 01:21:22.954897 containerd[1563]: time="2026-01-23T01:21:22.954741853Z" level=info msg="received container exit event container_id:\"d7422c329ca2ca820e7cc1f2619f5ad40463e80dc3c5d4dd163d38ca8648d0e2\" id:\"d7422c329ca2ca820e7cc1f2619f5ad40463e80dc3c5d4dd163d38ca8648d0e2\" pid:3437 exit_status:1 exited_at:{seconds:1769131282 nanos:948794882}" Jan 23 01:21:23.035584 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a0a3e7f7d5737d49bc413bf26022500609df33d0b9a4220a7266d81fb0be3f08-rootfs.mount: Deactivated successfully. Jan 23 01:21:23.105395 kubelet[2838]: E0123 01:21:23.104935 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:21:23.307238 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d7422c329ca2ca820e7cc1f2619f5ad40463e80dc3c5d4dd163d38ca8648d0e2-rootfs.mount: Deactivated successfully. Jan 23 01:21:23.868378 kubelet[2838]: I0123 01:21:23.867725 2838 scope.go:117] "RemoveContainer" containerID="a0a3e7f7d5737d49bc413bf26022500609df33d0b9a4220a7266d81fb0be3f08" Jan 23 01:21:23.868378 kubelet[2838]: E0123 01:21:23.867936 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:21:23.892926 containerd[1563]: time="2026-01-23T01:21:23.892608710Z" level=info msg="CreateContainer within sandbox \"1779de13f860b776dc0cf629ee9c24a88608b53379ef7aebd271112fa937c8a3\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Jan 23 01:21:23.893379 kubelet[2838]: I0123 01:21:23.893135 2838 scope.go:117] "RemoveContainer" containerID="6f4a98c0dbb0e34ac7d2cca1008fa64ce2198d1d542d665a4994300ba5d1615a" Jan 23 01:21:23.893379 kubelet[2838]: E0123 01:21:23.893234 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:21:23.905926 containerd[1563]: time="2026-01-23T01:21:23.905811986Z" level=info msg="CreateContainer within sandbox \"32526bdebea9ce22d613a01a78909669a4d51055a9cb32bb4418620295b78f34\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Jan 23 01:21:23.914893 kubelet[2838]: E0123 01:21:23.914769 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:21:23.916182 kubelet[2838]: E0123 01:21:23.915365 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:21:23.918827 kubelet[2838]: I0123 01:21:23.918463 2838 scope.go:117] "RemoveContainer" containerID="d7422c329ca2ca820e7cc1f2619f5ad40463e80dc3c5d4dd163d38ca8648d0e2" Jan 23 01:21:23.922685 kubelet[2838]: E0123 01:21:23.921915 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:21:23.935864 containerd[1563]: time="2026-01-23T01:21:23.934519996Z" level=info msg="CreateContainer within sandbox \"05537b0724f6bf4ff9a4c7ecaeb60a5a347a18578f0929eb9c518623fe493d40\" for container &ContainerMetadata{Name:cilium-operator,Attempt:1,}" Jan 23 01:21:23.961388 containerd[1563]: time="2026-01-23T01:21:23.960924842Z" level=info msg="Container 97fa94f8408610e24402cf4db88a4a1b327fe4bf9f0bf0cffee20f10f8874760: CDI devices from CRI Config.CDIDevices: []"
Jan 23 01:21:23.979217 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3094697424.mount: Deactivated successfully. Jan 23 01:21:24.004666 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount45127874.mount: Deactivated successfully. Jan 23 01:21:24.007420 containerd[1563]: time="2026-01-23T01:21:24.005917098Z" level=info msg="Container 7d11634a8371b63b60676d9eb657c1fe6d745105cf49460e57dc067377f366c3: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:21:24.018623 containerd[1563]: time="2026-01-23T01:21:24.018464313Z" level=info msg="Container 926c1ef071143f91319390c4ddd172a0cc39a8e9c0fc6f0b1a318d3ca1f80787: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:21:24.056386 containerd[1563]: time="2026-01-23T01:21:24.053966205Z" level=info msg="CreateContainer within sandbox \"1779de13f860b776dc0cf629ee9c24a88608b53379ef7aebd271112fa937c8a3\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"97fa94f8408610e24402cf4db88a4a1b327fe4bf9f0bf0cffee20f10f8874760\"" Jan 23 01:21:24.067611 containerd[1563]: time="2026-01-23T01:21:24.063221661Z" level=info msg="CreateContainer within sandbox \"05537b0724f6bf4ff9a4c7ecaeb60a5a347a18578f0929eb9c518623fe493d40\" for &ContainerMetadata{Name:cilium-operator,Attempt:1,} returns container id \"7d11634a8371b63b60676d9eb657c1fe6d745105cf49460e57dc067377f366c3\"" Jan 23 01:21:24.069614 containerd[1563]: time="2026-01-23T01:21:24.069554339Z" level=info msg="StartContainer for \"97fa94f8408610e24402cf4db88a4a1b327fe4bf9f0bf0cffee20f10f8874760\"" Jan 23 01:21:24.069839 containerd[1563]: time="2026-01-23T01:21:24.069807226Z" level=info msg="StartContainer for \"7d11634a8371b63b60676d9eb657c1fe6d745105cf49460e57dc067377f366c3\"" Jan 23 01:21:24.093551 containerd[1563]: time="2026-01-23T01:21:24.089204342Z" level=info msg="connecting to shim 97fa94f8408610e24402cf4db88a4a1b327fe4bf9f0bf0cffee20f10f8874760" address="unix:///run/containerd/s/453dcdfbded842d12bb40a8cf6dbba32f7cafb07373a141a08c5ebac3a436a64" protocol=ttrpc version=3 Jan 23 01:21:24.101793 containerd[1563]: time="2026-01-23T01:21:24.101407110Z" level=info msg="connecting to shim 7d11634a8371b63b60676d9eb657c1fe6d745105cf49460e57dc067377f366c3" address="unix:///run/containerd/s/3278c82c7fe63119d308ba27cd8253841255c2040e5b33932ded57d95079e821" protocol=ttrpc version=3 Jan 23 01:21:24.117187 containerd[1563]: time="2026-01-23T01:21:24.115798784Z" level=info msg="CreateContainer within sandbox \"32526bdebea9ce22d613a01a78909669a4d51055a9cb32bb4418620295b78f34\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"926c1ef071143f91319390c4ddd172a0cc39a8e9c0fc6f0b1a318d3ca1f80787\"" Jan 23 01:21:24.120611 containerd[1563]: time="2026-01-23T01:21:24.119853460Z" level=info msg="StartContainer for \"926c1ef071143f91319390c4ddd172a0cc39a8e9c0fc6f0b1a318d3ca1f80787\"" Jan 23 01:21:24.144694 containerd[1563]: time="2026-01-23T01:21:24.144204755Z" level=info msg="connecting to shim 926c1ef071143f91319390c4ddd172a0cc39a8e9c0fc6f0b1a318d3ca1f80787" address="unix:///run/containerd/s/37168291ce97d77872f0ea2ccc334f80407990cb64ea3d28170875b54f7724d6" protocol=ttrpc version=3 Jan 23 01:21:24.224574 systemd[1]: Started cri-containerd-7d11634a8371b63b60676d9eb657c1fe6d745105cf49460e57dc067377f366c3.scope - libcontainer container 7d11634a8371b63b60676d9eb657c1fe6d745105cf49460e57dc067377f366c3.
Jan 23 01:21:24.320172 systemd[1]: Started cri-containerd-926c1ef071143f91319390c4ddd172a0cc39a8e9c0fc6f0b1a318d3ca1f80787.scope - libcontainer container 926c1ef071143f91319390c4ddd172a0cc39a8e9c0fc6f0b1a318d3ca1f80787. Jan 23 01:21:24.348480 systemd[1]: Started cri-containerd-97fa94f8408610e24402cf4db88a4a1b327fe4bf9f0bf0cffee20f10f8874760.scope - libcontainer container 97fa94f8408610e24402cf4db88a4a1b327fe4bf9f0bf0cffee20f10f8874760. Jan 23 01:21:24.542437 containerd[1563]: time="2026-01-23T01:21:24.540453260Z" level=info msg="StartContainer for \"7d11634a8371b63b60676d9eb657c1fe6d745105cf49460e57dc067377f366c3\" returns successfully" Jan 23 01:21:24.660181 containerd[1563]: time="2026-01-23T01:21:24.659926235Z" level=info msg="StartContainer for \"97fa94f8408610e24402cf4db88a4a1b327fe4bf9f0bf0cffee20f10f8874760\" returns successfully" Jan 23 01:21:24.698685 containerd[1563]: time="2026-01-23T01:21:24.698560033Z" level=info msg="StartContainer for \"926c1ef071143f91319390c4ddd172a0cc39a8e9c0fc6f0b1a318d3ca1f80787\" returns successfully" Jan 23 01:21:24.979698 kubelet[2838]: E0123 01:21:24.976687 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:21:25.012538 kubelet[2838]: E0123 01:21:25.011938 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:21:25.035847 kubelet[2838]: E0123 01:21:25.035747 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:21:26.047168 kubelet[2838]: E0123 01:21:26.045465 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:21:27.104861 kubelet[2838]: E0123 01:21:27.104824 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:21:28.508184 kubelet[2838]: E0123 01:21:28.507561 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:21:33.960422 kubelet[2838]: E0123 01:21:33.958684 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:21:38.564488 kubelet[2838]: E0123 01:21:38.563965 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:21:44.005287 kubelet[2838]: E0123 01:21:44.004630 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:21:44.329465 kubelet[2838]: E0123 01:21:44.328869 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 01:21:56.111308 kubelet[2838]: E0123 01:21:56.110622 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:22:26.106614 kubelet[2838]: E0123 01:22:26.106355 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:22:26.114257 kubelet[2838]: E0123 01:22:26.109626 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:22:31.107208 kubelet[2838]: E0123 01:22:31.106299 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:22:32.116134 kubelet[2838]: E0123 01:22:32.115695 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:22:34.106886 kubelet[2838]: E0123 01:22:34.106558 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:22:57.106409 kubelet[2838]: E0123 01:22:57.106211 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:23:04.111171 kubelet[2838]: E0123 01:23:04.110514 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:23:11.108177 kubelet[2838]: E0123 01:23:11.107273 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:23:15.754395 systemd[1]: Started sshd@7-10.0.0.71:22-10.0.0.1:38086.service - OpenSSH per-connection server daemon (10.0.0.1:38086). Jan 23 01:23:16.101551 sshd[4499]: Accepted publickey for core from 10.0.0.1 port 38086 ssh2: RSA SHA256:DbK6l1/8QoFpoD2mnerYulPPe3j8Bedze0KTmxI1z+4 Jan 23 01:23:16.118842 sshd-session[4499]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:23:16.159308 systemd-logind[1549]: New session 8 of user core. Jan 23 01:23:16.195308 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 23 01:23:17.047498 sshd[4502]: Connection closed by 10.0.0.1 port 38086 Jan 23 01:23:17.053468 sshd-session[4499]: pam_unix(sshd:session): session closed for user core Jan 23 01:23:17.072321 systemd[1]: sshd@7-10.0.0.71:22-10.0.0.1:38086.service: Deactivated successfully. Jan 23 01:23:17.083607 systemd[1]: session-8.scope: Deactivated successfully. Jan 23 01:23:17.098917 systemd-logind[1549]: Session 8 logged out. Waiting for processes to exit. Jan 23 01:23:17.105492 systemd-logind[1549]: Removed session 8. Jan 23 01:23:22.128522 systemd[1]: Started sshd@8-10.0.0.71:22-10.0.0.1:38090.service - OpenSSH per-connection server daemon (10.0.0.1:38090).
Jan 23 01:23:22.445251 sshd[4519]: Accepted publickey for core from 10.0.0.1 port 38090 ssh2: RSA SHA256:DbK6l1/8QoFpoD2mnerYulPPe3j8Bedze0KTmxI1z+4 Jan 23 01:23:22.461480 sshd-session[4519]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:23:22.533539 systemd-logind[1549]: New session 9 of user core. Jan 23 01:23:22.561213 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 23 01:23:23.135876 sshd[4522]: Connection closed by 10.0.0.1 port 38090 Jan 23 01:23:23.138510 sshd-session[4519]: pam_unix(sshd:session): session closed for user core Jan 23 01:23:23.160304 systemd[1]: sshd@8-10.0.0.71:22-10.0.0.1:38090.service: Deactivated successfully. Jan 23 01:23:23.169826 systemd[1]: session-9.scope: Deactivated successfully. Jan 23 01:23:23.174325 systemd-logind[1549]: Session 9 logged out. Waiting for processes to exit. Jan 23 01:23:23.184514 systemd-logind[1549]: Removed session 9. Jan 23 01:23:28.168504 systemd[1]: Started sshd@9-10.0.0.71:22-10.0.0.1:52370.service - OpenSSH per-connection server daemon (10.0.0.1:52370). Jan 23 01:23:28.341242 sshd[4539]: Accepted publickey for core from 10.0.0.1 port 52370 ssh2: RSA SHA256:DbK6l1/8QoFpoD2mnerYulPPe3j8Bedze0KTmxI1z+4 Jan 23 01:23:28.346883 sshd-session[4539]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:23:28.369291 systemd-logind[1549]: New session 10 of user core. Jan 23 01:23:28.393739 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 23 01:23:28.816402 sshd[4542]: Connection closed by 10.0.0.1 port 52370 Jan 23 01:23:28.817235 sshd-session[4539]: pam_unix(sshd:session): session closed for user core Jan 23 01:23:28.826772 systemd[1]: sshd@9-10.0.0.71:22-10.0.0.1:52370.service: Deactivated successfully. Jan 23 01:23:28.832555 systemd[1]: session-10.scope: Deactivated successfully. Jan 23 01:23:28.844351 systemd-logind[1549]: Session 10 logged out. Waiting for processes to exit. Jan 23 01:23:28.859270 systemd-logind[1549]: Removed session 10. Jan 23 01:23:33.106262 kubelet[2838]: E0123 01:23:33.105372 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:23:33.900535 systemd[1]: Started sshd@10-10.0.0.71:22-10.0.0.1:48992.service - OpenSSH per-connection server daemon (10.0.0.1:48992). Jan 23 01:23:34.110363 kubelet[2838]: E0123 01:23:34.108467 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:23:34.143712 sshd[4557]: Accepted publickey for core from 10.0.0.1 port 48992 ssh2: RSA SHA256:DbK6l1/8QoFpoD2mnerYulPPe3j8Bedze0KTmxI1z+4 Jan 23 01:23:34.155328 sshd-session[4557]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:23:34.188559 systemd-logind[1549]: New session 11 of user core. Jan 23 01:23:34.223190 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 23 01:23:34.786351 sshd[4560]: Connection closed by 10.0.0.1 port 48992 Jan 23 01:23:34.786921 sshd-session[4557]: pam_unix(sshd:session): session closed for user core Jan 23 01:23:34.801372 systemd[1]: sshd@10-10.0.0.71:22-10.0.0.1:48992.service: Deactivated successfully. Jan 23 01:23:34.806518 systemd[1]: session-11.scope: Deactivated successfully. Jan 23 01:23:34.813477 systemd-logind[1549]: Session 11 logged out. Waiting for processes to exit. 
Jan 23 01:23:34.818230 systemd-logind[1549]: Removed session 11. Jan 23 01:23:39.851498 systemd[1]: Started sshd@11-10.0.0.71:22-10.0.0.1:48994.service - OpenSSH per-connection server daemon (10.0.0.1:48994). Jan 23 01:23:40.167948 sshd[4578]: Accepted publickey for core from 10.0.0.1 port 48994 ssh2: RSA SHA256:DbK6l1/8QoFpoD2mnerYulPPe3j8Bedze0KTmxI1z+4 Jan 23 01:23:40.181451 sshd-session[4578]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:23:40.243859 systemd-logind[1549]: New session 12 of user core. Jan 23 01:23:40.259920 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 23 01:23:41.044927 sshd[4581]: Connection closed by 10.0.0.1 port 48994 Jan 23 01:23:41.046849 sshd-session[4578]: pam_unix(sshd:session): session closed for user core Jan 23 01:23:41.060503 systemd[1]: sshd@11-10.0.0.71:22-10.0.0.1:48994.service: Deactivated successfully. Jan 23 01:23:41.080565 systemd[1]: session-12.scope: Deactivated successfully. Jan 23 01:23:41.114428 systemd-logind[1549]: Session 12 logged out. Waiting for processes to exit. Jan 23 01:23:41.128722 systemd-logind[1549]: Removed session 12. Jan 23 01:23:46.130758 systemd[1]: Started sshd@12-10.0.0.71:22-10.0.0.1:47560.service - OpenSSH per-connection server daemon (10.0.0.1:47560). Jan 23 01:23:46.398539 sshd[4597]: Accepted publickey for core from 10.0.0.1 port 47560 ssh2: RSA SHA256:DbK6l1/8QoFpoD2mnerYulPPe3j8Bedze0KTmxI1z+4 Jan 23 01:23:46.402732 sshd-session[4597]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:23:46.439954 systemd-logind[1549]: New session 13 of user core. Jan 23 01:23:46.481343 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 23 01:23:47.106308 sshd[4600]: Connection closed by 10.0.0.1 port 47560 Jan 23 01:23:47.107278 sshd-session[4597]: pam_unix(sshd:session): session closed for user core Jan 23 01:23:47.129733 systemd[1]: sshd@12-10.0.0.71:22-10.0.0.1:47560.service: Deactivated successfully. Jan 23 01:23:47.144892 systemd[1]: session-13.scope: Deactivated successfully. Jan 23 01:23:47.160958 systemd-logind[1549]: Session 13 logged out. Waiting for processes to exit. Jan 23 01:23:47.181429 systemd-logind[1549]: Removed session 13. Jan 23 01:23:52.112241 kubelet[2838]: E0123 01:23:52.112196 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:23:52.152393 systemd[1]: Started sshd@13-10.0.0.71:22-10.0.0.1:47570.service - OpenSSH per-connection server daemon (10.0.0.1:47570). Jan 23 01:23:52.476395 sshd[4615]: Accepted publickey for core from 10.0.0.1 port 47570 ssh2: RSA SHA256:DbK6l1/8QoFpoD2mnerYulPPe3j8Bedze0KTmxI1z+4 Jan 23 01:23:52.478826 sshd-session[4615]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:23:52.540866 systemd-logind[1549]: New session 14 of user core. Jan 23 01:23:52.551378 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 23 01:23:53.121315 sshd[4618]: Connection closed by 10.0.0.1 port 47570 Jan 23 01:23:53.120244 sshd-session[4615]: pam_unix(sshd:session): session closed for user core Jan 23 01:23:53.142456 systemd-logind[1549]: Session 14 logged out. Waiting for processes to exit. Jan 23 01:23:53.144890 systemd[1]: sshd@13-10.0.0.71:22-10.0.0.1:47570.service: Deactivated successfully. Jan 23 01:23:53.153810 systemd[1]: session-14.scope: Deactivated successfully. 
Jan 23 01:23:53.166541 systemd-logind[1549]: Removed session 14. Jan 23 01:23:55.106968 kubelet[2838]: E0123 01:23:55.106910 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:23:58.164372 systemd[1]: Started sshd@14-10.0.0.71:22-10.0.0.1:53434.service - OpenSSH per-connection server daemon (10.0.0.1:53434). Jan 23 01:23:58.389133 sshd[4633]: Accepted publickey for core from 10.0.0.1 port 53434 ssh2: RSA SHA256:DbK6l1/8QoFpoD2mnerYulPPe3j8Bedze0KTmxI1z+4 Jan 23 01:23:58.397384 sshd-session[4633]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:23:58.428503 systemd-logind[1549]: New session 15 of user core. Jan 23 01:23:58.437528 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 23 01:23:59.016215 sshd[4636]: Connection closed by 10.0.0.1 port 53434 Jan 23 01:23:59.016876 sshd-session[4633]: pam_unix(sshd:session): session closed for user core Jan 23 01:23:59.033909 systemd[1]: sshd@14-10.0.0.71:22-10.0.0.1:53434.service: Deactivated successfully. Jan 23 01:23:59.042689 systemd[1]: session-15.scope: Deactivated successfully. Jan 23 01:23:59.049201 systemd-logind[1549]: Session 15 logged out. Waiting for processes to exit. Jan 23 01:23:59.060362 systemd-logind[1549]: Removed session 15. Jan 23 01:24:03.120376 kubelet[2838]: E0123 01:24:03.117384 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:24:04.117196 systemd[1]: Started sshd@15-10.0.0.71:22-10.0.0.1:42082.service - OpenSSH per-connection server daemon (10.0.0.1:42082). Jan 23 01:24:04.421761 sshd[4655]: Accepted publickey for core from 10.0.0.1 port 42082 ssh2: RSA SHA256:DbK6l1/8QoFpoD2mnerYulPPe3j8Bedze0KTmxI1z+4 Jan 23 01:24:04.435530 sshd-session[4655]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:24:04.495895 systemd-logind[1549]: New session 16 of user core. Jan 23 01:24:04.515368 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 23 01:24:05.008195 sshd[4658]: Connection closed by 10.0.0.1 port 42082 Jan 23 01:24:05.007832 sshd-session[4655]: pam_unix(sshd:session): session closed for user core Jan 23 01:24:05.024270 systemd[1]: sshd@15-10.0.0.71:22-10.0.0.1:42082.service: Deactivated successfully. Jan 23 01:24:05.030212 systemd[1]: session-16.scope: Deactivated successfully. Jan 23 01:24:05.034407 systemd-logind[1549]: Session 16 logged out. Waiting for processes to exit. Jan 23 01:24:05.043179 systemd-logind[1549]: Removed session 16. Jan 23 01:24:10.054815 systemd[1]: Started sshd@16-10.0.0.71:22-10.0.0.1:42086.service - OpenSSH per-connection server daemon (10.0.0.1:42086). Jan 23 01:24:10.419408 sshd[4677]: Accepted publickey for core from 10.0.0.1 port 42086 ssh2: RSA SHA256:DbK6l1/8QoFpoD2mnerYulPPe3j8Bedze0KTmxI1z+4 Jan 23 01:24:10.425890 sshd-session[4677]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:24:10.463495 systemd-logind[1549]: New session 17 of user core. Jan 23 01:24:10.487667 systemd[1]: Started session-17.scope - Session 17 of User core. 
Jan 23 01:24:11.063808 sshd[4680]: Connection closed by 10.0.0.1 port 42086 Jan 23 01:24:11.062698 sshd-session[4677]: pam_unix(sshd:session): session closed for user core Jan 23 01:24:11.073317 systemd[1]: sshd@16-10.0.0.71:22-10.0.0.1:42086.service: Deactivated successfully. Jan 23 01:24:11.079743 systemd[1]: session-17.scope: Deactivated successfully. Jan 23 01:24:11.085242 systemd-logind[1549]: Session 17 logged out. Waiting for processes to exit. Jan 23 01:24:11.091814 systemd-logind[1549]: Removed session 17. Jan 23 01:24:16.101638 systemd[1]: Started sshd@17-10.0.0.71:22-10.0.0.1:40346.service - OpenSSH per-connection server daemon (10.0.0.1:40346). Jan 23 01:24:16.251132 sshd[4694]: Accepted publickey for core from 10.0.0.1 port 40346 ssh2: RSA SHA256:DbK6l1/8QoFpoD2mnerYulPPe3j8Bedze0KTmxI1z+4 Jan 23 01:24:16.253954 sshd-session[4694]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:24:16.279123 systemd-logind[1549]: New session 18 of user core. Jan 23 01:24:16.293896 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 23 01:24:16.591888 sshd[4697]: Connection closed by 10.0.0.1 port 40346 Jan 23 01:24:16.591623 sshd-session[4694]: pam_unix(sshd:session): session closed for user core Jan 23 01:24:16.606800 systemd[1]: sshd@17-10.0.0.71:22-10.0.0.1:40346.service: Deactivated successfully. Jan 23 01:24:16.611157 systemd[1]: session-18.scope: Deactivated successfully. Jan 23 01:24:16.614878 systemd-logind[1549]: Session 18 logged out. Waiting for processes to exit. Jan 23 01:24:16.621230 systemd-logind[1549]: Removed session 18. Jan 23 01:24:18.116273 kubelet[2838]: E0123 01:24:18.109593 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:24:20.116721 kubelet[2838]: E0123 01:24:20.113869 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:24:21.618380 systemd[1]: Started sshd@18-10.0.0.71:22-10.0.0.1:40368.service - OpenSSH per-connection server daemon (10.0.0.1:40368). Jan 23 01:24:21.757835 sshd[4711]: Accepted publickey for core from 10.0.0.1 port 40368 ssh2: RSA SHA256:DbK6l1/8QoFpoD2mnerYulPPe3j8Bedze0KTmxI1z+4 Jan 23 01:24:21.761209 sshd-session[4711]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:24:21.792830 systemd-logind[1549]: New session 19 of user core. Jan 23 01:24:21.815321 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 23 01:24:22.225546 sshd[4714]: Connection closed by 10.0.0.1 port 40368 Jan 23 01:24:22.226253 sshd-session[4711]: pam_unix(sshd:session): session closed for user core Jan 23 01:24:22.235226 systemd[1]: sshd@18-10.0.0.71:22-10.0.0.1:40368.service: Deactivated successfully. Jan 23 01:24:22.240752 systemd[1]: session-19.scope: Deactivated successfully. Jan 23 01:24:22.246231 systemd-logind[1549]: Session 19 logged out. Waiting for processes to exit. Jan 23 01:24:22.261338 systemd-logind[1549]: Removed session 19. Jan 23 01:24:27.261221 systemd[1]: Started sshd@19-10.0.0.71:22-10.0.0.1:48382.service - OpenSSH per-connection server daemon (10.0.0.1:48382). 
Jan 23 01:24:27.411884 sshd[4730]: Accepted publickey for core from 10.0.0.1 port 48382 ssh2: RSA SHA256:DbK6l1/8QoFpoD2mnerYulPPe3j8Bedze0KTmxI1z+4 Jan 23 01:24:27.413496 sshd-session[4730]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:24:27.442798 systemd-logind[1549]: New session 20 of user core. Jan 23 01:24:27.457227 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 23 01:24:27.729303 sshd[4733]: Connection closed by 10.0.0.1 port 48382 Jan 23 01:24:27.730374 sshd-session[4730]: pam_unix(sshd:session): session closed for user core Jan 23 01:24:27.743208 systemd[1]: sshd@19-10.0.0.71:22-10.0.0.1:48382.service: Deactivated successfully. Jan 23 01:24:27.750126 systemd[1]: session-20.scope: Deactivated successfully. Jan 23 01:24:27.756909 systemd-logind[1549]: Session 20 logged out. Waiting for processes to exit. Jan 23 01:24:27.762280 systemd-logind[1549]: Removed session 20. Jan 23 01:24:31.453762 containerd[1563]: time="2026-01-23T01:24:31.419141842Z" level=warning msg="container event discarded" container=c92e4ff4c272d3b71f6fa92eaf52d02907a1c5f0fdfab40546e08635c7339aa7 type=CONTAINER_CREATED_EVENT Jan 23 01:24:31.453762 containerd[1563]: time="2026-01-23T01:24:31.452954716Z" level=warning msg="container event discarded" container=c92e4ff4c272d3b71f6fa92eaf52d02907a1c5f0fdfab40546e08635c7339aa7 type=CONTAINER_STARTED_EVENT Jan 23 01:24:31.537710 containerd[1563]: time="2026-01-23T01:24:31.537542662Z" level=warning msg="container event discarded" container=1779de13f860b776dc0cf629ee9c24a88608b53379ef7aebd271112fa937c8a3 type=CONTAINER_CREATED_EVENT Jan 23 01:24:31.537710 containerd[1563]: time="2026-01-23T01:24:31.537669557Z" level=warning msg="container event discarded" container=1779de13f860b776dc0cf629ee9c24a88608b53379ef7aebd271112fa937c8a3 type=CONTAINER_STARTED_EVENT Jan 23 01:24:31.555301 containerd[1563]: time="2026-01-23T01:24:31.555223862Z" level=warning msg="container event discarded" container=32526bdebea9ce22d613a01a78909669a4d51055a9cb32bb4418620295b78f34 type=CONTAINER_CREATED_EVENT Jan 23 01:24:31.555653 containerd[1563]: time="2026-01-23T01:24:31.555604442Z" level=warning msg="container event discarded" container=32526bdebea9ce22d613a01a78909669a4d51055a9cb32bb4418620295b78f34 type=CONTAINER_STARTED_EVENT Jan 23 01:24:31.676444 containerd[1563]: time="2026-01-23T01:24:31.676165705Z" level=warning msg="container event discarded" container=575f03fe524bcc6c1fe2664c472a35605694ae592ab51b98a3cf3cdbdf5c3393 type=CONTAINER_CREATED_EVENT Jan 23 01:24:31.726716 containerd[1563]: time="2026-01-23T01:24:31.726323391Z" level=warning msg="container event discarded" container=a0a3e7f7d5737d49bc413bf26022500609df33d0b9a4220a7266d81fb0be3f08 type=CONTAINER_CREATED_EVENT Jan 23 01:24:31.794578 containerd[1563]: time="2026-01-23T01:24:31.794491901Z" level=warning msg="container event discarded" container=6f4a98c0dbb0e34ac7d2cca1008fa64ce2198d1d542d665a4994300ba5d1615a type=CONTAINER_CREATED_EVENT Jan 23 01:24:32.159573 containerd[1563]: time="2026-01-23T01:24:32.156885990Z" level=warning msg="container event discarded" container=575f03fe524bcc6c1fe2664c472a35605694ae592ab51b98a3cf3cdbdf5c3393 type=CONTAINER_STARTED_EVENT Jan 23 01:24:32.211852 containerd[1563]: time="2026-01-23T01:24:32.210962990Z" level=warning msg="container event discarded" container=6f4a98c0dbb0e34ac7d2cca1008fa64ce2198d1d542d665a4994300ba5d1615a type=CONTAINER_STARTED_EVENT Jan 23 01:24:32.320880 containerd[1563]: time="2026-01-23T01:24:32.320263972Z" level=warning msg="container event discarded" container=a0a3e7f7d5737d49bc413bf26022500609df33d0b9a4220a7266d81fb0be3f08 type=CONTAINER_STARTED_EVENT
Jan 23 01:24:32.765126 systemd[1]: Started sshd@20-10.0.0.71:22-10.0.0.1:58554.service - OpenSSH per-connection server daemon (10.0.0.1:58554). Jan 23 01:24:32.926108 sshd[4748]: Accepted publickey for core from 10.0.0.1 port 58554 ssh2: RSA SHA256:DbK6l1/8QoFpoD2mnerYulPPe3j8Bedze0KTmxI1z+4 Jan 23 01:24:32.930621 sshd-session[4748]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:24:32.956168 systemd-logind[1549]: New session 21 of user core. Jan 23 01:24:32.984542 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 23 01:24:33.405570 sshd[4751]: Connection closed by 10.0.0.1 port 58554 Jan 23 01:24:33.406951 sshd-session[4748]: pam_unix(sshd:session): session closed for user core Jan 23 01:24:33.427458 systemd[1]: sshd@20-10.0.0.71:22-10.0.0.1:58554.service: Deactivated successfully. Jan 23 01:24:33.428347 systemd-logind[1549]: Session 21 logged out. Waiting for processes to exit. Jan 23 01:24:33.439249 systemd[1]: session-21.scope: Deactivated successfully. Jan 23 01:24:33.453816 systemd-logind[1549]: Removed session 21. Jan 23 01:24:38.423722 systemd[1]: Started sshd@21-10.0.0.71:22-10.0.0.1:58578.service - OpenSSH per-connection server daemon (10.0.0.1:58578). Jan 23 01:24:38.560254 sshd[4766]: Accepted publickey for core from 10.0.0.1 port 58578 ssh2: RSA SHA256:DbK6l1/8QoFpoD2mnerYulPPe3j8Bedze0KTmxI1z+4 Jan 23 01:24:38.564940 sshd-session[4766]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:24:38.582741 systemd-logind[1549]: New session 22 of user core. Jan 23 01:24:38.611715 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 23 01:24:39.046694 sshd[4771]: Connection closed by 10.0.0.1 port 58578 Jan 23 01:24:39.050240 sshd-session[4766]: pam_unix(sshd:session): session closed for user core Jan 23 01:24:39.107086 kubelet[2838]: E0123 01:24:39.105609 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:24:39.109776 systemd[1]: sshd@21-10.0.0.71:22-10.0.0.1:58578.service: Deactivated successfully. Jan 23 01:24:39.120147 systemd[1]: session-22.scope: Deactivated successfully. Jan 23 01:24:39.131825 systemd-logind[1549]: Session 22 logged out. Waiting for processes to exit. Jan 23 01:24:39.153582 systemd[1]: Started sshd@22-10.0.0.71:22-10.0.0.1:58592.service - OpenSSH per-connection server daemon (10.0.0.1:58592). Jan 23 01:24:39.159341 systemd-logind[1549]: Removed session 22. Jan 23 01:24:39.335774 sshd[4786]: Accepted publickey for core from 10.0.0.1 port 58592 ssh2: RSA SHA256:DbK6l1/8QoFpoD2mnerYulPPe3j8Bedze0KTmxI1z+4 Jan 23 01:24:39.338894 sshd-session[4786]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:24:39.361908 systemd-logind[1549]: New session 23 of user core. Jan 23 01:24:39.374220 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 23 01:24:39.918614 sshd[4789]: Connection closed by 10.0.0.1 port 58592 Jan 23 01:24:39.915619 sshd-session[4786]: pam_unix(sshd:session): session closed for user core Jan 23 01:24:39.946886 systemd[1]: sshd@22-10.0.0.71:22-10.0.0.1:58592.service: Deactivated successfully. Jan 23 01:24:39.952772 systemd[1]: session-23.scope: Deactivated successfully.
Jan 23 01:24:39.957146 systemd-logind[1549]: Session 23 logged out. Waiting for processes to exit. Jan 23 01:24:39.969314 systemd[1]: Started sshd@23-10.0.0.71:22-10.0.0.1:58594.service - OpenSSH per-connection server daemon (10.0.0.1:58594). Jan 23 01:24:39.977607 systemd-logind[1549]: Removed session 23. Jan 23 01:24:40.109958 kubelet[2838]: E0123 01:24:40.109760 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:24:40.154549 sshd[4800]: Accepted publickey for core from 10.0.0.1 port 58594 ssh2: RSA SHA256:DbK6l1/8QoFpoD2mnerYulPPe3j8Bedze0KTmxI1z+4 Jan 23 01:24:40.158936 sshd-session[4800]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:24:40.206189 systemd-logind[1549]: New session 24 of user core. Jan 23 01:24:40.217231 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 23 01:24:40.580955 sshd[4803]: Connection closed by 10.0.0.1 port 58594 Jan 23 01:24:40.579195 sshd-session[4800]: pam_unix(sshd:session): session closed for user core Jan 23 01:24:40.602885 systemd[1]: sshd@23-10.0.0.71:22-10.0.0.1:58594.service: Deactivated successfully. Jan 23 01:24:40.606182 systemd[1]: session-24.scope: Deactivated successfully. Jan 23 01:24:40.608718 systemd-logind[1549]: Session 24 logged out. Waiting for processes to exit. Jan 23 01:24:40.613594 systemd-logind[1549]: Removed session 24. Jan 23 01:24:45.631190 systemd[1]: Started sshd@24-10.0.0.71:22-10.0.0.1:41024.service - OpenSSH per-connection server daemon (10.0.0.1:41024). Jan 23 01:24:45.813526 sshd[4818]: Accepted publickey for core from 10.0.0.1 port 41024 ssh2: RSA SHA256:DbK6l1/8QoFpoD2mnerYulPPe3j8Bedze0KTmxI1z+4 Jan 23 01:24:45.820463 sshd-session[4818]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:24:45.859155 systemd-logind[1549]: New session 25 of user core. Jan 23 01:24:45.910124 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 23 01:24:46.427742 sshd[4821]: Connection closed by 10.0.0.1 port 41024 Jan 23 01:24:46.428801 sshd-session[4818]: pam_unix(sshd:session): session closed for user core Jan 23 01:24:46.448435 systemd[1]: sshd@24-10.0.0.71:22-10.0.0.1:41024.service: Deactivated successfully. Jan 23 01:24:46.465283 systemd[1]: session-25.scope: Deactivated successfully. Jan 23 01:24:46.471643 systemd-logind[1549]: Session 25 logged out. Waiting for processes to exit. Jan 23 01:24:46.478547 systemd-logind[1549]: Removed session 25. Jan 23 01:24:51.475916 systemd[1]: Started sshd@25-10.0.0.71:22-10.0.0.1:41054.service - OpenSSH per-connection server daemon (10.0.0.1:41054). Jan 23 01:24:51.660210 sshd[4834]: Accepted publickey for core from 10.0.0.1 port 41054 ssh2: RSA SHA256:DbK6l1/8QoFpoD2mnerYulPPe3j8Bedze0KTmxI1z+4 Jan 23 01:24:51.664518 sshd-session[4834]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:24:51.709779 systemd-logind[1549]: New session 26 of user core. Jan 23 01:24:51.734860 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 23 01:24:52.219621 sshd[4837]: Connection closed by 10.0.0.1 port 41054 Jan 23 01:24:52.223739 sshd-session[4834]: pam_unix(sshd:session): session closed for user core Jan 23 01:24:52.241826 systemd[1]: sshd@25-10.0.0.71:22-10.0.0.1:41054.service: Deactivated successfully. Jan 23 01:24:52.252765 systemd[1]: session-26.scope: Deactivated successfully. 
Jan 23 01:24:52.257765 systemd-logind[1549]: Session 26 logged out. Waiting for processes to exit. Jan 23 01:24:52.265160 systemd-logind[1549]: Removed session 26. Jan 23 01:24:57.109764 kubelet[2838]: E0123 01:24:57.107276 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:24:57.269203 systemd[1]: Started sshd@26-10.0.0.71:22-10.0.0.1:60556.service - OpenSSH per-connection server daemon (10.0.0.1:60556). Jan 23 01:24:57.497135 sshd[4851]: Accepted publickey for core from 10.0.0.1 port 60556 ssh2: RSA SHA256:DbK6l1/8QoFpoD2mnerYulPPe3j8Bedze0KTmxI1z+4 Jan 23 01:24:57.501515 sshd-session[4851]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:24:57.536568 systemd-logind[1549]: New session 27 of user core. Jan 23 01:24:57.546680 systemd[1]: Started session-27.scope - Session 27 of User core. Jan 23 01:24:57.964548 sshd[4854]: Connection closed by 10.0.0.1 port 60556 Jan 23 01:24:57.969568 sshd-session[4851]: pam_unix(sshd:session): session closed for user core Jan 23 01:24:57.998254 systemd[1]: sshd@26-10.0.0.71:22-10.0.0.1:60556.service: Deactivated successfully. Jan 23 01:24:58.002901 systemd[1]: session-27.scope: Deactivated successfully. Jan 23 01:24:58.010936 systemd-logind[1549]: Session 27 logged out. Waiting for processes to exit. Jan 23 01:24:58.018437 systemd-logind[1549]: Removed session 27. Jan 23 01:25:03.014926 systemd[1]: Started sshd@27-10.0.0.71:22-10.0.0.1:54146.service - OpenSSH per-connection server daemon (10.0.0.1:54146). Jan 23 01:25:03.169693 sshd[4869]: Accepted publickey for core from 10.0.0.1 port 54146 ssh2: RSA SHA256:DbK6l1/8QoFpoD2mnerYulPPe3j8Bedze0KTmxI1z+4 Jan 23 01:25:03.176473 sshd-session[4869]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:25:03.217884 systemd-logind[1549]: New session 28 of user core. Jan 23 01:25:03.235653 systemd[1]: Started session-28.scope - Session 28 of User core. Jan 23 01:25:03.649880 sshd[4872]: Connection closed by 10.0.0.1 port 54146 Jan 23 01:25:03.650616 sshd-session[4869]: pam_unix(sshd:session): session closed for user core Jan 23 01:25:03.663478 systemd[1]: sshd@27-10.0.0.71:22-10.0.0.1:54146.service: Deactivated successfully. Jan 23 01:25:03.670570 systemd[1]: session-28.scope: Deactivated successfully. Jan 23 01:25:03.682250 systemd-logind[1549]: Session 28 logged out. Waiting for processes to exit. Jan 23 01:25:03.699616 systemd-logind[1549]: Removed session 28. 
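[Note on the recurring kubelet warning above] The repeated `dns.go:153 "Nameserver limits exceeded"` entries come from kubelet checking the node's resolv.conf: the glibc resolver honors at most three `nameserver` entries, so kubelet keeps the first three and warns that the rest were omitted; the log itself shows the applied set (`1.1.1.1 1.0.0.1 8.8.8.8`), implying a fourth-or-later entry was dropped. Below is a minimal sketch of that kind of check; the function and constant names are hypothetical, not kubelet's actual identifiers.

```go
// Sketch of a nameserver-limit check like the one kubelet's dns.go performs.
// Only the 3-entry limit and the warning behavior mirror the log above; the
// names applyNameserverLimit and maxNameservers are invented for illustration.
package main

import (
	"bufio"
	"fmt"
	"log"
	"strings"
)

const maxNameservers = 3 // the glibc resolver uses at most 3 nameservers

func applyNameserverLimit(resolvConf string) []string {
	var servers []string
	sc := bufio.NewScanner(strings.NewReader(resolvConf))
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		kept := servers[:maxNameservers]
		log.Printf("Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: %s",
			strings.Join(kept, " "))
		return kept
	}
	return servers
}

func main() {
	conf := "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 9.9.9.9\n"
	fmt.Println(applyNameserverLimit(conf)) // [1.1.1.1 1.0.0.1 8.8.8.8]
}
```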
Jan 23 01:25:06.262094 containerd[1563]: time="2026-01-23T01:25:06.261730894Z" level=warning msg="container event discarded" container=ea917658ea1f81d56ef20c96e780532ac27807cc525c33c2b352cfd5bf3cf7ee type=CONTAINER_CREATED_EVENT Jan 23 01:25:06.262094 containerd[1563]: time="2026-01-23T01:25:06.261815893Z" level=warning msg="container event discarded" container=ea917658ea1f81d56ef20c96e780532ac27807cc525c33c2b352cfd5bf3cf7ee type=CONTAINER_STARTED_EVENT Jan 23 01:25:06.465268 containerd[1563]: time="2026-01-23T01:25:06.464887663Z" level=warning msg="container event discarded" container=c7c70f45a0e820d78358ac9ff4588344ed8afb8ecb1168061e0ae2247dbdff71 type=CONTAINER_CREATED_EVENT Jan 23 01:25:07.105145 kubelet[2838]: E0123 01:25:07.104924 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:25:07.394587 containerd[1563]: time="2026-01-23T01:25:07.393738924Z" level=warning msg="container event discarded" container=c7c70f45a0e820d78358ac9ff4588344ed8afb8ecb1168061e0ae2247dbdff71 type=CONTAINER_STARTED_EVENT Jan 23 01:25:08.114266 kubelet[2838]: E0123 01:25:08.112396 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:25:08.415851 containerd[1563]: time="2026-01-23T01:25:08.414636434Z" level=warning msg="container event discarded" container=992c1f1cfee7c39bf594c7b4c73c35c343387a9c650071041ff64eed03b972f1 type=CONTAINER_CREATED_EVENT Jan 23 01:25:08.415851 containerd[1563]: time="2026-01-23T01:25:08.414777626Z" level=warning msg="container event discarded" container=992c1f1cfee7c39bf594c7b4c73c35c343387a9c650071041ff64eed03b972f1 type=CONTAINER_STARTED_EVENT Jan 23 01:25:08.563548 containerd[1563]: time="2026-01-23T01:25:08.563164380Z" level=warning msg="container event discarded" container=05537b0724f6bf4ff9a4c7ecaeb60a5a347a18578f0929eb9c518623fe493d40 type=CONTAINER_CREATED_EVENT Jan 23 01:25:08.563548 containerd[1563]: time="2026-01-23T01:25:08.563227447Z" level=warning msg="container event discarded" container=05537b0724f6bf4ff9a4c7ecaeb60a5a347a18578f0929eb9c518623fe493d40 type=CONTAINER_STARTED_EVENT Jan 23 01:25:08.683668 systemd[1]: Started sshd@28-10.0.0.71:22-10.0.0.1:54166.service - OpenSSH per-connection server daemon (10.0.0.1:54166). Jan 23 01:25:08.895438 sshd[4889]: Accepted publickey for core from 10.0.0.1 port 54166 ssh2: RSA SHA256:DbK6l1/8QoFpoD2mnerYulPPe3j8Bedze0KTmxI1z+4 Jan 23 01:25:08.899788 sshd-session[4889]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:25:08.934255 systemd-logind[1549]: New session 29 of user core. Jan 23 01:25:08.959263 systemd[1]: Started session-29.scope - Session 29 of User core. Jan 23 01:25:09.364880 sshd[4892]: Connection closed by 10.0.0.1 port 54166 Jan 23 01:25:09.367537 sshd-session[4889]: pam_unix(sshd:session): session closed for user core Jan 23 01:25:09.380839 systemd[1]: sshd@28-10.0.0.71:22-10.0.0.1:54166.service: Deactivated successfully. Jan 23 01:25:09.388880 systemd[1]: session-29.scope: Deactivated successfully. Jan 23 01:25:09.398772 systemd-logind[1549]: Session 29 logged out. Waiting for processes to exit. Jan 23 01:25:09.403547 systemd-logind[1549]: Removed session 29. Jan 23 01:25:14.441450 systemd[1]: Started sshd@29-10.0.0.71:22-10.0.0.1:49428.service - OpenSSH per-connection server daemon (10.0.0.1:49428). 
Jan 23 01:25:14.727624 sshd[4905]: Accepted publickey for core from 10.0.0.1 port 49428 ssh2: RSA SHA256:DbK6l1/8QoFpoD2mnerYulPPe3j8Bedze0KTmxI1z+4 Jan 23 01:25:14.737597 sshd-session[4905]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:25:14.804215 systemd-logind[1549]: New session 30 of user core. Jan 23 01:25:14.815613 systemd[1]: Started session-30.scope - Session 30 of User core. Jan 23 01:25:15.480581 sshd[4908]: Connection closed by 10.0.0.1 port 49428 Jan 23 01:25:15.481843 sshd-session[4905]: pam_unix(sshd:session): session closed for user core Jan 23 01:25:15.529938 systemd-logind[1549]: Session 30 logged out. Waiting for processes to exit. Jan 23 01:25:15.531708 systemd[1]: sshd@29-10.0.0.71:22-10.0.0.1:49428.service: Deactivated successfully. Jan 23 01:25:15.548578 systemd[1]: session-30.scope: Deactivated successfully. Jan 23 01:25:15.588632 systemd-logind[1549]: Removed session 30. Jan 23 01:25:20.548671 systemd[1]: Started sshd@30-10.0.0.71:22-10.0.0.1:49452.service - OpenSSH per-connection server daemon (10.0.0.1:49452). Jan 23 01:25:20.811952 sshd[4922]: Accepted publickey for core from 10.0.0.1 port 49452 ssh2: RSA SHA256:DbK6l1/8QoFpoD2mnerYulPPe3j8Bedze0KTmxI1z+4 Jan 23 01:25:20.821629 sshd-session[4922]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:25:20.844589 systemd-logind[1549]: New session 31 of user core. Jan 23 01:25:20.859622 systemd[1]: Started session-31.scope - Session 31 of User core. Jan 23 01:25:21.306490 sshd[4925]: Connection closed by 10.0.0.1 port 49452 Jan 23 01:25:21.307800 sshd-session[4922]: pam_unix(sshd:session): session closed for user core Jan 23 01:25:21.316438 systemd[1]: sshd@30-10.0.0.71:22-10.0.0.1:49452.service: Deactivated successfully. Jan 23 01:25:21.337963 systemd[1]: session-31.scope: Deactivated successfully. Jan 23 01:25:21.344707 systemd-logind[1549]: Session 31 logged out. Waiting for processes to exit. Jan 23 01:25:21.354497 systemd-logind[1549]: Removed session 31. Jan 23 01:25:22.109844 kubelet[2838]: E0123 01:25:22.109142 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:25:26.339724 systemd[1]: Started sshd@31-10.0.0.71:22-10.0.0.1:47040.service - OpenSSH per-connection server daemon (10.0.0.1:47040). Jan 23 01:25:26.477484 sshd[4938]: Accepted publickey for core from 10.0.0.1 port 47040 ssh2: RSA SHA256:DbK6l1/8QoFpoD2mnerYulPPe3j8Bedze0KTmxI1z+4 Jan 23 01:25:26.480586 sshd-session[4938]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:25:26.507892 systemd-logind[1549]: New session 32 of user core. Jan 23 01:25:26.521402 systemd[1]: Started session-32.scope - Session 32 of User core. Jan 23 01:25:26.919777 sshd[4941]: Connection closed by 10.0.0.1 port 47040 Jan 23 01:25:26.920597 sshd-session[4938]: pam_unix(sshd:session): session closed for user core Jan 23 01:25:26.934474 systemd[1]: sshd@31-10.0.0.71:22-10.0.0.1:47040.service: Deactivated successfully. Jan 23 01:25:26.939537 systemd[1]: session-32.scope: Deactivated successfully. Jan 23 01:25:26.943815 systemd-logind[1549]: Session 32 logged out. Waiting for processes to exit. Jan 23 01:25:26.951558 systemd-logind[1549]: Removed session 32. Jan 23 01:25:31.958595 systemd[1]: Started sshd@32-10.0.0.71:22-10.0.0.1:47064.service - OpenSSH per-connection server daemon (10.0.0.1:47064). 
Jan 23 01:25:32.145913 sshd[4955]: Accepted publickey for core from 10.0.0.1 port 47064 ssh2: RSA SHA256:DbK6l1/8QoFpoD2mnerYulPPe3j8Bedze0KTmxI1z+4 Jan 23 01:25:32.150850 sshd-session[4955]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:25:32.175584 systemd-logind[1549]: New session 33 of user core. Jan 23 01:25:32.191215 systemd[1]: Started session-33.scope - Session 33 of User core. Jan 23 01:25:32.558790 sshd[4958]: Connection closed by 10.0.0.1 port 47064 Jan 23 01:25:32.559472 sshd-session[4955]: pam_unix(sshd:session): session closed for user core Jan 23 01:25:32.576519 systemd[1]: sshd@32-10.0.0.71:22-10.0.0.1:47064.service: Deactivated successfully. Jan 23 01:25:32.583780 systemd[1]: session-33.scope: Deactivated successfully. Jan 23 01:25:32.602955 systemd-logind[1549]: Session 33 logged out. Waiting for processes to exit. Jan 23 01:25:32.610431 systemd-logind[1549]: Removed session 33. Jan 23 01:25:32.915481 containerd[1563]: time="2026-01-23T01:25:32.914948411Z" level=warning msg="container event discarded" container=f481937c2e6e1199f746450b066b5bd882d8d61986188623d635425f649be330 type=CONTAINER_CREATED_EVENT Jan 23 01:25:33.136923 containerd[1563]: time="2026-01-23T01:25:33.136226548Z" level=warning msg="container event discarded" container=f481937c2e6e1199f746450b066b5bd882d8d61986188623d635425f649be330 type=CONTAINER_STARTED_EVENT Jan 23 01:25:33.508837 containerd[1563]: time="2026-01-23T01:25:33.508744904Z" level=warning msg="container event discarded" container=f481937c2e6e1199f746450b066b5bd882d8d61986188623d635425f649be330 type=CONTAINER_STOPPED_EVENT Jan 23 01:25:34.548641 containerd[1563]: time="2026-01-23T01:25:34.548465063Z" level=warning msg="container event discarded" container=c48488cae241ca2f96441795f8907a48275d46ecffb9ce6e5594e7cff4f5e3f0 type=CONTAINER_CREATED_EVENT Jan 23 01:25:34.848378 containerd[1563]: time="2026-01-23T01:25:34.847863469Z" level=warning msg="container event discarded" container=c48488cae241ca2f96441795f8907a48275d46ecffb9ce6e5594e7cff4f5e3f0 type=CONTAINER_STARTED_EVENT Jan 23 01:25:35.107426 kubelet[2838]: E0123 01:25:35.106916 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:25:35.383721 containerd[1563]: time="2026-01-23T01:25:35.382851966Z" level=warning msg="container event discarded" container=c48488cae241ca2f96441795f8907a48275d46ecffb9ce6e5594e7cff4f5e3f0 type=CONTAINER_STOPPED_EVENT Jan 23 01:25:35.682798 containerd[1563]: time="2026-01-23T01:25:35.681921682Z" level=warning msg="container event discarded" container=284fe25df5c560bc0d54c17bf42d447ceaeaef9a9ee994f6c3cd176da13624f8 type=CONTAINER_CREATED_EVENT Jan 23 01:25:36.056920 containerd[1563]: time="2026-01-23T01:25:36.056841393Z" level=warning msg="container event discarded" container=284fe25df5c560bc0d54c17bf42d447ceaeaef9a9ee994f6c3cd176da13624f8 type=CONTAINER_STARTED_EVENT Jan 23 01:25:36.264467 containerd[1563]: time="2026-01-23T01:25:36.264384241Z" level=warning msg="container event discarded" container=284fe25df5c560bc0d54c17bf42d447ceaeaef9a9ee994f6c3cd176da13624f8 type=CONTAINER_STOPPED_EVENT Jan 23 01:25:36.574566 containerd[1563]: time="2026-01-23T01:25:36.574397905Z" level=warning msg="container event discarded" container=f6defe9657229661bc140ff650d59319de48459d43f6bf3b4623d9588a93e697 type=CONTAINER_CREATED_EVENT Jan 23 01:25:36.885381 containerd[1563]: 
time="2026-01-23T01:25:36.884596709Z" level=warning msg="container event discarded" container=d7422c329ca2ca820e7cc1f2619f5ad40463e80dc3c5d4dd163d38ca8648d0e2 type=CONTAINER_CREATED_EVENT Jan 23 01:25:36.921478 containerd[1563]: time="2026-01-23T01:25:36.921391198Z" level=warning msg="container event discarded" container=f6defe9657229661bc140ff650d59319de48459d43f6bf3b4623d9588a93e697 type=CONTAINER_STARTED_EVENT Jan 23 01:25:37.199795 containerd[1563]: time="2026-01-23T01:25:37.197775665Z" level=warning msg="container event discarded" container=f6defe9657229661bc140ff650d59319de48459d43f6bf3b4623d9588a93e697 type=CONTAINER_STOPPED_EVENT Jan 23 01:25:37.224205 containerd[1563]: time="2026-01-23T01:25:37.223236094Z" level=warning msg="container event discarded" container=d7422c329ca2ca820e7cc1f2619f5ad40463e80dc3c5d4dd163d38ca8648d0e2 type=CONTAINER_STARTED_EVENT Jan 23 01:25:37.595861 systemd[1]: Started sshd@33-10.0.0.71:22-10.0.0.1:33414.service - OpenSSH per-connection server daemon (10.0.0.1:33414). Jan 23 01:25:37.726537 containerd[1563]: time="2026-01-23T01:25:37.726400984Z" level=warning msg="container event discarded" container=560f70fab0e2f33e556fc9c8ecf893fa9a4384dde9018c9aab4fa78e2c96134e type=CONTAINER_CREATED_EVENT Jan 23 01:25:37.761931 sshd[4973]: Accepted publickey for core from 10.0.0.1 port 33414 ssh2: RSA SHA256:DbK6l1/8QoFpoD2mnerYulPPe3j8Bedze0KTmxI1z+4 Jan 23 01:25:37.764459 sshd-session[4973]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:25:37.779875 systemd-logind[1549]: New session 34 of user core. Jan 23 01:25:37.796610 systemd[1]: Started session-34.scope - Session 34 of User core. Jan 23 01:25:38.164764 sshd[4976]: Connection closed by 10.0.0.1 port 33414 Jan 23 01:25:38.166852 sshd-session[4973]: pam_unix(sshd:session): session closed for user core Jan 23 01:25:38.203390 systemd[1]: sshd@33-10.0.0.71:22-10.0.0.1:33414.service: Deactivated successfully. Jan 23 01:25:38.209773 systemd[1]: session-34.scope: Deactivated successfully. Jan 23 01:25:38.220164 systemd-logind[1549]: Session 34 logged out. Waiting for processes to exit. Jan 23 01:25:38.223700 systemd[1]: Started sshd@34-10.0.0.71:22-10.0.0.1:33436.service - OpenSSH per-connection server daemon (10.0.0.1:33436). Jan 23 01:25:38.242229 systemd-logind[1549]: Removed session 34. Jan 23 01:25:38.411570 sshd[4990]: Accepted publickey for core from 10.0.0.1 port 33436 ssh2: RSA SHA256:DbK6l1/8QoFpoD2mnerYulPPe3j8Bedze0KTmxI1z+4 Jan 23 01:25:38.412644 containerd[1563]: time="2026-01-23T01:25:38.412384602Z" level=warning msg="container event discarded" container=560f70fab0e2f33e556fc9c8ecf893fa9a4384dde9018c9aab4fa78e2c96134e type=CONTAINER_STARTED_EVENT Jan 23 01:25:38.418251 sshd-session[4990]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:25:38.457657 systemd-logind[1549]: New session 35 of user core. Jan 23 01:25:38.485500 systemd[1]: Started session-35.scope - Session 35 of User core. Jan 23 01:25:39.816698 sshd[4994]: Connection closed by 10.0.0.1 port 33436 Jan 23 01:25:39.821725 sshd-session[4990]: pam_unix(sshd:session): session closed for user core Jan 23 01:25:39.842861 systemd[1]: sshd@34-10.0.0.71:22-10.0.0.1:33436.service: Deactivated successfully. Jan 23 01:25:39.852856 systemd[1]: session-35.scope: Deactivated successfully. Jan 23 01:25:39.860475 systemd-logind[1549]: Session 35 logged out. Waiting for processes to exit. 
Jan 23 01:25:39.866364 systemd[1]: Started sshd@35-10.0.0.71:22-10.0.0.1:33442.service - OpenSSH per-connection server daemon (10.0.0.1:33442). Jan 23 01:25:39.870810 systemd-logind[1549]: Removed session 35. Jan 23 01:25:40.027763 sshd[5006]: Accepted publickey for core from 10.0.0.1 port 33442 ssh2: RSA SHA256:DbK6l1/8QoFpoD2mnerYulPPe3j8Bedze0KTmxI1z+4 Jan 23 01:25:40.037934 sshd-session[5006]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:25:40.061753 systemd-logind[1549]: New session 36 of user core. Jan 23 01:25:40.073535 systemd[1]: Started session-36.scope - Session 36 of User core. Jan 23 01:25:41.955371 sshd[5009]: Connection closed by 10.0.0.1 port 33442 Jan 23 01:25:41.955938 sshd-session[5006]: pam_unix(sshd:session): session closed for user core Jan 23 01:25:41.971497 systemd[1]: sshd@35-10.0.0.71:22-10.0.0.1:33442.service: Deactivated successfully. Jan 23 01:25:41.978740 systemd[1]: session-36.scope: Deactivated successfully. Jan 23 01:25:41.980682 systemd[1]: session-36.scope: Consumed 1.239s CPU time, 43.4M memory peak. Jan 23 01:25:41.990718 systemd-logind[1549]: Session 36 logged out. Waiting for processes to exit. Jan 23 01:25:42.002345 systemd[1]: Started sshd@36-10.0.0.71:22-10.0.0.1:33444.service - OpenSSH per-connection server daemon (10.0.0.1:33444). Jan 23 01:25:42.013672 systemd-logind[1549]: Removed session 36. Jan 23 01:25:42.193153 sshd[5029]: Accepted publickey for core from 10.0.0.1 port 33444 ssh2: RSA SHA256:DbK6l1/8QoFpoD2mnerYulPPe3j8Bedze0KTmxI1z+4 Jan 23 01:25:42.196730 sshd-session[5029]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:25:42.218416 systemd-logind[1549]: New session 37 of user core. Jan 23 01:25:42.250369 systemd[1]: Started session-37.scope - Session 37 of User core. Jan 23 01:25:43.053357 sshd[5033]: Connection closed by 10.0.0.1 port 33444 Jan 23 01:25:43.059485 sshd-session[5029]: pam_unix(sshd:session): session closed for user core Jan 23 01:25:43.077846 systemd[1]: sshd@36-10.0.0.71:22-10.0.0.1:33444.service: Deactivated successfully. Jan 23 01:25:43.084479 systemd[1]: session-37.scope: Deactivated successfully. Jan 23 01:25:43.092158 systemd-logind[1549]: Session 37 logged out. Waiting for processes to exit. Jan 23 01:25:43.122645 systemd[1]: Started sshd@37-10.0.0.71:22-10.0.0.1:56198.service - OpenSSH per-connection server daemon (10.0.0.1:56198). Jan 23 01:25:43.136192 systemd-logind[1549]: Removed session 37. Jan 23 01:25:43.246575 sshd[5045]: Accepted publickey for core from 10.0.0.1 port 56198 ssh2: RSA SHA256:DbK6l1/8QoFpoD2mnerYulPPe3j8Bedze0KTmxI1z+4 Jan 23 01:25:43.249355 sshd-session[5045]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:25:43.282386 systemd-logind[1549]: New session 38 of user core. Jan 23 01:25:43.291415 systemd[1]: Started session-38.scope - Session 38 of User core. Jan 23 01:25:43.681814 sshd[5048]: Connection closed by 10.0.0.1 port 56198 Jan 23 01:25:43.683417 sshd-session[5045]: pam_unix(sshd:session): session closed for user core Jan 23 01:25:43.700476 systemd[1]: sshd@37-10.0.0.71:22-10.0.0.1:56198.service: Deactivated successfully. Jan 23 01:25:43.706651 systemd[1]: session-38.scope: Deactivated successfully. Jan 23 01:25:43.713355 systemd-logind[1549]: Session 38 logged out. Waiting for processes to exit. Jan 23 01:25:43.720938 systemd-logind[1549]: Removed session 38. 
Jan 23 01:25:48.115793 kubelet[2838]: E0123 01:25:48.114924 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:25:48.714358 systemd[1]: Started sshd@38-10.0.0.71:22-10.0.0.1:56214.service - OpenSSH per-connection server daemon (10.0.0.1:56214). Jan 23 01:25:48.901416 sshd[5061]: Accepted publickey for core from 10.0.0.1 port 56214 ssh2: RSA SHA256:DbK6l1/8QoFpoD2mnerYulPPe3j8Bedze0KTmxI1z+4 Jan 23 01:25:48.904603 sshd-session[5061]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:25:48.933813 systemd-logind[1549]: New session 39 of user core. Jan 23 01:25:48.961451 systemd[1]: Started session-39.scope - Session 39 of User core. Jan 23 01:25:49.512815 sshd[5066]: Connection closed by 10.0.0.1 port 56214 Jan 23 01:25:49.513476 sshd-session[5061]: pam_unix(sshd:session): session closed for user core Jan 23 01:25:49.523751 systemd[1]: sshd@38-10.0.0.71:22-10.0.0.1:56214.service: Deactivated successfully. Jan 23 01:25:49.539444 systemd[1]: session-39.scope: Deactivated successfully. Jan 23 01:25:49.546890 systemd-logind[1549]: Session 39 logged out. Waiting for processes to exit. Jan 23 01:25:49.559428 systemd-logind[1549]: Removed session 39. Jan 23 01:25:54.541783 systemd[1]: Started sshd@39-10.0.0.71:22-10.0.0.1:52966.service - OpenSSH per-connection server daemon (10.0.0.1:52966). Jan 23 01:25:54.738104 sshd[5082]: Accepted publickey for core from 10.0.0.1 port 52966 ssh2: RSA SHA256:DbK6l1/8QoFpoD2mnerYulPPe3j8Bedze0KTmxI1z+4 Jan 23 01:25:54.742711 sshd-session[5082]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:25:54.777689 systemd-logind[1549]: New session 40 of user core. Jan 23 01:25:54.800880 systemd[1]: Started session-40.scope - Session 40 of User core. Jan 23 01:25:55.146090 sshd[5085]: Connection closed by 10.0.0.1 port 52966 Jan 23 01:25:55.145319 sshd-session[5082]: pam_unix(sshd:session): session closed for user core Jan 23 01:25:55.167719 systemd[1]: sshd@39-10.0.0.71:22-10.0.0.1:52966.service: Deactivated successfully. Jan 23 01:25:55.172649 systemd[1]: session-40.scope: Deactivated successfully. Jan 23 01:25:55.178733 systemd-logind[1549]: Session 40 logged out. Waiting for processes to exit. Jan 23 01:25:55.185602 systemd-logind[1549]: Removed session 40. Jan 23 01:26:00.201502 systemd[1]: Started sshd@40-10.0.0.71:22-10.0.0.1:52998.service - OpenSSH per-connection server daemon (10.0.0.1:52998). Jan 23 01:26:00.423611 sshd[5099]: Accepted publickey for core from 10.0.0.1 port 52998 ssh2: RSA SHA256:DbK6l1/8QoFpoD2mnerYulPPe3j8Bedze0KTmxI1z+4 Jan 23 01:26:00.429358 sshd-session[5099]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:26:00.451470 systemd-logind[1549]: New session 41 of user core. Jan 23 01:26:00.470667 systemd[1]: Started session-41.scope - Session 41 of User core. Jan 23 01:26:00.909293 sshd[5102]: Connection closed by 10.0.0.1 port 52998 Jan 23 01:26:00.906578 sshd-session[5099]: pam_unix(sshd:session): session closed for user core Jan 23 01:26:00.923764 systemd[1]: sshd@40-10.0.0.71:22-10.0.0.1:52998.service: Deactivated successfully. Jan 23 01:26:00.929659 systemd[1]: session-41.scope: Deactivated successfully. Jan 23 01:26:00.946906 systemd-logind[1549]: Session 41 logged out. Waiting for processes to exit. Jan 23 01:26:00.963888 systemd-logind[1549]: Removed session 41. 
Jan 23 01:26:04.348521 containerd[1563]: time="2026-01-23T01:26:04.345277736Z" level=warning msg="container event discarded" container=a7bf8292a6f2864b03d322d540c58a35d02030467bf1d68f69a832f5a857004b type=CONTAINER_CREATED_EVENT Jan 23 01:26:04.348521 containerd[1563]: time="2026-01-23T01:26:04.345558360Z" level=warning msg="container event discarded" container=a7bf8292a6f2864b03d322d540c58a35d02030467bf1d68f69a832f5a857004b type=CONTAINER_STARTED_EVENT Jan 23 01:26:04.415569 containerd[1563]: time="2026-01-23T01:26:04.413882687Z" level=warning msg="container event discarded" container=315316f1417879912ef5fe185b32c6659a70cc7b8681d8980dcaff1fe95ea467 type=CONTAINER_CREATED_EVENT Jan 23 01:26:04.415569 containerd[1563]: time="2026-01-23T01:26:04.414359076Z" level=warning msg="container event discarded" container=315316f1417879912ef5fe185b32c6659a70cc7b8681d8980dcaff1fe95ea467 type=CONTAINER_STARTED_EVENT Jan 23 01:26:04.535766 containerd[1563]: time="2026-01-23T01:26:04.535464056Z" level=warning msg="container event discarded" container=bd7ceb559ad0dffb6583112392a7b247050ab5393c61f2708550ed25c254abd4 type=CONTAINER_CREATED_EVENT Jan 23 01:26:04.571650 containerd[1563]: time="2026-01-23T01:26:04.570598009Z" level=warning msg="container event discarded" container=2655881ab4cf8e7de3481f390ebe7b851da2c76e6d02452037fba5da08517630 type=CONTAINER_CREATED_EVENT Jan 23 01:26:04.854245 containerd[1563]: time="2026-01-23T01:26:04.852457978Z" level=warning msg="container event discarded" container=bd7ceb559ad0dffb6583112392a7b247050ab5393c61f2708550ed25c254abd4 type=CONTAINER_STARTED_EVENT Jan 23 01:26:04.921667 containerd[1563]: time="2026-01-23T01:26:04.921452604Z" level=warning msg="container event discarded" container=2655881ab4cf8e7de3481f390ebe7b851da2c76e6d02452037fba5da08517630 type=CONTAINER_STARTED_EVENT Jan 23 01:26:05.112817 kubelet[2838]: E0123 01:26:05.112377 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:26:05.949674 systemd[1]: Started sshd@41-10.0.0.71:22-10.0.0.1:58472.service - OpenSSH per-connection server daemon (10.0.0.1:58472). Jan 23 01:26:06.169812 sshd[5117]: Accepted publickey for core from 10.0.0.1 port 58472 ssh2: RSA SHA256:DbK6l1/8QoFpoD2mnerYulPPe3j8Bedze0KTmxI1z+4 Jan 23 01:26:06.177745 sshd-session[5117]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:26:06.205660 systemd-logind[1549]: New session 42 of user core. Jan 23 01:26:06.220840 systemd[1]: Started session-42.scope - Session 42 of User core. Jan 23 01:26:06.603870 sshd[5120]: Connection closed by 10.0.0.1 port 58472 Jan 23 01:26:06.605589 sshd-session[5117]: pam_unix(sshd:session): session closed for user core Jan 23 01:26:06.621735 systemd[1]: sshd@41-10.0.0.71:22-10.0.0.1:58472.service: Deactivated successfully. Jan 23 01:26:06.626534 systemd[1]: session-42.scope: Deactivated successfully. Jan 23 01:26:06.630401 systemd-logind[1549]: Session 42 logged out. Waiting for processes to exit. Jan 23 01:26:06.642529 systemd-logind[1549]: Removed session 42. Jan 23 01:26:09.111486 kubelet[2838]: E0123 01:26:09.111396 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:26:11.650533 systemd[1]: Started sshd@42-10.0.0.71:22-10.0.0.1:58492.service - OpenSSH per-connection server daemon (10.0.0.1:58492). 
Jan 23 01:26:11.817356 sshd[5136]: Accepted publickey for core from 10.0.0.1 port 58492 ssh2: RSA SHA256:DbK6l1/8QoFpoD2mnerYulPPe3j8Bedze0KTmxI1z+4 Jan 23 01:26:11.827505 sshd-session[5136]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:26:11.862896 systemd-logind[1549]: New session 43 of user core. Jan 23 01:26:11.883834 systemd[1]: Started session-43.scope - Session 43 of User core. Jan 23 01:26:12.251544 sshd[5139]: Connection closed by 10.0.0.1 port 58492 Jan 23 01:26:12.253886 sshd-session[5136]: pam_unix(sshd:session): session closed for user core Jan 23 01:26:12.273460 systemd[1]: sshd@42-10.0.0.71:22-10.0.0.1:58492.service: Deactivated successfully. Jan 23 01:26:12.279683 systemd[1]: session-43.scope: Deactivated successfully. Jan 23 01:26:12.290308 systemd-logind[1549]: Session 43 logged out. Waiting for processes to exit. Jan 23 01:26:12.301943 systemd-logind[1549]: Removed session 43. Jan 23 01:26:17.109512 kubelet[2838]: E0123 01:26:17.106731 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:26:17.285514 systemd[1]: Started sshd@43-10.0.0.71:22-10.0.0.1:35280.service - OpenSSH per-connection server daemon (10.0.0.1:35280). Jan 23 01:26:17.414802 sshd[5153]: Accepted publickey for core from 10.0.0.1 port 35280 ssh2: RSA SHA256:DbK6l1/8QoFpoD2mnerYulPPe3j8Bedze0KTmxI1z+4 Jan 23 01:26:17.419335 sshd-session[5153]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:26:17.432906 systemd-logind[1549]: New session 44 of user core. Jan 23 01:26:17.449344 systemd[1]: Started session-44.scope - Session 44 of User core. Jan 23 01:26:17.685515 sshd[5156]: Connection closed by 10.0.0.1 port 35280 Jan 23 01:26:17.686311 sshd-session[5153]: pam_unix(sshd:session): session closed for user core Jan 23 01:26:17.694286 systemd[1]: sshd@43-10.0.0.71:22-10.0.0.1:35280.service: Deactivated successfully. Jan 23 01:26:17.698611 systemd[1]: session-44.scope: Deactivated successfully. Jan 23 01:26:17.701116 systemd-logind[1549]: Session 44 logged out. Waiting for processes to exit. Jan 23 01:26:17.704829 systemd-logind[1549]: Removed session 44. Jan 23 01:26:22.714492 systemd[1]: Started sshd@44-10.0.0.71:22-10.0.0.1:33780.service - OpenSSH per-connection server daemon (10.0.0.1:33780). Jan 23 01:26:22.831487 sshd[5172]: Accepted publickey for core from 10.0.0.1 port 33780 ssh2: RSA SHA256:DbK6l1/8QoFpoD2mnerYulPPe3j8Bedze0KTmxI1z+4 Jan 23 01:26:22.834964 sshd-session[5172]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:26:22.858473 systemd-logind[1549]: New session 45 of user core. Jan 23 01:26:22.872683 systemd[1]: Started session-45.scope - Session 45 of User core. 
Jan 23 01:26:22.934764 containerd[1563]: time="2026-01-23T01:26:22.934649199Z" level=warning msg="container event discarded" container=6f4a98c0dbb0e34ac7d2cca1008fa64ce2198d1d542d665a4994300ba5d1615a type=CONTAINER_STOPPED_EVENT Jan 23 01:26:23.103504 containerd[1563]: time="2026-01-23T01:26:23.102859791Z" level=warning msg="container event discarded" container=a0a3e7f7d5737d49bc413bf26022500609df33d0b9a4220a7266d81fb0be3f08 type=CONTAINER_STOPPED_EVENT Jan 23 01:26:23.145750 sshd[5175]: Connection closed by 10.0.0.1 port 33780 Jan 23 01:26:23.146503 sshd-session[5172]: pam_unix(sshd:session): session closed for user core Jan 23 01:26:23.153211 systemd[1]: sshd@44-10.0.0.71:22-10.0.0.1:33780.service: Deactivated successfully. Jan 23 01:26:23.156445 systemd[1]: session-45.scope: Deactivated successfully. Jan 23 01:26:23.158819 systemd-logind[1549]: Session 45 logged out. Waiting for processes to exit. Jan 23 01:26:23.161950 systemd-logind[1549]: Removed session 45. Jan 23 01:26:23.343314 containerd[1563]: time="2026-01-23T01:26:23.341850369Z" level=warning msg="container event discarded" container=d7422c329ca2ca820e7cc1f2619f5ad40463e80dc3c5d4dd163d38ca8648d0e2 type=CONTAINER_STOPPED_EVENT Jan 23 01:26:24.049629 containerd[1563]: time="2026-01-23T01:26:24.049532117Z" level=warning msg="container event discarded" container=97fa94f8408610e24402cf4db88a4a1b327fe4bf9f0bf0cffee20f10f8874760 type=CONTAINER_CREATED_EVENT Jan 23 01:26:24.068896 containerd[1563]: time="2026-01-23T01:26:24.068593941Z" level=warning msg="container event discarded" container=7d11634a8371b63b60676d9eb657c1fe6d745105cf49460e57dc067377f366c3 type=CONTAINER_CREATED_EVENT Jan 23 01:26:24.122499 containerd[1563]: time="2026-01-23T01:26:24.122427381Z" level=warning msg="container event discarded" container=926c1ef071143f91319390c4ddd172a0cc39a8e9c0fc6f0b1a318d3ca1f80787 type=CONTAINER_CREATED_EVENT Jan 23 01:26:24.546472 containerd[1563]: time="2026-01-23T01:26:24.546306706Z" level=warning msg="container event discarded" container=7d11634a8371b63b60676d9eb657c1fe6d745105cf49460e57dc067377f366c3 type=CONTAINER_STARTED_EVENT Jan 23 01:26:24.665467 containerd[1563]: time="2026-01-23T01:26:24.664564877Z" level=warning msg="container event discarded" container=97fa94f8408610e24402cf4db88a4a1b327fe4bf9f0bf0cffee20f10f8874760 type=CONTAINER_STARTED_EVENT Jan 23 01:26:24.706445 containerd[1563]: time="2026-01-23T01:26:24.705676425Z" level=warning msg="container event discarded" container=926c1ef071143f91319390c4ddd172a0cc39a8e9c0fc6f0b1a318d3ca1f80787 type=CONTAINER_STARTED_EVENT Jan 23 01:26:28.177810 systemd[1]: Started sshd@45-10.0.0.71:22-10.0.0.1:33786.service - OpenSSH per-connection server daemon (10.0.0.1:33786). Jan 23 01:26:28.299827 sshd[5189]: Accepted publickey for core from 10.0.0.1 port 33786 ssh2: RSA SHA256:DbK6l1/8QoFpoD2mnerYulPPe3j8Bedze0KTmxI1z+4 Jan 23 01:26:28.303322 sshd-session[5189]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:26:28.322209 systemd-logind[1549]: New session 46 of user core. Jan 23 01:26:28.337573 systemd[1]: Started session-46.scope - Session 46 of User core. Jan 23 01:26:28.690481 sshd[5192]: Connection closed by 10.0.0.1 port 33786 Jan 23 01:26:28.690742 sshd-session[5189]: pam_unix(sshd:session): session closed for user core Jan 23 01:26:28.707455 systemd[1]: sshd@45-10.0.0.71:22-10.0.0.1:33786.service: Deactivated successfully. Jan 23 01:26:28.710725 systemd[1]: session-46.scope: Deactivated successfully. 
Jan 23 01:26:28.713404 systemd-logind[1549]: Session 46 logged out. Waiting for processes to exit. Jan 23 01:26:28.719398 systemd[1]: Started sshd@46-10.0.0.71:22-10.0.0.1:33810.service - OpenSSH per-connection server daemon (10.0.0.1:33810). Jan 23 01:26:28.722557 systemd-logind[1549]: Removed session 46. Jan 23 01:26:28.840729 sshd[5206]: Accepted publickey for core from 10.0.0.1 port 33810 ssh2: RSA SHA256:DbK6l1/8QoFpoD2mnerYulPPe3j8Bedze0KTmxI1z+4 Jan 23 01:26:28.846582 sshd-session[5206]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:26:28.864396 systemd-logind[1549]: New session 47 of user core. Jan 23 01:26:28.871638 systemd[1]: Started session-47.scope - Session 47 of User core. Jan 23 01:26:30.663258 containerd[1563]: time="2026-01-23T01:26:30.662690345Z" level=info msg="StopContainer for \"7d11634a8371b63b60676d9eb657c1fe6d745105cf49460e57dc067377f366c3\" with timeout 30 (s)" Jan 23 01:26:30.672561 containerd[1563]: time="2026-01-23T01:26:30.670568925Z" level=info msg="Stop container \"7d11634a8371b63b60676d9eb657c1fe6d745105cf49460e57dc067377f366c3\" with signal terminated" Jan 23 01:26:30.735287 systemd[1]: cri-containerd-7d11634a8371b63b60676d9eb657c1fe6d745105cf49460e57dc067377f366c3.scope: Deactivated successfully. Jan 23 01:26:30.738576 systemd[1]: cri-containerd-7d11634a8371b63b60676d9eb657c1fe6d745105cf49460e57dc067377f366c3.scope: Consumed 2.562s CPU time, 30.3M memory peak, 4K written to disk. Jan 23 01:26:30.742657 containerd[1563]: time="2026-01-23T01:26:30.742487569Z" level=info msg="received container exit event container_id:\"7d11634a8371b63b60676d9eb657c1fe6d745105cf49460e57dc067377f366c3\" id:\"7d11634a8371b63b60676d9eb657c1fe6d745105cf49460e57dc067377f366c3\" pid:4388 exited_at:{seconds:1769131590 nanos:737439607}" Jan 23 01:26:30.775906 containerd[1563]: time="2026-01-23T01:26:30.775844626Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 23 01:26:30.796208 containerd[1563]: time="2026-01-23T01:26:30.795588566Z" level=info msg="StopContainer for \"560f70fab0e2f33e556fc9c8ecf893fa9a4384dde9018c9aab4fa78e2c96134e\" with timeout 2 (s)" Jan 23 01:26:30.798413 containerd[1563]: time="2026-01-23T01:26:30.796888019Z" level=info msg="Stop container \"560f70fab0e2f33e556fc9c8ecf893fa9a4384dde9018c9aab4fa78e2c96134e\" with signal terminated" Jan 23 01:26:30.848814 systemd-networkd[1454]: lxc_health: Link DOWN Jan 23 01:26:30.849408 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7d11634a8371b63b60676d9eb657c1fe6d745105cf49460e57dc067377f366c3-rootfs.mount: Deactivated successfully. Jan 23 01:26:30.850702 systemd-networkd[1454]: lxc_health: Lost carrier Jan 23 01:26:30.896398 systemd[1]: cri-containerd-560f70fab0e2f33e556fc9c8ecf893fa9a4384dde9018c9aab4fa78e2c96134e.scope: Deactivated successfully. Jan 23 01:26:30.896845 systemd[1]: cri-containerd-560f70fab0e2f33e556fc9c8ecf893fa9a4384dde9018c9aab4fa78e2c96134e.scope: Consumed 30.971s CPU time, 130.8M memory peak, 644K read from disk, 13.3M written to disk. 
Jan 23 01:26:30.903762 containerd[1563]: time="2026-01-23T01:26:30.903354574Z" level=info msg="received container exit event container_id:\"560f70fab0e2f33e556fc9c8ecf893fa9a4384dde9018c9aab4fa78e2c96134e\" id:\"560f70fab0e2f33e556fc9c8ecf893fa9a4384dde9018c9aab4fa78e2c96134e\" pid:3474 exited_at:{seconds:1769131590 nanos:902954015}" Jan 23 01:26:30.903893 containerd[1563]: time="2026-01-23T01:26:30.903789696Z" level=info msg="StopContainer for \"7d11634a8371b63b60676d9eb657c1fe6d745105cf49460e57dc067377f366c3\" returns successfully" Jan 23 01:26:30.906416 containerd[1563]: time="2026-01-23T01:26:30.906387066Z" level=info msg="StopPodSandbox for \"05537b0724f6bf4ff9a4c7ecaeb60a5a347a18578f0929eb9c518623fe493d40\"" Jan 23 01:26:30.908680 containerd[1563]: time="2026-01-23T01:26:30.908318989Z" level=info msg="Container to stop \"d7422c329ca2ca820e7cc1f2619f5ad40463e80dc3c5d4dd163d38ca8648d0e2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 23 01:26:30.908680 containerd[1563]: time="2026-01-23T01:26:30.908414497Z" level=info msg="Container to stop \"7d11634a8371b63b60676d9eb657c1fe6d745105cf49460e57dc067377f366c3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 23 01:26:30.958339 systemd[1]: cri-containerd-05537b0724f6bf4ff9a4c7ecaeb60a5a347a18578f0929eb9c518623fe493d40.scope: Deactivated successfully. Jan 23 01:26:30.973198 containerd[1563]: time="2026-01-23T01:26:30.972881957Z" level=info msg="received sandbox exit event container_id:\"05537b0724f6bf4ff9a4c7ecaeb60a5a347a18578f0929eb9c518623fe493d40\" id:\"05537b0724f6bf4ff9a4c7ecaeb60a5a347a18578f0929eb9c518623fe493d40\" exit_status:137 exited_at:{seconds:1769131590 nanos:971375132}" monitor_name=podsandbox Jan 23 01:26:31.016849 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-560f70fab0e2f33e556fc9c8ecf893fa9a4384dde9018c9aab4fa78e2c96134e-rootfs.mount: Deactivated successfully. 
Jan 23 01:26:31.050194 containerd[1563]: time="2026-01-23T01:26:31.048784650Z" level=info msg="StopContainer for \"560f70fab0e2f33e556fc9c8ecf893fa9a4384dde9018c9aab4fa78e2c96134e\" returns successfully" Jan 23 01:26:31.054765 containerd[1563]: time="2026-01-23T01:26:31.054564718Z" level=info msg="StopPodSandbox for \"992c1f1cfee7c39bf594c7b4c73c35c343387a9c650071041ff64eed03b972f1\"" Jan 23 01:26:31.054765 containerd[1563]: time="2026-01-23T01:26:31.054723063Z" level=info msg="Container to stop \"f481937c2e6e1199f746450b066b5bd882d8d61986188623d635425f649be330\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 23 01:26:31.054765 containerd[1563]: time="2026-01-23T01:26:31.054742438Z" level=info msg="Container to stop \"c48488cae241ca2f96441795f8907a48275d46ecffb9ce6e5594e7cff4f5e3f0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 23 01:26:31.054765 containerd[1563]: time="2026-01-23T01:26:31.054754100Z" level=info msg="Container to stop \"f6defe9657229661bc140ff650d59319de48459d43f6bf3b4623d9588a93e697\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 23 01:26:31.054765 containerd[1563]: time="2026-01-23T01:26:31.054764920Z" level=info msg="Container to stop \"284fe25df5c560bc0d54c17bf42d447ceaeaef9a9ee994f6c3cd176da13624f8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 23 01:26:31.055359 containerd[1563]: time="2026-01-23T01:26:31.054778286Z" level=info msg="Container to stop \"560f70fab0e2f33e556fc9c8ecf893fa9a4384dde9018c9aab4fa78e2c96134e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 23 01:26:31.081234 containerd[1563]: time="2026-01-23T01:26:31.080631921Z" level=info msg="received sandbox exit event container_id:\"992c1f1cfee7c39bf594c7b4c73c35c343387a9c650071041ff64eed03b972f1\" id:\"992c1f1cfee7c39bf594c7b4c73c35c343387a9c650071041ff64eed03b972f1\" exit_status:137 exited_at:{seconds:1769131591 nanos:79853806}" monitor_name=podsandbox Jan 23 01:26:31.084296 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-05537b0724f6bf4ff9a4c7ecaeb60a5a347a18578f0929eb9c518623fe493d40-rootfs.mount: Deactivated successfully. Jan 23 01:26:31.088508 systemd[1]: cri-containerd-992c1f1cfee7c39bf594c7b4c73c35c343387a9c650071041ff64eed03b972f1.scope: Deactivated successfully. Jan 23 01:26:31.101473 containerd[1563]: time="2026-01-23T01:26:31.101427093Z" level=info msg="shim disconnected" id=05537b0724f6bf4ff9a4c7ecaeb60a5a347a18578f0929eb9c518623fe493d40 namespace=k8s.io Jan 23 01:26:31.103362 containerd[1563]: time="2026-01-23T01:26:31.101692878Z" level=warning msg="cleaning up after shim disconnected" id=05537b0724f6bf4ff9a4c7ecaeb60a5a347a18578f0929eb9c518623fe493d40 namespace=k8s.io Jan 23 01:26:31.106221 kubelet[2838]: E0123 01:26:31.105768 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 23 01:26:31.129352 containerd[1563]: time="2026-01-23T01:26:31.101711754Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 23 01:26:31.199578 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-05537b0724f6bf4ff9a4c7ecaeb60a5a347a18578f0929eb9c518623fe493d40-shm.mount: Deactivated successfully. 
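[Note on `exit_status:137` in the sandbox exit events] 137 is the conventional 128+N encoding for death by signal N, i.e. 137 = 128 + 9 (SIGKILL), consistent with the forced teardown above. The tiny sketch below illustrates the arithmetic; decodeExitStatus is a hypothetical helper, not a kubelet or containerd function.

```go
// Decode a conventional 128+N exit status into the signal that caused it.
package main

import (
	"fmt"
	"syscall"
)

func decodeExitStatus(status int) string {
	if status > 128 {
		sig := syscall.Signal(status - 128)
		return fmt.Sprintf("killed by signal %d (%s)", int(sig), sig)
	}
	return fmt.Sprintf("exited with code %d", status)
}

func main() {
	fmt.Println(decodeExitStatus(137)) // killed by signal 9 (killed)
}
```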
Jan 23 01:26:31.201584 containerd[1563]: time="2026-01-23T01:26:31.201544096Z" level=info msg="TearDown network for sandbox \"05537b0724f6bf4ff9a4c7ecaeb60a5a347a18578f0929eb9c518623fe493d40\" successfully" Jan 23 01:26:31.201845 containerd[1563]: time="2026-01-23T01:26:31.201726457Z" level=info msg="StopPodSandbox for \"05537b0724f6bf4ff9a4c7ecaeb60a5a347a18578f0929eb9c518623fe493d40\" returns successfully" Jan 23 01:26:31.217603 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-992c1f1cfee7c39bf594c7b4c73c35c343387a9c650071041ff64eed03b972f1-rootfs.mount: Deactivated successfully. Jan 23 01:26:31.221513 containerd[1563]: time="2026-01-23T01:26:31.220965026Z" level=info msg="received sandbox container exit event sandbox_id:\"05537b0724f6bf4ff9a4c7ecaeb60a5a347a18578f0929eb9c518623fe493d40\" exit_status:137 exited_at:{seconds:1769131590 nanos:971375132}" monitor_name=criService Jan 23 01:26:31.249368 containerd[1563]: time="2026-01-23T01:26:31.248543971Z" level=info msg="shim disconnected" id=992c1f1cfee7c39bf594c7b4c73c35c343387a9c650071041ff64eed03b972f1 namespace=k8s.io Jan 23 01:26:31.249368 containerd[1563]: time="2026-01-23T01:26:31.248591640Z" level=warning msg="cleaning up after shim disconnected" id=992c1f1cfee7c39bf594c7b4c73c35c343387a9c650071041ff64eed03b972f1 namespace=k8s.io Jan 23 01:26:31.249368 containerd[1563]: time="2026-01-23T01:26:31.248608411Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 23 01:26:31.293475 kubelet[2838]: I0123 01:26:31.293437 2838 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bvfsn\" (UniqueName: \"kubernetes.io/projected/e55daa78-d9f7-467a-856a-5c3b45afc015-kube-api-access-bvfsn\") pod \"e55daa78-d9f7-467a-856a-5c3b45afc015\" (UID: \"e55daa78-d9f7-467a-856a-5c3b45afc015\") " Jan 23 01:26:31.293685 kubelet[2838]: I0123 01:26:31.293673 2838 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e55daa78-d9f7-467a-856a-5c3b45afc015-cilium-config-path\") pod \"e55daa78-d9f7-467a-856a-5c3b45afc015\" (UID: \"e55daa78-d9f7-467a-856a-5c3b45afc015\") " Jan 23 01:26:31.301641 kubelet[2838]: I0123 01:26:31.301608 2838 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e55daa78-d9f7-467a-856a-5c3b45afc015-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e55daa78-d9f7-467a-856a-5c3b45afc015" (UID: "e55daa78-d9f7-467a-856a-5c3b45afc015"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 01:26:31.307893 kubelet[2838]: I0123 01:26:31.307812 2838 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e55daa78-d9f7-467a-856a-5c3b45afc015-kube-api-access-bvfsn" (OuterVolumeSpecName: "kube-api-access-bvfsn") pod "e55daa78-d9f7-467a-856a-5c3b45afc015" (UID: "e55daa78-d9f7-467a-856a-5c3b45afc015"). InnerVolumeSpecName "kube-api-access-bvfsn". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 01:26:31.314504 containerd[1563]: time="2026-01-23T01:26:31.314396452Z" level=info msg="received sandbox container exit event sandbox_id:\"992c1f1cfee7c39bf594c7b4c73c35c343387a9c650071041ff64eed03b972f1\" exit_status:137 exited_at:{seconds:1769131591 nanos:79853806}" monitor_name=criService Jan 23 01:26:31.315554 containerd[1563]: time="2026-01-23T01:26:31.315427507Z" level=info msg="TearDown network for sandbox \"992c1f1cfee7c39bf594c7b4c73c35c343387a9c650071041ff64eed03b972f1\" successfully" Jan 23 01:26:31.315554 containerd[1563]: time="2026-01-23T01:26:31.315529978Z" level=info msg="StopPodSandbox for \"992c1f1cfee7c39bf594c7b4c73c35c343387a9c650071041ff64eed03b972f1\" returns successfully" Jan 23 01:26:31.395181 kubelet[2838]: I0123 01:26:31.394904 2838 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8f853f65-8007-42ea-8e4b-f009906b5cc0-cilium-cgroup\") pod \"8f853f65-8007-42ea-8e4b-f009906b5cc0\" (UID: \"8f853f65-8007-42ea-8e4b-f009906b5cc0\") " Jan 23 01:26:31.395516 kubelet[2838]: I0123 01:26:31.395365 2838 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8f853f65-8007-42ea-8e4b-f009906b5cc0-cni-path\") pod \"8f853f65-8007-42ea-8e4b-f009906b5cc0\" (UID: \"8f853f65-8007-42ea-8e4b-f009906b5cc0\") " Jan 23 01:26:31.395516 kubelet[2838]: I0123 01:26:31.395395 2838 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8f853f65-8007-42ea-8e4b-f009906b5cc0-xtables-lock\") pod \"8f853f65-8007-42ea-8e4b-f009906b5cc0\" (UID: \"8f853f65-8007-42ea-8e4b-f009906b5cc0\") " Jan 23 01:26:31.395516 kubelet[2838]: I0123 01:26:31.395419 2838 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8f853f65-8007-42ea-8e4b-f009906b5cc0-bpf-maps\") pod \"8f853f65-8007-42ea-8e4b-f009906b5cc0\" (UID: \"8f853f65-8007-42ea-8e4b-f009906b5cc0\") " Jan 23 01:26:31.395516 kubelet[2838]: I0123 01:26:31.395414 2838 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8f853f65-8007-42ea-8e4b-f009906b5cc0-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "8f853f65-8007-42ea-8e4b-f009906b5cc0" (UID: "8f853f65-8007-42ea-8e4b-f009906b5cc0"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 01:26:31.395516 kubelet[2838]: I0123 01:26:31.395443 2838 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8f853f65-8007-42ea-8e4b-f009906b5cc0-etc-cni-netd\") pod \"8f853f65-8007-42ea-8e4b-f009906b5cc0\" (UID: \"8f853f65-8007-42ea-8e4b-f009906b5cc0\") " Jan 23 01:26:31.395516 kubelet[2838]: I0123 01:26:31.395481 2838 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8f853f65-8007-42ea-8e4b-f009906b5cc0-cilium-config-path\") pod \"8f853f65-8007-42ea-8e4b-f009906b5cc0\" (UID: \"8f853f65-8007-42ea-8e4b-f009906b5cc0\") " Jan 23 01:26:31.395745 kubelet[2838]: I0123 01:26:31.395514 2838 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8f853f65-8007-42ea-8e4b-f009906b5cc0-hubble-tls\") pod \"8f853f65-8007-42ea-8e4b-f009906b5cc0\" (UID: \"8f853f65-8007-42ea-8e4b-f009906b5cc0\") " Jan 23 01:26:31.395745 kubelet[2838]: I0123 01:26:31.395486 2838 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8f853f65-8007-42ea-8e4b-f009906b5cc0-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "8f853f65-8007-42ea-8e4b-f009906b5cc0" (UID: "8f853f65-8007-42ea-8e4b-f009906b5cc0"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 01:26:31.395745 kubelet[2838]: I0123 01:26:31.395499 2838 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8f853f65-8007-42ea-8e4b-f009906b5cc0-cni-path" (OuterVolumeSpecName: "cni-path") pod "8f853f65-8007-42ea-8e4b-f009906b5cc0" (UID: "8f853f65-8007-42ea-8e4b-f009906b5cc0"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 01:26:31.395745 kubelet[2838]: I0123 01:26:31.395517 2838 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8f853f65-8007-42ea-8e4b-f009906b5cc0-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "8f853f65-8007-42ea-8e4b-f009906b5cc0" (UID: "8f853f65-8007-42ea-8e4b-f009906b5cc0"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 01:26:31.395745 kubelet[2838]: I0123 01:26:31.395527 2838 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8f853f65-8007-42ea-8e4b-f009906b5cc0-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "8f853f65-8007-42ea-8e4b-f009906b5cc0" (UID: "8f853f65-8007-42ea-8e4b-f009906b5cc0"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 01:26:31.395854 kubelet[2838]: I0123 01:26:31.395586 2838 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8f853f65-8007-42ea-8e4b-f009906b5cc0-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "8f853f65-8007-42ea-8e4b-f009906b5cc0" (UID: "8f853f65-8007-42ea-8e4b-f009906b5cc0"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 01:26:31.396066 kubelet[2838]: I0123 01:26:31.395537 2838 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8f853f65-8007-42ea-8e4b-f009906b5cc0-host-proc-sys-net\") pod \"8f853f65-8007-42ea-8e4b-f009906b5cc0\" (UID: \"8f853f65-8007-42ea-8e4b-f009906b5cc0\") " Jan 23 01:26:31.396591 kubelet[2838]: I0123 01:26:31.396499 2838 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8f853f65-8007-42ea-8e4b-f009906b5cc0-cilium-run\") pod \"8f853f65-8007-42ea-8e4b-f009906b5cc0\" (UID: \"8f853f65-8007-42ea-8e4b-f009906b5cc0\") " Jan 23 01:26:31.397226 kubelet[2838]: I0123 01:26:31.397050 2838 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8f853f65-8007-42ea-8e4b-f009906b5cc0-host-proc-sys-kernel\") pod \"8f853f65-8007-42ea-8e4b-f009906b5cc0\" (UID: \"8f853f65-8007-42ea-8e4b-f009906b5cc0\") " Jan 23 01:26:31.397226 kubelet[2838]: I0123 01:26:31.397082 2838 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r5d6n\" (UniqueName: \"kubernetes.io/projected/8f853f65-8007-42ea-8e4b-f009906b5cc0-kube-api-access-r5d6n\") pod \"8f853f65-8007-42ea-8e4b-f009906b5cc0\" (UID: \"8f853f65-8007-42ea-8e4b-f009906b5cc0\") " Jan 23 01:26:31.397226 kubelet[2838]: I0123 01:26:31.397172 2838 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8f853f65-8007-42ea-8e4b-f009906b5cc0-lib-modules\") pod \"8f853f65-8007-42ea-8e4b-f009906b5cc0\" (UID: \"8f853f65-8007-42ea-8e4b-f009906b5cc0\") " Jan 23 01:26:31.397226 kubelet[2838]: I0123 01:26:31.397197 2838 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8f853f65-8007-42ea-8e4b-f009906b5cc0-clustermesh-secrets\") pod \"8f853f65-8007-42ea-8e4b-f009906b5cc0\" (UID: \"8f853f65-8007-42ea-8e4b-f009906b5cc0\") " Jan 23 01:26:31.397226 kubelet[2838]: I0123 01:26:31.397213 2838 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8f853f65-8007-42ea-8e4b-f009906b5cc0-hostproc\") pod \"8f853f65-8007-42ea-8e4b-f009906b5cc0\" (UID: \"8f853f65-8007-42ea-8e4b-f009906b5cc0\") " Jan 23 01:26:31.397364 kubelet[2838]: I0123 01:26:31.397273 2838 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8f853f65-8007-42ea-8e4b-f009906b5cc0-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Jan 23 01:26:31.397364 kubelet[2838]: I0123 01:26:31.397282 2838 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-bvfsn\" (UniqueName: \"kubernetes.io/projected/e55daa78-d9f7-467a-856a-5c3b45afc015-kube-api-access-bvfsn\") on node \"localhost\" DevicePath \"\"" Jan 23 01:26:31.397364 kubelet[2838]: I0123 01:26:31.397291 2838 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8f853f65-8007-42ea-8e4b-f009906b5cc0-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Jan 23 01:26:31.397364 kubelet[2838]: I0123 01:26:31.397299 2838 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/e55daa78-d9f7-467a-856a-5c3b45afc015-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jan 23 01:26:31.397364 kubelet[2838]: I0123 01:26:31.397308 2838 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8f853f65-8007-42ea-8e4b-f009906b5cc0-cni-path\") on node \"localhost\" DevicePath \"\"" Jan 23 01:26:31.397364 kubelet[2838]: I0123 01:26:31.397315 2838 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8f853f65-8007-42ea-8e4b-f009906b5cc0-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jan 23 01:26:31.397364 kubelet[2838]: I0123 01:26:31.397322 2838 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8f853f65-8007-42ea-8e4b-f009906b5cc0-bpf-maps\") on node \"localhost\" DevicePath \"\"" Jan 23 01:26:31.397364 kubelet[2838]: I0123 01:26:31.397329 2838 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8f853f65-8007-42ea-8e4b-f009906b5cc0-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Jan 23 01:26:31.397527 kubelet[2838]: I0123 01:26:31.397356 2838 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8f853f65-8007-42ea-8e4b-f009906b5cc0-hostproc" (OuterVolumeSpecName: "hostproc") pod "8f853f65-8007-42ea-8e4b-f009906b5cc0" (UID: "8f853f65-8007-42ea-8e4b-f009906b5cc0"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 01:26:31.397527 kubelet[2838]: I0123 01:26:31.397375 2838 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8f853f65-8007-42ea-8e4b-f009906b5cc0-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "8f853f65-8007-42ea-8e4b-f009906b5cc0" (UID: "8f853f65-8007-42ea-8e4b-f009906b5cc0"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 01:26:31.397527 kubelet[2838]: I0123 01:26:31.397388 2838 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8f853f65-8007-42ea-8e4b-f009906b5cc0-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "8f853f65-8007-42ea-8e4b-f009906b5cc0" (UID: "8f853f65-8007-42ea-8e4b-f009906b5cc0"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 01:26:31.398511 kubelet[2838]: I0123 01:26:31.398293 2838 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8f853f65-8007-42ea-8e4b-f009906b5cc0-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "8f853f65-8007-42ea-8e4b-f009906b5cc0" (UID: "8f853f65-8007-42ea-8e4b-f009906b5cc0"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 01:26:31.406190 kubelet[2838]: I0123 01:26:31.405461 2838 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f853f65-8007-42ea-8e4b-f009906b5cc0-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "8f853f65-8007-42ea-8e4b-f009906b5cc0" (UID: "8f853f65-8007-42ea-8e4b-f009906b5cc0"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 01:26:31.406317 kubelet[2838]: I0123 01:26:31.406277 2838 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f853f65-8007-42ea-8e4b-f009906b5cc0-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "8f853f65-8007-42ea-8e4b-f009906b5cc0" (UID: "8f853f65-8007-42ea-8e4b-f009906b5cc0"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 01:26:31.411537 kubelet[2838]: I0123 01:26:31.411487 2838 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f853f65-8007-42ea-8e4b-f009906b5cc0-kube-api-access-r5d6n" (OuterVolumeSpecName: "kube-api-access-r5d6n") pod "8f853f65-8007-42ea-8e4b-f009906b5cc0" (UID: "8f853f65-8007-42ea-8e4b-f009906b5cc0"). InnerVolumeSpecName "kube-api-access-r5d6n". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 01:26:31.413397 kubelet[2838]: I0123 01:26:31.412888 2838 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f853f65-8007-42ea-8e4b-f009906b5cc0-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "8f853f65-8007-42ea-8e4b-f009906b5cc0" (UID: "8f853f65-8007-42ea-8e4b-f009906b5cc0"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 01:26:31.498732 kubelet[2838]: I0123 01:26:31.498443 2838 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8f853f65-8007-42ea-8e4b-f009906b5cc0-hubble-tls\") on node \"localhost\" DevicePath \"\"" Jan 23 01:26:31.498732 kubelet[2838]: I0123 01:26:31.498554 2838 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8f853f65-8007-42ea-8e4b-f009906b5cc0-cilium-run\") on node \"localhost\" DevicePath \"\"" Jan 23 01:26:31.498732 kubelet[2838]: I0123 01:26:31.498568 2838 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8f853f65-8007-42ea-8e4b-f009906b5cc0-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Jan 23 01:26:31.498732 kubelet[2838]: I0123 01:26:31.498581 2838 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-r5d6n\" (UniqueName: \"kubernetes.io/projected/8f853f65-8007-42ea-8e4b-f009906b5cc0-kube-api-access-r5d6n\") on node \"localhost\" DevicePath \"\"" Jan 23 01:26:31.498732 kubelet[2838]: I0123 01:26:31.498596 2838 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8f853f65-8007-42ea-8e4b-f009906b5cc0-lib-modules\") on node \"localhost\" DevicePath \"\"" Jan 23 01:26:31.498732 kubelet[2838]: I0123 01:26:31.498606 2838 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8f853f65-8007-42ea-8e4b-f009906b5cc0-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Jan 23 01:26:31.498732 kubelet[2838]: I0123 01:26:31.498617 2838 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8f853f65-8007-42ea-8e4b-f009906b5cc0-hostproc\") on node \"localhost\" DevicePath \"\"" Jan 23 01:26:31.498732 kubelet[2838]: I0123 01:26:31.498628 2838 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8f853f65-8007-42ea-8e4b-f009906b5cc0-cilium-config-path\") on 
node \"localhost\" DevicePath \"\"" Jan 23 01:26:31.799742 kubelet[2838]: E0123 01:26:31.798366 2838 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 23 01:26:31.849471 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-992c1f1cfee7c39bf594c7b4c73c35c343387a9c650071041ff64eed03b972f1-shm.mount: Deactivated successfully. Jan 23 01:26:31.849632 systemd[1]: var-lib-kubelet-pods-e55daa78\x2dd9f7\x2d467a\x2d856a\x2d5c3b45afc015-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbvfsn.mount: Deactivated successfully. Jan 23 01:26:31.849741 systemd[1]: var-lib-kubelet-pods-8f853f65\x2d8007\x2d42ea\x2d8e4b\x2df009906b5cc0-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 23 01:26:31.849837 systemd[1]: var-lib-kubelet-pods-8f853f65\x2d8007\x2d42ea\x2d8e4b\x2df009906b5cc0-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 23 01:26:31.850309 systemd[1]: var-lib-kubelet-pods-8f853f65\x2d8007\x2d42ea\x2d8e4b\x2df009906b5cc0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dr5d6n.mount: Deactivated successfully. Jan 23 01:26:31.870684 kubelet[2838]: I0123 01:26:31.870478 2838 scope.go:117] "RemoveContainer" containerID="560f70fab0e2f33e556fc9c8ecf893fa9a4384dde9018c9aab4fa78e2c96134e" Jan 23 01:26:31.877881 containerd[1563]: time="2026-01-23T01:26:31.877671516Z" level=info msg="RemoveContainer for \"560f70fab0e2f33e556fc9c8ecf893fa9a4384dde9018c9aab4fa78e2c96134e\"" Jan 23 01:26:31.896289 systemd[1]: Removed slice kubepods-burstable-pod8f853f65_8007_42ea_8e4b_f009906b5cc0.slice - libcontainer container kubepods-burstable-pod8f853f65_8007_42ea_8e4b_f009906b5cc0.slice. Jan 23 01:26:31.896450 systemd[1]: kubepods-burstable-pod8f853f65_8007_42ea_8e4b_f009906b5cc0.slice: Consumed 31.460s CPU time, 131.2M memory peak, 751K read from disk, 13.3M written to disk. Jan 23 01:26:31.948685 systemd[1]: Removed slice kubepods-besteffort-pode55daa78_d9f7_467a_856a_5c3b45afc015.slice - libcontainer container kubepods-besteffort-pode55daa78_d9f7_467a_856a_5c3b45afc015.slice. Jan 23 01:26:31.948815 systemd[1]: kubepods-besteffort-pode55daa78_d9f7_467a_856a_5c3b45afc015.slice: Consumed 4.493s CPU time, 30.5M memory peak, 8K written to disk. 
Jan 23 01:26:31.958435 containerd[1563]: time="2026-01-23T01:26:31.957516043Z" level=info msg="RemoveContainer for \"560f70fab0e2f33e556fc9c8ecf893fa9a4384dde9018c9aab4fa78e2c96134e\" returns successfully"
Jan 23 01:26:31.959216 kubelet[2838]: I0123 01:26:31.958951 2838 scope.go:117] "RemoveContainer" containerID="f6defe9657229661bc140ff650d59319de48459d43f6bf3b4623d9588a93e697"
Jan 23 01:26:31.971702 containerd[1563]: time="2026-01-23T01:26:31.970643888Z" level=info msg="RemoveContainer for \"f6defe9657229661bc140ff650d59319de48459d43f6bf3b4623d9588a93e697\""
Jan 23 01:26:31.991334 containerd[1563]: time="2026-01-23T01:26:31.991267459Z" level=info msg="RemoveContainer for \"f6defe9657229661bc140ff650d59319de48459d43f6bf3b4623d9588a93e697\" returns successfully"
Jan 23 01:26:31.992260 kubelet[2838]: I0123 01:26:31.991579 2838 scope.go:117] "RemoveContainer" containerID="284fe25df5c560bc0d54c17bf42d447ceaeaef9a9ee994f6c3cd176da13624f8"
Jan 23 01:26:31.999234 containerd[1563]: time="2026-01-23T01:26:31.998810098Z" level=info msg="RemoveContainer for \"284fe25df5c560bc0d54c17bf42d447ceaeaef9a9ee994f6c3cd176da13624f8\""
Jan 23 01:26:32.021341 containerd[1563]: time="2026-01-23T01:26:32.021270214Z" level=info msg="RemoveContainer for \"284fe25df5c560bc0d54c17bf42d447ceaeaef9a9ee994f6c3cd176da13624f8\" returns successfully"
Jan 23 01:26:32.022247 kubelet[2838]: I0123 01:26:32.021894 2838 scope.go:117] "RemoveContainer" containerID="c48488cae241ca2f96441795f8907a48275d46ecffb9ce6e5594e7cff4f5e3f0"
Jan 23 01:26:32.026640 containerd[1563]: time="2026-01-23T01:26:32.026617260Z" level=info msg="RemoveContainer for \"c48488cae241ca2f96441795f8907a48275d46ecffb9ce6e5594e7cff4f5e3f0\""
Jan 23 01:26:32.038646 containerd[1563]: time="2026-01-23T01:26:32.037653424Z" level=info msg="RemoveContainer for \"c48488cae241ca2f96441795f8907a48275d46ecffb9ce6e5594e7cff4f5e3f0\" returns successfully"
Jan 23 01:26:32.039647 kubelet[2838]: I0123 01:26:32.039598 2838 scope.go:117] "RemoveContainer" containerID="f481937c2e6e1199f746450b066b5bd882d8d61986188623d635425f649be330"
Jan 23 01:26:32.044539 containerd[1563]: time="2026-01-23T01:26:32.044312700Z" level=info msg="RemoveContainer for \"f481937c2e6e1199f746450b066b5bd882d8d61986188623d635425f649be330\""
Jan 23 01:26:32.056208 containerd[1563]: time="2026-01-23T01:26:32.055767530Z" level=info msg="RemoveContainer for \"f481937c2e6e1199f746450b066b5bd882d8d61986188623d635425f649be330\" returns successfully"
Jan 23 01:26:32.057557 kubelet[2838]: I0123 01:26:32.056576 2838 scope.go:117] "RemoveContainer" containerID="560f70fab0e2f33e556fc9c8ecf893fa9a4384dde9018c9aab4fa78e2c96134e"
Jan 23 01:26:32.057639 containerd[1563]: time="2026-01-23T01:26:32.057470876Z" level=error msg="ContainerStatus for \"560f70fab0e2f33e556fc9c8ecf893fa9a4384dde9018c9aab4fa78e2c96134e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"560f70fab0e2f33e556fc9c8ecf893fa9a4384dde9018c9aab4fa78e2c96134e\": not found"
Jan 23 01:26:32.058312 kubelet[2838]: E0123 01:26:32.057934 2838 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"560f70fab0e2f33e556fc9c8ecf893fa9a4384dde9018c9aab4fa78e2c96134e\": not found" containerID="560f70fab0e2f33e556fc9c8ecf893fa9a4384dde9018c9aab4fa78e2c96134e"
Jan 23 01:26:32.058312 kubelet[2838]: I0123 01:26:32.058204 2838 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"560f70fab0e2f33e556fc9c8ecf893fa9a4384dde9018c9aab4fa78e2c96134e"} err="failed to get container status \"560f70fab0e2f33e556fc9c8ecf893fa9a4384dde9018c9aab4fa78e2c96134e\": rpc error: code = NotFound desc = an error occurred when try to find container \"560f70fab0e2f33e556fc9c8ecf893fa9a4384dde9018c9aab4fa78e2c96134e\": not found"
Jan 23 01:26:32.058312 kubelet[2838]: I0123 01:26:32.058263 2838 scope.go:117] "RemoveContainer" containerID="f6defe9657229661bc140ff650d59319de48459d43f6bf3b4623d9588a93e697"
Jan 23 01:26:32.061933 containerd[1563]: time="2026-01-23T01:26:32.061646302Z" level=error msg="ContainerStatus for \"f6defe9657229661bc140ff650d59319de48459d43f6bf3b4623d9588a93e697\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f6defe9657229661bc140ff650d59319de48459d43f6bf3b4623d9588a93e697\": not found"
Jan 23 01:26:32.062542 kubelet[2838]: E0123 01:26:32.062457 2838 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f6defe9657229661bc140ff650d59319de48459d43f6bf3b4623d9588a93e697\": not found" containerID="f6defe9657229661bc140ff650d59319de48459d43f6bf3b4623d9588a93e697"
Jan 23 01:26:32.062591 kubelet[2838]: I0123 01:26:32.062551 2838 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f6defe9657229661bc140ff650d59319de48459d43f6bf3b4623d9588a93e697"} err="failed to get container status \"f6defe9657229661bc140ff650d59319de48459d43f6bf3b4623d9588a93e697\": rpc error: code = NotFound desc = an error occurred when try to find container \"f6defe9657229661bc140ff650d59319de48459d43f6bf3b4623d9588a93e697\": not found"
Jan 23 01:26:32.062591 kubelet[2838]: I0123 01:26:32.062579 2838 scope.go:117] "RemoveContainer" containerID="284fe25df5c560bc0d54c17bf42d447ceaeaef9a9ee994f6c3cd176da13624f8"
Jan 23 01:26:32.063463 containerd[1563]: time="2026-01-23T01:26:32.062830943Z" level=error msg="ContainerStatus for \"284fe25df5c560bc0d54c17bf42d447ceaeaef9a9ee994f6c3cd176da13624f8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"284fe25df5c560bc0d54c17bf42d447ceaeaef9a9ee994f6c3cd176da13624f8\": not found"
Jan 23 01:26:32.063877 kubelet[2838]: E0123 01:26:32.063313 2838 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"284fe25df5c560bc0d54c17bf42d447ceaeaef9a9ee994f6c3cd176da13624f8\": not found" containerID="284fe25df5c560bc0d54c17bf42d447ceaeaef9a9ee994f6c3cd176da13624f8"
Jan 23 01:26:32.063877 kubelet[2838]: I0123 01:26:32.063360 2838 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"284fe25df5c560bc0d54c17bf42d447ceaeaef9a9ee994f6c3cd176da13624f8"} err="failed to get container status \"284fe25df5c560bc0d54c17bf42d447ceaeaef9a9ee994f6c3cd176da13624f8\": rpc error: code = NotFound desc = an error occurred when try to find container \"284fe25df5c560bc0d54c17bf42d447ceaeaef9a9ee994f6c3cd176da13624f8\": not found"
Jan 23 01:26:32.063877 kubelet[2838]: I0123 01:26:32.063396 2838 scope.go:117] "RemoveContainer" containerID="c48488cae241ca2f96441795f8907a48275d46ecffb9ce6e5594e7cff4f5e3f0"
Jan 23 01:26:32.064492 containerd[1563]: time="2026-01-23T01:26:32.064367227Z" level=error msg="ContainerStatus for \"c48488cae241ca2f96441795f8907a48275d46ecffb9ce6e5594e7cff4f5e3f0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c48488cae241ca2f96441795f8907a48275d46ecffb9ce6e5594e7cff4f5e3f0\": not found"
Jan 23 01:26:32.065619 kubelet[2838]: E0123 01:26:32.065585 2838 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c48488cae241ca2f96441795f8907a48275d46ecffb9ce6e5594e7cff4f5e3f0\": not found" containerID="c48488cae241ca2f96441795f8907a48275d46ecffb9ce6e5594e7cff4f5e3f0"
Jan 23 01:26:32.065671 kubelet[2838]: I0123 01:26:32.065618 2838 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c48488cae241ca2f96441795f8907a48275d46ecffb9ce6e5594e7cff4f5e3f0"} err="failed to get container status \"c48488cae241ca2f96441795f8907a48275d46ecffb9ce6e5594e7cff4f5e3f0\": rpc error: code = NotFound desc = an error occurred when try to find container \"c48488cae241ca2f96441795f8907a48275d46ecffb9ce6e5594e7cff4f5e3f0\": not found"
Jan 23 01:26:32.065671 kubelet[2838]: I0123 01:26:32.065645 2838 scope.go:117] "RemoveContainer" containerID="f481937c2e6e1199f746450b066b5bd882d8d61986188623d635425f649be330"
Jan 23 01:26:32.066429 containerd[1563]: time="2026-01-23T01:26:32.066275594Z" level=error msg="ContainerStatus for \"f481937c2e6e1199f746450b066b5bd882d8d61986188623d635425f649be330\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f481937c2e6e1199f746450b066b5bd882d8d61986188623d635425f649be330\": not found"
Jan 23 01:26:32.069239 kubelet[2838]: E0123 01:26:32.066949 2838 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f481937c2e6e1199f746450b066b5bd882d8d61986188623d635425f649be330\": not found" containerID="f481937c2e6e1199f746450b066b5bd882d8d61986188623d635425f649be330"
Jan 23 01:26:32.069239 kubelet[2838]: I0123 01:26:32.067386 2838 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f481937c2e6e1199f746450b066b5bd882d8d61986188623d635425f649be330"} err="failed to get container status \"f481937c2e6e1199f746450b066b5bd882d8d61986188623d635425f649be330\": rpc error: code = NotFound desc = an error occurred when try to find container \"f481937c2e6e1199f746450b066b5bd882d8d61986188623d635425f649be330\": not found"
Jan 23 01:26:32.069239 kubelet[2838]: I0123 01:26:32.067408 2838 scope.go:117] "RemoveContainer" containerID="7d11634a8371b63b60676d9eb657c1fe6d745105cf49460e57dc067377f366c3"
Jan 23 01:26:32.070847 containerd[1563]: time="2026-01-23T01:26:32.070666903Z" level=info msg="RemoveContainer for \"7d11634a8371b63b60676d9eb657c1fe6d745105cf49460e57dc067377f366c3\""
Jan 23 01:26:32.081966 containerd[1563]: time="2026-01-23T01:26:32.081851770Z" level=info msg="RemoveContainer for \"7d11634a8371b63b60676d9eb657c1fe6d745105cf49460e57dc067377f366c3\" returns successfully"
Jan 23 01:26:32.084575 kubelet[2838]: I0123 01:26:32.084474 2838 scope.go:117] "RemoveContainer" containerID="d7422c329ca2ca820e7cc1f2619f5ad40463e80dc3c5d4dd163d38ca8648d0e2"
Jan 23 01:26:32.090500 containerd[1563]: time="2026-01-23T01:26:32.090470680Z" level=info msg="RemoveContainer for \"d7422c329ca2ca820e7cc1f2619f5ad40463e80dc3c5d4dd163d38ca8648d0e2\""
Jan 23 01:26:32.102395 containerd[1563]: time="2026-01-23T01:26:32.101816467Z" level=info msg="RemoveContainer for \"d7422c329ca2ca820e7cc1f2619f5ad40463e80dc3c5d4dd163d38ca8648d0e2\" returns successfully"
Jan 23 01:26:32.102690 kubelet[2838]: I0123 01:26:32.102536 2838 scope.go:117] "RemoveContainer" containerID="7d11634a8371b63b60676d9eb657c1fe6d745105cf49460e57dc067377f366c3"
Jan 23 01:26:32.104436 containerd[1563]: time="2026-01-23T01:26:32.104382028Z" level=error msg="ContainerStatus for \"7d11634a8371b63b60676d9eb657c1fe6d745105cf49460e57dc067377f366c3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7d11634a8371b63b60676d9eb657c1fe6d745105cf49460e57dc067377f366c3\": not found"
Jan 23 01:26:32.105453 kubelet[2838]: E0123 01:26:32.105412 2838 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7d11634a8371b63b60676d9eb657c1fe6d745105cf49460e57dc067377f366c3\": not found" containerID="7d11634a8371b63b60676d9eb657c1fe6d745105cf49460e57dc067377f366c3"
Jan 23 01:26:32.105854 kubelet[2838]: E0123 01:26:32.105542 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-69f6d" podUID="af7d7f35-7df8-4733-b7d8-eaa6851ed445"
Jan 23 01:26:32.105854 kubelet[2838]: I0123 01:26:32.105551 2838 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7d11634a8371b63b60676d9eb657c1fe6d745105cf49460e57dc067377f366c3"} err="failed to get container status \"7d11634a8371b63b60676d9eb657c1fe6d745105cf49460e57dc067377f366c3\": rpc error: code = NotFound desc = an error occurred when try to find container \"7d11634a8371b63b60676d9eb657c1fe6d745105cf49460e57dc067377f366c3\": not found"
Jan 23 01:26:32.105854 kubelet[2838]: I0123 01:26:32.105609 2838 scope.go:117] "RemoveContainer" containerID="d7422c329ca2ca820e7cc1f2619f5ad40463e80dc3c5d4dd163d38ca8648d0e2"
Jan 23 01:26:32.106524 containerd[1563]: time="2026-01-23T01:26:32.106485031Z" level=error msg="ContainerStatus for \"d7422c329ca2ca820e7cc1f2619f5ad40463e80dc3c5d4dd163d38ca8648d0e2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d7422c329ca2ca820e7cc1f2619f5ad40463e80dc3c5d4dd163d38ca8648d0e2\": not found"
Jan 23 01:26:32.106778 kubelet[2838]: E0123 01:26:32.106706 2838 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d7422c329ca2ca820e7cc1f2619f5ad40463e80dc3c5d4dd163d38ca8648d0e2\": not found" containerID="d7422c329ca2ca820e7cc1f2619f5ad40463e80dc3c5d4dd163d38ca8648d0e2"
Jan 23 01:26:32.106778 kubelet[2838]: I0123 01:26:32.106741 2838 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d7422c329ca2ca820e7cc1f2619f5ad40463e80dc3c5d4dd163d38ca8648d0e2"} err="failed to get container status \"d7422c329ca2ca820e7cc1f2619f5ad40463e80dc3c5d4dd163d38ca8648d0e2\": rpc error: code = NotFound desc = an error occurred when try to find container \"d7422c329ca2ca820e7cc1f2619f5ad40463e80dc3c5d4dd163d38ca8648d0e2\": not found"
Jan 23 01:26:32.110483 kubelet[2838]: I0123 01:26:32.110457 2838 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f853f65-8007-42ea-8e4b-f009906b5cc0" path="/var/lib/kubelet/pods/8f853f65-8007-42ea-8e4b-f009906b5cc0/volumes"
Jan 23 01:26:32.112405 kubelet[2838]: I0123 01:26:32.112380 2838 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e55daa78-d9f7-467a-856a-5c3b45afc015" path="/var/lib/kubelet/pods/e55daa78-d9f7-467a-856a-5c3b45afc015/volumes"
Jan 23 01:26:32.492312 sshd[5209]: Connection closed by 10.0.0.1 port 33810
Jan 23 01:26:32.495617 sshd-session[5206]: pam_unix(sshd:session): session closed for user core
Jan 23 01:26:32.511531 systemd[1]: sshd@46-10.0.0.71:22-10.0.0.1:33810.service: Deactivated successfully.
Jan 23 01:26:32.519498 systemd[1]: session-47.scope: Deactivated successfully.
Jan 23 01:26:32.523819 systemd-logind[1549]: Session 47 logged out. Waiting for processes to exit.
Jan 23 01:26:32.530392 systemd[1]: Started sshd@47-10.0.0.71:22-10.0.0.1:34206.service - OpenSSH per-connection server daemon (10.0.0.1:34206).
Jan 23 01:26:32.534837 systemd-logind[1549]: Removed session 47.
Jan 23 01:26:32.734558 sshd[5356]: Accepted publickey for core from 10.0.0.1 port 34206 ssh2: RSA SHA256:DbK6l1/8QoFpoD2mnerYulPPe3j8Bedze0KTmxI1z+4
Jan 23 01:26:32.738443 sshd-session[5356]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 01:26:32.764218 systemd-logind[1549]: New session 48 of user core.
Jan 23 01:26:32.779392 systemd[1]: Started session-48.scope - Session 48 of User core.
Jan 23 01:26:34.072789 sshd[5359]: Connection closed by 10.0.0.1 port 34206
Jan 23 01:26:34.073582 sshd-session[5356]: pam_unix(sshd:session): session closed for user core
Jan 23 01:26:34.107338 kubelet[2838]: E0123 01:26:34.105837 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-69f6d" podUID="af7d7f35-7df8-4733-b7d8-eaa6851ed445"
Jan 23 01:26:34.109338 systemd[1]: sshd@47-10.0.0.71:22-10.0.0.1:34206.service: Deactivated successfully.
Jan 23 01:26:34.123347 systemd[1]: session-48.scope: Deactivated successfully.
Jan 23 01:26:34.126253 systemd-logind[1549]: Session 48 logged out. Waiting for processes to exit.
Jan 23 01:26:34.139349 systemd[1]: Started sshd@48-10.0.0.71:22-10.0.0.1:34222.service - OpenSSH per-connection server daemon (10.0.0.1:34222).
Jan 23 01:26:34.146933 systemd-logind[1549]: Removed session 48.
Jan 23 01:26:34.264450 systemd[1]: Created slice kubepods-burstable-podd316e628_5f1a_4225_b622_7f3f72aaa626.slice - libcontainer container kubepods-burstable-podd316e628_5f1a_4225_b622_7f3f72aaa626.slice.
Jan 23 01:26:34.280960 sshd[5373]: Accepted publickey for core from 10.0.0.1 port 34222 ssh2: RSA SHA256:DbK6l1/8QoFpoD2mnerYulPPe3j8Bedze0KTmxI1z+4
Jan 23 01:26:34.288677 sshd-session[5373]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 01:26:34.306932 systemd-logind[1549]: New session 49 of user core.
Jan 23 01:26:34.318363 systemd[1]: Started session-49.scope - Session 49 of User core.
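The ContainerStatus "not found" errors in this run are expected: after RemoveContainer succeeds, the kubelet re-queries each deleted ID, containerd answers NotFound over the CRI, and the kubelet logs and ignores it. The same check can be made directly with the containerd Go client; a minimal sketch (the socket path and k8s.io namespace are containerd's CRI defaults):

```go
package main

import (
	"context"
	"fmt"

	containerd "github.com/containerd/containerd"
	"github.com/containerd/containerd/errdefs"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		panic(err)
	}
	defer client.Close()

	// CRI-managed containers live in containerd's k8s.io namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// One of the IDs the kubelet re-queried above.
	const id = "560f70fab0e2f33e556fc9c8ecf893fa9a4384dde9018c9aab4fa78e2c96134e"
	if _, err := client.LoadContainer(ctx, id); errdefs.IsNotFound(err) {
		fmt.Println("container already removed; NotFound is the benign answer seen in the log")
	} else if err != nil {
		panic(err)
	}
}
```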
Jan 23 01:26:34.343231 kubelet[2838]: I0123 01:26:34.340846 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d316e628-5f1a-4225-b622-7f3f72aaa626-hostproc\") pod \"cilium-wnrgk\" (UID: \"d316e628-5f1a-4225-b622-7f3f72aaa626\") " pod="kube-system/cilium-wnrgk"
Jan 23 01:26:34.343231 kubelet[2838]: I0123 01:26:34.342692 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d316e628-5f1a-4225-b622-7f3f72aaa626-etc-cni-netd\") pod \"cilium-wnrgk\" (UID: \"d316e628-5f1a-4225-b622-7f3f72aaa626\") " pod="kube-system/cilium-wnrgk"
Jan 23 01:26:34.343231 kubelet[2838]: I0123 01:26:34.342733 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d316e628-5f1a-4225-b622-7f3f72aaa626-clustermesh-secrets\") pod \"cilium-wnrgk\" (UID: \"d316e628-5f1a-4225-b622-7f3f72aaa626\") " pod="kube-system/cilium-wnrgk"
Jan 23 01:26:34.343231 kubelet[2838]: I0123 01:26:34.342760 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d316e628-5f1a-4225-b622-7f3f72aaa626-cni-path\") pod \"cilium-wnrgk\" (UID: \"d316e628-5f1a-4225-b622-7f3f72aaa626\") " pod="kube-system/cilium-wnrgk"
Jan 23 01:26:34.343231 kubelet[2838]: I0123 01:26:34.342868 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d316e628-5f1a-4225-b622-7f3f72aaa626-host-proc-sys-net\") pod \"cilium-wnrgk\" (UID: \"d316e628-5f1a-4225-b622-7f3f72aaa626\") " pod="kube-system/cilium-wnrgk"
Jan 23 01:26:34.343231 kubelet[2838]: I0123 01:26:34.342895 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d316e628-5f1a-4225-b622-7f3f72aaa626-bpf-maps\") pod \"cilium-wnrgk\" (UID: \"d316e628-5f1a-4225-b622-7f3f72aaa626\") " pod="kube-system/cilium-wnrgk"
Jan 23 01:26:34.343933 kubelet[2838]: I0123 01:26:34.342913 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d316e628-5f1a-4225-b622-7f3f72aaa626-lib-modules\") pod \"cilium-wnrgk\" (UID: \"d316e628-5f1a-4225-b622-7f3f72aaa626\") " pod="kube-system/cilium-wnrgk"
Jan 23 01:26:34.343933 kubelet[2838]: I0123 01:26:34.342932 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d316e628-5f1a-4225-b622-7f3f72aaa626-host-proc-sys-kernel\") pod \"cilium-wnrgk\" (UID: \"d316e628-5f1a-4225-b622-7f3f72aaa626\") " pod="kube-system/cilium-wnrgk"
Jan 23 01:26:34.343933 kubelet[2838]: I0123 01:26:34.342955 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d316e628-5f1a-4225-b622-7f3f72aaa626-hubble-tls\") pod \"cilium-wnrgk\" (UID: \"d316e628-5f1a-4225-b622-7f3f72aaa626\") " pod="kube-system/cilium-wnrgk"
Jan 23 01:26:34.343933 kubelet[2838]: I0123 01:26:34.343666 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v25sb\" (UniqueName: \"kubernetes.io/projected/d316e628-5f1a-4225-b622-7f3f72aaa626-kube-api-access-v25sb\") pod \"cilium-wnrgk\" (UID: \"d316e628-5f1a-4225-b622-7f3f72aaa626\") " pod="kube-system/cilium-wnrgk"
Jan 23 01:26:34.343933 kubelet[2838]: I0123 01:26:34.343699 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d316e628-5f1a-4225-b622-7f3f72aaa626-cilium-cgroup\") pod \"cilium-wnrgk\" (UID: \"d316e628-5f1a-4225-b622-7f3f72aaa626\") " pod="kube-system/cilium-wnrgk"
Jan 23 01:26:34.343933 kubelet[2838]: I0123 01:26:34.343723 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d316e628-5f1a-4225-b622-7f3f72aaa626-xtables-lock\") pod \"cilium-wnrgk\" (UID: \"d316e628-5f1a-4225-b622-7f3f72aaa626\") " pod="kube-system/cilium-wnrgk"
Jan 23 01:26:34.346455 kubelet[2838]: I0123 01:26:34.343742 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d316e628-5f1a-4225-b622-7f3f72aaa626-cilium-run\") pod \"cilium-wnrgk\" (UID: \"d316e628-5f1a-4225-b622-7f3f72aaa626\") " pod="kube-system/cilium-wnrgk"
Jan 23 01:26:34.346455 kubelet[2838]: I0123 01:26:34.343771 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/d316e628-5f1a-4225-b622-7f3f72aaa626-cilium-ipsec-secrets\") pod \"cilium-wnrgk\" (UID: \"d316e628-5f1a-4225-b622-7f3f72aaa626\") " pod="kube-system/cilium-wnrgk"
Jan 23 01:26:34.346455 kubelet[2838]: I0123 01:26:34.343880 2838 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d316e628-5f1a-4225-b622-7f3f72aaa626-cilium-config-path\") pod \"cilium-wnrgk\" (UID: \"d316e628-5f1a-4225-b622-7f3f72aaa626\") " pod="kube-system/cilium-wnrgk"
Jan 23 01:26:34.417775 sshd[5376]: Connection closed by 10.0.0.1 port 34222
Jan 23 01:26:34.418608 sshd-session[5373]: pam_unix(sshd:session): session closed for user core
Jan 23 01:26:34.450602 systemd[1]: sshd@48-10.0.0.71:22-10.0.0.1:34222.service: Deactivated successfully.
Jan 23 01:26:34.458555 systemd[1]: session-49.scope: Deactivated successfully.
Jan 23 01:26:34.469376 systemd-logind[1549]: Session 49 logged out. Waiting for processes to exit.
Jan 23 01:26:34.471278 systemd[1]: Started sshd@49-10.0.0.71:22-10.0.0.1:34236.service - OpenSSH per-connection server daemon (10.0.0.1:34236).
Jan 23 01:26:34.493745 systemd-logind[1549]: Removed session 49.
Jan 23 01:26:34.581927 kubelet[2838]: E0123 01:26:34.581742 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 01:26:34.600562 containerd[1563]: time="2026-01-23T01:26:34.583948378Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wnrgk,Uid:d316e628-5f1a-4225-b622-7f3f72aaa626,Namespace:kube-system,Attempt:0,}"
Jan 23 01:26:34.653395 sshd[5386]: Accepted publickey for core from 10.0.0.1 port 34236 ssh2: RSA SHA256:DbK6l1/8QoFpoD2mnerYulPPe3j8Bedze0KTmxI1z+4
Jan 23 01:26:34.664857 sshd-session[5386]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 01:26:34.727744 systemd-logind[1549]: New session 50 of user core.
Jan 23 01:26:34.732967 containerd[1563]: time="2026-01-23T01:26:34.732353553Z" level=info msg="connecting to shim 7b21f350d133fd6212c3fa42b3923130c9aa4c2914a79640b46d4d4f67c45417" address="unix:///run/containerd/s/cf9383b94092f04cd840f5b9a70f2a91c4be32263fb92a4c940d84f795609f0c" namespace=k8s.io protocol=ttrpc version=3
Jan 23 01:26:34.740525 systemd[1]: Started session-50.scope - Session 50 of User core.
Jan 23 01:26:34.860516 systemd[1]: Started cri-containerd-7b21f350d133fd6212c3fa42b3923130c9aa4c2914a79640b46d4d4f67c45417.scope - libcontainer container 7b21f350d133fd6212c3fa42b3923130c9aa4c2914a79640b46d4d4f67c45417.
Jan 23 01:26:35.022741 containerd[1563]: time="2026-01-23T01:26:35.022609479Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wnrgk,Uid:d316e628-5f1a-4225-b622-7f3f72aaa626,Namespace:kube-system,Attempt:0,} returns sandbox id \"7b21f350d133fd6212c3fa42b3923130c9aa4c2914a79640b46d4d4f67c45417\""
Jan 23 01:26:35.027291 kubelet[2838]: E0123 01:26:35.026849 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 01:26:35.063947 containerd[1563]: time="2026-01-23T01:26:35.063609824Z" level=info msg="CreateContainer within sandbox \"7b21f350d133fd6212c3fa42b3923130c9aa4c2914a79640b46d4d4f67c45417\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 23 01:26:35.117422 containerd[1563]: time="2026-01-23T01:26:35.116638003Z" level=info msg="Container d2ccaa78704ea97df5145c83f39f5798c7db2121df6d9727c814cf7c834abd7c: CDI devices from CRI Config.CDIDevices: []"
Jan 23 01:26:35.150476 containerd[1563]: time="2026-01-23T01:26:35.149844282Z" level=info msg="CreateContainer within sandbox \"7b21f350d133fd6212c3fa42b3923130c9aa4c2914a79640b46d4d4f67c45417\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d2ccaa78704ea97df5145c83f39f5798c7db2121df6d9727c814cf7c834abd7c\""
Jan 23 01:26:35.152882 containerd[1563]: time="2026-01-23T01:26:35.152854307Z" level=info msg="StartContainer for \"d2ccaa78704ea97df5145c83f39f5798c7db2121df6d9727c814cf7c834abd7c\""
Jan 23 01:26:35.160368 containerd[1563]: time="2026-01-23T01:26:35.160207399Z" level=info msg="connecting to shim d2ccaa78704ea97df5145c83f39f5798c7db2121df6d9727c814cf7c834abd7c" address="unix:///run/containerd/s/cf9383b94092f04cd840f5b9a70f2a91c4be32263fb92a4c940d84f795609f0c" protocol=ttrpc version=3
Jan 23 01:26:35.254462 systemd[1]: Started cri-containerd-d2ccaa78704ea97df5145c83f39f5798c7db2121df6d9727c814cf7c834abd7c.scope - libcontainer container d2ccaa78704ea97df5145c83f39f5798c7db2121df6d9727c814cf7c834abd7c.
Jan 23 01:26:35.426223 containerd[1563]: time="2026-01-23T01:26:35.424654894Z" level=info msg="StartContainer for \"d2ccaa78704ea97df5145c83f39f5798c7db2121df6d9727c814cf7c834abd7c\" returns successfully"
Jan 23 01:26:35.524858 systemd[1]: cri-containerd-d2ccaa78704ea97df5145c83f39f5798c7db2121df6d9727c814cf7c834abd7c.scope: Deactivated successfully.
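The RunPodSandbox entries are the kubelet exercising the CRI; the "connecting to shim" lines are containerd reaching the per-pod runc shim one layer below, over the ttrpc socket shown. The same gRPC call can be issued directly against containerd's CRI endpoint; a minimal sketch (socket path is containerd's default, metadata copied from the log):

```go
package main

import (
	"context"
	"fmt"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.NewClient("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	resp, err := rt.RunPodSandbox(context.Background(), &runtimeapi.RunPodSandboxRequest{
		Config: &runtimeapi.PodSandboxConfig{
			// The same metadata the kubelet sent above.
			Metadata: &runtimeapi.PodSandboxMetadata{
				Name:      "cilium-wnrgk",
				Namespace: "kube-system",
				Uid:       "d316e628-5f1a-4225-b622-7f3f72aaa626",
				Attempt:   0,
			},
		},
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("sandbox id:", resp.PodSandboxId) // 7b21f350... in the log
}
```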
Jan 23 01:26:35.549692 containerd[1563]: time="2026-01-23T01:26:35.549634110Z" level=info msg="received container exit event container_id:\"d2ccaa78704ea97df5145c83f39f5798c7db2121df6d9727c814cf7c834abd7c\" id:\"d2ccaa78704ea97df5145c83f39f5798c7db2121df6d9727c814cf7c834abd7c\" pid:5459 exited_at:{seconds:1769131595 nanos:546279334}"
Jan 23 01:26:35.648359 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d2ccaa78704ea97df5145c83f39f5798c7db2121df6d9727c814cf7c834abd7c-rootfs.mount: Deactivated successfully.
Jan 23 01:26:35.954348 kubelet[2838]: E0123 01:26:35.951298 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 01:26:36.007314 containerd[1563]: time="2026-01-23T01:26:36.006464993Z" level=info msg="CreateContainer within sandbox \"7b21f350d133fd6212c3fa42b3923130c9aa4c2914a79640b46d4d4f67c45417\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 23 01:26:36.033748 containerd[1563]: time="2026-01-23T01:26:36.032784468Z" level=info msg="Container 6fbbc4a8e765d470b550732671af29f3bfda32048b15bf9a45a62d506177ccbd: CDI devices from CRI Config.CDIDevices: []"
Jan 23 01:26:36.080877 containerd[1563]: time="2026-01-23T01:26:36.080828229Z" level=info msg="CreateContainer within sandbox \"7b21f350d133fd6212c3fa42b3923130c9aa4c2914a79640b46d4d4f67c45417\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"6fbbc4a8e765d470b550732671af29f3bfda32048b15bf9a45a62d506177ccbd\""
Jan 23 01:26:36.097956 containerd[1563]: time="2026-01-23T01:26:36.096816354Z" level=info msg="StartContainer for \"6fbbc4a8e765d470b550732671af29f3bfda32048b15bf9a45a62d506177ccbd\""
Jan 23 01:26:36.112411 kubelet[2838]: E0123 01:26:36.107801 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-69f6d" podUID="af7d7f35-7df8-4733-b7d8-eaa6851ed445"
Jan 23 01:26:36.123385 containerd[1563]: time="2026-01-23T01:26:36.123330385Z" level=info msg="connecting to shim 6fbbc4a8e765d470b550732671af29f3bfda32048b15bf9a45a62d506177ccbd" address="unix:///run/containerd/s/cf9383b94092f04cd840f5b9a70f2a91c4be32263fb92a4c940d84f795609f0c" protocol=ttrpc version=3
Jan 23 01:26:36.222885 systemd[1]: Started cri-containerd-6fbbc4a8e765d470b550732671af29f3bfda32048b15bf9a45a62d506177ccbd.scope - libcontainer container 6fbbc4a8e765d470b550732671af29f3bfda32048b15bf9a45a62d506177ccbd.
Jan 23 01:26:36.356878 containerd[1563]: time="2026-01-23T01:26:36.356663273Z" level=info msg="StartContainer for \"6fbbc4a8e765d470b550732671af29f3bfda32048b15bf9a45a62d506177ccbd\" returns successfully"
Jan 23 01:26:36.437728 containerd[1563]: time="2026-01-23T01:26:36.435533304Z" level=info msg="received container exit event container_id:\"6fbbc4a8e765d470b550732671af29f3bfda32048b15bf9a45a62d506177ccbd\" id:\"6fbbc4a8e765d470b550732671af29f3bfda32048b15bf9a45a62d506177ccbd\" pid:5506 exited_at:{seconds:1769131596 nanos:435341887}"
Jan 23 01:26:36.436440 systemd[1]: cri-containerd-6fbbc4a8e765d470b550732671af29f3bfda32048b15bf9a45a62d506177ccbd.scope: Deactivated successfully.
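The apply-sysctl-overwrites init container exists because the datapath needs specific kernel settings that defaults may override; mechanically, a sysctl write is just a file write under /proc/sys. A loose sketch (the log does not show which keys were set, so the key below is only an example):

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// setSysctl writes a sysctl the way an init container like the one above
// would: dotted key -> /proc/sys path. Requires root and the right
// namespaces; the specific key is a hypothetical example.
func setSysctl(key, value string) error {
	path := "/proc/sys/" + strings.ReplaceAll(key, ".", "/")
	return os.WriteFile(path, []byte(value), 0o644)
}

func main() {
	if err := setSysctl("net.ipv4.conf.all.rp_filter", "0"); err != nil {
		fmt.Fprintln(os.Stderr, "sysctl:", err)
	}
}
```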
Jan 23 01:26:36.600739 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6fbbc4a8e765d470b550732671af29f3bfda32048b15bf9a45a62d506177ccbd-rootfs.mount: Deactivated successfully.
Jan 23 01:26:36.803467 kubelet[2838]: E0123 01:26:36.803394 2838 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 23 01:26:36.973256 kubelet[2838]: E0123 01:26:36.972568 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 01:26:37.030395 containerd[1563]: time="2026-01-23T01:26:37.029615442Z" level=info msg="CreateContainer within sandbox \"7b21f350d133fd6212c3fa42b3923130c9aa4c2914a79640b46d4d4f67c45417\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 23 01:26:37.087629 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1544631644.mount: Deactivated successfully.
Jan 23 01:26:37.097333 containerd[1563]: time="2026-01-23T01:26:37.091713313Z" level=info msg="Container 1fbeccbc91d473ce8cd3b1dbbc92b07c85e7a3a28ddb9769ae31135e221eda73: CDI devices from CRI Config.CDIDevices: []"
Jan 23 01:26:37.120594 containerd[1563]: time="2026-01-23T01:26:37.120394938Z" level=info msg="CreateContainer within sandbox \"7b21f350d133fd6212c3fa42b3923130c9aa4c2914a79640b46d4d4f67c45417\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"1fbeccbc91d473ce8cd3b1dbbc92b07c85e7a3a28ddb9769ae31135e221eda73\""
Jan 23 01:26:37.125024 containerd[1563]: time="2026-01-23T01:26:37.124615240Z" level=info msg="StartContainer for \"1fbeccbc91d473ce8cd3b1dbbc92b07c85e7a3a28ddb9769ae31135e221eda73\""
Jan 23 01:26:37.128596 containerd[1563]: time="2026-01-23T01:26:37.127927987Z" level=info msg="connecting to shim 1fbeccbc91d473ce8cd3b1dbbc92b07c85e7a3a28ddb9769ae31135e221eda73" address="unix:///run/containerd/s/cf9383b94092f04cd840f5b9a70f2a91c4be32263fb92a4c940d84f795609f0c" protocol=ttrpc version=3
Jan 23 01:26:37.238741 systemd[1]: Started cri-containerd-1fbeccbc91d473ce8cd3b1dbbc92b07c85e7a3a28ddb9769ae31135e221eda73.scope - libcontainer container 1fbeccbc91d473ce8cd3b1dbbc92b07c85e7a3a28ddb9769ae31135e221eda73.
Jan 23 01:26:37.469538 containerd[1563]: time="2026-01-23T01:26:37.469408200Z" level=info msg="StartContainer for \"1fbeccbc91d473ce8cd3b1dbbc92b07c85e7a3a28ddb9769ae31135e221eda73\" returns successfully"
Jan 23 01:26:37.483530 systemd[1]: cri-containerd-1fbeccbc91d473ce8cd3b1dbbc92b07c85e7a3a28ddb9769ae31135e221eda73.scope: Deactivated successfully.
Jan 23 01:26:37.492334 containerd[1563]: time="2026-01-23T01:26:37.491943327Z" level=info msg="received container exit event container_id:\"1fbeccbc91d473ce8cd3b1dbbc92b07c85e7a3a28ddb9769ae31135e221eda73\" id:\"1fbeccbc91d473ce8cd3b1dbbc92b07c85e7a3a28ddb9769ae31135e221eda73\" pid:5550 exited_at:{seconds:1769131597 nanos:491571484}"
Jan 23 01:26:37.596959 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1fbeccbc91d473ce8cd3b1dbbc92b07c85e7a3a28ddb9769ae31135e221eda73-rootfs.mount: Deactivated successfully.
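mount-bpf-fs runs before the agent because pinned BPF maps only survive agent restarts if /sys/fs/bpf is a mounted bpf filesystem; the operation reduces to a single mount(2) call. A sketch with golang.org/x/sys/unix (needs CAP_SYS_ADMIN; treating EBUSY as already-mounted is an assumption matching the usual idempotent pattern):

```go
package main

import (
	"errors"
	"fmt"

	"golang.org/x/sys/unix"
)

// mountBpfFS is roughly what the mount-bpf-fs init step above performs:
// mount a bpf filesystem at /sys/fs/bpf so pinned maps outlive the agent.
func mountBpfFS() error {
	err := unix.Mount("bpffs", "/sys/fs/bpf", "bpf", 0, "")
	if errors.Is(err, unix.EBUSY) {
		return nil // already mounted; the step is a no-op on re-runs
	}
	return err
}

func main() {
	if err := mountBpfFS(); err != nil {
		fmt.Println("mount:", err)
	}
}
```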
Jan 23 01:26:38.017933 kubelet[2838]: E0123 01:26:38.017828 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 01:26:38.036145 containerd[1563]: time="2026-01-23T01:26:38.035483904Z" level=info msg="CreateContainer within sandbox \"7b21f350d133fd6212c3fa42b3923130c9aa4c2914a79640b46d4d4f67c45417\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 23 01:26:38.086253 containerd[1563]: time="2026-01-23T01:26:38.085839549Z" level=info msg="Container 159a8f5ed84a9d2cbb86bf055989ae9fd3c541a8c8981a580cab10dc6a496c4c: CDI devices from CRI Config.CDIDevices: []"
Jan 23 01:26:38.109920 kubelet[2838]: E0123 01:26:38.107963 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-69f6d" podUID="af7d7f35-7df8-4733-b7d8-eaa6851ed445"
Jan 23 01:26:38.134381 containerd[1563]: time="2026-01-23T01:26:38.133329037Z" level=info msg="CreateContainer within sandbox \"7b21f350d133fd6212c3fa42b3923130c9aa4c2914a79640b46d4d4f67c45417\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"159a8f5ed84a9d2cbb86bf055989ae9fd3c541a8c8981a580cab10dc6a496c4c\""
Jan 23 01:26:38.151619 containerd[1563]: time="2026-01-23T01:26:38.150578727Z" level=info msg="StartContainer for \"159a8f5ed84a9d2cbb86bf055989ae9fd3c541a8c8981a580cab10dc6a496c4c\""
Jan 23 01:26:38.161242 containerd[1563]: time="2026-01-23T01:26:38.160551890Z" level=info msg="connecting to shim 159a8f5ed84a9d2cbb86bf055989ae9fd3c541a8c8981a580cab10dc6a496c4c" address="unix:///run/containerd/s/cf9383b94092f04cd840f5b9a70f2a91c4be32263fb92a4c940d84f795609f0c" protocol=ttrpc version=3
Jan 23 01:26:38.282268 systemd[1]: Started cri-containerd-159a8f5ed84a9d2cbb86bf055989ae9fd3c541a8c8981a580cab10dc6a496c4c.scope - libcontainer container 159a8f5ed84a9d2cbb86bf055989ae9fd3c541a8c8981a580cab10dc6a496c4c.
Jan 23 01:26:38.457574 systemd[1]: cri-containerd-159a8f5ed84a9d2cbb86bf055989ae9fd3c541a8c8981a580cab10dc6a496c4c.scope: Deactivated successfully.
Jan 23 01:26:38.468353 containerd[1563]: time="2026-01-23T01:26:38.467905075Z" level=info msg="received container exit event container_id:\"159a8f5ed84a9d2cbb86bf055989ae9fd3c541a8c8981a580cab10dc6a496c4c\" id:\"159a8f5ed84a9d2cbb86bf055989ae9fd3c541a8c8981a580cab10dc6a496c4c\" pid:5591 exited_at:{seconds:1769131598 nanos:457449754}"
Jan 23 01:26:38.507336 containerd[1563]: time="2026-01-23T01:26:38.506899185Z" level=info msg="StartContainer for \"159a8f5ed84a9d2cbb86bf055989ae9fd3c541a8c8981a580cab10dc6a496c4c\" returns successfully"
Jan 23 01:26:38.598588 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-159a8f5ed84a9d2cbb86bf055989ae9fd3c541a8c8981a580cab10dc6a496c4c-rootfs.mount: Deactivated successfully.
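clean-cilium-state wipes leftover datapath state from the agent pod torn down earlier, so the new agent starts from a known-clean slate. The log does not show what it removed; as a loose sketch of the pattern (the directory paths are assumptions, not taken from this log):

```go
package main

import (
	"fmt"
	"os"
)

func main() {
	// Hypothetical state paths; the actual set cleaned by the init
	// container is not visible in this log.
	for _, dir := range []string{
		"/var/run/cilium/state",
		"/sys/fs/bpf/tc/globals",
	} {
		if err := os.RemoveAll(dir); err != nil {
			fmt.Fprintln(os.Stderr, "clean:", err)
		}
	}
}
```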
Jan 23 01:26:38.651472 kubelet[2838]: I0123 01:26:38.650771 2838 setters.go:618] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T01:26:38Z","lastTransitionTime":"2026-01-23T01:26:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jan 23 01:26:39.043451 kubelet[2838]: E0123 01:26:39.043243 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 01:26:39.080178 containerd[1563]: time="2026-01-23T01:26:39.079892467Z" level=info msg="CreateContainer within sandbox \"7b21f350d133fd6212c3fa42b3923130c9aa4c2914a79640b46d4d4f67c45417\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 23 01:26:39.150593 containerd[1563]: time="2026-01-23T01:26:39.149531353Z" level=info msg="Container 9eead922f83c79f283e57f9e050dcdbe5ba0d66a74daa5fd820730eb68eceeef: CDI devices from CRI Config.CDIDevices: []"
Jan 23 01:26:39.176456 containerd[1563]: time="2026-01-23T01:26:39.175879233Z" level=info msg="CreateContainer within sandbox \"7b21f350d133fd6212c3fa42b3923130c9aa4c2914a79640b46d4d4f67c45417\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"9eead922f83c79f283e57f9e050dcdbe5ba0d66a74daa5fd820730eb68eceeef\""
Jan 23 01:26:39.180603 containerd[1563]: time="2026-01-23T01:26:39.180421665Z" level=info msg="StartContainer for \"9eead922f83c79f283e57f9e050dcdbe5ba0d66a74daa5fd820730eb68eceeef\""
Jan 23 01:26:39.185390 containerd[1563]: time="2026-01-23T01:26:39.181904330Z" level=info msg="connecting to shim 9eead922f83c79f283e57f9e050dcdbe5ba0d66a74daa5fd820730eb68eceeef" address="unix:///run/containerd/s/cf9383b94092f04cd840f5b9a70f2a91c4be32263fb92a4c940d84f795609f0c" protocol=ttrpc version=3
Jan 23 01:26:39.275814 systemd[1]: Started cri-containerd-9eead922f83c79f283e57f9e050dcdbe5ba0d66a74daa5fd820730eb68eceeef.scope - libcontainer container 9eead922f83c79f283e57f9e050dcdbe5ba0d66a74daa5fd820730eb68eceeef.
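The "Node became not ready" entry is the kubelet flipping the Ready condition on its Node object; the JSON payload in the log is exactly a serialized NodeCondition. The same structure in Go, using k8s.io/api types with the values copied from the log entry:

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	now := metav1.Now() // the log pins both timestamps to 2026-01-23T01:26:38Z
	cond := corev1.NodeCondition{
		Type:               corev1.NodeReady,
		Status:             corev1.ConditionFalse,
		LastHeartbeatTime:  now,
		LastTransitionTime: now,
		Reason:             "KubeletNotReady",
		Message: "container runtime network not ready: NetworkReady=false " +
			"reason:NetworkPluginNotReady message:Network plugin returns error: " +
			"cni plugin not initialized",
	}
	b, _ := json.Marshal(cond)
	fmt.Println(string(b)) // matches the condition={...} payload above
}
```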
Jan 23 01:26:39.531348 containerd[1563]: time="2026-01-23T01:26:39.530940737Z" level=info msg="StartContainer for \"9eead922f83c79f283e57f9e050dcdbe5ba0d66a74daa5fd820730eb68eceeef\" returns successfully"
Jan 23 01:26:40.126790 kubelet[2838]: E0123 01:26:40.125549 2838 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-69f6d" podUID="af7d7f35-7df8-4733-b7d8-eaa6851ed445"
Jan 23 01:26:41.013495 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx))
Jan 23 01:26:41.090809 kubelet[2838]: E0123 01:26:41.084670 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 01:26:42.123639 kubelet[2838]: E0123 01:26:42.123424 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 01:26:42.589368 kubelet[2838]: E0123 01:26:42.588954 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 01:26:45.106567 kubelet[2838]: E0123 01:26:45.104704 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 01:26:48.509793 systemd-networkd[1454]: lxc_health: Link UP
Jan 23 01:26:48.532213 systemd-networkd[1454]: lxc_health: Gained carrier
Jan 23 01:26:48.603464 kubelet[2838]: E0123 01:26:48.601908 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 01:26:48.658178 kubelet[2838]: I0123 01:26:48.656870 2838 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-wnrgk" podStartSLOduration=14.656849937 podStartE2EDuration="14.656849937s" podCreationTimestamp="2026-01-23 01:26:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 01:26:41.145920466 +0000 UTC m=+400.120671392" watchObservedRunningTime="2026-01-23 01:26:48.656849937 +0000 UTC m=+407.631600862"
Jan 23 01:26:49.146430 kubelet[2838]: E0123 01:26:49.145595 2838 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 23 01:26:50.188442 systemd-networkd[1454]: lxc_health: Gained IPv6LL
Jan 23 01:26:54.558848 sshd[5409]: Connection closed by 10.0.0.1 port 34236
Jan 23 01:26:54.560949 sshd-session[5386]: pam_unix(sshd:session): session closed for user core
Jan 23 01:26:54.580694 systemd[1]: sshd@49-10.0.0.71:22-10.0.0.1:34236.service: Deactivated successfully.
Jan 23 01:26:54.602279 systemd[1]: session-50.scope: Deactivated successfully.
Jan 23 01:26:54.607344 systemd-logind[1549]: Session 50 logged out. Waiting for processes to exit.
Jan 23 01:26:54.610865 systemd-logind[1549]: Removed session 50.
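The recurring dns.go "Nameserver limits exceeded" errors that bracket this whole sequence come from the kubelet capping resolv.conf at three nameservers (glibc's resolver limit) when it builds pod DNS config; anything past the third is dropped, leaving the applied line shown in the log (1.1.1.1 1.0.0.1 8.8.8.8). A minimal sketch of that truncation (the constant and parsing are simplified relative to the kubelet's actual dns package):

```go
package main

import (
	"fmt"
	"strings"
)

// capNameservers keeps only the first three nameserver entries, mirroring
// the limit the kubelet enforces; the overflow is what triggers the
// "Nameserver limits exceeded" errors above.
func capNameservers(resolvConf string) (applied, omitted []string) {
	const maxNameservers = 3 // glibc's resolver limit
	for _, line := range strings.Split(resolvConf, "\n") {
		f := strings.Fields(line)
		if len(f) == 2 && f[0] == "nameserver" {
			if len(applied) < maxNameservers {
				applied = append(applied, f[1])
			} else {
				omitted = append(omitted, f[1])
			}
		}
	}
	return applied, omitted
}

func main() {
	// A hypothetical node resolv.conf with one server too many.
	conf := "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 9.9.9.9\n"
	applied, omitted := capNameservers(conf)
	fmt.Println("applied:", strings.Join(applied, " "), "| omitted:", omitted)
}
```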