Jan 17 00:37:46.718576 kernel: Linux version 6.6.119-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Jan 16 22:25:55 -00 2026 Jan 17 00:37:46.718604 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=5950c0a3c50d11b7bc07a3e3bf06049ed0b5a605b5e0b52a981b78f1c63eeedd Jan 17 00:37:46.718620 kernel: BIOS-provided physical RAM map: Jan 17 00:37:46.718629 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Jan 17 00:37:46.718638 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Jan 17 00:37:46.718646 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Jan 17 00:37:46.718657 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Jan 17 00:37:46.718666 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Jan 17 00:37:46.718675 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable Jan 17 00:37:46.718684 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS Jan 17 00:37:46.718696 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable Jan 17 00:37:46.718705 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved Jan 17 00:37:46.718714 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20 Jan 17 00:37:46.718724 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved Jan 17 00:37:46.718735 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data Jan 17 00:37:46.718744 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Jan 17 00:37:46.718757 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable Jan 17 00:37:46.718767 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved Jan 17 00:37:46.718776 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Jan 17 00:37:46.718786 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Jan 17 00:37:46.718796 kernel: NX (Execute Disable) protection: active Jan 17 00:37:46.718805 kernel: APIC: Static calls initialized Jan 17 00:37:46.718815 kernel: efi: EFI v2.7 by EDK II Jan 17 00:37:46.718824 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b675198 Jan 17 00:37:46.718834 kernel: SMBIOS 2.8 present. 
Jan 17 00:37:46.718844 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015 Jan 17 00:37:46.718853 kernel: Hypervisor detected: KVM Jan 17 00:37:46.718866 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 17 00:37:46.718876 kernel: kvm-clock: using sched offset of 14375512451 cycles Jan 17 00:37:46.718886 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 17 00:37:46.718896 kernel: tsc: Detected 2445.424 MHz processor Jan 17 00:37:46.718906 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 17 00:37:46.718916 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 17 00:37:46.718926 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000 Jan 17 00:37:46.718936 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Jan 17 00:37:46.718946 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 17 00:37:46.719079 kernel: Using GB pages for direct mapping Jan 17 00:37:46.719091 kernel: Secure boot disabled Jan 17 00:37:46.719172 kernel: ACPI: Early table checksum verification disabled Jan 17 00:37:46.719184 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Jan 17 00:37:46.719201 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Jan 17 00:37:46.719212 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 17 00:37:46.719223 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 17 00:37:46.719236 kernel: ACPI: FACS 0x000000009CBDD000 000040 Jan 17 00:37:46.719247 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 17 00:37:46.719257 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 17 00:37:46.719268 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 17 00:37:46.719278 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 17 00:37:46.719289 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) Jan 17 00:37:46.719299 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] Jan 17 00:37:46.719313 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9] Jan 17 00:37:46.719323 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Jan 17 00:37:46.719334 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] Jan 17 00:37:46.719344 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] Jan 17 00:37:46.719355 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] Jan 17 00:37:46.719365 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] Jan 17 00:37:46.719375 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] Jan 17 00:37:46.719386 kernel: No NUMA configuration found Jan 17 00:37:46.719396 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff] Jan 17 00:37:46.719409 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff] Jan 17 00:37:46.719420 kernel: Zone ranges: Jan 17 00:37:46.719430 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 17 00:37:46.719440 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff] Jan 17 00:37:46.719451 kernel: Normal empty Jan 17 00:37:46.719461 kernel: Movable zone start for each node Jan 17 00:37:46.719471 kernel: Early memory node ranges Jan 17 00:37:46.719482 
kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Jan 17 00:37:46.719492 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Jan 17 00:37:46.719502 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Jan 17 00:37:46.719515 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff] Jan 17 00:37:46.719526 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff] Jan 17 00:37:46.719536 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff] Jan 17 00:37:46.719547 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff] Jan 17 00:37:46.719557 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 17 00:37:46.719567 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Jan 17 00:37:46.719578 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Jan 17 00:37:46.719589 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 17 00:37:46.719599 kernel: On node 0, zone DMA: 240 pages in unavailable ranges Jan 17 00:37:46.719613 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges Jan 17 00:37:46.719623 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges Jan 17 00:37:46.719634 kernel: ACPI: PM-Timer IO Port: 0x608 Jan 17 00:37:46.719644 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 17 00:37:46.719654 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 17 00:37:46.719665 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jan 17 00:37:46.719675 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 17 00:37:46.719685 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 17 00:37:46.719696 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 17 00:37:46.719709 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 17 00:37:46.719720 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 17 00:37:46.719730 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jan 17 00:37:46.719740 kernel: TSC deadline timer available Jan 17 00:37:46.719751 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Jan 17 00:37:46.719761 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 17 00:37:46.719771 kernel: kvm-guest: KVM setup pv remote TLB flush Jan 17 00:37:46.719782 kernel: kvm-guest: setup PV sched yield Jan 17 00:37:46.719792 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Jan 17 00:37:46.719805 kernel: Booting paravirtualized kernel on KVM Jan 17 00:37:46.719816 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 17 00:37:46.719826 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Jan 17 00:37:46.719837 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288 Jan 17 00:37:46.719847 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152 Jan 17 00:37:46.719857 kernel: pcpu-alloc: [0] 0 1 2 3 Jan 17 00:37:46.719867 kernel: kvm-guest: PV spinlocks enabled Jan 17 00:37:46.719878 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 17 00:37:46.719889 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=5950c0a3c50d11b7bc07a3e3bf06049ed0b5a605b5e0b52a981b78f1c63eeedd Jan 17 
00:37:46.719903 kernel: random: crng init done Jan 17 00:37:46.719913 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 17 00:37:46.719924 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 17 00:37:46.719934 kernel: Fallback order for Node 0: 0 Jan 17 00:37:46.719944 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759 Jan 17 00:37:46.720005 kernel: Policy zone: DMA32 Jan 17 00:37:46.720067 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 17 00:37:46.720080 kernel: Memory: 2400616K/2567000K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42884K init, 2312K bss, 166124K reserved, 0K cma-reserved) Jan 17 00:37:46.720097 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jan 17 00:37:46.720108 kernel: ftrace: allocating 37989 entries in 149 pages Jan 17 00:37:46.720173 kernel: ftrace: allocated 149 pages with 4 groups Jan 17 00:37:46.720184 kernel: Dynamic Preempt: voluntary Jan 17 00:37:46.720195 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 17 00:37:46.720223 kernel: rcu: RCU event tracing is enabled. Jan 17 00:37:46.720237 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jan 17 00:37:46.720248 kernel: Trampoline variant of Tasks RCU enabled. Jan 17 00:37:46.720259 kernel: Rude variant of Tasks RCU enabled. Jan 17 00:37:46.720270 kernel: Tracing variant of Tasks RCU enabled. Jan 17 00:37:46.720281 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 17 00:37:46.720292 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jan 17 00:37:46.720307 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Jan 17 00:37:46.720318 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 17 00:37:46.720328 kernel: Console: colour dummy device 80x25 Jan 17 00:37:46.720339 kernel: printk: console [ttyS0] enabled Jan 17 00:37:46.720350 kernel: ACPI: Core revision 20230628 Jan 17 00:37:46.720364 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jan 17 00:37:46.720375 kernel: APIC: Switch to symmetric I/O mode setup Jan 17 00:37:46.720386 kernel: x2apic enabled Jan 17 00:37:46.720397 kernel: APIC: Switched APIC routing to: physical x2apic Jan 17 00:37:46.720408 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Jan 17 00:37:46.720419 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Jan 17 00:37:46.720430 kernel: kvm-guest: setup PV IPIs Jan 17 00:37:46.720441 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jan 17 00:37:46.720601 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Jan 17 00:37:46.720869 kernel: Calibrating delay loop (skipped) preset value.. 
4890.84 BogoMIPS (lpj=2445424) Jan 17 00:37:46.720885 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jan 17 00:37:46.720897 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Jan 17 00:37:46.720908 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Jan 17 00:37:46.721075 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 17 00:37:46.721089 kernel: Spectre V2 : Mitigation: Retpolines Jan 17 00:37:46.721149 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jan 17 00:37:46.721160 kernel: Speculative Store Bypass: Vulnerable Jan 17 00:37:46.721171 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Jan 17 00:37:46.721316 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Jan 17 00:37:46.721328 kernel: active return thunk: srso_alias_return_thunk Jan 17 00:37:46.721339 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Jan 17 00:37:46.721350 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM Jan 17 00:37:46.721361 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode Jan 17 00:37:46.721372 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 17 00:37:46.721384 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 17 00:37:46.721395 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 17 00:37:46.721410 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 17 00:37:46.721421 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Jan 17 00:37:46.721432 kernel: Freeing SMP alternatives memory: 32K Jan 17 00:37:46.721444 kernel: pid_max: default: 32768 minimum: 301 Jan 17 00:37:46.721455 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 17 00:37:46.721466 kernel: landlock: Up and running. Jan 17 00:37:46.721477 kernel: SELinux: Initializing. Jan 17 00:37:46.721488 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 17 00:37:46.721499 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 17 00:37:46.721514 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1) Jan 17 00:37:46.721525 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 17 00:37:46.721536 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 17 00:37:46.721547 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 17 00:37:46.721558 kernel: Performance Events: PMU not available due to virtualization, using software events only. Jan 17 00:37:46.721569 kernel: signal: max sigframe size: 1776 Jan 17 00:37:46.721580 kernel: rcu: Hierarchical SRCU implementation. Jan 17 00:37:46.721592 kernel: rcu: Max phase no-delay instances is 400. Jan 17 00:37:46.721603 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 17 00:37:46.721617 kernel: smp: Bringing up secondary CPUs ... Jan 17 00:37:46.721628 kernel: smpboot: x86: Booting SMP configuration: Jan 17 00:37:46.721639 kernel: .... 
node #0, CPUs: #1 #2 #3 Jan 17 00:37:46.721650 kernel: smp: Brought up 1 node, 4 CPUs Jan 17 00:37:46.721661 kernel: smpboot: Max logical packages: 1 Jan 17 00:37:46.721672 kernel: smpboot: Total of 4 processors activated (19563.39 BogoMIPS) Jan 17 00:37:46.721683 kernel: devtmpfs: initialized Jan 17 00:37:46.721694 kernel: x86/mm: Memory block size: 128MB Jan 17 00:37:46.721705 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Jan 17 00:37:46.721719 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Jan 17 00:37:46.721730 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes) Jan 17 00:37:46.721741 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Jan 17 00:37:46.721752 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Jan 17 00:37:46.721764 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 17 00:37:46.721775 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jan 17 00:37:46.721786 kernel: pinctrl core: initialized pinctrl subsystem Jan 17 00:37:46.721797 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 17 00:37:46.721808 kernel: audit: initializing netlink subsys (disabled) Jan 17 00:37:46.721822 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 17 00:37:46.721833 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 17 00:37:46.722228 kernel: audit: type=2000 audit(1768610263.702:1): state=initialized audit_enabled=0 res=1 Jan 17 00:37:46.722245 kernel: cpuidle: using governor menu Jan 17 00:37:46.722258 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 17 00:37:46.722269 kernel: dca service started, version 1.12.1 Jan 17 00:37:46.722281 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Jan 17 00:37:46.722293 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Jan 17 00:37:46.722310 kernel: PCI: Using configuration type 1 for base access Jan 17 00:37:46.722322 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jan 17 00:37:46.722333 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 17 00:37:46.722348 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 17 00:37:46.722360 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 17 00:37:46.722372 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 17 00:37:46.722382 kernel: ACPI: Added _OSI(Module Device) Jan 17 00:37:46.722394 kernel: ACPI: Added _OSI(Processor Device) Jan 17 00:37:46.722406 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 17 00:37:46.722423 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 17 00:37:46.722432 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 17 00:37:46.722445 kernel: ACPI: Interpreter enabled Jan 17 00:37:46.722456 kernel: ACPI: PM: (supports S0 S3 S5) Jan 17 00:37:46.722468 kernel: ACPI: Using IOAPIC for interrupt routing Jan 17 00:37:46.722478 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 17 00:37:46.722491 kernel: PCI: Using E820 reservations for host bridge windows Jan 17 00:37:46.722502 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Jan 17 00:37:46.722514 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 17 00:37:46.723179 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 17 00:37:46.723401 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Jan 17 00:37:46.723596 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Jan 17 00:37:46.723615 kernel: PCI host bridge to bus 0000:00 Jan 17 00:37:46.723889 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 17 00:37:46.724208 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 17 00:37:46.724391 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 17 00:37:46.724576 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Jan 17 00:37:46.724753 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jan 17 00:37:46.724928 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window] Jan 17 00:37:46.725342 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 17 00:37:46.725823 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Jan 17 00:37:46.726230 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Jan 17 00:37:46.726430 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref] Jan 17 00:37:46.726776 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff] Jan 17 00:37:46.727291 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] Jan 17 00:37:46.727428 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb Jan 17 00:37:46.727579 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 17 00:37:46.727889 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Jan 17 00:37:46.729639 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f] Jan 17 00:37:46.729785 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff] Jan 17 00:37:46.729935 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref] Jan 17 00:37:46.730567 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Jan 17 00:37:46.730747 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f] Jan 17 00:37:46.730901 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff] 
Jan 17 00:37:46.731673 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref] Jan 17 00:37:46.733535 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Jan 17 00:37:46.733758 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff] Jan 17 00:37:46.737107 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff] Jan 17 00:37:46.738202 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref] Jan 17 00:37:46.738407 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref] Jan 17 00:37:46.738720 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Jan 17 00:37:46.740401 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Jan 17 00:37:46.740750 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Jan 17 00:37:46.741066 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df] Jan 17 00:37:46.741258 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff] Jan 17 00:37:46.741569 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Jan 17 00:37:46.741776 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf] Jan 17 00:37:46.741794 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 17 00:37:46.741805 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 17 00:37:46.741816 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 17 00:37:46.741832 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 17 00:37:46.741843 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Jan 17 00:37:46.741853 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Jan 17 00:37:46.741864 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Jan 17 00:37:46.741874 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Jan 17 00:37:46.741884 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Jan 17 00:37:46.741896 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Jan 17 00:37:46.741909 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Jan 17 00:37:46.741920 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Jan 17 00:37:46.741938 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Jan 17 00:37:46.741950 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Jan 17 00:37:46.742087 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Jan 17 00:37:46.742101 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Jan 17 00:37:46.742115 kernel: iommu: Default domain type: Translated Jan 17 00:37:46.742126 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 17 00:37:46.742138 kernel: efivars: Registered efivars operations Jan 17 00:37:46.742150 kernel: PCI: Using ACPI for IRQ routing Jan 17 00:37:46.742164 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 17 00:37:46.742183 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Jan 17 00:37:46.742196 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff] Jan 17 00:37:46.742209 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff] Jan 17 00:37:46.742220 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff] Jan 17 00:37:46.742423 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Jan 17 00:37:46.744308 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Jan 17 00:37:46.744519 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 17 00:37:46.744538 kernel: vgaarb: loaded Jan 17 00:37:46.744558 kernel: hpet0: at MMIO 
0xfed00000, IRQs 2, 8, 0 Jan 17 00:37:46.744572 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jan 17 00:37:46.744584 kernel: clocksource: Switched to clocksource kvm-clock Jan 17 00:37:46.744597 kernel: VFS: Disk quotas dquot_6.6.0 Jan 17 00:37:46.744610 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 17 00:37:46.744622 kernel: pnp: PnP ACPI init Jan 17 00:37:46.745282 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Jan 17 00:37:46.745304 kernel: pnp: PnP ACPI: found 6 devices Jan 17 00:37:46.745318 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 17 00:37:46.745338 kernel: NET: Registered PF_INET protocol family Jan 17 00:37:46.745351 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 17 00:37:46.745364 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 17 00:37:46.745376 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 17 00:37:46.745388 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 17 00:37:46.745400 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 17 00:37:46.745414 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 17 00:37:46.745426 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 17 00:37:46.745444 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 17 00:37:46.745457 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 17 00:37:46.745470 kernel: NET: Registered PF_XDP protocol family Jan 17 00:37:46.745676 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window Jan 17 00:37:46.745886 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref] Jan 17 00:37:46.746314 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 17 00:37:46.746502 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 17 00:37:46.746692 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 17 00:37:46.746827 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Jan 17 00:37:46.747362 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Jan 17 00:37:46.747545 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window] Jan 17 00:37:46.747564 kernel: PCI: CLS 0 bytes, default 64 Jan 17 00:37:46.747632 kernel: Initialise system trusted keyrings Jan 17 00:37:46.747646 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 17 00:37:46.747656 kernel: Key type asymmetric registered Jan 17 00:37:46.747667 kernel: Asymmetric key parser 'x509' registered Jan 17 00:37:46.747679 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 17 00:37:46.747698 kernel: io scheduler mq-deadline registered Jan 17 00:37:46.747710 kernel: io scheduler kyber registered Jan 17 00:37:46.747720 kernel: io scheduler bfq registered Jan 17 00:37:46.747732 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 17 00:37:46.747745 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jan 17 00:37:46.747757 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jan 17 00:37:46.747767 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jan 17 00:37:46.747779 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 17 00:37:46.747791 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 
115200) is a 16550A Jan 17 00:37:46.747808 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 17 00:37:46.747821 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 17 00:37:46.747832 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 17 00:37:46.748278 kernel: rtc_cmos 00:04: RTC can wake from S4 Jan 17 00:37:46.749406 kernel: rtc_cmos 00:04: registered as rtc0 Jan 17 00:37:46.749427 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 17 00:37:46.749614 kernel: rtc_cmos 00:04: setting system clock to 2026-01-17T00:37:45 UTC (1768610265) Jan 17 00:37:46.749801 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Jan 17 00:37:46.749828 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Jan 17 00:37:46.749842 kernel: efifb: probing for efifb Jan 17 00:37:46.749854 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k Jan 17 00:37:46.749866 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1 Jan 17 00:37:46.749878 kernel: efifb: scrolling: redraw Jan 17 00:37:46.749890 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0 Jan 17 00:37:46.749902 kernel: Console: switching to colour frame buffer device 100x37 Jan 17 00:37:46.749914 kernel: fb0: EFI VGA frame buffer device Jan 17 00:37:46.749925 kernel: pstore: Using crash dump compression: deflate Jan 17 00:37:46.749942 kernel: pstore: Registered efi_pstore as persistent store backend Jan 17 00:37:46.750104 kernel: NET: Registered PF_INET6 protocol family Jan 17 00:37:46.750123 kernel: Segment Routing with IPv6 Jan 17 00:37:46.750135 kernel: In-situ OAM (IOAM) with IPv6 Jan 17 00:37:46.750147 kernel: NET: Registered PF_PACKET protocol family Jan 17 00:37:46.750157 kernel: Key type dns_resolver registered Jan 17 00:37:46.750170 kernel: IPI shorthand broadcast: enabled Jan 17 00:37:46.750212 kernel: sched_clock: Marking stable (2017048958, 459037648)->(3155537133, -679450527) Jan 17 00:37:46.750229 kernel: registered taskstats version 1 Jan 17 00:37:46.750247 kernel: Loading compiled-in X.509 certificates Jan 17 00:37:46.750258 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.119-flatcar: b6a847a3a522371f15b0d5425f12279a240740e4' Jan 17 00:37:46.750271 kernel: Key type .fscrypt registered Jan 17 00:37:46.750283 kernel: Key type fscrypt-provisioning registered Jan 17 00:37:46.750294 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jan 17 00:37:46.750307 kernel: ima: Allocated hash algorithm: sha1 Jan 17 00:37:46.750319 kernel: ima: No architecture policies found Jan 17 00:37:46.750331 kernel: clk: Disabling unused clocks Jan 17 00:37:46.750342 kernel: Freeing unused kernel image (initmem) memory: 42884K Jan 17 00:37:46.750360 kernel: Write protecting the kernel read-only data: 36864k Jan 17 00:37:46.750373 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K Jan 17 00:37:46.750383 kernel: Run /init as init process Jan 17 00:37:46.750396 kernel: with arguments: Jan 17 00:37:46.750408 kernel: /init Jan 17 00:37:46.750419 kernel: with environment: Jan 17 00:37:46.750432 kernel: HOME=/ Jan 17 00:37:46.750443 kernel: TERM=linux Jan 17 00:37:46.750458 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 17 00:37:46.750479 systemd[1]: Detected virtualization kvm. Jan 17 00:37:46.750492 systemd[1]: Detected architecture x86-64. Jan 17 00:37:46.750504 systemd[1]: Running in initrd. Jan 17 00:37:46.750517 systemd[1]: No hostname configured, using default hostname. Jan 17 00:37:46.750529 systemd[1]: Hostname set to . Jan 17 00:37:46.750542 systemd[1]: Initializing machine ID from VM UUID. Jan 17 00:37:46.750559 systemd[1]: Queued start job for default target initrd.target. Jan 17 00:37:46.750572 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 00:37:46.750585 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 00:37:46.750598 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 17 00:37:46.750611 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 17 00:37:46.750625 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 17 00:37:46.750646 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 17 00:37:46.750661 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 17 00:37:46.750674 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 17 00:37:46.750688 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 00:37:46.750700 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 17 00:37:46.750713 systemd[1]: Reached target paths.target - Path Units. Jan 17 00:37:46.750731 systemd[1]: Reached target slices.target - Slice Units. Jan 17 00:37:46.750742 systemd[1]: Reached target swap.target - Swaps. Jan 17 00:37:46.750756 systemd[1]: Reached target timers.target - Timer Units. Jan 17 00:37:46.750769 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 17 00:37:46.750781 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 17 00:37:46.750794 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 17 00:37:46.750807 systemd[1]: Listening on systemd-journald.socket - Journal Socket. 
Jan 17 00:37:46.750820 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 17 00:37:46.750832 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 17 00:37:46.750850 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 00:37:46.750863 systemd[1]: Reached target sockets.target - Socket Units. Jan 17 00:37:46.750875 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 17 00:37:46.750889 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 17 00:37:46.750902 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 17 00:37:46.750913 systemd[1]: Starting systemd-fsck-usr.service... Jan 17 00:37:46.750927 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 17 00:37:46.750940 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 17 00:37:46.750951 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 00:37:46.751121 systemd-journald[194]: Collecting audit messages is disabled. Jan 17 00:37:46.751153 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 17 00:37:46.751167 systemd-journald[194]: Journal started Jan 17 00:37:46.751198 systemd-journald[194]: Runtime Journal (/run/log/journal/2c119fc373ba4e899de136a802170b64) is 6.0M, max 48.3M, 42.2M free. Jan 17 00:37:46.773329 systemd[1]: Started systemd-journald.service - Journal Service. Jan 17 00:37:46.775060 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 00:37:46.793809 systemd[1]: Finished systemd-fsck-usr.service. Jan 17 00:37:46.826535 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 17 00:37:46.836794 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 17 00:37:46.861312 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 17 00:37:46.911328 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 17 00:37:46.969440 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 00:37:46.995761 systemd-modules-load[195]: Inserted module 'overlay' Jan 17 00:37:47.002398 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:37:47.022263 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 00:37:47.068550 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 17 00:37:47.129820 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 17 00:37:47.139317 kernel: Bridge firewalling registered Jan 17 00:37:47.139777 systemd-modules-load[195]: Inserted module 'br_netfilter' Jan 17 00:37:47.147802 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 17 00:37:47.163528 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 00:37:47.190291 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 17 00:37:47.200322 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Jan 17 00:37:47.241951 dracut-cmdline[226]: dracut-dracut-053 Jan 17 00:37:47.242541 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 17 00:37:47.254569 dracut-cmdline[226]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=5950c0a3c50d11b7bc07a3e3bf06049ed0b5a605b5e0b52a981b78f1c63eeedd Jan 17 00:37:47.298454 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 17 00:37:47.368370 systemd-resolved[244]: Positive Trust Anchors: Jan 17 00:37:47.368419 systemd-resolved[244]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 17 00:37:47.368461 systemd-resolved[244]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 17 00:37:47.372256 systemd-resolved[244]: Defaulting to hostname 'linux'. Jan 17 00:37:47.373843 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 17 00:37:47.441455 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 17 00:37:47.601094 kernel: SCSI subsystem initialized Jan 17 00:37:47.613536 kernel: Loading iSCSI transport class v2.0-870. Jan 17 00:37:47.642437 kernel: iscsi: registered transport (tcp) Jan 17 00:37:47.697723 kernel: iscsi: registered transport (qla4xxx) Jan 17 00:37:47.697795 kernel: QLogic iSCSI HBA Driver Jan 17 00:37:47.816408 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 17 00:37:47.846237 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 17 00:37:47.902097 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 17 00:37:47.902179 kernel: device-mapper: uevent: version 1.0.3 Jan 17 00:37:47.907375 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 17 00:37:47.982150 kernel: raid6: avx2x4 gen() 18897 MB/s Jan 17 00:37:48.001003 kernel: raid6: avx2x2 gen() 18552 MB/s Jan 17 00:37:48.024139 kernel: raid6: avx2x1 gen() 10229 MB/s Jan 17 00:37:48.024224 kernel: raid6: using algorithm avx2x4 gen() 18897 MB/s Jan 17 00:37:48.045181 kernel: raid6: .... xor() 2339 MB/s, rmw enabled Jan 17 00:37:48.045253 kernel: raid6: using avx2x2 recovery algorithm Jan 17 00:37:48.111758 kernel: xor: automatically using best checksumming function avx Jan 17 00:37:48.579930 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 17 00:37:48.609936 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 17 00:37:48.641477 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 00:37:48.673619 systemd-udevd[415]: Using default interface naming scheme 'v255'. Jan 17 00:37:48.680183 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Jan 17 00:37:48.731907 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 17 00:37:48.760544 dracut-pre-trigger[427]: rd.md=0: removing MD RAID activation Jan 17 00:37:48.870649 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 17 00:37:48.888368 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 17 00:37:49.065803 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 00:37:49.114288 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 17 00:37:49.152641 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 17 00:37:49.162197 kernel: cryptd: max_cpu_qlen set to 1000 Jan 17 00:37:49.169765 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 17 00:37:49.177539 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 00:37:49.184418 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 17 00:37:49.282537 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jan 17 00:37:49.285632 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 17 00:37:49.314704 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jan 17 00:37:49.308400 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 17 00:37:49.308639 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 00:37:49.368421 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 17 00:37:49.368460 kernel: GPT:9289727 != 19775487 Jan 17 00:37:49.368489 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 17 00:37:49.368504 kernel: GPT:9289727 != 19775487 Jan 17 00:37:49.369663 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 17 00:37:49.369694 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 17 00:37:49.334094 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 17 00:37:49.386257 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 00:37:49.386542 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:37:49.392263 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 00:37:49.443454 kernel: libata version 3.00 loaded. Jan 17 00:37:49.449181 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 00:37:49.458331 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 17 00:37:49.507171 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 17 00:37:49.531224 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 17 00:37:49.561439 kernel: BTRFS: device fsid a67b5ac0-cdfd-426d-9386-e029282f433a devid 1 transid 33 /dev/vda3 scanned by (udev-worker) (459) Jan 17 00:37:49.576472 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:37:49.591508 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (472) Jan 17 00:37:49.605136 kernel: AVX2 version of gcm_enc/dec engaged. Jan 17 00:37:49.612137 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. 
Jan 17 00:37:49.612351 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jan 17 00:37:49.672199 kernel: AES CTR mode by8 optimization enabled Jan 17 00:37:49.672243 kernel: ahci 0000:00:1f.2: version 3.0 Jan 17 00:37:49.672500 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 17 00:37:49.672521 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Jan 17 00:37:49.672732 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 17 00:37:49.683095 kernel: scsi host0: ahci Jan 17 00:37:49.686293 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 17 00:37:49.698363 kernel: scsi host1: ahci Jan 17 00:37:49.698649 kernel: scsi host2: ahci Jan 17 00:37:49.701718 kernel: scsi host3: ahci Jan 17 00:37:49.713777 kernel: scsi host4: ahci Jan 17 00:37:49.714351 kernel: scsi host5: ahci Jan 17 00:37:49.720319 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 Jan 17 00:37:49.729875 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 Jan 17 00:37:49.729933 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 Jan 17 00:37:49.738083 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 Jan 17 00:37:49.743468 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 Jan 17 00:37:49.762223 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 Jan 17 00:37:49.776400 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 17 00:37:49.781212 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 17 00:37:49.806679 disk-uuid[560]: Primary Header is updated. Jan 17 00:37:49.806679 disk-uuid[560]: Secondary Entries is updated. Jan 17 00:37:49.806679 disk-uuid[560]: Secondary Header is updated. Jan 17 00:37:49.848595 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 17 00:37:49.871086 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 17 00:37:49.913457 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 00:37:50.090220 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 17 00:37:50.095537 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 17 00:37:50.095596 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 17 00:37:50.104230 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jan 17 00:37:50.109493 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 17 00:37:50.109532 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jan 17 00:37:50.132606 kernel: ata3.00: applying bridge limits Jan 17 00:37:50.151420 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 17 00:37:50.157114 kernel: ata3.00: configured for UDMA/100 Jan 17 00:37:50.167098 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jan 17 00:37:50.265067 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jan 17 00:37:50.266239 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 17 00:37:50.286595 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jan 17 00:37:50.910493 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 17 00:37:50.914409 disk-uuid[561]: The operation has completed successfully. Jan 17 00:37:51.084827 systemd[1]: disk-uuid.service: Deactivated successfully. 
Jan 17 00:37:51.091849 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 17 00:37:51.122335 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 17 00:37:51.165888 sh[594]: Success Jan 17 00:37:51.265182 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Jan 17 00:37:51.420851 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 17 00:37:51.429710 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 17 00:37:51.455324 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 17 00:37:51.488145 kernel: BTRFS info (device dm-0): first mount of filesystem a67b5ac0-cdfd-426d-9386-e029282f433a Jan 17 00:37:51.488248 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 17 00:37:51.498141 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 17 00:37:51.498203 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 17 00:37:51.508206 kernel: BTRFS info (device dm-0): using free space tree Jan 17 00:37:51.549380 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 17 00:37:51.558943 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 17 00:37:51.592270 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 17 00:37:51.613283 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 17 00:37:51.685140 kernel: BTRFS info (device vda6): first mount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0 Jan 17 00:37:51.685221 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 17 00:37:51.685241 kernel: BTRFS info (device vda6): using free space tree Jan 17 00:37:51.740140 kernel: BTRFS info (device vda6): auto enabling async discard Jan 17 00:37:51.764512 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 17 00:37:51.774120 kernel: BTRFS info (device vda6): last unmount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0 Jan 17 00:37:51.792548 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 17 00:37:51.810308 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 17 00:37:52.070547 ignition[688]: Ignition 2.19.0 Jan 17 00:37:52.071113 ignition[688]: Stage: fetch-offline Jan 17 00:37:52.071262 ignition[688]: no configs at "/usr/lib/ignition/base.d" Jan 17 00:37:52.071280 ignition[688]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 17 00:37:52.071622 ignition[688]: parsed url from cmdline: "" Jan 17 00:37:52.071630 ignition[688]: no config URL provided Jan 17 00:37:52.071639 ignition[688]: reading system config file "/usr/lib/ignition/user.ign" Jan 17 00:37:52.071654 ignition[688]: no config at "/usr/lib/ignition/user.ign" Jan 17 00:37:52.071692 ignition[688]: op(1): [started] loading QEMU firmware config module Jan 17 00:37:52.071701 ignition[688]: op(1): executing: "modprobe" "qemu_fw_cfg" Jan 17 00:37:52.123155 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 17 00:37:52.108924 ignition[688]: op(1): [finished] loading QEMU firmware config module Jan 17 00:37:52.108956 ignition[688]: QEMU firmware config was not found. Ignoring... Jan 17 00:37:52.165518 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Jan 17 00:37:52.224657 systemd-networkd[783]: lo: Link UP Jan 17 00:37:52.228202 systemd-networkd[783]: lo: Gained carrier Jan 17 00:37:52.231766 systemd-networkd[783]: Enumeration completed Jan 17 00:37:52.233460 systemd-networkd[783]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 00:37:52.233467 systemd-networkd[783]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 17 00:37:52.235701 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 17 00:37:52.252420 systemd-networkd[783]: eth0: Link UP Jan 17 00:37:52.252427 systemd-networkd[783]: eth0: Gained carrier Jan 17 00:37:52.252444 systemd-networkd[783]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 00:37:52.264540 systemd[1]: Reached target network.target - Network. Jan 17 00:37:52.329869 systemd-networkd[783]: eth0: DHCPv4 address 10.0.0.107/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 17 00:37:52.558829 ignition[688]: parsing config with SHA512: 256f7c5f86bdc3599ba5c15ceab4bd6dda46ee4f0ec3f1918620c7bd776b54ab4d723f6d5311935746373dde33dd140de6378132c3a0eb4406790fb5b61a6f66 Jan 17 00:37:52.622181 systemd-resolved[244]: Detected conflict on linux IN A 10.0.0.107 Jan 17 00:37:52.622243 systemd-resolved[244]: Hostname conflict, changing published hostname from 'linux' to 'linux5'. Jan 17 00:37:52.635435 unknown[688]: fetched base config from "system" Jan 17 00:37:52.636219 ignition[688]: fetch-offline: fetch-offline passed Jan 17 00:37:52.635448 unknown[688]: fetched user config from "qemu" Jan 17 00:37:52.636343 ignition[688]: Ignition finished successfully Jan 17 00:37:52.673961 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 17 00:37:52.697548 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jan 17 00:37:52.713366 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 17 00:37:52.766232 ignition[787]: Ignition 2.19.0 Jan 17 00:37:52.766281 ignition[787]: Stage: kargs Jan 17 00:37:52.766556 ignition[787]: no configs at "/usr/lib/ignition/base.d" Jan 17 00:37:52.779885 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 17 00:37:52.766573 ignition[787]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 17 00:37:52.769209 ignition[787]: kargs: kargs passed Jan 17 00:37:52.769333 ignition[787]: Ignition finished successfully Jan 17 00:37:52.818544 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 17 00:37:52.874689 ignition[795]: Ignition 2.19.0 Jan 17 00:37:52.877868 ignition[795]: Stage: disks Jan 17 00:37:52.878546 ignition[795]: no configs at "/usr/lib/ignition/base.d" Jan 17 00:37:52.878596 ignition[795]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 17 00:37:52.896106 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 17 00:37:52.879960 ignition[795]: disks: disks passed Jan 17 00:37:52.880139 ignition[795]: Ignition finished successfully Jan 17 00:37:52.926134 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 17 00:37:52.940220 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 17 00:37:52.950686 systemd[1]: Reached target local-fs.target - Local File Systems. 
Jan 17 00:37:52.967734 systemd[1]: Reached target sysinit.target - System Initialization. Jan 17 00:37:52.967884 systemd[1]: Reached target basic.target - Basic System. Jan 17 00:37:53.001480 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 17 00:37:53.048890 systemd-fsck[805]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jan 17 00:37:53.067261 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 17 00:37:53.100285 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 17 00:37:53.492297 kernel: EXT4-fs (vda9): mounted filesystem ab055cfb-d92d-4784-aa05-26ea844796bc r/w with ordered data mode. Quota mode: none. Jan 17 00:37:53.492956 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 17 00:37:53.498122 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 17 00:37:53.532295 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 17 00:37:53.548624 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 17 00:37:53.552448 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 17 00:37:53.584671 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (813) Jan 17 00:37:53.552512 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 17 00:37:53.609570 kernel: BTRFS info (device vda6): first mount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0 Jan 17 00:37:53.609601 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 17 00:37:53.609616 kernel: BTRFS info (device vda6): using free space tree Jan 17 00:37:53.552580 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 17 00:37:53.617889 kernel: BTRFS info (device vda6): auto enabling async discard Jan 17 00:37:53.619678 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 17 00:37:53.628250 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 17 00:37:53.649776 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 17 00:37:53.745524 initrd-setup-root[837]: cut: /sysroot/etc/passwd: No such file or directory Jan 17 00:37:53.755502 initrd-setup-root[844]: cut: /sysroot/etc/group: No such file or directory Jan 17 00:37:53.770600 initrd-setup-root[851]: cut: /sysroot/etc/shadow: No such file or directory Jan 17 00:37:53.778342 initrd-setup-root[858]: cut: /sysroot/etc/gshadow: No such file or directory Jan 17 00:37:53.919413 systemd-networkd[783]: eth0: Gained IPv6LL Jan 17 00:37:54.030860 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 17 00:37:54.060431 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 17 00:37:54.095344 kernel: BTRFS info (device vda6): last unmount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0 Jan 17 00:37:54.071317 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 17 00:37:54.096900 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 17 00:37:54.132412 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Jan 17 00:37:54.176470 ignition[926]: INFO : Ignition 2.19.0 Jan 17 00:37:54.176470 ignition[926]: INFO : Stage: mount Jan 17 00:37:54.189356 ignition[926]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 00:37:54.189356 ignition[926]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 17 00:37:54.189356 ignition[926]: INFO : mount: mount passed Jan 17 00:37:54.189356 ignition[926]: INFO : Ignition finished successfully Jan 17 00:37:54.180178 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 17 00:37:54.210273 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 17 00:37:54.509380 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 17 00:37:54.534146 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (939) Jan 17 00:37:54.543330 kernel: BTRFS info (device vda6): first mount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0 Jan 17 00:37:54.543376 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 17 00:37:54.543396 kernel: BTRFS info (device vda6): using free space tree Jan 17 00:37:54.599416 kernel: BTRFS info (device vda6): auto enabling async discard Jan 17 00:37:54.605410 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 17 00:37:54.669688 ignition[956]: INFO : Ignition 2.19.0 Jan 17 00:37:54.669688 ignition[956]: INFO : Stage: files Jan 17 00:37:54.678175 ignition[956]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 00:37:54.678175 ignition[956]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 17 00:37:54.678175 ignition[956]: DEBUG : files: compiled without relabeling support, skipping Jan 17 00:37:54.678175 ignition[956]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 17 00:37:54.678175 ignition[956]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 17 00:37:54.707653 ignition[956]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 17 00:37:54.707653 ignition[956]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 17 00:37:54.707653 ignition[956]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 17 00:37:54.707653 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 17 00:37:54.707653 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Jan 17 00:37:54.687082 unknown[956]: wrote ssh authorized keys file for user: core Jan 17 00:37:54.789301 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 17 00:37:54.941269 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 17 00:37:54.941269 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 17 00:37:54.961123 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jan 17 00:37:55.065595 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 17 00:37:55.258315 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file 
"/sysroot/opt/bin/cilium.tar.gz" Jan 17 00:37:55.258315 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jan 17 00:37:55.276854 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jan 17 00:37:55.276854 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 17 00:37:55.276854 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 17 00:37:55.276854 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 17 00:37:55.276854 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 17 00:37:55.276854 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 17 00:37:55.276854 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 17 00:37:55.276854 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 17 00:37:55.276854 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 17 00:37:55.276854 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 17 00:37:55.276854 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 17 00:37:55.276854 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 17 00:37:55.276854 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Jan 17 00:37:55.421502 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jan 17 00:37:55.986475 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 17 00:37:55.986475 ignition[956]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jan 17 00:37:56.003324 ignition[956]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 17 00:37:56.011792 ignition[956]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 17 00:37:56.011792 ignition[956]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jan 17 00:37:56.011792 ignition[956]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Jan 17 00:37:56.030825 ignition[956]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 17 00:37:56.039330 ignition[956]: INFO : 
files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 17 00:37:56.039330 ignition[956]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Jan 17 00:37:56.039330 ignition[956]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Jan 17 00:37:56.095778 ignition[956]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 17 00:37:56.106246 ignition[956]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 17 00:37:56.106246 ignition[956]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Jan 17 00:37:56.106246 ignition[956]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Jan 17 00:37:56.106246 ignition[956]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Jan 17 00:37:56.142198 ignition[956]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 17 00:37:56.154812 ignition[956]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 17 00:37:56.165501 ignition[956]: INFO : files: files passed Jan 17 00:37:56.165501 ignition[956]: INFO : Ignition finished successfully Jan 17 00:37:56.178814 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 17 00:37:56.196304 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 17 00:37:56.207124 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 17 00:37:56.207758 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 17 00:37:56.208113 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 17 00:37:56.269945 initrd-setup-root-after-ignition[984]: grep: /sysroot/oem/oem-release: No such file or directory Jan 17 00:37:56.289468 initrd-setup-root-after-ignition[986]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 17 00:37:56.289468 initrd-setup-root-after-ignition[986]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 17 00:37:56.305924 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 17 00:37:56.318343 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 00:37:56.318750 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 17 00:37:56.355261 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 17 00:37:56.459332 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 17 00:37:56.459574 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 17 00:37:56.480876 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 17 00:37:56.504318 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 17 00:37:56.504624 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 17 00:37:56.530516 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 17 00:37:56.563689 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. 
Jan 17 00:37:56.594538 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 17 00:37:56.630188 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 17 00:37:56.640473 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 00:37:56.645821 systemd[1]: Stopped target timers.target - Timer Units. Jan 17 00:37:56.662316 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 17 00:37:56.662596 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 00:37:56.670609 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 17 00:37:56.673241 systemd[1]: Stopped target basic.target - Basic System. Jan 17 00:37:56.673380 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 17 00:37:56.869592 ignition[1010]: INFO : Ignition 2.19.0 Jan 17 00:37:56.869592 ignition[1010]: INFO : Stage: umount Jan 17 00:37:56.869592 ignition[1010]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 00:37:56.869592 ignition[1010]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 17 00:37:56.869592 ignition[1010]: INFO : umount: umount passed Jan 17 00:37:56.869592 ignition[1010]: INFO : Ignition finished successfully Jan 17 00:37:56.673498 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 17 00:37:56.673604 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 17 00:37:56.673713 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 17 00:37:56.673815 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 17 00:37:56.673939 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 17 00:37:56.674167 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 17 00:37:56.674275 systemd[1]: Stopped target swap.target - Swaps. Jan 17 00:37:56.674352 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 17 00:37:56.674494 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 17 00:37:56.674738 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 17 00:37:56.674859 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 00:37:56.674934 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 17 00:37:56.676895 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 00:37:56.683230 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 17 00:37:56.683401 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 17 00:37:56.683659 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 17 00:37:56.693370 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 17 00:37:56.700703 systemd[1]: Stopped target paths.target - Path Units. Jan 17 00:37:56.702455 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 17 00:37:56.705714 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 00:37:56.715843 systemd[1]: Stopped target slices.target - Slice Units. Jan 17 00:37:56.722740 systemd[1]: Stopped target sockets.target - Socket Units. Jan 17 00:37:56.725620 systemd[1]: iscsid.socket: Deactivated successfully. Jan 17 00:37:56.726414 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. 
Jan 17 00:37:56.726644 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 17 00:37:56.726744 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 17 00:37:56.733267 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 17 00:37:56.734689 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 00:37:56.735385 systemd[1]: ignition-files.service: Deactivated successfully. Jan 17 00:37:56.735517 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 17 00:37:56.819511 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 17 00:37:56.846723 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 17 00:37:56.868722 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 00:37:56.905530 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 17 00:37:56.927802 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 17 00:37:56.928285 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 00:37:56.933123 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 17 00:37:56.933325 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 17 00:37:56.972368 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 17 00:37:56.972664 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 17 00:37:56.984706 systemd[1]: Stopped target network.target - Network. Jan 17 00:37:56.995783 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 17 00:37:56.995902 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 17 00:37:57.015598 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 17 00:37:57.015685 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 17 00:37:57.031279 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 17 00:37:57.031354 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 17 00:37:57.035959 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 17 00:37:57.036160 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 17 00:37:57.049330 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 17 00:37:57.091238 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 17 00:37:57.120279 systemd-networkd[783]: eth0: DHCPv6 lease lost Jan 17 00:37:57.129457 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 17 00:37:57.131294 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 17 00:37:57.131518 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 17 00:37:57.173512 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 17 00:37:57.173692 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 17 00:37:57.202219 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 17 00:37:57.204918 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 17 00:37:57.225403 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 17 00:37:57.227100 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 17 00:37:57.241876 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 17 00:37:57.241960 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. 
Jan 17 00:37:57.252431 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 17 00:37:57.252529 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 17 00:37:57.283615 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 17 00:37:57.289880 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 17 00:37:57.289961 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 17 00:37:57.307720 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 17 00:37:57.307845 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 17 00:37:57.315382 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 17 00:37:57.315469 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 17 00:37:57.322404 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 17 00:37:57.322466 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 00:37:57.334211 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 00:37:57.398480 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 17 00:37:57.399264 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 00:37:57.421833 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 17 00:37:57.422183 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 17 00:37:57.428681 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 17 00:37:57.428761 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 17 00:37:57.463380 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 17 00:37:57.463456 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 00:37:57.468797 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 17 00:37:57.717770 systemd-journald[194]: Received SIGTERM from PID 1 (systemd). Jan 17 00:37:57.468890 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 17 00:37:57.484597 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 17 00:37:57.484692 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 17 00:37:57.502128 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 17 00:37:57.502223 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 00:37:57.568586 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 17 00:37:57.582611 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 17 00:37:57.582725 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 00:37:57.596557 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 00:37:57.596648 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:37:57.606079 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 17 00:37:57.606462 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 17 00:37:57.611424 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 17 00:37:57.650530 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 17 00:37:57.663490 systemd[1]: Switching root. 
Jan 17 00:37:57.820778 systemd-journald[194]: Journal stopped Jan 17 00:38:08.673928 kernel: SELinux: policy capability network_peer_controls=1 Jan 17 00:38:08.677563 kernel: SELinux: policy capability open_perms=1 Jan 17 00:38:08.678754 kernel: SELinux: policy capability extended_socket_class=1 Jan 17 00:38:08.678777 kernel: SELinux: policy capability always_check_network=0 Jan 17 00:38:08.678793 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 17 00:38:08.678843 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 17 00:38:08.678886 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 17 00:38:08.678931 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 17 00:38:08.678956 kernel: audit: type=1403 audit(1768610278.086:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 17 00:38:08.678989 systemd[1]: Successfully loaded SELinux policy in 80.803ms. Jan 17 00:38:08.679134 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 22.359ms. Jan 17 00:38:08.679160 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 17 00:38:08.679178 systemd[1]: Detected virtualization kvm. Jan 17 00:38:08.679201 systemd[1]: Detected architecture x86-64. Jan 17 00:38:08.679251 systemd[1]: Detected first boot. Jan 17 00:38:08.679274 systemd[1]: Initializing machine ID from VM UUID. Jan 17 00:38:08.679291 zram_generator::config[1053]: No configuration found. Jan 17 00:38:08.679340 systemd[1]: Populated /etc with preset unit settings. Jan 17 00:38:08.679361 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 17 00:38:08.680748 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 17 00:38:08.680778 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 17 00:38:08.680800 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 17 00:38:08.680821 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 17 00:38:08.680839 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 17 00:38:08.680863 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 17 00:38:08.680886 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 17 00:38:08.680904 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 17 00:38:08.680922 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 17 00:38:08.680939 systemd[1]: Created slice user.slice - User and Session Slice. Jan 17 00:38:08.680958 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 00:38:08.680976 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 00:38:08.681106 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 17 00:38:08.681134 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 17 00:38:08.681152 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. 
Jan 17 00:38:08.681171 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 17 00:38:08.681187 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 17 00:38:08.681204 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 00:38:08.681220 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 17 00:38:08.681236 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 17 00:38:08.681252 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 17 00:38:08.681269 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 17 00:38:08.681289 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 00:38:08.681306 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 17 00:38:08.681322 systemd[1]: Reached target slices.target - Slice Units. Jan 17 00:38:08.681342 systemd[1]: Reached target swap.target - Swaps. Jan 17 00:38:08.681360 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 17 00:38:08.681376 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 17 00:38:08.681393 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 17 00:38:08.681450 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 17 00:38:08.681472 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 00:38:08.681489 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 17 00:38:08.681505 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 17 00:38:08.681521 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 17 00:38:08.681538 systemd[1]: Mounting media.mount - External Media Directory... Jan 17 00:38:08.681554 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:38:08.681572 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 17 00:38:08.682783 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 17 00:38:08.682806 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 17 00:38:08.682835 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 17 00:38:08.682856 systemd[1]: Reached target machines.target - Containers. Jan 17 00:38:08.682873 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 17 00:38:08.682890 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 00:38:08.682906 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 17 00:38:08.682963 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 17 00:38:08.682986 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 00:38:08.683112 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 17 00:38:08.683138 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 00:38:08.683159 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... 
Jan 17 00:38:08.683177 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 00:38:08.683195 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 17 00:38:08.683216 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 17 00:38:08.683234 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 17 00:38:08.683250 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 17 00:38:08.683268 systemd[1]: Stopped systemd-fsck-usr.service. Jan 17 00:38:08.683287 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 17 00:38:08.683313 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 17 00:38:08.683334 kernel: loop: module loaded Jan 17 00:38:08.683353 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 17 00:38:08.683372 kernel: fuse: init (API version 7.39) Jan 17 00:38:08.683390 kernel: ACPI: bus type drm_connector registered Jan 17 00:38:08.683465 systemd-journald[1137]: Collecting audit messages is disabled. Jan 17 00:38:08.683561 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 17 00:38:08.684147 systemd-journald[1137]: Journal started Jan 17 00:38:08.684180 systemd-journald[1137]: Runtime Journal (/run/log/journal/2c119fc373ba4e899de136a802170b64) is 6.0M, max 48.3M, 42.2M free. Jan 17 00:38:01.659494 systemd[1]: Queued start job for default target multi-user.target. Jan 17 00:38:01.784663 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 17 00:38:01.789909 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 17 00:38:01.795971 systemd[1]: systemd-journald.service: Consumed 1.995s CPU time. Jan 17 00:38:08.790297 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 17 00:38:08.803657 systemd[1]: verity-setup.service: Deactivated successfully. Jan 17 00:38:08.803748 systemd[1]: Stopped verity-setup.service. Jan 17 00:38:08.803778 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:38:08.826410 systemd[1]: Started systemd-journald.service - Journal Service. Jan 17 00:38:08.861428 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 17 00:38:08.871273 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 17 00:38:08.881635 systemd[1]: Mounted media.mount - External Media Directory. Jan 17 00:38:08.894985 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 17 00:38:08.907807 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 17 00:38:08.919452 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 17 00:38:08.937796 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 17 00:38:08.976776 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 00:38:08.990355 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 17 00:38:08.992836 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 17 00:38:09.005880 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 00:38:09.009415 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Jan 17 00:38:09.021836 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 17 00:38:09.023625 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 17 00:38:09.034645 systemd[1]: modprobe@drm.service: Consumed 2.144s CPU time. Jan 17 00:38:09.038629 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 00:38:09.038952 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 00:38:09.054861 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 17 00:38:09.056872 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 17 00:38:09.065943 systemd[1]: modprobe@fuse.service: Consumed 1.325s CPU time. Jan 17 00:38:09.067786 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 00:38:09.068211 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 00:38:09.075959 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 17 00:38:09.086688 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 17 00:38:09.095756 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 17 00:38:09.127903 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 17 00:38:09.160830 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 17 00:38:09.181787 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 17 00:38:09.190454 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 17 00:38:09.193198 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 17 00:38:09.205286 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 17 00:38:09.239511 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 17 00:38:09.261340 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 17 00:38:09.271744 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 00:38:09.299626 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 17 00:38:09.319286 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 17 00:38:09.337960 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 17 00:38:09.342930 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 17 00:38:09.354943 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 17 00:38:09.363253 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 17 00:38:09.392154 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 17 00:38:09.409349 systemd-journald[1137]: Time spent on flushing to /var/log/journal/2c119fc373ba4e899de136a802170b64 is 52.261ms for 989 entries. Jan 17 00:38:09.409349 systemd-journald[1137]: System Journal (/var/log/journal/2c119fc373ba4e899de136a802170b64) is 8.0M, max 195.6M, 187.6M free. Jan 17 00:38:09.581308 systemd-journald[1137]: Received client request to flush runtime journal. 
Jan 17 00:38:09.489640 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 17 00:38:09.505535 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 00:38:09.511606 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 17 00:38:09.517815 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 17 00:38:09.523260 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 17 00:38:09.528963 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 17 00:38:09.550773 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 17 00:38:09.571263 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 17 00:38:09.612532 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 17 00:38:09.638864 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 17 00:38:09.676088 kernel: loop0: detected capacity change from 0 to 140768 Jan 17 00:38:09.681207 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 17 00:38:09.708764 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 17 00:38:09.710169 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 17 00:38:09.724947 udevadm[1177]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 17 00:38:09.741495 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 17 00:38:09.756357 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 17 00:38:09.761287 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 17 00:38:09.822831 kernel: hrtimer: interrupt took 2636152 ns Jan 17 00:38:09.864257 kernel: loop1: detected capacity change from 0 to 229808 Jan 17 00:38:09.927604 kernel: loop2: detected capacity change from 0 to 142488 Jan 17 00:38:09.958121 systemd-tmpfiles[1186]: ACLs are not supported, ignoring. Jan 17 00:38:09.958891 systemd-tmpfiles[1186]: ACLs are not supported, ignoring. Jan 17 00:38:10.009376 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 00:38:10.099223 kernel: loop3: detected capacity change from 0 to 140768 Jan 17 00:38:10.170880 kernel: loop4: detected capacity change from 0 to 229808 Jan 17 00:38:10.271130 kernel: loop5: detected capacity change from 0 to 142488 Jan 17 00:38:10.353099 (sd-merge)[1191]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jan 17 00:38:10.355795 (sd-merge)[1191]: Merged extensions into '/usr'. Jan 17 00:38:10.371362 systemd[1]: Reloading requested from client PID 1167 ('systemd-sysext') (unit systemd-sysext.service)... Jan 17 00:38:10.371414 systemd[1]: Reloading... Jan 17 00:38:11.364101 zram_generator::config[1217]: No configuration found. Jan 17 00:38:12.066366 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 00:38:12.477461 systemd[1]: Reloading finished in 2104 ms. Jan 17 00:38:12.684543 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. 
Jan 17 00:38:12.708418 systemd[1]: Starting ensure-sysext.service... Jan 17 00:38:12.744520 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 17 00:38:12.751191 ldconfig[1162]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 17 00:38:12.760567 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 17 00:38:12.781336 systemd[1]: Reloading requested from client PID 1254 ('systemctl') (unit ensure-sysext.service)... Jan 17 00:38:12.781362 systemd[1]: Reloading... Jan 17 00:38:13.132513 systemd-tmpfiles[1255]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 17 00:38:13.133779 systemd-tmpfiles[1255]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 17 00:38:13.142243 systemd-tmpfiles[1255]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 17 00:38:13.142902 systemd-tmpfiles[1255]: ACLs are not supported, ignoring. Jan 17 00:38:13.143274 systemd-tmpfiles[1255]: ACLs are not supported, ignoring. Jan 17 00:38:13.158435 zram_generator::config[1286]: No configuration found. Jan 17 00:38:13.159237 systemd-tmpfiles[1255]: Detected autofs mount point /boot during canonicalization of boot. Jan 17 00:38:13.159378 systemd-tmpfiles[1255]: Skipping /boot Jan 17 00:38:13.377998 systemd-tmpfiles[1255]: Detected autofs mount point /boot during canonicalization of boot. Jan 17 00:38:13.378308 systemd-tmpfiles[1255]: Skipping /boot Jan 17 00:38:13.744093 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 00:38:13.811529 systemd[1]: Reloading finished in 1029 ms. Jan 17 00:38:13.851522 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 17 00:38:13.869994 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 00:38:13.901975 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 17 00:38:13.918181 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 17 00:38:13.947851 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 17 00:38:13.983444 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 17 00:38:13.998754 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 00:38:14.013982 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 17 00:38:14.060305 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 17 00:38:14.071409 augenrules[1344]: No rules Jan 17 00:38:14.072298 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:38:14.073324 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 00:38:14.076888 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 00:38:14.086661 systemd-udevd[1337]: Using default interface naming scheme 'v255'. Jan 17 00:38:14.103809 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Jan 17 00:38:14.116277 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 00:38:14.124531 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 00:38:14.124789 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:38:14.127522 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 17 00:38:14.148927 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 17 00:38:14.166354 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 00:38:14.167559 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 00:38:14.185613 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 00:38:14.214687 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 17 00:38:14.222670 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 17 00:38:14.238166 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 17 00:38:14.250660 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 00:38:14.251488 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 00:38:14.269912 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 00:38:14.272252 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 00:38:14.344450 systemd[1]: Finished ensure-sysext.service. Jan 17 00:38:14.362474 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 17 00:38:14.365239 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:38:14.365789 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 00:38:14.371434 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1374) Jan 17 00:38:14.382401 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 00:38:14.398325 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 17 00:38:14.413283 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 00:38:14.430333 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 00:38:14.450198 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 00:38:14.650613 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 17 00:38:14.663387 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 17 00:38:14.674223 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 17 00:38:14.681156 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 17 00:38:14.681252 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Jan 17 00:38:14.682309 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 00:38:14.682660 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 00:38:14.691662 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 17 00:38:14.691992 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 17 00:38:14.702685 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 00:38:14.703106 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 00:38:14.717095 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jan 17 00:38:14.747681 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 00:38:14.748214 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 00:38:14.985763 kernel: ACPI: button: Power Button [PWRF] Jan 17 00:38:15.009106 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 17 00:38:15.011550 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 17 00:38:15.013302 systemd-resolved[1333]: Positive Trust Anchors: Jan 17 00:38:15.013323 systemd-resolved[1333]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 17 00:38:15.013371 systemd-resolved[1333]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 17 00:38:15.028457 systemd-resolved[1333]: Defaulting to hostname 'linux'. Jan 17 00:38:15.048295 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 17 00:38:15.054635 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 17 00:38:15.110156 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 17 00:38:15.252144 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Jan 17 00:38:15.252651 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 17 00:38:15.261659 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Jan 17 00:38:15.267785 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 17 00:38:15.275554 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 00:38:15.597298 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 17 00:38:15.645622 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 17 00:38:15.658226 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jan 17 00:38:15.669774 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 00:38:15.670224 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:38:15.694255 kernel: mousedev: PS/2 mouse device common for all mice Jan 17 00:38:15.709741 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Jan 17 00:38:15.894406 systemd-networkd[1391]: lo: Link UP Jan 17 00:38:15.894455 systemd-networkd[1391]: lo: Gained carrier Jan 17 00:38:15.903987 systemd-networkd[1391]: Enumeration completed Jan 17 00:38:15.913254 systemd-networkd[1391]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 00:38:15.913266 systemd-networkd[1391]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 17 00:38:15.913620 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 17 00:38:15.953386 systemd-networkd[1391]: eth0: Link UP Jan 17 00:38:15.953401 systemd-networkd[1391]: eth0: Gained carrier Jan 17 00:38:15.953646 systemd-networkd[1391]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 00:38:15.957248 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 17 00:38:15.972712 systemd[1]: Reached target network.target - Network. Jan 17 00:38:16.036277 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 17 00:38:16.044908 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 17 00:38:16.052319 systemd[1]: Reached target time-set.target - System Time Set. Jan 17 00:38:16.059719 systemd-networkd[1391]: eth0: DHCPv4 address 10.0.0.107/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 17 00:38:16.062726 systemd-timesyncd[1397]: Network configuration changed, trying to establish connection. Jan 17 00:38:16.895951 systemd-resolved[1333]: Clock change detected. Flushing caches. Jan 17 00:38:16.896146 systemd-timesyncd[1397]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 17 00:38:16.896453 systemd-timesyncd[1397]: Initial clock synchronization to Sat 2026-01-17 00:38:16.895798 UTC. Jan 17 00:38:17.284000 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:38:17.716361 kernel: kvm_amd: TSC scaling supported Jan 17 00:38:17.716594 kernel: kvm_amd: Nested Virtualization enabled Jan 17 00:38:17.716626 kernel: kvm_amd: Nested Paging enabled Jan 17 00:38:17.716657 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jan 17 00:38:17.722924 kernel: kvm_amd: PMU virtualization is disabled Jan 17 00:38:18.037137 kernel: EDAC MC: Ver: 3.0.0 Jan 17 00:38:18.097054 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 17 00:38:18.138665 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 17 00:38:18.183603 lvm[1425]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 17 00:38:18.431843 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 17 00:38:18.446055 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 17 00:38:18.471830 systemd[1]: Reached target sysinit.target - System Initialization. Jan 17 00:38:18.488810 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 17 00:38:18.497871 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 17 00:38:18.500750 systemd-networkd[1391]: eth0: Gained IPv6LL Jan 17 00:38:18.507997 systemd[1]: Started logrotate.timer - Daily rotation of log files. 
Jan 17 00:38:18.514711 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 17 00:38:18.520865 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 17 00:38:18.526637 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 17 00:38:18.526710 systemd[1]: Reached target paths.target - Path Units. Jan 17 00:38:18.531852 systemd[1]: Reached target timers.target - Timer Units. Jan 17 00:38:18.537081 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 17 00:38:18.544778 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 17 00:38:18.574917 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 17 00:38:18.581805 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 17 00:38:18.613890 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 17 00:38:18.635460 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 17 00:38:18.654574 systemd[1]: Reached target network-online.target - Network is Online. Jan 17 00:38:18.681962 systemd[1]: Reached target sockets.target - Socket Units. Jan 17 00:38:18.699100 systemd[1]: Reached target basic.target - Basic System. Jan 17 00:38:18.721966 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 17 00:38:18.722132 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 17 00:38:18.726315 lvm[1429]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 17 00:38:18.741034 systemd[1]: Starting containerd.service - containerd container runtime... Jan 17 00:38:18.752438 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 17 00:38:18.769677 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 17 00:38:18.783636 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 17 00:38:18.795765 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 17 00:38:18.801355 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 17 00:38:18.804848 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:38:18.810923 jq[1434]: false Jan 17 00:38:18.814384 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 17 00:38:18.824079 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 17 00:38:18.837538 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 17 00:38:18.851661 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 17 00:38:18.852293 dbus-daemon[1433]: [system] SELinux support is enabled Jan 17 00:38:18.866164 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
Jan 17 00:38:18.869541 extend-filesystems[1435]: Found loop3 Jan 17 00:38:18.875919 extend-filesystems[1435]: Found loop4 Jan 17 00:38:18.875919 extend-filesystems[1435]: Found loop5 Jan 17 00:38:18.875919 extend-filesystems[1435]: Found sr0 Jan 17 00:38:18.875919 extend-filesystems[1435]: Found vda Jan 17 00:38:18.875919 extend-filesystems[1435]: Found vda1 Jan 17 00:38:18.875919 extend-filesystems[1435]: Found vda2 Jan 17 00:38:18.875919 extend-filesystems[1435]: Found vda3 Jan 17 00:38:18.875919 extend-filesystems[1435]: Found usr Jan 17 00:38:18.875919 extend-filesystems[1435]: Found vda4 Jan 17 00:38:18.875919 extend-filesystems[1435]: Found vda6 Jan 17 00:38:18.875919 extend-filesystems[1435]: Found vda7 Jan 17 00:38:18.875919 extend-filesystems[1435]: Found vda9 Jan 17 00:38:18.875919 extend-filesystems[1435]: Checking size of /dev/vda9 Jan 17 00:38:19.074150 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jan 17 00:38:19.074310 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1362) Jan 17 00:38:19.074502 extend-filesystems[1435]: Resized partition /dev/vda9 Jan 17 00:38:18.885704 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 17 00:38:19.087847 extend-filesystems[1457]: resize2fs 1.47.1 (20-May-2024) Jan 17 00:38:18.901316 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 17 00:38:18.919511 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 17 00:38:19.113541 jq[1463]: true Jan 17 00:38:18.921526 systemd[1]: Starting update-engine.service - Update Engine... Jan 17 00:38:18.964568 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 17 00:38:18.970714 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 17 00:38:19.116502 update_engine[1460]: I20260117 00:38:19.115886 1460 main.cc:92] Flatcar Update Engine starting Jan 17 00:38:18.992781 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 17 00:38:19.067644 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 17 00:38:19.067953 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 17 00:38:19.072427 systemd[1]: motdgen.service: Deactivated successfully. Jan 17 00:38:19.074491 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 17 00:38:19.085057 systemd-logind[1452]: Watching system buttons on /dev/input/event1 (Power Button) Jan 17 00:38:19.085093 systemd-logind[1452]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 17 00:38:19.087288 systemd-logind[1452]: New seat seat0. Jan 17 00:38:19.095473 systemd[1]: Started systemd-logind.service - User Login Management. Jan 17 00:38:19.113600 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 17 00:38:19.121164 update_engine[1460]: I20260117 00:38:19.120895 1460 update_check_scheduler.cc:74] Next update check in 3m36s Jan 17 00:38:19.122180 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 17 00:38:19.123093 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Jan 17 00:38:19.131297 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 17 00:38:19.167440 (ntainerd)[1470]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 17 00:38:19.179145 jq[1469]: true Jan 17 00:38:19.190588 extend-filesystems[1457]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 17 00:38:19.190588 extend-filesystems[1457]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 17 00:38:19.190588 extend-filesystems[1457]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jan 17 00:38:19.226449 extend-filesystems[1435]: Resized filesystem in /dev/vda9 Jan 17 00:38:19.220071 dbus-daemon[1433]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 17 00:38:19.192826 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 17 00:38:19.231348 tar[1468]: linux-amd64/LICENSE Jan 17 00:38:19.195670 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 17 00:38:19.235469 tar[1468]: linux-amd64/helm Jan 17 00:38:19.239645 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 17 00:38:19.240139 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 17 00:38:19.246145 systemd[1]: Started update-engine.service - Update Engine. Jan 17 00:38:19.277411 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 17 00:38:19.277607 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 17 00:38:19.277876 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 17 00:38:19.288971 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 17 00:38:19.289149 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 17 00:38:19.570075 sshd_keygen[1462]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 17 00:38:19.572150 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 17 00:38:20.043551 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 17 00:38:20.082813 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 17 00:38:20.101500 bash[1509]: Updated "/home/core/.ssh/authorized_keys" Jan 17 00:38:20.106409 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 17 00:38:20.114788 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 17 00:38:20.145766 systemd[1]: issuegen.service: Deactivated successfully. Jan 17 00:38:20.146047 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 17 00:38:20.153873 locksmithd[1497]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 17 00:38:20.172052 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 17 00:38:20.734786 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 17 00:38:20.761690 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 17 00:38:20.794931 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. 
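Note: extend-filesystems grows the root filesystem on-line here; resize2fs 1.47 takes /dev/vda9 from 553472 to 1864699 blocks while it is mounted on /, as the kernel lines above confirm. A minimal sketch of the same step done by hand (device and partition number taken from the log; the growpart call is an assumption about how the partition itself would be enlarged, it is not what this service ran):

# Sketch only: grow partition 9 on /dev/vda, then resize the mounted ext4 filesystem on-line.
growpart /dev/vda 9    # from cloud-utils; skip if the partition is already large enough
resize2fs /dev/vda9    # ext4 supports on-line growth while mounted
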
Jan 17 00:38:20.802811 systemd[1]: Reached target getty.target - Login Prompts. Jan 17 00:38:22.689505 containerd[1470]: time="2026-01-17T00:38:22.686984893Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 17 00:38:23.002683 tar[1468]: linux-amd64/README.md Jan 17 00:38:23.008728 containerd[1470]: time="2026-01-17T00:38:23.006305822Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 17 00:38:23.015396 containerd[1470]: time="2026-01-17T00:38:23.015348133Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.119-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:38:23.015485 containerd[1470]: time="2026-01-17T00:38:23.015466614Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 17 00:38:23.015593 containerd[1470]: time="2026-01-17T00:38:23.015576219Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 17 00:38:23.015942 containerd[1470]: time="2026-01-17T00:38:23.015919489Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 17 00:38:23.016119 containerd[1470]: time="2026-01-17T00:38:23.016098715Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 17 00:38:23.016794 containerd[1470]: time="2026-01-17T00:38:23.016428420Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:38:23.017409 containerd[1470]: time="2026-01-17T00:38:23.017029637Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 17 00:38:23.017829 containerd[1470]: time="2026-01-17T00:38:23.017794045Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:38:23.017921 containerd[1470]: time="2026-01-17T00:38:23.017896616Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 17 00:38:23.018012 containerd[1470]: time="2026-01-17T00:38:23.017987927Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:38:23.018120 containerd[1470]: time="2026-01-17T00:38:23.018096049Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 17 00:38:23.018637 containerd[1470]: time="2026-01-17T00:38:23.018607113Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 17 00:38:23.019714 containerd[1470]: time="2026-01-17T00:38:23.019461859Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 17 00:38:23.019714 containerd[1470]: time="2026-01-17T00:38:23.019664427Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:38:23.019714 containerd[1470]: time="2026-01-17T00:38:23.019690737Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 17 00:38:23.020059 containerd[1470]: time="2026-01-17T00:38:23.019960349Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 17 00:38:23.023543 containerd[1470]: time="2026-01-17T00:38:23.020158510Z" level=info msg="metadata content store policy set" policy=shared Jan 17 00:38:23.031879 containerd[1470]: time="2026-01-17T00:38:23.031768616Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 17 00:38:23.032922 containerd[1470]: time="2026-01-17T00:38:23.032066201Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 17 00:38:23.032922 containerd[1470]: time="2026-01-17T00:38:23.032148134Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 17 00:38:23.032922 containerd[1470]: time="2026-01-17T00:38:23.032170607Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 17 00:38:23.032922 containerd[1470]: time="2026-01-17T00:38:23.032189461Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 17 00:38:23.032922 containerd[1470]: time="2026-01-17T00:38:23.032623181Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 17 00:38:23.035105 containerd[1470]: time="2026-01-17T00:38:23.034971276Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 17 00:38:23.035847 containerd[1470]: time="2026-01-17T00:38:23.035816514Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 17 00:38:23.035964 containerd[1470]: time="2026-01-17T00:38:23.035946637Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 17 00:38:23.036072 containerd[1470]: time="2026-01-17T00:38:23.036046434Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 17 00:38:23.036149 containerd[1470]: time="2026-01-17T00:38:23.036132405Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 17 00:38:23.036359 containerd[1470]: time="2026-01-17T00:38:23.036330624Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 17 00:38:23.036503 containerd[1470]: time="2026-01-17T00:38:23.036484792Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 17 00:38:23.036611 containerd[1470]: time="2026-01-17T00:38:23.036593506Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 17 00:38:23.036678 containerd[1470]: time="2026-01-17T00:38:23.036662444Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Jan 17 00:38:23.036772 containerd[1470]: time="2026-01-17T00:38:23.036756891Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 17 00:38:23.036852 containerd[1470]: time="2026-01-17T00:38:23.036836098Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 17 00:38:23.036911 containerd[1470]: time="2026-01-17T00:38:23.036897283Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 17 00:38:23.037151 containerd[1470]: time="2026-01-17T00:38:23.037127783Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 17 00:38:23.037432 containerd[1470]: time="2026-01-17T00:38:23.037360468Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 17 00:38:23.037824 containerd[1470]: time="2026-01-17T00:38:23.037800960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 17 00:38:23.037924 containerd[1470]: time="2026-01-17T00:38:23.037904634Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 17 00:38:23.037991 containerd[1470]: time="2026-01-17T00:38:23.037976057Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 17 00:38:23.038064 containerd[1470]: time="2026-01-17T00:38:23.038044104Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 17 00:38:23.038300 containerd[1470]: time="2026-01-17T00:38:23.038185618Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 17 00:38:23.038381 containerd[1470]: time="2026-01-17T00:38:23.038365424Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 17 00:38:23.038867 containerd[1470]: time="2026-01-17T00:38:23.038844559Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 17 00:38:23.039014 containerd[1470]: time="2026-01-17T00:38:23.038993217Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 17 00:38:23.039101 containerd[1470]: time="2026-01-17T00:38:23.039079407Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 17 00:38:23.039171 containerd[1470]: time="2026-01-17T00:38:23.039155940Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 17 00:38:23.040648 containerd[1470]: time="2026-01-17T00:38:23.040622319Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 17 00:38:23.040913 containerd[1470]: time="2026-01-17T00:38:23.040889417Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 17 00:38:23.041116 containerd[1470]: time="2026-01-17T00:38:23.041090793Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 17 00:38:23.041190 containerd[1470]: time="2026-01-17T00:38:23.041175813Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 Jan 17 00:38:23.043545 containerd[1470]: time="2026-01-17T00:38:23.041422754Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 17 00:38:23.043545 containerd[1470]: time="2026-01-17T00:38:23.042476175Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 17 00:38:23.043545 containerd[1470]: time="2026-01-17T00:38:23.042613922Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 17 00:38:23.043545 containerd[1470]: time="2026-01-17T00:38:23.042633539Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 17 00:38:23.043545 containerd[1470]: time="2026-01-17T00:38:23.042709651Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 17 00:38:23.043545 containerd[1470]: time="2026-01-17T00:38:23.042722755Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 17 00:38:23.043545 containerd[1470]: time="2026-01-17T00:38:23.042740388Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 17 00:38:23.043545 containerd[1470]: time="2026-01-17T00:38:23.042789339Z" level=info msg="NRI interface is disabled by configuration." Jan 17 00:38:23.043545 containerd[1470]: time="2026-01-17T00:38:23.042805029Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 17 00:38:23.045179 containerd[1470]: time="2026-01-17T00:38:23.044132588Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false 
X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 17 00:38:23.045179 containerd[1470]: time="2026-01-17T00:38:23.044421778Z" level=info msg="Connect containerd service" Jan 17 00:38:23.045179 containerd[1470]: time="2026-01-17T00:38:23.044525191Z" level=info msg="using legacy CRI server" Jan 17 00:38:23.045179 containerd[1470]: time="2026-01-17T00:38:23.044536853Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 17 00:38:23.047745 containerd[1470]: time="2026-01-17T00:38:23.045490663Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 17 00:38:23.048544 containerd[1470]: time="2026-01-17T00:38:23.048374148Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 17 00:38:23.050406 containerd[1470]: time="2026-01-17T00:38:23.050169306Z" level=info msg="Start subscribing containerd event" Jan 17 00:38:23.050575 containerd[1470]: time="2026-01-17T00:38:23.050556570Z" level=info msg="Start recovering state" Jan 17 00:38:23.052789 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 17 00:38:23.053709 containerd[1470]: time="2026-01-17T00:38:23.053634377Z" level=info msg="Start event monitor" Jan 17 00:38:23.053802 containerd[1470]: time="2026-01-17T00:38:23.053728362Z" level=info msg="Start snapshots syncer" Jan 17 00:38:23.053853 containerd[1470]: time="2026-01-17T00:38:23.053798403Z" level=info msg="Start cni network conf syncer for default" Jan 17 00:38:23.053853 containerd[1470]: time="2026-01-17T00:38:23.053842075Z" level=info msg="Start streaming server" Jan 17 00:38:23.055398 containerd[1470]: time="2026-01-17T00:38:23.051299721Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 17 00:38:23.055557 containerd[1470]: time="2026-01-17T00:38:23.055538141Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 17 00:38:23.060305 containerd[1470]: time="2026-01-17T00:38:23.058484616Z" level=info msg="containerd successfully booted in 0.375718s" Jan 17 00:38:23.063829 systemd[1]: Started containerd.service - containerd container runtime. Jan 17 00:38:26.966979 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:38:26.967827 systemd[1]: Reached target multi-user.target - Multi-User System. 
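Note: the CRI plugin configuration dumped above runs runc via io.containerd.runc.v2 with SystemdCgroup:true and the overlayfs snapshotter. A minimal sketch of a containerd 1.7 config fragment that yields that runtime option, assuming the default CRI section names; this is illustrative and was not read from this host's configuration, and in practice it would be merged into the full config rather than replace it:

# Sketch only: enable the systemd cgroup driver for the runc runtime (containerd 1.7 section names).
cat <<'EOF' >/etc/containerd/config.toml
version = 2
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  runtime_type = "io.containerd.runc.v2"
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true
EOF
systemctl restart containerd
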
Jan 17 00:38:26.972928 (kubelet)[1548]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:38:26.973378 systemd[1]: Startup finished in 2.263s (kernel) + 12.089s (initrd) + 28.142s (userspace) = 42.495s. Jan 17 00:38:27.968574 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 17 00:38:27.976821 systemd[1]: Started sshd@0-10.0.0.107:22-10.0.0.1:41998.service - OpenSSH per-connection server daemon (10.0.0.1:41998). Jan 17 00:38:28.717981 sshd[1555]: Accepted publickey for core from 10.0.0.1 port 41998 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:38:28.741707 sshd[1555]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:38:28.806894 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 17 00:38:28.882986 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 17 00:38:29.022045 systemd-logind[1452]: New session 1 of user core. Jan 17 00:38:29.348434 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 17 00:38:29.943860 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 17 00:38:30.038024 (systemd)[1559]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 17 00:38:31.314003 systemd[1559]: Queued start job for default target default.target. Jan 17 00:38:31.361475 systemd[1559]: Created slice app.slice - User Application Slice. Jan 17 00:38:31.361790 systemd[1559]: Reached target paths.target - Paths. Jan 17 00:38:31.361922 systemd[1559]: Reached target timers.target - Timers. Jan 17 00:38:31.386967 systemd[1559]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 17 00:38:31.674154 systemd[1559]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 17 00:38:31.678716 systemd[1559]: Reached target sockets.target - Sockets. Jan 17 00:38:31.678746 systemd[1559]: Reached target basic.target - Basic System. Jan 17 00:38:31.678812 systemd[1559]: Reached target default.target - Main User Target. Jan 17 00:38:31.678860 systemd[1559]: Startup finished in 1.577s. Jan 17 00:38:31.680082 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 17 00:38:31.707488 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 17 00:38:32.113418 systemd[1]: Started sshd@1-10.0.0.107:22-10.0.0.1:42008.service - OpenSSH per-connection server daemon (10.0.0.1:42008). Jan 17 00:38:32.476900 sshd[1571]: Accepted publickey for core from 10.0.0.1 port 42008 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:38:32.484346 sshd[1571]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:38:32.499155 systemd-logind[1452]: New session 2 of user core. Jan 17 00:38:32.519593 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 17 00:38:32.902587 sshd[1571]: pam_unix(sshd:session): session closed for user core Jan 17 00:38:32.975424 systemd[1]: Started sshd@2-10.0.0.107:22-10.0.0.1:39000.service - OpenSSH per-connection server daemon (10.0.0.1:39000). Jan 17 00:38:32.976587 systemd[1]: sshd@1-10.0.0.107:22-10.0.0.1:42008.service: Deactivated successfully. Jan 17 00:38:32.982615 systemd[1]: session-2.scope: Deactivated successfully. Jan 17 00:38:32.987040 systemd-logind[1452]: Session 2 logged out. Waiting for processes to exit. Jan 17 00:38:32.997748 systemd-logind[1452]: Removed session 2. 
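Note: the "Startup finished in 2.263s (kernel) + 12.089s (initrd) + 28.142s (userspace)" entry above is systemd's boot-time accounting. A short sketch of breaking that figure down further on the same host:

systemd-analyze                                   # kernel/initrd/userspace split, as logged above
systemd-analyze blame                             # per-unit startup times, slowest first
systemd-analyze critical-chain multi-user.target  # the chain of units that gated multi-user.target
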
Jan 17 00:38:33.225547 sshd[1576]: Accepted publickey for core from 10.0.0.1 port 39000 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:38:33.224444 sshd[1576]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:38:33.271341 systemd-logind[1452]: New session 3 of user core. Jan 17 00:38:33.274719 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 17 00:38:33.640396 sshd[1576]: pam_unix(sshd:session): session closed for user core Jan 17 00:38:33.687798 systemd[1]: sshd@2-10.0.0.107:22-10.0.0.1:39000.service: Deactivated successfully. Jan 17 00:38:33.699806 systemd[1]: session-3.scope: Deactivated successfully. Jan 17 00:38:33.705797 systemd-logind[1452]: Session 3 logged out. Waiting for processes to exit. Jan 17 00:38:33.741690 systemd[1]: Started sshd@3-10.0.0.107:22-10.0.0.1:39016.service - OpenSSH per-connection server daemon (10.0.0.1:39016). Jan 17 00:38:33.744665 systemd-logind[1452]: Removed session 3. Jan 17 00:38:33.778917 kubelet[1548]: E0117 00:38:33.778755 1548 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:38:33.794668 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:38:33.794999 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 00:38:33.825455 systemd[1]: kubelet.service: Consumed 9.662s CPU time. Jan 17 00:38:33.950465 sshd[1585]: Accepted publickey for core from 10.0.0.1 port 39016 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:38:33.957183 sshd[1585]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:38:33.991717 systemd-logind[1452]: New session 4 of user core. Jan 17 00:38:34.002780 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 17 00:38:34.142624 sshd[1585]: pam_unix(sshd:session): session closed for user core Jan 17 00:38:34.158029 systemd[1]: sshd@3-10.0.0.107:22-10.0.0.1:39016.service: Deactivated successfully. Jan 17 00:38:34.179686 systemd[1]: session-4.scope: Deactivated successfully. Jan 17 00:38:34.192029 systemd-logind[1452]: Session 4 logged out. Waiting for processes to exit. Jan 17 00:38:34.225690 systemd[1]: Started sshd@4-10.0.0.107:22-10.0.0.1:39020.service - OpenSSH per-connection server daemon (10.0.0.1:39020). Jan 17 00:38:34.234729 systemd-logind[1452]: Removed session 4. Jan 17 00:38:34.305116 sshd[1593]: Accepted publickey for core from 10.0.0.1 port 39020 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:38:34.315720 sshd[1593]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:38:34.699122 systemd-logind[1452]: New session 5 of user core. Jan 17 00:38:34.714927 systemd[1]: Started session-5.scope - Session 5 of User core. 
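Note: kubelet exits above (and keeps restarting later in this log) because /var/lib/kubelet/config.yaml does not exist yet; on a kubeadm-managed node that file is normally written by kubeadm init or kubeadm join. A minimal sketch of such a file, assuming the systemd cgroup driver and containerd socket seen elsewhere in this log; the values are illustrative, not recovered from this host:

# Sketch only: kubeadm normally generates this file; the fields below are assumptions.
mkdir -p /var/lib/kubelet
cat <<'EOF' >/var/lib/kubelet/config.yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
EOF
systemctl restart kubelet
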
Jan 17 00:38:34.916179 sudo[1596]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 17 00:38:34.916846 sudo[1596]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 00:38:34.980164 sudo[1596]: pam_unix(sudo:session): session closed for user root Jan 17 00:38:34.998462 sshd[1593]: pam_unix(sshd:session): session closed for user core Jan 17 00:38:35.020966 systemd[1]: sshd@4-10.0.0.107:22-10.0.0.1:39020.service: Deactivated successfully. Jan 17 00:38:35.028957 systemd[1]: session-5.scope: Deactivated successfully. Jan 17 00:38:35.050843 systemd-logind[1452]: Session 5 logged out. Waiting for processes to exit. Jan 17 00:38:35.094552 systemd[1]: Started sshd@5-10.0.0.107:22-10.0.0.1:39024.service - OpenSSH per-connection server daemon (10.0.0.1:39024). Jan 17 00:38:35.102594 systemd-logind[1452]: Removed session 5. Jan 17 00:38:35.265858 sshd[1601]: Accepted publickey for core from 10.0.0.1 port 39024 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:38:35.276866 sshd[1601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:38:35.312624 systemd-logind[1452]: New session 6 of user core. Jan 17 00:38:35.325683 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 17 00:38:35.533689 sudo[1605]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 17 00:38:35.534896 sudo[1605]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 00:38:35.625583 sudo[1605]: pam_unix(sudo:session): session closed for user root Jan 17 00:38:35.669071 sudo[1604]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 17 00:38:35.671770 sudo[1604]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 00:38:35.799918 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 17 00:38:35.920509 auditctl[1608]: No rules Jan 17 00:38:35.977175 systemd[1]: audit-rules.service: Deactivated successfully. Jan 17 00:38:35.980780 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 17 00:38:36.072750 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 17 00:38:36.298389 augenrules[1626]: No rules Jan 17 00:38:36.307326 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 17 00:38:36.320356 sudo[1604]: pam_unix(sudo:session): session closed for user root Jan 17 00:38:36.347836 sshd[1601]: pam_unix(sshd:session): session closed for user core Jan 17 00:38:36.420403 systemd[1]: Started sshd@6-10.0.0.107:22-10.0.0.1:39040.service - OpenSSH per-connection server daemon (10.0.0.1:39040). Jan 17 00:38:36.421420 systemd[1]: sshd@5-10.0.0.107:22-10.0.0.1:39024.service: Deactivated successfully. Jan 17 00:38:36.423474 systemd[1]: session-6.scope: Deactivated successfully. Jan 17 00:38:36.430719 systemd-logind[1452]: Session 6 logged out. Waiting for processes to exit. Jan 17 00:38:36.445552 systemd-logind[1452]: Removed session 6. Jan 17 00:38:36.519162 sshd[1632]: Accepted publickey for core from 10.0.0.1 port 39040 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:38:36.583455 sshd[1632]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:38:36.625052 systemd-logind[1452]: New session 7 of user core. Jan 17 00:38:36.640872 systemd[1]: Started session-7.scope - Session 7 of User core. 
Jan 17 00:38:36.790166 sudo[1637]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 17 00:38:36.795021 sudo[1637]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 00:38:43.682126 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 17 00:38:43.688075 (dockerd)[1656]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 17 00:38:43.872992 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 17 00:38:43.959120 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:38:47.076994 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:38:47.122089 (kubelet)[1669]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:38:47.676394 kubelet[1669]: E0117 00:38:47.675745 1669 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:38:47.685568 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:38:47.721081 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 00:38:47.947624 systemd[1]: kubelet.service: Consumed 2.690s CPU time. Jan 17 00:38:48.436306 dockerd[1656]: time="2026-01-17T00:38:48.435574469Z" level=info msg="Starting up" Jan 17 00:38:49.512758 dockerd[1656]: time="2026-01-17T00:38:49.511607019Z" level=info msg="Loading containers: start." Jan 17 00:38:50.302454 kernel: Initializing XFRM netlink socket Jan 17 00:38:50.824942 systemd-networkd[1391]: docker0: Link UP Jan 17 00:38:50.894148 dockerd[1656]: time="2026-01-17T00:38:50.893663418Z" level=info msg="Loading containers: done." Jan 17 00:38:51.010619 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1606876502-merged.mount: Deactivated successfully. Jan 17 00:38:51.047342 dockerd[1656]: time="2026-01-17T00:38:51.046604765Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 17 00:38:51.047342 dockerd[1656]: time="2026-01-17T00:38:51.046797961Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 17 00:38:51.049623 dockerd[1656]: time="2026-01-17T00:38:51.047064569Z" level=info msg="Daemon has completed initialization" Jan 17 00:38:51.190281 dockerd[1656]: time="2026-01-17T00:38:51.190002259Z" level=info msg="API listen on /run/docker.sock" Jan 17 00:38:51.196603 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 17 00:38:53.547829 containerd[1470]: time="2026-01-17T00:38:53.547502073Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\"" Jan 17 00:38:54.831076 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4262101657.mount: Deactivated successfully. Jan 17 00:38:57.873736 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 17 00:38:57.892933 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 17 00:38:58.535950 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:38:58.571409 (kubelet)[1888]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:38:58.842567 kubelet[1888]: E0117 00:38:58.841387 1888 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:38:58.848506 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:38:58.848843 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 00:39:00.668271 containerd[1470]: time="2026-01-17T00:39:00.667272712Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.7: active requests=0, bytes read=30114712" Jan 17 00:39:00.678319 containerd[1470]: time="2026-01-17T00:39:00.671491468Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:39:00.681792 containerd[1470]: time="2026-01-17T00:39:00.679409851Z" level=info msg="ImageCreate event name:\"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:39:00.701443 containerd[1470]: time="2026-01-17T00:39:00.700977343Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:39:00.708593 containerd[1470]: time="2026-01-17T00:39:00.706911482Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.7\" with image id \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\", size \"30111311\" in 7.159311182s" Jan 17 00:39:00.708593 containerd[1470]: time="2026-01-17T00:39:00.706953588Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\" returns image reference \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\"" Jan 17 00:39:00.719143 containerd[1470]: time="2026-01-17T00:39:00.716761503Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\"" Jan 17 00:39:04.500656 update_engine[1460]: I20260117 00:39:04.495500 1460 update_attempter.cc:509] Updating boot flags... Jan 17 00:39:05.310067 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1908) Jan 17 00:39:05.747418 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1907) Jan 17 00:39:06.034344 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1907) Jan 17 00:39:08.909938 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 17 00:39:08.950945 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:39:09.708787 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 17 00:39:09.727934 containerd[1470]: time="2026-01-17T00:39:09.727882234Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:39:09.728002 (kubelet)[1924]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:39:09.733093 containerd[1470]: time="2026-01-17T00:39:09.732727985Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.7: active requests=0, bytes read=26016781" Jan 17 00:39:09.736000 containerd[1470]: time="2026-01-17T00:39:09.735891847Z" level=info msg="ImageCreate event name:\"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:39:09.766305 containerd[1470]: time="2026-01-17T00:39:09.766019296Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:39:09.769313 containerd[1470]: time="2026-01-17T00:39:09.768116284Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.7\" with image id \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\", size \"27673815\" in 9.046384041s" Jan 17 00:39:09.769313 containerd[1470]: time="2026-01-17T00:39:09.768581091Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\" returns image reference \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\"" Jan 17 00:39:09.777118 containerd[1470]: time="2026-01-17T00:39:09.777042110Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\"" Jan 17 00:39:10.204138 kubelet[1924]: E0117 00:39:10.203776 1924 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:39:10.213650 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:39:10.213981 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 17 00:39:14.895074 containerd[1470]: time="2026-01-17T00:39:14.894737726Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:39:14.916948 containerd[1470]: time="2026-01-17T00:39:14.914474084Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.7: active requests=0, bytes read=20158102" Jan 17 00:39:14.923769 containerd[1470]: time="2026-01-17T00:39:14.923540762Z" level=info msg="ImageCreate event name:\"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:39:14.933752 containerd[1470]: time="2026-01-17T00:39:14.931917195Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:39:14.943562 containerd[1470]: time="2026-01-17T00:39:14.940916701Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.7\" with image id \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\", size \"21815154\" in 5.163792069s" Jan 17 00:39:14.943562 containerd[1470]: time="2026-01-17T00:39:14.941046762Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\" returns image reference \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\"" Jan 17 00:39:14.957865 containerd[1470]: time="2026-01-17T00:39:14.957075478Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\"" Jan 17 00:39:19.300571 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1573435314.mount: Deactivated successfully. Jan 17 00:39:20.370814 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 17 00:39:20.387094 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:39:20.845654 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:39:20.853533 (kubelet)[1953]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:39:21.286755 kubelet[1953]: E0117 00:39:21.286412 1953 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:39:21.297523 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:39:21.297784 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 17 00:39:22.966471 containerd[1470]: time="2026-01-17T00:39:22.965805336Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:39:22.971613 containerd[1470]: time="2026-01-17T00:39:22.971517290Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.7: active requests=0, bytes read=31930096" Jan 17 00:39:22.976612 containerd[1470]: time="2026-01-17T00:39:22.975047482Z" level=info msg="ImageCreate event name:\"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:39:22.990571 containerd[1470]: time="2026-01-17T00:39:22.988544714Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:39:22.992006 containerd[1470]: time="2026-01-17T00:39:22.990962645Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.7\" with image id \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\", repo tag \"registry.k8s.io/kube-proxy:v1.33.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\", size \"31929115\" in 8.033828999s" Jan 17 00:39:22.992006 containerd[1470]: time="2026-01-17T00:39:22.991011214Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\" returns image reference \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\"" Jan 17 00:39:23.006060 containerd[1470]: time="2026-01-17T00:39:23.005478818Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Jan 17 00:39:23.669035 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount64167720.mount: Deactivated successfully. 
Jan 17 00:39:25.163939 containerd[1470]: time="2026-01-17T00:39:25.163781739Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:39:25.168961 containerd[1470]: time="2026-01-17T00:39:25.168721934Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Jan 17 00:39:25.171663 containerd[1470]: time="2026-01-17T00:39:25.171579909Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:39:25.177851 containerd[1470]: time="2026-01-17T00:39:25.177412264Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:39:25.179305 containerd[1470]: time="2026-01-17T00:39:25.178873669Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 2.173302227s" Jan 17 00:39:25.179305 containerd[1470]: time="2026-01-17T00:39:25.178945794Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Jan 17 00:39:25.180829 containerd[1470]: time="2026-01-17T00:39:25.180408623Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 17 00:39:25.761621 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount264280098.mount: Deactivated successfully. 
Jan 17 00:39:25.781916 containerd[1470]: time="2026-01-17T00:39:25.781770092Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:39:25.785445 containerd[1470]: time="2026-01-17T00:39:25.785188257Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jan 17 00:39:25.787719 containerd[1470]: time="2026-01-17T00:39:25.787617873Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:39:25.798918 containerd[1470]: time="2026-01-17T00:39:25.798800269Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:39:25.800702 containerd[1470]: time="2026-01-17T00:39:25.799859232Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 619.386763ms" Jan 17 00:39:25.800702 containerd[1470]: time="2026-01-17T00:39:25.799925756Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 17 00:39:25.801510 containerd[1470]: time="2026-01-17T00:39:25.801400363Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Jan 17 00:39:26.626787 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4109837746.mount: Deactivated successfully. Jan 17 00:39:31.392387 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jan 17 00:39:31.428995 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:39:32.276534 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:39:32.289288 (kubelet)[2076]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:39:32.772817 kubelet[2076]: E0117 00:39:32.772499 2076 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:39:32.782434 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:39:32.785821 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 00:39:32.787096 systemd[1]: kubelet.service: Consumed 1.054s CPU time. 
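Note: the pulls above fetch the v1.33.7 control-plane images plus pause:3.10, coredns v1.12.0 and (just starting here) etcd 3.5.21-0 one at a time through containerd. A sketch of pre-pulling the same set by hand, assuming kubeadm and crictl are installed on the node; versions are taken from the log:

kubeadm config images pull --kubernetes-version v1.33.7   # apiserver, controller-manager, scheduler, proxy, coredns, pause, etcd
crictl pull registry.k8s.io/kube-apiserver:v1.33.7        # or pull a single image directly through the CRI
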
Jan 17 00:39:37.466384 containerd[1470]: time="2026-01-17T00:39:37.464935930Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:39:37.466384 containerd[1470]: time="2026-01-17T00:39:37.468621369Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58926227" Jan 17 00:39:37.475096 containerd[1470]: time="2026-01-17T00:39:37.474869502Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:39:37.491709 containerd[1470]: time="2026-01-17T00:39:37.491439528Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:39:37.492601 containerd[1470]: time="2026-01-17T00:39:37.492507769Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 11.691046772s" Jan 17 00:39:37.492601 containerd[1470]: time="2026-01-17T00:39:37.492573652Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Jan 17 00:39:42.485111 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:39:42.486901 systemd[1]: kubelet.service: Consumed 1.054s CPU time. Jan 17 00:39:42.523434 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:39:42.628929 systemd[1]: Reloading requested from client PID 2121 ('systemctl') (unit session-7.scope)... Jan 17 00:39:42.628952 systemd[1]: Reloading... Jan 17 00:39:42.888802 zram_generator::config[2158]: No configuration found. Jan 17 00:39:43.383882 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 00:39:43.885427 systemd[1]: Reloading finished in 1255 ms. Jan 17 00:39:44.148455 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:39:44.158992 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:39:44.161112 systemd[1]: kubelet.service: Deactivated successfully. Jan 17 00:39:44.161890 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:39:44.183660 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:39:44.929280 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:39:45.004581 (kubelet)[2211]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 17 00:39:45.395975 kubelet[2211]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 00:39:45.395975 kubelet[2211]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. 
Image garbage collector will get sandbox image information from CRI. Jan 17 00:39:45.399329 kubelet[2211]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 00:39:45.399329 kubelet[2211]: I0117 00:39:45.397252 2211 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 17 00:39:46.814245 kubelet[2211]: I0117 00:39:46.813050 2211 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 17 00:39:46.814245 kubelet[2211]: I0117 00:39:46.813665 2211 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 17 00:39:46.819317 kubelet[2211]: I0117 00:39:46.817694 2211 server.go:956] "Client rotation is on, will bootstrap in background" Jan 17 00:39:47.005743 kubelet[2211]: E0117 00:39:47.005678 2211 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.107:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.107:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 17 00:39:47.020329 kubelet[2211]: I0117 00:39:47.013108 2211 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 00:39:47.050525 kubelet[2211]: E0117 00:39:47.050475 2211 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 17 00:39:47.051041 kubelet[2211]: I0117 00:39:47.050773 2211 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 17 00:39:47.087339 kubelet[2211]: I0117 00:39:47.084630 2211 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 17 00:39:47.093896 kubelet[2211]: I0117 00:39:47.092052 2211 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 17 00:39:47.093896 kubelet[2211]: I0117 00:39:47.092936 2211 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 17 00:39:47.096580 kubelet[2211]: I0117 00:39:47.095303 2211 topology_manager.go:138] "Creating topology manager with none policy" Jan 17 00:39:47.096580 kubelet[2211]: I0117 00:39:47.095358 2211 container_manager_linux.go:303] "Creating device plugin manager" Jan 17 00:39:47.096580 kubelet[2211]: I0117 00:39:47.096041 2211 state_mem.go:36] "Initialized new in-memory state store" Jan 17 00:39:47.145557 kubelet[2211]: I0117 00:39:47.137929 2211 kubelet.go:480] "Attempting to sync node with API server" Jan 17 00:39:47.151431 kubelet[2211]: I0117 00:39:47.147417 2211 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 17 00:39:47.151431 kubelet[2211]: I0117 00:39:47.147786 2211 kubelet.go:386] "Adding apiserver pod source" Jan 17 00:39:47.151431 kubelet[2211]: I0117 00:39:47.147929 2211 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 17 00:39:47.216372 kubelet[2211]: E0117 00:39:47.216287 2211 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.107:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.107:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 17 00:39:47.221509 kubelet[2211]: E0117 00:39:47.216725 2211 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.107:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.107:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 17 
00:39:47.248107 kubelet[2211]: I0117 00:39:47.246419 2211 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 17 00:39:47.248107 kubelet[2211]: I0117 00:39:47.247751 2211 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 17 00:39:47.252056 kubelet[2211]: W0117 00:39:47.251018 2211 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 17 00:39:47.261023 kubelet[2211]: I0117 00:39:47.260916 2211 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 17 00:39:47.261512 kubelet[2211]: I0117 00:39:47.261154 2211 server.go:1289] "Started kubelet" Jan 17 00:39:47.262562 kubelet[2211]: I0117 00:39:47.261953 2211 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 17 00:39:47.265904 kubelet[2211]: I0117 00:39:47.264969 2211 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 17 00:39:47.267295 kubelet[2211]: I0117 00:39:47.266390 2211 server.go:317] "Adding debug handlers to kubelet server" Jan 17 00:39:47.275151 kubelet[2211]: I0117 00:39:47.274984 2211 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 17 00:39:47.277433 kubelet[2211]: I0117 00:39:47.276300 2211 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 17 00:39:47.277433 kubelet[2211]: I0117 00:39:47.276971 2211 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 17 00:39:47.281487 kubelet[2211]: E0117 00:39:47.279299 2211 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 17 00:39:47.281487 kubelet[2211]: I0117 00:39:47.279427 2211 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 17 00:39:47.281487 kubelet[2211]: I0117 00:39:47.279828 2211 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 17 00:39:47.281487 kubelet[2211]: I0117 00:39:47.280030 2211 reconciler.go:26] "Reconciler: start to sync state" Jan 17 00:39:47.281487 kubelet[2211]: E0117 00:39:47.280685 2211 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.107:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.107:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 17 00:39:47.281487 kubelet[2211]: E0117 00:39:47.281287 2211 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.107:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.107:6443: connect: connection refused" interval="200ms" Jan 17 00:39:47.283942 kubelet[2211]: E0117 00:39:47.283773 2211 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 17 00:39:47.287979 kubelet[2211]: I0117 00:39:47.287878 2211 factory.go:223] Registration of the containerd container factory successfully Jan 17 00:39:47.287979 kubelet[2211]: I0117 00:39:47.287909 2211 factory.go:223] Registration of the systemd container factory successfully Jan 17 00:39:47.288132 kubelet[2211]: I0117 00:39:47.288054 2211 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 17 00:39:47.297278 kubelet[2211]: E0117 00:39:47.291552 2211 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.107:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.107:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188b5dc6d6c1da4b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-17 00:39:47.260987979 +0000 UTC m=+2.149045715,LastTimestamp:2026-01-17 00:39:47.260987979 +0000 UTC m=+2.149045715,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 17 00:39:47.352449 kubelet[2211]: I0117 00:39:47.351838 2211 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 17 00:39:47.352449 kubelet[2211]: I0117 00:39:47.351865 2211 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 17 00:39:47.352449 kubelet[2211]: I0117 00:39:47.351895 2211 state_mem.go:36] "Initialized new in-memory state store" Jan 17 00:39:47.380515 kubelet[2211]: E0117 00:39:47.380445 2211 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 17 00:39:47.487931 kubelet[2211]: E0117 00:39:47.487250 2211 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 17 00:39:47.511986 kubelet[2211]: I0117 00:39:47.509352 2211 policy_none.go:49] "None policy: Start" Jan 17 00:39:47.511986 kubelet[2211]: I0117 00:39:47.509609 2211 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 17 00:39:47.511986 kubelet[2211]: I0117 00:39:47.509727 2211 state_mem.go:35] "Initializing new in-memory state store" Jan 17 00:39:47.511986 kubelet[2211]: E0117 00:39:47.511643 2211 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.107:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.107:6443: connect: connection refused" interval="400ms" Jan 17 00:39:47.547407 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 17 00:39:47.563692 kubelet[2211]: I0117 00:39:47.547880 2211 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 17 00:39:47.563692 kubelet[2211]: I0117 00:39:47.556324 2211 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Jan 17 00:39:47.563692 kubelet[2211]: I0117 00:39:47.556529 2211 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 17 00:39:47.563692 kubelet[2211]: I0117 00:39:47.556587 2211 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 17 00:39:47.563692 kubelet[2211]: I0117 00:39:47.556638 2211 kubelet.go:2436] "Starting kubelet main sync loop" Jan 17 00:39:47.563692 kubelet[2211]: E0117 00:39:47.556716 2211 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 17 00:39:47.563692 kubelet[2211]: E0117 00:39:47.557562 2211 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.107:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.107:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 17 00:39:47.573419 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 17 00:39:47.587473 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 17 00:39:47.588541 kubelet[2211]: E0117 00:39:47.588258 2211 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 17 00:39:47.663136 kubelet[2211]: E0117 00:39:47.662837 2211 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 17 00:39:47.667036 kubelet[2211]: E0117 00:39:47.665116 2211 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 17 00:39:47.667036 kubelet[2211]: I0117 00:39:47.666413 2211 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 17 00:39:47.667036 kubelet[2211]: I0117 00:39:47.666492 2211 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 17 00:39:47.678798 kubelet[2211]: I0117 00:39:47.676577 2211 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 17 00:39:47.678798 kubelet[2211]: E0117 00:39:47.677696 2211 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 17 00:39:47.678798 kubelet[2211]: E0117 00:39:47.677920 2211 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 17 00:39:47.776477 kubelet[2211]: I0117 00:39:47.774548 2211 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 17 00:39:47.776477 kubelet[2211]: E0117 00:39:47.775417 2211 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.107:6443/api/v1/nodes\": dial tcp 10.0.0.107:6443: connect: connection refused" node="localhost" Jan 17 00:39:47.902278 kubelet[2211]: I0117 00:39:47.900845 2211 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7a7026cbc7d356671f3fd1d90f3de11f-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"7a7026cbc7d356671f3fd1d90f3de11f\") " pod="kube-system/kube-apiserver-localhost" Jan 17 00:39:47.902278 kubelet[2211]: I0117 00:39:47.900910 2211 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 00:39:47.902278 kubelet[2211]: I0117 00:39:47.900940 2211 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 00:39:47.902278 kubelet[2211]: I0117 00:39:47.900966 2211 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 00:39:47.902278 kubelet[2211]: I0117 00:39:47.900993 2211 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 00:39:47.903005 kubelet[2211]: I0117 00:39:47.901017 2211 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 00:39:47.903005 kubelet[2211]: I0117 00:39:47.901040 2211 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7a7026cbc7d356671f3fd1d90f3de11f-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"7a7026cbc7d356671f3fd1d90f3de11f\") " pod="kube-system/kube-apiserver-localhost" Jan 17 00:39:47.903005 kubelet[2211]: I0117 00:39:47.901066 2211 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7a7026cbc7d356671f3fd1d90f3de11f-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"7a7026cbc7d356671f3fd1d90f3de11f\") " pod="kube-system/kube-apiserver-localhost" Jan 17 00:39:47.904025 systemd[1]: Created slice kubepods-burstable-pod7a7026cbc7d356671f3fd1d90f3de11f.slice - libcontainer container kubepods-burstable-pod7a7026cbc7d356671f3fd1d90f3de11f.slice. Jan 17 00:39:47.915004 kubelet[2211]: E0117 00:39:47.912879 2211 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.107:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.107:6443: connect: connection refused" interval="800ms" Jan 17 00:39:47.932319 kubelet[2211]: E0117 00:39:47.930579 2211 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 17 00:39:47.944934 systemd[1]: Created slice kubepods-burstable-pod66e26b992bcd7ea6fb75e339cf7a3f7d.slice - libcontainer container kubepods-burstable-pod66e26b992bcd7ea6fb75e339cf7a3f7d.slice. Jan 17 00:39:47.969817 kubelet[2211]: E0117 00:39:47.969721 2211 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 17 00:39:47.984912 kubelet[2211]: I0117 00:39:47.984877 2211 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 17 00:39:47.985802 kubelet[2211]: E0117 00:39:47.985638 2211 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.107:6443/api/v1/nodes\": dial tcp 10.0.0.107:6443: connect: connection refused" node="localhost" Jan 17 00:39:47.990640 systemd[1]: Created slice kubepods-burstable-pod6e6cfcfb327385445a9bb0d2bc2fd5d4.slice - libcontainer container kubepods-burstable-pod6e6cfcfb327385445a9bb0d2bc2fd5d4.slice. 
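The host-path volumes being attached above (ca-certs, k8s-certs, kubeconfig, usr-share-ca-certificates, flexvolume-dir) come from the static control-plane manifests the kubelet reads from /etc/kubernetes/manifests. As a rough illustration only (not the manifest actually present on this host; the image tag and mount paths are assumptions in line with kubeadm-style defaults), a Go sketch that renders a comparable kube-apiserver static pod:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	dirOrCreate := corev1.HostPathDirectoryOrCreate

	// Helper: a hostPath volume of the kind the reconciler logs as
	// "kubernetes.io/host-path/<pod-uid>-<name>".
	hostPathVol := func(name, path string) corev1.Volume {
		return corev1.Volume{
			Name: name,
			VolumeSource: corev1.VolumeSource{
				HostPath: &corev1.HostPathVolumeSource{Path: path, Type: &dirOrCreate},
			},
		}
	}

	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "kube-apiserver", Namespace: "kube-system"},
		Spec: corev1.PodSpec{
			HostNetwork: true,
			// Without this priority class the mirror pod is rejected, as seen later in the log.
			PriorityClassName: "system-node-critical",
			Containers: []corev1.Container{{
				Name:  "kube-apiserver",
				Image: "registry.k8s.io/kube-apiserver:v1.33.0", // assumed tag
				VolumeMounts: []corev1.VolumeMount{
					{Name: "ca-certs", MountPath: "/etc/ssl/certs", ReadOnly: true},
					{Name: "k8s-certs", MountPath: "/etc/kubernetes/pki", ReadOnly: true},
					{Name: "usr-share-ca-certificates", MountPath: "/usr/share/ca-certificates", ReadOnly: true},
				},
			}},
			Volumes: []corev1.Volume{
				hostPathVol("ca-certs", "/etc/ssl/certs"),
				hostPathVol("k8s-certs", "/etc/kubernetes/pki"),
				hostPathVol("usr-share-ca-certificates", "/usr/share/ca-certificates"),
			},
		},
	}

	out, err := yaml.Marshal(pod)
	if err != nil {
		panic(err)
	}
	// Writing this YAML into the static pod path is what makes the kubelet create the
	// pod and report the VerifyControllerAttachedVolume operations seen above.
	fmt.Println(string(out))
}
```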
Jan 17 00:39:48.000747 kubelet[2211]: E0117 00:39:48.000655 2211 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 17 00:39:48.002635 kubelet[2211]: I0117 00:39:48.002501 2211 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6e6cfcfb327385445a9bb0d2bc2fd5d4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6e6cfcfb327385445a9bb0d2bc2fd5d4\") " pod="kube-system/kube-scheduler-localhost" Jan 17 00:39:48.232609 kubelet[2211]: E0117 00:39:48.232386 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:39:48.237006 containerd[1470]: time="2026-01-17T00:39:48.236833990Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:7a7026cbc7d356671f3fd1d90f3de11f,Namespace:kube-system,Attempt:0,}" Jan 17 00:39:48.271828 kubelet[2211]: E0117 00:39:48.271672 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:39:48.276083 containerd[1470]: time="2026-01-17T00:39:48.275123059Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:66e26b992bcd7ea6fb75e339cf7a3f7d,Namespace:kube-system,Attempt:0,}" Jan 17 00:39:48.302702 kubelet[2211]: E0117 00:39:48.301432 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:39:48.309055 containerd[1470]: time="2026-01-17T00:39:48.308925782Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6e6cfcfb327385445a9bb0d2bc2fd5d4,Namespace:kube-system,Attempt:0,}" Jan 17 00:39:48.341955 kubelet[2211]: E0117 00:39:48.341725 2211 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.107:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.107:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 17 00:39:48.469305 kubelet[2211]: I0117 00:39:48.468863 2211 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 17 00:39:48.473793 kubelet[2211]: E0117 00:39:48.473574 2211 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.107:6443/api/v1/nodes\": dial tcp 10.0.0.107:6443: connect: connection refused" node="localhost" Jan 17 00:39:48.528430 kubelet[2211]: E0117 00:39:48.527644 2211 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.107:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.107:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 17 00:39:48.716689 kubelet[2211]: E0117 00:39:48.716588 2211 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.107:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.107:6443: connect: connection refused" interval="1.6s" Jan 17 
00:39:48.743285 kubelet[2211]: E0117 00:39:48.743066 2211 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.107:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.107:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 17 00:39:48.794299 kubelet[2211]: E0117 00:39:48.793848 2211 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.107:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.107:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 17 00:39:49.050045 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4282851540.mount: Deactivated successfully. Jan 17 00:39:49.086648 containerd[1470]: time="2026-01-17T00:39:49.084810467Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:39:49.095707 containerd[1470]: time="2026-01-17T00:39:49.095360209Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 17 00:39:49.103066 containerd[1470]: time="2026-01-17T00:39:49.102781471Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:39:49.107278 containerd[1470]: time="2026-01-17T00:39:49.106944621Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 17 00:39:49.110726 containerd[1470]: time="2026-01-17T00:39:49.110381269Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:39:49.115700 containerd[1470]: time="2026-01-17T00:39:49.114979217Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:39:49.118614 containerd[1470]: time="2026-01-17T00:39:49.118553024Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 17 00:39:49.121978 containerd[1470]: time="2026-01-17T00:39:49.121939201Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:39:49.128020 containerd[1470]: time="2026-01-17T00:39:49.127502719Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 852.174357ms" Jan 17 00:39:49.137840 containerd[1470]: time="2026-01-17T00:39:49.137711569Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", 
repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 828.661996ms" Jan 17 00:39:49.144447 containerd[1470]: time="2026-01-17T00:39:49.144345927Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 907.237845ms" Jan 17 00:39:49.163698 kubelet[2211]: E0117 00:39:49.163653 2211 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.107:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.107:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 17 00:39:49.279906 kubelet[2211]: I0117 00:39:49.279805 2211 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 17 00:39:49.281108 kubelet[2211]: E0117 00:39:49.280460 2211 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.107:6443/api/v1/nodes\": dial tcp 10.0.0.107:6443: connect: connection refused" node="localhost" Jan 17 00:39:49.882444 containerd[1470]: time="2026-01-17T00:39:49.881707476Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:39:49.882444 containerd[1470]: time="2026-01-17T00:39:49.881795221Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:39:49.882444 containerd[1470]: time="2026-01-17T00:39:49.881825437Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:39:49.882444 containerd[1470]: time="2026-01-17T00:39:49.882006775Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:39:49.904833 containerd[1470]: time="2026-01-17T00:39:49.904089442Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:39:49.904833 containerd[1470]: time="2026-01-17T00:39:49.904154203Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:39:49.904833 containerd[1470]: time="2026-01-17T00:39:49.904306286Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:39:49.904833 containerd[1470]: time="2026-01-17T00:39:49.904625022Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:39:49.909272 containerd[1470]: time="2026-01-17T00:39:49.906083813Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:39:49.909272 containerd[1470]: time="2026-01-17T00:39:49.906348798Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:39:49.909272 containerd[1470]: time="2026-01-17T00:39:49.906418478Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:39:49.909272 containerd[1470]: time="2026-01-17T00:39:49.906603322Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:39:49.986956 systemd[1]: Started cri-containerd-fbf23ad0c01a1056761e07755c839d6b917232caa9435b70448fbecde22e2161.scope - libcontainer container fbf23ad0c01a1056761e07755c839d6b917232caa9435b70448fbecde22e2161. Jan 17 00:39:50.011630 systemd[1]: Started cri-containerd-791f6f5b523b77ad29308a7764ebaa89385923cc970b7d8d9baa8c88a60c0210.scope - libcontainer container 791f6f5b523b77ad29308a7764ebaa89385923cc970b7d8d9baa8c88a60c0210. Jan 17 00:39:50.017379 systemd[1]: Started cri-containerd-a2789b8c3ce5d7f0b07ed3ce9644ddb00aeefb6f7ef8b6bfcff7a05e0b3a254c.scope - libcontainer container a2789b8c3ce5d7f0b07ed3ce9644ddb00aeefb6f7ef8b6bfcff7a05e0b3a254c. Jan 17 00:39:50.295545 containerd[1470]: time="2026-01-17T00:39:50.293816087Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:7a7026cbc7d356671f3fd1d90f3de11f,Namespace:kube-system,Attempt:0,} returns sandbox id \"791f6f5b523b77ad29308a7764ebaa89385923cc970b7d8d9baa8c88a60c0210\"" Jan 17 00:39:50.302639 kubelet[2211]: E0117 00:39:50.302604 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:39:50.307648 containerd[1470]: time="2026-01-17T00:39:50.307464552Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6e6cfcfb327385445a9bb0d2bc2fd5d4,Namespace:kube-system,Attempt:0,} returns sandbox id \"fbf23ad0c01a1056761e07755c839d6b917232caa9435b70448fbecde22e2161\"" Jan 17 00:39:50.308383 kubelet[2211]: E0117 00:39:50.308362 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:39:50.318335 kubelet[2211]: E0117 00:39:50.318098 2211 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.107:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.107:6443: connect: connection refused" interval="3.2s" Jan 17 00:39:50.337314 containerd[1470]: time="2026-01-17T00:39:50.337060502Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:66e26b992bcd7ea6fb75e339cf7a3f7d,Namespace:kube-system,Attempt:0,} returns sandbox id \"a2789b8c3ce5d7f0b07ed3ce9644ddb00aeefb6f7ef8b6bfcff7a05e0b3a254c\"" Jan 17 00:39:50.340494 kubelet[2211]: E0117 00:39:50.340467 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:39:50.362540 containerd[1470]: time="2026-01-17T00:39:50.362383778Z" level=info msg="CreateContainer within sandbox \"791f6f5b523b77ad29308a7764ebaa89385923cc970b7d8d9baa8c88a60c0210\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 17 00:39:50.372325 containerd[1470]: time="2026-01-17T00:39:50.369154373Z" level=info msg="CreateContainer within sandbox 
\"fbf23ad0c01a1056761e07755c839d6b917232caa9435b70448fbecde22e2161\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 17 00:39:50.372698 containerd[1470]: time="2026-01-17T00:39:50.372509279Z" level=info msg="CreateContainer within sandbox \"a2789b8c3ce5d7f0b07ed3ce9644ddb00aeefb6f7ef8b6bfcff7a05e0b3a254c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 17 00:39:50.404391 containerd[1470]: time="2026-01-17T00:39:50.403406475Z" level=info msg="CreateContainer within sandbox \"791f6f5b523b77ad29308a7764ebaa89385923cc970b7d8d9baa8c88a60c0210\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"6a1bbc4fa2a49cada088d0aef23cce40d1ad4545e27ce72ac519a7cda7e3c141\"" Jan 17 00:39:50.406760 containerd[1470]: time="2026-01-17T00:39:50.406594874Z" level=info msg="StartContainer for \"6a1bbc4fa2a49cada088d0aef23cce40d1ad4545e27ce72ac519a7cda7e3c141\"" Jan 17 00:39:50.411071 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1565662845.mount: Deactivated successfully. Jan 17 00:39:50.429597 containerd[1470]: time="2026-01-17T00:39:50.429539785Z" level=info msg="CreateContainer within sandbox \"a2789b8c3ce5d7f0b07ed3ce9644ddb00aeefb6f7ef8b6bfcff7a05e0b3a254c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"0a8744e6b6bf0d088b1b5a6afbaadb49ff6d09ec5bf6d989c8fa1b79b12bb53a\"" Jan 17 00:39:50.433035 containerd[1470]: time="2026-01-17T00:39:50.431557843Z" level=info msg="StartContainer for \"0a8744e6b6bf0d088b1b5a6afbaadb49ff6d09ec5bf6d989c8fa1b79b12bb53a\"" Jan 17 00:39:50.441316 containerd[1470]: time="2026-01-17T00:39:50.440549250Z" level=info msg="CreateContainer within sandbox \"fbf23ad0c01a1056761e07755c839d6b917232caa9435b70448fbecde22e2161\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"aca19e4bedba90e6d905c0b6d945ab3fb3577da5973f9ec63072f42bd2e3e64d\"" Jan 17 00:39:50.444522 containerd[1470]: time="2026-01-17T00:39:50.444491253Z" level=info msg="StartContainer for \"aca19e4bedba90e6d905c0b6d945ab3fb3577da5973f9ec63072f42bd2e3e64d\"" Jan 17 00:39:50.481751 systemd[1]: Started cri-containerd-6a1bbc4fa2a49cada088d0aef23cce40d1ad4545e27ce72ac519a7cda7e3c141.scope - libcontainer container 6a1bbc4fa2a49cada088d0aef23cce40d1ad4545e27ce72ac519a7cda7e3c141. Jan 17 00:39:50.517866 systemd[1]: Started cri-containerd-0a8744e6b6bf0d088b1b5a6afbaadb49ff6d09ec5bf6d989c8fa1b79b12bb53a.scope - libcontainer container 0a8744e6b6bf0d088b1b5a6afbaadb49ff6d09ec5bf6d989c8fa1b79b12bb53a. Jan 17 00:39:50.544419 systemd[1]: Started cri-containerd-aca19e4bedba90e6d905c0b6d945ab3fb3577da5973f9ec63072f42bd2e3e64d.scope - libcontainer container aca19e4bedba90e6d905c0b6d945ab3fb3577da5973f9ec63072f42bd2e3e64d. 
Jan 17 00:39:50.676686 containerd[1470]: time="2026-01-17T00:39:50.676503701Z" level=info msg="StartContainer for \"6a1bbc4fa2a49cada088d0aef23cce40d1ad4545e27ce72ac519a7cda7e3c141\" returns successfully" Jan 17 00:39:50.697610 containerd[1470]: time="2026-01-17T00:39:50.697482332Z" level=info msg="StartContainer for \"aca19e4bedba90e6d905c0b6d945ab3fb3577da5973f9ec63072f42bd2e3e64d\" returns successfully" Jan 17 00:39:50.754726 containerd[1470]: time="2026-01-17T00:39:50.750011808Z" level=info msg="StartContainer for \"0a8744e6b6bf0d088b1b5a6afbaadb49ff6d09ec5bf6d989c8fa1b79b12bb53a\" returns successfully" Jan 17 00:39:50.884510 kubelet[2211]: I0117 00:39:50.884442 2211 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 17 00:39:51.639610 kubelet[2211]: E0117 00:39:51.639515 2211 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 17 00:39:51.640353 kubelet[2211]: E0117 00:39:51.639688 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:39:51.691733 kubelet[2211]: E0117 00:39:51.691349 2211 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 17 00:39:51.709942 kubelet[2211]: E0117 00:39:51.705306 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:39:51.737851 kubelet[2211]: E0117 00:39:51.737484 2211 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 17 00:39:51.740364 kubelet[2211]: E0117 00:39:51.739048 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:39:52.737005 kubelet[2211]: E0117 00:39:52.736057 2211 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 17 00:39:52.742281 kubelet[2211]: E0117 00:39:52.742123 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:39:52.743305 kubelet[2211]: E0117 00:39:52.743279 2211 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 17 00:39:52.750290 kubelet[2211]: E0117 00:39:52.749314 2211 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 17 00:39:52.750632 kubelet[2211]: E0117 00:39:52.750611 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:39:52.751439 kubelet[2211]: E0117 00:39:52.751365 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:39:53.804705 kubelet[2211]: E0117 00:39:53.800104 2211 kubelet.go:3305] "No 
need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 17 00:39:53.809422 kubelet[2211]: E0117 00:39:53.807943 2211 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 17 00:39:53.809422 kubelet[2211]: E0117 00:39:53.808296 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:39:53.822266 kubelet[2211]: E0117 00:39:53.819290 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:39:54.789625 kubelet[2211]: E0117 00:39:54.789263 2211 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 17 00:39:54.791745 kubelet[2211]: E0117 00:39:54.791314 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:39:55.924675 kubelet[2211]: E0117 00:39:55.924162 2211 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 17 00:39:55.926919 kubelet[2211]: E0117 00:39:55.926806 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:39:57.681762 kubelet[2211]: E0117 00:39:57.678586 2211 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 17 00:39:58.344188 kubelet[2211]: I0117 00:39:58.338872 2211 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 17 00:39:58.344188 kubelet[2211]: E0117 00:39:58.339030 2211 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jan 17 00:39:58.382921 kubelet[2211]: I0117 00:39:58.382516 2211 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 17 00:39:58.389270 kubelet[2211]: E0117 00:39:58.386335 2211 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.188b5dc6d6c1da4b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-17 00:39:47.260987979 +0000 UTC m=+2.149045715,LastTimestamp:2026-01-17 00:39:47.260987979 +0000 UTC m=+2.149045715,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 17 00:39:58.525644 kubelet[2211]: E0117 00:39:58.525389 2211 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.188b5dc6d81d41b9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-01-17 00:39:47.283755449 +0000 UTC m=+2.171813207,LastTimestamp:2026-01-17 00:39:47.283755449 +0000 UTC m=+2.171813207,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 17 00:39:58.588641 kubelet[2211]: E0117 00:39:58.588420 2211 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-node-lease\" not found" interval="6.4s" Jan 17 00:39:58.598668 kubelet[2211]: E0117 00:39:58.597589 2211 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jan 17 00:39:58.598668 kubelet[2211]: I0117 00:39:58.597690 2211 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 17 00:39:58.611941 kubelet[2211]: E0117 00:39:58.611800 2211 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jan 17 00:39:58.615290 kubelet[2211]: I0117 00:39:58.612386 2211 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 17 00:39:58.621125 kubelet[2211]: E0117 00:39:58.620798 2211 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jan 17 00:39:59.195350 kubelet[2211]: I0117 00:39:59.194960 2211 apiserver.go:52] "Watching apiserver" Jan 17 00:39:59.290436 kubelet[2211]: I0117 00:39:59.289489 2211 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 17 00:40:00.163472 kubelet[2211]: I0117 00:40:00.163293 2211 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 17 00:40:00.192255 kubelet[2211]: E0117 00:40:00.189504 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:40:00.242378 kubelet[2211]: E0117 00:40:00.241769 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:40:00.387080 kubelet[2211]: I0117 00:40:00.386869 2211 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=0.386723112 podStartE2EDuration="386.723112ms" podCreationTimestamp="2026-01-17 00:40:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:40:00.336128328 +0000 UTC m=+15.224186074" watchObservedRunningTime="2026-01-17 00:40:00.386723112 +0000 UTC m=+15.274780989" Jan 17 00:40:01.248944 kubelet[2211]: E0117 00:40:01.247587 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, 
the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:40:04.810114 systemd[1]: Reloading requested from client PID 2502 ('systemctl') (unit session-7.scope)... Jan 17 00:40:04.810173 systemd[1]: Reloading... Jan 17 00:40:05.122285 zram_generator::config[2541]: No configuration found. Jan 17 00:40:05.421022 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 00:40:05.712544 systemd[1]: Reloading finished in 901 ms. Jan 17 00:40:05.819143 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:40:05.848361 systemd[1]: kubelet.service: Deactivated successfully. Jan 17 00:40:05.848727 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:40:05.848806 systemd[1]: kubelet.service: Consumed 4.927s CPU time, 138.2M memory peak, 0B memory swap peak. Jan 17 00:40:05.873424 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:40:06.235498 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:40:06.235892 (kubelet)[2585]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 17 00:40:06.524080 kubelet[2585]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 00:40:06.524080 kubelet[2585]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 17 00:40:06.524080 kubelet[2585]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 00:40:06.524080 kubelet[2585]: I0117 00:40:06.523058 2585 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 17 00:40:06.544109 kubelet[2585]: I0117 00:40:06.544062 2585 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 17 00:40:06.546311 kubelet[2585]: I0117 00:40:06.544377 2585 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 17 00:40:06.546311 kubelet[2585]: I0117 00:40:06.545019 2585 server.go:956] "Client rotation is on, will bootstrap in background" Jan 17 00:40:06.549070 kubelet[2585]: I0117 00:40:06.546836 2585 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 17 00:40:06.559271 kubelet[2585]: I0117 00:40:06.558831 2585 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 00:40:06.572109 kubelet[2585]: E0117 00:40:06.571753 2585 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 17 00:40:06.572831 kubelet[2585]: I0117 00:40:06.572369 2585 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." 
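The deprecation warnings repeated at each kubelet start (--container-runtime-endpoint, --volume-plugin-dir, --pod-infra-container-image) all point at moving settings into the file passed via --config. A hedged sketch of such a KubeletConfiguration, generated from the Go API types in k8s.io/kubelet/config/v1beta1; the concrete values simply mirror what the nodeConfig dump and flexvolume-dir message above report, and the runtime endpoint value is an assumption:

```go
package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	kubeletconfig "k8s.io/kubelet/config/v1beta1"
	"sigs.k8s.io/yaml"
)

func main() {
	cfg := kubeletconfig.KubeletConfiguration{
		TypeMeta: metav1.TypeMeta{
			APIVersion: "kubelet.config.k8s.io/v1beta1",
			Kind:       "KubeletConfiguration",
		},
		// Values mirror the nodeConfig logged at startup on this host.
		CgroupDriver:             "systemd",
		StaticPodPath:            "/etc/kubernetes/manifests",
		VolumePluginDir:          "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/",
		ContainerRuntimeEndpoint: "unix:///run/containerd/containerd.sock", // assumed endpoint
	}

	out, err := yaml.Marshal(cfg)
	if err != nil {
		panic(err)
	}
	// The kubelet would read this file via --config instead of the deprecated flags.
	fmt.Println(string(out))
}
```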
Jan 17 00:40:06.594121 kubelet[2585]: I0117 00:40:06.594087 2585 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 17 00:40:06.594727 kubelet[2585]: I0117 00:40:06.594677 2585 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 17 00:40:06.595095 kubelet[2585]: I0117 00:40:06.594815 2585 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 17 00:40:06.595434 kubelet[2585]: I0117 00:40:06.595416 2585 topology_manager.go:138] "Creating topology manager with none policy" Jan 17 00:40:06.595506 kubelet[2585]: I0117 00:40:06.595496 2585 container_manager_linux.go:303] "Creating device plugin manager" Jan 17 00:40:06.595720 kubelet[2585]: I0117 00:40:06.595707 2585 state_mem.go:36] "Initialized new in-memory state store" Jan 17 00:40:06.596574 kubelet[2585]: I0117 00:40:06.596437 2585 kubelet.go:480] "Attempting to sync node with API server" Jan 17 00:40:06.596574 kubelet[2585]: I0117 00:40:06.596467 2585 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 17 00:40:06.596574 kubelet[2585]: I0117 00:40:06.596500 2585 kubelet.go:386] "Adding apiserver pod source" Jan 17 00:40:06.596574 kubelet[2585]: I0117 00:40:06.596523 2585 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 17 00:40:06.598348 kubelet[2585]: I0117 00:40:06.598298 2585 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 17 00:40:06.600270 kubelet[2585]: I0117 00:40:06.599270 2585 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 17 00:40:06.607295 kubelet[2585]: I0117 00:40:06.606395 2585 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 17 00:40:06.607295 kubelet[2585]: I0117 00:40:06.606478 2585 server.go:1289] "Started kubelet" Jan 17 00:40:06.620437 kubelet[2585]: 
I0117 00:40:06.620325 2585 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 17 00:40:06.625589 kubelet[2585]: I0117 00:40:06.625515 2585 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 17 00:40:06.636293 kubelet[2585]: I0117 00:40:06.627164 2585 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 17 00:40:06.636293 kubelet[2585]: I0117 00:40:06.635566 2585 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 17 00:40:06.636293 kubelet[2585]: I0117 00:40:06.629696 2585 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 17 00:40:06.651057 kubelet[2585]: I0117 00:40:06.647462 2585 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 17 00:40:06.651057 kubelet[2585]: E0117 00:40:06.647986 2585 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 17 00:40:06.667053 kubelet[2585]: I0117 00:40:06.665322 2585 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 17 00:40:06.667053 kubelet[2585]: I0117 00:40:06.666865 2585 reconciler.go:26] "Reconciler: start to sync state" Jan 17 00:40:06.681790 kubelet[2585]: I0117 00:40:06.681667 2585 server.go:317] "Adding debug handlers to kubelet server" Jan 17 00:40:06.682810 kubelet[2585]: E0117 00:40:06.682782 2585 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 17 00:40:06.683188 kubelet[2585]: I0117 00:40:06.683169 2585 factory.go:223] Registration of the systemd container factory successfully Jan 17 00:40:06.685266 kubelet[2585]: I0117 00:40:06.683447 2585 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 17 00:40:06.687585 kubelet[2585]: I0117 00:40:06.687544 2585 factory.go:223] Registration of the containerd container factory successfully Jan 17 00:40:06.688427 sudo[2610]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 17 00:40:06.689049 sudo[2610]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 17 00:40:06.726146 kubelet[2585]: I0117 00:40:06.726081 2585 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 17 00:40:06.731645 kubelet[2585]: I0117 00:40:06.731480 2585 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jan 17 00:40:06.731645 kubelet[2585]: I0117 00:40:06.731507 2585 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 17 00:40:06.731645 kubelet[2585]: I0117 00:40:06.731541 2585 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
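The pod-resources endpoint the kubelet just started serving (unix:/var/lib/kubelet/pod-resources/kubelet.sock) can be queried directly with the published podresources v1 API. A small Go sketch; the socket path is taken from the log line above, and the client is rate-limited by the kubelet as noted in the ratelimit message:

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	podresourcesapi "k8s.io/kubelet/pkg/apis/podresources/v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	conn, err := grpc.Dial("unix:///var/lib/kubelet/pod-resources/kubelet.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	client := podresourcesapi.NewPodResourcesListerClient(conn)
	resp, err := client.List(ctx, &podresourcesapi.ListPodResourcesRequest{})
	if err != nil {
		log.Fatal(err)
	}
	// Prints the pods the kubelet is tracking together with their container count.
	for _, p := range resp.PodResources {
		fmt.Printf("%s/%s: %d container(s)\n", p.Namespace, p.Name, len(p.Containers))
	}
}
```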
Jan 17 00:40:06.731645 kubelet[2585]: I0117 00:40:06.731552 2585 kubelet.go:2436] "Starting kubelet main sync loop" Jan 17 00:40:06.731645 kubelet[2585]: E0117 00:40:06.731615 2585 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 17 00:40:06.807689 kubelet[2585]: I0117 00:40:06.807173 2585 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 17 00:40:06.808385 kubelet[2585]: I0117 00:40:06.808273 2585 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 17 00:40:06.808385 kubelet[2585]: I0117 00:40:06.808327 2585 state_mem.go:36] "Initialized new in-memory state store" Jan 17 00:40:06.810456 kubelet[2585]: I0117 00:40:06.808798 2585 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 17 00:40:06.810456 kubelet[2585]: I0117 00:40:06.808852 2585 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 17 00:40:06.810456 kubelet[2585]: I0117 00:40:06.808880 2585 policy_none.go:49] "None policy: Start" Jan 17 00:40:06.810456 kubelet[2585]: I0117 00:40:06.808895 2585 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 17 00:40:06.810456 kubelet[2585]: I0117 00:40:06.808960 2585 state_mem.go:35] "Initializing new in-memory state store" Jan 17 00:40:06.810456 kubelet[2585]: I0117 00:40:06.809131 2585 state_mem.go:75] "Updated machine memory state" Jan 17 00:40:06.827287 kubelet[2585]: E0117 00:40:06.825570 2585 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 17 00:40:06.827287 kubelet[2585]: I0117 00:40:06.825963 2585 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 17 00:40:06.827287 kubelet[2585]: I0117 00:40:06.825986 2585 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 17 00:40:06.827287 kubelet[2585]: I0117 00:40:06.827102 2585 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 17 00:40:06.830023 kubelet[2585]: E0117 00:40:06.829996 2585 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 17 00:40:06.883632 kubelet[2585]: I0117 00:40:06.883454 2585 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 17 00:40:06.896435 kubelet[2585]: I0117 00:40:06.896403 2585 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 17 00:40:06.906817 kubelet[2585]: I0117 00:40:06.906729 2585 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 17 00:40:06.946180 kubelet[2585]: E0117 00:40:06.946144 2585 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jan 17 00:40:06.980538 kubelet[2585]: I0117 00:40:06.980482 2585 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7a7026cbc7d356671f3fd1d90f3de11f-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"7a7026cbc7d356671f3fd1d90f3de11f\") " pod="kube-system/kube-apiserver-localhost" Jan 17 00:40:06.980958 kubelet[2585]: I0117 00:40:06.980792 2585 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 00:40:06.980958 kubelet[2585]: I0117 00:40:06.980839 2585 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6e6cfcfb327385445a9bb0d2bc2fd5d4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6e6cfcfb327385445a9bb0d2bc2fd5d4\") " pod="kube-system/kube-scheduler-localhost" Jan 17 00:40:06.980958 kubelet[2585]: I0117 00:40:06.980874 2585 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 00:40:06.981473 kubelet[2585]: I0117 00:40:06.981144 2585 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 00:40:06.981473 kubelet[2585]: I0117 00:40:06.981178 2585 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 00:40:06.981473 kubelet[2585]: I0117 00:40:06.981266 2585 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") 
" pod="kube-system/kube-controller-manager-localhost" Jan 17 00:40:06.981473 kubelet[2585]: I0117 00:40:06.981312 2585 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7a7026cbc7d356671f3fd1d90f3de11f-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"7a7026cbc7d356671f3fd1d90f3de11f\") " pod="kube-system/kube-apiserver-localhost" Jan 17 00:40:06.981473 kubelet[2585]: I0117 00:40:06.981339 2585 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7a7026cbc7d356671f3fd1d90f3de11f-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"7a7026cbc7d356671f3fd1d90f3de11f\") " pod="kube-system/kube-apiserver-localhost" Jan 17 00:40:07.037647 kubelet[2585]: I0117 00:40:07.037610 2585 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jan 17 00:40:07.114018 kubelet[2585]: I0117 00:40:07.104632 2585 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jan 17 00:40:07.114018 kubelet[2585]: I0117 00:40:07.108830 2585 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jan 17 00:40:07.239844 kubelet[2585]: E0117 00:40:07.237481 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:40:07.246947 kubelet[2585]: E0117 00:40:07.246467 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:40:07.246947 kubelet[2585]: E0117 00:40:07.246662 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:40:07.598401 kubelet[2585]: I0117 00:40:07.598151 2585 apiserver.go:52] "Watching apiserver" Jan 17 00:40:07.667260 kubelet[2585]: I0117 00:40:07.666312 2585 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 17 00:40:07.778327 kubelet[2585]: I0117 00:40:07.776944 2585 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 17 00:40:07.778327 kubelet[2585]: E0117 00:40:07.777628 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:40:07.778327 kubelet[2585]: I0117 00:40:07.778024 2585 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 17 00:40:07.799585 kubelet[2585]: E0117 00:40:07.799523 2585 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jan 17 00:40:07.799715 kubelet[2585]: E0117 00:40:07.799677 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:40:07.804607 kubelet[2585]: E0117 00:40:07.804177 2585 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 17 00:40:07.804607 kubelet[2585]: E0117 00:40:07.804505 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:40:07.822261 kubelet[2585]: I0117 00:40:07.820760 2585 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.820745067 podStartE2EDuration="1.820745067s" podCreationTimestamp="2026-01-17 00:40:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:40:07.820416149 +0000 UTC m=+1.574229683" watchObservedRunningTime="2026-01-17 00:40:07.820745067 +0000 UTC m=+1.574558601" Jan 17 00:40:07.848696 kubelet[2585]: I0117 00:40:07.848524 2585 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.8485026580000001 podStartE2EDuration="1.848502658s" podCreationTimestamp="2026-01-17 00:40:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:40:07.848156899 +0000 UTC m=+1.601970432" watchObservedRunningTime="2026-01-17 00:40:07.848502658 +0000 UTC m=+1.602316212" Jan 17 00:40:08.011494 sudo[2610]: pam_unix(sudo:session): session closed for user root Jan 17 00:40:08.781688 kubelet[2585]: E0117 00:40:08.781378 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:40:08.783623 kubelet[2585]: E0117 00:40:08.783539 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:40:08.784318 kubelet[2585]: E0117 00:40:08.784012 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:40:09.183982 kubelet[2585]: I0117 00:40:09.181942 2585 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 17 00:40:09.190056 containerd[1470]: time="2026-01-17T00:40:09.189410819Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 17 00:40:09.190638 kubelet[2585]: I0117 00:40:09.190078 2585 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 17 00:40:09.786802 kubelet[2585]: E0117 00:40:09.785825 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:40:10.164009 systemd[1]: Created slice kubepods-besteffort-pod2dd0e4d0_50f1_4ecf_8f89_2a8844cc0b97.slice - libcontainer container kubepods-besteffort-pod2dd0e4d0_50f1_4ecf_8f89_2a8844cc0b97.slice. Jan 17 00:40:10.197688 systemd[1]: Created slice kubepods-burstable-pod750cca3b_f3be_48de_9f36_1cc8e2858e62.slice - libcontainer container kubepods-burstable-pod750cca3b_f3be_48de_9f36_1cc8e2858e62.slice. 
Jan 17 00:40:10.236736 kubelet[2585]: I0117 00:40:10.236643 2585 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjhhb\" (UniqueName: \"kubernetes.io/projected/2dd0e4d0-50f1-4ecf-8f89-2a8844cc0b97-kube-api-access-jjhhb\") pod \"kube-proxy-rdmpk\" (UID: \"2dd0e4d0-50f1-4ecf-8f89-2a8844cc0b97\") " pod="kube-system/kube-proxy-rdmpk" Jan 17 00:40:10.236736 kubelet[2585]: I0117 00:40:10.236688 2585 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/750cca3b-f3be-48de-9f36-1cc8e2858e62-cilium-run\") pod \"cilium-k7tj9\" (UID: \"750cca3b-f3be-48de-9f36-1cc8e2858e62\") " pod="kube-system/cilium-k7tj9" Jan 17 00:40:10.236736 kubelet[2585]: I0117 00:40:10.236713 2585 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/750cca3b-f3be-48de-9f36-1cc8e2858e62-cilium-cgroup\") pod \"cilium-k7tj9\" (UID: \"750cca3b-f3be-48de-9f36-1cc8e2858e62\") " pod="kube-system/cilium-k7tj9" Jan 17 00:40:10.236736 kubelet[2585]: I0117 00:40:10.236735 2585 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/750cca3b-f3be-48de-9f36-1cc8e2858e62-etc-cni-netd\") pod \"cilium-k7tj9\" (UID: \"750cca3b-f3be-48de-9f36-1cc8e2858e62\") " pod="kube-system/cilium-k7tj9" Jan 17 00:40:10.237131 kubelet[2585]: I0117 00:40:10.236753 2585 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/750cca3b-f3be-48de-9f36-1cc8e2858e62-clustermesh-secrets\") pod \"cilium-k7tj9\" (UID: \"750cca3b-f3be-48de-9f36-1cc8e2858e62\") " pod="kube-system/cilium-k7tj9" Jan 17 00:40:10.237131 kubelet[2585]: I0117 00:40:10.236774 2585 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2dd0e4d0-50f1-4ecf-8f89-2a8844cc0b97-lib-modules\") pod \"kube-proxy-rdmpk\" (UID: \"2dd0e4d0-50f1-4ecf-8f89-2a8844cc0b97\") " pod="kube-system/kube-proxy-rdmpk" Jan 17 00:40:10.237131 kubelet[2585]: I0117 00:40:10.236792 2585 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/750cca3b-f3be-48de-9f36-1cc8e2858e62-bpf-maps\") pod \"cilium-k7tj9\" (UID: \"750cca3b-f3be-48de-9f36-1cc8e2858e62\") " pod="kube-system/cilium-k7tj9" Jan 17 00:40:10.237131 kubelet[2585]: I0117 00:40:10.236911 2585 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/750cca3b-f3be-48de-9f36-1cc8e2858e62-cni-path\") pod \"cilium-k7tj9\" (UID: \"750cca3b-f3be-48de-9f36-1cc8e2858e62\") " pod="kube-system/cilium-k7tj9" Jan 17 00:40:10.237131 kubelet[2585]: I0117 00:40:10.236932 2585 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q94vg\" (UniqueName: \"kubernetes.io/projected/750cca3b-f3be-48de-9f36-1cc8e2858e62-kube-api-access-q94vg\") pod \"cilium-k7tj9\" (UID: \"750cca3b-f3be-48de-9f36-1cc8e2858e62\") " pod="kube-system/cilium-k7tj9" Jan 17 00:40:10.237131 kubelet[2585]: I0117 00:40:10.236956 2585 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/750cca3b-f3be-48de-9f36-1cc8e2858e62-xtables-lock\") pod \"cilium-k7tj9\" (UID: \"750cca3b-f3be-48de-9f36-1cc8e2858e62\") " pod="kube-system/cilium-k7tj9" Jan 17 00:40:10.237451 kubelet[2585]: I0117 00:40:10.236974 2585 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/750cca3b-f3be-48de-9f36-1cc8e2858e62-cilium-config-path\") pod \"cilium-k7tj9\" (UID: \"750cca3b-f3be-48de-9f36-1cc8e2858e62\") " pod="kube-system/cilium-k7tj9" Jan 17 00:40:10.237451 kubelet[2585]: I0117 00:40:10.236994 2585 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2dd0e4d0-50f1-4ecf-8f89-2a8844cc0b97-xtables-lock\") pod \"kube-proxy-rdmpk\" (UID: \"2dd0e4d0-50f1-4ecf-8f89-2a8844cc0b97\") " pod="kube-system/kube-proxy-rdmpk" Jan 17 00:40:10.237451 kubelet[2585]: I0117 00:40:10.237012 2585 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/750cca3b-f3be-48de-9f36-1cc8e2858e62-hostproc\") pod \"cilium-k7tj9\" (UID: \"750cca3b-f3be-48de-9f36-1cc8e2858e62\") " pod="kube-system/cilium-k7tj9" Jan 17 00:40:10.237451 kubelet[2585]: I0117 00:40:10.237031 2585 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/750cca3b-f3be-48de-9f36-1cc8e2858e62-lib-modules\") pod \"cilium-k7tj9\" (UID: \"750cca3b-f3be-48de-9f36-1cc8e2858e62\") " pod="kube-system/cilium-k7tj9" Jan 17 00:40:10.237451 kubelet[2585]: I0117 00:40:10.237050 2585 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/750cca3b-f3be-48de-9f36-1cc8e2858e62-host-proc-sys-net\") pod \"cilium-k7tj9\" (UID: \"750cca3b-f3be-48de-9f36-1cc8e2858e62\") " pod="kube-system/cilium-k7tj9" Jan 17 00:40:10.237451 kubelet[2585]: I0117 00:40:10.237068 2585 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/750cca3b-f3be-48de-9f36-1cc8e2858e62-host-proc-sys-kernel\") pod \"cilium-k7tj9\" (UID: \"750cca3b-f3be-48de-9f36-1cc8e2858e62\") " pod="kube-system/cilium-k7tj9" Jan 17 00:40:10.237634 kubelet[2585]: I0117 00:40:10.237087 2585 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/750cca3b-f3be-48de-9f36-1cc8e2858e62-hubble-tls\") pod \"cilium-k7tj9\" (UID: \"750cca3b-f3be-48de-9f36-1cc8e2858e62\") " pod="kube-system/cilium-k7tj9" Jan 17 00:40:10.237634 kubelet[2585]: I0117 00:40:10.237110 2585 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2dd0e4d0-50f1-4ecf-8f89-2a8844cc0b97-kube-proxy\") pod \"kube-proxy-rdmpk\" (UID: \"2dd0e4d0-50f1-4ecf-8f89-2a8844cc0b97\") " pod="kube-system/kube-proxy-rdmpk" Jan 17 00:40:10.425945 systemd[1]: Created slice kubepods-besteffort-pod47237cb4_c2f4_4383_b09c_99b5cc5dae91.slice - libcontainer container kubepods-besteffort-pod47237cb4_c2f4_4383_b09c_99b5cc5dae91.slice. 
Jan 17 00:40:10.440266 kubelet[2585]: I0117 00:40:10.440145 2585 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/47237cb4-c2f4-4383-b09c-99b5cc5dae91-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-2wk75\" (UID: \"47237cb4-c2f4-4383-b09c-99b5cc5dae91\") " pod="kube-system/cilium-operator-6c4d7847fc-2wk75" Jan 17 00:40:10.441353 kubelet[2585]: I0117 00:40:10.441327 2585 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lr4z\" (UniqueName: \"kubernetes.io/projected/47237cb4-c2f4-4383-b09c-99b5cc5dae91-kube-api-access-2lr4z\") pod \"cilium-operator-6c4d7847fc-2wk75\" (UID: \"47237cb4-c2f4-4383-b09c-99b5cc5dae91\") " pod="kube-system/cilium-operator-6c4d7847fc-2wk75" Jan 17 00:40:10.486586 kubelet[2585]: E0117 00:40:10.486540 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:40:10.488377 containerd[1470]: time="2026-01-17T00:40:10.487735926Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rdmpk,Uid:2dd0e4d0-50f1-4ecf-8f89-2a8844cc0b97,Namespace:kube-system,Attempt:0,}" Jan 17 00:40:10.515901 kubelet[2585]: E0117 00:40:10.515381 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:40:10.516549 containerd[1470]: time="2026-01-17T00:40:10.516180863Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-k7tj9,Uid:750cca3b-f3be-48de-9f36-1cc8e2858e62,Namespace:kube-system,Attempt:0,}" Jan 17 00:40:10.591174 containerd[1470]: time="2026-01-17T00:40:10.590756003Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:40:10.591174 containerd[1470]: time="2026-01-17T00:40:10.591031772Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:40:10.591174 containerd[1470]: time="2026-01-17T00:40:10.591058502Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:40:10.591754 containerd[1470]: time="2026-01-17T00:40:10.591397099Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:40:10.617351 containerd[1470]: time="2026-01-17T00:40:10.616137265Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:40:10.617351 containerd[1470]: time="2026-01-17T00:40:10.616353085Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:40:10.617351 containerd[1470]: time="2026-01-17T00:40:10.616379082Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:40:10.617351 containerd[1470]: time="2026-01-17T00:40:10.616522799Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:40:10.669289 systemd[1]: Started cri-containerd-8004943ced862e12136cbd48bedeb5e3cd62b122b53ee3fd0d406b3b4354be2b.scope - libcontainer container 8004943ced862e12136cbd48bedeb5e3cd62b122b53ee3fd0d406b3b4354be2b. Jan 17 00:40:10.682797 systemd[1]: Started cri-containerd-74bcea9deac53f372d67f59381eff1ba3ce4c35837e23ac66c5165d3e8df079f.scope - libcontainer container 74bcea9deac53f372d67f59381eff1ba3ce4c35837e23ac66c5165d3e8df079f. Jan 17 00:40:10.740565 kubelet[2585]: E0117 00:40:10.739074 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:40:10.745096 containerd[1470]: time="2026-01-17T00:40:10.744140331Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-2wk75,Uid:47237cb4-c2f4-4383-b09c-99b5cc5dae91,Namespace:kube-system,Attempt:0,}" Jan 17 00:40:10.787980 containerd[1470]: time="2026-01-17T00:40:10.787652472Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-k7tj9,Uid:750cca3b-f3be-48de-9f36-1cc8e2858e62,Namespace:kube-system,Attempt:0,} returns sandbox id \"74bcea9deac53f372d67f59381eff1ba3ce4c35837e23ac66c5165d3e8df079f\"" Jan 17 00:40:10.793366 kubelet[2585]: E0117 00:40:10.792765 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:40:10.800778 containerd[1470]: time="2026-01-17T00:40:10.799612920Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rdmpk,Uid:2dd0e4d0-50f1-4ecf-8f89-2a8844cc0b97,Namespace:kube-system,Attempt:0,} returns sandbox id \"8004943ced862e12136cbd48bedeb5e3cd62b122b53ee3fd0d406b3b4354be2b\"" Jan 17 00:40:10.806395 containerd[1470]: time="2026-01-17T00:40:10.803838489Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 17 00:40:10.806472 kubelet[2585]: E0117 00:40:10.804588 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:40:10.821185 containerd[1470]: time="2026-01-17T00:40:10.820699407Z" level=info msg="CreateContainer within sandbox \"8004943ced862e12136cbd48bedeb5e3cd62b122b53ee3fd0d406b3b4354be2b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 17 00:40:10.872868 containerd[1470]: time="2026-01-17T00:40:10.870457863Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:40:10.872868 containerd[1470]: time="2026-01-17T00:40:10.870593765Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:40:10.872868 containerd[1470]: time="2026-01-17T00:40:10.870615254Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:40:10.872868 containerd[1470]: time="2026-01-17T00:40:10.870741158Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:40:10.896903 containerd[1470]: time="2026-01-17T00:40:10.892010707Z" level=info msg="CreateContainer within sandbox \"8004943ced862e12136cbd48bedeb5e3cd62b122b53ee3fd0d406b3b4354be2b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"0155ec0e94949fa86cc54e223ad2ea9c4fb97591612f479b4927c4133cde0d33\"" Jan 17 00:40:10.903382 containerd[1470]: time="2026-01-17T00:40:10.900333663Z" level=info msg="StartContainer for \"0155ec0e94949fa86cc54e223ad2ea9c4fb97591612f479b4927c4133cde0d33\"" Jan 17 00:40:10.933726 systemd[1]: Started cri-containerd-df5f12176247337cd16f8d0570dee88f211c9b03d42949c4e56a1d51c2f26beb.scope - libcontainer container df5f12176247337cd16f8d0570dee88f211c9b03d42949c4e56a1d51c2f26beb. Jan 17 00:40:10.991337 systemd[1]: Started cri-containerd-0155ec0e94949fa86cc54e223ad2ea9c4fb97591612f479b4927c4133cde0d33.scope - libcontainer container 0155ec0e94949fa86cc54e223ad2ea9c4fb97591612f479b4927c4133cde0d33. Jan 17 00:40:11.048095 containerd[1470]: time="2026-01-17T00:40:11.048047592Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-2wk75,Uid:47237cb4-c2f4-4383-b09c-99b5cc5dae91,Namespace:kube-system,Attempt:0,} returns sandbox id \"df5f12176247337cd16f8d0570dee88f211c9b03d42949c4e56a1d51c2f26beb\"" Jan 17 00:40:11.052568 kubelet[2585]: E0117 00:40:11.050587 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:40:11.089741 containerd[1470]: time="2026-01-17T00:40:11.089684191Z" level=info msg="StartContainer for \"0155ec0e94949fa86cc54e223ad2ea9c4fb97591612f479b4927c4133cde0d33\" returns successfully" Jan 17 00:40:11.837714 kubelet[2585]: E0117 00:40:11.837580 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:40:13.825888 kubelet[2585]: E0117 00:40:13.823292 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:40:13.899039 kubelet[2585]: E0117 00:40:13.890764 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:40:13.942486 kubelet[2585]: I0117 00:40:13.941766 2585 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-rdmpk" podStartSLOduration=3.941736141 podStartE2EDuration="3.941736141s" podCreationTimestamp="2026-01-17 00:40:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:40:11.877370979 +0000 UTC m=+5.631184542" watchObservedRunningTime="2026-01-17 00:40:13.941736141 +0000 UTC m=+7.695549695" Jan 17 00:40:16.510037 kubelet[2585]: E0117 00:40:16.509014 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:40:17.353305 kubelet[2585]: E0117 00:40:17.353119 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:40:22.564930 
kubelet[2585]: E0117 00:40:22.140426 2585 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.272s" Jan 17 00:40:44.885979 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1931017938.mount: Deactivated successfully. Jan 17 00:40:55.953737 containerd[1470]: time="2026-01-17T00:40:55.953633307Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:40:55.956661 containerd[1470]: time="2026-01-17T00:40:55.956575974Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jan 17 00:40:55.959286 containerd[1470]: time="2026-01-17T00:40:55.959135628Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:40:55.963264 containerd[1470]: time="2026-01-17T00:40:55.963125197Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 45.159192502s" Jan 17 00:40:55.963264 containerd[1470]: time="2026-01-17T00:40:55.963252925Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jan 17 00:40:55.965352 containerd[1470]: time="2026-01-17T00:40:55.965186135Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 17 00:40:55.981879 containerd[1470]: time="2026-01-17T00:40:55.981784219Z" level=info msg="CreateContainer within sandbox \"74bcea9deac53f372d67f59381eff1ba3ce4c35837e23ac66c5165d3e8df079f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 17 00:40:56.030911 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1758524255.mount: Deactivated successfully. Jan 17 00:40:56.050759 containerd[1470]: time="2026-01-17T00:40:56.050366264Z" level=info msg="CreateContainer within sandbox \"74bcea9deac53f372d67f59381eff1ba3ce4c35837e23ac66c5165d3e8df079f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"43a5eebd8659d91d078ebcda0be5615205b701b11656450dc8326147c470142e\"" Jan 17 00:40:56.055924 containerd[1470]: time="2026-01-17T00:40:56.054382159Z" level=info msg="StartContainer for \"43a5eebd8659d91d078ebcda0be5615205b701b11656450dc8326147c470142e\"" Jan 17 00:40:56.170603 systemd[1]: Started cri-containerd-43a5eebd8659d91d078ebcda0be5615205b701b11656450dc8326147c470142e.scope - libcontainer container 43a5eebd8659d91d078ebcda0be5615205b701b11656450dc8326147c470142e. Jan 17 00:40:56.252487 containerd[1470]: time="2026-01-17T00:40:56.252289882Z" level=info msg="StartContainer for \"43a5eebd8659d91d078ebcda0be5615205b701b11656450dc8326147c470142e\" returns successfully" Jan 17 00:40:56.304714 systemd[1]: cri-containerd-43a5eebd8659d91d078ebcda0be5615205b701b11656450dc8326147c470142e.scope: Deactivated successfully. 
Jan 17 00:40:56.646816 containerd[1470]: time="2026-01-17T00:40:56.646136110Z" level=info msg="shim disconnected" id=43a5eebd8659d91d078ebcda0be5615205b701b11656450dc8326147c470142e namespace=k8s.io Jan 17 00:40:56.646816 containerd[1470]: time="2026-01-17T00:40:56.646437952Z" level=warning msg="cleaning up after shim disconnected" id=43a5eebd8659d91d078ebcda0be5615205b701b11656450dc8326147c470142e namespace=k8s.io Jan 17 00:40:56.646816 containerd[1470]: time="2026-01-17T00:40:56.646452148Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:40:57.030441 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-43a5eebd8659d91d078ebcda0be5615205b701b11656450dc8326147c470142e-rootfs.mount: Deactivated successfully. Jan 17 00:40:57.127436 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2798424572.mount: Deactivated successfully. Jan 17 00:40:57.219055 kubelet[2585]: E0117 00:40:57.218917 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:40:57.234095 containerd[1470]: time="2026-01-17T00:40:57.233720022Z" level=info msg="CreateContainer within sandbox \"74bcea9deac53f372d67f59381eff1ba3ce4c35837e23ac66c5165d3e8df079f\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 17 00:40:57.307614 containerd[1470]: time="2026-01-17T00:40:57.305454622Z" level=info msg="CreateContainer within sandbox \"74bcea9deac53f372d67f59381eff1ba3ce4c35837e23ac66c5165d3e8df079f\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"c90727af08677414c62d9f3030f78ac1f9cb92531d0fad34d20e20163fa5bf03\"" Jan 17 00:40:57.313704 containerd[1470]: time="2026-01-17T00:40:57.313271239Z" level=info msg="StartContainer for \"c90727af08677414c62d9f3030f78ac1f9cb92531d0fad34d20e20163fa5bf03\"" Jan 17 00:40:57.395682 systemd[1]: Started cri-containerd-c90727af08677414c62d9f3030f78ac1f9cb92531d0fad34d20e20163fa5bf03.scope - libcontainer container c90727af08677414c62d9f3030f78ac1f9cb92531d0fad34d20e20163fa5bf03. Jan 17 00:40:57.481833 containerd[1470]: time="2026-01-17T00:40:57.481688073Z" level=info msg="StartContainer for \"c90727af08677414c62d9f3030f78ac1f9cb92531d0fad34d20e20163fa5bf03\" returns successfully" Jan 17 00:40:57.508934 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 17 00:40:57.509336 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 17 00:40:57.509467 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 17 00:40:57.523803 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 17 00:40:57.531755 systemd[1]: cri-containerd-c90727af08677414c62d9f3030f78ac1f9cb92531d0fad34d20e20163fa5bf03.scope: Deactivated successfully. Jan 17 00:40:57.647757 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jan 17 00:40:57.723474 containerd[1470]: time="2026-01-17T00:40:57.723131737Z" level=info msg="shim disconnected" id=c90727af08677414c62d9f3030f78ac1f9cb92531d0fad34d20e20163fa5bf03 namespace=k8s.io Jan 17 00:40:57.723474 containerd[1470]: time="2026-01-17T00:40:57.723373646Z" level=warning msg="cleaning up after shim disconnected" id=c90727af08677414c62d9f3030f78ac1f9cb92531d0fad34d20e20163fa5bf03 namespace=k8s.io Jan 17 00:40:57.723474 containerd[1470]: time="2026-01-17T00:40:57.723386831Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:40:58.246437 kubelet[2585]: E0117 00:40:58.242672 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:40:58.292671 containerd[1470]: time="2026-01-17T00:40:58.289981514Z" level=info msg="CreateContainer within sandbox \"74bcea9deac53f372d67f59381eff1ba3ce4c35837e23ac66c5165d3e8df079f\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 17 00:40:58.356045 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1268058337.mount: Deactivated successfully. Jan 17 00:40:58.375315 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount967866606.mount: Deactivated successfully. Jan 17 00:40:58.418129 containerd[1470]: time="2026-01-17T00:40:58.417992099Z" level=info msg="CreateContainer within sandbox \"74bcea9deac53f372d67f59381eff1ba3ce4c35837e23ac66c5165d3e8df079f\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b10fd0c16f20376b2c4944108110e8d6e1af361949731c423cf5d5cb210de5dd\"" Jan 17 00:40:58.429780 containerd[1470]: time="2026-01-17T00:40:58.427678122Z" level=info msg="StartContainer for \"b10fd0c16f20376b2c4944108110e8d6e1af361949731c423cf5d5cb210de5dd\"" Jan 17 00:40:58.546169 systemd[1]: Started cri-containerd-b10fd0c16f20376b2c4944108110e8d6e1af361949731c423cf5d5cb210de5dd.scope - libcontainer container b10fd0c16f20376b2c4944108110e8d6e1af361949731c423cf5d5cb210de5dd. Jan 17 00:40:58.769793 systemd[1]: cri-containerd-b10fd0c16f20376b2c4944108110e8d6e1af361949731c423cf5d5cb210de5dd.scope: Deactivated successfully. 
Jan 17 00:40:58.779133 containerd[1470]: time="2026-01-17T00:40:58.776757068Z" level=info msg="StartContainer for \"b10fd0c16f20376b2c4944108110e8d6e1af361949731c423cf5d5cb210de5dd\" returns successfully" Jan 17 00:40:58.983754 containerd[1470]: time="2026-01-17T00:40:58.983570806Z" level=info msg="shim disconnected" id=b10fd0c16f20376b2c4944108110e8d6e1af361949731c423cf5d5cb210de5dd namespace=k8s.io Jan 17 00:40:58.984008 containerd[1470]: time="2026-01-17T00:40:58.983748487Z" level=warning msg="cleaning up after shim disconnected" id=b10fd0c16f20376b2c4944108110e8d6e1af361949731c423cf5d5cb210de5dd namespace=k8s.io Jan 17 00:40:58.984008 containerd[1470]: time="2026-01-17T00:40:58.983830108Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:40:59.050850 containerd[1470]: time="2026-01-17T00:40:59.050616293Z" level=warning msg="cleanup warnings time=\"2026-01-17T00:40:59Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 17 00:40:59.250597 kubelet[2585]: E0117 00:40:59.249470 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:40:59.286336 containerd[1470]: time="2026-01-17T00:40:59.286281366Z" level=info msg="CreateContainer within sandbox \"74bcea9deac53f372d67f59381eff1ba3ce4c35837e23ac66c5165d3e8df079f\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 17 00:40:59.379003 containerd[1470]: time="2026-01-17T00:40:59.378835928Z" level=info msg="CreateContainer within sandbox \"74bcea9deac53f372d67f59381eff1ba3ce4c35837e23ac66c5165d3e8df079f\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"320e020c86776f11041f34dcd13acc12cbaa5bd29c3295e5a95c18635f61da35\"" Jan 17 00:40:59.379793 containerd[1470]: time="2026-01-17T00:40:59.379575575Z" level=info msg="StartContainer for \"320e020c86776f11041f34dcd13acc12cbaa5bd29c3295e5a95c18635f61da35\"" Jan 17 00:40:59.488749 systemd[1]: Started cri-containerd-320e020c86776f11041f34dcd13acc12cbaa5bd29c3295e5a95c18635f61da35.scope - libcontainer container 320e020c86776f11041f34dcd13acc12cbaa5bd29c3295e5a95c18635f61da35. Jan 17 00:40:59.550433 systemd[1]: cri-containerd-320e020c86776f11041f34dcd13acc12cbaa5bd29c3295e5a95c18635f61da35.scope: Deactivated successfully. 
Jan 17 00:40:59.561869 containerd[1470]: time="2026-01-17T00:40:59.559023046Z" level=info msg="StartContainer for \"320e020c86776f11041f34dcd13acc12cbaa5bd29c3295e5a95c18635f61da35\" returns successfully" Jan 17 00:40:59.680598 containerd[1470]: time="2026-01-17T00:40:59.680312085Z" level=info msg="shim disconnected" id=320e020c86776f11041f34dcd13acc12cbaa5bd29c3295e5a95c18635f61da35 namespace=k8s.io Jan 17 00:40:59.680598 containerd[1470]: time="2026-01-17T00:40:59.680379430Z" level=warning msg="cleaning up after shim disconnected" id=320e020c86776f11041f34dcd13acc12cbaa5bd29c3295e5a95c18635f61da35 namespace=k8s.io Jan 17 00:40:59.680598 containerd[1470]: time="2026-01-17T00:40:59.680388457Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:40:59.984153 containerd[1470]: time="2026-01-17T00:40:59.983987401Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:40:59.990646 containerd[1470]: time="2026-01-17T00:40:59.990135162Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Jan 17 00:40:59.995895 containerd[1470]: time="2026-01-17T00:40:59.995731155Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:41:00.003831 containerd[1470]: time="2026-01-17T00:41:00.003705662Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 4.038391348s" Jan 17 00:41:00.003831 containerd[1470]: time="2026-01-17T00:41:00.003814033Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jan 17 00:41:00.021056 containerd[1470]: time="2026-01-17T00:41:00.020157120Z" level=info msg="CreateContainer within sandbox \"df5f12176247337cd16f8d0570dee88f211c9b03d42949c4e56a1d51c2f26beb\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 17 00:41:00.032975 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-320e020c86776f11041f34dcd13acc12cbaa5bd29c3295e5a95c18635f61da35-rootfs.mount: Deactivated successfully. 
Jan 17 00:41:00.114681 containerd[1470]: time="2026-01-17T00:41:00.110651840Z" level=info msg="CreateContainer within sandbox \"df5f12176247337cd16f8d0570dee88f211c9b03d42949c4e56a1d51c2f26beb\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"48d07c6260cefe866d1fa9715a3a5e913653e48c44ea954639134ef779431ac6\"" Jan 17 00:41:00.114681 containerd[1470]: time="2026-01-17T00:41:00.113754445Z" level=info msg="StartContainer for \"48d07c6260cefe866d1fa9715a3a5e913653e48c44ea954639134ef779431ac6\"" Jan 17 00:41:00.234071 systemd[1]: Started cri-containerd-48d07c6260cefe866d1fa9715a3a5e913653e48c44ea954639134ef779431ac6.scope - libcontainer container 48d07c6260cefe866d1fa9715a3a5e913653e48c44ea954639134ef779431ac6. Jan 17 00:41:00.272618 kubelet[2585]: E0117 00:41:00.268806 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:41:00.339623 containerd[1470]: time="2026-01-17T00:41:00.336035101Z" level=info msg="CreateContainer within sandbox \"74bcea9deac53f372d67f59381eff1ba3ce4c35837e23ac66c5165d3e8df079f\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 17 00:41:00.464400 containerd[1470]: time="2026-01-17T00:41:00.463388713Z" level=info msg="StartContainer for \"48d07c6260cefe866d1fa9715a3a5e913653e48c44ea954639134ef779431ac6\" returns successfully" Jan 17 00:41:00.497641 containerd[1470]: time="2026-01-17T00:41:00.496695192Z" level=info msg="CreateContainer within sandbox \"74bcea9deac53f372d67f59381eff1ba3ce4c35837e23ac66c5165d3e8df079f\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"2f27b31bc6487c1ec7aa8135f597dd436a501873c0d58ea98ff19860615b1ba9\"" Jan 17 00:41:00.507691 containerd[1470]: time="2026-01-17T00:41:00.506296607Z" level=info msg="StartContainer for \"2f27b31bc6487c1ec7aa8135f597dd436a501873c0d58ea98ff19860615b1ba9\"" Jan 17 00:41:00.625770 systemd[1]: Started cri-containerd-2f27b31bc6487c1ec7aa8135f597dd436a501873c0d58ea98ff19860615b1ba9.scope - libcontainer container 2f27b31bc6487c1ec7aa8135f597dd436a501873c0d58ea98ff19860615b1ba9. 
Jan 17 00:41:00.940799 containerd[1470]: time="2026-01-17T00:41:00.940664876Z" level=info msg="StartContainer for \"2f27b31bc6487c1ec7aa8135f597dd436a501873c0d58ea98ff19860615b1ba9\" returns successfully" Jan 17 00:41:01.346168 kubelet[2585]: E0117 00:41:01.345923 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:41:01.753585 kubelet[2585]: I0117 00:41:01.751839 2585 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 17 00:41:01.865820 kubelet[2585]: I0117 00:41:01.865751 2585 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-2wk75" podStartSLOduration=2.913067249 podStartE2EDuration="51.865673723s" podCreationTimestamp="2026-01-17 00:40:10 +0000 UTC" firstStartedPulling="2026-01-17 00:40:11.052561659 +0000 UTC m=+4.806375193" lastFinishedPulling="2026-01-17 00:41:00.005168113 +0000 UTC m=+53.758981667" observedRunningTime="2026-01-17 00:41:01.460469651 +0000 UTC m=+55.214283205" watchObservedRunningTime="2026-01-17 00:41:01.865673723 +0000 UTC m=+55.619487257" Jan 17 00:41:01.942666 systemd[1]: Created slice kubepods-burstable-pod7ebb7853_cf88_4b45_97a5_9f29126dd3d3.slice - libcontainer container kubepods-burstable-pod7ebb7853_cf88_4b45_97a5_9f29126dd3d3.slice. Jan 17 00:41:01.972855 systemd[1]: Created slice kubepods-burstable-pod6a4e5e92_feb5_4343_8d71_d9530d48db77.slice - libcontainer container kubepods-burstable-pod6a4e5e92_feb5_4343_8d71_d9530d48db77.slice. Jan 17 00:41:02.110049 kubelet[2585]: I0117 00:41:02.108880 2585 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6a4e5e92-feb5-4343-8d71-d9530d48db77-config-volume\") pod \"coredns-674b8bbfcf-5jg4n\" (UID: \"6a4e5e92-feb5-4343-8d71-d9530d48db77\") " pod="kube-system/coredns-674b8bbfcf-5jg4n" Jan 17 00:41:02.110049 kubelet[2585]: I0117 00:41:02.108966 2585 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7ebb7853-cf88-4b45-97a5-9f29126dd3d3-config-volume\") pod \"coredns-674b8bbfcf-5ncb6\" (UID: \"7ebb7853-cf88-4b45-97a5-9f29126dd3d3\") " pod="kube-system/coredns-674b8bbfcf-5ncb6" Jan 17 00:41:02.110049 kubelet[2585]: I0117 00:41:02.108991 2585 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wn669\" (UniqueName: \"kubernetes.io/projected/6a4e5e92-feb5-4343-8d71-d9530d48db77-kube-api-access-wn669\") pod \"coredns-674b8bbfcf-5jg4n\" (UID: \"6a4e5e92-feb5-4343-8d71-d9530d48db77\") " pod="kube-system/coredns-674b8bbfcf-5jg4n" Jan 17 00:41:02.110049 kubelet[2585]: I0117 00:41:02.109018 2585 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2frv\" (UniqueName: \"kubernetes.io/projected/7ebb7853-cf88-4b45-97a5-9f29126dd3d3-kube-api-access-t2frv\") pod \"coredns-674b8bbfcf-5ncb6\" (UID: \"7ebb7853-cf88-4b45-97a5-9f29126dd3d3\") " pod="kube-system/coredns-674b8bbfcf-5ncb6" Jan 17 00:41:02.250596 kubelet[2585]: E0117 00:41:02.250412 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:41:02.263860 containerd[1470]: time="2026-01-17T00:41:02.263727616Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-5ncb6,Uid:7ebb7853-cf88-4b45-97a5-9f29126dd3d3,Namespace:kube-system,Attempt:0,}" Jan 17 00:41:02.280115 kubelet[2585]: E0117 00:41:02.279944 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:41:02.282423 containerd[1470]: time="2026-01-17T00:41:02.281326246Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-5jg4n,Uid:6a4e5e92-feb5-4343-8d71-d9530d48db77,Namespace:kube-system,Attempt:0,}" Jan 17 00:41:02.351852 kubelet[2585]: E0117 00:41:02.348921 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:41:02.351852 kubelet[2585]: E0117 00:41:02.349726 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:41:02.609692 kubelet[2585]: I0117 00:41:02.609295 2585 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-k7tj9" podStartSLOduration=7.445175834 podStartE2EDuration="52.60927246s" podCreationTimestamp="2026-01-17 00:40:10 +0000 UTC" firstStartedPulling="2026-01-17 00:40:10.800873174 +0000 UTC m=+4.554686708" lastFinishedPulling="2026-01-17 00:40:55.964969771 +0000 UTC m=+49.718783334" observedRunningTime="2026-01-17 00:41:02.598804379 +0000 UTC m=+56.352617934" watchObservedRunningTime="2026-01-17 00:41:02.60927246 +0000 UTC m=+56.363086004" Jan 17 00:41:03.373576 kubelet[2585]: E0117 00:41:03.372137 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:41:03.867393 systemd[1]: run-containerd-runc-k8s.io-2f27b31bc6487c1ec7aa8135f597dd436a501873c0d58ea98ff19860615b1ba9-runc.f2rLZR.mount: Deactivated successfully. 
Jan 17 00:41:04.384142 kubelet[2585]: E0117 00:41:04.383715 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:41:05.295901 systemd-networkd[1391]: cilium_host: Link UP Jan 17 00:41:05.296703 systemd-networkd[1391]: cilium_net: Link UP Jan 17 00:41:05.296714 systemd-networkd[1391]: cilium_net: Gained carrier Jan 17 00:41:05.297019 systemd-networkd[1391]: cilium_host: Gained carrier Jan 17 00:41:05.307849 systemd-networkd[1391]: cilium_host: Gained IPv6LL Jan 17 00:41:05.479934 systemd-networkd[1391]: cilium_net: Gained IPv6LL Jan 17 00:41:05.758961 systemd-networkd[1391]: cilium_vxlan: Link UP Jan 17 00:41:05.758971 systemd-networkd[1391]: cilium_vxlan: Gained carrier Jan 17 00:41:06.577294 kernel: NET: Registered PF_ALG protocol family Jan 17 00:41:07.708119 systemd-networkd[1391]: cilium_vxlan: Gained IPv6LL Jan 17 00:41:08.523979 kubelet[2585]: E0117 00:41:08.523891 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:41:08.530395 systemd-networkd[1391]: lxc_health: Link UP Jan 17 00:41:08.570168 systemd-networkd[1391]: lxc_health: Gained carrier Jan 17 00:41:08.849589 systemd[1]: run-containerd-runc-k8s.io-2f27b31bc6487c1ec7aa8135f597dd436a501873c0d58ea98ff19860615b1ba9-runc.3XKtY4.mount: Deactivated successfully. Jan 17 00:41:09.517436 kubelet[2585]: E0117 00:41:09.508352 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:41:09.513890 systemd-networkd[1391]: lxcf93c8ec28666: Link UP Jan 17 00:41:09.619855 kernel: eth0: renamed from tmpc31ea Jan 17 00:41:09.650776 systemd-networkd[1391]: lxc19ff9d33ad98: Link UP Jan 17 00:41:09.655335 kernel: eth0: renamed from tmpcfca9 Jan 17 00:41:09.655356 systemd-networkd[1391]: lxcf93c8ec28666: Gained carrier Jan 17 00:41:09.667429 systemd-networkd[1391]: lxc19ff9d33ad98: Gained carrier Jan 17 00:41:10.648054 systemd-networkd[1391]: lxc_health: Gained IPv6LL Jan 17 00:41:11.223785 systemd-networkd[1391]: lxc19ff9d33ad98: Gained IPv6LL Jan 17 00:41:11.243405 systemd[1]: run-containerd-runc-k8s.io-2f27b31bc6487c1ec7aa8135f597dd436a501873c0d58ea98ff19860615b1ba9-runc.86GkT6.mount: Deactivated successfully. Jan 17 00:41:11.739087 systemd-networkd[1391]: lxcf93c8ec28666: Gained IPv6LL Jan 17 00:41:17.735356 kubelet[2585]: E0117 00:41:17.735167 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:41:17.984725 sudo[1637]: pam_unix(sudo:session): session closed for user root Jan 17 00:41:17.999106 sshd[1632]: pam_unix(sshd:session): session closed for user core Jan 17 00:41:18.011893 systemd[1]: sshd@6-10.0.0.107:22-10.0.0.1:39040.service: Deactivated successfully. Jan 17 00:41:18.020562 systemd[1]: session-7.scope: Deactivated successfully. Jan 17 00:41:18.020904 systemd[1]: session-7.scope: Consumed 17.853s CPU time, 165.2M memory peak, 0B memory swap peak. Jan 17 00:41:18.025987 systemd-logind[1452]: Session 7 logged out. Waiting for processes to exit. Jan 17 00:41:18.031574 systemd-logind[1452]: Removed session 7. 
Jan 17 00:41:24.179535 containerd[1470]: time="2026-01-17T00:41:24.178875533Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:41:24.179535 containerd[1470]: time="2026-01-17T00:41:24.178984335Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:41:24.179535 containerd[1470]: time="2026-01-17T00:41:24.179008140Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:41:24.179535 containerd[1470]: time="2026-01-17T00:41:24.179145646Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:41:24.194622 containerd[1470]: time="2026-01-17T00:41:24.193726353Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:41:24.194622 containerd[1470]: time="2026-01-17T00:41:24.193809007Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:41:24.194622 containerd[1470]: time="2026-01-17T00:41:24.193850986Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:41:24.194622 containerd[1470]: time="2026-01-17T00:41:24.194021573Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:41:24.415968 systemd[1]: Started cri-containerd-c31ea552686f3e276a3f5a6a5dc8ae943819ac3b22cb8efd32d83a1eea4915ed.scope - libcontainer container c31ea552686f3e276a3f5a6a5dc8ae943819ac3b22cb8efd32d83a1eea4915ed. Jan 17 00:41:24.420472 systemd[1]: Started cri-containerd-cfca97be089f8bda894365df82475a797142fb9ed64fcb883e32606ddd78f52c.scope - libcontainer container cfca97be089f8bda894365df82475a797142fb9ed64fcb883e32606ddd78f52c. 
Jan 17 00:41:24.489493 systemd-resolved[1333]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 17 00:41:24.500360 systemd-resolved[1333]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 17 00:41:24.603815 containerd[1470]: time="2026-01-17T00:41:24.591271274Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-5ncb6,Uid:7ebb7853-cf88-4b45-97a5-9f29126dd3d3,Namespace:kube-system,Attempt:0,} returns sandbox id \"cfca97be089f8bda894365df82475a797142fb9ed64fcb883e32606ddd78f52c\"" Jan 17 00:41:24.609784 kubelet[2585]: E0117 00:41:24.605901 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:41:24.610621 containerd[1470]: time="2026-01-17T00:41:24.610491496Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-5jg4n,Uid:6a4e5e92-feb5-4343-8d71-d9530d48db77,Namespace:kube-system,Attempt:0,} returns sandbox id \"c31ea552686f3e276a3f5a6a5dc8ae943819ac3b22cb8efd32d83a1eea4915ed\"" Jan 17 00:41:24.616960 kubelet[2585]: E0117 00:41:24.616851 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:41:24.635571 containerd[1470]: time="2026-01-17T00:41:24.635334375Z" level=info msg="CreateContainer within sandbox \"cfca97be089f8bda894365df82475a797142fb9ed64fcb883e32606ddd78f52c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 17 00:41:24.667508 containerd[1470]: time="2026-01-17T00:41:24.664826841Z" level=info msg="CreateContainer within sandbox \"c31ea552686f3e276a3f5a6a5dc8ae943819ac3b22cb8efd32d83a1eea4915ed\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 17 00:41:24.844400 containerd[1470]: time="2026-01-17T00:41:24.842648941Z" level=info msg="CreateContainer within sandbox \"cfca97be089f8bda894365df82475a797142fb9ed64fcb883e32606ddd78f52c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"77139773895060e403929fb6d6e5bdded07fa61da8401c956fb1f168fa1755ee\"" Jan 17 00:41:24.867499 containerd[1470]: time="2026-01-17T00:41:24.867445444Z" level=info msg="StartContainer for \"77139773895060e403929fb6d6e5bdded07fa61da8401c956fb1f168fa1755ee\"" Jan 17 00:41:24.877265 containerd[1470]: time="2026-01-17T00:41:24.877181494Z" level=info msg="CreateContainer within sandbox \"c31ea552686f3e276a3f5a6a5dc8ae943819ac3b22cb8efd32d83a1eea4915ed\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"24e724a0e769b8c282c50c5edf08d687b2ce38ba9c5cce157f1f8c0d1fd77c9c\"" Jan 17 00:41:24.878841 containerd[1470]: time="2026-01-17T00:41:24.878741030Z" level=info msg="StartContainer for \"24e724a0e769b8c282c50c5edf08d687b2ce38ba9c5cce157f1f8c0d1fd77c9c\"" Jan 17 00:41:24.990265 systemd[1]: Started cri-containerd-77139773895060e403929fb6d6e5bdded07fa61da8401c956fb1f168fa1755ee.scope - libcontainer container 77139773895060e403929fb6d6e5bdded07fa61da8401c956fb1f168fa1755ee. Jan 17 00:41:25.009931 systemd[1]: Started cri-containerd-24e724a0e769b8c282c50c5edf08d687b2ce38ba9c5cce157f1f8c0d1fd77c9c.scope - libcontainer container 24e724a0e769b8c282c50c5edf08d687b2ce38ba9c5cce157f1f8c0d1fd77c9c. Jan 17 00:41:25.237767 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4233062238.mount: Deactivated successfully. 
Jan 17 00:41:25.280025 containerd[1470]: time="2026-01-17T00:41:25.279866363Z" level=info msg="StartContainer for \"77139773895060e403929fb6d6e5bdded07fa61da8401c956fb1f168fa1755ee\" returns successfully" Jan 17 00:41:25.434609 kubelet[2585]: E0117 00:41:25.428070 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:41:25.517599 containerd[1470]: time="2026-01-17T00:41:25.516445467Z" level=info msg="StartContainer for \"24e724a0e769b8c282c50c5edf08d687b2ce38ba9c5cce157f1f8c0d1fd77c9c\" returns successfully" Jan 17 00:41:26.563731 kubelet[2585]: E0117 00:41:26.562568 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:41:26.563731 kubelet[2585]: E0117 00:41:26.562726 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:41:26.616753 kubelet[2585]: I0117 00:41:26.616653 2585 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-5ncb6" podStartSLOduration=76.616633779 podStartE2EDuration="1m16.616633779s" podCreationTimestamp="2026-01-17 00:40:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:41:25.53084176 +0000 UTC m=+79.284655325" watchObservedRunningTime="2026-01-17 00:41:26.616633779 +0000 UTC m=+80.370447312" Jan 17 00:41:26.656533 kubelet[2585]: I0117 00:41:26.655614 2585 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-5jg4n" podStartSLOduration=76.655592238 podStartE2EDuration="1m16.655592238s" podCreationTimestamp="2026-01-17 00:40:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:41:26.616047884 +0000 UTC m=+80.369861448" watchObservedRunningTime="2026-01-17 00:41:26.655592238 +0000 UTC m=+80.409405782" Jan 17 00:41:27.574276 kubelet[2585]: E0117 00:41:27.573611 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:41:27.574276 kubelet[2585]: E0117 00:41:27.574033 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:41:28.574950 kubelet[2585]: E0117 00:41:28.574819 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:41:30.758883 kubelet[2585]: E0117 00:41:30.758784 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:41:33.732842 kubelet[2585]: E0117 00:41:33.732760 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:41:37.740572 kubelet[2585]: E0117 00:41:37.739499 2585 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:41:55.505851 update_engine[1460]: I20260117 00:41:55.497715 1460 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jan 17 00:41:55.505851 update_engine[1460]: I20260117 00:41:55.501670 1460 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jan 17 00:41:55.505851 update_engine[1460]: I20260117 00:41:55.502126 1460 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jan 17 00:41:55.505851 update_engine[1460]: I20260117 00:41:55.503049 1460 omaha_request_params.cc:62] Current group set to lts Jan 17 00:41:55.505851 update_engine[1460]: I20260117 00:41:55.503438 1460 update_attempter.cc:499] Already updated boot flags. Skipping. Jan 17 00:41:55.505851 update_engine[1460]: I20260117 00:41:55.503460 1460 update_attempter.cc:643] Scheduling an action processor start. Jan 17 00:41:55.505851 update_engine[1460]: I20260117 00:41:55.503494 1460 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 17 00:41:55.505851 update_engine[1460]: I20260117 00:41:55.503652 1460 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jan 17 00:41:55.505851 update_engine[1460]: I20260117 00:41:55.503788 1460 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 17 00:41:55.505851 update_engine[1460]: I20260117 00:41:55.503808 1460 omaha_request_action.cc:272] Request: Jan 17 00:41:55.505851 update_engine[1460]: Jan 17 00:41:55.505851 update_engine[1460]: Jan 17 00:41:55.505851 update_engine[1460]: Jan 17 00:41:55.505851 update_engine[1460]: Jan 17 00:41:55.505851 update_engine[1460]: Jan 17 00:41:55.505851 update_engine[1460]: Jan 17 00:41:55.505851 update_engine[1460]: Jan 17 00:41:55.505851 update_engine[1460]: Jan 17 00:41:55.505851 update_engine[1460]: I20260117 00:41:55.503821 1460 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 17 00:41:55.523682 locksmithd[1497]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jan 17 00:41:55.529030 update_engine[1460]: I20260117 00:41:55.528834 1460 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 17 00:41:55.535643 update_engine[1460]: I20260117 00:41:55.535356 1460 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 17 00:41:55.552492 update_engine[1460]: E20260117 00:41:55.550999 1460 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 17 00:41:55.552492 update_engine[1460]: I20260117 00:41:55.551985 1460 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jan 17 00:42:05.486943 update_engine[1460]: I20260117 00:42:05.485957 1460 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 17 00:42:05.486943 update_engine[1460]: I20260117 00:42:05.486501 1460 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 17 00:42:05.486943 update_engine[1460]: I20260117 00:42:05.486768 1460 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 17 00:42:05.502503 update_engine[1460]: E20260117 00:42:05.502175 1460 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 17 00:42:05.502503 update_engine[1460]: I20260117 00:42:05.502420 1460 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jan 17 00:42:06.789142 kubelet[2585]: E0117 00:42:06.789066 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:42:14.737270 kubelet[2585]: E0117 00:42:14.734735 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:42:15.489788 update_engine[1460]: I20260117 00:42:15.488718 1460 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 17 00:42:15.489788 update_engine[1460]: I20260117 00:42:15.489121 1460 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 17 00:42:15.490714 update_engine[1460]: I20260117 00:42:15.490402 1460 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 17 00:42:15.506440 update_engine[1460]: E20260117 00:42:15.505595 1460 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 17 00:42:15.506440 update_engine[1460]: I20260117 00:42:15.505736 1460 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jan 17 00:42:25.497098 update_engine[1460]: I20260117 00:42:25.491633 1460 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 17 00:42:25.513063 update_engine[1460]: I20260117 00:42:25.499046 1460 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 17 00:42:25.513063 update_engine[1460]: I20260117 00:42:25.503259 1460 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 17 00:42:25.531088 update_engine[1460]: E20260117 00:42:25.526902 1460 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 17 00:42:25.531088 update_engine[1460]: I20260117 00:42:25.527030 1460 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 17 00:42:25.531088 update_engine[1460]: I20260117 00:42:25.527050 1460 omaha_request_action.cc:617] Omaha request response: Jan 17 00:42:25.531088 update_engine[1460]: E20260117 00:42:25.527997 1460 omaha_request_action.cc:636] Omaha request network transfer failed. Jan 17 00:42:25.531088 update_engine[1460]: I20260117 00:42:25.528460 1460 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Jan 17 00:42:25.531088 update_engine[1460]: I20260117 00:42:25.528481 1460 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 17 00:42:25.531088 update_engine[1460]: I20260117 00:42:25.528494 1460 update_attempter.cc:306] Processing Done. Jan 17 00:42:25.531088 update_engine[1460]: E20260117 00:42:25.528519 1460 update_attempter.cc:619] Update failed. 
Jan 17 00:42:25.531088 update_engine[1460]: I20260117 00:42:25.528532 1460 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Jan 17 00:42:25.531088 update_engine[1460]: I20260117 00:42:25.528544 1460 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Jan 17 00:42:25.531088 update_engine[1460]: I20260117 00:42:25.528558 1460 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Jan 17 00:42:25.531088 update_engine[1460]: I20260117 00:42:25.528705 1460 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 17 00:42:25.531088 update_engine[1460]: I20260117 00:42:25.528745 1460 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 17 00:42:25.531088 update_engine[1460]: I20260117 00:42:25.528761 1460 omaha_request_action.cc:272] Request: Jan 17 00:42:25.531088 update_engine[1460]: Jan 17 00:42:25.531088 update_engine[1460]: Jan 17 00:42:25.544548 update_engine[1460]: Jan 17 00:42:25.544548 update_engine[1460]: Jan 17 00:42:25.544548 update_engine[1460]: Jan 17 00:42:25.544548 update_engine[1460]: Jan 17 00:42:25.544548 update_engine[1460]: I20260117 00:42:25.528774 1460 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 17 00:42:25.544548 update_engine[1460]: I20260117 00:42:25.529103 1460 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 17 00:42:25.544548 update_engine[1460]: I20260117 00:42:25.543841 1460 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 17 00:42:25.544800 locksmithd[1497]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Jan 17 00:42:25.584848 update_engine[1460]: E20260117 00:42:25.574828 1460 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 17 00:42:25.584848 update_engine[1460]: I20260117 00:42:25.574941 1460 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 17 00:42:25.584848 update_engine[1460]: I20260117 00:42:25.574954 1460 omaha_request_action.cc:617] Omaha request response: Jan 17 00:42:25.584848 update_engine[1460]: I20260117 00:42:25.574968 1460 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 17 00:42:25.584848 update_engine[1460]: I20260117 00:42:25.574977 1460 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 17 00:42:25.584848 update_engine[1460]: I20260117 00:42:25.574987 1460 update_attempter.cc:306] Processing Done. Jan 17 00:42:25.584848 update_engine[1460]: I20260117 00:42:25.574999 1460 update_attempter.cc:310] Error event sent. Jan 17 00:42:25.584848 update_engine[1460]: I20260117 00:42:25.575016 1460 update_check_scheduler.cc:74] Next update check in 49m3s Jan 17 00:42:25.589000 locksmithd[1497]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Jan 17 00:42:29.322701 systemd[1]: Started sshd@7-10.0.0.107:22-10.0.0.1:49206.service - OpenSSH per-connection server daemon (10.0.0.1:49206). Jan 17 00:42:29.431547 sshd[4115]: Accepted publickey for core from 10.0.0.1 port 49206 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:42:29.434793 sshd[4115]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:42:29.446788 systemd-logind[1452]: New session 8 of user core. 
Jan 17 00:42:29.461319 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 17 00:42:29.768721 sshd[4115]: pam_unix(sshd:session): session closed for user core Jan 17 00:42:29.773151 systemd[1]: sshd@7-10.0.0.107:22-10.0.0.1:49206.service: Deactivated successfully. Jan 17 00:42:29.776640 systemd[1]: session-8.scope: Deactivated successfully. Jan 17 00:42:29.783168 systemd-logind[1452]: Session 8 logged out. Waiting for processes to exit. Jan 17 00:42:29.786150 systemd-logind[1452]: Removed session 8. Jan 17 00:42:34.827726 systemd[1]: Started sshd@8-10.0.0.107:22-10.0.0.1:32790.service - OpenSSH per-connection server daemon (10.0.0.1:32790). Jan 17 00:42:34.923661 sshd[4132]: Accepted publickey for core from 10.0.0.1 port 32790 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:42:34.928871 sshd[4132]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:42:34.949080 systemd-logind[1452]: New session 9 of user core. Jan 17 00:42:34.962670 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 17 00:42:35.299675 sshd[4132]: pam_unix(sshd:session): session closed for user core Jan 17 00:42:35.309666 systemd[1]: sshd@8-10.0.0.107:22-10.0.0.1:32790.service: Deactivated successfully. Jan 17 00:42:35.317011 systemd[1]: session-9.scope: Deactivated successfully. Jan 17 00:42:35.325039 systemd-logind[1452]: Session 9 logged out. Waiting for processes to exit. Jan 17 00:42:35.330489 systemd-logind[1452]: Removed session 9. Jan 17 00:42:39.734716 kubelet[2585]: E0117 00:42:39.733283 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:42:39.744061 kubelet[2585]: E0117 00:42:39.740038 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:42:40.357065 systemd[1]: Started sshd@9-10.0.0.107:22-10.0.0.1:32792.service - OpenSSH per-connection server daemon (10.0.0.1:32792). Jan 17 00:42:40.559708 sshd[4148]: Accepted publickey for core from 10.0.0.1 port 32792 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:42:40.583666 sshd[4148]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:42:40.637056 systemd-logind[1452]: New session 10 of user core. Jan 17 00:42:40.685909 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 17 00:42:41.200130 sshd[4148]: pam_unix(sshd:session): session closed for user core Jan 17 00:42:41.224616 systemd[1]: sshd@9-10.0.0.107:22-10.0.0.1:32792.service: Deactivated successfully. Jan 17 00:42:41.233741 systemd[1]: session-10.scope: Deactivated successfully. Jan 17 00:42:41.239004 systemd-logind[1452]: Session 10 logged out. Waiting for processes to exit. Jan 17 00:42:41.244995 systemd-logind[1452]: Removed session 10. Jan 17 00:42:42.737279 kubelet[2585]: E0117 00:42:42.734415 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:42:46.282978 systemd[1]: Started sshd@10-10.0.0.107:22-10.0.0.1:34460.service - OpenSSH per-connection server daemon (10.0.0.1:34460). 
Jan 17 00:42:46.378189 sshd[4165]: Accepted publickey for core from 10.0.0.1 port 34460 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:42:46.383600 sshd[4165]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:42:46.395104 systemd-logind[1452]: New session 11 of user core. Jan 17 00:42:46.420962 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 17 00:42:46.806740 sshd[4165]: pam_unix(sshd:session): session closed for user core Jan 17 00:42:46.831021 systemd[1]: sshd@10-10.0.0.107:22-10.0.0.1:34460.service: Deactivated successfully. Jan 17 00:42:46.838049 systemd[1]: session-11.scope: Deactivated successfully. Jan 17 00:42:46.849580 systemd-logind[1452]: Session 11 logged out. Waiting for processes to exit. Jan 17 00:42:46.858591 systemd-logind[1452]: Removed session 11. Jan 17 00:42:50.735410 kubelet[2585]: E0117 00:42:50.734588 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:42:51.860160 systemd[1]: Started sshd@11-10.0.0.107:22-10.0.0.1:34474.service - OpenSSH per-connection server daemon (10.0.0.1:34474). Jan 17 00:42:52.002536 sshd[4180]: Accepted publickey for core from 10.0.0.1 port 34474 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:42:52.005062 sshd[4180]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:42:52.047994 systemd-logind[1452]: New session 12 of user core. Jan 17 00:42:52.054517 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 17 00:42:52.398525 sshd[4180]: pam_unix(sshd:session): session closed for user core Jan 17 00:42:52.405840 systemd[1]: sshd@11-10.0.0.107:22-10.0.0.1:34474.service: Deactivated successfully. Jan 17 00:42:52.410572 systemd[1]: session-12.scope: Deactivated successfully. Jan 17 00:42:52.423470 systemd-logind[1452]: Session 12 logged out. Waiting for processes to exit. Jan 17 00:42:52.429495 systemd-logind[1452]: Removed session 12. Jan 17 00:42:52.738364 kubelet[2585]: E0117 00:42:52.733955 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:42:55.745773 kubelet[2585]: E0117 00:42:55.744356 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:42:57.471537 systemd[1]: Started sshd@12-10.0.0.107:22-10.0.0.1:60336.service - OpenSSH per-connection server daemon (10.0.0.1:60336). Jan 17 00:42:57.652765 sshd[4195]: Accepted publickey for core from 10.0.0.1 port 60336 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:42:57.662814 sshd[4195]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:42:57.694498 systemd-logind[1452]: New session 13 of user core. Jan 17 00:42:57.720907 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 17 00:42:58.352985 sshd[4195]: pam_unix(sshd:session): session closed for user core Jan 17 00:42:58.368131 systemd[1]: sshd@12-10.0.0.107:22-10.0.0.1:60336.service: Deactivated successfully. Jan 17 00:42:58.370853 systemd[1]: session-13.scope: Deactivated successfully. Jan 17 00:42:58.389899 systemd-logind[1452]: Session 13 logged out. Waiting for processes to exit. 
Jan 17 00:42:58.398089 systemd-logind[1452]: Removed session 13. Jan 17 00:43:03.431047 systemd[1]: Started sshd@13-10.0.0.107:22-10.0.0.1:48270.service - OpenSSH per-connection server daemon (10.0.0.1:48270). Jan 17 00:43:03.596430 sshd[4210]: Accepted publickey for core from 10.0.0.1 port 48270 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:43:03.605047 sshd[4210]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:43:03.625617 systemd-logind[1452]: New session 14 of user core. Jan 17 00:43:03.640625 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 17 00:43:04.208921 sshd[4210]: pam_unix(sshd:session): session closed for user core Jan 17 00:43:04.226118 systemd[1]: sshd@13-10.0.0.107:22-10.0.0.1:48270.service: Deactivated successfully. Jan 17 00:43:04.246906 systemd[1]: session-14.scope: Deactivated successfully. Jan 17 00:43:04.261908 systemd-logind[1452]: Session 14 logged out. Waiting for processes to exit. Jan 17 00:43:04.264533 systemd-logind[1452]: Removed session 14. Jan 17 00:43:09.254366 systemd[1]: Started sshd@14-10.0.0.107:22-10.0.0.1:48278.service - OpenSSH per-connection server daemon (10.0.0.1:48278). Jan 17 00:43:09.415904 sshd[4227]: Accepted publickey for core from 10.0.0.1 port 48278 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:43:09.423690 sshd[4227]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:43:09.443497 systemd-logind[1452]: New session 15 of user core. Jan 17 00:43:09.460379 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 17 00:43:10.005999 sshd[4227]: pam_unix(sshd:session): session closed for user core Jan 17 00:43:10.013798 systemd[1]: sshd@14-10.0.0.107:22-10.0.0.1:48278.service: Deactivated successfully. Jan 17 00:43:10.025766 systemd[1]: session-15.scope: Deactivated successfully. Jan 17 00:43:10.051043 systemd-logind[1452]: Session 15 logged out. Waiting for processes to exit. Jan 17 00:43:10.062435 systemd-logind[1452]: Removed session 15. Jan 17 00:43:15.046620 systemd[1]: Started sshd@15-10.0.0.107:22-10.0.0.1:52760.service - OpenSSH per-connection server daemon (10.0.0.1:52760). Jan 17 00:43:15.176095 sshd[4246]: Accepted publickey for core from 10.0.0.1 port 52760 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:43:15.179641 sshd[4246]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:43:15.218378 systemd-logind[1452]: New session 16 of user core. Jan 17 00:43:15.229700 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 17 00:43:15.565695 sshd[4246]: pam_unix(sshd:session): session closed for user core Jan 17 00:43:15.584778 systemd[1]: sshd@15-10.0.0.107:22-10.0.0.1:52760.service: Deactivated successfully. Jan 17 00:43:15.591700 systemd[1]: session-16.scope: Deactivated successfully. Jan 17 00:43:15.598389 systemd-logind[1452]: Session 16 logged out. Waiting for processes to exit. Jan 17 00:43:15.636815 systemd[1]: Started sshd@16-10.0.0.107:22-10.0.0.1:52768.service - OpenSSH per-connection server daemon (10.0.0.1:52768). Jan 17 00:43:15.640543 systemd-logind[1452]: Removed session 16. 
Jan 17 00:43:15.711055 sshd[4265]: Accepted publickey for core from 10.0.0.1 port 52768 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:43:15.714542 sshd[4265]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:43:15.736352 systemd-logind[1452]: New session 17 of user core. Jan 17 00:43:15.749385 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 17 00:43:16.098531 sshd[4265]: pam_unix(sshd:session): session closed for user core Jan 17 00:43:16.120784 systemd[1]: sshd@16-10.0.0.107:22-10.0.0.1:52768.service: Deactivated successfully. Jan 17 00:43:16.126184 systemd[1]: session-17.scope: Deactivated successfully. Jan 17 00:43:16.130471 systemd-logind[1452]: Session 17 logged out. Waiting for processes to exit. Jan 17 00:43:16.149071 systemd[1]: Started sshd@17-10.0.0.107:22-10.0.0.1:52778.service - OpenSSH per-connection server daemon (10.0.0.1:52778). Jan 17 00:43:16.152077 systemd-logind[1452]: Removed session 17. Jan 17 00:43:16.223945 sshd[4277]: Accepted publickey for core from 10.0.0.1 port 52778 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:43:16.226681 sshd[4277]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:43:16.237351 systemd-logind[1452]: New session 18 of user core. Jan 17 00:43:16.248721 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 17 00:43:16.576489 sshd[4277]: pam_unix(sshd:session): session closed for user core Jan 17 00:43:16.582032 systemd[1]: sshd@17-10.0.0.107:22-10.0.0.1:52778.service: Deactivated successfully. Jan 17 00:43:16.587629 systemd[1]: session-18.scope: Deactivated successfully. Jan 17 00:43:16.592578 systemd-logind[1452]: Session 18 logged out. Waiting for processes to exit. Jan 17 00:43:16.594896 systemd-logind[1452]: Removed session 18. Jan 17 00:43:21.678174 systemd[1]: Started sshd@18-10.0.0.107:22-10.0.0.1:52788.service - OpenSSH per-connection server daemon (10.0.0.1:52788). Jan 17 00:43:21.872593 sshd[4291]: Accepted publickey for core from 10.0.0.1 port 52788 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:43:21.883066 sshd[4291]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:43:21.908769 systemd-logind[1452]: New session 19 of user core. Jan 17 00:43:21.930377 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 17 00:43:22.218584 sshd[4291]: pam_unix(sshd:session): session closed for user core Jan 17 00:43:22.231047 systemd[1]: sshd@18-10.0.0.107:22-10.0.0.1:52788.service: Deactivated successfully. Jan 17 00:43:22.241895 systemd[1]: session-19.scope: Deactivated successfully. Jan 17 00:43:22.259463 systemd-logind[1452]: Session 19 logged out. Waiting for processes to exit. Jan 17 00:43:22.264667 systemd-logind[1452]: Removed session 19. Jan 17 00:43:24.735403 kubelet[2585]: E0117 00:43:24.734791 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:43:31.140790 systemd[1]: Started sshd@19-10.0.0.107:22-10.0.0.1:48852.service - OpenSSH per-connection server daemon (10.0.0.1:48852). 
Jan 17 00:43:32.719501 kubelet[2585]: E0117 00:43:32.719409 2585 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.602s" Jan 17 00:43:32.787247 kubelet[2585]: E0117 00:43:32.785345 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:43:32.788543 sshd[4306]: Accepted publickey for core from 10.0.0.1 port 48852 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:43:32.826170 sshd[4306]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:43:32.895446 systemd-logind[1452]: New session 20 of user core. Jan 17 00:43:32.904540 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 17 00:43:48.084957 kubelet[2585]: E0117 00:43:48.082756 2585 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.97s" Jan 17 00:43:48.124697 systemd[1]: cri-containerd-aca19e4bedba90e6d905c0b6d945ab3fb3577da5973f9ec63072f42bd2e3e64d.scope: Deactivated successfully. Jan 17 00:43:48.125511 systemd[1]: cri-containerd-aca19e4bedba90e6d905c0b6d945ab3fb3577da5973f9ec63072f42bd2e3e64d.scope: Consumed 9.959s CPU time, 16.5M memory peak, 0B memory swap peak. Jan 17 00:43:48.134861 systemd[1]: cri-containerd-48d07c6260cefe866d1fa9715a3a5e913653e48c44ea954639134ef779431ac6.scope: Deactivated successfully. Jan 17 00:43:48.135535 systemd[1]: cri-containerd-48d07c6260cefe866d1fa9715a3a5e913653e48c44ea954639134ef779431ac6.scope: Consumed 2.441s CPU time. Jan 17 00:43:48.394486 sshd[4306]: pam_unix(sshd:session): session closed for user core Jan 17 00:43:48.408268 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-aca19e4bedba90e6d905c0b6d945ab3fb3577da5973f9ec63072f42bd2e3e64d-rootfs.mount: Deactivated successfully. Jan 17 00:43:48.411787 systemd[1]: sshd@19-10.0.0.107:22-10.0.0.1:48852.service: Deactivated successfully. Jan 17 00:43:48.426643 systemd[1]: session-20.scope: Deactivated successfully. Jan 17 00:43:48.427725 systemd[1]: session-20.scope: Consumed 5.489s CPU time. Jan 17 00:43:48.444126 systemd-logind[1452]: Session 20 logged out. Waiting for processes to exit. Jan 17 00:43:48.502729 systemd-logind[1452]: Removed session 20. Jan 17 00:43:48.513485 containerd[1470]: time="2026-01-17T00:43:48.508032544Z" level=info msg="shim disconnected" id=aca19e4bedba90e6d905c0b6d945ab3fb3577da5973f9ec63072f42bd2e3e64d namespace=k8s.io Jan 17 00:43:48.518135 containerd[1470]: time="2026-01-17T00:43:48.515398358Z" level=warning msg="cleaning up after shim disconnected" id=aca19e4bedba90e6d905c0b6d945ab3fb3577da5973f9ec63072f42bd2e3e64d namespace=k8s.io Jan 17 00:43:48.518135 containerd[1470]: time="2026-01-17T00:43:48.515493044Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:43:48.561624 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-48d07c6260cefe866d1fa9715a3a5e913653e48c44ea954639134ef779431ac6-rootfs.mount: Deactivated successfully. 
Jan 17 00:43:48.592451 kubelet[2585]: E0117 00:43:48.589366 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:43:48.625040 containerd[1470]: time="2026-01-17T00:43:48.624931333Z" level=info msg="shim disconnected" id=48d07c6260cefe866d1fa9715a3a5e913653e48c44ea954639134ef779431ac6 namespace=k8s.io Jan 17 00:43:48.625803 containerd[1470]: time="2026-01-17T00:43:48.625546812Z" level=warning msg="cleaning up after shim disconnected" id=48d07c6260cefe866d1fa9715a3a5e913653e48c44ea954639134ef779431ac6 namespace=k8s.io Jan 17 00:43:48.626087 containerd[1470]: time="2026-01-17T00:43:48.625687013Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:43:49.094977 kubelet[2585]: I0117 00:43:49.094478 2585 scope.go:117] "RemoveContainer" containerID="aca19e4bedba90e6d905c0b6d945ab3fb3577da5973f9ec63072f42bd2e3e64d" Jan 17 00:43:49.094977 kubelet[2585]: E0117 00:43:49.094625 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:43:49.112431 kubelet[2585]: I0117 00:43:49.111470 2585 scope.go:117] "RemoveContainer" containerID="48d07c6260cefe866d1fa9715a3a5e913653e48c44ea954639134ef779431ac6" Jan 17 00:43:49.112431 kubelet[2585]: E0117 00:43:49.111594 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:43:49.114842 containerd[1470]: time="2026-01-17T00:43:49.114617919Z" level=info msg="CreateContainer within sandbox \"fbf23ad0c01a1056761e07755c839d6b917232caa9435b70448fbecde22e2161\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Jan 17 00:43:49.116755 containerd[1470]: time="2026-01-17T00:43:49.115170129Z" level=info msg="CreateContainer within sandbox \"df5f12176247337cd16f8d0570dee88f211c9b03d42949c4e56a1d51c2f26beb\" for container &ContainerMetadata{Name:cilium-operator,Attempt:1,}" Jan 17 00:43:49.326173 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1133007019.mount: Deactivated successfully. Jan 17 00:43:49.374097 containerd[1470]: time="2026-01-17T00:43:49.371604440Z" level=info msg="CreateContainer within sandbox \"df5f12176247337cd16f8d0570dee88f211c9b03d42949c4e56a1d51c2f26beb\" for &ContainerMetadata{Name:cilium-operator,Attempt:1,} returns container id \"8572019dfb58b770a7abf5238c97cf5880e7a924f4c9df597905dbc11446802b\"" Jan 17 00:43:49.380481 containerd[1470]: time="2026-01-17T00:43:49.378078398Z" level=info msg="StartContainer for \"8572019dfb58b770a7abf5238c97cf5880e7a924f4c9df597905dbc11446802b\"" Jan 17 00:43:49.435773 containerd[1470]: time="2026-01-17T00:43:49.435449067Z" level=info msg="CreateContainer within sandbox \"fbf23ad0c01a1056761e07755c839d6b917232caa9435b70448fbecde22e2161\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"fff0cdf4ba8e6927fc356109bde06c041d850c3286a47d19c00a7664e3547a45\"" Jan 17 00:43:49.442777 containerd[1470]: time="2026-01-17T00:43:49.439848170Z" level=info msg="StartContainer for \"fff0cdf4ba8e6927fc356109bde06c041d850c3286a47d19c00a7664e3547a45\"" Jan 17 00:43:49.552934 systemd[1]: Started cri-containerd-8572019dfb58b770a7abf5238c97cf5880e7a924f4c9df597905dbc11446802b.scope - libcontainer container 8572019dfb58b770a7abf5238c97cf5880e7a924f4c9df597905dbc11446802b. 
Jan 17 00:43:49.654691 systemd[1]: Started cri-containerd-fff0cdf4ba8e6927fc356109bde06c041d850c3286a47d19c00a7664e3547a45.scope - libcontainer container fff0cdf4ba8e6927fc356109bde06c041d850c3286a47d19c00a7664e3547a45. Jan 17 00:43:49.769184 containerd[1470]: time="2026-01-17T00:43:49.768093098Z" level=info msg="StartContainer for \"8572019dfb58b770a7abf5238c97cf5880e7a924f4c9df597905dbc11446802b\" returns successfully" Jan 17 00:43:49.877411 containerd[1470]: time="2026-01-17T00:43:49.874717427Z" level=info msg="StartContainer for \"fff0cdf4ba8e6927fc356109bde06c041d850c3286a47d19c00a7664e3547a45\" returns successfully" Jan 17 00:43:50.162559 kubelet[2585]: E0117 00:43:50.160966 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:43:50.164619 kubelet[2585]: E0117 00:43:50.164540 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:43:50.412902 systemd[1]: run-containerd-runc-k8s.io-fff0cdf4ba8e6927fc356109bde06c041d850c3286a47d19c00a7664e3547a45-runc.FbyRp3.mount: Deactivated successfully. Jan 17 00:43:51.181008 kubelet[2585]: E0117 00:43:51.180382 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:43:53.499989 systemd[1]: Started sshd@20-10.0.0.107:22-10.0.0.1:48972.service - OpenSSH per-connection server daemon (10.0.0.1:48972). Jan 17 00:43:53.637835 sshd[4459]: Accepted publickey for core from 10.0.0.1 port 48972 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:43:53.641454 sshd[4459]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:43:53.666136 systemd-logind[1452]: New session 21 of user core. Jan 17 00:43:53.682614 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 17 00:43:54.081687 sshd[4459]: pam_unix(sshd:session): session closed for user core Jan 17 00:43:54.099180 systemd[1]: sshd@20-10.0.0.107:22-10.0.0.1:48972.service: Deactivated successfully. Jan 17 00:43:54.117080 systemd[1]: session-21.scope: Deactivated successfully. Jan 17 00:43:54.119467 systemd-logind[1452]: Session 21 logged out. Waiting for processes to exit. Jan 17 00:43:54.123176 systemd-logind[1452]: Removed session 21. Jan 17 00:43:56.431448 kubelet[2585]: E0117 00:43:56.429384 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:43:59.106457 systemd[1]: Started sshd@21-10.0.0.107:22-10.0.0.1:48980.service - OpenSSH per-connection server daemon (10.0.0.1:48980). Jan 17 00:43:59.197840 sshd[4474]: Accepted publickey for core from 10.0.0.1 port 48980 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:43:59.205048 sshd[4474]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:43:59.224054 systemd-logind[1452]: New session 22 of user core. Jan 17 00:43:59.250560 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 17 00:43:59.695656 sshd[4474]: pam_unix(sshd:session): session closed for user core Jan 17 00:43:59.708042 systemd[1]: sshd@21-10.0.0.107:22-10.0.0.1:48980.service: Deactivated successfully. 
Jan 17 00:43:59.717727 systemd[1]: session-22.scope: Deactivated successfully. Jan 17 00:43:59.723790 systemd-logind[1452]: Session 22 logged out. Waiting for processes to exit. Jan 17 00:43:59.726002 systemd-logind[1452]: Removed session 22. Jan 17 00:43:59.735794 kubelet[2585]: E0117 00:43:59.733683 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:44:04.744928 systemd[1]: Started sshd@22-10.0.0.107:22-10.0.0.1:40054.service - OpenSSH per-connection server daemon (10.0.0.1:40054). Jan 17 00:44:04.892430 sshd[4489]: Accepted publickey for core from 10.0.0.1 port 40054 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:44:04.891913 sshd[4489]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:44:04.919676 systemd-logind[1452]: New session 23 of user core. Jan 17 00:44:04.930580 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 17 00:44:05.218711 sshd[4489]: pam_unix(sshd:session): session closed for user core Jan 17 00:44:05.227703 systemd[1]: sshd@22-10.0.0.107:22-10.0.0.1:40054.service: Deactivated successfully. Jan 17 00:44:05.237757 systemd[1]: session-23.scope: Deactivated successfully. Jan 17 00:44:05.241537 systemd-logind[1452]: Session 23 logged out. Waiting for processes to exit. Jan 17 00:44:05.246957 systemd-logind[1452]: Removed session 23. Jan 17 00:44:05.741673 kubelet[2585]: E0117 00:44:05.739156 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:44:06.450822 kubelet[2585]: E0117 00:44:06.450258 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:44:07.314741 kubelet[2585]: E0117 00:44:07.314664 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:44:07.735525 kubelet[2585]: E0117 00:44:07.735334 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:44:08.733715 kubelet[2585]: E0117 00:44:08.733638 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:44:09.738583 kubelet[2585]: E0117 00:44:09.733877 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:44:10.271850 systemd[1]: Started sshd@23-10.0.0.107:22-10.0.0.1:40064.service - OpenSSH per-connection server daemon (10.0.0.1:40064). Jan 17 00:44:10.355001 sshd[4505]: Accepted publickey for core from 10.0.0.1 port 40064 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:44:10.360702 sshd[4505]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:44:10.387048 systemd-logind[1452]: New session 24 of user core. Jan 17 00:44:10.399526 systemd[1]: Started session-24.scope - Session 24 of User core. 
Jan 17 00:44:10.620916 sshd[4505]: pam_unix(sshd:session): session closed for user core Jan 17 00:44:10.630928 systemd[1]: sshd@23-10.0.0.107:22-10.0.0.1:40064.service: Deactivated successfully. Jan 17 00:44:10.635000 systemd[1]: session-24.scope: Deactivated successfully. Jan 17 00:44:10.638406 systemd-logind[1452]: Session 24 logged out. Waiting for processes to exit. Jan 17 00:44:10.647438 systemd-logind[1452]: Removed session 24. Jan 17 00:44:15.666590 systemd[1]: Started sshd@24-10.0.0.107:22-10.0.0.1:43504.service - OpenSSH per-connection server daemon (10.0.0.1:43504). Jan 17 00:44:15.802028 sshd[4523]: Accepted publickey for core from 10.0.0.1 port 43504 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:44:15.801647 sshd[4523]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:44:15.827629 systemd-logind[1452]: New session 25 of user core. Jan 17 00:44:15.847844 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 17 00:44:16.155398 sshd[4523]: pam_unix(sshd:session): session closed for user core Jan 17 00:44:16.176515 systemd[1]: sshd@24-10.0.0.107:22-10.0.0.1:43504.service: Deactivated successfully. Jan 17 00:44:16.179545 systemd[1]: session-25.scope: Deactivated successfully. Jan 17 00:44:16.182810 systemd-logind[1452]: Session 25 logged out. Waiting for processes to exit. Jan 17 00:44:16.186668 systemd-logind[1452]: Removed session 25. Jan 17 00:44:21.213586 systemd[1]: Started sshd@25-10.0.0.107:22-10.0.0.1:43514.service - OpenSSH per-connection server daemon (10.0.0.1:43514). Jan 17 00:44:21.309911 sshd[4540]: Accepted publickey for core from 10.0.0.1 port 43514 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:44:21.319326 sshd[4540]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:44:21.350109 systemd-logind[1452]: New session 26 of user core. Jan 17 00:44:21.368855 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 17 00:44:21.806521 sshd[4540]: pam_unix(sshd:session): session closed for user core Jan 17 00:44:21.821837 systemd[1]: sshd@25-10.0.0.107:22-10.0.0.1:43514.service: Deactivated successfully. Jan 17 00:44:21.829389 systemd[1]: session-26.scope: Deactivated successfully. Jan 17 00:44:21.835118 systemd-logind[1452]: Session 26 logged out. Waiting for processes to exit. Jan 17 00:44:21.840646 systemd-logind[1452]: Removed session 26. Jan 17 00:44:26.872992 systemd[1]: Started sshd@26-10.0.0.107:22-10.0.0.1:50732.service - OpenSSH per-connection server daemon (10.0.0.1:50732). Jan 17 00:44:26.975570 sshd[4554]: Accepted publickey for core from 10.0.0.1 port 50732 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:44:26.979952 sshd[4554]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:44:27.002497 systemd-logind[1452]: New session 27 of user core. Jan 17 00:44:27.012701 systemd[1]: Started session-27.scope - Session 27 of User core. Jan 17 00:44:27.490891 sshd[4554]: pam_unix(sshd:session): session closed for user core Jan 17 00:44:27.497734 systemd[1]: sshd@26-10.0.0.107:22-10.0.0.1:50732.service: Deactivated successfully. Jan 17 00:44:27.504831 systemd[1]: session-27.scope: Deactivated successfully. Jan 17 00:44:27.512580 systemd-logind[1452]: Session 27 logged out. Waiting for processes to exit. Jan 17 00:44:27.517066 systemd-logind[1452]: Removed session 27. 
Jan 17 00:44:32.553092 systemd[1]: Started sshd@27-10.0.0.107:22-10.0.0.1:49592.service - OpenSSH per-connection server daemon (10.0.0.1:49592). Jan 17 00:44:32.742809 sshd[4568]: Accepted publickey for core from 10.0.0.1 port 49592 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:44:32.747868 sshd[4568]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:44:32.775532 systemd-logind[1452]: New session 28 of user core. Jan 17 00:44:32.798681 systemd[1]: Started session-28.scope - Session 28 of User core. Jan 17 00:44:33.180757 sshd[4568]: pam_unix(sshd:session): session closed for user core Jan 17 00:44:33.199444 systemd[1]: sshd@27-10.0.0.107:22-10.0.0.1:49592.service: Deactivated successfully. Jan 17 00:44:33.209838 systemd[1]: session-28.scope: Deactivated successfully. Jan 17 00:44:33.214947 systemd-logind[1452]: Session 28 logged out. Waiting for processes to exit. Jan 17 00:44:33.218953 systemd-logind[1452]: Removed session 28. Jan 17 00:44:35.744383 kubelet[2585]: E0117 00:44:35.739032 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:44:38.271378 systemd[1]: Started sshd@28-10.0.0.107:22-10.0.0.1:49594.service - OpenSSH per-connection server daemon (10.0.0.1:49594). Jan 17 00:44:38.452544 sshd[4582]: Accepted publickey for core from 10.0.0.1 port 49594 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:44:38.464942 sshd[4582]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:44:38.534179 systemd-logind[1452]: New session 29 of user core. Jan 17 00:44:38.577764 systemd[1]: Started session-29.scope - Session 29 of User core. Jan 17 00:44:39.240777 sshd[4582]: pam_unix(sshd:session): session closed for user core Jan 17 00:44:39.249493 systemd[1]: sshd@28-10.0.0.107:22-10.0.0.1:49594.service: Deactivated successfully. Jan 17 00:44:39.260894 systemd[1]: session-29.scope: Deactivated successfully. Jan 17 00:44:39.270158 systemd-logind[1452]: Session 29 logged out. Waiting for processes to exit. Jan 17 00:44:39.280512 systemd-logind[1452]: Removed session 29. Jan 17 00:44:44.304780 systemd[1]: Started sshd@29-10.0.0.107:22-10.0.0.1:35516.service - OpenSSH per-connection server daemon (10.0.0.1:35516). Jan 17 00:44:44.444957 sshd[4599]: Accepted publickey for core from 10.0.0.1 port 35516 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:44:44.455355 sshd[4599]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:44:44.505179 systemd-logind[1452]: New session 30 of user core. Jan 17 00:44:44.522586 systemd[1]: Started session-30.scope - Session 30 of User core. Jan 17 00:44:45.039972 sshd[4599]: pam_unix(sshd:session): session closed for user core Jan 17 00:44:45.059762 systemd[1]: sshd@29-10.0.0.107:22-10.0.0.1:35516.service: Deactivated successfully. Jan 17 00:44:45.065591 systemd[1]: session-30.scope: Deactivated successfully. Jan 17 00:44:45.072597 systemd-logind[1452]: Session 30 logged out. Waiting for processes to exit. Jan 17 00:44:45.079562 systemd-logind[1452]: Removed session 30. Jan 17 00:44:50.116770 systemd[1]: Started sshd@30-10.0.0.107:22-10.0.0.1:35520.service - OpenSSH per-connection server daemon (10.0.0.1:35520). 
Jan 17 00:44:50.344446 sshd[4613]: Accepted publickey for core from 10.0.0.1 port 35520 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:44:50.347524 sshd[4613]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:44:50.378664 systemd-logind[1452]: New session 31 of user core. Jan 17 00:44:50.391694 systemd[1]: Started session-31.scope - Session 31 of User core. Jan 17 00:44:50.855140 sshd[4613]: pam_unix(sshd:session): session closed for user core Jan 17 00:44:50.881991 systemd[1]: sshd@30-10.0.0.107:22-10.0.0.1:35520.service: Deactivated successfully. Jan 17 00:44:50.884827 systemd[1]: session-31.scope: Deactivated successfully. Jan 17 00:44:50.892387 systemd-logind[1452]: Session 31 logged out. Waiting for processes to exit. Jan 17 00:44:50.897057 systemd-logind[1452]: Removed session 31. Jan 17 00:44:55.949466 systemd[1]: Started sshd@31-10.0.0.107:22-10.0.0.1:33528.service - OpenSSH per-connection server daemon (10.0.0.1:33528). Jan 17 00:44:56.089441 sshd[4628]: Accepted publickey for core from 10.0.0.1 port 33528 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:44:56.095943 sshd[4628]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:44:56.134757 systemd-logind[1452]: New session 32 of user core. Jan 17 00:44:56.150582 systemd[1]: Started session-32.scope - Session 32 of User core. Jan 17 00:44:56.483941 sshd[4628]: pam_unix(sshd:session): session closed for user core Jan 17 00:44:56.513955 systemd[1]: sshd@31-10.0.0.107:22-10.0.0.1:33528.service: Deactivated successfully. Jan 17 00:44:56.517611 systemd[1]: session-32.scope: Deactivated successfully. Jan 17 00:44:56.519524 systemd-logind[1452]: Session 32 logged out. Waiting for processes to exit. Jan 17 00:44:56.532818 systemd[1]: Started sshd@32-10.0.0.107:22-10.0.0.1:33544.service - OpenSSH per-connection server daemon (10.0.0.1:33544). Jan 17 00:44:56.536539 systemd-logind[1452]: Removed session 32. Jan 17 00:44:56.651416 sshd[4642]: Accepted publickey for core from 10.0.0.1 port 33544 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:44:56.656848 sshd[4642]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:44:56.680431 systemd-logind[1452]: New session 33 of user core. Jan 17 00:44:56.690027 systemd[1]: Started session-33.scope - Session 33 of User core. Jan 17 00:44:58.254391 sshd[4642]: pam_unix(sshd:session): session closed for user core Jan 17 00:44:58.312702 systemd[1]: Started sshd@33-10.0.0.107:22-10.0.0.1:33560.service - OpenSSH per-connection server daemon (10.0.0.1:33560). Jan 17 00:44:58.318715 systemd[1]: sshd@32-10.0.0.107:22-10.0.0.1:33544.service: Deactivated successfully. Jan 17 00:44:58.328753 systemd[1]: session-33.scope: Deactivated successfully. Jan 17 00:44:58.364539 systemd-logind[1452]: Session 33 logged out. Waiting for processes to exit. Jan 17 00:44:58.407957 systemd-logind[1452]: Removed session 33. Jan 17 00:44:58.635671 sshd[4653]: Accepted publickey for core from 10.0.0.1 port 33560 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:44:58.655019 sshd[4653]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:44:58.680997 systemd-logind[1452]: New session 34 of user core. Jan 17 00:44:58.701731 systemd[1]: Started session-34.scope - Session 34 of User core. 
Jan 17 00:45:01.387903 sshd[4653]: pam_unix(sshd:session): session closed for user core Jan 17 00:45:01.420619 systemd[1]: sshd@33-10.0.0.107:22-10.0.0.1:33560.service: Deactivated successfully. Jan 17 00:45:01.447520 systemd[1]: session-34.scope: Deactivated successfully. Jan 17 00:45:01.447825 systemd[1]: session-34.scope: Consumed 1.203s CPU time. Jan 17 00:45:01.453147 systemd-logind[1452]: Session 34 logged out. Waiting for processes to exit. Jan 17 00:45:01.483848 systemd-logind[1452]: Removed session 34. Jan 17 00:45:01.500923 systemd[1]: Started sshd@34-10.0.0.107:22-10.0.0.1:33570.service - OpenSSH per-connection server daemon (10.0.0.1:33570). Jan 17 00:45:01.626669 sshd[4682]: Accepted publickey for core from 10.0.0.1 port 33570 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:45:01.632923 sshd[4682]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:45:01.651906 systemd-logind[1452]: New session 35 of user core. Jan 17 00:45:01.666101 systemd[1]: Started session-35.scope - Session 35 of User core. Jan 17 00:45:01.739912 kubelet[2585]: E0117 00:45:01.737887 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:45:02.518515 sshd[4682]: pam_unix(sshd:session): session closed for user core Jan 17 00:45:02.563039 systemd[1]: sshd@34-10.0.0.107:22-10.0.0.1:33570.service: Deactivated successfully. Jan 17 00:45:02.571580 systemd[1]: session-35.scope: Deactivated successfully. Jan 17 00:45:02.573027 systemd-logind[1452]: Session 35 logged out. Waiting for processes to exit. Jan 17 00:45:02.616714 systemd[1]: Started sshd@35-10.0.0.107:22-10.0.0.1:44970.service - OpenSSH per-connection server daemon (10.0.0.1:44970). Jan 17 00:45:02.619459 systemd-logind[1452]: Removed session 35. Jan 17 00:45:02.675594 sshd[4694]: Accepted publickey for core from 10.0.0.1 port 44970 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:45:02.678508 sshd[4694]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:45:02.701859 systemd-logind[1452]: New session 36 of user core. Jan 17 00:45:02.716531 systemd[1]: Started session-36.scope - Session 36 of User core. Jan 17 00:45:03.184464 sshd[4694]: pam_unix(sshd:session): session closed for user core Jan 17 00:45:03.201655 systemd[1]: sshd@35-10.0.0.107:22-10.0.0.1:44970.service: Deactivated successfully. Jan 17 00:45:03.204154 systemd[1]: session-36.scope: Deactivated successfully. Jan 17 00:45:03.225683 systemd-logind[1452]: Session 36 logged out. Waiting for processes to exit. Jan 17 00:45:03.230111 systemd-logind[1452]: Removed session 36. Jan 17 00:45:10.748102 systemd[1]: Started sshd@36-10.0.0.107:22-10.0.0.1:44982.service - OpenSSH per-connection server daemon (10.0.0.1:44982). 
Jan 17 00:45:11.426491 kubelet[2585]: E0117 00:45:11.426048 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:45:11.451175 kubelet[2585]: E0117 00:45:11.429132 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:45:11.449486 sshd[4711]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:45:11.453092 sshd[4711]: Accepted publickey for core from 10.0.0.1 port 44982 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:45:11.538417 systemd-logind[1452]: New session 37 of user core. Jan 17 00:45:11.582499 systemd[1]: Started session-37.scope - Session 37 of User core. Jan 17 00:45:12.755531 kubelet[2585]: E0117 00:45:12.753435 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:45:13.154579 sshd[4711]: pam_unix(sshd:session): session closed for user core Jan 17 00:45:13.210418 systemd[1]: sshd@36-10.0.0.107:22-10.0.0.1:44982.service: Deactivated successfully. Jan 17 00:45:13.297149 systemd[1]: session-37.scope: Deactivated successfully. Jan 17 00:45:13.300701 systemd[1]: session-37.scope: Consumed 1.194s CPU time. Jan 17 00:45:13.302172 systemd-logind[1452]: Session 37 logged out. Waiting for processes to exit. Jan 17 00:45:13.311869 systemd-logind[1452]: Removed session 37. Jan 17 00:45:13.742955 kubelet[2585]: E0117 00:45:13.738837 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:45:18.452576 systemd[1]: Started sshd@37-10.0.0.107:22-10.0.0.1:52266.service - OpenSSH per-connection server daemon (10.0.0.1:52266). Jan 17 00:45:18.581938 sshd[4727]: Accepted publickey for core from 10.0.0.1 port 52266 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:45:18.579575 sshd[4727]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:45:18.602798 systemd-logind[1452]: New session 38 of user core. Jan 17 00:45:18.636700 systemd[1]: Started session-38.scope - Session 38 of User core. Jan 17 00:45:19.120816 sshd[4727]: pam_unix(sshd:session): session closed for user core Jan 17 00:45:19.139399 systemd[1]: sshd@37-10.0.0.107:22-10.0.0.1:52266.service: Deactivated successfully. Jan 17 00:45:19.151485 systemd[1]: session-38.scope: Deactivated successfully. Jan 17 00:45:19.154430 systemd-logind[1452]: Session 38 logged out. Waiting for processes to exit. Jan 17 00:45:19.156732 systemd-logind[1452]: Removed session 38. Jan 17 00:45:23.734823 kubelet[2585]: E0117 00:45:23.733894 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:45:24.223040 systemd[1]: Started sshd@38-10.0.0.107:22-10.0.0.1:60584.service - OpenSSH per-connection server daemon (10.0.0.1:60584). 
Jan 17 00:45:24.286345 sshd[4742]: Accepted publickey for core from 10.0.0.1 port 60584 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:45:24.289353 sshd[4742]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:45:24.298364 systemd-logind[1452]: New session 39 of user core. Jan 17 00:45:24.315320 systemd[1]: Started session-39.scope - Session 39 of User core. Jan 17 00:45:24.727160 sshd[4742]: pam_unix(sshd:session): session closed for user core Jan 17 00:45:24.742916 systemd[1]: sshd@38-10.0.0.107:22-10.0.0.1:60584.service: Deactivated successfully. Jan 17 00:45:24.751400 systemd[1]: session-39.scope: Deactivated successfully. Jan 17 00:45:24.754187 systemd-logind[1452]: Session 39 logged out. Waiting for processes to exit. Jan 17 00:45:24.774884 systemd-logind[1452]: Removed session 39. Jan 17 00:45:29.783676 systemd[1]: Started sshd@39-10.0.0.107:22-10.0.0.1:60596.service - OpenSSH per-connection server daemon (10.0.0.1:60596). Jan 17 00:45:29.956467 sshd[4757]: Accepted publickey for core from 10.0.0.1 port 60596 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:45:29.969160 sshd[4757]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:45:30.012287 systemd-logind[1452]: New session 40 of user core. Jan 17 00:45:30.030847 systemd[1]: Started session-40.scope - Session 40 of User core. Jan 17 00:45:30.504788 sshd[4757]: pam_unix(sshd:session): session closed for user core Jan 17 00:45:30.516426 systemd-logind[1452]: Session 40 logged out. Waiting for processes to exit. Jan 17 00:45:30.519904 systemd[1]: sshd@39-10.0.0.107:22-10.0.0.1:60596.service: Deactivated successfully. Jan 17 00:45:30.526907 systemd[1]: session-40.scope: Deactivated successfully. Jan 17 00:45:30.534007 systemd-logind[1452]: Removed session 40. Jan 17 00:45:31.737574 kubelet[2585]: E0117 00:45:31.736669 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:45:35.561753 systemd[1]: Started sshd@40-10.0.0.107:22-10.0.0.1:56456.service - OpenSSH per-connection server daemon (10.0.0.1:56456). Jan 17 00:45:35.676370 sshd[4775]: Accepted publickey for core from 10.0.0.1 port 56456 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:45:35.678587 sshd[4775]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:45:35.722964 systemd-logind[1452]: New session 41 of user core. Jan 17 00:45:35.731582 systemd[1]: Started session-41.scope - Session 41 of User core. Jan 17 00:45:36.001504 sshd[4775]: pam_unix(sshd:session): session closed for user core Jan 17 00:45:36.011129 systemd[1]: sshd@40-10.0.0.107:22-10.0.0.1:56456.service: Deactivated successfully. Jan 17 00:45:36.014126 systemd[1]: session-41.scope: Deactivated successfully. Jan 17 00:45:36.020024 systemd-logind[1452]: Session 41 logged out. Waiting for processes to exit. Jan 17 00:45:36.023027 systemd-logind[1452]: Removed session 41. Jan 17 00:45:41.068313 systemd[1]: Started sshd@41-10.0.0.107:22-10.0.0.1:56462.service - OpenSSH per-connection server daemon (10.0.0.1:56462). 
Jan 17 00:45:41.177626 sshd[4790]: Accepted publickey for core from 10.0.0.1 port 56462 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:45:41.184048 sshd[4790]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:45:41.212568 systemd-logind[1452]: New session 42 of user core. Jan 17 00:45:41.226499 systemd[1]: Started session-42.scope - Session 42 of User core. Jan 17 00:45:41.631752 sshd[4790]: pam_unix(sshd:session): session closed for user core Jan 17 00:45:41.644734 systemd[1]: sshd@41-10.0.0.107:22-10.0.0.1:56462.service: Deactivated successfully. Jan 17 00:45:41.649689 systemd[1]: session-42.scope: Deactivated successfully. Jan 17 00:45:41.660637 systemd-logind[1452]: Session 42 logged out. Waiting for processes to exit. Jan 17 00:45:41.675119 systemd-logind[1452]: Removed session 42. Jan 17 00:45:45.738916 kubelet[2585]: E0117 00:45:45.738696 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:45:46.688913 systemd[1]: Started sshd@42-10.0.0.107:22-10.0.0.1:41438.service - OpenSSH per-connection server daemon (10.0.0.1:41438). Jan 17 00:45:46.742786 sshd[4806]: Accepted publickey for core from 10.0.0.1 port 41438 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:45:46.745802 sshd[4806]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:45:46.763970 systemd-logind[1452]: New session 43 of user core. Jan 17 00:45:46.770811 systemd[1]: Started session-43.scope - Session 43 of User core. Jan 17 00:45:47.192939 sshd[4806]: pam_unix(sshd:session): session closed for user core Jan 17 00:45:47.210430 systemd[1]: sshd@42-10.0.0.107:22-10.0.0.1:41438.service: Deactivated successfully. Jan 17 00:45:47.220522 systemd[1]: session-43.scope: Deactivated successfully. Jan 17 00:45:47.225578 systemd-logind[1452]: Session 43 logged out. Waiting for processes to exit. Jan 17 00:45:47.241838 systemd[1]: Started sshd@43-10.0.0.107:22-10.0.0.1:41454.service - OpenSSH per-connection server daemon (10.0.0.1:41454). Jan 17 00:45:47.248160 systemd-logind[1452]: Removed session 43. Jan 17 00:45:47.331017 sshd[4820]: Accepted publickey for core from 10.0.0.1 port 41454 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:45:47.338474 sshd[4820]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:45:47.360139 systemd-logind[1452]: New session 44 of user core. Jan 17 00:45:47.386348 systemd[1]: Started session-44.scope - Session 44 of User core. Jan 17 00:45:50.321997 containerd[1470]: time="2026-01-17T00:45:50.320959396Z" level=info msg="StopContainer for \"8572019dfb58b770a7abf5238c97cf5880e7a924f4c9df597905dbc11446802b\" with timeout 30 (s)" Jan 17 00:45:50.332669 containerd[1470]: time="2026-01-17T00:45:50.332575225Z" level=info msg="Stop container \"8572019dfb58b770a7abf5238c97cf5880e7a924f4c9df597905dbc11446802b\" with signal terminated" Jan 17 00:45:50.395919 systemd[1]: cri-containerd-8572019dfb58b770a7abf5238c97cf5880e7a924f4c9df597905dbc11446802b.scope: Deactivated successfully. Jan 17 00:45:50.397931 systemd[1]: cri-containerd-8572019dfb58b770a7abf5238c97cf5880e7a924f4c9df597905dbc11446802b.scope: Consumed 1.446s CPU time. 
Jan 17 00:45:50.443076 containerd[1470]: time="2026-01-17T00:45:50.442932851Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 17 00:45:50.468550 containerd[1470]: time="2026-01-17T00:45:50.468423347Z" level=info msg="StopContainer for \"2f27b31bc6487c1ec7aa8135f597dd436a501873c0d58ea98ff19860615b1ba9\" with timeout 2 (s)" Jan 17 00:45:50.469680 containerd[1470]: time="2026-01-17T00:45:50.469538356Z" level=info msg="Stop container \"2f27b31bc6487c1ec7aa8135f597dd436a501873c0d58ea98ff19860615b1ba9\" with signal terminated" Jan 17 00:45:50.495314 systemd-networkd[1391]: lxc_health: Link DOWN Jan 17 00:45:50.495358 systemd-networkd[1391]: lxc_health: Lost carrier Jan 17 00:45:50.505793 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8572019dfb58b770a7abf5238c97cf5880e7a924f4c9df597905dbc11446802b-rootfs.mount: Deactivated successfully. Jan 17 00:45:50.557772 containerd[1470]: time="2026-01-17T00:45:50.550115732Z" level=info msg="shim disconnected" id=8572019dfb58b770a7abf5238c97cf5880e7a924f4c9df597905dbc11446802b namespace=k8s.io Jan 17 00:45:50.557772 containerd[1470]: time="2026-01-17T00:45:50.550284938Z" level=warning msg="cleaning up after shim disconnected" id=8572019dfb58b770a7abf5238c97cf5880e7a924f4c9df597905dbc11446802b namespace=k8s.io Jan 17 00:45:50.557772 containerd[1470]: time="2026-01-17T00:45:50.553707810Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:45:50.551141 systemd[1]: cri-containerd-2f27b31bc6487c1ec7aa8135f597dd436a501873c0d58ea98ff19860615b1ba9.scope: Deactivated successfully. Jan 17 00:45:50.552369 systemd[1]: cri-containerd-2f27b31bc6487c1ec7aa8135f597dd436a501873c0d58ea98ff19860615b1ba9.scope: Consumed 24.715s CPU time. Jan 17 00:45:50.654104 containerd[1470]: time="2026-01-17T00:45:50.654057963Z" level=info msg="StopContainer for \"8572019dfb58b770a7abf5238c97cf5880e7a924f4c9df597905dbc11446802b\" returns successfully" Jan 17 00:45:50.655953 containerd[1470]: time="2026-01-17T00:45:50.655873838Z" level=info msg="StopPodSandbox for \"df5f12176247337cd16f8d0570dee88f211c9b03d42949c4e56a1d51c2f26beb\"" Jan 17 00:45:50.667389 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2f27b31bc6487c1ec7aa8135f597dd436a501873c0d58ea98ff19860615b1ba9-rootfs.mount: Deactivated successfully. 
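The containerd error above is a direct consequence of stopping the Cilium agent: removing /etc/cni/net.d/05-cilium.conf leaves the CNI config directory empty, so the reload finds nothing to load and the runtime network later reports "cni plugin not initialized". A hedged sketch of the same directory check (the extension list is an assumption about what the CNI loader accepts):

```python
# Illustrative only: reproduce containerd's "no network config found" condition
# by listing the CNI config files it would consider in /etc/cni/net.d.
import os

CNI_DIR = "/etc/cni/net.d"                   # default path, as seen in the log
CNI_EXTS = (".conf", ".conflist", ".json")   # assumed acceptable extensions

def cni_configs(cni_dir=CNI_DIR):
    try:
        names = sorted(os.listdir(cni_dir))
    except FileNotFoundError:
        return []
    return [n for n in names if n.endswith(CNI_EXTS)]

if __name__ == "__main__":
    configs = cni_configs()
    if not configs:
        print(f"no network config found in {CNI_DIR}: network not ready")
    else:
        print("CNI configs:", ", ".join(configs))
```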
Jan 17 00:45:50.688737 kubelet[2585]: I0117 00:45:50.685703 2585 scope.go:117] "RemoveContainer" containerID="48d07c6260cefe866d1fa9715a3a5e913653e48c44ea954639134ef779431ac6" Jan 17 00:45:50.689804 containerd[1470]: time="2026-01-17T00:45:50.686072044Z" level=info msg="Container to stop \"48d07c6260cefe866d1fa9715a3a5e913653e48c44ea954639134ef779431ac6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 17 00:45:50.689804 containerd[1470]: time="2026-01-17T00:45:50.686107070Z" level=info msg="Container to stop \"8572019dfb58b770a7abf5238c97cf5880e7a924f4c9df597905dbc11446802b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 17 00:45:50.689804 containerd[1470]: time="2026-01-17T00:45:50.687067472Z" level=info msg="RemoveContainer for \"48d07c6260cefe866d1fa9715a3a5e913653e48c44ea954639134ef779431ac6\"" Jan 17 00:45:50.693847 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-df5f12176247337cd16f8d0570dee88f211c9b03d42949c4e56a1d51c2f26beb-shm.mount: Deactivated successfully. Jan 17 00:45:50.724140 containerd[1470]: time="2026-01-17T00:45:50.723805042Z" level=info msg="RemoveContainer for \"48d07c6260cefe866d1fa9715a3a5e913653e48c44ea954639134ef779431ac6\" returns successfully" Jan 17 00:45:50.732539 containerd[1470]: time="2026-01-17T00:45:50.732460808Z" level=info msg="shim disconnected" id=2f27b31bc6487c1ec7aa8135f597dd436a501873c0d58ea98ff19860615b1ba9 namespace=k8s.io Jan 17 00:45:50.732960 containerd[1470]: time="2026-01-17T00:45:50.732713368Z" level=warning msg="cleaning up after shim disconnected" id=2f27b31bc6487c1ec7aa8135f597dd436a501873c0d58ea98ff19860615b1ba9 namespace=k8s.io Jan 17 00:45:50.732960 containerd[1470]: time="2026-01-17T00:45:50.732736862Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:45:50.751651 systemd[1]: cri-containerd-df5f12176247337cd16f8d0570dee88f211c9b03d42949c4e56a1d51c2f26beb.scope: Deactivated successfully. 
Jan 17 00:45:50.812022 containerd[1470]: time="2026-01-17T00:45:50.810146658Z" level=info msg="StopContainer for \"2f27b31bc6487c1ec7aa8135f597dd436a501873c0d58ea98ff19860615b1ba9\" returns successfully" Jan 17 00:45:50.812022 containerd[1470]: time="2026-01-17T00:45:50.811019433Z" level=info msg="StopPodSandbox for \"74bcea9deac53f372d67f59381eff1ba3ce4c35837e23ac66c5165d3e8df079f\"" Jan 17 00:45:50.812022 containerd[1470]: time="2026-01-17T00:45:50.811058165Z" level=info msg="Container to stop \"43a5eebd8659d91d078ebcda0be5615205b701b11656450dc8326147c470142e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 17 00:45:50.812022 containerd[1470]: time="2026-01-17T00:45:50.811078924Z" level=info msg="Container to stop \"2f27b31bc6487c1ec7aa8135f597dd436a501873c0d58ea98ff19860615b1ba9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 17 00:45:50.812022 containerd[1470]: time="2026-01-17T00:45:50.811094363Z" level=info msg="Container to stop \"c90727af08677414c62d9f3030f78ac1f9cb92531d0fad34d20e20163fa5bf03\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 17 00:45:50.812022 containerd[1470]: time="2026-01-17T00:45:50.811110723Z" level=info msg="Container to stop \"b10fd0c16f20376b2c4944108110e8d6e1af361949731c423cf5d5cb210de5dd\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 17 00:45:50.812022 containerd[1470]: time="2026-01-17T00:45:50.811125812Z" level=info msg="Container to stop \"320e020c86776f11041f34dcd13acc12cbaa5bd29c3295e5a95c18635f61da35\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 17 00:45:50.864836 systemd[1]: cri-containerd-74bcea9deac53f372d67f59381eff1ba3ce4c35837e23ac66c5165d3e8df079f.scope: Deactivated successfully. 
Jan 17 00:45:50.873555 containerd[1470]: time="2026-01-17T00:45:50.873415576Z" level=info msg="shim disconnected" id=df5f12176247337cd16f8d0570dee88f211c9b03d42949c4e56a1d51c2f26beb namespace=k8s.io Jan 17 00:45:50.873989 containerd[1470]: time="2026-01-17T00:45:50.873959699Z" level=warning msg="cleaning up after shim disconnected" id=df5f12176247337cd16f8d0570dee88f211c9b03d42949c4e56a1d51c2f26beb namespace=k8s.io Jan 17 00:45:50.874090 containerd[1470]: time="2026-01-17T00:45:50.874068172Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:45:50.923776 containerd[1470]: time="2026-01-17T00:45:50.923127522Z" level=info msg="TearDown network for sandbox \"df5f12176247337cd16f8d0570dee88f211c9b03d42949c4e56a1d51c2f26beb\" successfully" Jan 17 00:45:50.923776 containerd[1470]: time="2026-01-17T00:45:50.923315051Z" level=info msg="StopPodSandbox for \"df5f12176247337cd16f8d0570dee88f211c9b03d42949c4e56a1d51c2f26beb\" returns successfully" Jan 17 00:45:50.974530 containerd[1470]: time="2026-01-17T00:45:50.974306223Z" level=info msg="shim disconnected" id=74bcea9deac53f372d67f59381eff1ba3ce4c35837e23ac66c5165d3e8df079f namespace=k8s.io Jan 17 00:45:50.974530 containerd[1470]: time="2026-01-17T00:45:50.974412020Z" level=warning msg="cleaning up after shim disconnected" id=74bcea9deac53f372d67f59381eff1ba3ce4c35837e23ac66c5165d3e8df079f namespace=k8s.io Jan 17 00:45:50.974530 containerd[1470]: time="2026-01-17T00:45:50.974429603Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:45:51.025381 containerd[1470]: time="2026-01-17T00:45:51.025061160Z" level=warning msg="cleanup warnings time=\"2026-01-17T00:45:51Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 17 00:45:51.032282 containerd[1470]: time="2026-01-17T00:45:51.031916275Z" level=info msg="TearDown network for sandbox \"74bcea9deac53f372d67f59381eff1ba3ce4c35837e23ac66c5165d3e8df079f\" successfully" Jan 17 00:45:51.032282 containerd[1470]: time="2026-01-17T00:45:51.031988419Z" level=info msg="StopPodSandbox for \"74bcea9deac53f372d67f59381eff1ba3ce4c35837e23ac66c5165d3e8df079f\" returns successfully" Jan 17 00:45:51.098520 kubelet[2585]: I0117 00:45:51.096105 2585 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/47237cb4-c2f4-4383-b09c-99b5cc5dae91-cilium-config-path\") pod \"47237cb4-c2f4-4383-b09c-99b5cc5dae91\" (UID: \"47237cb4-c2f4-4383-b09c-99b5cc5dae91\") " Jan 17 00:45:51.098520 kubelet[2585]: I0117 00:45:51.096175 2585 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2lr4z\" (UniqueName: \"kubernetes.io/projected/47237cb4-c2f4-4383-b09c-99b5cc5dae91-kube-api-access-2lr4z\") pod \"47237cb4-c2f4-4383-b09c-99b5cc5dae91\" (UID: \"47237cb4-c2f4-4383-b09c-99b5cc5dae91\") " Jan 17 00:45:51.105553 kubelet[2585]: I0117 00:45:51.103943 2585 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47237cb4-c2f4-4383-b09c-99b5cc5dae91-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "47237cb4-c2f4-4383-b09c-99b5cc5dae91" (UID: "47237cb4-c2f4-4383-b09c-99b5cc5dae91"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 17 00:45:51.135083 kubelet[2585]: I0117 00:45:51.134835 2585 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47237cb4-c2f4-4383-b09c-99b5cc5dae91-kube-api-access-2lr4z" (OuterVolumeSpecName: "kube-api-access-2lr4z") pod "47237cb4-c2f4-4383-b09c-99b5cc5dae91" (UID: "47237cb4-c2f4-4383-b09c-99b5cc5dae91"). InnerVolumeSpecName "kube-api-access-2lr4z". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 17 00:45:51.202619 kubelet[2585]: I0117 00:45:51.200577 2585 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/750cca3b-f3be-48de-9f36-1cc8e2858e62-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "750cca3b-f3be-48de-9f36-1cc8e2858e62" (UID: "750cca3b-f3be-48de-9f36-1cc8e2858e62"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 17 00:45:51.202619 kubelet[2585]: I0117 00:45:51.200463 2585 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/750cca3b-f3be-48de-9f36-1cc8e2858e62-bpf-maps\") pod \"750cca3b-f3be-48de-9f36-1cc8e2858e62\" (UID: \"750cca3b-f3be-48de-9f36-1cc8e2858e62\") " Jan 17 00:45:51.202619 kubelet[2585]: I0117 00:45:51.200708 2585 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/750cca3b-f3be-48de-9f36-1cc8e2858e62-host-proc-sys-net\") pod \"750cca3b-f3be-48de-9f36-1cc8e2858e62\" (UID: \"750cca3b-f3be-48de-9f36-1cc8e2858e62\") " Jan 17 00:45:51.202619 kubelet[2585]: I0117 00:45:51.200738 2585 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/750cca3b-f3be-48de-9f36-1cc8e2858e62-xtables-lock\") pod \"750cca3b-f3be-48de-9f36-1cc8e2858e62\" (UID: \"750cca3b-f3be-48de-9f36-1cc8e2858e62\") " Jan 17 00:45:51.202619 kubelet[2585]: I0117 00:45:51.200769 2585 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/750cca3b-f3be-48de-9f36-1cc8e2858e62-clustermesh-secrets\") pod \"750cca3b-f3be-48de-9f36-1cc8e2858e62\" (UID: \"750cca3b-f3be-48de-9f36-1cc8e2858e62\") " Jan 17 00:45:51.202619 kubelet[2585]: I0117 00:45:51.200792 2585 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/750cca3b-f3be-48de-9f36-1cc8e2858e62-cni-path\") pod \"750cca3b-f3be-48de-9f36-1cc8e2858e62\" (UID: \"750cca3b-f3be-48de-9f36-1cc8e2858e62\") " Jan 17 00:45:51.203375 kubelet[2585]: I0117 00:45:51.200819 2585 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/750cca3b-f3be-48de-9f36-1cc8e2858e62-cilium-config-path\") pod \"750cca3b-f3be-48de-9f36-1cc8e2858e62\" (UID: \"750cca3b-f3be-48de-9f36-1cc8e2858e62\") " Jan 17 00:45:51.203375 kubelet[2585]: I0117 00:45:51.200849 2585 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/750cca3b-f3be-48de-9f36-1cc8e2858e62-cilium-cgroup\") pod \"750cca3b-f3be-48de-9f36-1cc8e2858e62\" (UID: \"750cca3b-f3be-48de-9f36-1cc8e2858e62\") " Jan 17 00:45:51.203375 kubelet[2585]: I0117 00:45:51.200871 2585 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" 
(UniqueName: \"kubernetes.io/host-path/750cca3b-f3be-48de-9f36-1cc8e2858e62-etc-cni-netd\") pod \"750cca3b-f3be-48de-9f36-1cc8e2858e62\" (UID: \"750cca3b-f3be-48de-9f36-1cc8e2858e62\") " Jan 17 00:45:51.203375 kubelet[2585]: I0117 00:45:51.200893 2585 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/750cca3b-f3be-48de-9f36-1cc8e2858e62-host-proc-sys-kernel\") pod \"750cca3b-f3be-48de-9f36-1cc8e2858e62\" (UID: \"750cca3b-f3be-48de-9f36-1cc8e2858e62\") " Jan 17 00:45:51.203375 kubelet[2585]: I0117 00:45:51.200922 2585 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q94vg\" (UniqueName: \"kubernetes.io/projected/750cca3b-f3be-48de-9f36-1cc8e2858e62-kube-api-access-q94vg\") pod \"750cca3b-f3be-48de-9f36-1cc8e2858e62\" (UID: \"750cca3b-f3be-48de-9f36-1cc8e2858e62\") " Jan 17 00:45:51.203375 kubelet[2585]: I0117 00:45:51.200945 2585 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/750cca3b-f3be-48de-9f36-1cc8e2858e62-cilium-run\") pod \"750cca3b-f3be-48de-9f36-1cc8e2858e62\" (UID: \"750cca3b-f3be-48de-9f36-1cc8e2858e62\") " Jan 17 00:45:51.203598 kubelet[2585]: I0117 00:45:51.200968 2585 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/750cca3b-f3be-48de-9f36-1cc8e2858e62-hostproc\") pod \"750cca3b-f3be-48de-9f36-1cc8e2858e62\" (UID: \"750cca3b-f3be-48de-9f36-1cc8e2858e62\") " Jan 17 00:45:51.203598 kubelet[2585]: I0117 00:45:51.200993 2585 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/750cca3b-f3be-48de-9f36-1cc8e2858e62-lib-modules\") pod \"750cca3b-f3be-48de-9f36-1cc8e2858e62\" (UID: \"750cca3b-f3be-48de-9f36-1cc8e2858e62\") " Jan 17 00:45:51.203598 kubelet[2585]: I0117 00:45:51.201020 2585 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/750cca3b-f3be-48de-9f36-1cc8e2858e62-hubble-tls\") pod \"750cca3b-f3be-48de-9f36-1cc8e2858e62\" (UID: \"750cca3b-f3be-48de-9f36-1cc8e2858e62\") " Jan 17 00:45:51.203598 kubelet[2585]: I0117 00:45:51.201148 2585 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/750cca3b-f3be-48de-9f36-1cc8e2858e62-bpf-maps\") on node \"localhost\" DevicePath \"\"" Jan 17 00:45:51.203598 kubelet[2585]: I0117 00:45:51.201174 2585 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/47237cb4-c2f4-4383-b09c-99b5cc5dae91-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jan 17 00:45:51.203598 kubelet[2585]: I0117 00:45:51.201274 2585 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2lr4z\" (UniqueName: \"kubernetes.io/projected/47237cb4-c2f4-4383-b09c-99b5cc5dae91-kube-api-access-2lr4z\") on node \"localhost\" DevicePath \"\"" Jan 17 00:45:51.203598 kubelet[2585]: I0117 00:45:51.201813 2585 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/750cca3b-f3be-48de-9f36-1cc8e2858e62-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "750cca3b-f3be-48de-9f36-1cc8e2858e62" (UID: "750cca3b-f3be-48de-9f36-1cc8e2858e62"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 17 00:45:51.203916 kubelet[2585]: I0117 00:45:51.202960 2585 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/750cca3b-f3be-48de-9f36-1cc8e2858e62-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "750cca3b-f3be-48de-9f36-1cc8e2858e62" (UID: "750cca3b-f3be-48de-9f36-1cc8e2858e62"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 17 00:45:51.210289 kubelet[2585]: I0117 00:45:51.207630 2585 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/750cca3b-f3be-48de-9f36-1cc8e2858e62-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "750cca3b-f3be-48de-9f36-1cc8e2858e62" (UID: "750cca3b-f3be-48de-9f36-1cc8e2858e62"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 17 00:45:51.210289 kubelet[2585]: I0117 00:45:51.207701 2585 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/750cca3b-f3be-48de-9f36-1cc8e2858e62-cni-path" (OuterVolumeSpecName: "cni-path") pod "750cca3b-f3be-48de-9f36-1cc8e2858e62" (UID: "750cca3b-f3be-48de-9f36-1cc8e2858e62"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 17 00:45:51.210289 kubelet[2585]: I0117 00:45:51.209182 2585 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/750cca3b-f3be-48de-9f36-1cc8e2858e62-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "750cca3b-f3be-48de-9f36-1cc8e2858e62" (UID: "750cca3b-f3be-48de-9f36-1cc8e2858e62"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 17 00:45:51.210289 kubelet[2585]: I0117 00:45:51.209318 2585 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/750cca3b-f3be-48de-9f36-1cc8e2858e62-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "750cca3b-f3be-48de-9f36-1cc8e2858e62" (UID: "750cca3b-f3be-48de-9f36-1cc8e2858e62"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 17 00:45:51.210289 kubelet[2585]: I0117 00:45:51.209347 2585 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/750cca3b-f3be-48de-9f36-1cc8e2858e62-hostproc" (OuterVolumeSpecName: "hostproc") pod "750cca3b-f3be-48de-9f36-1cc8e2858e62" (UID: "750cca3b-f3be-48de-9f36-1cc8e2858e62"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 17 00:45:51.210566 kubelet[2585]: I0117 00:45:51.209372 2585 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/750cca3b-f3be-48de-9f36-1cc8e2858e62-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "750cca3b-f3be-48de-9f36-1cc8e2858e62" (UID: "750cca3b-f3be-48de-9f36-1cc8e2858e62"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 17 00:45:51.210566 kubelet[2585]: I0117 00:45:51.209394 2585 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/750cca3b-f3be-48de-9f36-1cc8e2858e62-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "750cca3b-f3be-48de-9f36-1cc8e2858e62" (UID: "750cca3b-f3be-48de-9f36-1cc8e2858e62"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 17 00:45:51.219271 kubelet[2585]: I0117 00:45:51.219083 2585 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/750cca3b-f3be-48de-9f36-1cc8e2858e62-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "750cca3b-f3be-48de-9f36-1cc8e2858e62" (UID: "750cca3b-f3be-48de-9f36-1cc8e2858e62"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 17 00:45:51.221778 kubelet[2585]: I0117 00:45:51.220705 2585 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/750cca3b-f3be-48de-9f36-1cc8e2858e62-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "750cca3b-f3be-48de-9f36-1cc8e2858e62" (UID: "750cca3b-f3be-48de-9f36-1cc8e2858e62"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 17 00:45:51.221778 kubelet[2585]: I0117 00:45:51.220869 2585 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/750cca3b-f3be-48de-9f36-1cc8e2858e62-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "750cca3b-f3be-48de-9f36-1cc8e2858e62" (UID: "750cca3b-f3be-48de-9f36-1cc8e2858e62"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 17 00:45:51.225734 kubelet[2585]: I0117 00:45:51.225692 2585 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/750cca3b-f3be-48de-9f36-1cc8e2858e62-kube-api-access-q94vg" (OuterVolumeSpecName: "kube-api-access-q94vg") pod "750cca3b-f3be-48de-9f36-1cc8e2858e62" (UID: "750cca3b-f3be-48de-9f36-1cc8e2858e62"). InnerVolumeSpecName "kube-api-access-q94vg". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 17 00:45:51.302941 kubelet[2585]: I0117 00:45:51.302621 2585 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/750cca3b-f3be-48de-9f36-1cc8e2858e62-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Jan 17 00:45:51.302941 kubelet[2585]: I0117 00:45:51.302703 2585 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/750cca3b-f3be-48de-9f36-1cc8e2858e62-cni-path\") on node \"localhost\" DevicePath \"\"" Jan 17 00:45:51.302941 kubelet[2585]: I0117 00:45:51.302718 2585 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/750cca3b-f3be-48de-9f36-1cc8e2858e62-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jan 17 00:45:51.302941 kubelet[2585]: I0117 00:45:51.302730 2585 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/750cca3b-f3be-48de-9f36-1cc8e2858e62-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Jan 17 00:45:51.302941 kubelet[2585]: I0117 00:45:51.302741 2585 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/750cca3b-f3be-48de-9f36-1cc8e2858e62-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Jan 17 00:45:51.302941 kubelet[2585]: I0117 00:45:51.302752 2585 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/750cca3b-f3be-48de-9f36-1cc8e2858e62-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Jan 17 00:45:51.302941 kubelet[2585]: I0117 00:45:51.302764 2585 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-q94vg\" (UniqueName: \"kubernetes.io/projected/750cca3b-f3be-48de-9f36-1cc8e2858e62-kube-api-access-q94vg\") on node \"localhost\" DevicePath \"\"" Jan 17 00:45:51.302941 kubelet[2585]: I0117 00:45:51.302775 2585 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/750cca3b-f3be-48de-9f36-1cc8e2858e62-cilium-run\") on node \"localhost\" DevicePath \"\"" Jan 17 00:45:51.303482 kubelet[2585]: I0117 00:45:51.302786 2585 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/750cca3b-f3be-48de-9f36-1cc8e2858e62-hostproc\") on node \"localhost\" DevicePath \"\"" Jan 17 00:45:51.303482 kubelet[2585]: I0117 00:45:51.302796 2585 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/750cca3b-f3be-48de-9f36-1cc8e2858e62-lib-modules\") on node \"localhost\" DevicePath \"\"" Jan 17 00:45:51.303482 kubelet[2585]: I0117 00:45:51.302807 2585 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/750cca3b-f3be-48de-9f36-1cc8e2858e62-hubble-tls\") on node \"localhost\" DevicePath \"\"" Jan 17 00:45:51.303482 kubelet[2585]: I0117 00:45:51.302818 2585 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/750cca3b-f3be-48de-9f36-1cc8e2858e62-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Jan 17 00:45:51.303482 kubelet[2585]: I0117 00:45:51.302828 2585 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/750cca3b-f3be-48de-9f36-1cc8e2858e62-xtables-lock\") on node \"localhost\" 
DevicePath \"\"" Jan 17 00:45:51.378544 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-df5f12176247337cd16f8d0570dee88f211c9b03d42949c4e56a1d51c2f26beb-rootfs.mount: Deactivated successfully. Jan 17 00:45:51.378683 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-74bcea9deac53f372d67f59381eff1ba3ce4c35837e23ac66c5165d3e8df079f-rootfs.mount: Deactivated successfully. Jan 17 00:45:51.378774 systemd[1]: var-lib-kubelet-pods-47237cb4\x2dc2f4\x2d4383\x2db09c\x2d99b5cc5dae91-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2lr4z.mount: Deactivated successfully. Jan 17 00:45:51.378886 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-74bcea9deac53f372d67f59381eff1ba3ce4c35837e23ac66c5165d3e8df079f-shm.mount: Deactivated successfully. Jan 17 00:45:51.378974 systemd[1]: var-lib-kubelet-pods-750cca3b\x2df3be\x2d48de\x2d9f36\x2d1cc8e2858e62-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dq94vg.mount: Deactivated successfully. Jan 17 00:45:51.379064 systemd[1]: var-lib-kubelet-pods-750cca3b\x2df3be\x2d48de\x2d9f36\x2d1cc8e2858e62-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 17 00:45:51.379154 systemd[1]: var-lib-kubelet-pods-750cca3b\x2df3be\x2d48de\x2d9f36\x2d1cc8e2858e62-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 17 00:45:51.627740 kubelet[2585]: E0117 00:45:51.627580 2585 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 17 00:45:51.734780 kubelet[2585]: I0117 00:45:51.734741 2585 scope.go:117] "RemoveContainer" containerID="8572019dfb58b770a7abf5238c97cf5880e7a924f4c9df597905dbc11446802b" Jan 17 00:45:51.743649 containerd[1470]: time="2026-01-17T00:45:51.743416894Z" level=info msg="RemoveContainer for \"8572019dfb58b770a7abf5238c97cf5880e7a924f4c9df597905dbc11446802b\"" Jan 17 00:45:51.754328 containerd[1470]: time="2026-01-17T00:45:51.754072873Z" level=info msg="RemoveContainer for \"8572019dfb58b770a7abf5238c97cf5880e7a924f4c9df597905dbc11446802b\" returns successfully" Jan 17 00:45:51.754708 kubelet[2585]: I0117 00:45:51.754683 2585 scope.go:117] "RemoveContainer" containerID="2f27b31bc6487c1ec7aa8135f597dd436a501873c0d58ea98ff19860615b1ba9" Jan 17 00:45:51.756957 systemd[1]: Removed slice kubepods-besteffort-pod47237cb4_c2f4_4383_b09c_99b5cc5dae91.slice - libcontainer container kubepods-besteffort-pod47237cb4_c2f4_4383_b09c_99b5cc5dae91.slice. Jan 17 00:45:51.757129 systemd[1]: kubepods-besteffort-pod47237cb4_c2f4_4383_b09c_99b5cc5dae91.slice: Consumed 3.943s CPU time. Jan 17 00:45:51.769641 containerd[1470]: time="2026-01-17T00:45:51.769277278Z" level=info msg="RemoveContainer for \"2f27b31bc6487c1ec7aa8135f597dd436a501873c0d58ea98ff19860615b1ba9\"" Jan 17 00:45:51.782330 systemd[1]: Removed slice kubepods-burstable-pod750cca3b_f3be_48de_9f36_1cc8e2858e62.slice - libcontainer container kubepods-burstable-pod750cca3b_f3be_48de_9f36_1cc8e2858e62.slice. Jan 17 00:45:51.783446 systemd[1]: kubepods-burstable-pod750cca3b_f3be_48de_9f36_1cc8e2858e62.slice: Consumed 24.964s CPU time. 
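The long .mount unit names above are systemd-escaped filesystem paths: "-" separates path components and \xHH sequences are hex-escaped bytes (\x2d is "-", \x7e is "~"), so each unit maps back to a directory under /var/lib/kubelet/pods. A simplified re-implementation of the unescaping, roughly what systemd-escape --unescape --path does:

```python
# Simplified re-implementation of systemd's path unescaping for .mount unit
# names: "-" separates path components and "\xHH" encodes escaped bytes
# (e.g. \x2d -> "-", \x7e -> "~").
import re

def unescape_mount_unit(unit: str) -> str:
    name = unit[:-len(".mount")] if unit.endswith(".mount") else unit
    # Split on the component separator first, then undo hex escapes within
    # each component (otherwise unescaped "-" would be treated as "/").
    def unhex(m):
        return chr(int(m.group(1), 16))
    components = [re.sub(r"\\x([0-9a-fA-F]{2})", unhex, part)
                  for part in name.split("-")]
    return "/" + "/".join(components)

print(unescape_mount_unit(
    r"var-lib-kubelet-pods-47237cb4\x2dc2f4\x2d4383\x2db09c\x2d99b5cc5dae91"
    r"-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2lr4z.mount"))
# -> /var/lib/kubelet/pods/47237cb4-c2f4-4383-b09c-99b5cc5dae91/volumes/kubernetes.io~projected/kube-api-access-2lr4z
```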
Jan 17 00:45:51.788668 containerd[1470]: time="2026-01-17T00:45:51.788342280Z" level=info msg="RemoveContainer for \"2f27b31bc6487c1ec7aa8135f597dd436a501873c0d58ea98ff19860615b1ba9\" returns successfully" Jan 17 00:45:51.788773 kubelet[2585]: I0117 00:45:51.788688 2585 scope.go:117] "RemoveContainer" containerID="320e020c86776f11041f34dcd13acc12cbaa5bd29c3295e5a95c18635f61da35" Jan 17 00:45:51.793584 containerd[1470]: time="2026-01-17T00:45:51.792644171Z" level=info msg="RemoveContainer for \"320e020c86776f11041f34dcd13acc12cbaa5bd29c3295e5a95c18635f61da35\"" Jan 17 00:45:51.803305 containerd[1470]: time="2026-01-17T00:45:51.801795224Z" level=info msg="RemoveContainer for \"320e020c86776f11041f34dcd13acc12cbaa5bd29c3295e5a95c18635f61da35\" returns successfully" Jan 17 00:45:51.803607 kubelet[2585]: I0117 00:45:51.802142 2585 scope.go:117] "RemoveContainer" containerID="b10fd0c16f20376b2c4944108110e8d6e1af361949731c423cf5d5cb210de5dd" Jan 17 00:45:51.807369 containerd[1470]: time="2026-01-17T00:45:51.805760177Z" level=info msg="RemoveContainer for \"b10fd0c16f20376b2c4944108110e8d6e1af361949731c423cf5d5cb210de5dd\"" Jan 17 00:45:51.821128 containerd[1470]: time="2026-01-17T00:45:51.820950248Z" level=info msg="RemoveContainer for \"b10fd0c16f20376b2c4944108110e8d6e1af361949731c423cf5d5cb210de5dd\" returns successfully" Jan 17 00:45:51.823057 kubelet[2585]: I0117 00:45:51.822618 2585 scope.go:117] "RemoveContainer" containerID="c90727af08677414c62d9f3030f78ac1f9cb92531d0fad34d20e20163fa5bf03" Jan 17 00:45:51.828310 containerd[1470]: time="2026-01-17T00:45:51.827858178Z" level=info msg="RemoveContainer for \"c90727af08677414c62d9f3030f78ac1f9cb92531d0fad34d20e20163fa5bf03\"" Jan 17 00:45:51.847959 containerd[1470]: time="2026-01-17T00:45:51.847774293Z" level=info msg="RemoveContainer for \"c90727af08677414c62d9f3030f78ac1f9cb92531d0fad34d20e20163fa5bf03\" returns successfully" Jan 17 00:45:51.848445 kubelet[2585]: I0117 00:45:51.848414 2585 scope.go:117] "RemoveContainer" containerID="43a5eebd8659d91d078ebcda0be5615205b701b11656450dc8326147c470142e" Jan 17 00:45:51.859522 containerd[1470]: time="2026-01-17T00:45:51.858392140Z" level=info msg="RemoveContainer for \"43a5eebd8659d91d078ebcda0be5615205b701b11656450dc8326147c470142e\"" Jan 17 00:45:51.868097 containerd[1470]: time="2026-01-17T00:45:51.867904769Z" level=info msg="RemoveContainer for \"43a5eebd8659d91d078ebcda0be5615205b701b11656450dc8326147c470142e\" returns successfully" Jan 17 00:45:51.868587 kubelet[2585]: I0117 00:45:51.868348 2585 scope.go:117] "RemoveContainer" containerID="2f27b31bc6487c1ec7aa8135f597dd436a501873c0d58ea98ff19860615b1ba9" Jan 17 00:45:51.870297 containerd[1470]: time="2026-01-17T00:45:51.869187161Z" level=error msg="ContainerStatus for \"2f27b31bc6487c1ec7aa8135f597dd436a501873c0d58ea98ff19860615b1ba9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2f27b31bc6487c1ec7aa8135f597dd436a501873c0d58ea98ff19860615b1ba9\": not found" Jan 17 00:45:51.870620 kubelet[2585]: E0117 00:45:51.869945 2585 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2f27b31bc6487c1ec7aa8135f597dd436a501873c0d58ea98ff19860615b1ba9\": not found" containerID="2f27b31bc6487c1ec7aa8135f597dd436a501873c0d58ea98ff19860615b1ba9" Jan 17 00:45:51.870620 kubelet[2585]: I0117 00:45:51.870004 2585 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"2f27b31bc6487c1ec7aa8135f597dd436a501873c0d58ea98ff19860615b1ba9"} err="failed to get container status \"2f27b31bc6487c1ec7aa8135f597dd436a501873c0d58ea98ff19860615b1ba9\": rpc error: code = NotFound desc = an error occurred when try to find container \"2f27b31bc6487c1ec7aa8135f597dd436a501873c0d58ea98ff19860615b1ba9\": not found" Jan 17 00:45:51.870620 kubelet[2585]: I0117 00:45:51.870338 2585 scope.go:117] "RemoveContainer" containerID="320e020c86776f11041f34dcd13acc12cbaa5bd29c3295e5a95c18635f61da35" Jan 17 00:45:51.870754 containerd[1470]: time="2026-01-17T00:45:51.870590034Z" level=error msg="ContainerStatus for \"320e020c86776f11041f34dcd13acc12cbaa5bd29c3295e5a95c18635f61da35\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"320e020c86776f11041f34dcd13acc12cbaa5bd29c3295e5a95c18635f61da35\": not found" Jan 17 00:45:51.871289 kubelet[2585]: E0117 00:45:51.870848 2585 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"320e020c86776f11041f34dcd13acc12cbaa5bd29c3295e5a95c18635f61da35\": not found" containerID="320e020c86776f11041f34dcd13acc12cbaa5bd29c3295e5a95c18635f61da35" Jan 17 00:45:51.871289 kubelet[2585]: I0117 00:45:51.870917 2585 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"320e020c86776f11041f34dcd13acc12cbaa5bd29c3295e5a95c18635f61da35"} err="failed to get container status \"320e020c86776f11041f34dcd13acc12cbaa5bd29c3295e5a95c18635f61da35\": rpc error: code = NotFound desc = an error occurred when try to find container \"320e020c86776f11041f34dcd13acc12cbaa5bd29c3295e5a95c18635f61da35\": not found" Jan 17 00:45:51.871289 kubelet[2585]: I0117 00:45:51.870944 2585 scope.go:117] "RemoveContainer" containerID="b10fd0c16f20376b2c4944108110e8d6e1af361949731c423cf5d5cb210de5dd" Jan 17 00:45:51.871839 containerd[1470]: time="2026-01-17T00:45:51.871610326Z" level=error msg="ContainerStatus for \"b10fd0c16f20376b2c4944108110e8d6e1af361949731c423cf5d5cb210de5dd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b10fd0c16f20376b2c4944108110e8d6e1af361949731c423cf5d5cb210de5dd\": not found" Jan 17 00:45:51.872650 kubelet[2585]: E0117 00:45:51.872602 2585 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b10fd0c16f20376b2c4944108110e8d6e1af361949731c423cf5d5cb210de5dd\": not found" containerID="b10fd0c16f20376b2c4944108110e8d6e1af361949731c423cf5d5cb210de5dd" Jan 17 00:45:51.872650 kubelet[2585]: I0117 00:45:51.872636 2585 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b10fd0c16f20376b2c4944108110e8d6e1af361949731c423cf5d5cb210de5dd"} err="failed to get container status \"b10fd0c16f20376b2c4944108110e8d6e1af361949731c423cf5d5cb210de5dd\": rpc error: code = NotFound desc = an error occurred when try to find container \"b10fd0c16f20376b2c4944108110e8d6e1af361949731c423cf5d5cb210de5dd\": not found" Jan 17 00:45:51.872757 kubelet[2585]: I0117 00:45:51.872658 2585 scope.go:117] "RemoveContainer" containerID="c90727af08677414c62d9f3030f78ac1f9cb92531d0fad34d20e20163fa5bf03" Jan 17 00:45:51.873182 containerd[1470]: time="2026-01-17T00:45:51.872895141Z" level=error msg="ContainerStatus for \"c90727af08677414c62d9f3030f78ac1f9cb92531d0fad34d20e20163fa5bf03\" failed" error="rpc 
error: code = NotFound desc = an error occurred when try to find container \"c90727af08677414c62d9f3030f78ac1f9cb92531d0fad34d20e20163fa5bf03\": not found" Jan 17 00:45:51.873362 kubelet[2585]: E0117 00:45:51.873178 2585 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c90727af08677414c62d9f3030f78ac1f9cb92531d0fad34d20e20163fa5bf03\": not found" containerID="c90727af08677414c62d9f3030f78ac1f9cb92531d0fad34d20e20163fa5bf03" Jan 17 00:45:51.873362 kubelet[2585]: I0117 00:45:51.873301 2585 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c90727af08677414c62d9f3030f78ac1f9cb92531d0fad34d20e20163fa5bf03"} err="failed to get container status \"c90727af08677414c62d9f3030f78ac1f9cb92531d0fad34d20e20163fa5bf03\": rpc error: code = NotFound desc = an error occurred when try to find container \"c90727af08677414c62d9f3030f78ac1f9cb92531d0fad34d20e20163fa5bf03\": not found" Jan 17 00:45:51.873362 kubelet[2585]: I0117 00:45:51.873326 2585 scope.go:117] "RemoveContainer" containerID="43a5eebd8659d91d078ebcda0be5615205b701b11656450dc8326147c470142e" Jan 17 00:45:51.873738 containerd[1470]: time="2026-01-17T00:45:51.873555335Z" level=error msg="ContainerStatus for \"43a5eebd8659d91d078ebcda0be5615205b701b11656450dc8326147c470142e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"43a5eebd8659d91d078ebcda0be5615205b701b11656450dc8326147c470142e\": not found" Jan 17 00:45:51.873990 kubelet[2585]: E0117 00:45:51.873885 2585 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"43a5eebd8659d91d078ebcda0be5615205b701b11656450dc8326147c470142e\": not found" containerID="43a5eebd8659d91d078ebcda0be5615205b701b11656450dc8326147c470142e" Jan 17 00:45:51.873990 kubelet[2585]: I0117 00:45:51.873945 2585 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"43a5eebd8659d91d078ebcda0be5615205b701b11656450dc8326147c470142e"} err="failed to get container status \"43a5eebd8659d91d078ebcda0be5615205b701b11656450dc8326147c470142e\": rpc error: code = NotFound desc = an error occurred when try to find container \"43a5eebd8659d91d078ebcda0be5615205b701b11656450dc8326147c470142e\": not found" Jan 17 00:45:52.090649 sshd[4820]: pam_unix(sshd:session): session closed for user core Jan 17 00:45:52.123416 systemd[1]: sshd@43-10.0.0.107:22-10.0.0.1:41454.service: Deactivated successfully. Jan 17 00:45:52.131827 systemd[1]: session-44.scope: Deactivated successfully. Jan 17 00:45:52.132324 systemd[1]: session-44.scope: Consumed 1.437s CPU time. Jan 17 00:45:52.147760 systemd-logind[1452]: Session 44 logged out. Waiting for processes to exit. Jan 17 00:45:52.208985 systemd[1]: Started sshd@44-10.0.0.107:22-10.0.0.1:41468.service - OpenSSH per-connection server daemon (10.0.0.1:41468). Jan 17 00:45:52.217483 systemd-logind[1452]: Removed session 44. Jan 17 00:45:52.358719 sshd[4979]: Accepted publickey for core from 10.0.0.1 port 41468 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:45:52.366688 sshd[4979]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:45:52.390153 systemd-logind[1452]: New session 45 of user core. Jan 17 00:45:52.418681 systemd[1]: Started session-45.scope - Session 45 of User core. 
Jan 17 00:45:52.491328 kubelet[2585]: I0117 00:45:52.491168 2585 setters.go:618] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-17T00:45:52Z","lastTransitionTime":"2026-01-17T00:45:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jan 17 00:45:52.763172 kubelet[2585]: I0117 00:45:52.760538 2585 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47237cb4-c2f4-4383-b09c-99b5cc5dae91" path="/var/lib/kubelet/pods/47237cb4-c2f4-4383-b09c-99b5cc5dae91/volumes" Jan 17 00:45:52.763172 kubelet[2585]: I0117 00:45:52.761622 2585 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="750cca3b-f3be-48de-9f36-1cc8e2858e62" path="/var/lib/kubelet/pods/750cca3b-f3be-48de-9f36-1cc8e2858e62/volumes" Jan 17 00:45:53.737874 sshd[4979]: pam_unix(sshd:session): session closed for user core Jan 17 00:45:53.790739 systemd[1]: sshd@44-10.0.0.107:22-10.0.0.1:41468.service: Deactivated successfully. Jan 17 00:45:53.801136 systemd[1]: session-45.scope: Deactivated successfully. Jan 17 00:45:53.817430 systemd-logind[1452]: Session 45 logged out. Waiting for processes to exit. Jan 17 00:45:53.840940 systemd[1]: Started sshd@45-10.0.0.107:22-10.0.0.1:53362.service - OpenSSH per-connection server daemon (10.0.0.1:53362). Jan 17 00:45:53.854025 systemd-logind[1452]: Removed session 45. Jan 17 00:45:53.995080 systemd[1]: Created slice kubepods-burstable-pod271e9928_9d2d_4c87_b8a8_489aaf395426.slice - libcontainer container kubepods-burstable-pod271e9928_9d2d_4c87_b8a8_489aaf395426.slice. Jan 17 00:45:54.035655 sshd[4993]: Accepted publickey for core from 10.0.0.1 port 53362 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:45:54.042971 sshd[4993]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:45:54.058635 systemd-logind[1452]: New session 46 of user core. Jan 17 00:45:54.083158 systemd[1]: Started session-46.scope - Session 46 of User core. 
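Once both pods' volumes are unmounted, kubelet's orphaned-volume sweep (kubelet_volumes.go above) removes the leftover per-pod volumes directories under /var/lib/kubelet/pods/<uid>/volumes. A hypothetical helper that lists what such a sweep would look at (the function and output format are illustrative, not kubelet's code):

```python
# Hypothetical helper: report per-pod volume directories left under
# /var/lib/kubelet/pods, the same trees kubelet's orphaned-volume cleanup
# removes once a deleted pod's volumes are unmounted.
import os

PODS_DIR = "/var/lib/kubelet/pods"

def leftover_volume_dirs(pods_dir=PODS_DIR):
    leftovers = {}
    for pod_uid in sorted(os.listdir(pods_dir)):
        volumes_dir = os.path.join(pods_dir, pod_uid, "volumes")
        if os.path.isdir(volumes_dir):
            # Each entry is a plugin dir such as kubernetes.io~projected.
            leftovers[pod_uid] = sorted(os.listdir(volumes_dir))
    return leftovers

if __name__ == "__main__":
    for uid, plugins in leftover_volume_dirs().items():
        print(uid, plugins)
```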
Jan 17 00:45:54.097041 kubelet[2585]: I0117 00:45:54.096463 2585 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/271e9928-9d2d-4c87-b8a8-489aaf395426-etc-cni-netd\") pod \"cilium-ttkgk\" (UID: \"271e9928-9d2d-4c87-b8a8-489aaf395426\") " pod="kube-system/cilium-ttkgk" Jan 17 00:45:54.097041 kubelet[2585]: I0117 00:45:54.096514 2585 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/271e9928-9d2d-4c87-b8a8-489aaf395426-clustermesh-secrets\") pod \"cilium-ttkgk\" (UID: \"271e9928-9d2d-4c87-b8a8-489aaf395426\") " pod="kube-system/cilium-ttkgk" Jan 17 00:45:54.097041 kubelet[2585]: I0117 00:45:54.096536 2585 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/271e9928-9d2d-4c87-b8a8-489aaf395426-cilium-ipsec-secrets\") pod \"cilium-ttkgk\" (UID: \"271e9928-9d2d-4c87-b8a8-489aaf395426\") " pod="kube-system/cilium-ttkgk" Jan 17 00:45:54.097041 kubelet[2585]: I0117 00:45:54.096561 2585 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/271e9928-9d2d-4c87-b8a8-489aaf395426-hubble-tls\") pod \"cilium-ttkgk\" (UID: \"271e9928-9d2d-4c87-b8a8-489aaf395426\") " pod="kube-system/cilium-ttkgk" Jan 17 00:45:54.097041 kubelet[2585]: I0117 00:45:54.096592 2585 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/271e9928-9d2d-4c87-b8a8-489aaf395426-lib-modules\") pod \"cilium-ttkgk\" (UID: \"271e9928-9d2d-4c87-b8a8-489aaf395426\") " pod="kube-system/cilium-ttkgk" Jan 17 00:45:54.097041 kubelet[2585]: I0117 00:45:54.096614 2585 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/271e9928-9d2d-4c87-b8a8-489aaf395426-host-proc-sys-net\") pod \"cilium-ttkgk\" (UID: \"271e9928-9d2d-4c87-b8a8-489aaf395426\") " pod="kube-system/cilium-ttkgk" Jan 17 00:45:54.099540 kubelet[2585]: I0117 00:45:54.096639 2585 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/271e9928-9d2d-4c87-b8a8-489aaf395426-cilium-run\") pod \"cilium-ttkgk\" (UID: \"271e9928-9d2d-4c87-b8a8-489aaf395426\") " pod="kube-system/cilium-ttkgk" Jan 17 00:45:54.099540 kubelet[2585]: I0117 00:45:54.096664 2585 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/271e9928-9d2d-4c87-b8a8-489aaf395426-bpf-maps\") pod \"cilium-ttkgk\" (UID: \"271e9928-9d2d-4c87-b8a8-489aaf395426\") " pod="kube-system/cilium-ttkgk" Jan 17 00:45:54.099540 kubelet[2585]: I0117 00:45:54.096691 2585 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/271e9928-9d2d-4c87-b8a8-489aaf395426-cilium-cgroup\") pod \"cilium-ttkgk\" (UID: \"271e9928-9d2d-4c87-b8a8-489aaf395426\") " pod="kube-system/cilium-ttkgk" Jan 17 00:45:54.099540 kubelet[2585]: I0117 00:45:54.096714 2585 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/271e9928-9d2d-4c87-b8a8-489aaf395426-cni-path\") pod \"cilium-ttkgk\" (UID: \"271e9928-9d2d-4c87-b8a8-489aaf395426\") " pod="kube-system/cilium-ttkgk" Jan 17 00:45:54.099540 kubelet[2585]: I0117 00:45:54.096738 2585 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/271e9928-9d2d-4c87-b8a8-489aaf395426-cilium-config-path\") pod \"cilium-ttkgk\" (UID: \"271e9928-9d2d-4c87-b8a8-489aaf395426\") " pod="kube-system/cilium-ttkgk" Jan 17 00:45:54.099540 kubelet[2585]: I0117 00:45:54.096760 2585 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/271e9928-9d2d-4c87-b8a8-489aaf395426-host-proc-sys-kernel\") pod \"cilium-ttkgk\" (UID: \"271e9928-9d2d-4c87-b8a8-489aaf395426\") " pod="kube-system/cilium-ttkgk" Jan 17 00:45:54.099744 kubelet[2585]: I0117 00:45:54.096783 2585 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/271e9928-9d2d-4c87-b8a8-489aaf395426-hostproc\") pod \"cilium-ttkgk\" (UID: \"271e9928-9d2d-4c87-b8a8-489aaf395426\") " pod="kube-system/cilium-ttkgk" Jan 17 00:45:54.099744 kubelet[2585]: I0117 00:45:54.096804 2585 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/271e9928-9d2d-4c87-b8a8-489aaf395426-xtables-lock\") pod \"cilium-ttkgk\" (UID: \"271e9928-9d2d-4c87-b8a8-489aaf395426\") " pod="kube-system/cilium-ttkgk" Jan 17 00:45:54.099744 kubelet[2585]: I0117 00:45:54.096826 2585 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjqjp\" (UniqueName: \"kubernetes.io/projected/271e9928-9d2d-4c87-b8a8-489aaf395426-kube-api-access-gjqjp\") pod \"cilium-ttkgk\" (UID: \"271e9928-9d2d-4c87-b8a8-489aaf395426\") " pod="kube-system/cilium-ttkgk" Jan 17 00:45:54.193761 sshd[4993]: pam_unix(sshd:session): session closed for user core Jan 17 00:45:54.260366 systemd[1]: sshd@45-10.0.0.107:22-10.0.0.1:53362.service: Deactivated successfully. Jan 17 00:45:54.266568 systemd[1]: session-46.scope: Deactivated successfully. Jan 17 00:45:54.270778 systemd-logind[1452]: Session 46 logged out. Waiting for processes to exit. Jan 17 00:45:54.302405 systemd[1]: Started sshd@46-10.0.0.107:22-10.0.0.1:53378.service - OpenSSH per-connection server daemon (10.0.0.1:53378). Jan 17 00:45:54.304716 systemd-logind[1452]: Removed session 46. Jan 17 00:45:54.324767 kubelet[2585]: E0117 00:45:54.324585 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:45:54.326766 containerd[1470]: time="2026-01-17T00:45:54.325705592Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ttkgk,Uid:271e9928-9d2d-4c87-b8a8-489aaf395426,Namespace:kube-system,Attempt:0,}" Jan 17 00:45:54.385692 sshd[5005]: Accepted publickey for core from 10.0.0.1 port 53378 ssh2: RSA SHA256:aqL86C7IG2RbKjodNz3kKVVy8CSbXTNlNXzHbRHMI/0 Jan 17 00:45:54.400430 sshd[5005]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:45:54.409368 systemd-logind[1452]: New session 47 of user core. Jan 17 00:45:54.437285 systemd[1]: Started session-47.scope - Session 47 of User core. 
Jan 17 00:45:54.445410 containerd[1470]: time="2026-01-17T00:45:54.444013889Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:45:54.445410 containerd[1470]: time="2026-01-17T00:45:54.444142680Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:45:54.445948 containerd[1470]: time="2026-01-17T00:45:54.445617319Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:45:54.449728 containerd[1470]: time="2026-01-17T00:45:54.448369711Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:45:54.522919 systemd[1]: Started cri-containerd-24f032371f59fdf3992bb47a5fa5e05c3f4bf58abc08e4d731f68332c43f39a2.scope - libcontainer container 24f032371f59fdf3992bb47a5fa5e05c3f4bf58abc08e4d731f68332c43f39a2. Jan 17 00:45:54.613368 containerd[1470]: time="2026-01-17T00:45:54.613316855Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ttkgk,Uid:271e9928-9d2d-4c87-b8a8-489aaf395426,Namespace:kube-system,Attempt:0,} returns sandbox id \"24f032371f59fdf3992bb47a5fa5e05c3f4bf58abc08e4d731f68332c43f39a2\"" Jan 17 00:45:54.618617 kubelet[2585]: E0117 00:45:54.617466 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:45:54.652583 containerd[1470]: time="2026-01-17T00:45:54.652530229Z" level=info msg="CreateContainer within sandbox \"24f032371f59fdf3992bb47a5fa5e05c3f4bf58abc08e4d731f68332c43f39a2\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 17 00:45:54.760920 containerd[1470]: time="2026-01-17T00:45:54.760849881Z" level=info msg="CreateContainer within sandbox \"24f032371f59fdf3992bb47a5fa5e05c3f4bf58abc08e4d731f68332c43f39a2\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"97f4ca5f01f2b2e26fc7137d99bdcf1501cd386176208006287cef7310aed148\"" Jan 17 00:45:54.764323 containerd[1470]: time="2026-01-17T00:45:54.762115090Z" level=info msg="StartContainer for \"97f4ca5f01f2b2e26fc7137d99bdcf1501cd386176208006287cef7310aed148\"" Jan 17 00:45:54.900703 systemd[1]: Started cri-containerd-97f4ca5f01f2b2e26fc7137d99bdcf1501cd386176208006287cef7310aed148.scope - libcontainer container 97f4ca5f01f2b2e26fc7137d99bdcf1501cd386176208006287cef7310aed148. Jan 17 00:45:55.019630 containerd[1470]: time="2026-01-17T00:45:55.019554569Z" level=info msg="StartContainer for \"97f4ca5f01f2b2e26fc7137d99bdcf1501cd386176208006287cef7310aed148\" returns successfully" Jan 17 00:45:55.066337 systemd[1]: cri-containerd-97f4ca5f01f2b2e26fc7137d99bdcf1501cd386176208006287cef7310aed148.scope: Deactivated successfully. 
Jan 17 00:45:55.197718 containerd[1470]: time="2026-01-17T00:45:55.197351742Z" level=info msg="shim disconnected" id=97f4ca5f01f2b2e26fc7137d99bdcf1501cd386176208006287cef7310aed148 namespace=k8s.io Jan 17 00:45:55.197718 containerd[1470]: time="2026-01-17T00:45:55.197441961Z" level=warning msg="cleaning up after shim disconnected" id=97f4ca5f01f2b2e26fc7137d99bdcf1501cd386176208006287cef7310aed148 namespace=k8s.io Jan 17 00:45:55.197718 containerd[1470]: time="2026-01-17T00:45:55.197462108Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:45:55.836385 kubelet[2585]: E0117 00:45:55.835140 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:45:55.864576 containerd[1470]: time="2026-01-17T00:45:55.863560165Z" level=info msg="CreateContainer within sandbox \"24f032371f59fdf3992bb47a5fa5e05c3f4bf58abc08e4d731f68332c43f39a2\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 17 00:45:55.897827 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount844555058.mount: Deactivated successfully. Jan 17 00:45:55.934836 containerd[1470]: time="2026-01-17T00:45:55.931959160Z" level=info msg="CreateContainer within sandbox \"24f032371f59fdf3992bb47a5fa5e05c3f4bf58abc08e4d731f68332c43f39a2\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"f8b3c77de31526875da2493fd8b4a72aad6e3329d1d759998ad18d730106e030\"" Jan 17 00:45:55.936850 containerd[1470]: time="2026-01-17T00:45:55.935783398Z" level=info msg="StartContainer for \"f8b3c77de31526875da2493fd8b4a72aad6e3329d1d759998ad18d730106e030\"" Jan 17 00:45:56.028533 systemd[1]: Started cri-containerd-f8b3c77de31526875da2493fd8b4a72aad6e3329d1d759998ad18d730106e030.scope - libcontainer container f8b3c77de31526875da2493fd8b4a72aad6e3329d1d759998ad18d730106e030. Jan 17 00:45:56.117772 containerd[1470]: time="2026-01-17T00:45:56.115176636Z" level=info msg="StartContainer for \"f8b3c77de31526875da2493fd8b4a72aad6e3329d1d759998ad18d730106e030\" returns successfully" Jan 17 00:45:56.134678 systemd[1]: cri-containerd-f8b3c77de31526875da2493fd8b4a72aad6e3329d1d759998ad18d730106e030.scope: Deactivated successfully. Jan 17 00:45:56.239969 containerd[1470]: time="2026-01-17T00:45:56.239805744Z" level=info msg="shim disconnected" id=f8b3c77de31526875da2493fd8b4a72aad6e3329d1d759998ad18d730106e030 namespace=k8s.io Jan 17 00:45:56.239969 containerd[1470]: time="2026-01-17T00:45:56.239931688Z" level=warning msg="cleaning up after shim disconnected" id=f8b3c77de31526875da2493fd8b4a72aad6e3329d1d759998ad18d730106e030 namespace=k8s.io Jan 17 00:45:56.239969 containerd[1470]: time="2026-01-17T00:45:56.239949592Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:45:56.252500 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f8b3c77de31526875da2493fd8b4a72aad6e3329d1d759998ad18d730106e030-rootfs.mount: Deactivated successfully. 
Jan 17 00:45:56.638389 kubelet[2585]: E0117 00:45:56.634948 2585 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 17 00:45:56.856329 kubelet[2585]: E0117 00:45:56.852543 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:45:56.916156 containerd[1470]: time="2026-01-17T00:45:56.910373486Z" level=info msg="CreateContainer within sandbox \"24f032371f59fdf3992bb47a5fa5e05c3f4bf58abc08e4d731f68332c43f39a2\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 17 00:45:57.042870 containerd[1470]: time="2026-01-17T00:45:57.042648976Z" level=info msg="CreateContainer within sandbox \"24f032371f59fdf3992bb47a5fa5e05c3f4bf58abc08e4d731f68332c43f39a2\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"c4e01da052b7e253a35fa47d8c1d1dbc7fad47cf2b2ba1327e5e9730f550b282\"" Jan 17 00:45:57.052364 containerd[1470]: time="2026-01-17T00:45:57.049438565Z" level=info msg="StartContainer for \"c4e01da052b7e253a35fa47d8c1d1dbc7fad47cf2b2ba1327e5e9730f550b282\"" Jan 17 00:45:57.208703 systemd[1]: Started cri-containerd-c4e01da052b7e253a35fa47d8c1d1dbc7fad47cf2b2ba1327e5e9730f550b282.scope - libcontainer container c4e01da052b7e253a35fa47d8c1d1dbc7fad47cf2b2ba1327e5e9730f550b282. Jan 17 00:45:57.320793 containerd[1470]: time="2026-01-17T00:45:57.320032555Z" level=info msg="StartContainer for \"c4e01da052b7e253a35fa47d8c1d1dbc7fad47cf2b2ba1327e5e9730f550b282\" returns successfully" Jan 17 00:45:57.360736 systemd[1]: cri-containerd-c4e01da052b7e253a35fa47d8c1d1dbc7fad47cf2b2ba1327e5e9730f550b282.scope: Deactivated successfully. Jan 17 00:45:57.450536 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c4e01da052b7e253a35fa47d8c1d1dbc7fad47cf2b2ba1327e5e9730f550b282-rootfs.mount: Deactivated successfully. 
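The mount-bpf-fs init container started above exists to ensure a BPF filesystem is mounted for the bpf-maps host-path volume, conventionally at /sys/fs/bpf. A hedged way to confirm that from the host (assumes the conventional mount point and the standard /proc/mounts format):

```python
# Illustrative check: confirm a "bpf" filesystem is mounted, which is what the
# mount-bpf-fs init container above ensures before cilium-agent starts.
def bpf_fs_mounted(mounts_file="/proc/mounts"):
    with open(mounts_file) as f:
        for line in f:
            fields = line.split()
            # /proc/mounts fields: device, mount point, fs type, options, ...
            if len(fields) >= 3 and fields[2] == "bpf":
                return fields[1]
    return None

if __name__ == "__main__":
    mount_point = bpf_fs_mounted()
    if mount_point:
        print("bpffs mounted at", mount_point)
    else:
        print("bpffs not mounted")
```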
Jan 17 00:45:57.472430 containerd[1470]: time="2026-01-17T00:45:57.472164322Z" level=info msg="shim disconnected" id=c4e01da052b7e253a35fa47d8c1d1dbc7fad47cf2b2ba1327e5e9730f550b282 namespace=k8s.io Jan 17 00:45:57.472430 containerd[1470]: time="2026-01-17T00:45:57.472341033Z" level=warning msg="cleaning up after shim disconnected" id=c4e01da052b7e253a35fa47d8c1d1dbc7fad47cf2b2ba1327e5e9730f550b282 namespace=k8s.io Jan 17 00:45:57.472430 containerd[1470]: time="2026-01-17T00:45:57.472354689Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:45:57.888543 kubelet[2585]: E0117 00:45:57.887695 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 00:45:57.909139 containerd[1470]: time="2026-01-17T00:45:57.909092848Z" level=info msg="CreateContainer within sandbox \"24f032371f59fdf3992bb47a5fa5e05c3f4bf58abc08e4d731f68332c43f39a2\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 17 00:45:58.007688 containerd[1470]: time="2026-01-17T00:45:58.007399945Z" level=info msg="CreateContainer within sandbox \"24f032371f59fdf3992bb47a5fa5e05c3f4bf58abc08e4d731f68332c43f39a2\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"57b56200a6c6dcd7e845118cd8155b3387a38897a32ff35a6cbc109b7321487d\"" Jan 17 00:45:58.017973 containerd[1470]: time="2026-01-17T00:45:58.016738297Z" level=info msg="StartContainer for \"57b56200a6c6dcd7e845118cd8155b3387a38897a32ff35a6cbc109b7321487d\"" Jan 17 00:45:58.261180 systemd[1]: Started cri-containerd-57b56200a6c6dcd7e845118cd8155b3387a38897a32ff35a6cbc109b7321487d.scope - libcontainer container 57b56200a6c6dcd7e845118cd8155b3387a38897a32ff35a6cbc109b7321487d. Jan 17 00:45:58.448784 systemd[1]: cri-containerd-57b56200a6c6dcd7e845118cd8155b3387a38897a32ff35a6cbc109b7321487d.scope: Deactivated successfully. Jan 17 00:45:58.481563 containerd[1470]: time="2026-01-17T00:45:58.481128166Z" level=info msg="StartContainer for \"57b56200a6c6dcd7e845118cd8155b3387a38897a32ff35a6cbc109b7321487d\" returns successfully" Jan 17 00:45:58.600049 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-57b56200a6c6dcd7e845118cd8155b3387a38897a32ff35a6cbc109b7321487d-rootfs.mount: Deactivated successfully. 
Jan 17 00:45:58.633147 containerd[1470]: time="2026-01-17T00:45:58.632841895Z" level=info msg="shim disconnected" id=57b56200a6c6dcd7e845118cd8155b3387a38897a32ff35a6cbc109b7321487d namespace=k8s.io
Jan 17 00:45:58.633147 containerd[1470]: time="2026-01-17T00:45:58.632915380Z" level=warning msg="cleaning up after shim disconnected" id=57b56200a6c6dcd7e845118cd8155b3387a38897a32ff35a6cbc109b7321487d namespace=k8s.io
Jan 17 00:45:58.633147 containerd[1470]: time="2026-01-17T00:45:58.632927393Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 00:45:58.677790 containerd[1470]: time="2026-01-17T00:45:58.677738926Z" level=warning msg="cleanup warnings time=\"2026-01-17T00:45:58Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jan 17 00:45:58.914173 kubelet[2585]: E0117 00:45:58.909662 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 17 00:45:58.935800 containerd[1470]: time="2026-01-17T00:45:58.932911008Z" level=info msg="CreateContainer within sandbox \"24f032371f59fdf3992bb47a5fa5e05c3f4bf58abc08e4d731f68332c43f39a2\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 17 00:45:59.083389 containerd[1470]: time="2026-01-17T00:45:59.080436721Z" level=info msg="CreateContainer within sandbox \"24f032371f59fdf3992bb47a5fa5e05c3f4bf58abc08e4d731f68332c43f39a2\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"50e1ecfaa18c3660ab7290511010ddb37435dcda4972a4d32b933ab3c60f6e29\""
Jan 17 00:45:59.083389 containerd[1470]: time="2026-01-17T00:45:59.081856826Z" level=info msg="StartContainer for \"50e1ecfaa18c3660ab7290511010ddb37435dcda4972a4d32b933ab3c60f6e29\""
Jan 17 00:45:59.194151 systemd[1]: Started cri-containerd-50e1ecfaa18c3660ab7290511010ddb37435dcda4972a4d32b933ab3c60f6e29.scope - libcontainer container 50e1ecfaa18c3660ab7290511010ddb37435dcda4972a4d32b933ab3c60f6e29.
Jan 17 00:45:59.397392 containerd[1470]: time="2026-01-17T00:45:59.383649112Z" level=info msg="StartContainer for \"50e1ecfaa18c3660ab7290511010ddb37435dcda4972a4d32b933ab3c60f6e29\" returns successfully"
Jan 17 00:45:59.928692 kubelet[2585]: E0117 00:45:59.927980 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 17 00:46:00.036799 kubelet[2585]: I0117 00:46:00.033961 2585 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-ttkgk" podStartSLOduration=7.033863361 podStartE2EDuration="7.033863361s" podCreationTimestamp="2026-01-17 00:45:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:46:00.033147325 +0000 UTC m=+353.786960860" watchObservedRunningTime="2026-01-17 00:46:00.033863361 +0000 UTC m=+353.787676894"
Jan 17 00:46:00.943595 kubelet[2585]: E0117 00:46:00.942873 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 17 00:46:01.254142 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Jan 17 00:46:01.953487 kubelet[2585]: E0117 00:46:01.948869 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 17 00:46:04.738145 kubelet[2585]: E0117 00:46:04.736660 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 17 00:46:06.770254 containerd[1470]: time="2026-01-17T00:46:06.769820131Z" level=info msg="StopPodSandbox for \"74bcea9deac53f372d67f59381eff1ba3ce4c35837e23ac66c5165d3e8df079f\""
Jan 17 00:46:06.770254 containerd[1470]: time="2026-01-17T00:46:06.769928143Z" level=info msg="TearDown network for sandbox \"74bcea9deac53f372d67f59381eff1ba3ce4c35837e23ac66c5165d3e8df079f\" successfully"
Jan 17 00:46:06.770254 containerd[1470]: time="2026-01-17T00:46:06.769941869Z" level=info msg="StopPodSandbox for \"74bcea9deac53f372d67f59381eff1ba3ce4c35837e23ac66c5165d3e8df079f\" returns successfully"
Jan 17 00:46:06.773960 containerd[1470]: time="2026-01-17T00:46:06.771826672Z" level=info msg="RemovePodSandbox for \"74bcea9deac53f372d67f59381eff1ba3ce4c35837e23ac66c5165d3e8df079f\""
Jan 17 00:46:06.773960 containerd[1470]: time="2026-01-17T00:46:06.771859854Z" level=info msg="Forcibly stopping sandbox \"74bcea9deac53f372d67f59381eff1ba3ce4c35837e23ac66c5165d3e8df079f\""
Jan 17 00:46:06.773960 containerd[1470]: time="2026-01-17T00:46:06.772006247Z" level=info msg="TearDown network for sandbox \"74bcea9deac53f372d67f59381eff1ba3ce4c35837e23ac66c5165d3e8df079f\" successfully"
Jan 17 00:46:06.785568 containerd[1470]: time="2026-01-17T00:46:06.785506739Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"74bcea9deac53f372d67f59381eff1ba3ce4c35837e23ac66c5165d3e8df079f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 17 00:46:06.785929 containerd[1470]: time="2026-01-17T00:46:06.785851613Z" level=info msg="RemovePodSandbox \"74bcea9deac53f372d67f59381eff1ba3ce4c35837e23ac66c5165d3e8df079f\" returns successfully"
Jan 17 00:46:06.789466 containerd[1470]: time="2026-01-17T00:46:06.786634351Z" level=info msg="StopPodSandbox for \"df5f12176247337cd16f8d0570dee88f211c9b03d42949c4e56a1d51c2f26beb\""
Jan 17 00:46:06.789466 containerd[1470]: time="2026-01-17T00:46:06.786719741Z" level=info msg="TearDown network for sandbox \"df5f12176247337cd16f8d0570dee88f211c9b03d42949c4e56a1d51c2f26beb\" successfully"
Jan 17 00:46:06.789466 containerd[1470]: time="2026-01-17T00:46:06.786733086Z" level=info msg="StopPodSandbox for \"df5f12176247337cd16f8d0570dee88f211c9b03d42949c4e56a1d51c2f26beb\" returns successfully"
Jan 17 00:46:06.789466 containerd[1470]: time="2026-01-17T00:46:06.787303819Z" level=info msg="RemovePodSandbox for \"df5f12176247337cd16f8d0570dee88f211c9b03d42949c4e56a1d51c2f26beb\""
Jan 17 00:46:06.789466 containerd[1470]: time="2026-01-17T00:46:06.787328225Z" level=info msg="Forcibly stopping sandbox \"df5f12176247337cd16f8d0570dee88f211c9b03d42949c4e56a1d51c2f26beb\""
Jan 17 00:46:06.789466 containerd[1470]: time="2026-01-17T00:46:06.787399267Z" level=info msg="TearDown network for sandbox \"df5f12176247337cd16f8d0570dee88f211c9b03d42949c4e56a1d51c2f26beb\" successfully"
Jan 17 00:46:06.806942 containerd[1470]: time="2026-01-17T00:46:06.806532603Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"df5f12176247337cd16f8d0570dee88f211c9b03d42949c4e56a1d51c2f26beb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 17 00:46:06.806942 containerd[1470]: time="2026-01-17T00:46:06.806638500Z" level=info msg="RemovePodSandbox \"df5f12176247337cd16f8d0570dee88f211c9b03d42949c4e56a1d51c2f26beb\" returns successfully"
Jan 17 00:46:09.822993 systemd-networkd[1391]: lxc_health: Link UP
Jan 17 00:46:09.888596 systemd-networkd[1391]: lxc_health: Gained carrier
Jan 17 00:46:10.357613 kubelet[2585]: E0117 00:46:10.354096 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 17 00:46:11.065835 kubelet[2585]: E0117 00:46:11.060169 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 17 00:46:11.449467 systemd-networkd[1391]: lxc_health: Gained IPv6LL
Jan 17 00:46:12.065525 kubelet[2585]: E0117 00:46:12.065405 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 17 00:46:15.898785 sshd[5005]: pam_unix(sshd:session): session closed for user core
Jan 17 00:46:15.910806 systemd[1]: sshd@46-10.0.0.107:22-10.0.0.1:53378.service: Deactivated successfully.
Jan 17 00:46:15.920897 systemd[1]: session-47.scope: Deactivated successfully.
Jan 17 00:46:15.934751 systemd-logind[1452]: Session 47 logged out. Waiting for processes to exit.
Jan 17 00:46:15.941054 systemd-logind[1452]: Removed session 47.