Jan 17 01:06:13.045778 kernel: Linux version 6.6.119-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Jan 16 22:25:55 -00 2026 Jan 17 01:06:13.045819 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=5950c0a3c50d11b7bc07a3e3bf06049ed0b5a605b5e0b52a981b78f1c63eeedd Jan 17 01:06:13.045833 kernel: BIOS-provided physical RAM map: Jan 17 01:06:13.045849 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Jan 17 01:06:13.045859 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Jan 17 01:06:13.045868 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jan 17 01:06:13.045879 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable Jan 17 01:06:13.045889 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved Jan 17 01:06:13.045899 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Jan 17 01:06:13.045908 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Jan 17 01:06:13.045918 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jan 17 01:06:13.045928 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jan 17 01:06:13.045943 kernel: NX (Execute Disable) protection: active Jan 17 01:06:13.045953 kernel: APIC: Static calls initialized Jan 17 01:06:13.045964 kernel: SMBIOS 2.8 present. Jan 17 01:06:13.045976 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014 Jan 17 01:06:13.045986 kernel: Hypervisor detected: KVM Jan 17 01:06:13.046001 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 17 01:06:13.046012 kernel: kvm-clock: using sched offset of 4373197650 cycles Jan 17 01:06:13.046024 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 17 01:06:13.046035 kernel: tsc: Detected 2799.998 MHz processor Jan 17 01:06:13.046046 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 17 01:06:13.046057 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 17 01:06:13.046067 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000 Jan 17 01:06:13.046078 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Jan 17 01:06:13.046089 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 17 01:06:13.046104 kernel: Using GB pages for direct mapping Jan 17 01:06:13.046115 kernel: ACPI: Early table checksum verification disabled Jan 17 01:06:13.046126 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS ) Jan 17 01:06:13.046137 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 17 01:06:13.046148 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 17 01:06:13.046158 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 17 01:06:13.046169 kernel: ACPI: FACS 0x000000007FFDFD40 000040 Jan 17 01:06:13.046180 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 17 01:06:13.046190 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 
00000001 BXPC 00000001) Jan 17 01:06:13.046206 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 17 01:06:13.046216 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 17 01:06:13.046227 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480] Jan 17 01:06:13.046238 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c] Jan 17 01:06:13.046249 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f] Jan 17 01:06:13.046265 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570] Jan 17 01:06:13.046277 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740] Jan 17 01:06:13.046292 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c] Jan 17 01:06:13.046304 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4] Jan 17 01:06:13.046315 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Jan 17 01:06:13.046334 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Jan 17 01:06:13.046346 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0 Jan 17 01:06:13.046357 kernel: SRAT: PXM 0 -> APIC 0x03 -> Node 0 Jan 17 01:06:13.046368 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0 Jan 17 01:06:13.046379 kernel: SRAT: PXM 0 -> APIC 0x05 -> Node 0 Jan 17 01:06:13.046396 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0 Jan 17 01:06:13.046407 kernel: SRAT: PXM 0 -> APIC 0x07 -> Node 0 Jan 17 01:06:13.046418 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0 Jan 17 01:06:13.046429 kernel: SRAT: PXM 0 -> APIC 0x09 -> Node 0 Jan 17 01:06:13.046440 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0 Jan 17 01:06:13.046451 kernel: SRAT: PXM 0 -> APIC 0x0b -> Node 0 Jan 17 01:06:13.046463 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0 Jan 17 01:06:13.046474 kernel: SRAT: PXM 0 -> APIC 0x0d -> Node 0 Jan 17 01:06:13.046484 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0 Jan 17 01:06:13.046500 kernel: SRAT: PXM 0 -> APIC 0x0f -> Node 0 Jan 17 01:06:13.046511 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Jan 17 01:06:13.046523 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Jan 17 01:06:13.046534 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug Jan 17 01:06:13.046545 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00000000-0x7ffdbfff] Jan 17 01:06:13.046557 kernel: NODE_DATA(0) allocated [mem 0x7ffd6000-0x7ffdbfff] Jan 17 01:06:13.046568 kernel: Zone ranges: Jan 17 01:06:13.046580 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 17 01:06:13.046591 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff] Jan 17 01:06:13.046606 kernel: Normal empty Jan 17 01:06:13.046618 kernel: Movable zone start for each node Jan 17 01:06:13.046629 kernel: Early memory node ranges Jan 17 01:06:13.046640 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Jan 17 01:06:13.046651 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff] Jan 17 01:06:13.046662 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff] Jan 17 01:06:13.046673 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 17 01:06:13.046684 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jan 17 01:06:13.046706 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges Jan 17 01:06:13.046718 kernel: ACPI: PM-Timer IO Port: 0x608 Jan 17 01:06:13.046734 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 17 01:06:13.048770 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, 
GSI 0-23 Jan 17 01:06:13.048794 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jan 17 01:06:13.048807 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 17 01:06:13.048818 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 17 01:06:13.048830 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 17 01:06:13.048841 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 17 01:06:13.048852 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 17 01:06:13.048863 kernel: TSC deadline timer available Jan 17 01:06:13.048882 kernel: smpboot: Allowing 16 CPUs, 14 hotplug CPUs Jan 17 01:06:13.048893 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 17 01:06:13.048905 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Jan 17 01:06:13.048917 kernel: Booting paravirtualized kernel on KVM Jan 17 01:06:13.048928 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 17 01:06:13.048940 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1 Jan 17 01:06:13.048951 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u262144 Jan 17 01:06:13.048962 kernel: pcpu-alloc: s196328 r8192 d28952 u262144 alloc=1*2097152 Jan 17 01:06:13.048974 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15 Jan 17 01:06:13.048990 kernel: kvm-guest: PV spinlocks enabled Jan 17 01:06:13.049001 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 17 01:06:13.049014 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=5950c0a3c50d11b7bc07a3e3bf06049ed0b5a605b5e0b52a981b78f1c63eeedd Jan 17 01:06:13.049026 kernel: random: crng init done Jan 17 01:06:13.049038 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 17 01:06:13.049049 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jan 17 01:06:13.049060 kernel: Fallback order for Node 0: 0 Jan 17 01:06:13.049072 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515804 Jan 17 01:06:13.049088 kernel: Policy zone: DMA32 Jan 17 01:06:13.049099 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 17 01:06:13.049111 kernel: software IO TLB: area num 16. Jan 17 01:06:13.049122 kernel: Memory: 1901588K/2096616K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42884K init, 2312K bss, 194768K reserved, 0K cma-reserved) Jan 17 01:06:13.049134 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1 Jan 17 01:06:13.049145 kernel: Kernel/User page tables isolation: enabled Jan 17 01:06:13.049156 kernel: ftrace: allocating 37989 entries in 149 pages Jan 17 01:06:13.049168 kernel: ftrace: allocated 149 pages with 4 groups Jan 17 01:06:13.049179 kernel: Dynamic Preempt: voluntary Jan 17 01:06:13.049196 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 17 01:06:13.049212 kernel: rcu: RCU event tracing is enabled. Jan 17 01:06:13.049225 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16. Jan 17 01:06:13.049236 kernel: Trampoline variant of Tasks RCU enabled. 
Jan 17 01:06:13.049248 kernel: Rude variant of Tasks RCU enabled. Jan 17 01:06:13.049270 kernel: Tracing variant of Tasks RCU enabled. Jan 17 01:06:13.049286 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 17 01:06:13.049299 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16 Jan 17 01:06:13.049318 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16 Jan 17 01:06:13.049330 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 17 01:06:13.049341 kernel: Console: colour VGA+ 80x25 Jan 17 01:06:13.049353 kernel: printk: console [tty0] enabled Jan 17 01:06:13.049369 kernel: printk: console [ttyS0] enabled Jan 17 01:06:13.049381 kernel: ACPI: Core revision 20230628 Jan 17 01:06:13.049393 kernel: APIC: Switch to symmetric I/O mode setup Jan 17 01:06:13.049405 kernel: x2apic enabled Jan 17 01:06:13.049417 kernel: APIC: Switched APIC routing to: physical x2apic Jan 17 01:06:13.049433 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x285c3ee517e, max_idle_ns: 440795257231 ns Jan 17 01:06:13.049445 kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998) Jan 17 01:06:13.049457 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jan 17 01:06:13.049469 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Jan 17 01:06:13.049481 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Jan 17 01:06:13.049493 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 17 01:06:13.049504 kernel: Spectre V2 : Mitigation: Retpolines Jan 17 01:06:13.049516 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jan 17 01:06:13.049528 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Jan 17 01:06:13.049540 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jan 17 01:06:13.049556 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jan 17 01:06:13.049568 kernel: MDS: Mitigation: Clear CPU buffers Jan 17 01:06:13.049579 kernel: MMIO Stale Data: Unknown: No mitigations Jan 17 01:06:13.049591 kernel: SRBDS: Unknown: Dependent on hypervisor status Jan 17 01:06:13.049602 kernel: active return thunk: its_return_thunk Jan 17 01:06:13.049614 kernel: ITS: Mitigation: Aligned branch/return thunks Jan 17 01:06:13.049626 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 17 01:06:13.049637 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 17 01:06:13.049649 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 17 01:06:13.049661 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 17 01:06:13.049673 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Jan 17 01:06:13.049689 kernel: Freeing SMP alternatives memory: 32K Jan 17 01:06:13.049712 kernel: pid_max: default: 32768 minimum: 301 Jan 17 01:06:13.049724 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 17 01:06:13.049736 kernel: landlock: Up and running. Jan 17 01:06:13.049761 kernel: SELinux: Initializing. 
Jan 17 01:06:13.049775 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jan 17 01:06:13.049787 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jan 17 01:06:13.049799 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9) Jan 17 01:06:13.049811 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Jan 17 01:06:13.049823 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Jan 17 01:06:13.049842 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Jan 17 01:06:13.049854 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only. Jan 17 01:06:13.049866 kernel: signal: max sigframe size: 1776 Jan 17 01:06:13.049878 kernel: rcu: Hierarchical SRCU implementation. Jan 17 01:06:13.049890 kernel: rcu: Max phase no-delay instances is 400. Jan 17 01:06:13.049902 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 17 01:06:13.049914 kernel: smp: Bringing up secondary CPUs ... Jan 17 01:06:13.049926 kernel: smpboot: x86: Booting SMP configuration: Jan 17 01:06:13.049938 kernel: .... node #0, CPUs: #1 Jan 17 01:06:13.049954 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1 Jan 17 01:06:13.049966 kernel: smp: Brought up 1 node, 2 CPUs Jan 17 01:06:13.049978 kernel: smpboot: Max logical packages: 16 Jan 17 01:06:13.049990 kernel: smpboot: Total of 2 processors activated (11199.99 BogoMIPS) Jan 17 01:06:13.050002 kernel: devtmpfs: initialized Jan 17 01:06:13.050013 kernel: x86/mm: Memory block size: 128MB Jan 17 01:06:13.050025 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 17 01:06:13.050038 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear) Jan 17 01:06:13.050049 kernel: pinctrl core: initialized pinctrl subsystem Jan 17 01:06:13.050061 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 17 01:06:13.050078 kernel: audit: initializing netlink subsys (disabled) Jan 17 01:06:13.050090 kernel: audit: type=2000 audit(1768611971.002:1): state=initialized audit_enabled=0 res=1 Jan 17 01:06:13.050101 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 17 01:06:13.050113 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 17 01:06:13.050125 kernel: cpuidle: using governor menu Jan 17 01:06:13.050137 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 17 01:06:13.050149 kernel: dca service started, version 1.12.1 Jan 17 01:06:13.050161 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Jan 17 01:06:13.050177 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Jan 17 01:06:13.050189 kernel: PCI: Using configuration type 1 for base access Jan 17 01:06:13.050201 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jan 17 01:06:13.050213 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 17 01:06:13.050225 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 17 01:06:13.050237 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 17 01:06:13.050249 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 17 01:06:13.050261 kernel: ACPI: Added _OSI(Module Device) Jan 17 01:06:13.050273 kernel: ACPI: Added _OSI(Processor Device) Jan 17 01:06:13.050289 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 17 01:06:13.050301 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 17 01:06:13.050313 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 17 01:06:13.050325 kernel: ACPI: Interpreter enabled Jan 17 01:06:13.050337 kernel: ACPI: PM: (supports S0 S5) Jan 17 01:06:13.050348 kernel: ACPI: Using IOAPIC for interrupt routing Jan 17 01:06:13.050360 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 17 01:06:13.050372 kernel: PCI: Using E820 reservations for host bridge windows Jan 17 01:06:13.050384 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Jan 17 01:06:13.050401 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 17 01:06:13.050658 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 17 01:06:13.052909 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Jan 17 01:06:13.053079 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Jan 17 01:06:13.053099 kernel: PCI host bridge to bus 0000:00 Jan 17 01:06:13.053270 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 17 01:06:13.053464 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 17 01:06:13.053726 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 17 01:06:13.053959 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window] Jan 17 01:06:13.054115 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jan 17 01:06:13.054260 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window] Jan 17 01:06:13.054414 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 17 01:06:13.054621 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Jan 17 01:06:13.054866 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000 Jan 17 01:06:13.055065 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfa000000-0xfbffffff pref] Jan 17 01:06:13.055229 kernel: pci 0000:00:01.0: reg 0x14: [mem 0xfea50000-0xfea50fff] Jan 17 01:06:13.055386 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea40000-0xfea4ffff pref] Jan 17 01:06:13.055545 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 17 01:06:13.056283 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 Jan 17 01:06:13.056591 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea51000-0xfea51fff] Jan 17 01:06:13.056841 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 Jan 17 01:06:13.057024 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea52000-0xfea52fff] Jan 17 01:06:13.057213 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 Jan 17 01:06:13.057385 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea53000-0xfea53fff] Jan 17 01:06:13.057563 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 Jan 17 01:06:13.057739 kernel: pci 0000:00:02.3: reg 0x10: [mem 
0xfea54000-0xfea54fff] Jan 17 01:06:13.057938 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 Jan 17 01:06:13.058110 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea55000-0xfea55fff] Jan 17 01:06:13.058284 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 Jan 17 01:06:13.058480 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea56000-0xfea56fff] Jan 17 01:06:13.058660 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 Jan 17 01:06:13.058969 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea57000-0xfea57fff] Jan 17 01:06:13.059144 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 Jan 17 01:06:13.059299 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea58000-0xfea58fff] Jan 17 01:06:13.059473 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 Jan 17 01:06:13.059628 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0c0-0xc0df] Jan 17 01:06:13.059825 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfea59000-0xfea59fff] Jan 17 01:06:13.059984 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref] Jan 17 01:06:13.060139 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfea00000-0xfea3ffff pref] Jan 17 01:06:13.060312 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 Jan 17 01:06:13.060467 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f] Jan 17 01:06:13.060624 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfea5a000-0xfea5afff] Jan 17 01:06:13.060875 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfd004000-0xfd007fff 64bit pref] Jan 17 01:06:13.061050 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Jan 17 01:06:13.061205 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Jan 17 01:06:13.061366 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Jan 17 01:06:13.061528 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0e0-0xc0ff] Jan 17 01:06:13.061681 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea5b000-0xfea5bfff] Jan 17 01:06:13.061880 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Jan 17 01:06:13.062034 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Jan 17 01:06:13.062203 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400 Jan 17 01:06:13.062361 kernel: pci 0000:01:00.0: reg 0x10: [mem 0xfda00000-0xfda000ff 64bit] Jan 17 01:06:13.062525 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Jan 17 01:06:13.062680 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff] Jan 17 01:06:13.062883 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Jan 17 01:06:13.063067 kernel: pci_bus 0000:02: extended config space not accessible Jan 17 01:06:13.063260 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000 Jan 17 01:06:13.063431 kernel: pci 0000:02:01.0: reg 0x10: [mem 0xfd800000-0xfd80000f] Jan 17 01:06:13.063606 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Jan 17 01:06:13.063842 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff] Jan 17 01:06:13.064024 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330 Jan 17 01:06:13.064184 kernel: pci 0000:03:00.0: reg 0x10: [mem 0xfe800000-0xfe803fff 64bit] Jan 17 01:06:13.064340 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Jan 17 01:06:13.064496 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff] Jan 17 01:06:13.064655 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Jan 17 01:06:13.064872 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00 Jan 17 01:06:13.065044 kernel: pci 0000:04:00.0: reg 0x20: [mem 
0xfca00000-0xfca03fff 64bit pref] Jan 17 01:06:13.065200 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Jan 17 01:06:13.065354 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff] Jan 17 01:06:13.065512 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Jan 17 01:06:13.065672 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Jan 17 01:06:13.065882 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff] Jan 17 01:06:13.066038 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Jan 17 01:06:13.066201 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Jan 17 01:06:13.066354 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff] Jan 17 01:06:13.066508 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Jan 17 01:06:13.066662 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Jan 17 01:06:13.066860 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff] Jan 17 01:06:13.067014 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Jan 17 01:06:13.067169 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Jan 17 01:06:13.067322 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff] Jan 17 01:06:13.067483 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Jan 17 01:06:13.067643 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Jan 17 01:06:13.067863 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff] Jan 17 01:06:13.068019 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Jan 17 01:06:13.068037 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 17 01:06:13.068050 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 17 01:06:13.068062 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 17 01:06:13.068074 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 17 01:06:13.068086 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Jan 17 01:06:13.068106 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Jan 17 01:06:13.068118 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Jan 17 01:06:13.068130 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Jan 17 01:06:13.068142 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Jan 17 01:06:13.068154 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Jan 17 01:06:13.068166 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Jan 17 01:06:13.068178 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Jan 17 01:06:13.068189 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Jan 17 01:06:13.068201 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Jan 17 01:06:13.068218 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Jan 17 01:06:13.068230 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Jan 17 01:06:13.068242 kernel: iommu: Default domain type: Translated Jan 17 01:06:13.068254 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 17 01:06:13.068266 kernel: PCI: Using ACPI for IRQ routing Jan 17 01:06:13.068278 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 17 01:06:13.068290 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jan 17 01:06:13.068302 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff] Jan 17 01:06:13.068452 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Jan 17 01:06:13.068614 kernel: pci 
0000:00:01.0: vgaarb: bridge control possible Jan 17 01:06:13.068810 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 17 01:06:13.068829 kernel: vgaarb: loaded Jan 17 01:06:13.068842 kernel: clocksource: Switched to clocksource kvm-clock Jan 17 01:06:13.068854 kernel: VFS: Disk quotas dquot_6.6.0 Jan 17 01:06:13.068866 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 17 01:06:13.068878 kernel: pnp: PnP ACPI init Jan 17 01:06:13.069033 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved Jan 17 01:06:13.069060 kernel: pnp: PnP ACPI: found 5 devices Jan 17 01:06:13.069073 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 17 01:06:13.069085 kernel: NET: Registered PF_INET protocol family Jan 17 01:06:13.069097 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 17 01:06:13.069109 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Jan 17 01:06:13.069121 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 17 01:06:13.069133 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 17 01:06:13.069146 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Jan 17 01:06:13.069163 kernel: TCP: Hash tables configured (established 16384 bind 16384) Jan 17 01:06:13.069175 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Jan 17 01:06:13.069187 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Jan 17 01:06:13.069199 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 17 01:06:13.069211 kernel: NET: Registered PF_XDP protocol family Jan 17 01:06:13.069361 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000 Jan 17 01:06:13.069515 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Jan 17 01:06:13.069674 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Jan 17 01:06:13.069896 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000 Jan 17 01:06:13.070052 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Jan 17 01:06:13.070205 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Jan 17 01:06:13.070360 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Jan 17 01:06:13.070515 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Jan 17 01:06:13.070670 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff] Jan 17 01:06:13.070874 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff] Jan 17 01:06:13.071030 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff] Jan 17 01:06:13.071186 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff] Jan 17 01:06:13.071342 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff] Jan 17 01:06:13.071496 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff] Jan 17 01:06:13.071672 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff] Jan 17 01:06:13.071893 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff] Jan 17 01:06:13.072057 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Jan 17 01:06:13.072241 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff] Jan 17 01:06:13.072396 kernel: pci 0000:00:02.0: PCI 
bridge to [bus 01-02] Jan 17 01:06:13.072560 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff] Jan 17 01:06:13.072772 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff] Jan 17 01:06:13.072933 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Jan 17 01:06:13.073120 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Jan 17 01:06:13.073284 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff] Jan 17 01:06:13.073474 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff] Jan 17 01:06:13.073632 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Jan 17 01:06:13.073869 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Jan 17 01:06:13.074026 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff] Jan 17 01:06:13.074208 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff] Jan 17 01:06:13.074390 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Jan 17 01:06:13.074544 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Jan 17 01:06:13.074718 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff] Jan 17 01:06:13.074904 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff] Jan 17 01:06:13.075063 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Jan 17 01:06:13.075219 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Jan 17 01:06:13.075376 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff] Jan 17 01:06:13.075544 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff] Jan 17 01:06:13.075734 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Jan 17 01:06:13.075930 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Jan 17 01:06:13.076087 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff] Jan 17 01:06:13.076249 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff] Jan 17 01:06:13.076404 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Jan 17 01:06:13.076559 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Jan 17 01:06:13.076734 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff] Jan 17 01:06:13.076919 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff] Jan 17 01:06:13.077085 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Jan 17 01:06:13.077241 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Jan 17 01:06:13.077418 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff] Jan 17 01:06:13.077603 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff] Jan 17 01:06:13.077811 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Jan 17 01:06:13.077964 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 17 01:06:13.078105 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 17 01:06:13.078245 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 17 01:06:13.078384 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window] Jan 17 01:06:13.078531 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Jan 17 01:06:13.078703 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window] Jan 17 01:06:13.078893 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff] Jan 17 01:06:13.079043 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff] Jan 17 01:06:13.079200 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref] Jan 17 
01:06:13.079360 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff] Jan 17 01:06:13.079539 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff] Jan 17 01:06:13.079721 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff] Jan 17 01:06:13.079915 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref] Jan 17 01:06:13.080072 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff] Jan 17 01:06:13.080218 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff] Jan 17 01:06:13.080363 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref] Jan 17 01:06:13.080517 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff] Jan 17 01:06:13.080672 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff] Jan 17 01:06:13.080865 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref] Jan 17 01:06:13.081041 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff] Jan 17 01:06:13.081192 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff] Jan 17 01:06:13.081349 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref] Jan 17 01:06:13.081506 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff] Jan 17 01:06:13.081659 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff] Jan 17 01:06:13.081856 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref] Jan 17 01:06:13.082013 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff] Jan 17 01:06:13.082159 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff] Jan 17 01:06:13.082305 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref] Jan 17 01:06:13.082459 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff] Jan 17 01:06:13.082604 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff] Jan 17 01:06:13.083489 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref] Jan 17 01:06:13.083518 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jan 17 01:06:13.083532 kernel: PCI: CLS 0 bytes, default 64 Jan 17 01:06:13.083545 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jan 17 01:06:13.083558 kernel: software IO TLB: mapped [mem 0x0000000079800000-0x000000007d800000] (64MB) Jan 17 01:06:13.083570 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jan 17 01:06:13.083583 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x285c3ee517e, max_idle_ns: 440795257231 ns Jan 17 01:06:13.083604 kernel: Initialise system trusted keyrings Jan 17 01:06:13.083617 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jan 17 01:06:13.083635 kernel: Key type asymmetric registered Jan 17 01:06:13.083648 kernel: Asymmetric key parser 'x509' registered Jan 17 01:06:13.083661 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 17 01:06:13.083673 kernel: io scheduler mq-deadline registered Jan 17 01:06:13.083699 kernel: io scheduler kyber registered Jan 17 01:06:13.083714 kernel: io scheduler bfq registered Jan 17 01:06:13.083903 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Jan 17 01:06:13.084065 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Jan 17 01:06:13.084251 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 17 01:06:13.084450 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Jan 17 01:06:13.084629 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Jan 
17 01:06:13.084850 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 17 01:06:13.085010 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Jan 17 01:06:13.085165 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Jan 17 01:06:13.085320 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 17 01:06:13.085507 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Jan 17 01:06:13.085661 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Jan 17 01:06:13.085861 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 17 01:06:13.086020 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Jan 17 01:06:13.086174 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Jan 17 01:06:13.086330 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 17 01:06:13.086497 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Jan 17 01:06:13.086655 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Jan 17 01:06:13.086874 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 17 01:06:13.087032 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Jan 17 01:06:13.087200 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Jan 17 01:06:13.087380 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 17 01:06:13.087564 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Jan 17 01:06:13.087739 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Jan 17 01:06:13.087924 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 17 01:06:13.087945 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 17 01:06:13.087959 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jan 17 01:06:13.087972 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jan 17 01:06:13.087984 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 17 01:06:13.088005 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 17 01:06:13.088018 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 17 01:06:13.088030 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 17 01:06:13.088043 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 17 01:06:13.088199 kernel: rtc_cmos 00:03: RTC can wake from S4 Jan 17 01:06:13.088219 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 17 01:06:13.088362 kernel: rtc_cmos 00:03: registered as rtc0 Jan 17 01:06:13.088509 kernel: rtc_cmos 00:03: setting system clock to 2026-01-17T01:06:12 UTC (1768611972) Jan 17 01:06:13.088664 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Jan 17 01:06:13.088683 kernel: intel_pstate: CPU model not supported Jan 17 01:06:13.088707 kernel: NET: Registered PF_INET6 protocol family Jan 17 01:06:13.088721 kernel: Segment Routing with IPv6 Jan 17 01:06:13.088733 kernel: In-situ OAM (IOAM) with IPv6 Jan 17 01:06:13.088746 kernel: NET: Registered 
PF_PACKET protocol family Jan 17 01:06:13.088799 kernel: Key type dns_resolver registered Jan 17 01:06:13.088812 kernel: IPI shorthand broadcast: enabled Jan 17 01:06:13.088825 kernel: sched_clock: Marking stable (1240003227, 227960160)->(1585974533, -118011146) Jan 17 01:06:13.088845 kernel: registered taskstats version 1 Jan 17 01:06:13.088858 kernel: Loading compiled-in X.509 certificates Jan 17 01:06:13.088870 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.119-flatcar: b6a847a3a522371f15b0d5425f12279a240740e4' Jan 17 01:06:13.088883 kernel: Key type .fscrypt registered Jan 17 01:06:13.088895 kernel: Key type fscrypt-provisioning registered Jan 17 01:06:13.088907 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 17 01:06:13.088920 kernel: ima: Allocated hash algorithm: sha1 Jan 17 01:06:13.088932 kernel: ima: No architecture policies found Jan 17 01:06:13.088945 kernel: clk: Disabling unused clocks Jan 17 01:06:13.088963 kernel: Freeing unused kernel image (initmem) memory: 42884K Jan 17 01:06:13.088976 kernel: Write protecting the kernel read-only data: 36864k Jan 17 01:06:13.089000 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K Jan 17 01:06:13.089012 kernel: Run /init as init process Jan 17 01:06:13.089024 kernel: with arguments: Jan 17 01:06:13.089037 kernel: /init Jan 17 01:06:13.089048 kernel: with environment: Jan 17 01:06:13.089060 kernel: HOME=/ Jan 17 01:06:13.089072 kernel: TERM=linux Jan 17 01:06:13.089093 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 17 01:06:13.089109 systemd[1]: Detected virtualization kvm. Jan 17 01:06:13.089122 systemd[1]: Detected architecture x86-64. Jan 17 01:06:13.089135 systemd[1]: Running in initrd. Jan 17 01:06:13.089147 systemd[1]: No hostname configured, using default hostname. Jan 17 01:06:13.089160 systemd[1]: Hostname set to . Jan 17 01:06:13.089186 systemd[1]: Initializing machine ID from VM UUID. Jan 17 01:06:13.089203 systemd[1]: Queued start job for default target initrd.target. Jan 17 01:06:13.089216 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 01:06:13.089229 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 01:06:13.089242 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 17 01:06:13.089268 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 17 01:06:13.089280 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 17 01:06:13.089293 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 17 01:06:13.089311 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 17 01:06:13.089324 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 17 01:06:13.089337 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 01:06:13.089349 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. 
Jan 17 01:06:13.089378 systemd[1]: Reached target paths.target - Path Units. Jan 17 01:06:13.089391 systemd[1]: Reached target slices.target - Slice Units. Jan 17 01:06:13.089404 systemd[1]: Reached target swap.target - Swaps. Jan 17 01:06:13.089430 systemd[1]: Reached target timers.target - Timer Units. Jan 17 01:06:13.089447 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 17 01:06:13.089461 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 17 01:06:13.089475 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 17 01:06:13.089501 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 17 01:06:13.089514 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 17 01:06:13.089528 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 17 01:06:13.089553 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 01:06:13.089567 systemd[1]: Reached target sockets.target - Socket Units. Jan 17 01:06:13.089581 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 17 01:06:13.089600 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 17 01:06:13.089614 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 17 01:06:13.089627 systemd[1]: Starting systemd-fsck-usr.service... Jan 17 01:06:13.089641 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 17 01:06:13.089654 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 17 01:06:13.089668 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 01:06:13.089681 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 17 01:06:13.089706 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 01:06:13.089789 systemd-journald[201]: Collecting audit messages is disabled. Jan 17 01:06:13.089822 systemd[1]: Finished systemd-fsck-usr.service. Jan 17 01:06:13.089844 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 17 01:06:13.089859 systemd-journald[201]: Journal started Jan 17 01:06:13.089884 systemd-journald[201]: Runtime Journal (/run/log/journal/dd47edcc01d440b782eb9cc654237abf) is 4.7M, max 38.0M, 33.2M free. Jan 17 01:06:13.054934 systemd-modules-load[202]: Inserted module 'overlay' Jan 17 01:06:13.139898 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 17 01:06:13.139930 kernel: Bridge firewalling registered Jan 17 01:06:13.104370 systemd-modules-load[202]: Inserted module 'br_netfilter' Jan 17 01:06:13.147799 systemd[1]: Started systemd-journald.service - Journal Service. Jan 17 01:06:13.149084 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 17 01:06:13.150176 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 01:06:13.157964 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 17 01:06:13.159957 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 17 01:06:13.173090 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
Jan 17 01:06:13.174275 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 17 01:06:13.181734 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 17 01:06:13.196840 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 17 01:06:13.198799 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 01:06:13.211992 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 17 01:06:13.213052 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 01:06:13.216295 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 01:06:13.227101 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 17 01:06:13.244586 dracut-cmdline[237]: dracut-dracut-053 Jan 17 01:06:13.250789 dracut-cmdline[237]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=5950c0a3c50d11b7bc07a3e3bf06049ed0b5a605b5e0b52a981b78f1c63eeedd Jan 17 01:06:13.254589 systemd-resolved[233]: Positive Trust Anchors: Jan 17 01:06:13.254603 systemd-resolved[233]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 17 01:06:13.254644 systemd-resolved[233]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 17 01:06:13.259431 systemd-resolved[233]: Defaulting to hostname 'linux'. Jan 17 01:06:13.261092 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 17 01:06:13.262639 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 17 01:06:13.356806 kernel: SCSI subsystem initialized Jan 17 01:06:13.368804 kernel: Loading iSCSI transport class v2.0-870. Jan 17 01:06:13.382795 kernel: iscsi: registered transport (tcp) Jan 17 01:06:13.409046 kernel: iscsi: registered transport (qla4xxx) Jan 17 01:06:13.409173 kernel: QLogic iSCSI HBA Driver Jan 17 01:06:13.466301 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 17 01:06:13.473952 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 17 01:06:13.516785 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Jan 17 01:06:13.516882 kernel: device-mapper: uevent: version 1.0.3 Jan 17 01:06:13.519258 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 17 01:06:13.567814 kernel: raid6: sse2x4 gen() 14229 MB/s Jan 17 01:06:13.585793 kernel: raid6: sse2x2 gen() 9603 MB/s Jan 17 01:06:13.604436 kernel: raid6: sse2x1 gen() 9525 MB/s Jan 17 01:06:13.604489 kernel: raid6: using algorithm sse2x4 gen() 14229 MB/s Jan 17 01:06:13.623788 kernel: raid6: .... xor() 7817 MB/s, rmw enabled Jan 17 01:06:13.623845 kernel: raid6: using ssse3x2 recovery algorithm Jan 17 01:06:13.648789 kernel: xor: automatically using best checksumming function avx Jan 17 01:06:13.848217 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 17 01:06:13.868028 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 17 01:06:13.876985 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 01:06:13.896640 systemd-udevd[420]: Using default interface naming scheme 'v255'. Jan 17 01:06:13.903503 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 01:06:13.913334 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 17 01:06:13.935211 dracut-pre-trigger[426]: rd.md=0: removing MD RAID activation Jan 17 01:06:13.974190 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 17 01:06:13.987980 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 17 01:06:14.095969 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 01:06:14.104926 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 17 01:06:14.136796 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 17 01:06:14.140060 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 17 01:06:14.141948 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 01:06:14.142652 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 17 01:06:14.151972 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 17 01:06:14.169845 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 17 01:06:14.220800 kernel: virtio_blk virtio1: 2/0/0 default/read/poll queues Jan 17 01:06:14.237951 kernel: cryptd: max_cpu_qlen set to 1000 Jan 17 01:06:14.250778 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Jan 17 01:06:14.265896 kernel: libata version 3.00 loaded. Jan 17 01:06:14.281206 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 17 01:06:14.281420 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 01:06:14.300553 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 17 01:06:14.300586 kernel: GPT:17805311 != 125829119 Jan 17 01:06:14.300603 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 17 01:06:14.300620 kernel: GPT:17805311 != 125829119 Jan 17 01:06:14.300636 kernel: GPT: Use GNU Parted to correct GPT errors. 
Jan 17 01:06:14.300682 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 17 01:06:14.300730 kernel: ACPI: bus type USB registered Jan 17 01:06:14.300766 kernel: usbcore: registered new interface driver usbfs Jan 17 01:06:14.300786 kernel: usbcore: registered new interface driver hub Jan 17 01:06:14.300803 kernel: usbcore: registered new device driver usb Jan 17 01:06:14.284879 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 17 01:06:14.302462 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 01:06:14.304010 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 01:06:14.306841 kernel: ahci 0000:00:1f.2: version 3.0 Jan 17 01:06:14.307222 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 17 01:06:14.305878 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 01:06:14.317336 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Jan 17 01:06:14.317585 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 17 01:06:14.317809 kernel: AVX version of gcm_enc/dec engaged. Jan 17 01:06:14.317829 kernel: AES CTR mode by8 optimization enabled Jan 17 01:06:14.326238 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 01:06:14.349787 kernel: scsi host0: ahci Jan 17 01:06:14.356015 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 17 01:06:14.376791 kernel: scsi host1: ahci Jan 17 01:06:14.378778 kernel: scsi host2: ahci Jan 17 01:06:14.378991 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (475) Jan 17 01:06:14.383042 kernel: scsi host3: ahci Jan 17 01:06:14.383312 kernel: scsi host4: ahci Jan 17 01:06:14.385405 kernel: scsi host5: ahci Jan 17 01:06:14.385734 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 38 Jan 17 01:06:14.385797 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 38 Jan 17 01:06:14.385878 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 38 Jan 17 01:06:14.385899 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 38 Jan 17 01:06:14.385915 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 38 Jan 17 01:06:14.385931 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 38 Jan 17 01:06:14.386775 kernel: BTRFS: device fsid a67b5ac0-cdfd-426d-9386-e029282f433a devid 1 transid 33 /dev/vda3 scanned by (udev-worker) (476) Jan 17 01:06:14.397440 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 17 01:06:14.481865 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 01:06:14.488554 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jan 17 01:06:14.489351 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 17 01:06:14.497541 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 17 01:06:14.503955 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 17 01:06:14.508919 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 17 01:06:14.512590 disk-uuid[557]: Primary Header is updated. 
Jan 17 01:06:14.512590 disk-uuid[557]: Secondary Entries is updated. Jan 17 01:06:14.512590 disk-uuid[557]: Secondary Header is updated. Jan 17 01:06:14.520911 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 17 01:06:14.529775 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 17 01:06:14.550115 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 01:06:14.691963 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 17 01:06:14.692030 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 17 01:06:14.692783 kernel: ata3: SATA link down (SStatus 0 SControl 300) Jan 17 01:06:14.695443 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 17 01:06:14.703954 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 17 01:06:14.704046 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 17 01:06:14.716815 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Jan 17 01:06:14.717116 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1 Jan 17 01:06:14.720791 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Jan 17 01:06:14.724780 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Jan 17 01:06:14.725030 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2 Jan 17 01:06:14.725940 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed Jan 17 01:06:14.727899 kernel: hub 1-0:1.0: USB hub found Jan 17 01:06:14.729078 kernel: hub 1-0:1.0: 4 ports detected Jan 17 01:06:14.730984 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Jan 17 01:06:14.733154 kernel: hub 2-0:1.0: USB hub found Jan 17 01:06:14.734964 kernel: hub 2-0:1.0: 4 ports detected Jan 17 01:06:14.974870 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Jan 17 01:06:15.115781 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 17 01:06:15.122260 kernel: usbcore: registered new interface driver usbhid Jan 17 01:06:15.122323 kernel: usbhid: USB HID core driver Jan 17 01:06:15.129391 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 Jan 17 01:06:15.129429 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0 Jan 17 01:06:15.530265 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 17 01:06:15.530353 disk-uuid[558]: The operation has completed successfully. Jan 17 01:06:15.593886 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 17 01:06:15.594023 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 17 01:06:15.611975 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 17 01:06:15.617824 sh[584]: Success Jan 17 01:06:15.634806 kernel: device-mapper: verity: sha256 using implementation "sha256-avx" Jan 17 01:06:15.697799 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 17 01:06:15.707858 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 17 01:06:15.709760 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
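Just above, disk-uuid.service rewrites the GPT structures it warned about earlier, and verity-setup.service creates the dm-verity mapping /dev/mapper/usr that sysusr-usr.mount uses next; the kernel line confirms sha256 verification via the sha256-avx implementation. A hedged sketch for inspecting active device-mapper targets afterwards, using only the sysfs attributes device-mapper exposes:

    import glob
    import os

    def list_dm_mappings():
        """Return {'dm-N': (name, uuid)} for active device-mapper devices,
        e.g. the 'usr' verity mapping created above."""
        result = {}
        for node in glob.glob("/sys/block/dm-*"):
            with open(os.path.join(node, "dm", "name")) as f:
                name = f.read().strip()
            with open(os.path.join(node, "dm", "uuid")) as f:
                uuid = f.read().strip()
            result[os.path.basename(node)] = (name, uuid)
        return result

On a running system, dmsetup table usr would additionally show the verity target parameters, including the hash algorithm and root hash.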
Jan 17 01:06:15.729897 kernel: BTRFS info (device dm-0): first mount of filesystem a67b5ac0-cdfd-426d-9386-e029282f433a Jan 17 01:06:15.729967 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 17 01:06:15.731942 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 17 01:06:15.734145 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 17 01:06:15.736737 kernel: BTRFS info (device dm-0): using free space tree Jan 17 01:06:15.746439 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 17 01:06:15.747894 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 17 01:06:15.763015 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 17 01:06:15.767101 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 17 01:06:15.782555 kernel: BTRFS info (device vda6): first mount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0 Jan 17 01:06:15.782618 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 17 01:06:15.782650 kernel: BTRFS info (device vda6): using free space tree Jan 17 01:06:15.788778 kernel: BTRFS info (device vda6): auto enabling async discard Jan 17 01:06:15.803664 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 17 01:06:15.804583 kernel: BTRFS info (device vda6): last unmount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0 Jan 17 01:06:15.812807 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 17 01:06:15.817931 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 17 01:06:15.904463 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 17 01:06:15.917419 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 17 01:06:15.955037 systemd-networkd[766]: lo: Link UP Jan 17 01:06:15.955790 systemd-networkd[766]: lo: Gained carrier Jan 17 01:06:15.959510 systemd-networkd[766]: Enumeration completed Jan 17 01:06:15.960852 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 17 01:06:15.962594 systemd-networkd[766]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 01:06:15.962599 systemd-networkd[766]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 17 01:06:15.963851 systemd[1]: Reached target network.target - Network. Jan 17 01:06:15.966076 systemd-networkd[766]: eth0: Link UP Jan 17 01:06:15.966081 systemd-networkd[766]: eth0: Gained carrier Jan 17 01:06:15.966091 systemd-networkd[766]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 01:06:15.978433 ignition[678]: Ignition 2.19.0 Jan 17 01:06:15.978466 ignition[678]: Stage: fetch-offline Jan 17 01:06:15.978566 ignition[678]: no configs at "/usr/lib/ignition/base.d" Jan 17 01:06:15.980477 systemd-networkd[766]: eth0: DHCPv4 address 10.230.49.38/30, gateway 10.230.49.37 acquired from 10.230.49.37 Jan 17 01:06:15.978585 ignition[678]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 17 01:06:15.981194 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Jan 17 01:06:15.978819 ignition[678]: parsed url from cmdline: "" Jan 17 01:06:15.978826 ignition[678]: no config URL provided Jan 17 01:06:15.978836 ignition[678]: reading system config file "/usr/lib/ignition/user.ign" Jan 17 01:06:15.978852 ignition[678]: no config at "/usr/lib/ignition/user.ign" Jan 17 01:06:15.978861 ignition[678]: failed to fetch config: resource requires networking Jan 17 01:06:15.979195 ignition[678]: Ignition finished successfully Jan 17 01:06:15.991026 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 17 01:06:16.010523 ignition[775]: Ignition 2.19.0 Jan 17 01:06:16.010558 ignition[775]: Stage: fetch Jan 17 01:06:16.011809 ignition[775]: no configs at "/usr/lib/ignition/base.d" Jan 17 01:06:16.011839 ignition[775]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 17 01:06:16.012041 ignition[775]: parsed url from cmdline: "" Jan 17 01:06:16.012048 ignition[775]: no config URL provided Jan 17 01:06:16.012058 ignition[775]: reading system config file "/usr/lib/ignition/user.ign" Jan 17 01:06:16.012083 ignition[775]: no config at "/usr/lib/ignition/user.ign" Jan 17 01:06:16.012209 ignition[775]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Jan 17 01:06:16.012274 ignition[775]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... Jan 17 01:06:16.012310 ignition[775]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Jan 17 01:06:16.030251 ignition[775]: GET result: OK Jan 17 01:06:16.030361 ignition[775]: parsing config with SHA512: 0623a3ab8663f8e211f07019dbde1720de6cbb94fb995c68f43593ad5fa553e1137c0265c4eb4dde3298e9a8dec525605453098b77350fa3ed5ebc1564be8cc7 Jan 17 01:06:16.035215 unknown[775]: fetched base config from "system" Jan 17 01:06:16.035231 unknown[775]: fetched base config from "system" Jan 17 01:06:16.036112 ignition[775]: fetch: fetch complete Jan 17 01:06:16.035240 unknown[775]: fetched user config from "openstack" Jan 17 01:06:16.036121 ignition[775]: fetch: fetch passed Jan 17 01:06:16.039167 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 17 01:06:16.036286 ignition[775]: Ignition finished successfully Jan 17 01:06:16.057864 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 17 01:06:16.078472 ignition[781]: Ignition 2.19.0 Jan 17 01:06:16.078490 ignition[781]: Stage: kargs Jan 17 01:06:16.078794 ignition[781]: no configs at "/usr/lib/ignition/base.d" Jan 17 01:06:16.078815 ignition[781]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 17 01:06:16.079720 ignition[781]: kargs: kargs passed Jan 17 01:06:16.082360 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 17 01:06:16.079811 ignition[781]: Ignition finished successfully Jan 17 01:06:16.089975 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 17 01:06:16.109600 ignition[788]: Ignition 2.19.0 Jan 17 01:06:16.109632 ignition[788]: Stage: disks Jan 17 01:06:16.109912 ignition[788]: no configs at "/usr/lib/ignition/base.d" Jan 17 01:06:16.109932 ignition[788]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 17 01:06:16.114691 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 17 01:06:16.111046 ignition[788]: disks: disks passed Jan 17 01:06:16.111132 ignition[788]: Ignition finished successfully Jan 17 01:06:16.116716 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. 
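Above, the offline pass reports "failed to fetch config: resource requires networking" and defers to the networked fetch stage, which waits briefly for a config drive, then pulls user_data from the OpenStack metadata service and logs the SHA512 of the config it parsed. A minimal, illustrative reproduction of that fetch-and-hash step from inside the instance, standard library only (this is not how Ignition itself is implemented, it is a Go binary):

    import hashlib
    import urllib.request

    USER_DATA_URL = "http://169.254.169.254/openstack/latest/user_data"

    def fetch_userdata_and_digest(url=USER_DATA_URL, timeout=10):
        """Fetch the instance user_data and return (bytes, sha512 hex digest);
        the digest should match the 'parsing config with SHA512' line above."""
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            data = resp.read()
        return data, hashlib.sha512(data).hexdigest()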
Jan 17 01:06:16.118248 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 17 01:06:16.119791 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 17 01:06:16.121284 systemd[1]: Reached target sysinit.target - System Initialization. Jan 17 01:06:16.123074 systemd[1]: Reached target basic.target - Basic System. Jan 17 01:06:16.129941 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 17 01:06:16.149537 systemd-fsck[797]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Jan 17 01:06:16.152513 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 17 01:06:16.162923 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 17 01:06:16.276805 kernel: EXT4-fs (vda9): mounted filesystem ab055cfb-d92d-4784-aa05-26ea844796bc r/w with ordered data mode. Quota mode: none. Jan 17 01:06:16.278556 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 17 01:06:16.279895 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 17 01:06:16.286889 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 17 01:06:16.290877 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 17 01:06:16.291947 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 17 01:06:16.293938 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent... Jan 17 01:06:16.295839 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 17 01:06:16.297276 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 17 01:06:16.306799 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (805) Jan 17 01:06:16.312192 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 17 01:06:16.316259 kernel: BTRFS info (device vda6): first mount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0 Jan 17 01:06:16.316290 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 17 01:06:16.316309 kernel: BTRFS info (device vda6): using free space tree Jan 17 01:06:16.327787 kernel: BTRFS info (device vda6): auto enabling async discard Jan 17 01:06:16.333982 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 17 01:06:16.337311 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 17 01:06:16.396835 initrd-setup-root[832]: cut: /sysroot/etc/passwd: No such file or directory Jan 17 01:06:16.405628 initrd-setup-root[840]: cut: /sysroot/etc/group: No such file or directory Jan 17 01:06:16.415819 initrd-setup-root[847]: cut: /sysroot/etc/shadow: No such file or directory Jan 17 01:06:16.422033 initrd-setup-root[854]: cut: /sysroot/etc/gshadow: No such file or directory Jan 17 01:06:16.526986 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 17 01:06:16.531896 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 17 01:06:16.534936 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 17 01:06:16.550779 kernel: BTRFS info (device vda6): last unmount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0 Jan 17 01:06:16.573082 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Jan 17 01:06:16.583997 ignition[921]: INFO : Ignition 2.19.0 Jan 17 01:06:16.583997 ignition[921]: INFO : Stage: mount Jan 17 01:06:16.586594 ignition[921]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 01:06:16.586594 ignition[921]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 17 01:06:16.586594 ignition[921]: INFO : mount: mount passed Jan 17 01:06:16.586594 ignition[921]: INFO : Ignition finished successfully Jan 17 01:06:16.587245 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 17 01:06:16.728331 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 17 01:06:17.523042 systemd-networkd[766]: eth0: Gained IPv6LL Jan 17 01:06:19.028898 systemd-networkd[766]: eth0: Ignoring DHCPv6 address 2a02:1348:179:8c49:24:19ff:fee6:3126/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:8c49:24:19ff:fee6:3126/64 assigned by NDisc. Jan 17 01:06:19.028913 systemd-networkd[766]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Jan 17 01:06:23.463779 coreos-metadata[807]: Jan 17 01:06:23.463 WARN failed to locate config-drive, using the metadata service API instead Jan 17 01:06:23.486062 coreos-metadata[807]: Jan 17 01:06:23.485 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jan 17 01:06:23.502129 coreos-metadata[807]: Jan 17 01:06:23.502 INFO Fetch successful Jan 17 01:06:23.504082 coreos-metadata[807]: Jan 17 01:06:23.503 INFO wrote hostname srv-jkx7b.gb1.brightbox.com to /sysroot/etc/hostname Jan 17 01:06:23.505487 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Jan 17 01:06:23.505646 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent. Jan 17 01:06:23.520042 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 17 01:06:23.549199 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 17 01:06:23.560793 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (938) Jan 17 01:06:23.564248 kernel: BTRFS info (device vda6): first mount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0 Jan 17 01:06:23.564296 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 17 01:06:23.566662 kernel: BTRFS info (device vda6): using free space tree Jan 17 01:06:23.570775 kernel: BTRFS info (device vda6): auto enabling async discard Jan 17 01:06:23.573941 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
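flatcar-openstack-hostname.service above does the same fallback for the hostname: no config drive is found, so it queries the EC2-style metadata endpoint and writes the result to /sysroot/etc/hostname. A rough sketch of that step, with the URL and destination path taken from the log lines above:

    import urllib.request

    HOSTNAME_URL = "http://169.254.169.254/latest/meta-data/hostname"

    def write_metadata_hostname(dest="/sysroot/etc/hostname", url=HOSTNAME_URL):
        """Fetch the metadata hostname and persist it, as the agent logs above."""
        with urllib.request.urlopen(url, timeout=10) as resp:
            hostname = resp.read().decode().strip()
        with open(dest, "w") as f:
            f.write(hostname + "\n")
        return hostname  # srv-jkx7b.gb1.brightbox.com on this host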
Jan 17 01:06:23.610546 ignition[956]: INFO : Ignition 2.19.0 Jan 17 01:06:23.612774 ignition[956]: INFO : Stage: files Jan 17 01:06:23.612774 ignition[956]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 01:06:23.612774 ignition[956]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 17 01:06:23.615722 ignition[956]: DEBUG : files: compiled without relabeling support, skipping Jan 17 01:06:23.616676 ignition[956]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 17 01:06:23.617796 ignition[956]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 17 01:06:23.621787 ignition[956]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 17 01:06:23.622793 ignition[956]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 17 01:06:23.622793 ignition[956]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 17 01:06:23.622496 unknown[956]: wrote ssh authorized keys file for user: core Jan 17 01:06:23.625938 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 17 01:06:23.625938 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 17 01:06:23.625938 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 17 01:06:23.625938 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 17 01:06:23.625938 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 17 01:06:23.625938 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 17 01:06:23.625938 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 17 01:06:23.625938 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 17 01:06:23.625938 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 17 01:06:23.625938 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Jan 17 01:06:24.074505 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK Jan 17 01:06:26.907502 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 17 01:06:26.907502 ignition[956]: INFO : files: op(8): [started] processing unit "containerd.service" Jan 17 01:06:26.910843 ignition[956]: INFO : files: op(8): op(9): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 17 01:06:26.910843 ignition[956]: INFO : files: op(8): op(9): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at 
"/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 17 01:06:26.910843 ignition[956]: INFO : files: op(8): [finished] processing unit "containerd.service" Jan 17 01:06:26.914592 ignition[956]: INFO : files: createResultFile: createFiles: op(a): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 17 01:06:26.914592 ignition[956]: INFO : files: createResultFile: createFiles: op(a): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 17 01:06:26.914592 ignition[956]: INFO : files: files passed Jan 17 01:06:26.914592 ignition[956]: INFO : Ignition finished successfully Jan 17 01:06:26.913765 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 17 01:06:26.932070 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 17 01:06:26.935969 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 17 01:06:26.939423 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 17 01:06:26.939618 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 17 01:06:26.961770 initrd-setup-root-after-ignition[984]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 17 01:06:26.961770 initrd-setup-root-after-ignition[984]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 17 01:06:26.964642 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 17 01:06:26.967413 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 01:06:26.968538 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 17 01:06:26.976063 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 17 01:06:27.005050 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 17 01:06:27.005225 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 17 01:06:27.006630 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 17 01:06:27.007435 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 17 01:06:27.010677 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 17 01:06:27.015981 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 17 01:06:27.035666 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 01:06:27.045014 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 17 01:06:27.057794 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 17 01:06:27.058674 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 01:06:27.059611 systemd[1]: Stopped target timers.target - Timer Units. Jan 17 01:06:27.061115 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 17 01:06:27.061285 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 01:06:27.063143 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 17 01:06:27.064100 systemd[1]: Stopped target basic.target - Basic System. Jan 17 01:06:27.065334 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 17 01:06:27.067048 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. 
Jan 17 01:06:27.068455 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 17 01:06:27.069879 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 17 01:06:27.071295 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 17 01:06:27.072951 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 17 01:06:27.074401 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 17 01:06:27.075882 systemd[1]: Stopped target swap.target - Swaps. Jan 17 01:06:27.077233 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 17 01:06:27.077443 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 17 01:06:27.079249 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 17 01:06:27.080182 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 01:06:27.081441 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 17 01:06:27.081769 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 01:06:27.082912 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 17 01:06:27.083079 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 17 01:06:27.085048 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 17 01:06:27.085232 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 01:06:27.087140 systemd[1]: ignition-files.service: Deactivated successfully. Jan 17 01:06:27.087299 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 17 01:06:27.097656 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 17 01:06:27.101149 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 17 01:06:27.101855 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 17 01:06:27.102104 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 01:06:27.104082 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 17 01:06:27.104250 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 17 01:06:27.124567 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 17 01:06:27.124737 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 17 01:06:27.130629 ignition[1008]: INFO : Ignition 2.19.0 Jan 17 01:06:27.130629 ignition[1008]: INFO : Stage: umount Jan 17 01:06:27.130629 ignition[1008]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 01:06:27.130629 ignition[1008]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 17 01:06:27.130629 ignition[1008]: INFO : umount: umount passed Jan 17 01:06:27.130629 ignition[1008]: INFO : Ignition finished successfully Jan 17 01:06:27.131392 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 17 01:06:27.131552 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 17 01:06:27.133718 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 17 01:06:27.133895 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 17 01:06:27.135237 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 17 01:06:27.135315 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 17 01:06:27.137886 systemd[1]: ignition-fetch.service: Deactivated successfully. 
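For reference while the initrd tears these units down: the Ignition files stage that completed a little above was driven by the config fetched from the metadata service earlier. The operations it logged (SSH keys for the core user, /etc/flatcar-cgroupv1, /home/core/install.sh, /etc/flatcar/update.conf, the kubernetes sysext image plus its /etc/extensions link, and a containerd drop-in) correspond roughly to a config shaped like the sketch below. Field names follow the Ignition v3 spec as I understand it, and the key material and file contents are placeholders, so treat this as an approximation of the structure rather than the exact config this host received:

    import json

    config = {
        "ignition": {"version": "3.3.0"},
        "passwd": {
            "users": [
                {"name": "core", "sshAuthorizedKeys": ["ssh-ed25519 AAAA... (placeholder)"]}
            ]
        },
        "storage": {
            "files": [
                {"path": "/etc/flatcar-cgroupv1", "mode": 420},
                {"path": "/home/core/install.sh", "mode": 493},
                {"path": "/etc/flatcar/update.conf", "mode": 420},
                {
                    "path": "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw",
                    "contents": {
                        "source": "https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw"
                    },
                },
            ],
            "links": [
                {
                    "path": "/etc/extensions/kubernetes.raw",
                    "target": "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw",
                }
            ],
        },
        "systemd": {
            "units": [
                {
                    "name": "containerd.service",
                    "dropins": [
                        {
                            "name": "10-use-cgroupfs.conf",
                            "contents": "[Service]\n# placeholder drop-in body\n",
                        }
                    ],
                }
            ]
        },
    }

    print(json.dumps(config, indent=2))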
Jan 17 01:06:27.137949 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 17 01:06:27.139909 systemd[1]: Stopped target network.target - Network. Jan 17 01:06:27.140523 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 17 01:06:27.140595 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 17 01:06:27.141389 systemd[1]: Stopped target paths.target - Path Units. Jan 17 01:06:27.142012 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 17 01:06:27.143876 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 01:06:27.146138 systemd[1]: Stopped target slices.target - Slice Units. Jan 17 01:06:27.148050 systemd[1]: Stopped target sockets.target - Socket Units. Jan 17 01:06:27.150900 systemd[1]: iscsid.socket: Deactivated successfully. Jan 17 01:06:27.150994 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 17 01:06:27.152236 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 17 01:06:27.152296 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 17 01:06:27.159381 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 17 01:06:27.159458 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 17 01:06:27.160879 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 17 01:06:27.160970 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 17 01:06:27.162555 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 17 01:06:27.164814 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 17 01:06:27.167219 systemd-networkd[766]: eth0: DHCPv6 lease lost Jan 17 01:06:27.168086 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 17 01:06:27.169000 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 17 01:06:27.169158 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 17 01:06:27.172733 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 17 01:06:27.173035 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 17 01:06:27.177246 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 17 01:06:27.177936 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 17 01:06:27.179417 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 17 01:06:27.179489 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 17 01:06:27.187919 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 17 01:06:27.192668 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 17 01:06:27.192767 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 17 01:06:27.194389 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 01:06:27.196576 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 17 01:06:27.196727 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 17 01:06:27.211443 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 17 01:06:27.212687 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 01:06:27.214511 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 17 01:06:27.214650 systemd[1]: Stopped network-cleanup.service - Network Cleanup. 
Jan 17 01:06:27.218128 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 17 01:06:27.218207 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 17 01:06:27.219128 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 17 01:06:27.219184 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 01:06:27.220593 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 17 01:06:27.220673 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 17 01:06:27.222795 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 17 01:06:27.222858 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 17 01:06:27.224212 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 17 01:06:27.224272 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 01:06:27.230962 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 17 01:06:27.231813 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 17 01:06:27.231902 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 17 01:06:27.233429 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 17 01:06:27.233494 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 17 01:06:27.235489 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 17 01:06:27.235554 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 01:06:27.239026 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 17 01:06:27.239100 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 17 01:06:27.240529 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 17 01:06:27.240592 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 01:06:27.243239 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 17 01:06:27.243315 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 01:06:27.244086 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 01:06:27.244175 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 01:06:27.247586 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 17 01:06:27.247730 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 17 01:06:27.250077 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 17 01:06:27.256963 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 17 01:06:27.269021 systemd[1]: Switching root. Jan 17 01:06:27.302793 systemd-journald[201]: Received SIGTERM from PID 1 (systemd). 
Jan 17 01:06:27.302895 systemd-journald[201]: Journal stopped Jan 17 01:06:28.840640 kernel: SELinux: policy capability network_peer_controls=1 Jan 17 01:06:28.840777 kernel: SELinux: policy capability open_perms=1 Jan 17 01:06:28.840807 kernel: SELinux: policy capability extended_socket_class=1 Jan 17 01:06:28.840851 kernel: SELinux: policy capability always_check_network=0 Jan 17 01:06:28.840888 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 17 01:06:28.840909 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 17 01:06:28.840960 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 17 01:06:28.840979 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 17 01:06:28.841014 kernel: audit: type=1403 audit(1768611987.623:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 17 01:06:28.841046 systemd[1]: Successfully loaded SELinux policy in 50.263ms. Jan 17 01:06:28.841106 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 21.506ms. Jan 17 01:06:28.841140 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 17 01:06:28.841183 systemd[1]: Detected virtualization kvm. Jan 17 01:06:28.841210 systemd[1]: Detected architecture x86-64. Jan 17 01:06:28.841230 systemd[1]: Detected first boot. Jan 17 01:06:28.841248 systemd[1]: Hostname set to . Jan 17 01:06:28.841267 systemd[1]: Initializing machine ID from VM UUID. Jan 17 01:06:28.841312 zram_generator::config[1068]: No configuration found. Jan 17 01:06:28.841344 systemd[1]: Populated /etc with preset unit settings. Jan 17 01:06:28.841378 systemd[1]: Queued start job for default target multi-user.target. Jan 17 01:06:28.841412 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 17 01:06:28.841435 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 17 01:06:28.841463 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 17 01:06:28.841483 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 17 01:06:28.856952 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 17 01:06:28.856994 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 17 01:06:28.857016 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 17 01:06:28.857055 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 17 01:06:28.857103 systemd[1]: Created slice user.slice - User and Session Slice. Jan 17 01:06:28.857137 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 01:06:28.857164 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 01:06:28.857216 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 17 01:06:28.857245 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 17 01:06:28.857289 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. 
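"Initializing machine ID from VM UUID" above means systemd seeded /etc/machine-id from the UUID the hypervisor presents rather than generating a random one on this first boot. On a KVM guest that UUID is exposed through DMI; exactly which source systemd consults is an implementation detail, so the snippet below is only an illustration of reading the SMBIOS product UUID (root access required):

    def read_vm_uuid(path="/sys/class/dmi/id/product_uuid"):
        """Read the SMBIOS product UUID the hypervisor presents to the guest."""
        with open(path) as f:
            return f.read().strip()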
Jan 17 01:06:28.857325 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 17 01:06:28.857345 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 17 01:06:28.857385 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 01:06:28.857407 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 17 01:06:28.857426 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 01:06:28.857452 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 17 01:06:28.857478 systemd[1]: Reached target slices.target - Slice Units. Jan 17 01:06:28.857498 systemd[1]: Reached target swap.target - Swaps. Jan 17 01:06:28.857518 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 17 01:06:28.857549 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 17 01:06:28.857572 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 17 01:06:28.857612 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 17 01:06:28.857633 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 17 01:06:28.857652 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 17 01:06:28.857672 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 01:06:28.857691 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 17 01:06:28.857711 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 17 01:06:28.857730 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 17 01:06:28.857774 systemd[1]: Mounting media.mount - External Media Directory... Jan 17 01:06:28.857798 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 01:06:28.857824 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 17 01:06:28.857845 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 17 01:06:28.857870 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 17 01:06:28.857890 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 17 01:06:28.857909 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 01:06:28.857928 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 17 01:06:28.857953 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 17 01:06:28.857987 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 01:06:28.858032 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 17 01:06:28.858065 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 01:06:28.858087 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 17 01:06:28.858106 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 01:06:28.858139 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 17 01:06:28.858160 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. 
Jan 17 01:06:28.858188 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Jan 17 01:06:28.858211 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 17 01:06:28.858230 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 17 01:06:28.858259 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 17 01:06:28.858279 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 17 01:06:28.858310 kernel: loop: module loaded Jan 17 01:06:28.858331 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 17 01:06:28.858368 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 01:06:28.858395 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 17 01:06:28.858417 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 17 01:06:28.858442 systemd[1]: Mounted media.mount - External Media Directory. Jan 17 01:06:28.858462 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 17 01:06:28.858481 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 17 01:06:28.858501 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 17 01:06:28.858526 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 01:06:28.858552 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 17 01:06:28.858627 systemd-journald[1176]: Collecting audit messages is disabled. Jan 17 01:06:28.858682 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 17 01:06:28.858704 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 17 01:06:28.858730 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 01:06:28.858781 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 01:06:28.858819 kernel: ACPI: bus type drm_connector registered Jan 17 01:06:28.858840 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 01:06:28.858859 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 01:06:28.858878 systemd-journald[1176]: Journal started Jan 17 01:06:28.858913 systemd-journald[1176]: Runtime Journal (/run/log/journal/dd47edcc01d440b782eb9cc654237abf) is 4.7M, max 38.0M, 33.2M free. Jan 17 01:06:28.863858 systemd[1]: Started systemd-journald.service - Journal Service. Jan 17 01:06:28.870587 kernel: fuse: init (API version 7.39) Jan 17 01:06:28.868138 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 17 01:06:28.868412 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 17 01:06:28.869708 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 01:06:28.869946 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 01:06:28.871870 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 17 01:06:28.873190 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 17 01:06:28.878278 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 17 01:06:28.879621 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. 
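systemd-journald above starts with a volatile runtime journal under /run/log/journal capped at 38.0M; shortly below, systemd-journal-flush.service moves logging into the persistent journal under /var/log/journal with its larger cap. An optional, illustrative check of how much space the journal actually occupies afterwards, shelling out to journalctl (which must be on PATH):

    import subprocess

    def journal_disk_usage():
        """Return journalctl's one-line summary of archived + active journal size."""
        out = subprocess.run(["journalctl", "--disk-usage"],
                             capture_output=True, text=True, check=True)
        return out.stdout.strip()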
Jan 17 01:06:28.881494 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 17 01:06:28.896169 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 17 01:06:28.904926 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 17 01:06:28.910873 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 17 01:06:28.913047 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 17 01:06:28.925034 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 17 01:06:28.930171 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 17 01:06:28.933225 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 17 01:06:28.945922 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 17 01:06:28.950185 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 17 01:06:28.955095 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 17 01:06:28.967962 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 17 01:06:28.970867 systemd-journald[1176]: Time spent on flushing to /var/log/journal/dd47edcc01d440b782eb9cc654237abf is 80.822ms for 1108 entries. Jan 17 01:06:28.970867 systemd-journald[1176]: System Journal (/var/log/journal/dd47edcc01d440b782eb9cc654237abf) is 8.0M, max 584.8M, 576.8M free. Jan 17 01:06:29.090034 systemd-journald[1176]: Received client request to flush runtime journal. Jan 17 01:06:28.973679 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 17 01:06:28.974539 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 17 01:06:28.993560 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 17 01:06:28.994622 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 17 01:06:29.037675 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 17 01:06:29.084457 systemd-tmpfiles[1224]: ACLs are not supported, ignoring. Jan 17 01:06:29.084483 systemd-tmpfiles[1224]: ACLs are not supported, ignoring. Jan 17 01:06:29.095536 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 17 01:06:29.105662 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 17 01:06:29.118019 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 17 01:06:29.122426 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 01:06:29.130812 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 17 01:06:29.164684 udevadm[1243]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 17 01:06:29.183001 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 17 01:06:29.194941 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 17 01:06:29.217510 systemd-tmpfiles[1247]: ACLs are not supported, ignoring. 
Jan 17 01:06:29.217537 systemd-tmpfiles[1247]: ACLs are not supported, ignoring. Jan 17 01:06:29.224941 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 01:06:29.690730 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 17 01:06:29.698985 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 01:06:29.737294 systemd-udevd[1253]: Using default interface naming scheme 'v255'. Jan 17 01:06:29.765526 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 01:06:29.776992 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 17 01:06:29.804927 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 17 01:06:29.870355 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 17 01:06:29.895520 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Jan 17 01:06:29.915244 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1261) Jan 17 01:06:29.974101 systemd-networkd[1258]: lo: Link UP Jan 17 01:06:29.974114 systemd-networkd[1258]: lo: Gained carrier Jan 17 01:06:29.977164 systemd-networkd[1258]: Enumeration completed Jan 17 01:06:29.977880 systemd-networkd[1258]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 01:06:29.977991 systemd-networkd[1258]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 17 01:06:29.979377 systemd-networkd[1258]: eth0: Link UP Jan 17 01:06:29.979487 systemd-networkd[1258]: eth0: Gained carrier Jan 17 01:06:29.979578 systemd-networkd[1258]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 01:06:29.980571 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 17 01:06:29.987501 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 17 01:06:29.997831 systemd-networkd[1258]: eth0: DHCPv4 address 10.230.49.38/30, gateway 10.230.49.37 acquired from 10.230.49.37 Jan 17 01:06:30.019206 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 17 01:06:30.056823 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 17 01:06:30.066823 kernel: ACPI: button: Power Button [PWRF] Jan 17 01:06:30.068174 systemd-networkd[1258]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 01:06:30.078824 kernel: mousedev: PS/2 mouse device common for all mice Jan 17 01:06:30.125787 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 17 01:06:30.130771 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Jan 17 01:06:30.130838 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Jan 17 01:06:30.132978 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 17 01:06:30.189129 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 01:06:30.356207 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 17 01:06:30.378688 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Jan 17 01:06:30.386004 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 17 01:06:30.404918 lvm[1293]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 17 01:06:30.440366 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 17 01:06:30.442151 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 17 01:06:30.457019 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 17 01:06:30.462937 lvm[1296]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 17 01:06:30.503181 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 17 01:06:30.505366 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 17 01:06:30.506386 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 17 01:06:30.506585 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 17 01:06:30.507501 systemd[1]: Reached target machines.target - Containers. Jan 17 01:06:30.510239 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 17 01:06:30.517998 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 17 01:06:30.522938 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 17 01:06:30.523935 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 01:06:30.525203 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 17 01:06:30.530993 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 17 01:06:30.537851 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 17 01:06:30.546128 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 17 01:06:30.568208 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 17 01:06:30.582789 kernel: loop0: detected capacity change from 0 to 224512 Jan 17 01:06:30.603801 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 17 01:06:30.605217 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 17 01:06:30.610963 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 17 01:06:30.633788 kernel: loop1: detected capacity change from 0 to 142488 Jan 17 01:06:30.682788 kernel: loop2: detected capacity change from 0 to 140768 Jan 17 01:06:30.761204 kernel: loop3: detected capacity change from 0 to 8 Jan 17 01:06:30.786922 kernel: loop4: detected capacity change from 0 to 224512 Jan 17 01:06:30.817787 kernel: loop5: detected capacity change from 0 to 142488 Jan 17 01:06:30.842827 kernel: loop6: detected capacity change from 0 to 140768 Jan 17 01:06:30.871221 kernel: loop7: detected capacity change from 0 to 8 Jan 17 01:06:30.870352 (sd-merge)[1317]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'. Jan 17 01:06:30.871271 (sd-merge)[1317]: Merged extensions into '/usr'. 
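The loop0..loop7 "detected capacity change" lines above come from systemd-sysext loop-mounting the extension images it discovered (containerd-flatcar, docker-flatcar, kubernetes and oem-openstack, the kubernetes one matching the kubernetes.raw link Ignition created earlier), which it then merges over /usr. A brief, illustrative way to inspect that state on a running system, listing /etc/extensions and shelling out to the real systemd-sysext tool:

    import os
    import subprocess

    def sysext_overview(ext_dir="/etc/extensions"):
        """Return (extension images visible in ext_dir, systemd-sysext status output)."""
        images = sorted(os.listdir(ext_dir)) if os.path.isdir(ext_dir) else []
        status = subprocess.run(["systemd-sysext", "status"],
                                capture_output=True, text=True, check=False)
        return images, status.stdout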
Jan 17 01:06:30.897324 systemd[1]: Reloading requested from client PID 1304 ('systemd-sysext') (unit systemd-sysext.service)... Jan 17 01:06:30.897364 systemd[1]: Reloading... Jan 17 01:06:30.999806 zram_generator::config[1345]: No configuration found. Jan 17 01:06:31.201039 ldconfig[1300]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 17 01:06:31.228450 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 01:06:31.318158 systemd[1]: Reloading finished in 420 ms. Jan 17 01:06:31.342280 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 17 01:06:31.343556 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 17 01:06:31.359043 systemd[1]: Starting ensure-sysext.service... Jan 17 01:06:31.363965 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 17 01:06:31.367516 systemd[1]: Reloading requested from client PID 1408 ('systemctl') (unit ensure-sysext.service)... Jan 17 01:06:31.367538 systemd[1]: Reloading... Jan 17 01:06:31.408032 systemd-tmpfiles[1409]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 17 01:06:31.409351 systemd-tmpfiles[1409]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 17 01:06:31.411095 systemd-tmpfiles[1409]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 17 01:06:31.411674 systemd-tmpfiles[1409]: ACLs are not supported, ignoring. Jan 17 01:06:31.411930 systemd-tmpfiles[1409]: ACLs are not supported, ignoring. Jan 17 01:06:31.416518 systemd-tmpfiles[1409]: Detected autofs mount point /boot during canonicalization of boot. Jan 17 01:06:31.416650 systemd-tmpfiles[1409]: Skipping /boot Jan 17 01:06:31.431873 systemd-tmpfiles[1409]: Detected autofs mount point /boot during canonicalization of boot. Jan 17 01:06:31.432071 systemd-tmpfiles[1409]: Skipping /boot Jan 17 01:06:31.458777 zram_generator::config[1437]: No configuration found. Jan 17 01:06:31.603634 systemd-networkd[1258]: eth0: Gained IPv6LL Jan 17 01:06:31.645119 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 01:06:31.732822 systemd[1]: Reloading finished in 364 ms. Jan 17 01:06:31.754268 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 17 01:06:31.780732 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 01:06:31.793027 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 17 01:06:31.797938 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 17 01:06:31.801879 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 17 01:06:31.815818 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 17 01:06:31.834152 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 17 01:06:31.847158 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Jan 17 01:06:31.847441 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 01:06:31.856777 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 01:06:31.870102 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 01:06:31.874531 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 01:06:31.878378 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 01:06:31.878881 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 01:06:31.886489 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 01:06:31.887813 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 01:06:31.897563 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 01:06:31.899150 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 01:06:31.910112 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 01:06:31.911098 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 01:06:31.911278 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 01:06:31.920052 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 17 01:06:31.923346 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 01:06:31.923917 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 01:06:31.926465 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 01:06:31.926691 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 01:06:31.929397 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 01:06:31.930382 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 01:06:31.939631 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 17 01:06:31.945542 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 17 01:06:31.949560 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 17 01:06:31.955289 augenrules[1538]: No rules Jan 17 01:06:31.958167 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 17 01:06:31.964840 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 17 01:06:31.984572 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 17 01:06:31.987482 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 01:06:31.988291 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Jan 17 01:06:31.996136 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 01:06:32.000070 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 17 01:06:32.012164 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 01:06:32.025307 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 01:06:32.026680 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 01:06:32.027010 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 17 01:06:32.027154 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 01:06:32.031522 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 17 01:06:32.033325 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 01:06:32.033566 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 01:06:32.036424 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 01:06:32.038633 systemd-resolved[1513]: Positive Trust Anchors: Jan 17 01:06:32.038963 systemd-resolved[1513]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 17 01:06:32.039007 systemd-resolved[1513]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 17 01:06:32.041109 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 01:06:32.048778 systemd[1]: Finished ensure-sysext.service. Jan 17 01:06:32.051520 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 17 01:06:32.052518 systemd-resolved[1513]: Using system hostname 'srv-jkx7b.gb1.brightbox.com'. Jan 17 01:06:32.054697 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 17 01:06:32.056635 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 17 01:06:32.057874 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 01:06:32.058326 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 01:06:32.065754 systemd[1]: Reached target network.target - Network. Jan 17 01:06:32.066923 systemd[1]: Reached target network-online.target - Network is Online. Jan 17 01:06:32.067720 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 17 01:06:32.068638 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 17 01:06:32.068926 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 17 01:06:32.073953 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... 
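
systemd-resolved is now running with the root DNSSEC trust anchor loaded and the hostname taken from the network ('srv-jkx7b.gb1.brightbox.com'). To confirm what the stub resolver is actually using at this point, something like:

  # Per-link DNS servers, search domains and DNSSEC state
  resolvectl status
  # Resolve a name through the stub and show which server answered
  resolvectl query example.com
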
Jan 17 01:06:32.159803 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 17 01:06:32.162162 systemd[1]: Reached target sysinit.target - System Initialization. Jan 17 01:06:32.162992 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 17 01:06:32.163830 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 17 01:06:32.164624 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 17 01:06:32.165443 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 17 01:06:32.165495 systemd[1]: Reached target paths.target - Path Units. Jan 17 01:06:32.166119 systemd[1]: Reached target time-set.target - System Time Set. Jan 17 01:06:32.167030 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 17 01:06:32.167878 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 17 01:06:32.168608 systemd[1]: Reached target timers.target - Timer Units. Jan 17 01:06:32.170045 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 17 01:06:32.172978 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 17 01:06:32.176109 systemd-networkd[1258]: eth0: Ignoring DHCPv6 address 2a02:1348:179:8c49:24:19ff:fee6:3126/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:8c49:24:19ff:fee6:3126/64 assigned by NDisc. Jan 17 01:06:32.176120 systemd-networkd[1258]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Jan 17 01:06:32.176300 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 17 01:06:32.177652 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 17 01:06:32.178464 systemd[1]: Reached target sockets.target - Socket Units. Jan 17 01:06:32.179121 systemd[1]: Reached target basic.target - Basic System. Jan 17 01:06:32.180049 systemd[1]: System is tainted: cgroupsv1 Jan 17 01:06:32.180121 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 17 01:06:32.180179 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 17 01:06:32.183871 systemd[1]: Starting containerd.service - containerd container runtime... Jan 17 01:06:32.186971 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 17 01:06:32.192033 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 17 01:06:32.196441 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 17 01:06:32.209964 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 17 01:06:32.212836 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 17 01:06:32.217084 jq[1577]: false Jan 17 01:06:32.222876 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 01:06:32.230492 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 17 01:06:32.244978 dbus-daemon[1575]: [system] SELinux support is enabled Jan 17 01:06:32.248190 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... 
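
The networkd hint above about the conflicting DHCPv6/NDisc address can be acted on with a .network file that pins the SLAAC interface identifier; IPv6Token= is the spelling this systemd version suggests (newer releases move it to Token= under [IPv6AcceptRA]). The values below are illustrative only, echoing the address seen in the log:

  cat <<'EOF' >/etc/systemd/network/10-eth0-token.network
  [Match]
  Name=eth0

  [Network]
  DHCP=yes
  # Fix the interface identifier used for SLAAC addresses
  IPv6Token=::24:19ff:fee6:3126
  EOF
  systemctl restart systemd-networkd
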
Jan 17 01:06:32.254854 dbus-daemon[1575]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1258 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 17 01:06:32.261992 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 17 01:06:32.275990 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 17 01:06:32.283010 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 17 01:06:32.285643 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 17 01:06:32.290570 extend-filesystems[1578]: Found loop4 Jan 17 01:06:32.297941 extend-filesystems[1578]: Found loop5 Jan 17 01:06:32.297941 extend-filesystems[1578]: Found loop6 Jan 17 01:06:32.297941 extend-filesystems[1578]: Found loop7 Jan 17 01:06:32.297941 extend-filesystems[1578]: Found vda Jan 17 01:06:32.297941 extend-filesystems[1578]: Found vda1 Jan 17 01:06:32.297941 extend-filesystems[1578]: Found vda2 Jan 17 01:06:32.297941 extend-filesystems[1578]: Found vda3 Jan 17 01:06:32.297941 extend-filesystems[1578]: Found usr Jan 17 01:06:32.297941 extend-filesystems[1578]: Found vda4 Jan 17 01:06:32.297941 extend-filesystems[1578]: Found vda6 Jan 17 01:06:32.297941 extend-filesystems[1578]: Found vda7 Jan 17 01:06:32.297941 extend-filesystems[1578]: Found vda9 Jan 17 01:06:32.297941 extend-filesystems[1578]: Checking size of /dev/vda9 Jan 17 01:06:32.296064 systemd[1]: Starting update-engine.service - Update Engine... Jan 17 01:06:32.394079 extend-filesystems[1578]: Resized partition /dev/vda9 Jan 17 01:06:32.374575 dbus-daemon[1575]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 17 01:06:32.310930 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 17 01:06:32.420297 extend-filesystems[1621]: resize2fs 1.47.1 (20-May-2024) Jan 17 01:06:32.317958 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 17 01:06:32.433799 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks Jan 17 01:06:32.342184 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 17 01:06:32.342587 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 17 01:06:32.434255 jq[1602]: true Jan 17 01:06:32.347248 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 17 01:06:32.347636 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 17 01:06:32.373399 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 17 01:06:32.373455 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 17 01:06:32.380238 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 17 01:06:32.380272 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Jan 17 01:06:32.390856 (ntainerd)[1616]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 17 01:06:32.407316 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jan 17 01:06:32.496029 update_engine[1596]: I20260117 01:06:32.487617 1596 main.cc:92] Flatcar Update Engine starting Jan 17 01:06:32.421380 systemd[1]: motdgen.service: Deactivated successfully. Jan 17 01:06:32.424294 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 17 01:06:32.438909 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 17 01:06:32.451926 systemd-timesyncd[1569]: Contacted time server 213.5.132.231:123 (0.flatcar.pool.ntp.org). Jan 17 01:06:32.452024 systemd-timesyncd[1569]: Initial clock synchronization to Sat 2026-01-17 01:06:32.424487 UTC. Jan 17 01:06:32.502636 systemd[1]: Started update-engine.service - Update Engine. Jan 17 01:06:32.506458 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 17 01:06:32.511775 update_engine[1596]: I20260117 01:06:32.509730 1596 update_check_scheduler.cc:74] Next update check in 10m36s Jan 17 01:06:32.530841 jq[1624]: true Jan 17 01:06:32.604044 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1256) Jan 17 01:06:32.516407 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 17 01:06:32.807598 systemd-logind[1592]: Watching system buttons on /dev/input/event2 (Power Button) Jan 17 01:06:32.809909 systemd-logind[1592]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 17 01:06:32.812260 systemd-logind[1592]: New seat seat0. Jan 17 01:06:32.818306 systemd[1]: Started systemd-logind.service - User Login Management. Jan 17 01:06:32.821026 bash[1651]: Updated "/home/core/.ssh/authorized_keys" Jan 17 01:06:32.824056 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 17 01:06:32.844310 systemd[1]: Starting sshkeys.service... Jan 17 01:06:32.872671 sshd_keygen[1607]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 17 01:06:32.891255 containerd[1616]: time="2026-01-17T01:06:32.891116127Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 17 01:06:32.896223 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 17 01:06:32.902119 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 17 01:06:32.925484 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Jan 17 01:06:32.957166 extend-filesystems[1621]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 17 01:06:32.957166 extend-filesystems[1621]: old_desc_blocks = 1, new_desc_blocks = 8 Jan 17 01:06:32.957166 extend-filesystems[1621]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Jan 17 01:06:32.967891 extend-filesystems[1578]: Resized filesystem in /dev/vda9 Jan 17 01:06:32.958356 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 17 01:06:32.958718 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 17 01:06:33.005316 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 17 01:06:33.016822 systemd[1]: Starting issuegen.service - Generate /run/issue... 
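
extend-filesystems grew the mounted root ext4 filesystem on /dev/vda9 online, from 1617920 to 15121403 blocks, using resize2fs. The manual equivalent, should the service ever need to be reproduced by hand (device name taken from the log above):

  # ext4 supports online growth while mounted on /
  resize2fs /dev/vda9
  # Confirm the new size
  df -h /
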
Jan 17 01:06:33.020543 locksmithd[1629]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 17 01:06:33.023410 dbus-daemon[1575]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 17 01:06:33.023586 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jan 17 01:06:33.026116 dbus-daemon[1575]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.8' (uid=0 pid=1620 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 17 01:06:33.034304 containerd[1616]: time="2026-01-17T01:06:33.032039323Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 17 01:06:33.041518 containerd[1616]: time="2026-01-17T01:06:33.035802474Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.119-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 17 01:06:33.041518 containerd[1616]: time="2026-01-17T01:06:33.036602348Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 17 01:06:33.041518 containerd[1616]: time="2026-01-17T01:06:33.036631704Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 17 01:06:33.041518 containerd[1616]: time="2026-01-17T01:06:33.036979070Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 17 01:06:33.041518 containerd[1616]: time="2026-01-17T01:06:33.037035140Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 17 01:06:33.041518 containerd[1616]: time="2026-01-17T01:06:33.037158019Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 01:06:33.041518 containerd[1616]: time="2026-01-17T01:06:33.037182211Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 17 01:06:33.041518 containerd[1616]: time="2026-01-17T01:06:33.037462371Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 01:06:33.041518 containerd[1616]: time="2026-01-17T01:06:33.037485439Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 17 01:06:33.041518 containerd[1616]: time="2026-01-17T01:06:33.037503906Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 01:06:33.041518 containerd[1616]: time="2026-01-17T01:06:33.037520154Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 17 01:06:33.043819 containerd[1616]: time="2026-01-17T01:06:33.037639235Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." 
type=io.containerd.snapshotter.v1 Jan 17 01:06:33.043819 containerd[1616]: time="2026-01-17T01:06:33.038887639Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 17 01:06:33.043819 containerd[1616]: time="2026-01-17T01:06:33.039138173Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 01:06:33.043819 containerd[1616]: time="2026-01-17T01:06:33.039163947Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 17 01:06:33.043819 containerd[1616]: time="2026-01-17T01:06:33.039289351Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 17 01:06:33.043819 containerd[1616]: time="2026-01-17T01:06:33.039385750Z" level=info msg="metadata content store policy set" policy=shared Jan 17 01:06:33.041941 systemd[1]: Starting polkit.service - Authorization Manager... Jan 17 01:06:33.045387 containerd[1616]: time="2026-01-17T01:06:33.045218597Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 17 01:06:33.045387 containerd[1616]: time="2026-01-17T01:06:33.045296342Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 17 01:06:33.045387 containerd[1616]: time="2026-01-17T01:06:33.045324382Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 17 01:06:33.045508 containerd[1616]: time="2026-01-17T01:06:33.045391595Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 17 01:06:33.045508 containerd[1616]: time="2026-01-17T01:06:33.045432108Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 17 01:06:33.045999 containerd[1616]: time="2026-01-17T01:06:33.045613967Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 17 01:06:33.046822 containerd[1616]: time="2026-01-17T01:06:33.046385030Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 17 01:06:33.046822 containerd[1616]: time="2026-01-17T01:06:33.046575402Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 17 01:06:33.046822 containerd[1616]: time="2026-01-17T01:06:33.046599748Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 17 01:06:33.046822 containerd[1616]: time="2026-01-17T01:06:33.046618908Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 17 01:06:33.046822 containerd[1616]: time="2026-01-17T01:06:33.046638953Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 17 01:06:33.046822 containerd[1616]: time="2026-01-17T01:06:33.046678645Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 17 01:06:33.046822 containerd[1616]: time="2026-01-17T01:06:33.046703624Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." 
type=io.containerd.service.v1 Jan 17 01:06:33.046822 containerd[1616]: time="2026-01-17T01:06:33.046723722Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 17 01:06:33.046822 containerd[1616]: time="2026-01-17T01:06:33.046755205Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 17 01:06:33.046822 containerd[1616]: time="2026-01-17T01:06:33.046794950Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 17 01:06:33.046822 containerd[1616]: time="2026-01-17T01:06:33.046813284Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 17 01:06:33.047265 containerd[1616]: time="2026-01-17T01:06:33.046832984Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 17 01:06:33.047265 containerd[1616]: time="2026-01-17T01:06:33.046871748Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 17 01:06:33.047265 containerd[1616]: time="2026-01-17T01:06:33.046903188Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 17 01:06:33.047265 containerd[1616]: time="2026-01-17T01:06:33.046920608Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 17 01:06:33.047265 containerd[1616]: time="2026-01-17T01:06:33.046951415Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 17 01:06:33.047265 containerd[1616]: time="2026-01-17T01:06:33.046980623Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 17 01:06:33.047265 containerd[1616]: time="2026-01-17T01:06:33.046999026Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 17 01:06:33.047265 containerd[1616]: time="2026-01-17T01:06:33.047015356Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 17 01:06:33.047265 containerd[1616]: time="2026-01-17T01:06:33.047032650Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 17 01:06:33.047265 containerd[1616]: time="2026-01-17T01:06:33.047055931Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 17 01:06:33.047265 containerd[1616]: time="2026-01-17T01:06:33.047077134Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 17 01:06:33.047265 containerd[1616]: time="2026-01-17T01:06:33.047095791Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 17 01:06:33.047265 containerd[1616]: time="2026-01-17T01:06:33.047125949Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 17 01:06:33.047265 containerd[1616]: time="2026-01-17T01:06:33.047148284Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 17 01:06:33.047265 containerd[1616]: time="2026-01-17T01:06:33.047169785Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." 
type=io.containerd.transfer.v1 Jan 17 01:06:33.047733 containerd[1616]: time="2026-01-17T01:06:33.047206963Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 17 01:06:33.047733 containerd[1616]: time="2026-01-17T01:06:33.047247744Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 17 01:06:33.047733 containerd[1616]: time="2026-01-17T01:06:33.047266967Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 17 01:06:33.047733 containerd[1616]: time="2026-01-17T01:06:33.047338794Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 17 01:06:33.047733 containerd[1616]: time="2026-01-17T01:06:33.047368548Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 17 01:06:33.047733 containerd[1616]: time="2026-01-17T01:06:33.047386767Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 17 01:06:33.047733 containerd[1616]: time="2026-01-17T01:06:33.047415694Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 17 01:06:33.047733 containerd[1616]: time="2026-01-17T01:06:33.047429102Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 17 01:06:33.047733 containerd[1616]: time="2026-01-17T01:06:33.047450646Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 17 01:06:33.047733 containerd[1616]: time="2026-01-17T01:06:33.047491899Z" level=info msg="NRI interface is disabled by configuration." Jan 17 01:06:33.047733 containerd[1616]: time="2026-01-17T01:06:33.047508533Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 17 01:06:33.049816 containerd[1616]: time="2026-01-17T01:06:33.048471872Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 17 01:06:33.049816 containerd[1616]: time="2026-01-17T01:06:33.048558290Z" level=info msg="Connect containerd service" Jan 17 01:06:33.049816 containerd[1616]: time="2026-01-17T01:06:33.048645549Z" level=info msg="using legacy CRI server" Jan 17 01:06:33.049816 containerd[1616]: time="2026-01-17T01:06:33.048661180Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 17 01:06:33.049816 containerd[1616]: time="2026-01-17T01:06:33.048899433Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 17 01:06:33.052345 containerd[1616]: time="2026-01-17T01:06:33.051786420Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 17 
01:06:33.052345 containerd[1616]: time="2026-01-17T01:06:33.052262522Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 17 01:06:33.052345 containerd[1616]: time="2026-01-17T01:06:33.052335700Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 17 01:06:33.053165 systemd[1]: issuegen.service: Deactivated successfully. Jan 17 01:06:33.054924 containerd[1616]: time="2026-01-17T01:06:33.052527856Z" level=info msg="Start subscribing containerd event" Jan 17 01:06:33.054924 containerd[1616]: time="2026-01-17T01:06:33.054842604Z" level=info msg="Start recovering state" Jan 17 01:06:33.054342 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 17 01:06:33.055842 containerd[1616]: time="2026-01-17T01:06:33.054942475Z" level=info msg="Start event monitor" Jan 17 01:06:33.055842 containerd[1616]: time="2026-01-17T01:06:33.054973060Z" level=info msg="Start snapshots syncer" Jan 17 01:06:33.055842 containerd[1616]: time="2026-01-17T01:06:33.054996506Z" level=info msg="Start cni network conf syncer for default" Jan 17 01:06:33.055842 containerd[1616]: time="2026-01-17T01:06:33.055012718Z" level=info msg="Start streaming server" Jan 17 01:06:33.055842 containerd[1616]: time="2026-01-17T01:06:33.055119463Z" level=info msg="containerd successfully booted in 0.171500s" Jan 17 01:06:33.059000 systemd[1]: Started containerd.service - containerd container runtime. Jan 17 01:06:33.071223 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 17 01:06:33.078153 polkitd[1688]: Started polkitd version 121 Jan 17 01:06:33.088226 polkitd[1688]: Loading rules from directory /etc/polkit-1/rules.d Jan 17 01:06:33.088315 polkitd[1688]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 17 01:06:33.089723 polkitd[1688]: Finished loading, compiling and executing 2 rules Jan 17 01:06:33.091997 dbus-daemon[1575]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 17 01:06:33.093018 polkitd[1688]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 17 01:06:33.093313 systemd[1]: Started polkit.service - Authorization Manager. Jan 17 01:06:33.104685 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 17 01:06:33.116354 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 17 01:06:33.125298 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 17 01:06:33.127069 systemd[1]: Reached target getty.target - Login Prompts. Jan 17 01:06:33.131325 systemd-hostnamed[1620]: Hostname set to (static) Jan 17 01:06:33.846890 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 01:06:33.860411 (kubelet)[1717]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 01:06:34.429263 kubelet[1717]: E0117 01:06:34.429170 1717 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 01:06:34.431487 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 01:06:34.431802 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
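
The kubelet exits because /var/lib/kubelet/config.yaml does not exist yet; kubeadm normally writes that file during init or join, so this failure and the restart loop that follows are expected until the node is joined. A minimal hand-written KubeletConfiguration, purely as a sketch of the file format (field values are illustrative, chosen to match the cgroupfs driver and static pod path reported later in this log, not taken from any real deployment):

  cat <<'EOF' >/var/lib/kubelet/config.yaml
  apiVersion: kubelet.config.k8s.io/v1beta1
  kind: KubeletConfiguration
  # Must match the container runtime's cgroup driver
  cgroupDriver: cgroupfs
  staticPodPath: /etc/kubernetes/manifests
  EOF
  systemctl restart kubelet
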
Jan 17 01:06:38.187223 login[1706]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying Jan 17 01:06:38.188092 login[1705]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 17 01:06:38.204003 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 17 01:06:38.218382 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 17 01:06:38.222890 systemd-logind[1592]: New session 1 of user core. Jan 17 01:06:38.243016 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 17 01:06:38.250317 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 17 01:06:38.268738 (systemd)[1737]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 17 01:06:38.411695 systemd[1737]: Queued start job for default target default.target. Jan 17 01:06:38.412343 systemd[1737]: Created slice app.slice - User Application Slice. Jan 17 01:06:38.412381 systemd[1737]: Reached target paths.target - Paths. Jan 17 01:06:38.412401 systemd[1737]: Reached target timers.target - Timers. Jan 17 01:06:38.418868 systemd[1737]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 17 01:06:38.429114 systemd[1737]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 17 01:06:38.429302 systemd[1737]: Reached target sockets.target - Sockets. Jan 17 01:06:38.429446 systemd[1737]: Reached target basic.target - Basic System. Jan 17 01:06:38.429669 systemd[1737]: Reached target default.target - Main User Target. Jan 17 01:06:38.429747 systemd[1737]: Startup finished in 152ms. Jan 17 01:06:38.430081 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 17 01:06:38.439389 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 17 01:06:39.189564 login[1706]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 17 01:06:39.197830 systemd-logind[1592]: New session 2 of user core. Jan 17 01:06:39.206418 systemd[1]: Started session-2.scope - Session 2 of User core. 
Jan 17 01:06:39.313072 coreos-metadata[1574]: Jan 17 01:06:39.312 WARN failed to locate config-drive, using the metadata service API instead Jan 17 01:06:39.337436 coreos-metadata[1574]: Jan 17 01:06:39.337 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Jan 17 01:06:39.344074 coreos-metadata[1574]: Jan 17 01:06:39.344 INFO Fetch failed with 404: resource not found Jan 17 01:06:39.344074 coreos-metadata[1574]: Jan 17 01:06:39.344 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jan 17 01:06:39.344848 coreos-metadata[1574]: Jan 17 01:06:39.344 INFO Fetch successful Jan 17 01:06:39.344999 coreos-metadata[1574]: Jan 17 01:06:39.344 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Jan 17 01:06:39.360967 coreos-metadata[1574]: Jan 17 01:06:39.360 INFO Fetch successful Jan 17 01:06:39.360967 coreos-metadata[1574]: Jan 17 01:06:39.360 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Jan 17 01:06:39.373624 coreos-metadata[1574]: Jan 17 01:06:39.373 INFO Fetch successful Jan 17 01:06:39.373624 coreos-metadata[1574]: Jan 17 01:06:39.373 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Jan 17 01:06:39.388795 coreos-metadata[1574]: Jan 17 01:06:39.388 INFO Fetch successful Jan 17 01:06:39.388942 coreos-metadata[1574]: Jan 17 01:06:39.388 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Jan 17 01:06:39.406932 coreos-metadata[1574]: Jan 17 01:06:39.406 INFO Fetch successful Jan 17 01:06:39.438963 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 17 01:06:39.440355 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 17 01:06:40.093227 coreos-metadata[1666]: Jan 17 01:06:40.093 WARN failed to locate config-drive, using the metadata service API instead Jan 17 01:06:40.117373 coreos-metadata[1666]: Jan 17 01:06:40.117 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Jan 17 01:06:40.206688 coreos-metadata[1666]: Jan 17 01:06:40.206 INFO Fetch successful Jan 17 01:06:40.206939 coreos-metadata[1666]: Jan 17 01:06:40.206 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 17 01:06:40.234828 coreos-metadata[1666]: Jan 17 01:06:40.234 INFO Fetch successful Jan 17 01:06:40.236981 unknown[1666]: wrote ssh authorized keys file for user: core Jan 17 01:06:40.256589 update-ssh-keys[1780]: Updated "/home/core/.ssh/authorized_keys" Jan 17 01:06:40.261102 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 17 01:06:40.265542 systemd[1]: Finished sshkeys.service. Jan 17 01:06:40.269255 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 17 01:06:40.269804 systemd[1]: Startup finished in 16.289s (kernel) + 12.695s (userspace) = 28.985s. Jan 17 01:06:40.566925 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 17 01:06:40.582166 systemd[1]: Started sshd@0-10.230.49.38:22-20.161.92.111:46304.service - OpenSSH per-connection server daemon (20.161.92.111:46304). Jan 17 01:06:41.161610 sshd[1788]: Accepted publickey for core from 20.161.92.111 port 46304 ssh2: RSA SHA256:e7YTQZHggQ0j4O1p7twKFyXfxguBGEIbATr9At9uxuc Jan 17 01:06:41.164222 sshd[1788]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 01:06:41.172537 systemd-logind[1592]: New session 3 of user core. 
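
coreos-metadata fails to find an OpenStack config-drive, falls back to the EC2-compatible metadata service, and logs each endpoint it fetches. The same data can be pulled by hand for debugging, using the URLs that appear in the log:

  curl -s http://169.254.169.254/latest/meta-data/hostname
  curl -s http://169.254.169.254/latest/meta-data/instance-id
  curl -s http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key
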
Jan 17 01:06:41.183277 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 17 01:06:41.667128 systemd[1]: Started sshd@1-10.230.49.38:22-20.161.92.111:46316.service - OpenSSH per-connection server daemon (20.161.92.111:46316). Jan 17 01:06:42.239041 sshd[1793]: Accepted publickey for core from 20.161.92.111 port 46316 ssh2: RSA SHA256:e7YTQZHggQ0j4O1p7twKFyXfxguBGEIbATr9At9uxuc Jan 17 01:06:42.241324 sshd[1793]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 01:06:42.249244 systemd-logind[1592]: New session 4 of user core. Jan 17 01:06:42.256180 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 17 01:06:42.643742 sshd[1793]: pam_unix(sshd:session): session closed for user core Jan 17 01:06:42.648899 systemd[1]: sshd@1-10.230.49.38:22-20.161.92.111:46316.service: Deactivated successfully. Jan 17 01:06:42.653433 systemd-logind[1592]: Session 4 logged out. Waiting for processes to exit. Jan 17 01:06:42.654211 systemd[1]: session-4.scope: Deactivated successfully. Jan 17 01:06:42.656046 systemd-logind[1592]: Removed session 4. Jan 17 01:06:42.743102 systemd[1]: Started sshd@2-10.230.49.38:22-20.161.92.111:35118.service - OpenSSH per-connection server daemon (20.161.92.111:35118). Jan 17 01:06:43.316131 sshd[1801]: Accepted publickey for core from 20.161.92.111 port 35118 ssh2: RSA SHA256:e7YTQZHggQ0j4O1p7twKFyXfxguBGEIbATr9At9uxuc Jan 17 01:06:43.318407 sshd[1801]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 01:06:43.326045 systemd-logind[1592]: New session 5 of user core. Jan 17 01:06:43.333196 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 17 01:06:43.720153 sshd[1801]: pam_unix(sshd:session): session closed for user core Jan 17 01:06:43.725383 systemd[1]: sshd@2-10.230.49.38:22-20.161.92.111:35118.service: Deactivated successfully. Jan 17 01:06:43.728618 systemd-logind[1592]: Session 5 logged out. Waiting for processes to exit. Jan 17 01:06:43.729639 systemd[1]: session-5.scope: Deactivated successfully. Jan 17 01:06:43.731615 systemd-logind[1592]: Removed session 5. Jan 17 01:06:43.821086 systemd[1]: Started sshd@3-10.230.49.38:22-20.161.92.111:35128.service - OpenSSH per-connection server daemon (20.161.92.111:35128). Jan 17 01:06:44.419971 sshd[1809]: Accepted publickey for core from 20.161.92.111 port 35128 ssh2: RSA SHA256:e7YTQZHggQ0j4O1p7twKFyXfxguBGEIbATr9At9uxuc Jan 17 01:06:44.422378 sshd[1809]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 01:06:44.429141 systemd-logind[1592]: New session 6 of user core. Jan 17 01:06:44.437172 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 17 01:06:44.437841 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 17 01:06:44.443235 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 01:06:44.652053 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 17 01:06:44.658526 (kubelet)[1825]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 01:06:44.783673 kubelet[1825]: E0117 01:06:44.783378 1825 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 01:06:44.789085 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 01:06:44.789703 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 01:06:44.849196 sshd[1809]: pam_unix(sshd:session): session closed for user core Jan 17 01:06:44.855182 systemd[1]: sshd@3-10.230.49.38:22-20.161.92.111:35128.service: Deactivated successfully. Jan 17 01:06:44.858556 systemd-logind[1592]: Session 6 logged out. Waiting for processes to exit. Jan 17 01:06:44.859145 systemd[1]: session-6.scope: Deactivated successfully. Jan 17 01:06:44.862031 systemd-logind[1592]: Removed session 6. Jan 17 01:06:44.944061 systemd[1]: Started sshd@4-10.230.49.38:22-20.161.92.111:35132.service - OpenSSH per-connection server daemon (20.161.92.111:35132). Jan 17 01:06:45.537152 sshd[1837]: Accepted publickey for core from 20.161.92.111 port 35132 ssh2: RSA SHA256:e7YTQZHggQ0j4O1p7twKFyXfxguBGEIbATr9At9uxuc Jan 17 01:06:45.539393 sshd[1837]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 01:06:45.547011 systemd-logind[1592]: New session 7 of user core. Jan 17 01:06:45.557288 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 17 01:06:45.883960 sudo[1841]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 17 01:06:45.884476 sudo[1841]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 01:06:45.900211 sudo[1841]: pam_unix(sudo:session): session closed for user root Jan 17 01:06:46.001404 sshd[1837]: pam_unix(sshd:session): session closed for user core Jan 17 01:06:46.006101 systemd[1]: sshd@4-10.230.49.38:22-20.161.92.111:35132.service: Deactivated successfully. Jan 17 01:06:46.011185 systemd-logind[1592]: Session 7 logged out. Waiting for processes to exit. Jan 17 01:06:46.012567 systemd[1]: session-7.scope: Deactivated successfully. Jan 17 01:06:46.014177 systemd-logind[1592]: Removed session 7. Jan 17 01:06:46.104359 systemd[1]: Started sshd@5-10.230.49.38:22-20.161.92.111:35140.service - OpenSSH per-connection server daemon (20.161.92.111:35140). Jan 17 01:06:46.665018 sshd[1846]: Accepted publickey for core from 20.161.92.111 port 35140 ssh2: RSA SHA256:e7YTQZHggQ0j4O1p7twKFyXfxguBGEIbATr9At9uxuc Jan 17 01:06:46.667760 sshd[1846]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 01:06:46.675778 systemd-logind[1592]: New session 8 of user core. Jan 17 01:06:46.683338 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jan 17 01:06:46.982018 sudo[1851]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 17 01:06:46.982515 sudo[1851]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 01:06:46.988018 sudo[1851]: pam_unix(sudo:session): session closed for user root Jan 17 01:06:46.995544 sudo[1850]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 17 01:06:46.996005 sudo[1850]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 01:06:47.020121 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 17 01:06:47.023052 auditctl[1854]: No rules Jan 17 01:06:47.023657 systemd[1]: audit-rules.service: Deactivated successfully. Jan 17 01:06:47.024109 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 17 01:06:47.033234 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 17 01:06:47.067202 augenrules[1873]: No rules Jan 17 01:06:47.068346 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 17 01:06:47.071166 sudo[1850]: pam_unix(sudo:session): session closed for user root Jan 17 01:06:47.162182 sshd[1846]: pam_unix(sshd:session): session closed for user core Jan 17 01:06:47.166441 systemd[1]: sshd@5-10.230.49.38:22-20.161.92.111:35140.service: Deactivated successfully. Jan 17 01:06:47.170594 systemd-logind[1592]: Session 8 logged out. Waiting for processes to exit. Jan 17 01:06:47.172147 systemd[1]: session-8.scope: Deactivated successfully. Jan 17 01:06:47.173238 systemd-logind[1592]: Removed session 8. Jan 17 01:06:47.261112 systemd[1]: Started sshd@6-10.230.49.38:22-20.161.92.111:35148.service - OpenSSH per-connection server daemon (20.161.92.111:35148). Jan 17 01:06:47.836462 sshd[1882]: Accepted publickey for core from 20.161.92.111 port 35148 ssh2: RSA SHA256:e7YTQZHggQ0j4O1p7twKFyXfxguBGEIbATr9At9uxuc Jan 17 01:06:47.838652 sshd[1882]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 01:06:47.845790 systemd-logind[1592]: New session 9 of user core. Jan 17 01:06:47.853137 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 17 01:06:48.155971 sudo[1886]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 17 01:06:48.156443 sudo[1886]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 01:06:48.835703 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 01:06:48.845083 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 01:06:48.895928 systemd[1]: Reloading requested from client PID 1920 ('systemctl') (unit session-9.scope)... Jan 17 01:06:48.896147 systemd[1]: Reloading... Jan 17 01:06:49.047860 zram_generator::config[1969]: No configuration found. Jan 17 01:06:49.240497 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 01:06:49.345431 systemd[1]: Reloading finished in 448 ms. Jan 17 01:06:49.414720 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 17 01:06:49.414948 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 17 01:06:49.415464 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
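
The sudo session above removes the stock rule files from /etc/audit/rules.d and restarts audit-rules, which is why augenrules reports "No rules" afterwards. To put a rule set back, drop a file into the same directory and reload; the watch rule below is only an example:

  cat <<'EOF' >/etc/audit/rules.d/10-sshd.rules
  # Record writes and attribute changes to the sshd configuration
  -w /etc/ssh/sshd_config -p wa -k sshd_config
  EOF
  augenrules --load
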
Jan 17 01:06:49.425181 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 01:06:49.612988 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 01:06:49.627275 (kubelet)[2038]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 17 01:06:49.682274 kubelet[2038]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 01:06:49.682274 kubelet[2038]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 17 01:06:49.682274 kubelet[2038]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 01:06:49.682967 kubelet[2038]: I0117 01:06:49.682390 2038 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 17 01:06:50.244866 kubelet[2038]: I0117 01:06:50.244795 2038 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 17 01:06:50.246258 kubelet[2038]: I0117 01:06:50.245871 2038 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 17 01:06:50.246334 kubelet[2038]: I0117 01:06:50.246272 2038 server.go:954] "Client rotation is on, will bootstrap in background" Jan 17 01:06:50.278797 kubelet[2038]: I0117 01:06:50.278655 2038 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 01:06:50.287030 kubelet[2038]: E0117 01:06:50.286980 2038 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 17 01:06:50.287030 kubelet[2038]: I0117 01:06:50.287028 2038 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 17 01:06:50.294808 kubelet[2038]: I0117 01:06:50.294769 2038 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 17 01:06:50.298120 kubelet[2038]: I0117 01:06:50.297806 2038 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 17 01:06:50.298120 kubelet[2038]: I0117 01:06:50.297867 2038 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.230.49.38","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Jan 17 01:06:50.298479 kubelet[2038]: I0117 01:06:50.298131 2038 topology_manager.go:138] "Creating topology manager with none policy" Jan 17 01:06:50.298479 kubelet[2038]: I0117 01:06:50.298148 2038 container_manager_linux.go:304] "Creating device plugin manager" Jan 17 01:06:50.298479 kubelet[2038]: I0117 01:06:50.298336 2038 state_mem.go:36] "Initialized new in-memory state store" Jan 17 01:06:50.303893 kubelet[2038]: I0117 01:06:50.303488 2038 kubelet.go:446] "Attempting to sync node with API server" Jan 17 01:06:50.303893 kubelet[2038]: I0117 01:06:50.303532 2038 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 17 01:06:50.303893 kubelet[2038]: I0117 01:06:50.303568 2038 kubelet.go:352] "Adding apiserver pod source" Jan 17 01:06:50.303893 kubelet[2038]: I0117 01:06:50.303595 2038 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 17 01:06:50.304808 kubelet[2038]: E0117 01:06:50.304783 2038 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 01:06:50.304990 kubelet[2038]: E0117 01:06:50.304968 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 01:06:50.306935 kubelet[2038]: I0117 01:06:50.306885 2038 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 17 01:06:50.307725 kubelet[2038]: I0117 01:06:50.307556 2038 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 17 01:06:50.308325 kubelet[2038]: W0117 01:06:50.308304 2038 probe.go:272] 
Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 17 01:06:50.313794 kubelet[2038]: I0117 01:06:50.311663 2038 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 17 01:06:50.313794 kubelet[2038]: I0117 01:06:50.311797 2038 server.go:1287] "Started kubelet" Jan 17 01:06:50.313794 kubelet[2038]: I0117 01:06:50.312026 2038 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 17 01:06:50.316200 kubelet[2038]: I0117 01:06:50.316178 2038 server.go:479] "Adding debug handlers to kubelet server" Jan 17 01:06:50.320002 kubelet[2038]: I0117 01:06:50.319885 2038 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 17 01:06:50.320868 kubelet[2038]: I0117 01:06:50.320808 2038 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 17 01:06:50.323583 kubelet[2038]: W0117 01:06:50.322811 2038 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "10.230.49.38" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Jan 17 01:06:50.323583 kubelet[2038]: E0117 01:06:50.322890 2038 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"10.230.49.38\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" Jan 17 01:06:50.323682 kubelet[2038]: W0117 01:06:50.323656 2038 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Jan 17 01:06:50.323727 kubelet[2038]: E0117 01:06:50.323684 2038 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Jan 17 01:06:50.328584 kubelet[2038]: I0117 01:06:50.328552 2038 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 17 01:06:50.337489 kubelet[2038]: I0117 01:06:50.337437 2038 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 17 01:06:50.339347 kubelet[2038]: E0117 01:06:50.338023 2038 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.230.49.38.188b5f40bc1e2651 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.230.49.38,UID:10.230.49.38,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:10.230.49.38,},FirstTimestamp:2026-01-17 01:06:50.311689809 +0000 UTC m=+0.678256987,LastTimestamp:2026-01-17 01:06:50.311689809 +0000 UTC m=+0.678256987,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.230.49.38,}" Jan 17 01:06:50.340803 kubelet[2038]: E0117 01:06:50.340772 2038 kubelet_node_status.go:466] "Error getting the current node from 
lister" err="node \"10.230.49.38\" not found" Jan 17 01:06:50.341174 kubelet[2038]: I0117 01:06:50.341151 2038 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 17 01:06:50.341602 kubelet[2038]: I0117 01:06:50.341579 2038 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 17 01:06:50.341798 kubelet[2038]: I0117 01:06:50.341781 2038 reconciler.go:26] "Reconciler: start to sync state" Jan 17 01:06:50.343251 kubelet[2038]: I0117 01:06:50.343204 2038 factory.go:221] Registration of the systemd container factory successfully Jan 17 01:06:50.343554 kubelet[2038]: I0117 01:06:50.343519 2038 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 17 01:06:50.344472 kubelet[2038]: E0117 01:06:50.344442 2038 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 17 01:06:50.346652 kubelet[2038]: I0117 01:06:50.346633 2038 factory.go:221] Registration of the containerd container factory successfully Jan 17 01:06:50.359572 kubelet[2038]: E0117 01:06:50.359539 2038 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.230.49.38\" not found" node="10.230.49.38" Jan 17 01:06:50.389106 kubelet[2038]: I0117 01:06:50.389067 2038 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 17 01:06:50.389294 kubelet[2038]: I0117 01:06:50.389275 2038 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 17 01:06:50.389445 kubelet[2038]: I0117 01:06:50.389427 2038 state_mem.go:36] "Initialized new in-memory state store" Jan 17 01:06:50.392770 kubelet[2038]: I0117 01:06:50.392740 2038 policy_none.go:49] "None policy: Start" Jan 17 01:06:50.392934 kubelet[2038]: I0117 01:06:50.392914 2038 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 17 01:06:50.393034 kubelet[2038]: I0117 01:06:50.393019 2038 state_mem.go:35] "Initializing new in-memory state store" Jan 17 01:06:50.400172 kubelet[2038]: I0117 01:06:50.400148 2038 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 17 01:06:50.400809 kubelet[2038]: I0117 01:06:50.400506 2038 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 17 01:06:50.400809 kubelet[2038]: I0117 01:06:50.400537 2038 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 17 01:06:50.403168 kubelet[2038]: I0117 01:06:50.403148 2038 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 17 01:06:50.415397 kubelet[2038]: E0117 01:06:50.415206 2038 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 17 01:06:50.415871 kubelet[2038]: E0117 01:06:50.415837 2038 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.230.49.38\" not found" Jan 17 01:06:50.446050 kubelet[2038]: I0117 01:06:50.445829 2038 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 17 01:06:50.447468 kubelet[2038]: I0117 01:06:50.447414 2038 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 17 01:06:50.448627 kubelet[2038]: I0117 01:06:50.447881 2038 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 17 01:06:50.448627 kubelet[2038]: I0117 01:06:50.447918 2038 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 17 01:06:50.448627 kubelet[2038]: I0117 01:06:50.447931 2038 kubelet.go:2382] "Starting kubelet main sync loop" Jan 17 01:06:50.448627 kubelet[2038]: E0117 01:06:50.448122 2038 kubelet.go:2406] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Jan 17 01:06:50.502164 kubelet[2038]: I0117 01:06:50.501981 2038 kubelet_node_status.go:75] "Attempting to register node" node="10.230.49.38" Jan 17 01:06:50.510216 kubelet[2038]: I0117 01:06:50.510174 2038 kubelet_node_status.go:78] "Successfully registered node" node="10.230.49.38" Jan 17 01:06:50.510352 kubelet[2038]: E0117 01:06:50.510334 2038 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"10.230.49.38\": node \"10.230.49.38\" not found" Jan 17 01:06:50.529294 kubelet[2038]: E0117 01:06:50.529255 2038 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.230.49.38\" not found" Jan 17 01:06:50.630176 kubelet[2038]: E0117 01:06:50.630126 2038 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.230.49.38\" not found" Jan 17 01:06:50.652596 sudo[1886]: pam_unix(sudo:session): session closed for user root Jan 17 01:06:50.731263 kubelet[2038]: E0117 01:06:50.731191 2038 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.230.49.38\" not found" Jan 17 01:06:50.745360 sshd[1882]: pam_unix(sshd:session): session closed for user core Jan 17 01:06:50.750849 systemd[1]: sshd@6-10.230.49.38:22-20.161.92.111:35148.service: Deactivated successfully. Jan 17 01:06:50.755085 systemd-logind[1592]: Session 9 logged out. Waiting for processes to exit. Jan 17 01:06:50.757112 systemd[1]: session-9.scope: Deactivated successfully. Jan 17 01:06:50.759988 systemd-logind[1592]: Removed session 9. 
Jan 17 01:06:50.832164 kubelet[2038]: E0117 01:06:50.832054 2038 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.230.49.38\" not found" Jan 17 01:06:50.932931 kubelet[2038]: E0117 01:06:50.932821 2038 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.230.49.38\" not found" Jan 17 01:06:51.034167 kubelet[2038]: E0117 01:06:51.033692 2038 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.230.49.38\" not found" Jan 17 01:06:51.134631 kubelet[2038]: E0117 01:06:51.134558 2038 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.230.49.38\" not found" Jan 17 01:06:51.235797 kubelet[2038]: E0117 01:06:51.235640 2038 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.230.49.38\" not found" Jan 17 01:06:51.248389 kubelet[2038]: I0117 01:06:51.248293 2038 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 17 01:06:51.248570 kubelet[2038]: W0117 01:06:51.248540 2038 reflector.go:492] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 17 01:06:51.248650 kubelet[2038]: W0117 01:06:51.248594 2038 reflector.go:492] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 17 01:06:51.248826 kubelet[2038]: W0117 01:06:51.248540 2038 reflector.go:492] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 17 01:06:51.306229 kubelet[2038]: E0117 01:06:51.306071 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 01:06:51.336206 kubelet[2038]: E0117 01:06:51.336132 2038 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.230.49.38\" not found" Jan 17 01:06:51.436940 kubelet[2038]: E0117 01:06:51.436829 2038 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.230.49.38\" not found" Jan 17 01:06:51.537505 kubelet[2038]: E0117 01:06:51.537412 2038 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.230.49.38\" not found" Jan 17 01:06:51.638265 kubelet[2038]: E0117 01:06:51.638186 2038 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.230.49.38\" not found" Jan 17 01:06:51.740111 kubelet[2038]: I0117 01:06:51.740050 2038 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Jan 17 01:06:51.740849 kubelet[2038]: I0117 01:06:51.740832 2038 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Jan 17 01:06:51.740909 containerd[1616]: time="2026-01-17T01:06:51.740594141Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
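The final kubelet entries above show the node receiving pod CIDR 192.168.1.0/24 and forwarding it to the CRI runtime. A minimal sketch of what that allocation amounts to, using only Go's standard library (an illustration, not kubelet or containerd code):

// Minimal illustration of the PodCIDR value logged above.
package main

import (
	"fmt"
	"net"
)

func main() {
	// CIDR copied from the "Updating Pod CIDR" log entry.
	ip, ipnet, err := net.ParseCIDR("192.168.1.0/24")
	if err != nil {
		panic(err)
	}
	ones, bits := ipnet.Mask.Size()
	// A /24 leaves 2^(32-24) = 256 addresses for pod IPs on this node
	// (fewer in practice once network/gateway/broadcast addresses are reserved).
	fmt.Printf("network %s, first address %s, %d host bits -> %d addresses\n",
		ipnet, ip, bits-ones, 1<<uint(bits-ones))
}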
Jan 17 01:06:52.306594 kubelet[2038]: I0117 01:06:52.306517 2038 apiserver.go:52] "Watching apiserver" Jan 17 01:06:52.306857 kubelet[2038]: E0117 01:06:52.306511 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 01:06:52.343704 kubelet[2038]: I0117 01:06:52.343639 2038 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 17 01:06:52.356004 kubelet[2038]: I0117 01:06:52.355927 2038 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/06329847-eece-4917-964b-7ece0c830f5c-clustermesh-secrets\") pod \"cilium-mjgcl\" (UID: \"06329847-eece-4917-964b-7ece0c830f5c\") " pod="kube-system/cilium-mjgcl" Jan 17 01:06:52.356004 kubelet[2038]: I0117 01:06:52.355988 2038 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9gzv\" (UniqueName: \"kubernetes.io/projected/fd498511-4cc0-4030-b6e9-878ac66e986a-kube-api-access-j9gzv\") pod \"kube-proxy-4tgsf\" (UID: \"fd498511-4cc0-4030-b6e9-878ac66e986a\") " pod="kube-system/kube-proxy-4tgsf" Jan 17 01:06:52.356223 kubelet[2038]: I0117 01:06:52.356023 2038 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/06329847-eece-4917-964b-7ece0c830f5c-hubble-tls\") pod \"cilium-mjgcl\" (UID: \"06329847-eece-4917-964b-7ece0c830f5c\") " pod="kube-system/cilium-mjgcl" Jan 17 01:06:52.356223 kubelet[2038]: I0117 01:06:52.356051 2038 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fd498511-4cc0-4030-b6e9-878ac66e986a-xtables-lock\") pod \"kube-proxy-4tgsf\" (UID: \"fd498511-4cc0-4030-b6e9-878ac66e986a\") " pod="kube-system/kube-proxy-4tgsf" Jan 17 01:06:52.356223 kubelet[2038]: I0117 01:06:52.356076 2038 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/06329847-eece-4917-964b-7ece0c830f5c-cilium-run\") pod \"cilium-mjgcl\" (UID: \"06329847-eece-4917-964b-7ece0c830f5c\") " pod="kube-system/cilium-mjgcl" Jan 17 01:06:52.356223 kubelet[2038]: I0117 01:06:52.356111 2038 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/06329847-eece-4917-964b-7ece0c830f5c-cilium-cgroup\") pod \"cilium-mjgcl\" (UID: \"06329847-eece-4917-964b-7ece0c830f5c\") " pod="kube-system/cilium-mjgcl" Jan 17 01:06:52.356223 kubelet[2038]: I0117 01:06:52.356136 2038 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/06329847-eece-4917-964b-7ece0c830f5c-cni-path\") pod \"cilium-mjgcl\" (UID: \"06329847-eece-4917-964b-7ece0c830f5c\") " pod="kube-system/cilium-mjgcl" Jan 17 01:06:52.356223 kubelet[2038]: I0117 01:06:52.356160 2038 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/06329847-eece-4917-964b-7ece0c830f5c-xtables-lock\") pod \"cilium-mjgcl\" (UID: \"06329847-eece-4917-964b-7ece0c830f5c\") " pod="kube-system/cilium-mjgcl" Jan 17 01:06:52.356462 kubelet[2038]: I0117 01:06:52.356197 2038 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/06329847-eece-4917-964b-7ece0c830f5c-host-proc-sys-net\") pod \"cilium-mjgcl\" (UID: \"06329847-eece-4917-964b-7ece0c830f5c\") " pod="kube-system/cilium-mjgcl" Jan 17 01:06:52.356462 kubelet[2038]: I0117 01:06:52.356234 2038 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/06329847-eece-4917-964b-7ece0c830f5c-hostproc\") pod \"cilium-mjgcl\" (UID: \"06329847-eece-4917-964b-7ece0c830f5c\") " pod="kube-system/cilium-mjgcl" Jan 17 01:06:52.356462 kubelet[2038]: I0117 01:06:52.356262 2038 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/06329847-eece-4917-964b-7ece0c830f5c-cilium-config-path\") pod \"cilium-mjgcl\" (UID: \"06329847-eece-4917-964b-7ece0c830f5c\") " pod="kube-system/cilium-mjgcl" Jan 17 01:06:52.356462 kubelet[2038]: I0117 01:06:52.356308 2038 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/06329847-eece-4917-964b-7ece0c830f5c-host-proc-sys-kernel\") pod \"cilium-mjgcl\" (UID: \"06329847-eece-4917-964b-7ece0c830f5c\") " pod="kube-system/cilium-mjgcl" Jan 17 01:06:52.356462 kubelet[2038]: I0117 01:06:52.356335 2038 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/fd498511-4cc0-4030-b6e9-878ac66e986a-kube-proxy\") pod \"kube-proxy-4tgsf\" (UID: \"fd498511-4cc0-4030-b6e9-878ac66e986a\") " pod="kube-system/kube-proxy-4tgsf" Jan 17 01:06:52.356462 kubelet[2038]: I0117 01:06:52.356408 2038 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/06329847-eece-4917-964b-7ece0c830f5c-bpf-maps\") pod \"cilium-mjgcl\" (UID: \"06329847-eece-4917-964b-7ece0c830f5c\") " pod="kube-system/cilium-mjgcl" Jan 17 01:06:52.356677 kubelet[2038]: I0117 01:06:52.356439 2038 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/06329847-eece-4917-964b-7ece0c830f5c-etc-cni-netd\") pod \"cilium-mjgcl\" (UID: \"06329847-eece-4917-964b-7ece0c830f5c\") " pod="kube-system/cilium-mjgcl" Jan 17 01:06:52.356677 kubelet[2038]: I0117 01:06:52.356472 2038 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/06329847-eece-4917-964b-7ece0c830f5c-lib-modules\") pod \"cilium-mjgcl\" (UID: \"06329847-eece-4917-964b-7ece0c830f5c\") " pod="kube-system/cilium-mjgcl" Jan 17 01:06:52.356677 kubelet[2038]: I0117 01:06:52.356496 2038 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qnf6\" (UniqueName: \"kubernetes.io/projected/06329847-eece-4917-964b-7ece0c830f5c-kube-api-access-7qnf6\") pod \"cilium-mjgcl\" (UID: \"06329847-eece-4917-964b-7ece0c830f5c\") " pod="kube-system/cilium-mjgcl" Jan 17 01:06:52.356677 kubelet[2038]: I0117 01:06:52.356525 2038 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/fd498511-4cc0-4030-b6e9-878ac66e986a-lib-modules\") pod \"kube-proxy-4tgsf\" (UID: \"fd498511-4cc0-4030-b6e9-878ac66e986a\") " pod="kube-system/kube-proxy-4tgsf" Jan 17 01:06:52.617713 containerd[1616]: time="2026-01-17T01:06:52.617193460Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4tgsf,Uid:fd498511-4cc0-4030-b6e9-878ac66e986a,Namespace:kube-system,Attempt:0,}" Jan 17 01:06:52.617976 containerd[1616]: time="2026-01-17T01:06:52.617823424Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mjgcl,Uid:06329847-eece-4917-964b-7ece0c830f5c,Namespace:kube-system,Attempt:0,}" Jan 17 01:06:53.272070 containerd[1616]: time="2026-01-17T01:06:53.271971072Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 01:06:53.274083 containerd[1616]: time="2026-01-17T01:06:53.274042936Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 17 01:06:53.275773 containerd[1616]: time="2026-01-17T01:06:53.274946623Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 01:06:53.275973 containerd[1616]: time="2026-01-17T01:06:53.275928835Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 01:06:53.276479 containerd[1616]: time="2026-01-17T01:06:53.276428141Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Jan 17 01:06:53.280581 containerd[1616]: time="2026-01-17T01:06:53.279880309Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 01:06:53.282386 containerd[1616]: time="2026-01-17T01:06:53.282321581Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 664.353242ms" Jan 17 01:06:53.284208 containerd[1616]: time="2026-01-17T01:06:53.284166722Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 666.0284ms" Jan 17 01:06:53.308396 kubelet[2038]: E0117 01:06:53.307361 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 01:06:53.432394 containerd[1616]: time="2026-01-17T01:06:53.432253298Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 01:06:53.433458 containerd[1616]: time="2026-01-17T01:06:53.433320315Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 01:06:53.433600 containerd[1616]: time="2026-01-17T01:06:53.433437752Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 01:06:53.434214 containerd[1616]: time="2026-01-17T01:06:53.434157267Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 01:06:53.436798 containerd[1616]: time="2026-01-17T01:06:53.436699445Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 01:06:53.437766 containerd[1616]: time="2026-01-17T01:06:53.437497175Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 01:06:53.437766 containerd[1616]: time="2026-01-17T01:06:53.437524506Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 01:06:53.439188 containerd[1616]: time="2026-01-17T01:06:53.437657130Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 01:06:53.477150 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3487501310.mount: Deactivated successfully. Jan 17 01:06:53.566390 containerd[1616]: time="2026-01-17T01:06:53.566261312Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mjgcl,Uid:06329847-eece-4917-964b-7ece0c830f5c,Namespace:kube-system,Attempt:0,} returns sandbox id \"cb6fe2a9ee94b9fb4a2cec656b4a91c3d1a02d7a7a0bfe5510b8ed0544ec0e31\"" Jan 17 01:06:53.572496 containerd[1616]: time="2026-01-17T01:06:53.572463490Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 17 01:06:53.591893 containerd[1616]: time="2026-01-17T01:06:53.591833624Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4tgsf,Uid:fd498511-4cc0-4030-b6e9-878ac66e986a,Namespace:kube-system,Attempt:0,} returns sandbox id \"696f18f18515b025d1a2519801d63101b19f3ff2448e2a19ef53fa9b725c185b\"" Jan 17 01:06:54.308148 kubelet[2038]: E0117 01:06:54.308068 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 01:06:55.309017 kubelet[2038]: E0117 01:06:55.308944 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 01:06:56.311922 kubelet[2038]: E0117 01:06:56.311806 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 01:06:57.312401 kubelet[2038]: E0117 01:06:57.312278 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 01:06:58.312937 kubelet[2038]: E0117 01:06:58.312832 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 01:06:59.313895 kubelet[2038]: E0117 01:06:59.313823 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 01:07:00.314587 kubelet[2038]: E0117 01:07:00.314535 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 
01:07:00.315896 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2470326078.mount: Deactivated successfully. Jan 17 01:07:01.315704 kubelet[2038]: E0117 01:07:01.315638 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 01:07:02.316547 kubelet[2038]: E0117 01:07:02.316413 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 01:07:03.137209 containerd[1616]: time="2026-01-17T01:07:03.137111914Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 01:07:03.138778 containerd[1616]: time="2026-01-17T01:07:03.138710334Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jan 17 01:07:03.140383 containerd[1616]: time="2026-01-17T01:07:03.139816478Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 01:07:03.143319 containerd[1616]: time="2026-01-17T01:07:03.143253519Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 9.570739374s" Jan 17 01:07:03.143429 containerd[1616]: time="2026-01-17T01:07:03.143322916Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jan 17 01:07:03.145411 containerd[1616]: time="2026-01-17T01:07:03.145082914Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\"" Jan 17 01:07:03.148061 containerd[1616]: time="2026-01-17T01:07:03.148017040Z" level=info msg="CreateContainer within sandbox \"cb6fe2a9ee94b9fb4a2cec656b4a91c3d1a02d7a7a0bfe5510b8ed0544ec0e31\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 17 01:07:03.161701 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1812284626.mount: Deactivated successfully. Jan 17 01:07:03.166986 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
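The cilium v1.12.5 pull above reports 166719855 bytes fetched in 9.570739374s. Assuming the logged size reflects the bytes actually transferred (an assumption; the log does not say so explicitly), a quick back-of-the-envelope throughput check:

// Rough throughput estimate for the cilium image pull logged above.
// Assumes the logged "size" is the number of bytes actually transferred.
package main

import "fmt"

func main() {
	const (
		bytesPulled = 166719855.0 // "size" from the Pulled image entry
		seconds     = 9.570739374 // duration from the same entry
	)
	mib := bytesPulled / (1024 * 1024)
	fmt.Printf("%.1f MiB in %.2fs ≈ %.1f MiB/s\n", mib, seconds, mib/seconds)
}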
Jan 17 01:07:03.170578 containerd[1616]: time="2026-01-17T01:07:03.170524479Z" level=info msg="CreateContainer within sandbox \"cb6fe2a9ee94b9fb4a2cec656b4a91c3d1a02d7a7a0bfe5510b8ed0544ec0e31\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"587c48b826ac234d5d4704e1c2d961c1f1b4270fdb1ca8e1ff44f61a6a84822e\"" Jan 17 01:07:03.172793 containerd[1616]: time="2026-01-17T01:07:03.172482365Z" level=info msg="StartContainer for \"587c48b826ac234d5d4704e1c2d961c1f1b4270fdb1ca8e1ff44f61a6a84822e\"" Jan 17 01:07:03.254066 containerd[1616]: time="2026-01-17T01:07:03.254017468Z" level=info msg="StartContainer for \"587c48b826ac234d5d4704e1c2d961c1f1b4270fdb1ca8e1ff44f61a6a84822e\" returns successfully" Jan 17 01:07:03.317611 kubelet[2038]: E0117 01:07:03.317507 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 01:07:03.571463 containerd[1616]: time="2026-01-17T01:07:03.571271308Z" level=info msg="shim disconnected" id=587c48b826ac234d5d4704e1c2d961c1f1b4270fdb1ca8e1ff44f61a6a84822e namespace=k8s.io Jan 17 01:07:03.572014 containerd[1616]: time="2026-01-17T01:07:03.571439205Z" level=warning msg="cleaning up after shim disconnected" id=587c48b826ac234d5d4704e1c2d961c1f1b4270fdb1ca8e1ff44f61a6a84822e namespace=k8s.io Jan 17 01:07:03.572014 containerd[1616]: time="2026-01-17T01:07:03.571787674Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 01:07:04.159307 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-587c48b826ac234d5d4704e1c2d961c1f1b4270fdb1ca8e1ff44f61a6a84822e-rootfs.mount: Deactivated successfully. Jan 17 01:07:04.318180 kubelet[2038]: E0117 01:07:04.318117 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 01:07:04.514789 containerd[1616]: time="2026-01-17T01:07:04.513863711Z" level=info msg="CreateContainer within sandbox \"cb6fe2a9ee94b9fb4a2cec656b4a91c3d1a02d7a7a0bfe5510b8ed0544ec0e31\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 17 01:07:04.531897 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1622854035.mount: Deactivated successfully. Jan 17 01:07:04.545418 containerd[1616]: time="2026-01-17T01:07:04.545282009Z" level=info msg="CreateContainer within sandbox \"cb6fe2a9ee94b9fb4a2cec656b4a91c3d1a02d7a7a0bfe5510b8ed0544ec0e31\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"f1fad73d9a1a7c54157ead6d138f2881bae8d5e805bc0ed0c4816201c2ff9b99\"" Jan 17 01:07:04.546683 containerd[1616]: time="2026-01-17T01:07:04.546604654Z" level=info msg="StartContainer for \"f1fad73d9a1a7c54157ead6d138f2881bae8d5e805bc0ed0c4816201c2ff9b99\"" Jan 17 01:07:04.665406 containerd[1616]: time="2026-01-17T01:07:04.665353974Z" level=info msg="StartContainer for \"f1fad73d9a1a7c54157ead6d138f2881bae8d5e805bc0ed0c4816201c2ff9b99\" returns successfully" Jan 17 01:07:04.685264 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 17 01:07:04.685816 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 17 01:07:04.685918 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 17 01:07:04.695551 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 17 01:07:04.730495 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jan 17 01:07:04.800505 containerd[1616]: time="2026-01-17T01:07:04.800150174Z" level=info msg="shim disconnected" id=f1fad73d9a1a7c54157ead6d138f2881bae8d5e805bc0ed0c4816201c2ff9b99 namespace=k8s.io Jan 17 01:07:04.800505 containerd[1616]: time="2026-01-17T01:07:04.800212055Z" level=warning msg="cleaning up after shim disconnected" id=f1fad73d9a1a7c54157ead6d138f2881bae8d5e805bc0ed0c4816201c2ff9b99 namespace=k8s.io Jan 17 01:07:04.800505 containerd[1616]: time="2026-01-17T01:07:04.800226333Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 01:07:05.158235 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f1fad73d9a1a7c54157ead6d138f2881bae8d5e805bc0ed0c4816201c2ff9b99-rootfs.mount: Deactivated successfully. Jan 17 01:07:05.273916 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1346517806.mount: Deactivated successfully. Jan 17 01:07:05.319346 kubelet[2038]: E0117 01:07:05.319280 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 01:07:05.515814 containerd[1616]: time="2026-01-17T01:07:05.515412915Z" level=info msg="CreateContainer within sandbox \"cb6fe2a9ee94b9fb4a2cec656b4a91c3d1a02d7a7a0bfe5510b8ed0544ec0e31\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 17 01:07:05.546596 containerd[1616]: time="2026-01-17T01:07:05.545920959Z" level=info msg="CreateContainer within sandbox \"cb6fe2a9ee94b9fb4a2cec656b4a91c3d1a02d7a7a0bfe5510b8ed0544ec0e31\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"940a17686b05d57c67a8a2bcc523223ae9aa91a4425209afd64900c3e167802e\"" Jan 17 01:07:05.547196 containerd[1616]: time="2026-01-17T01:07:05.547164057Z" level=info msg="StartContainer for \"940a17686b05d57c67a8a2bcc523223ae9aa91a4425209afd64900c3e167802e\"" Jan 17 01:07:05.632217 containerd[1616]: time="2026-01-17T01:07:05.632161238Z" level=info msg="StartContainer for \"940a17686b05d57c67a8a2bcc523223ae9aa91a4425209afd64900c3e167802e\" returns successfully" Jan 17 01:07:05.743605 containerd[1616]: time="2026-01-17T01:07:05.743529693Z" level=info msg="shim disconnected" id=940a17686b05d57c67a8a2bcc523223ae9aa91a4425209afd64900c3e167802e namespace=k8s.io Jan 17 01:07:05.743605 containerd[1616]: time="2026-01-17T01:07:05.743600978Z" level=warning msg="cleaning up after shim disconnected" id=940a17686b05d57c67a8a2bcc523223ae9aa91a4425209afd64900c3e167802e namespace=k8s.io Jan 17 01:07:05.743605 containerd[1616]: time="2026-01-17T01:07:05.743617810Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 01:07:06.157877 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3636556144.mount: Deactivated successfully. 
Jan 17 01:07:06.320132 kubelet[2038]: E0117 01:07:06.320067 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 01:07:06.419472 containerd[1616]: time="2026-01-17T01:07:06.419285050Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 01:07:06.421100 containerd[1616]: time="2026-01-17T01:07:06.421049072Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.11: active requests=0, bytes read=31161907" Jan 17 01:07:06.422159 containerd[1616]: time="2026-01-17T01:07:06.422115149Z" level=info msg="ImageCreate event name:\"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 01:07:06.424378 containerd[1616]: time="2026-01-17T01:07:06.424318841Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 01:07:06.425512 containerd[1616]: time="2026-01-17T01:07:06.425350876Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" with image id \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\", repo tag \"registry.k8s.io/kube-proxy:v1.32.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\", size \"31160918\" in 3.280222017s" Jan 17 01:07:06.425512 containerd[1616]: time="2026-01-17T01:07:06.425394873Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\" returns image reference \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\"" Jan 17 01:07:06.428421 containerd[1616]: time="2026-01-17T01:07:06.428387298Z" level=info msg="CreateContainer within sandbox \"696f18f18515b025d1a2519801d63101b19f3ff2448e2a19ef53fa9b725c185b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 17 01:07:06.445160 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3435727223.mount: Deactivated successfully. 
Jan 17 01:07:06.447137 containerd[1616]: time="2026-01-17T01:07:06.447018055Z" level=info msg="CreateContainer within sandbox \"696f18f18515b025d1a2519801d63101b19f3ff2448e2a19ef53fa9b725c185b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"26d0c9d988f61134ee23c6eb0fb5a1db870ffbb55c104e221dd0d71eeb2f9d1c\"" Jan 17 01:07:06.447828 containerd[1616]: time="2026-01-17T01:07:06.447789480Z" level=info msg="StartContainer for \"26d0c9d988f61134ee23c6eb0fb5a1db870ffbb55c104e221dd0d71eeb2f9d1c\"" Jan 17 01:07:06.523943 containerd[1616]: time="2026-01-17T01:07:06.523839192Z" level=info msg="CreateContainer within sandbox \"cb6fe2a9ee94b9fb4a2cec656b4a91c3d1a02d7a7a0bfe5510b8ed0544ec0e31\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 17 01:07:06.542262 containerd[1616]: time="2026-01-17T01:07:06.540895557Z" level=info msg="StartContainer for \"26d0c9d988f61134ee23c6eb0fb5a1db870ffbb55c104e221dd0d71eeb2f9d1c\" returns successfully" Jan 17 01:07:06.548273 containerd[1616]: time="2026-01-17T01:07:06.548228032Z" level=info msg="CreateContainer within sandbox \"cb6fe2a9ee94b9fb4a2cec656b4a91c3d1a02d7a7a0bfe5510b8ed0544ec0e31\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"aa725ea584395e85de29f64eeef5309f92a1ab9c6098a6d671f525c6bb2f64d7\"" Jan 17 01:07:06.550071 containerd[1616]: time="2026-01-17T01:07:06.549032088Z" level=info msg="StartContainer for \"aa725ea584395e85de29f64eeef5309f92a1ab9c6098a6d671f525c6bb2f64d7\"" Jan 17 01:07:06.640282 containerd[1616]: time="2026-01-17T01:07:06.640198300Z" level=info msg="StartContainer for \"aa725ea584395e85de29f64eeef5309f92a1ab9c6098a6d671f525c6bb2f64d7\" returns successfully" Jan 17 01:07:06.826877 containerd[1616]: time="2026-01-17T01:07:06.826498337Z" level=info msg="shim disconnected" id=aa725ea584395e85de29f64eeef5309f92a1ab9c6098a6d671f525c6bb2f64d7 namespace=k8s.io Jan 17 01:07:06.826877 containerd[1616]: time="2026-01-17T01:07:06.826620466Z" level=warning msg="cleaning up after shim disconnected" id=aa725ea584395e85de29f64eeef5309f92a1ab9c6098a6d671f525c6bb2f64d7 namespace=k8s.io Jan 17 01:07:06.826877 containerd[1616]: time="2026-01-17T01:07:06.826637581Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 01:07:07.321197 kubelet[2038]: E0117 01:07:07.321100 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 01:07:07.530646 containerd[1616]: time="2026-01-17T01:07:07.530433128Z" level=info msg="CreateContainer within sandbox \"cb6fe2a9ee94b9fb4a2cec656b4a91c3d1a02d7a7a0bfe5510b8ed0544ec0e31\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 17 01:07:07.538545 kubelet[2038]: I0117 01:07:07.538273 2038 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-4tgsf" podStartSLOduration=4.705186108 podStartE2EDuration="17.538230154s" podCreationTimestamp="2026-01-17 01:06:50 +0000 UTC" firstStartedPulling="2026-01-17 01:06:53.593390513 +0000 UTC m=+3.959957697" lastFinishedPulling="2026-01-17 01:07:06.426434558 +0000 UTC m=+16.793001743" observedRunningTime="2026-01-17 01:07:07.537764572 +0000 UTC m=+17.904331767" watchObservedRunningTime="2026-01-17 01:07:07.538230154 +0000 UTC m=+17.904797346" Jan 17 01:07:07.548999 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3335773139.mount: Deactivated successfully. 
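The pod_startup_latency_tracker entry above for kube-proxy-4tgsf reports both podStartE2EDuration and podStartSLOduration. A small sketch, using only the standard library and the timestamps from that same entry (the monotonic "m=+" suffixes dropped before parsing), showing how the two figures relate: E2E is observed-running minus creation, and the SLO figure is approximately E2E minus the image-pull window; the final digit differs by a nanosecond because the kubelet computes from the monotonic readings.

// Reproduce the kube-proxy-4tgsf startup durations reported by
// pod_startup_latency_tracker above, from the wall-clock timestamps
// in the same entry.
package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}

	created := parse("2026-01-17 01:06:50 +0000 UTC")
	firstPull := parse("2026-01-17 01:06:53.593390513 +0000 UTC")
	lastPull := parse("2026-01-17 01:07:06.426434558 +0000 UTC")
	running := parse("2026-01-17 01:07:07.538230154 +0000 UTC")

	e2e := running.Sub(created)          // podStartE2EDuration
	slo := e2e - lastPull.Sub(firstPull) // podStartSLOduration: E2E minus image pull
	fmt.Println("E2E:", e2e)             // 17.538230154s
	fmt.Println("SLO:", slo)             // ~4.705186109s (logged 4.705186108, ns rounding)
}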
Jan 17 01:07:07.553189 containerd[1616]: time="2026-01-17T01:07:07.553103017Z" level=info msg="CreateContainer within sandbox \"cb6fe2a9ee94b9fb4a2cec656b4a91c3d1a02d7a7a0bfe5510b8ed0544ec0e31\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"66d27e17c81737c90bf98fc0cce2d115325579d6a9f6d3421bd2a506a13ac545\"" Jan 17 01:07:07.555288 containerd[1616]: time="2026-01-17T01:07:07.553984723Z" level=info msg="StartContainer for \"66d27e17c81737c90bf98fc0cce2d115325579d6a9f6d3421bd2a506a13ac545\"" Jan 17 01:07:07.640826 containerd[1616]: time="2026-01-17T01:07:07.640635810Z" level=info msg="StartContainer for \"66d27e17c81737c90bf98fc0cce2d115325579d6a9f6d3421bd2a506a13ac545\" returns successfully" Jan 17 01:07:07.780103 kubelet[2038]: I0117 01:07:07.779813 2038 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 17 01:07:08.232863 kernel: Initializing XFRM netlink socket Jan 17 01:07:08.322272 kubelet[2038]: E0117 01:07:08.322157 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 01:07:08.557312 kubelet[2038]: I0117 01:07:08.557122 2038 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-mjgcl" podStartSLOduration=8.982951877 podStartE2EDuration="18.557088314s" podCreationTimestamp="2026-01-17 01:06:50 +0000 UTC" firstStartedPulling="2026-01-17 01:06:53.570392427 +0000 UTC m=+3.936959604" lastFinishedPulling="2026-01-17 01:07:03.144528854 +0000 UTC m=+13.511096041" observedRunningTime="2026-01-17 01:07:08.557003569 +0000 UTC m=+18.923570782" watchObservedRunningTime="2026-01-17 01:07:08.557088314 +0000 UTC m=+18.923655497" Jan 17 01:07:09.323144 kubelet[2038]: E0117 01:07:09.323070 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 01:07:09.965288 systemd-networkd[1258]: cilium_host: Link UP Jan 17 01:07:09.965915 systemd-networkd[1258]: cilium_net: Link UP Jan 17 01:07:09.965921 systemd-networkd[1258]: cilium_net: Gained carrier Jan 17 01:07:09.966864 systemd-networkd[1258]: cilium_host: Gained carrier Jan 17 01:07:09.967515 systemd-networkd[1258]: cilium_host: Gained IPv6LL Jan 17 01:07:10.126246 systemd-networkd[1258]: cilium_vxlan: Link UP Jan 17 01:07:10.126256 systemd-networkd[1258]: cilium_vxlan: Gained carrier Jan 17 01:07:10.308568 kubelet[2038]: E0117 01:07:10.308372 2038 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 01:07:10.323740 kubelet[2038]: E0117 01:07:10.323682 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 01:07:10.435193 systemd-networkd[1258]: cilium_net: Gained IPv6LL Jan 17 01:07:10.525993 kernel: NET: Registered PF_ALG protocol family Jan 17 01:07:11.323882 kubelet[2038]: E0117 01:07:11.323827 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 01:07:11.504530 systemd-networkd[1258]: lxc_health: Link UP Jan 17 01:07:11.533726 systemd-networkd[1258]: lxc_health: Gained carrier Jan 17 01:07:11.923042 systemd-networkd[1258]: cilium_vxlan: Gained IPv6LL Jan 17 01:07:12.324038 kubelet[2038]: E0117 01:07:12.323986 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 01:07:12.398381 kubelet[2038]: I0117 01:07:12.398246 2038 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xm6tn\" (UniqueName: \"kubernetes.io/projected/6ff98842-f04d-4734-b4bb-38e8c0ec5ea8-kube-api-access-xm6tn\") pod \"nginx-deployment-7fcdb87857-bghjk\" (UID: \"6ff98842-f04d-4734-b4bb-38e8c0ec5ea8\") " pod="default/nginx-deployment-7fcdb87857-bghjk" Jan 17 01:07:12.598934 containerd[1616]: time="2026-01-17T01:07:12.598616504Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-bghjk,Uid:6ff98842-f04d-4734-b4bb-38e8c0ec5ea8,Namespace:default,Attempt:0,}" Jan 17 01:07:12.789872 systemd-networkd[1258]: lxc1ea2256110db: Link UP Jan 17 01:07:12.810793 kernel: eth0: renamed from tmpef84f Jan 17 01:07:12.824319 systemd-networkd[1258]: lxc1ea2256110db: Gained carrier Jan 17 01:07:13.324798 kubelet[2038]: E0117 01:07:13.324659 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 01:07:13.523256 systemd-networkd[1258]: lxc_health: Gained IPv6LL Jan 17 01:07:14.227136 systemd-networkd[1258]: lxc1ea2256110db: Gained IPv6LL Jan 17 01:07:14.325524 kubelet[2038]: E0117 01:07:14.325366 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 01:07:15.325965 kubelet[2038]: E0117 01:07:15.325735 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 01:07:16.326676 kubelet[2038]: E0117 01:07:16.326612 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 01:07:17.288339 update_engine[1596]: I20260117 01:07:17.287101 1596 update_attempter.cc:509] Updating boot flags... Jan 17 01:07:17.327449 kubelet[2038]: E0117 01:07:17.327371 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 01:07:17.394233 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (3121) Jan 17 01:07:17.460904 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (3121) Jan 17 01:07:18.106902 containerd[1616]: time="2026-01-17T01:07:18.106691329Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 01:07:18.108375 containerd[1616]: time="2026-01-17T01:07:18.107739995Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 01:07:18.108375 containerd[1616]: time="2026-01-17T01:07:18.107894912Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 01:07:18.108375 containerd[1616]: time="2026-01-17T01:07:18.108188141Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 01:07:18.153088 systemd[1]: run-containerd-runc-k8s.io-ef84ffd788a3c09e62bb0aab7f174fd0739caf6cba69a289f952d11afcd97bd7-runc.q1Kp9D.mount: Deactivated successfully. 
Jan 17 01:07:18.213262 containerd[1616]: time="2026-01-17T01:07:18.213164502Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-bghjk,Uid:6ff98842-f04d-4734-b4bb-38e8c0ec5ea8,Namespace:default,Attempt:0,} returns sandbox id \"ef84ffd788a3c09e62bb0aab7f174fd0739caf6cba69a289f952d11afcd97bd7\"" Jan 17 01:07:18.215372 containerd[1616]: time="2026-01-17T01:07:18.215115460Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 17 01:07:18.328323 kubelet[2038]: E0117 01:07:18.328231 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 01:07:19.328575 kubelet[2038]: E0117 01:07:19.328456 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 01:07:20.331171 kubelet[2038]: E0117 01:07:20.331078 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 01:07:21.331344 kubelet[2038]: E0117 01:07:21.331259 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 01:07:22.121018 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3141296114.mount: Deactivated successfully. Jan 17 01:07:22.332379 kubelet[2038]: E0117 01:07:22.332262 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 01:07:23.333929 kubelet[2038]: E0117 01:07:23.333860 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 01:07:23.458698 containerd[1616]: time="2026-01-17T01:07:23.457188183Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 01:07:23.458698 containerd[1616]: time="2026-01-17T01:07:23.458292593Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=63840319" Jan 17 01:07:23.458698 containerd[1616]: time="2026-01-17T01:07:23.458636683Z" level=info msg="ImageCreate event name:\"sha256:66e7a6d326b670d959427bd421ca82232b0b81b18d85abaecb4ab9823d35056e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 01:07:23.462419 containerd[1616]: time="2026-01-17T01:07:23.462373493Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:892d1d54ab079b8cffa2317ccb45829886a0c3c3edbdf92bb286904b09797767\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 01:07:23.463916 containerd[1616]: time="2026-01-17T01:07:23.463878781Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:66e7a6d326b670d959427bd421ca82232b0b81b18d85abaecb4ab9823d35056e\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:892d1d54ab079b8cffa2317ccb45829886a0c3c3edbdf92bb286904b09797767\", size \"63840197\" in 5.248721112s" Jan 17 01:07:23.464009 containerd[1616]: time="2026-01-17T01:07:23.463921828Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:66e7a6d326b670d959427bd421ca82232b0b81b18d85abaecb4ab9823d35056e\"" Jan 17 01:07:23.467691 containerd[1616]: time="2026-01-17T01:07:23.467558582Z" level=info msg="CreateContainer within sandbox \"ef84ffd788a3c09e62bb0aab7f174fd0739caf6cba69a289f952d11afcd97bd7\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Jan 17 01:07:23.487794 
containerd[1616]: time="2026-01-17T01:07:23.487726809Z" level=info msg="CreateContainer within sandbox \"ef84ffd788a3c09e62bb0aab7f174fd0739caf6cba69a289f952d11afcd97bd7\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"d915313900369b009112b3b657d5bbc090ab61b2cc0a9f74382835fa61edc53a\"" Jan 17 01:07:23.488787 containerd[1616]: time="2026-01-17T01:07:23.488646895Z" level=info msg="StartContainer for \"d915313900369b009112b3b657d5bbc090ab61b2cc0a9f74382835fa61edc53a\"" Jan 17 01:07:23.522597 systemd[1]: run-containerd-runc-k8s.io-d915313900369b009112b3b657d5bbc090ab61b2cc0a9f74382835fa61edc53a-runc.Mmkb8j.mount: Deactivated successfully. Jan 17 01:07:23.564598 containerd[1616]: time="2026-01-17T01:07:23.564544670Z" level=info msg="StartContainer for \"d915313900369b009112b3b657d5bbc090ab61b2cc0a9f74382835fa61edc53a\" returns successfully" Jan 17 01:07:23.613035 kubelet[2038]: I0117 01:07:23.612771 2038 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-7fcdb87857-bghjk" podStartSLOduration=6.361573595 podStartE2EDuration="11.612732726s" podCreationTimestamp="2026-01-17 01:07:12 +0000 UTC" firstStartedPulling="2026-01-17 01:07:18.214359376 +0000 UTC m=+28.580926553" lastFinishedPulling="2026-01-17 01:07:23.465518499 +0000 UTC m=+33.832085684" observedRunningTime="2026-01-17 01:07:23.612026605 +0000 UTC m=+33.978593800" watchObservedRunningTime="2026-01-17 01:07:23.612732726 +0000 UTC m=+33.979299917" Jan 17 01:07:24.334144 kubelet[2038]: E0117 01:07:24.334062 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 01:07:25.334617 kubelet[2038]: E0117 01:07:25.334506 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 01:07:26.335320 kubelet[2038]: E0117 01:07:26.335246 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 01:07:27.335562 kubelet[2038]: E0117 01:07:27.335443 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 01:07:28.335770 kubelet[2038]: E0117 01:07:28.335693 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 01:07:29.336305 kubelet[2038]: E0117 01:07:29.336217 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 01:07:29.520852 kubelet[2038]: I0117 01:07:29.520731 2038 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68wgx\" (UniqueName: \"kubernetes.io/projected/ef157da3-baf6-44fb-bf76-10b1919aba7c-kube-api-access-68wgx\") pod \"nfs-server-provisioner-0\" (UID: \"ef157da3-baf6-44fb-bf76-10b1919aba7c\") " pod="default/nfs-server-provisioner-0" Jan 17 01:07:29.521087 kubelet[2038]: I0117 01:07:29.520872 2038 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/ef157da3-baf6-44fb-bf76-10b1919aba7c-data\") pod \"nfs-server-provisioner-0\" (UID: \"ef157da3-baf6-44fb-bf76-10b1919aba7c\") " pod="default/nfs-server-provisioner-0" Jan 17 01:07:29.708621 containerd[1616]: time="2026-01-17T01:07:29.708453154Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:ef157da3-baf6-44fb-bf76-10b1919aba7c,Namespace:default,Attempt:0,}" Jan 17 01:07:29.757582 systemd-networkd[1258]: lxc1e64b14abdc9: Link UP Jan 17 01:07:29.772810 kernel: eth0: renamed from tmpaf677 Jan 17 01:07:29.780169 systemd-networkd[1258]: lxc1e64b14abdc9: Gained carrier Jan 17 01:07:30.029877 containerd[1616]: time="2026-01-17T01:07:30.029261480Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 01:07:30.029877 containerd[1616]: time="2026-01-17T01:07:30.029345167Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 01:07:30.029877 containerd[1616]: time="2026-01-17T01:07:30.029367579Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 01:07:30.029877 containerd[1616]: time="2026-01-17T01:07:30.029510619Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 01:07:30.127232 containerd[1616]: time="2026-01-17T01:07:30.127183235Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:ef157da3-baf6-44fb-bf76-10b1919aba7c,Namespace:default,Attempt:0,} returns sandbox id \"af6778ff902104cb6f296a85b970a4ce0d43582cb93e2e73828daf96fe6d0969\"" Jan 17 01:07:30.129453 containerd[1616]: time="2026-01-17T01:07:30.129368230Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Jan 17 01:07:30.304429 kubelet[2038]: E0117 01:07:30.304241 2038 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 01:07:30.337297 kubelet[2038]: E0117 01:07:30.337266 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 01:07:31.123140 systemd-networkd[1258]: lxc1e64b14abdc9: Gained IPv6LL Jan 17 01:07:31.338397 kubelet[2038]: E0117 01:07:31.338315 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 01:07:32.338575 kubelet[2038]: E0117 01:07:32.338523 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 01:07:33.340212 kubelet[2038]: E0117 01:07:33.340155 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 01:07:33.591951 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1009838398.mount: Deactivated successfully. 
Jan 17 01:07:34.340527 kubelet[2038]: E0117 01:07:34.340445 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 01:07:35.340948 kubelet[2038]: E0117 01:07:35.340837 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 01:07:36.341774 kubelet[2038]: E0117 01:07:36.341714 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 01:07:37.342778 kubelet[2038]: E0117 01:07:37.342058 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 01:07:38.343087 kubelet[2038]: E0117 01:07:38.342989 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 01:07:38.941483 containerd[1616]: time="2026-01-17T01:07:38.941403532Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 01:07:38.942966 containerd[1616]: time="2026-01-17T01:07:38.942923083Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039414" Jan 17 01:07:38.944285 containerd[1616]: time="2026-01-17T01:07:38.943734126Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 01:07:38.947435 containerd[1616]: time="2026-01-17T01:07:38.947387206Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 01:07:38.949218 containerd[1616]: time="2026-01-17T01:07:38.949179111Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 8.819761115s" Jan 17 01:07:38.949315 containerd[1616]: time="2026-01-17T01:07:38.949222215Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Jan 17 01:07:38.953466 containerd[1616]: time="2026-01-17T01:07:38.953435057Z" level=info msg="CreateContainer within sandbox \"af6778ff902104cb6f296a85b970a4ce0d43582cb93e2e73828daf96fe6d0969\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Jan 17 01:07:38.971208 containerd[1616]: time="2026-01-17T01:07:38.971153458Z" level=info msg="CreateContainer within sandbox \"af6778ff902104cb6f296a85b970a4ce0d43582cb93e2e73828daf96fe6d0969\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"4ff881acba7dc6ffb089b8a5a87df6201e7ba015742c312442975ca26e139347\"" Jan 17 01:07:38.972788 containerd[1616]: time="2026-01-17T01:07:38.972397899Z" level=info msg="StartContainer for \"4ff881acba7dc6ffb089b8a5a87df6201e7ba015742c312442975ca26e139347\"" Jan 17 01:07:39.009641 systemd[1]: 
run-containerd-runc-k8s.io-4ff881acba7dc6ffb089b8a5a87df6201e7ba015742c312442975ca26e139347-runc.Hycktq.mount: Deactivated successfully. Jan 17 01:07:39.050836 containerd[1616]: time="2026-01-17T01:07:39.050776467Z" level=info msg="StartContainer for \"4ff881acba7dc6ffb089b8a5a87df6201e7ba015742c312442975ca26e139347\" returns successfully" Jan 17 01:07:39.343238 kubelet[2038]: E0117 01:07:39.343172 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 01:07:39.662899 kubelet[2038]: I0117 01:07:39.662626 2038 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.840514459 podStartE2EDuration="10.6626067s" podCreationTimestamp="2026-01-17 01:07:29 +0000 UTC" firstStartedPulling="2026-01-17 01:07:30.129059628 +0000 UTC m=+40.495626812" lastFinishedPulling="2026-01-17 01:07:38.951151872 +0000 UTC m=+49.317719053" observedRunningTime="2026-01-17 01:07:39.661264731 +0000 UTC m=+50.027831925" watchObservedRunningTime="2026-01-17 01:07:39.6626067 +0000 UTC m=+50.029173891" Jan 17 01:07:40.344489 kubelet[2038]: E0117 01:07:40.344374 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 01:07:41.344698 kubelet[2038]: E0117 01:07:41.344605 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 01:07:42.345169 kubelet[2038]: E0117 01:07:42.345079 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 01:07:43.345987 kubelet[2038]: E0117 01:07:43.345892 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 01:07:44.346963 kubelet[2038]: E0117 01:07:44.346878 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 01:07:45.348193 kubelet[2038]: E0117 01:07:45.348089 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 01:07:46.349110 kubelet[2038]: E0117 01:07:46.349014 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 01:07:47.350107 kubelet[2038]: E0117 01:07:47.350007 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 01:07:48.350797 kubelet[2038]: E0117 01:07:48.350676 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 01:07:49.153140 kubelet[2038]: I0117 01:07:49.152798 2038 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksgj8\" (UniqueName: \"kubernetes.io/projected/e6f1c696-f32e-47a5-bad9-ec092a8eed0d-kube-api-access-ksgj8\") pod \"test-pod-1\" (UID: \"e6f1c696-f32e-47a5-bad9-ec092a8eed0d\") " pod="default/test-pod-1" Jan 17 01:07:49.153140 kubelet[2038]: I0117 01:07:49.152893 2038 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b6f4f9f5-4af6-418f-9ccb-8d47506daee3\" (UniqueName: \"kubernetes.io/nfs/e6f1c696-f32e-47a5-bad9-ec092a8eed0d-pvc-b6f4f9f5-4af6-418f-9ccb-8d47506daee3\") pod \"test-pod-1\" (UID: \"e6f1c696-f32e-47a5-bad9-ec092a8eed0d\") " pod="default/test-pod-1" 
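The two VerifyControllerAttachedVolume entries for test-pod-1 above show it mounting an NFS volume named pvc-b6f4f9f5-4af6-418f-9ccb-8d47506daee3, i.e. a PersistentVolumeClaim evidently satisfied by the freshly started nfs-server-provisioner-0 with an NFS-backed PersistentVolume. A short sketch with the official kubernetes Python client (an assumption; kubectl would do just as well) lists the claims in the default namespace and shows which PV backs each, which is a quick way to confirm the provisioner is doing its job:

    # Assumes the `kubernetes` Python client and a reachable kubeconfig.
    from kubernetes import client, config

    config.load_kube_config()
    v1 = client.CoreV1Api()

    # Expect the claim consumed by test-pod-1 to be Bound to the pvc-b6f4f9f5-... volume above.
    for pvc in v1.list_namespaced_persistent_volume_claim("default").items:
        print(f"{pvc.metadata.name}: phase={pvc.status.phase} "
              f"storageClass={pvc.spec.storage_class_name} volume={pvc.spec.volume_name}")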
Jan 17 01:07:49.300939 kernel: FS-Cache: Loaded Jan 17 01:07:49.351454 kubelet[2038]: E0117 01:07:49.351320 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 01:07:49.388323 kernel: RPC: Registered named UNIX socket transport module. Jan 17 01:07:49.388520 kernel: RPC: Registered udp transport module. Jan 17 01:07:49.389097 kernel: RPC: Registered tcp transport module. Jan 17 01:07:49.390116 kernel: RPC: Registered tcp-with-tls transport module. Jan 17 01:07:49.391150 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Jan 17 01:07:49.716937 kernel: NFS: Registering the id_resolver key type Jan 17 01:07:49.717192 kernel: Key type id_resolver registered Jan 17 01:07:49.718254 kernel: Key type id_legacy registered Jan 17 01:07:49.766869 nfsidmap[3449]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'gb1.brightbox.com' Jan 17 01:07:49.775175 nfsidmap[3452]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'gb1.brightbox.com' Jan 17 01:07:50.018953 containerd[1616]: time="2026-01-17T01:07:50.018232173Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:e6f1c696-f32e-47a5-bad9-ec092a8eed0d,Namespace:default,Attempt:0,}" Jan 17 01:07:50.066107 systemd-networkd[1258]: lxc3ad22919b698: Link UP Jan 17 01:07:50.076082 kernel: eth0: renamed from tmp72e65 Jan 17 01:07:50.083308 systemd-networkd[1258]: lxc3ad22919b698: Gained carrier Jan 17 01:07:50.304524 kubelet[2038]: E0117 01:07:50.304383 2038 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 01:07:50.352517 kubelet[2038]: E0117 01:07:50.352353 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 01:07:50.369793 containerd[1616]: time="2026-01-17T01:07:50.368376319Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 01:07:50.369793 containerd[1616]: time="2026-01-17T01:07:50.368568997Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 01:07:50.369793 containerd[1616]: time="2026-01-17T01:07:50.368593821Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 01:07:50.369793 containerd[1616]: time="2026-01-17T01:07:50.368913404Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 01:07:50.467960 containerd[1616]: time="2026-01-17T01:07:50.467909411Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:e6f1c696-f32e-47a5-bad9-ec092a8eed0d,Namespace:default,Attempt:0,} returns sandbox id \"72e652abc4282ad6d0f1b08b36999eabab0bcbe25d22a07d737a893f7b039a02\"" Jan 17 01:07:50.470453 containerd[1616]: time="2026-01-17T01:07:50.470423281Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 17 01:07:50.842959 containerd[1616]: time="2026-01-17T01:07:50.842861294Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 01:07:50.843553 containerd[1616]: time="2026-01-17T01:07:50.843414499Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" Jan 17 01:07:50.848541 containerd[1616]: time="2026-01-17T01:07:50.848504253Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:66e7a6d326b670d959427bd421ca82232b0b81b18d85abaecb4ab9823d35056e\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:892d1d54ab079b8cffa2317ccb45829886a0c3c3edbdf92bb286904b09797767\", size \"63840197\" in 378.00356ms" Jan 17 01:07:50.848692 containerd[1616]: time="2026-01-17T01:07:50.848663929Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:66e7a6d326b670d959427bd421ca82232b0b81b18d85abaecb4ab9823d35056e\"" Jan 17 01:07:50.852076 containerd[1616]: time="2026-01-17T01:07:50.852033469Z" level=info msg="CreateContainer within sandbox \"72e652abc4282ad6d0f1b08b36999eabab0bcbe25d22a07d737a893f7b039a02\" for container &ContainerMetadata{Name:test,Attempt:0,}" Jan 17 01:07:50.873573 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4079345060.mount: Deactivated successfully. 
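The two nfsidmap warnings above mean the NFSv4 id mapper on the node could not translate the owner presented by the NFS server: the principal's domain (nfs-server-provisioner.default.svc.cluster.local) does not match the idmap domain configured for the host (gb1.brightbox.com), so ownership typically falls back to the anonymous nobody/nogroup IDs. The sketch below reads the conventional /etc/idmapd.conf location and its [General] Domain key to surface that mismatch; both the path and the key layout are assumptions about this host, not something the log confirms.

    # Compare the node's configured NFSv4 idmap domain with the domain part of the
    # principal rejected in the log above. /etc/idmapd.conf and [General]/Domain are
    # the conventional location and key; treat them as assumptions for this host.
    import configparser

    principal = "root@nfs-server-provisioner.default.svc.cluster.local"  # from the nfsidmap lines

    cfg = configparser.ConfigParser()
    cfg.read("/etc/idmapd.conf")
    local_domain = cfg.get("General", "Domain", fallback="(unset: derived from the DNS domain)")

    peer_domain = principal.split("@", 1)[1]
    if peer_domain != local_domain:
        print(f"idmap domain mismatch: server uses '{peer_domain}', node uses '{local_domain}'")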
Jan 17 01:07:50.874481 containerd[1616]: time="2026-01-17T01:07:50.873993013Z" level=info msg="CreateContainer within sandbox \"72e652abc4282ad6d0f1b08b36999eabab0bcbe25d22a07d737a893f7b039a02\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"b1aa5c05d571d0b67121c6f92fe69281019af9cd602d9a9936e5b6902e5de5af\"" Jan 17 01:07:50.875581 containerd[1616]: time="2026-01-17T01:07:50.875473764Z" level=info msg="StartContainer for \"b1aa5c05d571d0b67121c6f92fe69281019af9cd602d9a9936e5b6902e5de5af\"" Jan 17 01:07:50.950784 containerd[1616]: time="2026-01-17T01:07:50.950695779Z" level=info msg="StartContainer for \"b1aa5c05d571d0b67121c6f92fe69281019af9cd602d9a9936e5b6902e5de5af\" returns successfully" Jan 17 01:07:51.353184 kubelet[2038]: E0117 01:07:51.353123 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 01:07:51.698260 kubelet[2038]: I0117 01:07:51.697723 2038 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=21.317425011 podStartE2EDuration="21.697680607s" podCreationTimestamp="2026-01-17 01:07:30 +0000 UTC" firstStartedPulling="2026-01-17 01:07:50.469367987 +0000 UTC m=+60.835935171" lastFinishedPulling="2026-01-17 01:07:50.849623584 +0000 UTC m=+61.216190767" observedRunningTime="2026-01-17 01:07:51.696407067 +0000 UTC m=+62.062974258" watchObservedRunningTime="2026-01-17 01:07:51.697680607 +0000 UTC m=+62.064247796" Jan 17 01:07:51.987146 systemd-networkd[1258]: lxc3ad22919b698: Gained IPv6LL Jan 17 01:07:52.354009 kubelet[2038]: E0117 01:07:52.353948 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 01:07:53.354894 kubelet[2038]: E0117 01:07:53.354819 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 01:07:54.355115 kubelet[2038]: E0117 01:07:54.355047 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 01:07:55.356000 kubelet[2038]: E0117 01:07:55.355927 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 01:07:56.357164 kubelet[2038]: E0117 01:07:56.357095 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 01:07:57.357544 kubelet[2038]: E0117 01:07:57.357477 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 01:07:58.358037 kubelet[2038]: E0117 01:07:58.357965 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 01:07:59.358302 kubelet[2038]: E0117 01:07:59.358158 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 01:08:00.358709 kubelet[2038]: E0117 01:08:00.358637 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 01:08:00.718649 containerd[1616]: time="2026-01-17T01:08:00.718186907Z" level=info msg="StopContainer for \"66d27e17c81737c90bf98fc0cce2d115325579d6a9f6d3421bd2a506a13ac545\" with timeout 2 (s)" Jan 17 01:08:00.719789 containerd[1616]: time="2026-01-17T01:08:00.719689236Z" level=info msg="Stop container 
\"66d27e17c81737c90bf98fc0cce2d115325579d6a9f6d3421bd2a506a13ac545\" with signal terminated" Jan 17 01:08:00.723158 containerd[1616]: time="2026-01-17T01:08:00.723071378Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 17 01:08:00.730700 systemd-networkd[1258]: lxc_health: Link DOWN Jan 17 01:08:00.730712 systemd-networkd[1258]: lxc_health: Lost carrier Jan 17 01:08:00.784764 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-66d27e17c81737c90bf98fc0cce2d115325579d6a9f6d3421bd2a506a13ac545-rootfs.mount: Deactivated successfully. Jan 17 01:08:00.796140 containerd[1616]: time="2026-01-17T01:08:00.790201393Z" level=info msg="shim disconnected" id=66d27e17c81737c90bf98fc0cce2d115325579d6a9f6d3421bd2a506a13ac545 namespace=k8s.io Jan 17 01:08:00.796282 containerd[1616]: time="2026-01-17T01:08:00.796140923Z" level=warning msg="cleaning up after shim disconnected" id=66d27e17c81737c90bf98fc0cce2d115325579d6a9f6d3421bd2a506a13ac545 namespace=k8s.io Jan 17 01:08:00.796282 containerd[1616]: time="2026-01-17T01:08:00.796160601Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 01:08:00.813936 containerd[1616]: time="2026-01-17T01:08:00.813881734Z" level=info msg="StopContainer for \"66d27e17c81737c90bf98fc0cce2d115325579d6a9f6d3421bd2a506a13ac545\" returns successfully" Jan 17 01:08:00.823687 containerd[1616]: time="2026-01-17T01:08:00.823651713Z" level=info msg="StopPodSandbox for \"cb6fe2a9ee94b9fb4a2cec656b4a91c3d1a02d7a7a0bfe5510b8ed0544ec0e31\"" Jan 17 01:08:00.823802 containerd[1616]: time="2026-01-17T01:08:00.823716011Z" level=info msg="Container to stop \"f1fad73d9a1a7c54157ead6d138f2881bae8d5e805bc0ed0c4816201c2ff9b99\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 17 01:08:00.823802 containerd[1616]: time="2026-01-17T01:08:00.823739165Z" level=info msg="Container to stop \"940a17686b05d57c67a8a2bcc523223ae9aa91a4425209afd64900c3e167802e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 17 01:08:00.823802 containerd[1616]: time="2026-01-17T01:08:00.823776032Z" level=info msg="Container to stop \"aa725ea584395e85de29f64eeef5309f92a1ab9c6098a6d671f525c6bb2f64d7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 17 01:08:00.823802 containerd[1616]: time="2026-01-17T01:08:00.823792937Z" level=info msg="Container to stop \"587c48b826ac234d5d4704e1c2d961c1f1b4270fdb1ca8e1ff44f61a6a84822e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 17 01:08:00.824027 containerd[1616]: time="2026-01-17T01:08:00.823809622Z" level=info msg="Container to stop \"66d27e17c81737c90bf98fc0cce2d115325579d6a9f6d3421bd2a506a13ac545\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 17 01:08:00.827200 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-cb6fe2a9ee94b9fb4a2cec656b4a91c3d1a02d7a7a0bfe5510b8ed0544ec0e31-shm.mount: Deactivated successfully. Jan 17 01:08:00.863124 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cb6fe2a9ee94b9fb4a2cec656b4a91c3d1a02d7a7a0bfe5510b8ed0544ec0e31-rootfs.mount: Deactivated successfully. 
Jan 17 01:08:00.867252 containerd[1616]: time="2026-01-17T01:08:00.867173126Z" level=info msg="shim disconnected" id=cb6fe2a9ee94b9fb4a2cec656b4a91c3d1a02d7a7a0bfe5510b8ed0544ec0e31 namespace=k8s.io Jan 17 01:08:00.867535 containerd[1616]: time="2026-01-17T01:08:00.867253219Z" level=warning msg="cleaning up after shim disconnected" id=cb6fe2a9ee94b9fb4a2cec656b4a91c3d1a02d7a7a0bfe5510b8ed0544ec0e31 namespace=k8s.io Jan 17 01:08:00.867535 containerd[1616]: time="2026-01-17T01:08:00.867271133Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 01:08:00.886677 containerd[1616]: time="2026-01-17T01:08:00.886591306Z" level=warning msg="cleanup warnings time=\"2026-01-17T01:08:00Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 17 01:08:00.896969 containerd[1616]: time="2026-01-17T01:08:00.896800471Z" level=info msg="TearDown network for sandbox \"cb6fe2a9ee94b9fb4a2cec656b4a91c3d1a02d7a7a0bfe5510b8ed0544ec0e31\" successfully" Jan 17 01:08:00.896969 containerd[1616]: time="2026-01-17T01:08:00.896833711Z" level=info msg="StopPodSandbox for \"cb6fe2a9ee94b9fb4a2cec656b4a91c3d1a02d7a7a0bfe5510b8ed0544ec0e31\" returns successfully" Jan 17 01:08:00.932997 kubelet[2038]: I0117 01:08:00.932916 2038 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/06329847-eece-4917-964b-7ece0c830f5c-cilium-cgroup\") pod \"06329847-eece-4917-964b-7ece0c830f5c\" (UID: \"06329847-eece-4917-964b-7ece0c830f5c\") " Jan 17 01:08:00.932997 kubelet[2038]: I0117 01:08:00.932979 2038 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/06329847-eece-4917-964b-7ece0c830f5c-cni-path\") pod \"06329847-eece-4917-964b-7ece0c830f5c\" (UID: \"06329847-eece-4917-964b-7ece0c830f5c\") " Jan 17 01:08:00.933289 kubelet[2038]: I0117 01:08:00.933018 2038 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/06329847-eece-4917-964b-7ece0c830f5c-xtables-lock\") pod \"06329847-eece-4917-964b-7ece0c830f5c\" (UID: \"06329847-eece-4917-964b-7ece0c830f5c\") " Jan 17 01:08:00.933289 kubelet[2038]: I0117 01:08:00.933065 2038 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/06329847-eece-4917-964b-7ece0c830f5c-hostproc\") pod \"06329847-eece-4917-964b-7ece0c830f5c\" (UID: \"06329847-eece-4917-964b-7ece0c830f5c\") " Jan 17 01:08:00.933289 kubelet[2038]: I0117 01:08:00.933110 2038 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/06329847-eece-4917-964b-7ece0c830f5c-hubble-tls\") pod \"06329847-eece-4917-964b-7ece0c830f5c\" (UID: \"06329847-eece-4917-964b-7ece0c830f5c\") " Jan 17 01:08:00.933289 kubelet[2038]: I0117 01:08:00.933134 2038 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/06329847-eece-4917-964b-7ece0c830f5c-cilium-run\") pod \"06329847-eece-4917-964b-7ece0c830f5c\" (UID: \"06329847-eece-4917-964b-7ece0c830f5c\") " Jan 17 01:08:00.933289 kubelet[2038]: I0117 01:08:00.933169 2038 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/06329847-eece-4917-964b-7ece0c830f5c-host-proc-sys-net\") pod \"06329847-eece-4917-964b-7ece0c830f5c\" (UID: \"06329847-eece-4917-964b-7ece0c830f5c\") " Jan 17 01:08:00.933289 kubelet[2038]: I0117 01:08:00.933194 2038 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/06329847-eece-4917-964b-7ece0c830f5c-host-proc-sys-kernel\") pod \"06329847-eece-4917-964b-7ece0c830f5c\" (UID: \"06329847-eece-4917-964b-7ece0c830f5c\") " Jan 17 01:08:00.933590 kubelet[2038]: I0117 01:08:00.933217 2038 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/06329847-eece-4917-964b-7ece0c830f5c-etc-cni-netd\") pod \"06329847-eece-4917-964b-7ece0c830f5c\" (UID: \"06329847-eece-4917-964b-7ece0c830f5c\") " Jan 17 01:08:00.933590 kubelet[2038]: I0117 01:08:00.933242 2038 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/06329847-eece-4917-964b-7ece0c830f5c-bpf-maps\") pod \"06329847-eece-4917-964b-7ece0c830f5c\" (UID: \"06329847-eece-4917-964b-7ece0c830f5c\") " Jan 17 01:08:00.933590 kubelet[2038]: I0117 01:08:00.933285 2038 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/06329847-eece-4917-964b-7ece0c830f5c-clustermesh-secrets\") pod \"06329847-eece-4917-964b-7ece0c830f5c\" (UID: \"06329847-eece-4917-964b-7ece0c830f5c\") " Jan 17 01:08:00.933590 kubelet[2038]: I0117 01:08:00.933322 2038 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/06329847-eece-4917-964b-7ece0c830f5c-cilium-config-path\") pod \"06329847-eece-4917-964b-7ece0c830f5c\" (UID: \"06329847-eece-4917-964b-7ece0c830f5c\") " Jan 17 01:08:00.933590 kubelet[2038]: I0117 01:08:00.933349 2038 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/06329847-eece-4917-964b-7ece0c830f5c-lib-modules\") pod \"06329847-eece-4917-964b-7ece0c830f5c\" (UID: \"06329847-eece-4917-964b-7ece0c830f5c\") " Jan 17 01:08:00.933590 kubelet[2038]: I0117 01:08:00.933376 2038 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7qnf6\" (UniqueName: \"kubernetes.io/projected/06329847-eece-4917-964b-7ece0c830f5c-kube-api-access-7qnf6\") pod \"06329847-eece-4917-964b-7ece0c830f5c\" (UID: \"06329847-eece-4917-964b-7ece0c830f5c\") " Jan 17 01:08:00.935773 kubelet[2038]: I0117 01:08:00.934026 2038 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/06329847-eece-4917-964b-7ece0c830f5c-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "06329847-eece-4917-964b-7ece0c830f5c" (UID: "06329847-eece-4917-964b-7ece0c830f5c"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 17 01:08:00.935773 kubelet[2038]: I0117 01:08:00.934114 2038 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/06329847-eece-4917-964b-7ece0c830f5c-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "06329847-eece-4917-964b-7ece0c830f5c" (UID: "06329847-eece-4917-964b-7ece0c830f5c"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 17 01:08:00.935773 kubelet[2038]: I0117 01:08:00.934151 2038 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/06329847-eece-4917-964b-7ece0c830f5c-cni-path" (OuterVolumeSpecName: "cni-path") pod "06329847-eece-4917-964b-7ece0c830f5c" (UID: "06329847-eece-4917-964b-7ece0c830f5c"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 17 01:08:00.935773 kubelet[2038]: I0117 01:08:00.934179 2038 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/06329847-eece-4917-964b-7ece0c830f5c-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "06329847-eece-4917-964b-7ece0c830f5c" (UID: "06329847-eece-4917-964b-7ece0c830f5c"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 17 01:08:00.935773 kubelet[2038]: I0117 01:08:00.934204 2038 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/06329847-eece-4917-964b-7ece0c830f5c-hostproc" (OuterVolumeSpecName: "hostproc") pod "06329847-eece-4917-964b-7ece0c830f5c" (UID: "06329847-eece-4917-964b-7ece0c830f5c"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 17 01:08:00.936055 kubelet[2038]: I0117 01:08:00.936007 2038 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/06329847-eece-4917-964b-7ece0c830f5c-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "06329847-eece-4917-964b-7ece0c830f5c" (UID: "06329847-eece-4917-964b-7ece0c830f5c"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 17 01:08:00.936130 kubelet[2038]: I0117 01:08:00.936079 2038 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/06329847-eece-4917-964b-7ece0c830f5c-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "06329847-eece-4917-964b-7ece0c830f5c" (UID: "06329847-eece-4917-964b-7ece0c830f5c"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 17 01:08:00.936179 kubelet[2038]: I0117 01:08:00.936133 2038 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/06329847-eece-4917-964b-7ece0c830f5c-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "06329847-eece-4917-964b-7ece0c830f5c" (UID: "06329847-eece-4917-964b-7ece0c830f5c"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 17 01:08:00.936631 kubelet[2038]: I0117 01:08:00.936579 2038 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/06329847-eece-4917-964b-7ece0c830f5c-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "06329847-eece-4917-964b-7ece0c830f5c" (UID: "06329847-eece-4917-964b-7ece0c830f5c"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 17 01:08:00.938936 kubelet[2038]: I0117 01:08:00.938900 2038 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/06329847-eece-4917-964b-7ece0c830f5c-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "06329847-eece-4917-964b-7ece0c830f5c" (UID: "06329847-eece-4917-964b-7ece0c830f5c"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 17 01:08:00.941801 kubelet[2038]: I0117 01:08:00.941773 2038 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06329847-eece-4917-964b-7ece0c830f5c-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "06329847-eece-4917-964b-7ece0c830f5c" (UID: "06329847-eece-4917-964b-7ece0c830f5c"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 17 01:08:00.942906 kubelet[2038]: I0117 01:08:00.941956 2038 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06329847-eece-4917-964b-7ece0c830f5c-kube-api-access-7qnf6" (OuterVolumeSpecName: "kube-api-access-7qnf6") pod "06329847-eece-4917-964b-7ece0c830f5c" (UID: "06329847-eece-4917-964b-7ece0c830f5c"). InnerVolumeSpecName "kube-api-access-7qnf6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 17 01:08:00.943011 kubelet[2038]: I0117 01:08:00.942783 2038 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06329847-eece-4917-964b-7ece0c830f5c-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "06329847-eece-4917-964b-7ece0c830f5c" (UID: "06329847-eece-4917-964b-7ece0c830f5c"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 17 01:08:00.944847 kubelet[2038]: I0117 01:08:00.944821 2038 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06329847-eece-4917-964b-7ece0c830f5c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "06329847-eece-4917-964b-7ece0c830f5c" (UID: "06329847-eece-4917-964b-7ece0c830f5c"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 17 01:08:01.036079 kubelet[2038]: I0117 01:08:01.034300 2038 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/06329847-eece-4917-964b-7ece0c830f5c-clustermesh-secrets\") on node \"10.230.49.38\" DevicePath \"\"" Jan 17 01:08:01.036079 kubelet[2038]: I0117 01:08:01.035803 2038 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/06329847-eece-4917-964b-7ece0c830f5c-cilium-config-path\") on node \"10.230.49.38\" DevicePath \"\"" Jan 17 01:08:01.036079 kubelet[2038]: I0117 01:08:01.035828 2038 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/06329847-eece-4917-964b-7ece0c830f5c-lib-modules\") on node \"10.230.49.38\" DevicePath \"\"" Jan 17 01:08:01.036079 kubelet[2038]: I0117 01:08:01.035845 2038 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7qnf6\" (UniqueName: \"kubernetes.io/projected/06329847-eece-4917-964b-7ece0c830f5c-kube-api-access-7qnf6\") on node \"10.230.49.38\" DevicePath \"\"" Jan 17 01:08:01.036079 kubelet[2038]: I0117 01:08:01.035864 2038 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/06329847-eece-4917-964b-7ece0c830f5c-hostproc\") on node \"10.230.49.38\" DevicePath \"\"" Jan 17 01:08:01.036079 kubelet[2038]: I0117 01:08:01.035878 2038 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/06329847-eece-4917-964b-7ece0c830f5c-cilium-cgroup\") on node \"10.230.49.38\" DevicePath \"\"" Jan 17 01:08:01.036079 kubelet[2038]: I0117 01:08:01.035893 2038 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/06329847-eece-4917-964b-7ece0c830f5c-cni-path\") on node \"10.230.49.38\" DevicePath \"\"" Jan 17 01:08:01.036079 kubelet[2038]: I0117 01:08:01.035907 2038 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/06329847-eece-4917-964b-7ece0c830f5c-xtables-lock\") on node \"10.230.49.38\" DevicePath \"\"" Jan 17 01:08:01.036693 kubelet[2038]: I0117 01:08:01.035919 2038 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/06329847-eece-4917-964b-7ece0c830f5c-hubble-tls\") on node \"10.230.49.38\" DevicePath \"\"" Jan 17 01:08:01.036693 kubelet[2038]: I0117 01:08:01.035932 2038 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/06329847-eece-4917-964b-7ece0c830f5c-cilium-run\") on node \"10.230.49.38\" DevicePath \"\"" Jan 17 01:08:01.036693 kubelet[2038]: I0117 01:08:01.035954 2038 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/06329847-eece-4917-964b-7ece0c830f5c-host-proc-sys-net\") on node \"10.230.49.38\" DevicePath \"\"" Jan 17 01:08:01.036693 kubelet[2038]: I0117 01:08:01.035967 2038 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/06329847-eece-4917-964b-7ece0c830f5c-host-proc-sys-kernel\") on node \"10.230.49.38\" DevicePath \"\"" Jan 17 01:08:01.036693 kubelet[2038]: I0117 01:08:01.035980 2038 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/06329847-eece-4917-964b-7ece0c830f5c-etc-cni-netd\") on node \"10.230.49.38\" DevicePath \"\"" Jan 17 01:08:01.036693 kubelet[2038]: I0117 01:08:01.035993 2038 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/06329847-eece-4917-964b-7ece0c830f5c-bpf-maps\") on node \"10.230.49.38\" DevicePath \"\"" Jan 17 01:08:01.359932 kubelet[2038]: E0117 01:08:01.359883 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 01:08:01.654662 systemd[1]: var-lib-kubelet-pods-06329847\x2deece\x2d4917\x2d964b\x2d7ece0c830f5c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d7qnf6.mount: Deactivated successfully. Jan 17 01:08:01.654904 systemd[1]: var-lib-kubelet-pods-06329847\x2deece\x2d4917\x2d964b\x2d7ece0c830f5c-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 17 01:08:01.655084 systemd[1]: var-lib-kubelet-pods-06329847\x2deece\x2d4917\x2d964b\x2d7ece0c830f5c-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 17 01:08:01.726417 kubelet[2038]: I0117 01:08:01.726047 2038 scope.go:117] "RemoveContainer" containerID="66d27e17c81737c90bf98fc0cce2d115325579d6a9f6d3421bd2a506a13ac545" Jan 17 01:08:01.729802 containerd[1616]: time="2026-01-17T01:08:01.729343212Z" level=info msg="RemoveContainer for \"66d27e17c81737c90bf98fc0cce2d115325579d6a9f6d3421bd2a506a13ac545\"" Jan 17 01:08:01.735006 containerd[1616]: time="2026-01-17T01:08:01.734850080Z" level=info msg="RemoveContainer for \"66d27e17c81737c90bf98fc0cce2d115325579d6a9f6d3421bd2a506a13ac545\" returns successfully" Jan 17 01:08:01.735958 kubelet[2038]: I0117 01:08:01.735929 2038 scope.go:117] "RemoveContainer" containerID="aa725ea584395e85de29f64eeef5309f92a1ab9c6098a6d671f525c6bb2f64d7" Jan 17 01:08:01.739104 containerd[1616]: time="2026-01-17T01:08:01.738808323Z" level=info msg="RemoveContainer for \"aa725ea584395e85de29f64eeef5309f92a1ab9c6098a6d671f525c6bb2f64d7\"" Jan 17 01:08:01.744060 containerd[1616]: time="2026-01-17T01:08:01.743961228Z" level=info msg="RemoveContainer for \"aa725ea584395e85de29f64eeef5309f92a1ab9c6098a6d671f525c6bb2f64d7\" returns successfully" Jan 17 01:08:01.744203 kubelet[2038]: I0117 01:08:01.744169 2038 scope.go:117] "RemoveContainer" containerID="940a17686b05d57c67a8a2bcc523223ae9aa91a4425209afd64900c3e167802e" Jan 17 01:08:01.745360 containerd[1616]: time="2026-01-17T01:08:01.745269785Z" level=info msg="RemoveContainer for \"940a17686b05d57c67a8a2bcc523223ae9aa91a4425209afd64900c3e167802e\"" Jan 17 01:08:01.748338 containerd[1616]: time="2026-01-17T01:08:01.748079682Z" level=info msg="RemoveContainer for \"940a17686b05d57c67a8a2bcc523223ae9aa91a4425209afd64900c3e167802e\" returns successfully" Jan 17 01:08:01.748437 kubelet[2038]: I0117 01:08:01.748247 2038 scope.go:117] "RemoveContainer" containerID="f1fad73d9a1a7c54157ead6d138f2881bae8d5e805bc0ed0c4816201c2ff9b99" Jan 17 01:08:01.749630 containerd[1616]: time="2026-01-17T01:08:01.749573634Z" level=info msg="RemoveContainer for \"f1fad73d9a1a7c54157ead6d138f2881bae8d5e805bc0ed0c4816201c2ff9b99\"" Jan 17 01:08:01.752280 containerd[1616]: time="2026-01-17T01:08:01.752203758Z" level=info msg="RemoveContainer for \"f1fad73d9a1a7c54157ead6d138f2881bae8d5e805bc0ed0c4816201c2ff9b99\" returns successfully" Jan 17 01:08:01.752399 kubelet[2038]: I0117 01:08:01.752365 2038 scope.go:117] "RemoveContainer" 
containerID="587c48b826ac234d5d4704e1c2d961c1f1b4270fdb1ca8e1ff44f61a6a84822e" Jan 17 01:08:01.753644 containerd[1616]: time="2026-01-17T01:08:01.753593691Z" level=info msg="RemoveContainer for \"587c48b826ac234d5d4704e1c2d961c1f1b4270fdb1ca8e1ff44f61a6a84822e\"" Jan 17 01:08:01.756191 containerd[1616]: time="2026-01-17T01:08:01.756132264Z" level=info msg="RemoveContainer for \"587c48b826ac234d5d4704e1c2d961c1f1b4270fdb1ca8e1ff44f61a6a84822e\" returns successfully" Jan 17 01:08:01.756336 kubelet[2038]: I0117 01:08:01.756300 2038 scope.go:117] "RemoveContainer" containerID="66d27e17c81737c90bf98fc0cce2d115325579d6a9f6d3421bd2a506a13ac545" Jan 17 01:08:01.761128 containerd[1616]: time="2026-01-17T01:08:01.760776623Z" level=error msg="ContainerStatus for \"66d27e17c81737c90bf98fc0cce2d115325579d6a9f6d3421bd2a506a13ac545\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"66d27e17c81737c90bf98fc0cce2d115325579d6a9f6d3421bd2a506a13ac545\": not found" Jan 17 01:08:01.770629 kubelet[2038]: E0117 01:08:01.770578 2038 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"66d27e17c81737c90bf98fc0cce2d115325579d6a9f6d3421bd2a506a13ac545\": not found" containerID="66d27e17c81737c90bf98fc0cce2d115325579d6a9f6d3421bd2a506a13ac545" Jan 17 01:08:01.770803 kubelet[2038]: I0117 01:08:01.770654 2038 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"66d27e17c81737c90bf98fc0cce2d115325579d6a9f6d3421bd2a506a13ac545"} err="failed to get container status \"66d27e17c81737c90bf98fc0cce2d115325579d6a9f6d3421bd2a506a13ac545\": rpc error: code = NotFound desc = an error occurred when try to find container \"66d27e17c81737c90bf98fc0cce2d115325579d6a9f6d3421bd2a506a13ac545\": not found" Jan 17 01:08:01.770858 kubelet[2038]: I0117 01:08:01.770804 2038 scope.go:117] "RemoveContainer" containerID="aa725ea584395e85de29f64eeef5309f92a1ab9c6098a6d671f525c6bb2f64d7" Jan 17 01:08:01.771170 containerd[1616]: time="2026-01-17T01:08:01.771005175Z" level=error msg="ContainerStatus for \"aa725ea584395e85de29f64eeef5309f92a1ab9c6098a6d671f525c6bb2f64d7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"aa725ea584395e85de29f64eeef5309f92a1ab9c6098a6d671f525c6bb2f64d7\": not found" Jan 17 01:08:01.771262 kubelet[2038]: E0117 01:08:01.771159 2038 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"aa725ea584395e85de29f64eeef5309f92a1ab9c6098a6d671f525c6bb2f64d7\": not found" containerID="aa725ea584395e85de29f64eeef5309f92a1ab9c6098a6d671f525c6bb2f64d7" Jan 17 01:08:01.771262 kubelet[2038]: I0117 01:08:01.771207 2038 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"aa725ea584395e85de29f64eeef5309f92a1ab9c6098a6d671f525c6bb2f64d7"} err="failed to get container status \"aa725ea584395e85de29f64eeef5309f92a1ab9c6098a6d671f525c6bb2f64d7\": rpc error: code = NotFound desc = an error occurred when try to find container \"aa725ea584395e85de29f64eeef5309f92a1ab9c6098a6d671f525c6bb2f64d7\": not found" Jan 17 01:08:01.771262 kubelet[2038]: I0117 01:08:01.771230 2038 scope.go:117] "RemoveContainer" containerID="940a17686b05d57c67a8a2bcc523223ae9aa91a4425209afd64900c3e167802e" Jan 17 01:08:01.771679 containerd[1616]: time="2026-01-17T01:08:01.771572343Z" level=error 
msg="ContainerStatus for \"940a17686b05d57c67a8a2bcc523223ae9aa91a4425209afd64900c3e167802e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"940a17686b05d57c67a8a2bcc523223ae9aa91a4425209afd64900c3e167802e\": not found" Jan 17 01:08:01.771817 kubelet[2038]: E0117 01:08:01.771785 2038 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"940a17686b05d57c67a8a2bcc523223ae9aa91a4425209afd64900c3e167802e\": not found" containerID="940a17686b05d57c67a8a2bcc523223ae9aa91a4425209afd64900c3e167802e" Jan 17 01:08:01.771889 kubelet[2038]: I0117 01:08:01.771832 2038 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"940a17686b05d57c67a8a2bcc523223ae9aa91a4425209afd64900c3e167802e"} err="failed to get container status \"940a17686b05d57c67a8a2bcc523223ae9aa91a4425209afd64900c3e167802e\": rpc error: code = NotFound desc = an error occurred when try to find container \"940a17686b05d57c67a8a2bcc523223ae9aa91a4425209afd64900c3e167802e\": not found" Jan 17 01:08:01.771889 kubelet[2038]: I0117 01:08:01.771854 2038 scope.go:117] "RemoveContainer" containerID="f1fad73d9a1a7c54157ead6d138f2881bae8d5e805bc0ed0c4816201c2ff9b99" Jan 17 01:08:01.772242 containerd[1616]: time="2026-01-17T01:08:01.772119527Z" level=error msg="ContainerStatus for \"f1fad73d9a1a7c54157ead6d138f2881bae8d5e805bc0ed0c4816201c2ff9b99\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f1fad73d9a1a7c54157ead6d138f2881bae8d5e805bc0ed0c4816201c2ff9b99\": not found" Jan 17 01:08:01.772349 kubelet[2038]: E0117 01:08:01.772295 2038 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f1fad73d9a1a7c54157ead6d138f2881bae8d5e805bc0ed0c4816201c2ff9b99\": not found" containerID="f1fad73d9a1a7c54157ead6d138f2881bae8d5e805bc0ed0c4816201c2ff9b99" Jan 17 01:08:01.772424 kubelet[2038]: I0117 01:08:01.772346 2038 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f1fad73d9a1a7c54157ead6d138f2881bae8d5e805bc0ed0c4816201c2ff9b99"} err="failed to get container status \"f1fad73d9a1a7c54157ead6d138f2881bae8d5e805bc0ed0c4816201c2ff9b99\": rpc error: code = NotFound desc = an error occurred when try to find container \"f1fad73d9a1a7c54157ead6d138f2881bae8d5e805bc0ed0c4816201c2ff9b99\": not found" Jan 17 01:08:01.772424 kubelet[2038]: I0117 01:08:01.772365 2038 scope.go:117] "RemoveContainer" containerID="587c48b826ac234d5d4704e1c2d961c1f1b4270fdb1ca8e1ff44f61a6a84822e" Jan 17 01:08:01.772622 containerd[1616]: time="2026-01-17T01:08:01.772573011Z" level=error msg="ContainerStatus for \"587c48b826ac234d5d4704e1c2d961c1f1b4270fdb1ca8e1ff44f61a6a84822e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"587c48b826ac234d5d4704e1c2d961c1f1b4270fdb1ca8e1ff44f61a6a84822e\": not found" Jan 17 01:08:01.772840 kubelet[2038]: E0117 01:08:01.772813 2038 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"587c48b826ac234d5d4704e1c2d961c1f1b4270fdb1ca8e1ff44f61a6a84822e\": not found" containerID="587c48b826ac234d5d4704e1c2d961c1f1b4270fdb1ca8e1ff44f61a6a84822e" Jan 17 01:08:01.772909 kubelet[2038]: I0117 01:08:01.772851 2038 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"containerd","ID":"587c48b826ac234d5d4704e1c2d961c1f1b4270fdb1ca8e1ff44f61a6a84822e"} err="failed to get container status \"587c48b826ac234d5d4704e1c2d961c1f1b4270fdb1ca8e1ff44f61a6a84822e\": rpc error: code = NotFound desc = an error occurred when try to find container \"587c48b826ac234d5d4704e1c2d961c1f1b4270fdb1ca8e1ff44f61a6a84822e\": not found" Jan 17 01:08:02.360926 kubelet[2038]: E0117 01:08:02.360834 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 01:08:02.452476 kubelet[2038]: I0117 01:08:02.452391 2038 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06329847-eece-4917-964b-7ece0c830f5c" path="/var/lib/kubelet/pods/06329847-eece-4917-964b-7ece0c830f5c/volumes" Jan 17 01:08:03.361899 kubelet[2038]: E0117 01:08:03.361832 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 01:08:04.362904 kubelet[2038]: E0117 01:08:04.362829 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 01:08:04.521808 kubelet[2038]: I0117 01:08:04.520790 2038 memory_manager.go:355] "RemoveStaleState removing state" podUID="06329847-eece-4917-964b-7ece0c830f5c" containerName="cilium-agent" Jan 17 01:08:04.557270 kubelet[2038]: I0117 01:08:04.557163 2038 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/37e6bceb-430d-4884-abba-aaf47fcc6dea-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-tlzhk\" (UID: \"37e6bceb-430d-4884-abba-aaf47fcc6dea\") " pod="kube-system/cilium-operator-6c4d7847fc-tlzhk" Jan 17 01:08:04.557270 kubelet[2038]: I0117 01:08:04.557209 2038 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86lt5\" (UniqueName: \"kubernetes.io/projected/37e6bceb-430d-4884-abba-aaf47fcc6dea-kube-api-access-86lt5\") pod \"cilium-operator-6c4d7847fc-tlzhk\" (UID: \"37e6bceb-430d-4884-abba-aaf47fcc6dea\") " pod="kube-system/cilium-operator-6c4d7847fc-tlzhk" Jan 17 01:08:04.596539 kubelet[2038]: W0117 01:08:04.596494 2038 reflector.go:569] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:10.230.49.38" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '10.230.49.38' and this object Jan 17 01:08:04.596853 kubelet[2038]: I0117 01:08:04.596486 2038 status_manager.go:890] "Failed to get status for pod" podUID="f404b6e5-7906-49d5-acd5-6c20346724ba" pod="kube-system/cilium-wk4r2" err="pods \"cilium-wk4r2\" is forbidden: User \"system:node:10.230.49.38\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node '10.230.49.38' and this object" Jan 17 01:08:04.596853 kubelet[2038]: E0117 01:08:04.596642 2038 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"hubble-server-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"hubble-server-certs\" is forbidden: User \"system:node:10.230.49.38\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node '10.230.49.38' and this object" logger="UnhandledError" Jan 17 01:08:04.598549 kubelet[2038]: W0117 01:08:04.598524 2038 
reflector.go:569] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:10.230.49.38" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '10.230.49.38' and this object Jan 17 01:08:04.598647 kubelet[2038]: E0117 01:08:04.598565 2038 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-ipsec-keys\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-ipsec-keys\" is forbidden: User \"system:node:10.230.49.38\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node '10.230.49.38' and this object" logger="UnhandledError" Jan 17 01:08:04.598647 kubelet[2038]: W0117 01:08:04.598524 2038 reflector.go:569] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:10.230.49.38" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '10.230.49.38' and this object Jan 17 01:08:04.598647 kubelet[2038]: E0117 01:08:04.598610 2038 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-clustermesh\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:10.230.49.38\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node '10.230.49.38' and this object" logger="UnhandledError" Jan 17 01:08:04.658229 kubelet[2038]: I0117 01:08:04.657423 2038 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f404b6e5-7906-49d5-acd5-6c20346724ba-host-proc-sys-net\") pod \"cilium-wk4r2\" (UID: \"f404b6e5-7906-49d5-acd5-6c20346724ba\") " pod="kube-system/cilium-wk4r2" Jan 17 01:08:04.658229 kubelet[2038]: I0117 01:08:04.657483 2038 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f404b6e5-7906-49d5-acd5-6c20346724ba-hubble-tls\") pod \"cilium-wk4r2\" (UID: \"f404b6e5-7906-49d5-acd5-6c20346724ba\") " pod="kube-system/cilium-wk4r2" Jan 17 01:08:04.658229 kubelet[2038]: I0117 01:08:04.657510 2038 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f404b6e5-7906-49d5-acd5-6c20346724ba-cilium-run\") pod \"cilium-wk4r2\" (UID: \"f404b6e5-7906-49d5-acd5-6c20346724ba\") " pod="kube-system/cilium-wk4r2" Jan 17 01:08:04.658229 kubelet[2038]: I0117 01:08:04.657544 2038 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f404b6e5-7906-49d5-acd5-6c20346724ba-cni-path\") pod \"cilium-wk4r2\" (UID: \"f404b6e5-7906-49d5-acd5-6c20346724ba\") " pod="kube-system/cilium-wk4r2" Jan 17 01:08:04.658229 kubelet[2038]: I0117 01:08:04.657570 2038 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f404b6e5-7906-49d5-acd5-6c20346724ba-etc-cni-netd\") pod \"cilium-wk4r2\" (UID: \"f404b6e5-7906-49d5-acd5-6c20346724ba\") " pod="kube-system/cilium-wk4r2" Jan 17 01:08:04.658229 kubelet[2038]: I0117 01:08:04.657650 2038 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f404b6e5-7906-49d5-acd5-6c20346724ba-cilium-cgroup\") pod \"cilium-wk4r2\" (UID: \"f404b6e5-7906-49d5-acd5-6c20346724ba\") " pod="kube-system/cilium-wk4r2" Jan 17 01:08:04.658647 kubelet[2038]: I0117 01:08:04.657699 2038 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f404b6e5-7906-49d5-acd5-6c20346724ba-host-proc-sys-kernel\") pod \"cilium-wk4r2\" (UID: \"f404b6e5-7906-49d5-acd5-6c20346724ba\") " pod="kube-system/cilium-wk4r2" Jan 17 01:08:04.658647 kubelet[2038]: I0117 01:08:04.657780 2038 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpcbn\" (UniqueName: \"kubernetes.io/projected/f404b6e5-7906-49d5-acd5-6c20346724ba-kube-api-access-zpcbn\") pod \"cilium-wk4r2\" (UID: \"f404b6e5-7906-49d5-acd5-6c20346724ba\") " pod="kube-system/cilium-wk4r2" Jan 17 01:08:04.658647 kubelet[2038]: I0117 01:08:04.657809 2038 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f404b6e5-7906-49d5-acd5-6c20346724ba-hostproc\") pod \"cilium-wk4r2\" (UID: \"f404b6e5-7906-49d5-acd5-6c20346724ba\") " pod="kube-system/cilium-wk4r2" Jan 17 01:08:04.658647 kubelet[2038]: I0117 01:08:04.657838 2038 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f404b6e5-7906-49d5-acd5-6c20346724ba-xtables-lock\") pod \"cilium-wk4r2\" (UID: \"f404b6e5-7906-49d5-acd5-6c20346724ba\") " pod="kube-system/cilium-wk4r2" Jan 17 01:08:04.658647 kubelet[2038]: I0117 01:08:04.657866 2038 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/f404b6e5-7906-49d5-acd5-6c20346724ba-cilium-ipsec-secrets\") pod \"cilium-wk4r2\" (UID: \"f404b6e5-7906-49d5-acd5-6c20346724ba\") " pod="kube-system/cilium-wk4r2" Jan 17 01:08:04.658647 kubelet[2038]: I0117 01:08:04.657891 2038 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f404b6e5-7906-49d5-acd5-6c20346724ba-bpf-maps\") pod \"cilium-wk4r2\" (UID: \"f404b6e5-7906-49d5-acd5-6c20346724ba\") " pod="kube-system/cilium-wk4r2" Jan 17 01:08:04.658960 kubelet[2038]: I0117 01:08:04.657914 2038 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f404b6e5-7906-49d5-acd5-6c20346724ba-lib-modules\") pod \"cilium-wk4r2\" (UID: \"f404b6e5-7906-49d5-acd5-6c20346724ba\") " pod="kube-system/cilium-wk4r2" Jan 17 01:08:04.658960 kubelet[2038]: I0117 01:08:04.657939 2038 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f404b6e5-7906-49d5-acd5-6c20346724ba-clustermesh-secrets\") pod \"cilium-wk4r2\" (UID: \"f404b6e5-7906-49d5-acd5-6c20346724ba\") " pod="kube-system/cilium-wk4r2" Jan 17 01:08:04.658960 kubelet[2038]: I0117 01:08:04.657963 2038 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/f404b6e5-7906-49d5-acd5-6c20346724ba-cilium-config-path\") pod \"cilium-wk4r2\" (UID: \"f404b6e5-7906-49d5-acd5-6c20346724ba\") " pod="kube-system/cilium-wk4r2" Jan 17 01:08:04.827183 containerd[1616]: time="2026-01-17T01:08:04.827109400Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-tlzhk,Uid:37e6bceb-430d-4884-abba-aaf47fcc6dea,Namespace:kube-system,Attempt:0,}" Jan 17 01:08:04.855538 containerd[1616]: time="2026-01-17T01:08:04.855167552Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 01:08:04.855538 containerd[1616]: time="2026-01-17T01:08:04.855250372Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 01:08:04.855538 containerd[1616]: time="2026-01-17T01:08:04.855274777Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 01:08:04.855538 containerd[1616]: time="2026-01-17T01:08:04.855414058Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 01:08:04.936118 containerd[1616]: time="2026-01-17T01:08:04.935943721Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-tlzhk,Uid:37e6bceb-430d-4884-abba-aaf47fcc6dea,Namespace:kube-system,Attempt:0,} returns sandbox id \"8a9cd02737bdd40812a107ab35bd0095388e9fc3735b91ba3c20f5362f1f9d21\"" Jan 17 01:08:04.939932 containerd[1616]: time="2026-01-17T01:08:04.939897884Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 17 01:08:05.363311 kubelet[2038]: E0117 01:08:05.363225 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 01:08:05.432090 kubelet[2038]: E0117 01:08:05.432012 2038 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 17 01:08:05.760746 kubelet[2038]: E0117 01:08:05.760086 2038 projected.go:263] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition Jan 17 01:08:05.760746 kubelet[2038]: E0117 01:08:05.760147 2038 projected.go:194] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-wk4r2: failed to sync secret cache: timed out waiting for the condition Jan 17 01:08:05.760746 kubelet[2038]: E0117 01:08:05.760282 2038 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f404b6e5-7906-49d5-acd5-6c20346724ba-hubble-tls podName:f404b6e5-7906-49d5-acd5-6c20346724ba nodeName:}" failed. No retries permitted until 2026-01-17 01:08:06.260243185 +0000 UTC m=+76.626810372 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/f404b6e5-7906-49d5-acd5-6c20346724ba-hubble-tls") pod "cilium-wk4r2" (UID: "f404b6e5-7906-49d5-acd5-6c20346724ba") : failed to sync secret cache: timed out waiting for the condition Jan 17 01:08:05.761229 kubelet[2038]: E0117 01:08:05.761099 2038 secret.go:189] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition Jan 17 01:08:05.761229 kubelet[2038]: E0117 01:08:05.761204 2038 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f404b6e5-7906-49d5-acd5-6c20346724ba-clustermesh-secrets podName:f404b6e5-7906-49d5-acd5-6c20346724ba nodeName:}" failed. No retries permitted until 2026-01-17 01:08:06.261177742 +0000 UTC m=+76.627744920 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/f404b6e5-7906-49d5-acd5-6c20346724ba-clustermesh-secrets") pod "cilium-wk4r2" (UID: "f404b6e5-7906-49d5-acd5-6c20346724ba") : failed to sync secret cache: timed out waiting for the condition Jan 17 01:08:06.364211 kubelet[2038]: E0117 01:08:06.364135 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 01:08:06.395196 containerd[1616]: time="2026-01-17T01:08:06.394691448Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wk4r2,Uid:f404b6e5-7906-49d5-acd5-6c20346724ba,Namespace:kube-system,Attempt:0,}" Jan 17 01:08:06.437101 containerd[1616]: time="2026-01-17T01:08:06.436960039Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 01:08:06.437101 containerd[1616]: time="2026-01-17T01:08:06.437036399Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 01:08:06.437101 containerd[1616]: time="2026-01-17T01:08:06.437053474Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 01:08:06.437647 containerd[1616]: time="2026-01-17T01:08:06.437277144Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 01:08:06.502437 containerd[1616]: time="2026-01-17T01:08:06.502270060Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wk4r2,Uid:f404b6e5-7906-49d5-acd5-6c20346724ba,Namespace:kube-system,Attempt:0,} returns sandbox id \"76005385e9807bf7fc808c6bec54f79ed8dfcc0bfb1fecbe76643704c7aee198\"" Jan 17 01:08:06.506046 containerd[1616]: time="2026-01-17T01:08:06.505894994Z" level=info msg="CreateContainer within sandbox \"76005385e9807bf7fc808c6bec54f79ed8dfcc0bfb1fecbe76643704c7aee198\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 17 01:08:06.528389 containerd[1616]: time="2026-01-17T01:08:06.528351657Z" level=info msg="CreateContainer within sandbox \"76005385e9807bf7fc808c6bec54f79ed8dfcc0bfb1fecbe76643704c7aee198\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"f20f24a1eb802af5b8f0572b4d54446c9533f58bb24593cee9857ecb072e9274\"" Jan 17 01:08:06.529418 containerd[1616]: time="2026-01-17T01:08:06.529392295Z" level=info msg="StartContainer for \"f20f24a1eb802af5b8f0572b4d54446c9533f58bb24593cee9857ecb072e9274\"" Jan 17 01:08:06.603079 containerd[1616]: time="2026-01-17T01:08:06.603033009Z" level=info msg="StartContainer for \"f20f24a1eb802af5b8f0572b4d54446c9533f58bb24593cee9857ecb072e9274\" returns successfully" Jan 17 01:08:06.668382 containerd[1616]: time="2026-01-17T01:08:06.667902866Z" level=info msg="shim disconnected" id=f20f24a1eb802af5b8f0572b4d54446c9533f58bb24593cee9857ecb072e9274 namespace=k8s.io Jan 17 01:08:06.668382 containerd[1616]: time="2026-01-17T01:08:06.667976540Z" level=warning msg="cleaning up after shim disconnected" id=f20f24a1eb802af5b8f0572b4d54446c9533f58bb24593cee9857ecb072e9274 namespace=k8s.io Jan 17 01:08:06.668382 containerd[1616]: time="2026-01-17T01:08:06.668001600Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 01:08:06.747503 containerd[1616]: time="2026-01-17T01:08:06.747230869Z" level=info msg="CreateContainer within sandbox \"76005385e9807bf7fc808c6bec54f79ed8dfcc0bfb1fecbe76643704c7aee198\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 17 01:08:06.764416 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4008648270.mount: Deactivated successfully. 
Jan 17 01:08:06.766214 containerd[1616]: time="2026-01-17T01:08:06.766042995Z" level=info msg="CreateContainer within sandbox \"76005385e9807bf7fc808c6bec54f79ed8dfcc0bfb1fecbe76643704c7aee198\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"8b73100a20a5866e5c46b4c5bb9a2331357bc1e6c19360fe4a283f105c9e5836\""
Jan 17 01:08:06.767432 containerd[1616]: time="2026-01-17T01:08:06.767403844Z" level=info msg="StartContainer for \"8b73100a20a5866e5c46b4c5bb9a2331357bc1e6c19360fe4a283f105c9e5836\""
Jan 17 01:08:06.865167 containerd[1616]: time="2026-01-17T01:08:06.865080900Z" level=info msg="StartContainer for \"8b73100a20a5866e5c46b4c5bb9a2331357bc1e6c19360fe4a283f105c9e5836\" returns successfully"
Jan 17 01:08:06.913297 containerd[1616]: time="2026-01-17T01:08:06.912943243Z" level=info msg="shim disconnected" id=8b73100a20a5866e5c46b4c5bb9a2331357bc1e6c19360fe4a283f105c9e5836 namespace=k8s.io
Jan 17 01:08:06.913297 containerd[1616]: time="2026-01-17T01:08:06.913063227Z" level=warning msg="cleaning up after shim disconnected" id=8b73100a20a5866e5c46b4c5bb9a2331357bc1e6c19360fe4a283f105c9e5836 namespace=k8s.io
Jan 17 01:08:06.913297 containerd[1616]: time="2026-01-17T01:08:06.913080458Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 01:08:07.364928 kubelet[2038]: E0117 01:08:07.364838 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 01:08:07.677953 systemd[1]: run-containerd-runc-k8s.io-8b73100a20a5866e5c46b4c5bb9a2331357bc1e6c19360fe4a283f105c9e5836-runc.BDYAkO.mount: Deactivated successfully.
Jan 17 01:08:07.678169 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8b73100a20a5866e5c46b4c5bb9a2331357bc1e6c19360fe4a283f105c9e5836-rootfs.mount: Deactivated successfully.
Jan 17 01:08:07.755931 containerd[1616]: time="2026-01-17T01:08:07.755331271Z" level=info msg="CreateContainer within sandbox \"76005385e9807bf7fc808c6bec54f79ed8dfcc0bfb1fecbe76643704c7aee198\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 17 01:08:07.792087 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount345089179.mount: Deactivated successfully.
Jan 17 01:08:07.811875 containerd[1616]: time="2026-01-17T01:08:07.811377542Z" level=info msg="CreateContainer within sandbox \"76005385e9807bf7fc808c6bec54f79ed8dfcc0bfb1fecbe76643704c7aee198\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"8b48ceb88c247280796dbce04fc9e6526d8b100b28593ad4ba327cca62faf40c\""
Jan 17 01:08:07.813955 containerd[1616]: time="2026-01-17T01:08:07.813912582Z" level=info msg="StartContainer for \"8b48ceb88c247280796dbce04fc9e6526d8b100b28593ad4ba327cca62faf40c\""
Jan 17 01:08:08.042105 containerd[1616]: time="2026-01-17T01:08:08.041661943Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 01:08:08.045417 containerd[1616]: time="2026-01-17T01:08:08.044907958Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
Jan 17 01:08:08.048861 containerd[1616]: time="2026-01-17T01:08:08.048708189Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 01:08:08.053276 containerd[1616]: time="2026-01-17T01:08:08.053230365Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 3.113237343s"
Jan 17 01:08:08.053372 containerd[1616]: time="2026-01-17T01:08:08.053283956Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Jan 17 01:08:08.057027 containerd[1616]: time="2026-01-17T01:08:08.056970083Z" level=info msg="CreateContainer within sandbox \"8a9cd02737bdd40812a107ab35bd0095388e9fc3735b91ba3c20f5362f1f9d21\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Jan 17 01:08:08.070457 containerd[1616]: time="2026-01-17T01:08:08.069754160Z" level=info msg="StartContainer for \"8b48ceb88c247280796dbce04fc9e6526d8b100b28593ad4ba327cca62faf40c\" returns successfully"
Jan 17 01:08:08.074010 containerd[1616]: time="2026-01-17T01:08:08.073976928Z" level=info msg="CreateContainer within sandbox \"8a9cd02737bdd40812a107ab35bd0095388e9fc3735b91ba3c20f5362f1f9d21\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"09b6c50d138ab0501c76ade3445a7a1ee0621ddf652fbb2a3203320cd68d5a9a\""
Jan 17 01:08:08.074688 containerd[1616]: time="2026-01-17T01:08:08.074655786Z" level=info msg="StartContainer for \"09b6c50d138ab0501c76ade3445a7a1ee0621ddf652fbb2a3203320cd68d5a9a\""
Jan 17 01:08:08.118540 containerd[1616]: time="2026-01-17T01:08:08.118297305Z" level=info msg="shim disconnected" id=8b48ceb88c247280796dbce04fc9e6526d8b100b28593ad4ba327cca62faf40c namespace=k8s.io
Jan 17 01:08:08.118540 containerd[1616]: time="2026-01-17T01:08:08.118421944Z" level=warning msg="cleaning up after shim disconnected" id=8b48ceb88c247280796dbce04fc9e6526d8b100b28593ad4ba327cca62faf40c namespace=k8s.io
Jan 17 01:08:08.118540 containerd[1616]: time="2026-01-17T01:08:08.118472932Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 01:08:08.165658 containerd[1616]: time="2026-01-17T01:08:08.165609241Z" level=info msg="StartContainer for \"09b6c50d138ab0501c76ade3445a7a1ee0621ddf652fbb2a3203320cd68d5a9a\" returns successfully"
Jan 17 01:08:08.365943 kubelet[2038]: E0117 01:08:08.365899 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 01:08:08.678635 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8b48ceb88c247280796dbce04fc9e6526d8b100b28593ad4ba327cca62faf40c-rootfs.mount: Deactivated successfully.
Jan 17 01:08:08.759047 containerd[1616]: time="2026-01-17T01:08:08.758996341Z" level=info msg="CreateContainer within sandbox \"76005385e9807bf7fc808c6bec54f79ed8dfcc0bfb1fecbe76643704c7aee198\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 17 01:08:08.768247 kubelet[2038]: I0117 01:08:08.767992 2038 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-tlzhk" podStartSLOduration=1.652530064 podStartE2EDuration="4.767968806s" podCreationTimestamp="2026-01-17 01:08:04 +0000 UTC" firstStartedPulling="2026-01-17 01:08:04.938584243 +0000 UTC m=+75.305151423" lastFinishedPulling="2026-01-17 01:08:08.054022973 +0000 UTC m=+78.420590165" observedRunningTime="2026-01-17 01:08:08.767379629 +0000 UTC m=+79.133946834" watchObservedRunningTime="2026-01-17 01:08:08.767968806 +0000 UTC m=+79.134536007"
Jan 17 01:08:08.776454 containerd[1616]: time="2026-01-17T01:08:08.776297527Z" level=info msg="CreateContainer within sandbox \"76005385e9807bf7fc808c6bec54f79ed8dfcc0bfb1fecbe76643704c7aee198\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"b4f7836accf302d1335e30678745c67ffcc4c03689035b70a469c186079d6b07\""
Jan 17 01:08:08.778515 containerd[1616]: time="2026-01-17T01:08:08.777345065Z" level=info msg="StartContainer for \"b4f7836accf302d1335e30678745c67ffcc4c03689035b70a469c186079d6b07\""
Jan 17 01:08:08.823469 systemd[1]: run-containerd-runc-k8s.io-b4f7836accf302d1335e30678745c67ffcc4c03689035b70a469c186079d6b07-runc.Fsz8cV.mount: Deactivated successfully.
Jan 17 01:08:08.862610 containerd[1616]: time="2026-01-17T01:08:08.862443878Z" level=info msg="StartContainer for \"b4f7836accf302d1335e30678745c67ffcc4c03689035b70a469c186079d6b07\" returns successfully"
Jan 17 01:08:08.889243 containerd[1616]: time="2026-01-17T01:08:08.888943216Z" level=info msg="shim disconnected" id=b4f7836accf302d1335e30678745c67ffcc4c03689035b70a469c186079d6b07 namespace=k8s.io
Jan 17 01:08:08.889243 containerd[1616]: time="2026-01-17T01:08:08.889076639Z" level=warning msg="cleaning up after shim disconnected" id=b4f7836accf302d1335e30678745c67ffcc4c03689035b70a469c186079d6b07 namespace=k8s.io
Jan 17 01:08:08.889243 containerd[1616]: time="2026-01-17T01:08:08.889095009Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 01:08:09.366284 kubelet[2038]: E0117 01:08:09.366227 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 01:08:09.676856 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b4f7836accf302d1335e30678745c67ffcc4c03689035b70a469c186079d6b07-rootfs.mount: Deactivated successfully.
Jan 17 01:08:09.764982 containerd[1616]: time="2026-01-17T01:08:09.764921896Z" level=info msg="CreateContainer within sandbox \"76005385e9807bf7fc808c6bec54f79ed8dfcc0bfb1fecbe76643704c7aee198\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 17 01:08:09.780645 containerd[1616]: time="2026-01-17T01:08:09.780016821Z" level=info msg="CreateContainer within sandbox \"76005385e9807bf7fc808c6bec54f79ed8dfcc0bfb1fecbe76643704c7aee198\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"feef80a12014e2fb055a97d19d38b4cc72398236d867c69e53f77429a5827f84\""
Jan 17 01:08:09.780994 containerd[1616]: time="2026-01-17T01:08:09.780910082Z" level=info msg="StartContainer for \"feef80a12014e2fb055a97d19d38b4cc72398236d867c69e53f77429a5827f84\""
Jan 17 01:08:09.853025 containerd[1616]: time="2026-01-17T01:08:09.852978383Z" level=info msg="StartContainer for \"feef80a12014e2fb055a97d19d38b4cc72398236d867c69e53f77429a5827f84\" returns successfully"
Jan 17 01:08:10.303931 kubelet[2038]: E0117 01:08:10.303852 2038 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 01:08:10.367266 kubelet[2038]: E0117 01:08:10.367211 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 01:08:10.552793 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Jan 17 01:08:10.677759 systemd[1]: run-containerd-runc-k8s.io-feef80a12014e2fb055a97d19d38b4cc72398236d867c69e53f77429a5827f84-runc.yTimrD.mount: Deactivated successfully.
Jan 17 01:08:10.791504 kubelet[2038]: I0117 01:08:10.790978 2038 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-wk4r2" podStartSLOduration=6.790957215 podStartE2EDuration="6.790957215s" podCreationTimestamp="2026-01-17 01:08:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 01:08:10.790430355 +0000 UTC m=+81.156997577" watchObservedRunningTime="2026-01-17 01:08:10.790957215 +0000 UTC m=+81.157524415"
Jan 17 01:08:11.321043 systemd[1]: run-containerd-runc-k8s.io-feef80a12014e2fb055a97d19d38b4cc72398236d867c69e53f77429a5827f84-runc.UZSLkK.mount: Deactivated successfully.
Jan 17 01:08:11.368245 kubelet[2038]: E0117 01:08:11.368178 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 01:08:12.368927 kubelet[2038]: E0117 01:08:12.368866 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 01:08:13.370118 kubelet[2038]: E0117 01:08:13.370044 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 01:08:14.061586 systemd-networkd[1258]: lxc_health: Link UP
Jan 17 01:08:14.067424 systemd-networkd[1258]: lxc_health: Gained carrier
Jan 17 01:08:14.370693 kubelet[2038]: E0117 01:08:14.370614 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 01:08:15.220087 systemd-networkd[1258]: lxc_health: Gained IPv6LL
Jan 17 01:08:15.371092 kubelet[2038]: E0117 01:08:15.370995 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 01:08:16.215811 systemd[1]: run-containerd-runc-k8s.io-feef80a12014e2fb055a97d19d38b4cc72398236d867c69e53f77429a5827f84-runc.09cuP6.mount: Deactivated successfully.
Jan 17 01:08:16.372130 kubelet[2038]: E0117 01:08:16.372048 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 01:08:17.373221 kubelet[2038]: E0117 01:08:17.373159 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 01:08:18.375038 kubelet[2038]: E0117 01:08:18.373993 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 01:08:19.374156 kubelet[2038]: E0117 01:08:19.374107 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 01:08:20.375389 kubelet[2038]: E0117 01:08:20.375289 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 01:08:21.376540 kubelet[2038]: E0117 01:08:21.376464 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 01:08:22.377545 kubelet[2038]: E0117 01:08:22.377475 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 01:08:23.378700 kubelet[2038]: E0117 01:08:23.378589 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 01:08:24.379541 kubelet[2038]: E0117 01:08:24.379458 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 01:08:25.380262 kubelet[2038]: E0117 01:08:25.380155 2038 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"