Jan 17 00:42:29.030774 kernel: Linux version 6.6.119-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Jan 16 22:25:55 -00 2026
Jan 17 00:42:29.030818 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=5950c0a3c50d11b7bc07a3e3bf06049ed0b5a605b5e0b52a981b78f1c63eeedd
Jan 17 00:42:29.030830 kernel: BIOS-provided physical RAM map:
Jan 17 00:42:29.030844 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 17 00:42:29.030853 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 17 00:42:29.030862 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 17 00:42:29.030872 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable
Jan 17 00:42:29.030881 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved
Jan 17 00:42:29.030890 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jan 17 00:42:29.030899 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jan 17 00:42:29.030908 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 17 00:42:29.030917 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 17 00:42:29.030942 kernel: NX (Execute Disable) protection: active
Jan 17 00:42:29.030953 kernel: APIC: Static calls initialized
Jan 17 00:42:29.030964 kernel: SMBIOS 2.8 present.
Jan 17 00:42:29.030980 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.16.0-3.module_el8.7.0+3346+68867adb 04/01/2014
Jan 17 00:42:29.030991 kernel: Hypervisor detected: KVM
Jan 17 00:42:29.031006 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 17 00:42:29.031016 kernel: kvm-clock: using sched offset of 4933806858 cycles
Jan 17 00:42:29.031027 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 17 00:42:29.031037 kernel: tsc: Detected 2799.998 MHz processor
Jan 17 00:42:29.031048 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 17 00:42:29.031058 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 17 00:42:29.031441 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000
Jan 17 00:42:29.031461 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 17 00:42:29.031472 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 17 00:42:29.031490 kernel: Using GB pages for direct mapping
Jan 17 00:42:29.031501 kernel: ACPI: Early table checksum verification disabled
Jan 17 00:42:29.031512 kernel: ACPI: RSDP 0x00000000000F59E0 000014 (v00 BOCHS )
Jan 17 00:42:29.031522 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 00:42:29.031533 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 00:42:29.031544 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 00:42:29.031555 kernel: ACPI: FACS 0x000000007FFDFD40 000040
Jan 17 00:42:29.031566 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 00:42:29.031576 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 00:42:29.031592 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 00:42:29.031603 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 00:42:29.031614 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480]
Jan 17 00:42:29.031625 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c]
Jan 17 00:42:29.031636 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f]
Jan 17 00:42:29.031653 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570]
Jan 17 00:42:29.031664 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740]
Jan 17 00:42:29.031680 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c]
Jan 17 00:42:29.031691 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4]
Jan 17 00:42:29.031703 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jan 17 00:42:29.031721 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jan 17 00:42:29.031733 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0
Jan 17 00:42:29.031745 kernel: SRAT: PXM 0 -> APIC 0x03 -> Node 0
Jan 17 00:42:29.031756 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0
Jan 17 00:42:29.031772 kernel: SRAT: PXM 0 -> APIC 0x05 -> Node 0
Jan 17 00:42:29.031784 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0
Jan 17 00:42:29.031795 kernel: SRAT: PXM 0 -> APIC 0x07 -> Node 0
Jan 17 00:42:29.031806 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0
Jan 17 00:42:29.031817 kernel: SRAT: PXM 0 -> APIC 0x09 -> Node 0
Jan 17 00:42:29.031828 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0
Jan 17 00:42:29.031839 kernel: SRAT: PXM 0 -> APIC 0x0b -> Node 0
Jan 17 00:42:29.031862 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0
Jan 17 00:42:29.031872 kernel: SRAT: PXM 0 -> APIC 0x0d -> Node 0
Jan 17 00:42:29.031891 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0
Jan 17 00:42:29.031921 kernel: SRAT: PXM 0 -> APIC 0x0f -> Node 0
Jan 17 00:42:29.031931 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Jan 17 00:42:29.031942 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Jan 17 00:42:29.031953 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug
Jan 17 00:42:29.031963 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00000000-0x7ffdbfff]
Jan 17 00:42:29.031974 kernel: NODE_DATA(0) allocated [mem 0x7ffd6000-0x7ffdbfff]
Jan 17 00:42:29.031985 kernel: Zone ranges:
Jan 17 00:42:29.032008 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 17 00:42:29.032019 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff]
Jan 17 00:42:29.032034 kernel: Normal empty
Jan 17 00:42:29.032046 kernel: Movable zone start for each node
Jan 17 00:42:29.032056 kernel: Early memory node ranges
Jan 17 00:42:29.032067 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 17 00:42:29.033227 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff]
Jan 17 00:42:29.033242 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff]
Jan 17 00:42:29.033254 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 17 00:42:29.033265 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 17 00:42:29.033288 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges
Jan 17 00:42:29.033301 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 17 00:42:29.033321 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 17 00:42:29.033333 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 17 00:42:29.033344 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 17 00:42:29.033355 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 17 00:42:29.033367 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 17 00:42:29.033378 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 17 00:42:29.033389 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 17 00:42:29.033401 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 17 00:42:29.033412 kernel: TSC deadline timer available
Jan 17 00:42:29.033428 kernel: smpboot: Allowing 16 CPUs, 14 hotplug CPUs
Jan 17 00:42:29.033440 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 17 00:42:29.033451 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jan 17 00:42:29.033462 kernel: Booting paravirtualized kernel on KVM
Jan 17 00:42:29.033474 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 17 00:42:29.033485 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1
Jan 17 00:42:29.033496 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u262144
Jan 17 00:42:29.033508 kernel: pcpu-alloc: s196328 r8192 d28952 u262144 alloc=1*2097152
Jan 17 00:42:29.033524 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Jan 17 00:42:29.033535 kernel: kvm-guest: PV spinlocks enabled
Jan 17 00:42:29.033546 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 17 00:42:29.033559 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=5950c0a3c50d11b7bc07a3e3bf06049ed0b5a605b5e0b52a981b78f1c63eeedd
Jan 17 00:42:29.033571 kernel: random: crng init done
Jan 17 00:42:29.033583 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 17 00:42:29.033607 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 17 00:42:29.033618 kernel: Fallback order for Node 0: 0
Jan 17 00:42:29.033628 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515804
Jan 17 00:42:29.033644 kernel: Policy zone: DMA32
Jan 17 00:42:29.033671 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 17 00:42:29.033682 kernel: software IO TLB: area num 16.
Jan 17 00:42:29.033693 kernel: Memory: 1901596K/2096616K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42884K init, 2312K bss, 194760K reserved, 0K cma-reserved)
Jan 17 00:42:29.033704 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Jan 17 00:42:29.033714 kernel: Kernel/User page tables isolation: enabled
Jan 17 00:42:29.033724 kernel: ftrace: allocating 37989 entries in 149 pages
Jan 17 00:42:29.033734 kernel: ftrace: allocated 149 pages with 4 groups
Jan 17 00:42:29.033750 kernel: Dynamic Preempt: voluntary
Jan 17 00:42:29.033760 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 17 00:42:29.033771 kernel: rcu: RCU event tracing is enabled.
Jan 17 00:42:29.033782 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Jan 17 00:42:29.033792 kernel: Trampoline variant of Tasks RCU enabled.
Jan 17 00:42:29.033803 kernel: Rude variant of Tasks RCU enabled.
Jan 17 00:42:29.033824 kernel: Tracing variant of Tasks RCU enabled.
Jan 17 00:42:29.033839 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 17 00:42:29.033851 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Jan 17 00:42:29.033861 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16
Jan 17 00:42:29.033872 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 17 00:42:29.033895 kernel: Console: colour VGA+ 80x25
Jan 17 00:42:29.033910 kernel: printk: console [tty0] enabled
Jan 17 00:42:29.033922 kernel: printk: console [ttyS0] enabled
Jan 17 00:42:29.033933 kernel: ACPI: Core revision 20230628
Jan 17 00:42:29.033944 kernel: APIC: Switch to symmetric I/O mode setup
Jan 17 00:42:29.033956 kernel: x2apic enabled
Jan 17 00:42:29.033971 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 17 00:42:29.033982 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x285c3ee517e, max_idle_ns: 440795257231 ns
Jan 17 00:42:29.033999 kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998)
Jan 17 00:42:29.034011 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 17 00:42:29.034022 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Jan 17 00:42:29.034033 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Jan 17 00:42:29.034044 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 17 00:42:29.034055 kernel: Spectre V2 : Mitigation: Retpolines
Jan 17 00:42:29.034066 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 17 00:42:29.035629 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Jan 17 00:42:29.035649 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 17 00:42:29.035662 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 17 00:42:29.035674 kernel: MDS: Mitigation: Clear CPU buffers
Jan 17 00:42:29.035698 kernel: MMIO Stale Data: Unknown: No mitigations
Jan 17 00:42:29.035709 kernel: SRBDS: Unknown: Dependent on hypervisor status
Jan 17 00:42:29.035720 kernel: active return thunk: its_return_thunk
Jan 17 00:42:29.035731 kernel: ITS: Mitigation: Aligned branch/return thunks
Jan 17 00:42:29.035742 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 17 00:42:29.035765 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 17 00:42:29.035777 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 17 00:42:29.035788 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 17 00:42:29.035805 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Jan 17 00:42:29.035829 kernel: Freeing SMP alternatives memory: 32K
Jan 17 00:42:29.035847 kernel: pid_max: default: 32768 minimum: 301
Jan 17 00:42:29.035860 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 17 00:42:29.035872 kernel: landlock: Up and running.
Jan 17 00:42:29.035883 kernel: SELinux: Initializing.
Jan 17 00:42:29.035895 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 17 00:42:29.035907 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 17 00:42:29.035919 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9)
Jan 17 00:42:29.035931 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Jan 17 00:42:29.035943 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Jan 17 00:42:29.035961 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Jan 17 00:42:29.035973 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only.
Jan 17 00:42:29.035985 kernel: signal: max sigframe size: 1776
Jan 17 00:42:29.035997 kernel: rcu: Hierarchical SRCU implementation.
Jan 17 00:42:29.036010 kernel: rcu: Max phase no-delay instances is 400.
Jan 17 00:42:29.036022 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 17 00:42:29.036033 kernel: smp: Bringing up secondary CPUs ...
Jan 17 00:42:29.036045 kernel: smpboot: x86: Booting SMP configuration:
Jan 17 00:42:29.036057 kernel: .... node #0, CPUs: #1
Jan 17 00:42:29.036120 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1
Jan 17 00:42:29.036133 kernel: smp: Brought up 1 node, 2 CPUs
Jan 17 00:42:29.036145 kernel: smpboot: Max logical packages: 16
Jan 17 00:42:29.036157 kernel: smpboot: Total of 2 processors activated (11199.99 BogoMIPS)
Jan 17 00:42:29.036169 kernel: devtmpfs: initialized
Jan 17 00:42:29.036181 kernel: x86/mm: Memory block size: 128MB
Jan 17 00:42:29.036199 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 17 00:42:29.036213 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Jan 17 00:42:29.036225 kernel: pinctrl core: initialized pinctrl subsystem
Jan 17 00:42:29.036243 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 17 00:42:29.036255 kernel: audit: initializing netlink subsys (disabled)
Jan 17 00:42:29.036268 kernel: audit: type=2000 audit(1768610547.442:1): state=initialized audit_enabled=0 res=1
Jan 17 00:42:29.036279 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 17 00:42:29.036292 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 17 00:42:29.036304 kernel: cpuidle: using governor menu
Jan 17 00:42:29.036316 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 17 00:42:29.036327 kernel: dca service started, version 1.12.1
Jan 17 00:42:29.036339 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Jan 17 00:42:29.036357 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jan 17 00:42:29.036369 kernel: PCI: Using configuration type 1 for base access
Jan 17 00:42:29.036381 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 17 00:42:29.036393 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 17 00:42:29.036405 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 17 00:42:29.036416 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 17 00:42:29.036428 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 17 00:42:29.036440 kernel: ACPI: Added _OSI(Module Device)
Jan 17 00:42:29.036452 kernel: ACPI: Added _OSI(Processor Device)
Jan 17 00:42:29.036469 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 17 00:42:29.036481 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 17 00:42:29.036493 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 17 00:42:29.036504 kernel: ACPI: Interpreter enabled
Jan 17 00:42:29.036516 kernel: ACPI: PM: (supports S0 S5)
Jan 17 00:42:29.036528 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 17 00:42:29.036540 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 17 00:42:29.036552 kernel: PCI: Using E820 reservations for host bridge windows
Jan 17 00:42:29.036564 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 17 00:42:29.036581 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 17 00:42:29.036841 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 17 00:42:29.037053 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 17 00:42:29.037739 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 17 00:42:29.037759 kernel: PCI host bridge to bus 0000:00
Jan 17 00:42:29.037930 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 17 00:42:29.038116 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 17 00:42:29.038279 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 17 00:42:29.038441 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Jan 17 00:42:29.038644 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 17 00:42:29.038795 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window]
Jan 17 00:42:29.038964 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 17 00:42:29.040154 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Jan 17 00:42:29.040378 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000
Jan 17 00:42:29.040548 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfa000000-0xfbffffff pref]
Jan 17 00:42:29.040714 kernel: pci 0000:00:01.0: reg 0x14: [mem 0xfea50000-0xfea50fff]
Jan 17 00:42:29.040878 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea40000-0xfea4ffff pref]
Jan 17 00:42:29.041046 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 17 00:42:29.041291 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Jan 17 00:42:29.041458 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea51000-0xfea51fff]
Jan 17 00:42:29.041712 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Jan 17 00:42:29.041889 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea52000-0xfea52fff]
Jan 17 00:42:29.044155 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Jan 17 00:42:29.044340 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea53000-0xfea53fff]
Jan 17 00:42:29.044533 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Jan 17 00:42:29.044701 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea54000-0xfea54fff]
Jan 17 00:42:29.044907 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Jan 17 00:42:29.045132 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea55000-0xfea55fff]
Jan 17 00:42:29.045318 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Jan 17 00:42:29.045485 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea56000-0xfea56fff]
Jan 17 00:42:29.045663 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Jan 17 00:42:29.045839 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea57000-0xfea57fff]
Jan 17 00:42:29.046148 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Jan 17 00:42:29.046322 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea58000-0xfea58fff]
Jan 17 00:42:29.046497 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Jan 17 00:42:29.046662 kernel: pci 0000:00:03.0: reg 0x10: [io 0xd0c0-0xd0df]
Jan 17 00:42:29.046826 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfea59000-0xfea59fff]
Jan 17 00:42:29.046988 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref]
Jan 17 00:42:29.050125 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfea00000-0xfea3ffff pref]
Jan 17 00:42:29.050318 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Jan 17 00:42:29.050507 kernel: pci 0000:00:04.0: reg 0x10: [io 0xd000-0xd07f]
Jan 17 00:42:29.050670 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfea5a000-0xfea5afff]
Jan 17 00:42:29.050832 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfd004000-0xfd007fff 64bit pref]
Jan 17 00:42:29.051013 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Jan 17 00:42:29.051210 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 17 00:42:29.051382 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Jan 17 00:42:29.051552 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xd0e0-0xd0ff]
Jan 17 00:42:29.051711 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea5b000-0xfea5bfff]
Jan 17 00:42:29.051898 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Jan 17 00:42:29.052061 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Jan 17 00:42:29.054327 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400
Jan 17 00:42:29.054520 kernel: pci 0000:01:00.0: reg 0x10: [mem 0xfda00000-0xfda000ff 64bit]
Jan 17 00:42:29.054725 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Jan 17 00:42:29.054889 kernel: pci 0000:00:02.0: bridge window [io 0xc000-0xcfff]
Jan 17 00:42:29.055051 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Jan 17 00:42:29.055243 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Jan 17 00:42:29.055429 kernel: pci_bus 0000:02: extended config space not accessible
Jan 17 00:42:29.055649 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000
Jan 17 00:42:29.055837 kernel: pci 0000:02:01.0: reg 0x10: [mem 0xfd800000-0xfd80000f]
Jan 17 00:42:29.056008 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Jan 17 00:42:29.056958 kernel: pci 0000:01:00.0: bridge window [io 0xc000-0xcfff]
Jan 17 00:42:29.057169 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Jan 17 00:42:29.057339 kernel: pci 0000:01:00.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Jan 17 00:42:29.057533 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330
Jan 17 00:42:29.057724 kernel: pci 0000:03:00.0: reg 0x10: [mem 0xfe800000-0xfe803fff 64bit]
Jan 17 00:42:29.057921 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Jan 17 00:42:29.058107 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Jan 17 00:42:29.058278 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Jan 17 00:42:29.058496 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00
Jan 17 00:42:29.058659 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref]
Jan 17 00:42:29.058851 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Jan 17 00:42:29.059015 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Jan 17 00:42:29.060833 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Jan 17 00:42:29.061011 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Jan 17 00:42:29.061213 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Jan 17 00:42:29.061391 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Jan 17 00:42:29.061560 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Jan 17 00:42:29.061725 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Jan 17 00:42:29.061890 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Jan 17 00:42:29.062066 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Jan 17 00:42:29.062293 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Jan 17 00:42:29.062455 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Jan 17 00:42:29.062640 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Jan 17 00:42:29.062799 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Jan 17 00:42:29.062957 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Jan 17 00:42:29.063193 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Jan 17 00:42:29.063356 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Jan 17 00:42:29.063524 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Jan 17 00:42:29.063561 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 17 00:42:29.063574 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 17 00:42:29.063587 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 17 00:42:29.063599 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 17 00:42:29.063618 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 17 00:42:29.063630 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 17 00:42:29.063642 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 17 00:42:29.063654 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 17 00:42:29.063671 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 17 00:42:29.063688 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 17 00:42:29.063712 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 17 00:42:29.063724 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 17 00:42:29.063743 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 17 00:42:29.063754 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 17 00:42:29.063778 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 17 00:42:29.063789 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 17 00:42:29.063801 kernel: iommu: Default domain type: Translated
Jan 17 00:42:29.063812 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 17 00:42:29.063827 kernel: PCI: Using ACPI for IRQ routing
Jan 17 00:42:29.063850 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 17 00:42:29.063861 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 17 00:42:29.063872 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff]
Jan 17 00:42:29.064044 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 17 00:42:29.064254 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 17 00:42:29.064427 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 17 00:42:29.064444 kernel: vgaarb: loaded
Jan 17 00:42:29.064468 kernel: clocksource: Switched to clocksource kvm-clock
Jan 17 00:42:29.064496 kernel: VFS: Disk quotas dquot_6.6.0
Jan 17 00:42:29.064509 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 17 00:42:29.064521 kernel: pnp: PnP ACPI init
Jan 17 00:42:29.064713 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Jan 17 00:42:29.064733 kernel: pnp: PnP ACPI: found 5 devices
Jan 17 00:42:29.064767 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 17 00:42:29.064779 kernel: NET: Registered PF_INET protocol family
Jan 17 00:42:29.064790 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 17 00:42:29.064810 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Jan 17 00:42:29.064834 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 17 00:42:29.064845 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 17 00:42:29.064856 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jan 17 00:42:29.064867 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Jan 17 00:42:29.064886 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 17 00:42:29.064898 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 17 00:42:29.064908 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 17 00:42:29.064919 kernel: NET: Registered PF_XDP protocol family
Jan 17 00:42:29.065148 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Jan 17 00:42:29.065312 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Jan 17 00:42:29.065494 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Jan 17 00:42:29.065673 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Jan 17 00:42:29.065856 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Jan 17 00:42:29.066021 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Jan 17 00:42:29.066268 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Jan 17 00:42:29.066443 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x1000-0x1fff]
Jan 17 00:42:29.066614 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x2000-0x2fff]
Jan 17 00:42:29.066776 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x3000-0x3fff]
Jan 17 00:42:29.066937 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x4000-0x4fff]
Jan 17 00:42:29.067124 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x5000-0x5fff]
Jan 17 00:42:29.067288 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x6000-0x6fff]
Jan 17 00:42:29.067482 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x7000-0x7fff]
Jan 17 00:42:29.067695 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Jan 17 00:42:29.067892 kernel: pci 0000:01:00.0: bridge window [io 0xc000-0xcfff]
Jan 17 00:42:29.068071 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Jan 17 00:42:29.068301 kernel: pci 0000:01:00.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Jan 17 00:42:29.068465 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Jan 17 00:42:29.068626 kernel: pci 0000:00:02.0: bridge window [io 0xc000-0xcfff]
Jan 17 00:42:29.068804 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Jan 17 00:42:29.068954 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Jan 17 00:42:29.069155 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Jan 17 00:42:29.069326 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x1fff]
Jan 17 00:42:29.069488 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Jan 17 00:42:29.069656 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Jan 17 00:42:29.069847 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Jan 17 00:42:29.070012 kernel: pci 0000:00:02.2: bridge window [io 0x2000-0x2fff]
Jan 17 00:42:29.070221 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Jan 17 00:42:29.070386 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Jan 17 00:42:29.070559 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Jan 17 00:42:29.070732 kernel: pci 0000:00:02.3: bridge window [io 0x3000-0x3fff]
Jan 17 00:42:29.070895 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Jan 17 00:42:29.071063 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Jan 17 00:42:29.071257 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Jan 17 00:42:29.071454 kernel: pci 0000:00:02.4: bridge window [io 0x4000-0x4fff]
Jan 17 00:42:29.071617 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Jan 17 00:42:29.071786 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Jan 17 00:42:29.071947 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Jan 17 00:42:29.072150 kernel: pci 0000:00:02.5: bridge window [io 0x5000-0x5fff]
Jan 17 00:42:29.072314 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Jan 17 00:42:29.072474 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Jan 17 00:42:29.072658 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Jan 17 00:42:29.072829 kernel: pci 0000:00:02.6: bridge window [io 0x6000-0x6fff]
Jan 17 00:42:29.072988 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Jan 17 00:42:29.073227 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Jan 17 00:42:29.073388 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Jan 17 00:42:29.073548 kernel: pci 0000:00:02.7: bridge window [io 0x7000-0x7fff]
Jan 17 00:42:29.073717 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Jan 17 00:42:29.073899 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Jan 17 00:42:29.074054 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 17 00:42:29.074275 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 17 00:42:29.074421 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 17 00:42:29.074578 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Jan 17 00:42:29.074737 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jan 17 00:42:29.074883 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window]
Jan 17 00:42:29.075072 kernel: pci_bus 0000:01: resource 0 [io 0xc000-0xcfff]
Jan 17 00:42:29.075253 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff]
Jan 17 00:42:29.075414 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff
64bit pref] Jan 17 00:42:29.075577 kernel: pci_bus 0000:02: resource 0 [io 0xc000-0xcfff] Jan 17 00:42:29.075762 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff] Jan 17 00:42:29.075921 kernel: pci_bus 0000:02: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref] Jan 17 00:42:29.076131 kernel: pci_bus 0000:03: resource 0 [io 0x1000-0x1fff] Jan 17 00:42:29.076291 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff] Jan 17 00:42:29.076444 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref] Jan 17 00:42:29.076618 kernel: pci_bus 0000:04: resource 0 [io 0x2000-0x2fff] Jan 17 00:42:29.076783 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff] Jan 17 00:42:29.076936 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref] Jan 17 00:42:29.077158 kernel: pci_bus 0000:05: resource 0 [io 0x3000-0x3fff] Jan 17 00:42:29.077313 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff] Jan 17 00:42:29.077473 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref] Jan 17 00:42:29.077643 kernel: pci_bus 0000:06: resource 0 [io 0x4000-0x4fff] Jan 17 00:42:29.077828 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff] Jan 17 00:42:29.077980 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref] Jan 17 00:42:29.078198 kernel: pci_bus 0000:07: resource 0 [io 0x5000-0x5fff] Jan 17 00:42:29.078355 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff] Jan 17 00:42:29.078507 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref] Jan 17 00:42:29.078707 kernel: pci_bus 0000:08: resource 0 [io 0x6000-0x6fff] Jan 17 00:42:29.078867 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff] Jan 17 00:42:29.079042 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref] Jan 17 00:42:29.079271 kernel: pci_bus 0000:09: resource 0 [io 0x7000-0x7fff] Jan 17 00:42:29.079424 kernel: pci_bus 0000:09: resource 1 [mem 
0xfdc00000-0xfddfffff] Jan 17 00:42:29.079575 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref] Jan 17 00:42:29.079595 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jan 17 00:42:29.079608 kernel: PCI: CLS 0 bytes, default 64 Jan 17 00:42:29.079629 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jan 17 00:42:29.079642 kernel: software IO TLB: mapped [mem 0x0000000079800000-0x000000007d800000] (64MB) Jan 17 00:42:29.079655 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jan 17 00:42:29.079668 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x285c3ee517e, max_idle_ns: 440795257231 ns Jan 17 00:42:29.079693 kernel: Initialise system trusted keyrings Jan 17 00:42:29.079705 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jan 17 00:42:29.079717 kernel: Key type asymmetric registered Jan 17 00:42:29.079729 kernel: Asymmetric key parser 'x509' registered Jan 17 00:42:29.079740 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 17 00:42:29.079757 kernel: io scheduler mq-deadline registered Jan 17 00:42:29.079769 kernel: io scheduler kyber registered Jan 17 00:42:29.079781 kernel: io scheduler bfq registered Jan 17 00:42:29.079946 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Jan 17 00:42:29.080186 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Jan 17 00:42:29.080351 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 17 00:42:29.080512 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Jan 17 00:42:29.080686 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Jan 17 00:42:29.080861 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 17 00:42:29.081030 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 
Jan 17 00:42:29.081229 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Jan 17 00:42:29.081392 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 17 00:42:29.081554 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Jan 17 00:42:29.081724 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Jan 17 00:42:29.081893 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 17 00:42:29.082063 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Jan 17 00:42:29.082263 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Jan 17 00:42:29.082426 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 17 00:42:29.082589 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Jan 17 00:42:29.082777 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Jan 17 00:42:29.082954 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 17 00:42:29.083560 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Jan 17 00:42:29.083727 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Jan 17 00:42:29.083912 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 17 00:42:29.084077 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Jan 17 00:42:29.084269 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Jan 17 00:42:29.084440 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 17 00:42:29.084460 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 17 
00:42:29.084474 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jan 17 00:42:29.084487 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jan 17 00:42:29.084499 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 17 00:42:29.084512 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 17 00:42:29.084525 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 17 00:42:29.084538 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 17 00:42:29.084558 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 17 00:42:29.084739 kernel: rtc_cmos 00:03: RTC can wake from S4 Jan 17 00:42:29.084772 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 17 00:42:29.084916 kernel: rtc_cmos 00:03: registered as rtc0 Jan 17 00:42:29.085083 kernel: rtc_cmos 00:03: setting system clock to 2026-01-17T00:42:28 UTC (1768610548) Jan 17 00:42:29.085297 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Jan 17 00:42:29.085315 kernel: intel_pstate: CPU model not supported Jan 17 00:42:29.085329 kernel: NET: Registered PF_INET6 protocol family Jan 17 00:42:29.085349 kernel: Segment Routing with IPv6 Jan 17 00:42:29.085370 kernel: In-situ OAM (IOAM) with IPv6 Jan 17 00:42:29.085382 kernel: NET: Registered PF_PACKET protocol family Jan 17 00:42:29.085395 kernel: Key type dns_resolver registered Jan 17 00:42:29.085407 kernel: IPI shorthand broadcast: enabled Jan 17 00:42:29.085429 kernel: sched_clock: Marking stable (1546003651, 221259968)->(1926358046, -159094427) Jan 17 00:42:29.085441 kernel: registered taskstats version 1 Jan 17 00:42:29.085454 kernel: Loading compiled-in X.509 certificates Jan 17 00:42:29.085466 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.119-flatcar: b6a847a3a522371f15b0d5425f12279a240740e4' Jan 17 00:42:29.085484 kernel: Key type .fscrypt registered Jan 17 00:42:29.085496 kernel: Key type fscrypt-provisioning registered Jan 17 
00:42:29.085508 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 17 00:42:29.085521 kernel: ima: Allocated hash algorithm: sha1 Jan 17 00:42:29.085533 kernel: ima: No architecture policies found Jan 17 00:42:29.085546 kernel: clk: Disabling unused clocks Jan 17 00:42:29.085559 kernel: Freeing unused kernel image (initmem) memory: 42884K Jan 17 00:42:29.085578 kernel: Write protecting the kernel read-only data: 36864k Jan 17 00:42:29.085590 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K Jan 17 00:42:29.085608 kernel: Run /init as init process Jan 17 00:42:29.085621 kernel: with arguments: Jan 17 00:42:29.085633 kernel: /init Jan 17 00:42:29.085645 kernel: with environment: Jan 17 00:42:29.085657 kernel: HOME=/ Jan 17 00:42:29.085669 kernel: TERM=linux Jan 17 00:42:29.085684 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 17 00:42:29.085700 systemd[1]: Detected virtualization kvm. Jan 17 00:42:29.085718 systemd[1]: Detected architecture x86-64. Jan 17 00:42:29.085731 systemd[1]: Running in initrd. Jan 17 00:42:29.085744 systemd[1]: No hostname configured, using default hostname. Jan 17 00:42:29.085757 systemd[1]: Hostname set to . Jan 17 00:42:29.085782 systemd[1]: Initializing machine ID from VM UUID. Jan 17 00:42:29.085795 systemd[1]: Queued start job for default target initrd.target. Jan 17 00:42:29.085807 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 00:42:29.085820 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Jan 17 00:42:29.085852 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 17 00:42:29.085865 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 17 00:42:29.085881 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 17 00:42:29.085895 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 17 00:42:29.085910 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 17 00:42:29.085924 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 17 00:42:29.085937 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 00:42:29.085964 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 17 00:42:29.085978 systemd[1]: Reached target paths.target - Path Units. Jan 17 00:42:29.085991 systemd[1]: Reached target slices.target - Slice Units. Jan 17 00:42:29.086004 systemd[1]: Reached target swap.target - Swaps. Jan 17 00:42:29.086018 systemd[1]: Reached target timers.target - Timer Units. Jan 17 00:42:29.086031 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 17 00:42:29.086044 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 17 00:42:29.086058 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 17 00:42:29.086076 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 17 00:42:29.086178 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 17 00:42:29.086194 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 17 00:42:29.086211 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Jan 17 00:42:29.086233 systemd[1]: Reached target sockets.target - Socket Units. Jan 17 00:42:29.086255 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 17 00:42:29.086278 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 17 00:42:29.086301 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 17 00:42:29.086323 systemd[1]: Starting systemd-fsck-usr.service... Jan 17 00:42:29.086356 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 17 00:42:29.086378 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 17 00:42:29.086406 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 00:42:29.086428 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 17 00:42:29.086505 systemd-journald[202]: Collecting audit messages is disabled. Jan 17 00:42:29.086565 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 00:42:29.086587 systemd[1]: Finished systemd-fsck-usr.service. Jan 17 00:42:29.086610 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 17 00:42:29.086640 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 17 00:42:29.086664 systemd-journald[202]: Journal started Jan 17 00:42:29.086706 systemd-journald[202]: Runtime Journal (/run/log/journal/a756102dd8294e34b25b5a680a4d6e32) is 4.7M, max 38.0M, 33.2M free. Jan 17 00:42:29.030471 systemd-modules-load[203]: Inserted module 'overlay' Jan 17 00:42:29.152664 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 17 00:42:29.152697 kernel: Bridge firewalling registered Jan 17 00:42:29.152729 systemd[1]: Started systemd-journald.service - Journal Service. 
Jan 17 00:42:29.094110 systemd-modules-load[203]: Inserted module 'br_netfilter' Jan 17 00:42:29.159424 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 17 00:42:29.160848 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:42:29.168316 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 17 00:42:29.174258 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 17 00:42:29.177318 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 17 00:42:29.185275 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 17 00:42:29.196233 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 17 00:42:29.208197 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 00:42:29.209393 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 00:42:29.226345 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 17 00:42:29.227396 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 00:42:29.238305 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Jan 17 00:42:29.240353 dracut-cmdline[234]: dracut-dracut-053 Jan 17 00:42:29.243770 dracut-cmdline[234]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=5950c0a3c50d11b7bc07a3e3bf06049ed0b5a605b5e0b52a981b78f1c63eeedd Jan 17 00:42:29.277572 systemd-resolved[240]: Positive Trust Anchors: Jan 17 00:42:29.277590 systemd-resolved[240]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 17 00:42:29.277632 systemd-resolved[240]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 17 00:42:29.285963 systemd-resolved[240]: Defaulting to hostname 'linux'. Jan 17 00:42:29.288970 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 17 00:42:29.290624 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 17 00:42:29.340141 kernel: SCSI subsystem initialized Jan 17 00:42:29.351123 kernel: Loading iSCSI transport class v2.0-870. 
Jan 17 00:42:29.366110 kernel: iscsi: registered transport (tcp) Jan 17 00:42:29.391382 kernel: iscsi: registered transport (qla4xxx) Jan 17 00:42:29.391444 kernel: QLogic iSCSI HBA Driver Jan 17 00:42:29.450457 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 17 00:42:29.460307 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 17 00:42:29.491467 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 17 00:42:29.493763 kernel: device-mapper: uevent: version 1.0.3 Jan 17 00:42:29.493783 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 17 00:42:29.541147 kernel: raid6: sse2x4 gen() 13394 MB/s Jan 17 00:42:29.559125 kernel: raid6: sse2x2 gen() 9216 MB/s Jan 17 00:42:29.577712 kernel: raid6: sse2x1 gen() 9859 MB/s Jan 17 00:42:29.577778 kernel: raid6: using algorithm sse2x4 gen() 13394 MB/s Jan 17 00:42:29.596743 kernel: raid6: .... xor() 7836 MB/s, rmw enabled Jan 17 00:42:29.596788 kernel: raid6: using ssse3x2 recovery algorithm Jan 17 00:42:29.626139 kernel: xor: automatically using best checksumming function avx Jan 17 00:42:29.815128 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 17 00:42:29.830829 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 17 00:42:29.839448 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 00:42:29.857853 systemd-udevd[420]: Using default interface naming scheme 'v255'. Jan 17 00:42:29.864867 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 00:42:29.874344 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 17 00:42:29.901590 dracut-pre-trigger[427]: rd.md=0: removing MD RAID activation Jan 17 00:42:29.940855 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Jan 17 00:42:29.947296 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 17 00:42:30.072389 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 00:42:30.082261 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 17 00:42:30.107869 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 17 00:42:30.111407 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 17 00:42:30.112574 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 00:42:30.116472 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 17 00:42:30.124495 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 17 00:42:30.156415 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 17 00:42:30.195109 kernel: virtio_blk virtio1: 2/0/0 default/read/poll queues Jan 17 00:42:30.211223 kernel: cryptd: max_cpu_qlen set to 1000 Jan 17 00:42:30.215103 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Jan 17 00:42:30.238336 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 17 00:42:30.238377 kernel: GPT:17805311 != 125829119 Jan 17 00:42:30.238405 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 17 00:42:30.238423 kernel: GPT:17805311 != 125829119 Jan 17 00:42:30.238442 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 17 00:42:30.238458 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 17 00:42:30.239095 kernel: AVX version of gcm_enc/dec engaged. Jan 17 00:42:30.240180 kernel: AES CTR mode by8 optimization enabled Jan 17 00:42:30.259171 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 17 00:42:30.259347 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 17 00:42:30.261948 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 17 00:42:30.264741 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 00:42:30.264911 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:42:30.268839 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 00:42:30.278094 kernel: ACPI: bus type USB registered Jan 17 00:42:30.281336 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 00:42:30.285975 kernel: libata version 3.00 loaded. Jan 17 00:42:30.290863 kernel: usbcore: registered new interface driver usbfs Jan 17 00:42:30.294575 kernel: usbcore: registered new interface driver hub Jan 17 00:42:30.294615 kernel: ahci 0000:00:1f.2: version 3.0 Jan 17 00:42:30.295345 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 17 00:42:30.299475 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Jan 17 00:42:30.299728 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 17 00:42:30.304146 kernel: usbcore: registered new device driver usb Jan 17 00:42:30.306101 kernel: scsi host0: ahci Jan 17 00:42:30.310102 kernel: scsi host1: ahci Jan 17 00:42:30.313165 kernel: scsi host2: ahci Jan 17 00:42:30.313389 kernel: scsi host3: ahci Jan 17 00:42:30.317103 kernel: scsi host4: ahci Jan 17 00:42:30.339120 kernel: scsi host5: ahci Jan 17 00:42:30.339413 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 38 Jan 17 00:42:30.339436 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 38 Jan 17 00:42:30.339454 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 38 Jan 17 00:42:30.339471 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 38 Jan 17 00:42:30.339487 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 38 Jan 17 00:42:30.339552 kernel: 
ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 38 Jan 17 00:42:30.380861 kernel: BTRFS: device fsid a67b5ac0-cdfd-426d-9386-e029282f433a devid 1 transid 33 /dev/vda3 scanned by (udev-worker) (466) Jan 17 00:42:30.385098 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (478) Jan 17 00:42:30.394860 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 17 00:42:30.444211 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:42:30.460950 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 17 00:42:30.466749 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 17 00:42:30.467527 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jan 17 00:42:30.475098 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 17 00:42:30.493368 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 17 00:42:30.505408 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 17 00:42:30.510676 disk-uuid[564]: Primary Header is updated. Jan 17 00:42:30.510676 disk-uuid[564]: Secondary Entries is updated. Jan 17 00:42:30.510676 disk-uuid[564]: Secondary Header is updated. Jan 17 00:42:30.517111 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 17 00:42:30.525117 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 17 00:42:30.561754 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 17 00:42:30.659280 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 17 00:42:30.659346 kernel: ata3: SATA link down (SStatus 0 SControl 300) Jan 17 00:42:30.659377 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 17 00:42:30.659394 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 17 00:42:30.659411 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 17 00:42:30.659427 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 17 00:42:30.683676 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Jan 17 00:42:30.685432 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1 Jan 17 00:42:30.689118 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Jan 17 00:42:30.693103 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Jan 17 00:42:30.693332 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2 Jan 17 00:42:30.695117 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed Jan 17 00:42:30.695523 kernel: hub 1-0:1.0: USB hub found Jan 17 00:42:30.697709 kernel: hub 1-0:1.0: 4 ports detected Jan 17 00:42:30.698161 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. 
Jan 17 00:42:30.701457 kernel: hub 2-0:1.0: USB hub found Jan 17 00:42:30.701705 kernel: hub 2-0:1.0: 4 ports detected Jan 17 00:42:30.935111 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Jan 17 00:42:31.076106 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 17 00:42:31.081254 kernel: usbcore: registered new interface driver usbhid Jan 17 00:42:31.081296 kernel: usbhid: USB HID core driver Jan 17 00:42:31.089208 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 Jan 17 00:42:31.089247 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0 Jan 17 00:42:31.528153 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 17 00:42:31.528546 disk-uuid[565]: The operation has completed successfully. Jan 17 00:42:31.585530 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 17 00:42:31.585709 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 17 00:42:31.603280 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 17 00:42:31.610762 sh[586]: Success Jan 17 00:42:31.627096 kernel: device-mapper: verity: sha256 using implementation "sha256-avx" Jan 17 00:42:31.687642 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 17 00:42:31.696201 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 17 00:42:31.700451 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Jan 17 00:42:31.728279 kernel: BTRFS info (device dm-0): first mount of filesystem a67b5ac0-cdfd-426d-9386-e029282f433a Jan 17 00:42:31.728329 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 17 00:42:31.730461 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 17 00:42:31.732633 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 17 00:42:31.734201 kernel: BTRFS info (device dm-0): using free space tree Jan 17 00:42:31.744847 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 17 00:42:31.746371 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 17 00:42:31.751257 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 17 00:42:31.755245 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 17 00:42:31.774698 kernel: BTRFS info (device vda6): first mount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0 Jan 17 00:42:31.774760 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 17 00:42:31.776315 kernel: BTRFS info (device vda6): using free space tree Jan 17 00:42:31.787107 kernel: BTRFS info (device vda6): auto enabling async discard Jan 17 00:42:31.799229 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 17 00:42:31.802755 kernel: BTRFS info (device vda6): last unmount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0 Jan 17 00:42:31.812005 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 17 00:42:31.824095 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 17 00:42:31.918597 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 17 00:42:31.926301 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Jan 17 00:42:31.973218 systemd-networkd[767]: lo: Link UP Jan 17 00:42:31.973231 systemd-networkd[767]: lo: Gained carrier Jan 17 00:42:31.977412 systemd-networkd[767]: Enumeration completed Jan 17 00:42:31.978685 systemd-networkd[767]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 00:42:31.978691 systemd-networkd[767]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 17 00:42:31.981232 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 17 00:42:31.982048 systemd[1]: Reached target network.target - Network. Jan 17 00:42:31.985963 systemd-networkd[767]: eth0: Link UP Jan 17 00:42:31.985968 systemd-networkd[767]: eth0: Gained carrier Jan 17 00:42:31.985979 systemd-networkd[767]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 00:42:32.005459 ignition[684]: Ignition 2.19.0 Jan 17 00:42:32.005499 ignition[684]: Stage: fetch-offline Jan 17 00:42:32.008185 systemd-networkd[767]: eth0: DHCPv4 address 10.243.73.150/30, gateway 10.243.73.149 acquired from 10.243.73.149 Jan 17 00:42:32.005580 ignition[684]: no configs at "/usr/lib/ignition/base.d" Jan 17 00:42:32.009232 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Jan 17 00:42:32.005599 ignition[684]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 17 00:42:32.005741 ignition[684]: parsed url from cmdline: ""
Jan 17 00:42:32.005748 ignition[684]: no config URL provided
Jan 17 00:42:32.005758 ignition[684]: reading system config file "/usr/lib/ignition/user.ign"
Jan 17 00:42:32.005775 ignition[684]: no config at "/usr/lib/ignition/user.ign"
Jan 17 00:42:32.005784 ignition[684]: failed to fetch config: resource requires networking
Jan 17 00:42:32.006091 ignition[684]: Ignition finished successfully
Jan 17 00:42:32.018301 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 17 00:42:32.068667 ignition[775]: Ignition 2.19.0
Jan 17 00:42:32.068686 ignition[775]: Stage: fetch
Jan 17 00:42:32.068985 ignition[775]: no configs at "/usr/lib/ignition/base.d"
Jan 17 00:42:32.069006 ignition[775]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 17 00:42:32.069236 ignition[775]: parsed url from cmdline: ""
Jan 17 00:42:32.069244 ignition[775]: no config URL provided
Jan 17 00:42:32.069254 ignition[775]: reading system config file "/usr/lib/ignition/user.ign"
Jan 17 00:42:32.069271 ignition[775]: no config at "/usr/lib/ignition/user.ign"
Jan 17 00:42:32.071225 ignition[775]: config drive ("/dev/disk/by-label/config-2") not found. Waiting...
Jan 17 00:42:32.071261 ignition[775]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting...
Jan 17 00:42:32.072169 ignition[775]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1
Jan 17 00:42:32.094157 ignition[775]: GET result: OK
Jan 17 00:42:32.094874 ignition[775]: parsing config with SHA512: ddaf0374328efdfc540fc5b1d33583f323d27e4901cd6623f98949c7bb6f2ffe6397c5b1181ac87adf12bd668b1e71fb14b3e775339a57c1c4623a857475d462
Jan 17 00:42:32.102103 unknown[775]: fetched base config from "system"
Jan 17 00:42:32.102154 unknown[775]: fetched base config from "system"
Jan 17 00:42:32.103718 ignition[775]: fetch: fetch complete
Jan 17 00:42:32.102167 unknown[775]: fetched user config from "openstack"
Jan 17 00:42:32.103727 ignition[775]: fetch: fetch passed
Jan 17 00:42:32.105851 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 17 00:42:32.103828 ignition[775]: Ignition finished successfully
Jan 17 00:42:32.121252 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 17 00:42:32.139096 ignition[782]: Ignition 2.19.0
Jan 17 00:42:32.139117 ignition[782]: Stage: kargs
Jan 17 00:42:32.139357 ignition[782]: no configs at "/usr/lib/ignition/base.d"
Jan 17 00:42:32.139378 ignition[782]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 17 00:42:32.140521 ignition[782]: kargs: kargs passed
Jan 17 00:42:32.143161 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 17 00:42:32.140591 ignition[782]: Ignition finished successfully
Jan 17 00:42:32.150269 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 17 00:42:32.173314 ignition[788]: Ignition 2.19.0
Jan 17 00:42:32.173336 ignition[788]: Stage: disks
Jan 17 00:42:32.173585 ignition[788]: no configs at "/usr/lib/ignition/base.d"
Jan 17 00:42:32.173605 ignition[788]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 17 00:42:32.174710 ignition[788]: disks: disks passed
Jan 17 00:42:32.174790 ignition[788]: Ignition finished successfully
Jan 17 00:42:32.178150 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 17 00:42:32.179823 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 17 00:42:32.181250 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 17 00:42:32.182723 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 17 00:42:32.184347 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 17 00:42:32.185809 systemd[1]: Reached target basic.target - Basic System.
Jan 17 00:42:32.193264 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 17 00:42:32.215124 systemd-fsck[797]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Jan 17 00:42:32.219284 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 17 00:42:32.229185 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 17 00:42:32.400259 kernel: EXT4-fs (vda9): mounted filesystem ab055cfb-d92d-4784-aa05-26ea844796bc r/w with ordered data mode. Quota mode: none.
Jan 17 00:42:32.401189 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 17 00:42:32.402557 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 17 00:42:32.417224 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 17 00:42:32.422178 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 17 00:42:32.424093 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 17 00:42:32.428264 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent...
Jan 17 00:42:32.431411 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (805)
Jan 17 00:42:32.431934 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 17 00:42:32.433331 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 17 00:42:32.437256 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 17 00:42:32.443528 kernel: BTRFS info (device vda6): first mount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0
Jan 17 00:42:32.443554 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 17 00:42:32.443572 kernel: BTRFS info (device vda6): using free space tree
Jan 17 00:42:32.450856 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 17 00:42:32.452659 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 17 00:42:32.458162 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 17 00:42:32.542202 initrd-setup-root[834]: cut: /sysroot/etc/passwd: No such file or directory
Jan 17 00:42:32.551166 initrd-setup-root[841]: cut: /sysroot/etc/group: No such file or directory
Jan 17 00:42:32.563040 initrd-setup-root[848]: cut: /sysroot/etc/shadow: No such file or directory
Jan 17 00:42:32.569520 initrd-setup-root[855]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 17 00:42:32.676845 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 17 00:42:32.684190 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 17 00:42:32.687255 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 17 00:42:32.700122 kernel: BTRFS info (device vda6): last unmount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0
Jan 17 00:42:32.727276 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 17 00:42:32.740969 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 17 00:42:32.756251 ignition[922]: INFO : Ignition 2.19.0
Jan 17 00:42:32.756251 ignition[922]: INFO : Stage: mount
Jan 17 00:42:32.758853 ignition[922]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 17 00:42:32.758853 ignition[922]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 17 00:42:32.758853 ignition[922]: INFO : mount: mount passed
Jan 17 00:42:32.758853 ignition[922]: INFO : Ignition finished successfully
Jan 17 00:42:32.760048 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 17 00:42:33.603307 systemd-networkd[767]: eth0: Gained IPv6LL
Jan 17 00:42:35.111862 systemd-networkd[767]: eth0: Ignoring DHCPv6 address 2a02:1348:17c:d265:24:19ff:fef3:4996/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:17c:d265:24:19ff:fef3:4996/64 assigned by NDisc.
Jan 17 00:42:35.111884 systemd-networkd[767]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no.
Jan 17 00:42:39.611278 coreos-metadata[807]: Jan 17 00:42:39.611 WARN failed to locate config-drive, using the metadata service API instead
Jan 17 00:42:39.631753 coreos-metadata[807]: Jan 17 00:42:39.631 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Jan 17 00:42:39.648624 coreos-metadata[807]: Jan 17 00:42:39.648 INFO Fetch successful
Jan 17 00:42:39.649526 coreos-metadata[807]: Jan 17 00:42:39.649 INFO wrote hostname srv-jwpu3.gb1.brightbox.com to /sysroot/etc/hostname
Jan 17 00:42:39.651494 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully.
Jan 17 00:42:39.651685 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent.
Jan 17 00:42:39.658185 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 17 00:42:39.676353 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 17 00:42:39.694093 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (940)
Jan 17 00:42:39.697132 kernel: BTRFS info (device vda6): first mount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0
Jan 17 00:42:39.697177 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 17 00:42:39.699862 kernel: BTRFS info (device vda6): using free space tree
Jan 17 00:42:39.704106 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 17 00:42:39.706845 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 17 00:42:39.746927 ignition[958]: INFO : Ignition 2.19.0
Jan 17 00:42:39.748654 ignition[958]: INFO : Stage: files
Jan 17 00:42:39.749584 ignition[958]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 17 00:42:39.751095 ignition[958]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 17 00:42:39.752686 ignition[958]: DEBUG : files: compiled without relabeling support, skipping
Jan 17 00:42:39.753634 ignition[958]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 17 00:42:39.754713 ignition[958]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 17 00:42:39.759262 ignition[958]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 17 00:42:39.760565 ignition[958]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 17 00:42:39.762245 unknown[958]: wrote ssh authorized keys file for user: core
Jan 17 00:42:39.763407 ignition[958]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 17 00:42:39.765733 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Jan 17 00:42:39.767736 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Jan 17 00:42:39.981867 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 17 00:42:40.249653 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Jan 17 00:42:40.249653 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 17 00:42:40.257621 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Jan 17 00:42:40.517707 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jan 17 00:42:40.826676 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 17 00:42:40.829211 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jan 17 00:42:40.829211 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jan 17 00:42:40.829211 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 17 00:42:40.832613 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 17 00:42:40.832613 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 17 00:42:40.832613 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 17 00:42:40.832613 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 17 00:42:40.832613 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 17 00:42:40.832613 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 17 00:42:40.832613 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 17 00:42:40.832613 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jan 17 00:42:40.832613 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jan 17 00:42:40.832613 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jan 17 00:42:40.832613 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Jan 17 00:42:41.080135 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jan 17 00:42:42.227114 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jan 17 00:42:42.227114 ignition[958]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jan 17 00:42:42.230748 ignition[958]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 17 00:42:42.232160 ignition[958]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 17 00:42:42.232160 ignition[958]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jan 17 00:42:42.232160 ignition[958]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Jan 17 00:42:42.232160 ignition[958]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Jan 17 00:42:42.238426 ignition[958]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 17 00:42:42.238426 ignition[958]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 17 00:42:42.238426 ignition[958]: INFO : files: files passed
Jan 17 00:42:42.238426 ignition[958]: INFO : Ignition finished successfully
Jan 17 00:42:42.237700 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 17 00:42:42.248415 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 17 00:42:42.254262 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 17 00:42:42.262439 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 17 00:42:42.263545 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 17 00:42:42.295958 initrd-setup-root-after-ignition[987]: grep: /sysroot/etc/flatcar/enabled-sysext.conf
Jan 17 00:42:42.295958 initrd-setup-root-after-ignition[991]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 17 00:42:42.298739 initrd-setup-root-after-ignition[987]: : No such file or directory
Jan 17 00:42:42.298739 initrd-setup-root-after-ignition[987]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 17 00:42:42.300441 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 17 00:42:42.302759 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 17 00:42:42.309350 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 17 00:42:42.359053 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 17 00:42:42.359411 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 17 00:42:42.361569 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 17 00:42:42.362733 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 17 00:42:42.364397 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 17 00:42:42.376293 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 17 00:42:42.394133 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 17 00:42:42.402320 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 17 00:42:42.418843 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 17 00:42:42.419848 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 17 00:42:42.421451 systemd[1]: Stopped target timers.target - Timer Units.
Jan 17 00:42:42.422888 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 17 00:42:42.423084 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 17 00:42:42.425974 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 17 00:42:42.426962 systemd[1]: Stopped target basic.target - Basic System.
Jan 17 00:42:42.428290 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 17 00:42:42.429554 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 17 00:42:42.431089 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 17 00:42:42.432644 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 17 00:42:42.434159 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 17 00:42:42.435653 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 17 00:42:42.437373 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 17 00:42:42.438822 systemd[1]: Stopped target swap.target - Swaps.
Jan 17 00:42:42.440150 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 17 00:42:42.440324 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 17 00:42:42.442140 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 17 00:42:42.443116 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 17 00:42:42.444531 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 17 00:42:42.444895 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 17 00:42:42.445999 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 17 00:42:42.446277 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 17 00:42:42.448091 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 17 00:42:42.448260 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 17 00:42:42.450161 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 17 00:42:42.450316 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 17 00:42:42.458384 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 17 00:42:42.471511 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 17 00:42:42.472257 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 17 00:42:42.472576 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 17 00:42:42.476464 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 17 00:42:42.476729 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 17 00:42:42.485794 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 17 00:42:42.485959 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 17 00:42:42.533733 ignition[1011]: INFO : Ignition 2.19.0
Jan 17 00:42:42.533733 ignition[1011]: INFO : Stage: umount
Jan 17 00:42:42.535839 ignition[1011]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 17 00:42:42.535839 ignition[1011]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 17 00:42:42.535839 ignition[1011]: INFO : umount: umount passed
Jan 17 00:42:42.535839 ignition[1011]: INFO : Ignition finished successfully
Jan 17 00:42:42.537963 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 17 00:42:42.538189 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 17 00:42:42.539991 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 17 00:42:42.540243 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 17 00:42:42.541291 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 17 00:42:42.541362 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 17 00:42:42.542599 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 17 00:42:42.542693 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 17 00:42:42.544088 systemd[1]: Stopped target network.target - Network.
Jan 17 00:42:42.545362 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 17 00:42:42.545461 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 17 00:42:42.546866 systemd[1]: Stopped target paths.target - Path Units.
Jan 17 00:42:42.548107 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 17 00:42:42.550180 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 17 00:42:42.550994 systemd[1]: Stopped target slices.target - Slice Units.
Jan 17 00:42:42.552250 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 17 00:42:42.553735 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 17 00:42:42.553839 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 17 00:42:42.555218 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 17 00:42:42.555315 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 17 00:42:42.556457 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 17 00:42:42.556541 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 17 00:42:42.558133 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 17 00:42:42.558236 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 17 00:42:42.560139 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 17 00:42:42.562442 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 17 00:42:42.566303 systemd-networkd[767]: eth0: DHCPv6 lease lost
Jan 17 00:42:42.570248 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 17 00:42:42.570555 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 17 00:42:42.572343 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 17 00:42:42.572405 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 17 00:42:42.577332 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 17 00:42:42.578377 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 17 00:42:42.578456 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 17 00:42:42.581665 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 17 00:42:42.584872 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 17 00:42:42.585098 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 17 00:42:42.593204 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 17 00:42:42.593409 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 17 00:42:42.597757 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 17 00:42:42.598034 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 17 00:42:42.599139 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 17 00:42:42.599236 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 17 00:42:42.601451 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 17 00:42:42.601677 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 17 00:42:42.611600 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 17 00:42:42.611780 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 17 00:42:42.614322 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 17 00:42:42.614436 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 17 00:42:42.615910 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 17 00:42:42.615993 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 17 00:42:42.617502 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 17 00:42:42.617572 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 17 00:42:42.619062 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 17 00:42:42.619169 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 00:42:42.630389 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 17 00:42:42.633458 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 17 00:42:42.633561 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 17 00:42:42.635161 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jan 17 00:42:42.635260 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 17 00:42:42.636623 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 17 00:42:42.636692 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 17 00:42:42.639497 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 17 00:42:42.639612 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 00:42:42.641758 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 17 00:42:42.641928 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 17 00:42:42.648004 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 17 00:42:42.648214 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 17 00:42:42.702597 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 17 00:42:42.761843 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 17 00:42:42.762118 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 17 00:42:42.764376 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 17 00:42:42.765170 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 17 00:42:42.765266 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 17 00:42:42.777974 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 17 00:42:42.790853 systemd[1]: Switching root.
Jan 17 00:42:42.838021 systemd-journald[202]: Journal stopped
Jan 17 00:42:44.484306 systemd-journald[202]: Received SIGTERM from PID 1 (systemd).
Jan 17 00:42:44.484504 kernel: SELinux: policy capability network_peer_controls=1
Jan 17 00:42:44.484548 kernel: SELinux: policy capability open_perms=1
Jan 17 00:42:44.484589 kernel: SELinux: policy capability extended_socket_class=1
Jan 17 00:42:44.484641 kernel: SELinux: policy capability always_check_network=0
Jan 17 00:42:44.484678 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 17 00:42:44.484710 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 17 00:42:44.484745 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 17 00:42:44.484774 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 17 00:42:44.484814 kernel: audit: type=1403 audit(1768610563.146:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 17 00:42:44.484852 systemd[1]: Successfully loaded SELinux policy in 53.983ms.
Jan 17 00:42:44.484918 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 24.012ms.
Jan 17 00:42:44.484941 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 17 00:42:44.484975 systemd[1]: Detected virtualization kvm.
Jan 17 00:42:44.484997 systemd[1]: Detected architecture x86-64.
Jan 17 00:42:44.485037 systemd[1]: Detected first boot.
Jan 17 00:42:44.485065 systemd[1]: Hostname set to .
Jan 17 00:42:44.485128 systemd[1]: Initializing machine ID from VM UUID.
Jan 17 00:42:44.485150 zram_generator::config[1054]: No configuration found.
Jan 17 00:42:44.485191 systemd[1]: Populated /etc with preset unit settings.
Jan 17 00:42:44.485220 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 17 00:42:44.485256 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 17 00:42:44.485285 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 17 00:42:44.485320 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 17 00:42:44.485342 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 17 00:42:44.485362 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 17 00:42:44.485390 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 17 00:42:44.485425 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 17 00:42:44.485450 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 17 00:42:44.485492 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 17 00:42:44.485523 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 17 00:42:44.485544 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 17 00:42:44.485563 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 17 00:42:44.485583 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 17 00:42:44.485602 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 17 00:42:44.485640 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 17 00:42:44.485670 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 17 00:42:44.485691 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 17 00:42:44.485740 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 17 00:42:44.485787 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 17 00:42:44.485810 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 17 00:42:44.485839 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 17 00:42:44.485861 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 17 00:42:44.485909 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 17 00:42:44.485955 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 17 00:42:44.485976 systemd[1]: Reached target slices.target - Slice Units.
Jan 17 00:42:44.486004 systemd[1]: Reached target swap.target - Swaps.
Jan 17 00:42:44.486032 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 17 00:42:44.486060 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 17 00:42:44.486144 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 17 00:42:44.486186 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 17 00:42:44.486209 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 00:42:44.486229 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 17 00:42:44.486248 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 17 00:42:44.486267 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 17 00:42:44.486286 systemd[1]: Mounting media.mount - External Media Directory... Jan 17 00:42:44.486306 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:42:44.486325 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 17 00:42:44.486345 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 17 00:42:44.486378 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 17 00:42:44.486418 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 17 00:42:44.486441 systemd[1]: Reached target machines.target - Containers. Jan 17 00:42:44.486461 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 17 00:42:44.486481 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 00:42:44.486501 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 17 00:42:44.486520 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 17 00:42:44.486538 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 00:42:44.486571 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
Jan 17 00:42:44.486592 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 00:42:44.486612 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 17 00:42:44.486631 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 00:42:44.486650 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 17 00:42:44.486678 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 17 00:42:44.486699 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 17 00:42:44.486729 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 17 00:42:44.486752 systemd[1]: Stopped systemd-fsck-usr.service. Jan 17 00:42:44.486787 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 17 00:42:44.486809 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 17 00:42:44.486842 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 17 00:42:44.486862 kernel: fuse: init (API version 7.39) Jan 17 00:42:44.486881 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 17 00:42:44.486900 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 17 00:42:44.486919 systemd[1]: verity-setup.service: Deactivated successfully. Jan 17 00:42:44.486937 systemd[1]: Stopped verity-setup.service. Jan 17 00:42:44.486957 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:42:44.486991 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 17 00:42:44.487012 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. 
Jan 17 00:42:44.487043 systemd[1]: Mounted media.mount - External Media Directory. Jan 17 00:42:44.487064 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 17 00:42:44.487141 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 17 00:42:44.487163 kernel: loop: module loaded Jan 17 00:42:44.487182 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 17 00:42:44.487202 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 00:42:44.487249 systemd-journald[1147]: Collecting audit messages is disabled. Jan 17 00:42:44.487337 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 17 00:42:44.487361 systemd-journald[1147]: Journal started Jan 17 00:42:44.487410 systemd-journald[1147]: Runtime Journal (/run/log/journal/a756102dd8294e34b25b5a680a4d6e32) is 4.7M, max 38.0M, 33.2M free. Jan 17 00:42:44.017319 systemd[1]: Queued start job for default target multi-user.target. Jan 17 00:42:44.040950 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 17 00:42:44.042036 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 17 00:42:44.490088 systemd[1]: Started systemd-journald.service - Journal Service. Jan 17 00:42:44.494606 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 17 00:42:44.494922 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 17 00:42:44.496219 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 00:42:44.496444 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 00:42:44.497545 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 00:42:44.497772 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 00:42:44.500063 systemd[1]: modprobe@fuse.service: Deactivated successfully. 
Jan 17 00:42:44.500335 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 17 00:42:44.502541 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 00:42:44.502778 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 00:42:44.503972 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 17 00:42:44.505057 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 17 00:42:44.508448 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 17 00:42:44.512115 kernel: ACPI: bus type drm_connector registered Jan 17 00:42:44.517909 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 17 00:42:44.518240 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 17 00:42:44.535631 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 17 00:42:44.548151 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 17 00:42:44.568247 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 17 00:42:44.570433 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 17 00:42:44.570497 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 17 00:42:44.573409 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 17 00:42:44.580405 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 17 00:42:44.588302 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 17 00:42:44.589316 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 00:42:44.594316 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... 
Jan 17 00:42:44.610055 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 17 00:42:44.611361 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 17 00:42:44.615199 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 17 00:42:44.616827 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 17 00:42:44.621316 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 17 00:42:44.636202 systemd-journald[1147]: Time spent on flushing to /var/log/journal/a756102dd8294e34b25b5a680a4d6e32 is 134.087ms for 1143 entries. Jan 17 00:42:44.636202 systemd-journald[1147]: System Journal (/var/log/journal/a756102dd8294e34b25b5a680a4d6e32) is 8.0M, max 584.8M, 576.8M free. Jan 17 00:42:44.893284 systemd-journald[1147]: Received client request to flush runtime journal. Jan 17 00:42:44.893361 kernel: loop0: detected capacity change from 0 to 140768 Jan 17 00:42:44.893397 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 17 00:42:44.893429 kernel: loop1: detected capacity change from 0 to 224512 Jan 17 00:42:44.633285 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 17 00:42:44.666420 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 17 00:42:44.669991 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 17 00:42:44.670952 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 17 00:42:44.672112 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 17 00:42:44.729594 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. 
Jan 17 00:42:44.730586 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 17 00:42:44.742377 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 17 00:42:44.823672 systemd-tmpfiles[1188]: ACLs are not supported, ignoring. Jan 17 00:42:44.823692 systemd-tmpfiles[1188]: ACLs are not supported, ignoring. Jan 17 00:42:44.839252 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 17 00:42:44.841538 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 17 00:42:44.842396 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 17 00:42:44.844310 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 17 00:42:44.855000 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 17 00:42:44.904619 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 17 00:42:44.921904 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 00:42:44.933299 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 17 00:42:44.948097 kernel: loop2: detected capacity change from 0 to 8 Jan 17 00:42:44.950463 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 17 00:42:44.962304 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 17 00:42:44.984137 udevadm[1208]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 17 00:42:44.997107 kernel: loop3: detected capacity change from 0 to 142488 Jan 17 00:42:45.056541 systemd-tmpfiles[1211]: ACLs are not supported, ignoring. Jan 17 00:42:45.056570 systemd-tmpfiles[1211]: ACLs are not supported, ignoring. 
Jan 17 00:42:45.066907 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 00:42:45.099122 kernel: loop4: detected capacity change from 0 to 140768 Jan 17 00:42:45.135108 kernel: loop5: detected capacity change from 0 to 224512 Jan 17 00:42:45.231591 kernel: loop6: detected capacity change from 0 to 8 Jan 17 00:42:45.237094 kernel: loop7: detected capacity change from 0 to 142488 Jan 17 00:42:45.291192 (sd-merge)[1217]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'. Jan 17 00:42:45.304828 (sd-merge)[1217]: Merged extensions into '/usr'. Jan 17 00:42:45.320479 systemd[1]: Reloading requested from client PID 1187 ('systemd-sysext') (unit systemd-sysext.service)... Jan 17 00:42:45.320513 systemd[1]: Reloading... Jan 17 00:42:45.451122 zram_generator::config[1243]: No configuration found. Jan 17 00:42:45.723169 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 00:42:45.793445 systemd[1]: Reloading finished in 472 ms. Jan 17 00:42:45.828390 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 17 00:42:45.841325 systemd[1]: Starting ensure-sysext.service... Jan 17 00:42:45.844214 ldconfig[1182]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 17 00:42:45.844863 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 17 00:42:45.847158 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 17 00:42:45.871185 systemd[1]: Reloading requested from client PID 1298 ('systemctl') (unit ensure-sysext.service)... Jan 17 00:42:45.873149 systemd[1]: Reloading... 
Jan 17 00:42:46.062012 systemd-tmpfiles[1299]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 17 00:42:46.062654 systemd-tmpfiles[1299]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 17 00:42:46.069169 systemd-tmpfiles[1299]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 17 00:42:46.069750 systemd-tmpfiles[1299]: ACLs are not supported, ignoring. Jan 17 00:42:46.070012 systemd-tmpfiles[1299]: ACLs are not supported, ignoring. Jan 17 00:42:46.085833 systemd-tmpfiles[1299]: Detected autofs mount point /boot during canonicalization of boot. Jan 17 00:42:46.085880 systemd-tmpfiles[1299]: Skipping /boot Jan 17 00:42:46.106222 systemd-tmpfiles[1299]: Detected autofs mount point /boot during canonicalization of boot. Jan 17 00:42:46.106449 systemd-tmpfiles[1299]: Skipping /boot Jan 17 00:42:46.134108 zram_generator::config[1327]: No configuration found. Jan 17 00:42:46.330205 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 00:42:46.398227 systemd[1]: Reloading finished in 524 ms. Jan 17 00:42:46.422079 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 17 00:42:46.431040 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 00:42:46.447561 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 17 00:42:46.452448 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 17 00:42:46.464416 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 17 00:42:46.471410 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Jan 17 00:42:46.476041 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 00:42:46.484520 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 17 00:42:46.488825 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:42:46.489598 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 00:42:46.497236 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 00:42:46.506463 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 00:42:46.512662 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 00:42:46.514619 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 00:42:46.514874 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:42:46.530923 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 17 00:42:46.534750 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:42:46.535024 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 00:42:46.535300 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 00:42:46.535424 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:42:46.537128 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Jan 17 00:42:46.537890 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 00:42:46.553590 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 17 00:42:46.568162 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 00:42:46.568428 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 00:42:46.571627 systemd[1]: Finished ensure-sysext.service. Jan 17 00:42:46.577105 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:42:46.578393 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 00:42:46.590976 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 00:42:46.597520 augenrules[1414]: No rules Jan 17 00:42:46.598633 systemd-udevd[1395]: Using default interface naming scheme 'v255'. Jan 17 00:42:46.602469 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 17 00:42:46.603380 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 00:42:46.603481 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 17 00:42:46.607823 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 17 00:42:46.615263 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 17 00:42:46.630783 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:42:46.634587 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. 
Jan 17 00:42:46.637730 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 17 00:42:46.639162 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 00:42:46.639694 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 00:42:46.642620 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 00:42:46.642881 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 00:42:46.644177 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 17 00:42:46.645531 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 17 00:42:46.655735 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 17 00:42:46.688828 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 17 00:42:46.690884 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 00:42:46.692307 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 17 00:42:46.695129 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 17 00:42:46.709436 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 17 00:42:46.710218 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 17 00:42:46.838951 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 17 00:42:46.879038 systemd-networkd[1438]: lo: Link UP Jan 17 00:42:46.880323 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 17 00:42:46.881558 systemd-networkd[1438]: lo: Gained carrier Jan 17 00:42:46.881942 systemd[1]: Reached target time-set.target - System Time Set. 
Jan 17 00:42:46.886308 systemd-networkd[1438]: Enumeration completed Jan 17 00:42:46.886504 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 17 00:42:46.889952 systemd-timesyncd[1420]: No network connectivity, watching for changes. Jan 17 00:42:46.895255 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 17 00:42:46.924939 systemd-resolved[1394]: Positive Trust Anchors: Jan 17 00:42:46.926121 systemd-resolved[1394]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 17 00:42:46.926248 systemd-resolved[1394]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 17 00:42:46.938941 systemd-resolved[1394]: Using system hostname 'srv-jwpu3.gb1.brightbox.com'. Jan 17 00:42:46.944333 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 17 00:42:46.945626 systemd[1]: Reached target network.target - Network. Jan 17 00:42:46.947313 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 17 00:42:46.958140 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1441) Jan 17 00:42:47.004603 systemd-networkd[1438]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 00:42:47.005019 systemd-networkd[1438]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jan 17 00:42:47.009254 systemd-networkd[1438]: eth0: Link UP Jan 17 00:42:47.009271 systemd-networkd[1438]: eth0: Gained carrier Jan 17 00:42:47.009292 systemd-networkd[1438]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 00:42:47.051105 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 17 00:42:47.057200 systemd-networkd[1438]: eth0: DHCPv4 address 10.243.73.150/30, gateway 10.243.73.149 acquired from 10.243.73.149 Jan 17 00:42:47.061677 kernel: mousedev: PS/2 mouse device common for all mice Jan 17 00:42:47.061462 systemd-timesyncd[1420]: Network configuration changed, trying to establish connection. Jan 17 00:42:47.064112 kernel: ACPI: button: Power Button [PWRF] Jan 17 00:42:47.147113 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Jan 17 00:42:47.152100 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 17 00:42:47.165854 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Jan 17 00:42:47.166194 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 17 00:42:47.193915 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 17 00:42:47.206437 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 17 00:42:47.253461 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 00:42:47.260557 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 17 00:42:47.480005 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:42:48.119472 systemd-resolved[1394]: Clock change detected. Flushing caches. Jan 17 00:42:48.119725 systemd-timesyncd[1420]: Contacted time server 109.74.206.120:123 (1.flatcar.pool.ntp.org). 
Jan 17 00:42:48.119949 systemd-timesyncd[1420]: Initial clock synchronization to Sat 2026-01-17 00:42:48.119246 UTC. Jan 17 00:42:48.121754 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 17 00:42:48.127630 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 17 00:42:48.150186 lvm[1473]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 17 00:42:48.264512 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 17 00:42:48.265758 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 17 00:42:48.266577 systemd[1]: Reached target sysinit.target - System Initialization. Jan 17 00:42:48.267617 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 17 00:42:48.268493 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 17 00:42:48.269772 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 17 00:42:48.270614 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 17 00:42:48.271460 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 17 00:42:48.272208 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 17 00:42:48.272258 systemd[1]: Reached target paths.target - Path Units. Jan 17 00:42:48.276182 systemd[1]: Reached target timers.target - Timer Units. Jan 17 00:42:48.279741 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 17 00:42:48.284010 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 17 00:42:48.296211 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. 
Jan 17 00:42:48.299358 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 17 00:42:48.300841 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 17 00:42:48.301745 systemd[1]: Reached target sockets.target - Socket Units. Jan 17 00:42:48.302423 systemd[1]: Reached target basic.target - Basic System. Jan 17 00:42:48.303137 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 17 00:42:48.303198 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 17 00:42:48.311557 systemd[1]: Starting containerd.service - containerd container runtime... Jan 17 00:42:48.317518 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 17 00:42:48.321496 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 17 00:42:48.325395 lvm[1477]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 17 00:42:48.332554 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 17 00:42:48.337985 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 17 00:42:48.347415 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 17 00:42:48.357605 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 17 00:42:48.377150 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 17 00:42:48.384548 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 17 00:42:48.391581 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 17 00:42:48.393424 jq[1481]: false Jan 17 00:42:48.408576 systemd[1]: Starting systemd-logind.service - User Login Management... 
Jan 17 00:42:48.410421 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 17 00:42:48.411236 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 17 00:42:48.412557 systemd[1]: Starting update-engine.service - Update Engine... Jan 17 00:42:48.415887 dbus-daemon[1480]: [system] SELinux support is enabled Jan 17 00:42:48.422838 dbus-daemon[1480]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1438 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 17 00:42:48.423752 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 17 00:42:48.426735 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 17 00:42:48.433182 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 17 00:42:48.442675 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 17 00:42:48.443433 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 17 00:42:48.450012 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 17 00:42:48.450285 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Jan 17 00:42:48.460156 extend-filesystems[1482]: Found loop4 Jan 17 00:42:48.463458 extend-filesystems[1482]: Found loop5 Jan 17 00:42:48.463458 extend-filesystems[1482]: Found loop6 Jan 17 00:42:48.463458 extend-filesystems[1482]: Found loop7 Jan 17 00:42:48.463458 extend-filesystems[1482]: Found vda Jan 17 00:42:48.463458 extend-filesystems[1482]: Found vda1 Jan 17 00:42:48.463458 extend-filesystems[1482]: Found vda2 Jan 17 00:42:48.463458 extend-filesystems[1482]: Found vda3 Jan 17 00:42:48.463458 extend-filesystems[1482]: Found usr Jan 17 00:42:48.463458 extend-filesystems[1482]: Found vda4 Jan 17 00:42:48.463458 extend-filesystems[1482]: Found vda6 Jan 17 00:42:48.463458 extend-filesystems[1482]: Found vda7 Jan 17 00:42:48.463458 extend-filesystems[1482]: Found vda9 Jan 17 00:42:48.463458 extend-filesystems[1482]: Checking size of /dev/vda9 Jan 17 00:42:48.471249 dbus-daemon[1480]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 17 00:42:48.490666 systemd[1]: Started update-engine.service - Update Engine. Jan 17 00:42:48.526238 update_engine[1492]: I20260117 00:42:48.487609 1492 main.cc:92] Flatcar Update Engine starting Jan 17 00:42:48.526238 update_engine[1492]: I20260117 00:42:48.489913 1492 update_check_scheduler.cc:74] Next update check in 8m52s Jan 17 00:42:48.497055 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 17 00:42:48.498240 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 17 00:42:48.545590 jq[1493]: true Jan 17 00:42:48.498289 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 17 00:42:48.508535 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... 
Jan 17 00:42:48.510012 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 17 00:42:48.510053 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 17 00:42:48.518980 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 17 00:42:48.522810 systemd[1]: motdgen.service: Deactivated successfully.
Jan 17 00:42:48.524415 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 17 00:42:48.555336 extend-filesystems[1482]: Resized partition /dev/vda9
Jan 17 00:42:48.570376 extend-filesystems[1522]: resize2fs 1.47.1 (20-May-2024)
Jan 17 00:42:48.572787 tar[1500]: linux-amd64/LICENSE
Jan 17 00:42:48.572787 tar[1500]: linux-amd64/helm
Jan 17 00:42:48.580332 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks
Jan 17 00:42:48.581117 jq[1515]: true
Jan 17 00:42:48.587181 (ntainerd)[1518]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 17 00:42:48.658333 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1440)
Jan 17 00:42:48.745365 systemd-logind[1490]: Watching system buttons on /dev/input/event2 (Power Button)
Jan 17 00:42:48.745414 systemd-logind[1490]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jan 17 00:42:48.758765 dbus-daemon[1480]: [system] Successfully activated service 'org.freedesktop.hostname1'
Jan 17 00:42:48.745759 systemd-logind[1490]: New seat seat0.
Jan 17 00:42:48.760084 dbus-daemon[1480]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1511 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Jan 17 00:42:48.748553 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 17 00:42:48.758968 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Jan 17 00:42:48.785662 systemd[1]: Starting polkit.service - Authorization Manager...
Jan 17 00:42:48.809728 bash[1538]: Updated "/home/core/.ssh/authorized_keys"
Jan 17 00:42:48.810520 polkitd[1539]: Started polkitd version 121
Jan 17 00:42:48.816643 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 17 00:42:48.828647 systemd[1]: Starting sshkeys.service...
Jan 17 00:42:48.839900 polkitd[1539]: Loading rules from directory /etc/polkit-1/rules.d
Jan 17 00:42:48.950747 polkitd[1539]: Loading rules from directory /usr/share/polkit-1/rules.d
Jan 17 00:42:49.026156 polkitd[1539]: Finished loading, compiling and executing 2 rules
Jan 17 00:42:49.044477 dbus-daemon[1480]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Jan 17 00:42:49.045887 systemd[1]: Started polkit.service - Authorization Manager.
Jan 17 00:42:49.050394 polkitd[1539]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Jan 17 00:42:49.086266 systemd-hostnamed[1511]: Hostname set to (static)
Jan 17 00:42:49.141195 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Jan 17 00:42:49.154613 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Jan 17 00:42:49.270326 locksmithd[1512]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 17 00:42:49.290932 kernel: EXT4-fs (vda9): resized filesystem to 15121403
Jan 17 00:42:49.323010 extend-filesystems[1522]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jan 17 00:42:49.323010 extend-filesystems[1522]: old_desc_blocks = 1, new_desc_blocks = 8
Jan 17 00:42:49.323010 extend-filesystems[1522]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long.
Jan 17 00:42:49.330809 extend-filesystems[1482]: Resized filesystem in /dev/vda9
Jan 17 00:42:49.326084 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 17 00:42:49.342836 sshd_keygen[1513]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 17 00:42:49.328086 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 17 00:42:49.329997 systemd-networkd[1438]: eth0: Gained IPv6LL
Jan 17 00:42:49.354296 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 17 00:42:49.359078 systemd[1]: Reached target network-online.target - Network is Online.
Jan 17 00:42:49.379627 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 17 00:42:49.389771 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jan 17 00:42:49.478460 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jan 17 00:42:49.492822 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jan 17 00:42:49.497572 systemd[1]: Started sshd@0-10.243.73.150:22-20.161.92.111:55064.service - OpenSSH per-connection server daemon (20.161.92.111:55064).
Jan 17 00:42:49.514921 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jan 17 00:42:49.522442 systemd[1]: issuegen.service: Deactivated successfully.
Jan 17 00:42:49.522738 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jan 17 00:42:49.535537 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jan 17 00:42:49.646726 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jan 17 00:42:49.664062 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jan 17 00:42:49.675083 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jan 17 00:42:49.677146 systemd[1]: Reached target getty.target - Login Prompts.
Jan 17 00:42:49.699325 containerd[1518]: time="2026-01-17T00:42:49.697431322Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Jan 17 00:42:49.765095 containerd[1518]: time="2026-01-17T00:42:49.764997242Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jan 17 00:42:49.773120 containerd[1518]: time="2026-01-17T00:42:49.773068801Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.119-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jan 17 00:42:49.774377 containerd[1518]: time="2026-01-17T00:42:49.774347027Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jan 17 00:42:49.775140 containerd[1518]: time="2026-01-17T00:42:49.774470612Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jan 17 00:42:49.775140 containerd[1518]: time="2026-01-17T00:42:49.774811279Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jan 17 00:42:49.775140 containerd[1518]: time="2026-01-17T00:42:49.774871801Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jan 17 00:42:49.775140 containerd[1518]: time="2026-01-17T00:42:49.774993187Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jan 17 00:42:49.775140 containerd[1518]: time="2026-01-17T00:42:49.775016494Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jan 17 00:42:49.775542 containerd[1518]: time="2026-01-17T00:42:49.775512025Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 17 00:42:49.775633 containerd[1518]: time="2026-01-17T00:42:49.775610861Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jan 17 00:42:49.775761 containerd[1518]: time="2026-01-17T00:42:49.775733857Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jan 17 00:42:49.775844 containerd[1518]: time="2026-01-17T00:42:49.775823299Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jan 17 00:42:49.776108 containerd[1518]: time="2026-01-17T00:42:49.776081153Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jan 17 00:42:49.776689 containerd[1518]: time="2026-01-17T00:42:49.776663021Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jan 17 00:42:49.776987 containerd[1518]: time="2026-01-17T00:42:49.776958558Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 17 00:42:49.777354 containerd[1518]: time="2026-01-17T00:42:49.777057301Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jan 17 00:42:49.777463 containerd[1518]: time="2026-01-17T00:42:49.777434507Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jan 17 00:42:49.777632 containerd[1518]: time="2026-01-17T00:42:49.777602322Z" level=info msg="metadata content store policy set" policy=shared
Jan 17 00:42:49.817978 containerd[1518]: time="2026-01-17T00:42:49.817895457Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jan 17 00:42:49.817978 containerd[1518]: time="2026-01-17T00:42:49.818001938Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jan 17 00:42:49.818277 containerd[1518]: time="2026-01-17T00:42:49.818033738Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jan 17 00:42:49.818277 containerd[1518]: time="2026-01-17T00:42:49.818057930Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jan 17 00:42:49.818277 containerd[1518]: time="2026-01-17T00:42:49.818079856Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jan 17 00:42:49.818415 containerd[1518]: time="2026-01-17T00:42:49.818370932Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jan 17 00:42:49.818991 containerd[1518]: time="2026-01-17T00:42:49.818939120Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jan 17 00:42:49.822343 containerd[1518]: time="2026-01-17T00:42:49.822064383Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jan 17 00:42:49.822343 containerd[1518]: time="2026-01-17T00:42:49.822157160Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jan 17 00:42:49.822343 containerd[1518]: time="2026-01-17T00:42:49.822205190Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jan 17 00:42:49.822343 containerd[1518]: time="2026-01-17T00:42:49.822237802Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jan 17 00:42:49.822343 containerd[1518]: time="2026-01-17T00:42:49.822280614Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jan 17 00:42:49.822343 containerd[1518]: time="2026-01-17T00:42:49.822332666Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jan 17 00:42:49.822778 containerd[1518]: time="2026-01-17T00:42:49.822364800Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jan 17 00:42:49.822778 containerd[1518]: time="2026-01-17T00:42:49.822390956Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jan 17 00:42:49.822778 containerd[1518]: time="2026-01-17T00:42:49.822415844Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jan 17 00:42:49.822778 containerd[1518]: time="2026-01-17T00:42:49.822458495Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jan 17 00:42:49.822778 containerd[1518]: time="2026-01-17T00:42:49.822483620Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jan 17 00:42:49.822778 containerd[1518]: time="2026-01-17T00:42:49.822541204Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jan 17 00:42:49.822778 containerd[1518]: time="2026-01-17T00:42:49.822565916Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jan 17 00:42:49.822778 containerd[1518]: time="2026-01-17T00:42:49.822594886Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jan 17 00:42:49.822778 containerd[1518]: time="2026-01-17T00:42:49.822620738Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jan 17 00:42:49.822778 containerd[1518]: time="2026-01-17T00:42:49.822654095Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jan 17 00:42:49.822778 containerd[1518]: time="2026-01-17T00:42:49.822681219Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jan 17 00:42:49.822778 containerd[1518]: time="2026-01-17T00:42:49.822714040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jan 17 00:42:49.822778 containerd[1518]: time="2026-01-17T00:42:49.822734338Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jan 17 00:42:49.822778 containerd[1518]: time="2026-01-17T00:42:49.822768386Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jan 17 00:42:49.823265 containerd[1518]: time="2026-01-17T00:42:49.822796486Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jan 17 00:42:49.823265 containerd[1518]: time="2026-01-17T00:42:49.822831340Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jan 17 00:42:49.823265 containerd[1518]: time="2026-01-17T00:42:49.822867045Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jan 17 00:42:49.823265 containerd[1518]: time="2026-01-17T00:42:49.822901386Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jan 17 00:42:49.823265 containerd[1518]: time="2026-01-17T00:42:49.822930093Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jan 17 00:42:49.823265 containerd[1518]: time="2026-01-17T00:42:49.822977611Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jan 17 00:42:49.823265 containerd[1518]: time="2026-01-17T00:42:49.823007479Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jan 17 00:42:49.823265 containerd[1518]: time="2026-01-17T00:42:49.823031200Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jan 17 00:42:49.823265 containerd[1518]: time="2026-01-17T00:42:49.823115439Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jan 17 00:42:49.823265 containerd[1518]: time="2026-01-17T00:42:49.823168954Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jan 17 00:42:49.823265 containerd[1518]: time="2026-01-17T00:42:49.823193037Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jan 17 00:42:49.823265 containerd[1518]: time="2026-01-17T00:42:49.823229403Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jan 17 00:42:49.823265 containerd[1518]: time="2026-01-17T00:42:49.823246946Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jan 17 00:42:49.823714 containerd[1518]: time="2026-01-17T00:42:49.823275125Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jan 17 00:42:49.823714 containerd[1518]: time="2026-01-17T00:42:49.823302693Z" level=info msg="NRI interface is disabled by configuration."
Jan 17 00:42:49.823714 containerd[1518]: time="2026-01-17T00:42:49.823341204Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jan 17 00:42:49.824907 containerd[1518]: time="2026-01-17T00:42:49.823808742Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jan 17 00:42:49.824907 containerd[1518]: time="2026-01-17T00:42:49.823929576Z" level=info msg="Connect containerd service"
Jan 17 00:42:49.824907 containerd[1518]: time="2026-01-17T00:42:49.824005798Z" level=info msg="using legacy CRI server"
Jan 17 00:42:49.824907 containerd[1518]: time="2026-01-17T00:42:49.824022131Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jan 17 00:42:49.824907 containerd[1518]: time="2026-01-17T00:42:49.824199175Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jan 17 00:42:49.828335 containerd[1518]: time="2026-01-17T00:42:49.827468695Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 17 00:42:49.828548 containerd[1518]: time="2026-01-17T00:42:49.828473196Z" level=info msg="Start subscribing containerd event"
Jan 17 00:42:49.829564 containerd[1518]: time="2026-01-17T00:42:49.828560405Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jan 17 00:42:49.829648 containerd[1518]: time="2026-01-17T00:42:49.829625005Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jan 17 00:42:49.831389 containerd[1518]: time="2026-01-17T00:42:49.831357131Z" level=info msg="Start recovering state"
Jan 17 00:42:49.831506 containerd[1518]: time="2026-01-17T00:42:49.831478302Z" level=info msg="Start event monitor"
Jan 17 00:42:49.831553 containerd[1518]: time="2026-01-17T00:42:49.831528289Z" level=info msg="Start snapshots syncer"
Jan 17 00:42:49.831597 containerd[1518]: time="2026-01-17T00:42:49.831552122Z" level=info msg="Start cni network conf syncer for default"
Jan 17 00:42:49.831597 containerd[1518]: time="2026-01-17T00:42:49.831566510Z" level=info msg="Start streaming server"
Jan 17 00:42:49.831824 systemd[1]: Started containerd.service - containerd container runtime.
Jan 17 00:42:49.833715 containerd[1518]: time="2026-01-17T00:42:49.833685793Z" level=info msg="containerd successfully booted in 0.141247s"
Jan 17 00:42:50.215874 tar[1500]: linux-amd64/README.md
Jan 17 00:42:50.234966 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jan 17 00:42:50.271620 sshd[1584]: Accepted publickey for core from 20.161.92.111 port 55064 ssh2: RSA SHA256:e7YTQZHggQ0j4O1p7twKFyXfxguBGEIbATr9At9uxuc
Jan 17 00:42:50.273793 sshd[1584]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:42:50.295889 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jan 17 00:42:50.303072 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jan 17 00:42:50.310749 systemd-logind[1490]: New session 1 of user core.
Jan 17 00:42:50.333294 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jan 17 00:42:50.354871 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jan 17 00:42:50.362057 (systemd)[1604]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jan 17 00:42:50.591314 systemd[1604]: Queued start job for default target default.target.
Jan 17 00:42:50.601228 systemd[1604]: Created slice app.slice - User Application Slice.
Jan 17 00:42:50.601276 systemd[1604]: Reached target paths.target - Paths.
Jan 17 00:42:50.601299 systemd[1604]: Reached target timers.target - Timers.
Jan 17 00:42:50.605505 systemd[1604]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jan 17 00:42:50.619438 systemd[1604]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jan 17 00:42:50.620298 systemd[1604]: Reached target sockets.target - Sockets.
Jan 17 00:42:50.620324 systemd[1604]: Reached target basic.target - Basic System.
Jan 17 00:42:50.620422 systemd[1604]: Reached target default.target - Main User Target.
Jan 17 00:42:50.620527 systemd[1604]: Startup finished in 245ms.
Jan 17 00:42:50.620979 systemd[1]: Started user@500.service - User Manager for UID 500.
Jan 17 00:42:50.628618 systemd[1]: Started session-1.scope - Session 1 of User core.
Jan 17 00:42:50.841107 systemd-networkd[1438]: eth0: Ignoring DHCPv6 address 2a02:1348:17c:d265:24:19ff:fef3:4996/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:17c:d265:24:19ff:fef3:4996/64 assigned by NDisc.
Jan 17 00:42:50.841120 systemd-networkd[1438]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no.
Jan 17 00:42:51.097989 systemd[1]: Started sshd@1-10.243.73.150:22-20.161.92.111:55076.service - OpenSSH per-connection server daemon (20.161.92.111:55076).
Jan 17 00:42:51.457633 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 17 00:42:51.458137 (kubelet)[1624]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 17 00:42:51.678633 sshd[1617]: Accepted publickey for core from 20.161.92.111 port 55076 ssh2: RSA SHA256:e7YTQZHggQ0j4O1p7twKFyXfxguBGEIbATr9At9uxuc
Jan 17 00:42:51.682211 sshd[1617]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:42:51.692280 systemd-logind[1490]: New session 2 of user core.
Jan 17 00:42:51.702693 systemd[1]: Started session-2.scope - Session 2 of User core.
Jan 17 00:42:52.085630 sshd[1617]: pam_unix(sshd:session): session closed for user core
Jan 17 00:42:52.091559 systemd[1]: sshd@1-10.243.73.150:22-20.161.92.111:55076.service: Deactivated successfully.
Jan 17 00:42:52.094749 systemd[1]: session-2.scope: Deactivated successfully.
Jan 17 00:42:52.096636 systemd-logind[1490]: Session 2 logged out. Waiting for processes to exit.
Jan 17 00:42:52.098822 systemd-logind[1490]: Removed session 2.
Jan 17 00:42:52.193268 systemd[1]: Started sshd@2-10.243.73.150:22-20.161.92.111:55090.service - OpenSSH per-connection server daemon (20.161.92.111:55090).
Jan 17 00:42:52.431239 kubelet[1624]: E0117 00:42:52.431057    1624 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 17 00:42:52.434996 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 17 00:42:52.435343 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 17 00:42:52.436158 systemd[1]: kubelet.service: Consumed 1.655s CPU time.
Jan 17 00:42:52.835712 sshd[1636]: Accepted publickey for core from 20.161.92.111 port 55090 ssh2: RSA SHA256:e7YTQZHggQ0j4O1p7twKFyXfxguBGEIbATr9At9uxuc
Jan 17 00:42:52.838839 sshd[1636]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:42:52.864122 systemd-logind[1490]: New session 3 of user core.
Jan 17 00:42:52.876800 systemd[1]: Started session-3.scope - Session 3 of User core.
Jan 17 00:42:53.243914 sshd[1636]: pam_unix(sshd:session): session closed for user core
Jan 17 00:42:53.249531 systemd[1]: sshd@2-10.243.73.150:22-20.161.92.111:55090.service: Deactivated successfully.
Jan 17 00:42:53.252294 systemd[1]: session-3.scope: Deactivated successfully.
Jan 17 00:42:53.253377 systemd-logind[1490]: Session 3 logged out. Waiting for processes to exit.
Jan 17 00:42:53.255191 systemd-logind[1490]: Removed session 3.
Jan 17 00:42:54.739366 login[1593]: pam_lastlog(login:session): file /var/log/lastlog is locked/read, retrying
Jan 17 00:42:54.739629 login[1592]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Jan 17 00:42:54.748262 systemd-logind[1490]: New session 4 of user core.
Jan 17 00:42:54.762682 systemd[1]: Started session-4.scope - Session 4 of User core.
Jan 17 00:42:55.495644 coreos-metadata[1479]: Jan 17 00:42:55.495 WARN failed to locate config-drive, using the metadata service API instead
Jan 17 00:42:55.529912 coreos-metadata[1479]: Jan 17 00:42:55.529 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1
Jan 17 00:42:55.536777 coreos-metadata[1479]: Jan 17 00:42:55.536 INFO Fetch failed with 404: resource not found
Jan 17 00:42:55.536777 coreos-metadata[1479]: Jan 17 00:42:55.536 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Jan 17 00:42:55.537537 coreos-metadata[1479]: Jan 17 00:42:55.537 INFO Fetch successful
Jan 17 00:42:55.537724 coreos-metadata[1479]: Jan 17 00:42:55.537 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1
Jan 17 00:42:55.549921 coreos-metadata[1479]: Jan 17 00:42:55.549 INFO Fetch successful
Jan 17 00:42:55.550131 coreos-metadata[1479]: Jan 17 00:42:55.550 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1
Jan 17 00:42:55.566485 coreos-metadata[1479]: Jan 17 00:42:55.566 INFO Fetch successful
Jan 17 00:42:55.566675 coreos-metadata[1479]: Jan 17 00:42:55.566 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1
Jan 17 00:42:55.583443 coreos-metadata[1479]: Jan 17 00:42:55.583 INFO Fetch successful
Jan 17 00:42:55.583658 coreos-metadata[1479]: Jan 17 00:42:55.583 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1
Jan 17 00:42:55.601193 coreos-metadata[1479]: Jan 17 00:42:55.601 INFO Fetch successful
Jan 17 00:42:55.633505 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jan 17 00:42:55.634666 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jan 17 00:42:55.742930 login[1593]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Jan 17 00:42:55.750307 systemd-logind[1490]: New session 5 of user core.
Jan 17 00:42:55.767836 systemd[1]: Started session-5.scope - Session 5 of User core.
Jan 17 00:42:56.584779 coreos-metadata[1559]: Jan 17 00:42:56.584 WARN failed to locate config-drive, using the metadata service API instead
Jan 17 00:42:56.609075 coreos-metadata[1559]: Jan 17 00:42:56.608 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1
Jan 17 00:42:56.633536 coreos-metadata[1559]: Jan 17 00:42:56.633 INFO Fetch successful
Jan 17 00:42:56.633903 coreos-metadata[1559]: Jan 17 00:42:56.633 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1
Jan 17 00:42:56.662547 coreos-metadata[1559]: Jan 17 00:42:56.662 INFO Fetch successful
Jan 17 00:42:56.668743 unknown[1559]: wrote ssh authorized keys file for user: core
Jan 17 00:42:56.698471 update-ssh-keys[1677]: Updated "/home/core/.ssh/authorized_keys"
Jan 17 00:42:56.699724 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Jan 17 00:42:56.702643 systemd[1]: Finished sshkeys.service.
Jan 17 00:42:56.706388 systemd[1]: Reached target multi-user.target - Multi-User System.
Jan 17 00:42:56.706982 systemd[1]: Startup finished in 1.716s (kernel) + 14.392s (initrd) + 12.992s (userspace) = 29.101s.
Jan 17 00:43:02.685954 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jan 17 00:43:02.693584 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 17 00:43:02.999688 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 17 00:43:03.005584 (kubelet)[1689]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 17 00:43:03.087411 kubelet[1689]: E0117 00:43:03.087125    1689 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 17 00:43:03.092061 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 17 00:43:03.092336 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 17 00:43:03.365807 systemd[1]: Started sshd@3-10.243.73.150:22-20.161.92.111:56596.service - OpenSSH per-connection server daemon (20.161.92.111:56596).
Jan 17 00:43:03.925138 sshd[1697]: Accepted publickey for core from 20.161.92.111 port 56596 ssh2: RSA SHA256:e7YTQZHggQ0j4O1p7twKFyXfxguBGEIbATr9At9uxuc
Jan 17 00:43:03.927184 sshd[1697]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:43:03.934127 systemd-logind[1490]: New session 6 of user core.
Jan 17 00:43:03.944536 systemd[1]: Started session-6.scope - Session 6 of User core.
Jan 17 00:43:04.331673 sshd[1697]: pam_unix(sshd:session): session closed for user core
Jan 17 00:43:04.336526 systemd[1]: sshd@3-10.243.73.150:22-20.161.92.111:56596.service: Deactivated successfully.
Jan 17 00:43:04.339694 systemd[1]: session-6.scope: Deactivated successfully.
Jan 17 00:43:04.341824 systemd-logind[1490]: Session 6 logged out. Waiting for processes to exit.
Jan 17 00:43:04.343455 systemd-logind[1490]: Removed session 6.
Jan 17 00:43:04.442858 systemd[1]: Started sshd@4-10.243.73.150:22-20.161.92.111:56606.service - OpenSSH per-connection server daemon (20.161.92.111:56606).
Jan 17 00:43:05.021125 sshd[1704]: Accepted publickey for core from 20.161.92.111 port 56606 ssh2: RSA SHA256:e7YTQZHggQ0j4O1p7twKFyXfxguBGEIbATr9At9uxuc
Jan 17 00:43:05.023273 sshd[1704]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:43:05.029419 systemd-logind[1490]: New session 7 of user core.
Jan 17 00:43:05.039012 systemd[1]: Started session-7.scope - Session 7 of User core.
Jan 17 00:43:05.419865 sshd[1704]: pam_unix(sshd:session): session closed for user core
Jan 17 00:43:05.425319 systemd[1]: sshd@4-10.243.73.150:22-20.161.92.111:56606.service: Deactivated successfully.
Jan 17 00:43:05.427840 systemd[1]: session-7.scope: Deactivated successfully.
Jan 17 00:43:05.428945 systemd-logind[1490]: Session 7 logged out. Waiting for processes to exit.
Jan 17 00:43:05.430244 systemd-logind[1490]: Removed session 7.
Jan 17 00:43:05.522696 systemd[1]: Started sshd@5-10.243.73.150:22-20.161.92.111:56612.service - OpenSSH per-connection server daemon (20.161.92.111:56612).
Jan 17 00:43:06.096069 sshd[1711]: Accepted publickey for core from 20.161.92.111 port 56612 ssh2: RSA SHA256:e7YTQZHggQ0j4O1p7twKFyXfxguBGEIbATr9At9uxuc
Jan 17 00:43:06.098348 sshd[1711]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:43:06.105566 systemd-logind[1490]: New session 8 of user core.
Jan 17 00:43:06.111702 systemd[1]: Started session-8.scope - Session 8 of User core.
Jan 17 00:43:06.502587 sshd[1711]: pam_unix(sshd:session): session closed for user core
Jan 17 00:43:06.506695 systemd-logind[1490]: Session 8 logged out. Waiting for processes to exit.
Jan 17 00:43:06.507232 systemd[1]: sshd@5-10.243.73.150:22-20.161.92.111:56612.service: Deactivated successfully.
Jan 17 00:43:06.509364 systemd[1]: session-8.scope: Deactivated successfully.
Jan 17 00:43:06.511325 systemd-logind[1490]: Removed session 8.
Jan 17 00:43:06.606113 systemd[1]: Started sshd@6-10.243.73.150:22-20.161.92.111:56614.service - OpenSSH per-connection server daemon (20.161.92.111:56614).
Jan 17 00:43:07.187468 sshd[1718]: Accepted publickey for core from 20.161.92.111 port 56614 ssh2: RSA SHA256:e7YTQZHggQ0j4O1p7twKFyXfxguBGEIbATr9At9uxuc
Jan 17 00:43:07.189594 sshd[1718]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:43:07.196004 systemd-logind[1490]: New session 9 of user core.
Jan 17 00:43:07.207549 systemd[1]: Started session-9.scope - Session 9 of User core.
Jan 17 00:43:07.518637 sudo[1721]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jan 17 00:43:07.519127 sudo[1721]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 17 00:43:07.536699 sudo[1721]: pam_unix(sudo:session): session closed for user root
Jan 17 00:43:07.627111 sshd[1718]: pam_unix(sshd:session): session closed for user core
Jan 17 00:43:07.631540 systemd[1]: sshd@6-10.243.73.150:22-20.161.92.111:56614.service: Deactivated successfully.
Jan 17 00:43:07.634134 systemd[1]: session-9.scope: Deactivated successfully.
Jan 17 00:43:07.636761 systemd-logind[1490]: Session 9 logged out. Waiting for processes to exit.
Jan 17 00:43:07.638370 systemd-logind[1490]: Removed session 9.
Jan 17 00:43:07.736051 systemd[1]: Started sshd@7-10.243.73.150:22-20.161.92.111:56620.service - OpenSSH per-connection server daemon (20.161.92.111:56620).
Jan 17 00:43:08.298430 sshd[1726]: Accepted publickey for core from 20.161.92.111 port 56620 ssh2: RSA SHA256:e7YTQZHggQ0j4O1p7twKFyXfxguBGEIbATr9At9uxuc
Jan 17 00:43:08.301188 sshd[1726]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:43:08.309520 systemd-logind[1490]: New session 10 of user core.
Jan 17 00:43:08.315616 systemd[1]: Started session-10.scope - Session 10 of User core.
Jan 17 00:43:08.615870 sudo[1730]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jan 17 00:43:08.616376 sudo[1730]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 17 00:43:08.622551 sudo[1730]: pam_unix(sudo:session): session closed for user root
Jan 17 00:43:08.630854 sudo[1729]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Jan 17 00:43:08.631313 sudo[1729]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 17 00:43:08.649705 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Jan 17 00:43:08.654814 auditctl[1733]: No rules
Jan 17 00:43:08.655409 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 17 00:43:08.655721 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Jan 17 00:43:08.671268 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jan 17 00:43:08.708569 augenrules[1751]: No rules
Jan 17 00:43:08.710246 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jan 17 00:43:08.712674 sudo[1729]: pam_unix(sudo:session): session closed for user root
Jan 17 00:43:08.802894 sshd[1726]: pam_unix(sshd:session): session closed for user core
Jan 17 00:43:08.807720 systemd[1]: sshd@7-10.243.73.150:22-20.161.92.111:56620.service: Deactivated successfully.
Jan 17 00:43:08.808127 systemd-logind[1490]: Session 10 logged out. Waiting for processes to exit.
Jan 17 00:43:08.810123 systemd[1]: session-10.scope: Deactivated successfully.
Jan 17 00:43:08.812246 systemd-logind[1490]: Removed session 10.
Jan 17 00:43:08.912800 systemd[1]: Started sshd@8-10.243.73.150:22-20.161.92.111:56628.service - OpenSSH per-connection server daemon (20.161.92.111:56628).
Jan 17 00:43:09.471116 sshd[1759]: Accepted publickey for core from 20.161.92.111 port 56628 ssh2: RSA SHA256:e7YTQZHggQ0j4O1p7twKFyXfxguBGEIbATr9At9uxuc
Jan 17 00:43:09.473257 sshd[1759]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:43:09.479251 systemd-logind[1490]: New session 11 of user core.
Jan 17 00:43:09.487635 systemd[1]: Started session-11.scope - Session 11 of User core.
Jan 17 00:43:09.785977 sudo[1762]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jan 17 00:43:09.786552 sudo[1762]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 17 00:43:10.422755 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jan 17 00:43:10.426265 (dockerd)[1778]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jan 17 00:43:11.114137 dockerd[1778]: time="2026-01-17T00:43:11.113960554Z" level=info msg="Starting up"
Jan 17 00:43:11.263966 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport398770449-merged.mount: Deactivated successfully.
Jan 17 00:43:11.317805 dockerd[1778]: time="2026-01-17T00:43:11.317267738Z" level=info msg="Loading containers: start."
Jan 17 00:43:11.472659 kernel: Initializing XFRM netlink socket
Jan 17 00:43:11.593806 systemd-networkd[1438]: docker0: Link UP
Jan 17 00:43:11.617249 dockerd[1778]: time="2026-01-17T00:43:11.617054885Z" level=info msg="Loading containers: done."
Jan 17 00:43:11.655238 dockerd[1778]: time="2026-01-17T00:43:11.655150499Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jan 17 00:43:11.655510 dockerd[1778]: time="2026-01-17T00:43:11.655375627Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Jan 17 00:43:11.655687 dockerd[1778]: time="2026-01-17T00:43:11.655614031Z" level=info msg="Daemon has completed initialization"
Jan 17 00:43:11.698460 dockerd[1778]: time="2026-01-17T00:43:11.697825989Z" level=info msg="API listen on /run/docker.sock"
Jan 17 00:43:11.699012 systemd[1]: Started docker.service - Docker Application Container Engine.
Jan 17 00:43:12.260666 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1410744873-merged.mount: Deactivated successfully.
Jan 17 00:43:13.119495 containerd[1518]: time="2026-01-17T00:43:13.118439185Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\""
Jan 17 00:43:13.343024 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jan 17 00:43:13.352631 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 17 00:43:13.752562 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 17 00:43:13.752765 (kubelet)[1930]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 17 00:43:13.845987 kubelet[1930]: E0117 00:43:13.845872 1930 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 17 00:43:13.849189 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 17 00:43:13.849951 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 17 00:43:13.991465 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3208943004.mount: Deactivated successfully.
Jan 17 00:43:16.523462 containerd[1518]: time="2026-01-17T00:43:16.523383207Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:43:16.525087 containerd[1518]: time="2026-01-17T00:43:16.525037076Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.11: active requests=0, bytes read=29070655"
Jan 17 00:43:16.527343 containerd[1518]: time="2026-01-17T00:43:16.525893401Z" level=info msg="ImageCreate event name:\"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:43:16.530232 containerd[1518]: time="2026-01-17T00:43:16.530171259Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:43:16.531960 containerd[1518]: time="2026-01-17T00:43:16.531924940Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.11\" with image id \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\", size \"29067246\" in 3.413325351s"
Jan 17 00:43:16.532148 containerd[1518]: time="2026-01-17T00:43:16.532118594Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\" returns image reference \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\""
Jan 17 00:43:16.533932 containerd[1518]: time="2026-01-17T00:43:16.533880899Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\""
Jan 17 00:43:19.019909 containerd[1518]: time="2026-01-17T00:43:19.019788787Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:43:19.021796 containerd[1518]: time="2026-01-17T00:43:19.021432054Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.11: active requests=0, bytes read=24993362"
Jan 17 00:43:19.023256 containerd[1518]: time="2026-01-17T00:43:19.022677079Z" level=info msg="ImageCreate event name:\"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:43:19.026672 containerd[1518]: time="2026-01-17T00:43:19.026631711Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:43:19.028582 containerd[1518]: time="2026-01-17T00:43:19.028534186Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.11\" with image id \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\", size \"26650388\" in 2.494452396s"
Jan 17 00:43:19.028724 containerd[1518]: time="2026-01-17T00:43:19.028697595Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\" returns image reference \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\""
Jan 17 00:43:19.030006 containerd[1518]: time="2026-01-17T00:43:19.029889592Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\""
Jan 17 00:43:20.921541 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 17 00:43:20.936374 containerd[1518]: time="2026-01-17T00:43:20.934983847Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:43:20.953456 containerd[1518]: time="2026-01-17T00:43:20.953412863Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.11: active requests=0, bytes read=19405084"
Jan 17 00:43:20.955506 containerd[1518]: time="2026-01-17T00:43:20.955464709Z" level=info msg="ImageCreate event name:\"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:43:20.959516 containerd[1518]: time="2026-01-17T00:43:20.959476809Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:43:20.961732 containerd[1518]: time="2026-01-17T00:43:20.961350602Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.11\" with image id \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\", size \"21062128\" in 1.931097196s"
Jan 17 00:43:20.961732 containerd[1518]: time="2026-01-17T00:43:20.961402549Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\" returns image reference \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\""
Jan 17 00:43:20.962839 containerd[1518]: time="2026-01-17T00:43:20.962687903Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\""
Jan 17 00:43:22.884754 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1505964766.mount: Deactivated successfully.
Jan 17 00:43:23.828450 containerd[1518]: time="2026-01-17T00:43:23.828303332Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:43:23.830047 containerd[1518]: time="2026-01-17T00:43:23.829997598Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.11: active requests=0, bytes read=31161907"
Jan 17 00:43:23.831133 containerd[1518]: time="2026-01-17T00:43:23.830706161Z" level=info msg="ImageCreate event name:\"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:43:23.833841 containerd[1518]: time="2026-01-17T00:43:23.833796358Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:43:23.835152 containerd[1518]: time="2026-01-17T00:43:23.835109947Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" with image id \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\", repo tag \"registry.k8s.io/kube-proxy:v1.32.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\", size \"31160918\" in 2.870700756s"
Jan 17 00:43:23.835376 containerd[1518]: time="2026-01-17T00:43:23.835342024Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\" returns image reference \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\""
Jan 17 00:43:23.839814 containerd[1518]: time="2026-01-17T00:43:23.839710691Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Jan 17 00:43:24.082262 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Jan 17 00:43:24.093718 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 17 00:43:24.484600 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 17 00:43:24.499839 (kubelet)[2024]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 17 00:43:24.538445 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount224257373.mount: Deactivated successfully.
Jan 17 00:43:24.618444 kubelet[2024]: E0117 00:43:24.618279 2024 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 17 00:43:24.622851 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 17 00:43:24.623121 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 17 00:43:26.484370 containerd[1518]: time="2026-01-17T00:43:26.483828967Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:43:26.488191 containerd[1518]: time="2026-01-17T00:43:26.488108547Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565249"
Jan 17 00:43:26.496334 containerd[1518]: time="2026-01-17T00:43:26.494388268Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:43:26.499444 containerd[1518]: time="2026-01-17T00:43:26.499405970Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:43:26.500701 containerd[1518]: time="2026-01-17T00:43:26.500645428Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.660596443s"
Jan 17 00:43:26.500844 containerd[1518]: time="2026-01-17T00:43:26.500816557Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Jan 17 00:43:26.501895 containerd[1518]: time="2026-01-17T00:43:26.501842514Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jan 17 00:43:27.042050 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2031987391.mount: Deactivated successfully.
Jan 17 00:43:27.063364 containerd[1518]: time="2026-01-17T00:43:27.062882315Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:43:27.064171 containerd[1518]: time="2026-01-17T00:43:27.064061090Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146"
Jan 17 00:43:27.065113 containerd[1518]: time="2026-01-17T00:43:27.065075717Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:43:27.069585 containerd[1518]: time="2026-01-17T00:43:27.069455233Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:43:27.072736 containerd[1518]: time="2026-01-17T00:43:27.072549406Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 570.659555ms"
Jan 17 00:43:27.072736 containerd[1518]: time="2026-01-17T00:43:27.072603842Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Jan 17 00:43:27.073605 containerd[1518]: time="2026-01-17T00:43:27.073541615Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Jan 17 00:43:27.692418 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2913574746.mount: Deactivated successfully.
Jan 17 00:43:33.606730 update_engine[1492]: I20260117 00:43:33.606391 1492 update_attempter.cc:509] Updating boot flags...
Jan 17 00:43:33.692613 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2150)
Jan 17 00:43:33.804888 containerd[1518]: time="2026-01-17T00:43:33.804802446Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:43:33.807906 containerd[1518]: time="2026-01-17T00:43:33.807857174Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682064"
Jan 17 00:43:33.810380 containerd[1518]: time="2026-01-17T00:43:33.810191001Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:43:33.818988 containerd[1518]: time="2026-01-17T00:43:33.818935741Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:43:33.822415 containerd[1518]: time="2026-01-17T00:43:33.821693718Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 6.748108838s"
Jan 17 00:43:33.822415 containerd[1518]: time="2026-01-17T00:43:33.821757408Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\""
Jan 17 00:43:34.663361 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Jan 17 00:43:34.678711 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 17 00:43:34.960608 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 17 00:43:34.963433 (kubelet)[2182]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 17 00:43:35.033005 kubelet[2182]: E0117 00:43:35.032928 2182 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 17 00:43:35.036045 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 17 00:43:35.036349 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 17 00:43:37.386354 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 17 00:43:37.398698 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 17 00:43:37.432304 systemd[1]: Reloading requested from client PID 2196 ('systemctl') (unit session-11.scope)...
Jan 17 00:43:37.432382 systemd[1]: Reloading...
Jan 17 00:43:37.656104 zram_generator::config[2238]: No configuration found.
Jan 17 00:43:37.807600 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 17 00:43:37.918395 systemd[1]: Reloading finished in 485 ms.
Jan 17 00:43:37.999918 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jan 17 00:43:38.000396 systemd[1]: kubelet.service: Failed with result 'signal'.
Jan 17 00:43:38.000998 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 17 00:43:38.018962 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 17 00:43:38.197348 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 17 00:43:38.209790 (kubelet)[2303]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 17 00:43:38.304663 kubelet[2303]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 17 00:43:38.304663 kubelet[2303]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jan 17 00:43:38.304663 kubelet[2303]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 17 00:43:38.305441 kubelet[2303]: I0117 00:43:38.304768 2303 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 17 00:43:39.233101 kubelet[2303]: I0117 00:43:39.232942 2303 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Jan 17 00:43:39.233101 kubelet[2303]: I0117 00:43:39.233064 2303 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 17 00:43:39.234191 kubelet[2303]: I0117 00:43:39.234100 2303 server.go:954] "Client rotation is on, will bootstrap in background"
Jan 17 00:43:39.271668 kubelet[2303]: E0117 00:43:39.271583 2303 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.243.73.150:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.243.73.150:6443: connect: connection refused" logger="UnhandledError"
Jan 17 00:43:39.273002 kubelet[2303]: I0117 00:43:39.272712 2303 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 17 00:43:39.293550 kubelet[2303]: E0117 00:43:39.293493 2303 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jan 17 00:43:39.293798 kubelet[2303]: I0117 00:43:39.293766 2303 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Jan 17 00:43:39.305878 kubelet[2303]: I0117 00:43:39.305822 2303 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 17 00:43:39.309122 kubelet[2303]: I0117 00:43:39.309009 2303 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 17 00:43:39.309560 kubelet[2303]: I0117 00:43:39.309091 2303 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-jwpu3.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 17 00:43:39.311437 kubelet[2303]: I0117 00:43:39.311395 2303 topology_manager.go:138] "Creating topology manager with none policy"
Jan 17 00:43:39.311437 kubelet[2303]: I0117 00:43:39.311437 2303 container_manager_linux.go:304] "Creating device plugin manager"
Jan 17 00:43:39.312800 kubelet[2303]: I0117 00:43:39.312735 2303 state_mem.go:36] "Initialized new in-memory state store"
Jan 17 00:43:39.332669 kubelet[2303]: I0117 00:43:39.332599 2303 kubelet.go:446] "Attempting to sync node with API server"
Jan 17 00:43:39.332669 kubelet[2303]: I0117 00:43:39.332668 2303 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 17 00:43:39.332858 kubelet[2303]: I0117 00:43:39.332721 2303 kubelet.go:352] "Adding apiserver pod source"
Jan 17 00:43:39.332858 kubelet[2303]: I0117 00:43:39.332764 2303 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 17 00:43:39.340332 kubelet[2303]: W0117 00:43:39.339241 2303 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.243.73.150:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.243.73.150:6443: connect: connection refused
Jan 17 00:43:39.340332 kubelet[2303]: E0117 00:43:39.339360 2303 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.243.73.150:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.243.73.150:6443: connect: connection refused" logger="UnhandledError"
Jan 17 00:43:39.340332 kubelet[2303]: W0117 00:43:39.339479 2303 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.243.73.150:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-jwpu3.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.243.73.150:6443: connect: connection refused
Jan 17 00:43:39.340332 kubelet[2303]: E0117 00:43:39.339531 2303 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.243.73.150:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-jwpu3.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.243.73.150:6443: connect: connection refused" logger="UnhandledError"
Jan 17 00:43:39.340919 kubelet[2303]: I0117 00:43:39.340885 2303 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Jan 17 00:43:39.345122 kubelet[2303]: I0117 00:43:39.344706 2303 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 17 00:43:39.345721 kubelet[2303]: W0117 00:43:39.345553 2303 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jan 17 00:43:39.349328 kubelet[2303]: I0117 00:43:39.348138 2303 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jan 17 00:43:39.349328 kubelet[2303]: I0117 00:43:39.348205 2303 server.go:1287] "Started kubelet"
Jan 17 00:43:39.351673 kubelet[2303]: I0117 00:43:39.351609 2303 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Jan 17 00:43:39.355146 kubelet[2303]: I0117 00:43:39.355040 2303 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 17 00:43:39.356191 kubelet[2303]: I0117 00:43:39.355827 2303 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 17 00:43:39.356766 kubelet[2303]: I0117 00:43:39.356742 2303 server.go:479] "Adding debug handlers to kubelet server"
Jan 17 00:43:39.359811 kubelet[2303]: E0117 00:43:39.356911 2303 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.243.73.150:6443/api/v1/namespaces/default/events\": dial tcp 10.243.73.150:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-jwpu3.gb1.brightbox.com.188b5dfce03b2166 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-jwpu3.gb1.brightbox.com,UID:srv-jwpu3.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-jwpu3.gb1.brightbox.com,},FirstTimestamp:2026-01-17 00:43:39.348164966 +0000 UTC m=+1.131332248,LastTimestamp:2026-01-17 00:43:39.348164966 +0000 UTC m=+1.131332248,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-jwpu3.gb1.brightbox.com,}"
Jan 17 00:43:39.363658 kubelet[2303]: I0117 00:43:39.362550 2303 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 17 00:43:39.365412 kubelet[2303]: I0117 00:43:39.363932 2303 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jan 17 00:43:39.373468 kubelet[2303]: E0117 00:43:39.373442 2303 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-jwpu3.gb1.brightbox.com\" not found"
Jan 17 00:43:39.373626 kubelet[2303]: I0117 00:43:39.373606 2303 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jan 17 00:43:39.377662 kubelet[2303]: I0117 00:43:39.377491 2303 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jan 17 00:43:39.377662 kubelet[2303]: I0117 00:43:39.377623 2303 reconciler.go:26] "Reconciler: start to sync state"
Jan 17 00:43:39.379354 kubelet[2303]: W0117 00:43:39.378024 2303 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.243.73.150:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.243.73.150:6443: connect: connection refused
Jan 17 00:43:39.379354 kubelet[2303]: E0117 00:43:39.378085 2303 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.243.73.150:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.243.73.150:6443: connect: connection refused" logger="UnhandledError"
Jan 17 00:43:39.379354 kubelet[2303]: E0117 00:43:39.378183 2303 controller.go:145] "Failed to
ensure lease exists, will retry" err="Get \"https://10.243.73.150:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-jwpu3.gb1.brightbox.com?timeout=10s\": dial tcp 10.243.73.150:6443: connect: connection refused" interval="200ms" Jan 17 00:43:39.379570 kubelet[2303]: I0117 00:43:39.379456 2303 factory.go:221] Registration of the systemd container factory successfully Jan 17 00:43:39.379612 kubelet[2303]: I0117 00:43:39.379578 2303 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 17 00:43:39.385794 kubelet[2303]: I0117 00:43:39.385761 2303 factory.go:221] Registration of the containerd container factory successfully Jan 17 00:43:39.410634 kubelet[2303]: I0117 00:43:39.410547 2303 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 17 00:43:39.412881 kubelet[2303]: I0117 00:43:39.412348 2303 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 17 00:43:39.412881 kubelet[2303]: I0117 00:43:39.412400 2303 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 17 00:43:39.412881 kubelet[2303]: I0117 00:43:39.412443 2303 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 17 00:43:39.412881 kubelet[2303]: I0117 00:43:39.412475 2303 kubelet.go:2382] "Starting kubelet main sync loop" Jan 17 00:43:39.412881 kubelet[2303]: E0117 00:43:39.412574 2303 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 17 00:43:39.422659 kubelet[2303]: E0117 00:43:39.422599 2303 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 17 00:43:39.424557 kubelet[2303]: W0117 00:43:39.424503 2303 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.243.73.150:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.243.73.150:6443: connect: connection refused Jan 17 00:43:39.424643 kubelet[2303]: E0117 00:43:39.424574 2303 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.243.73.150:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.243.73.150:6443: connect: connection refused" logger="UnhandledError" Jan 17 00:43:39.447107 kubelet[2303]: I0117 00:43:39.447060 2303 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 17 00:43:39.447107 kubelet[2303]: I0117 00:43:39.447086 2303 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 17 00:43:39.447275 kubelet[2303]: I0117 00:43:39.447134 2303 state_mem.go:36] "Initialized new in-memory state store" Jan 17 00:43:39.450055 kubelet[2303]: I0117 00:43:39.450030 2303 policy_none.go:49] "None policy: Start" Jan 17 00:43:39.450115 kubelet[2303]: I0117 00:43:39.450068 2303 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 17 00:43:39.450115 kubelet[2303]: I0117 00:43:39.450097 2303 state_mem.go:35] "Initializing new in-memory state store" Jan 17 00:43:39.464136 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 17 00:43:39.473881 kubelet[2303]: E0117 00:43:39.473841 2303 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-jwpu3.gb1.brightbox.com\" not found" Jan 17 00:43:39.480304 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Jan 17 00:43:39.486007 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 17 00:43:39.499656 kubelet[2303]: I0117 00:43:39.499619 2303 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 17 00:43:39.500335 kubelet[2303]: I0117 00:43:39.499964 2303 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 17 00:43:39.500335 kubelet[2303]: I0117 00:43:39.500011 2303 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 17 00:43:39.500471 kubelet[2303]: I0117 00:43:39.500402 2303 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 17 00:43:39.502274 kubelet[2303]: E0117 00:43:39.502103 2303 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 17 00:43:39.502274 kubelet[2303]: E0117 00:43:39.502200 2303 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"srv-jwpu3.gb1.brightbox.com\" not found" Jan 17 00:43:39.533484 systemd[1]: Created slice kubepods-burstable-podae9fa1a6b093862ebc34c1c63cf388d9.slice - libcontainer container kubepods-burstable-podae9fa1a6b093862ebc34c1c63cf388d9.slice. Jan 17 00:43:39.549878 kubelet[2303]: E0117 00:43:39.549786 2303 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-jwpu3.gb1.brightbox.com\" not found" node="srv-jwpu3.gb1.brightbox.com" Jan 17 00:43:39.556008 systemd[1]: Created slice kubepods-burstable-pod231b10e49bb10aa3532fde9efe6b4d10.slice - libcontainer container kubepods-burstable-pod231b10e49bb10aa3532fde9efe6b4d10.slice. 
Jan 17 00:43:39.561384 kubelet[2303]: E0117 00:43:39.561347 2303 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-jwpu3.gb1.brightbox.com\" not found" node="srv-jwpu3.gb1.brightbox.com" Jan 17 00:43:39.563055 systemd[1]: Created slice kubepods-burstable-pod952893200a94248ca4e318c425630be2.slice - libcontainer container kubepods-burstable-pod952893200a94248ca4e318c425630be2.slice. Jan 17 00:43:39.565713 kubelet[2303]: E0117 00:43:39.565677 2303 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-jwpu3.gb1.brightbox.com\" not found" node="srv-jwpu3.gb1.brightbox.com" Jan 17 00:43:39.579029 kubelet[2303]: E0117 00:43:39.578988 2303 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.243.73.150:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-jwpu3.gb1.brightbox.com?timeout=10s\": dial tcp 10.243.73.150:6443: connect: connection refused" interval="400ms" Jan 17 00:43:39.605525 kubelet[2303]: I0117 00:43:39.605444 2303 kubelet_node_status.go:75] "Attempting to register node" node="srv-jwpu3.gb1.brightbox.com" Jan 17 00:43:39.606120 kubelet[2303]: E0117 00:43:39.606084 2303 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.243.73.150:6443/api/v1/nodes\": dial tcp 10.243.73.150:6443: connect: connection refused" node="srv-jwpu3.gb1.brightbox.com" Jan 17 00:43:39.680451 kubelet[2303]: I0117 00:43:39.680390 2303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/952893200a94248ca4e318c425630be2-kubeconfig\") pod \"kube-controller-manager-srv-jwpu3.gb1.brightbox.com\" (UID: \"952893200a94248ca4e318c425630be2\") " pod="kube-system/kube-controller-manager-srv-jwpu3.gb1.brightbox.com" Jan 17 00:43:39.680451 kubelet[2303]: I0117 00:43:39.680449 2303 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/952893200a94248ca4e318c425630be2-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-jwpu3.gb1.brightbox.com\" (UID: \"952893200a94248ca4e318c425630be2\") " pod="kube-system/kube-controller-manager-srv-jwpu3.gb1.brightbox.com" Jan 17 00:43:39.680708 kubelet[2303]: I0117 00:43:39.680491 2303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ae9fa1a6b093862ebc34c1c63cf388d9-usr-share-ca-certificates\") pod \"kube-apiserver-srv-jwpu3.gb1.brightbox.com\" (UID: \"ae9fa1a6b093862ebc34c1c63cf388d9\") " pod="kube-system/kube-apiserver-srv-jwpu3.gb1.brightbox.com" Jan 17 00:43:39.680708 kubelet[2303]: I0117 00:43:39.680520 2303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/952893200a94248ca4e318c425630be2-ca-certs\") pod \"kube-controller-manager-srv-jwpu3.gb1.brightbox.com\" (UID: \"952893200a94248ca4e318c425630be2\") " pod="kube-system/kube-controller-manager-srv-jwpu3.gb1.brightbox.com" Jan 17 00:43:39.680708 kubelet[2303]: I0117 00:43:39.680558 2303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/952893200a94248ca4e318c425630be2-flexvolume-dir\") pod \"kube-controller-manager-srv-jwpu3.gb1.brightbox.com\" (UID: \"952893200a94248ca4e318c425630be2\") " pod="kube-system/kube-controller-manager-srv-jwpu3.gb1.brightbox.com" Jan 17 00:43:39.680708 kubelet[2303]: I0117 00:43:39.680582 2303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/952893200a94248ca4e318c425630be2-k8s-certs\") pod 
\"kube-controller-manager-srv-jwpu3.gb1.brightbox.com\" (UID: \"952893200a94248ca4e318c425630be2\") " pod="kube-system/kube-controller-manager-srv-jwpu3.gb1.brightbox.com" Jan 17 00:43:39.680708 kubelet[2303]: I0117 00:43:39.680624 2303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/231b10e49bb10aa3532fde9efe6b4d10-kubeconfig\") pod \"kube-scheduler-srv-jwpu3.gb1.brightbox.com\" (UID: \"231b10e49bb10aa3532fde9efe6b4d10\") " pod="kube-system/kube-scheduler-srv-jwpu3.gb1.brightbox.com" Jan 17 00:43:39.680983 kubelet[2303]: I0117 00:43:39.680647 2303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ae9fa1a6b093862ebc34c1c63cf388d9-ca-certs\") pod \"kube-apiserver-srv-jwpu3.gb1.brightbox.com\" (UID: \"ae9fa1a6b093862ebc34c1c63cf388d9\") " pod="kube-system/kube-apiserver-srv-jwpu3.gb1.brightbox.com" Jan 17 00:43:39.680983 kubelet[2303]: I0117 00:43:39.680680 2303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ae9fa1a6b093862ebc34c1c63cf388d9-k8s-certs\") pod \"kube-apiserver-srv-jwpu3.gb1.brightbox.com\" (UID: \"ae9fa1a6b093862ebc34c1c63cf388d9\") " pod="kube-system/kube-apiserver-srv-jwpu3.gb1.brightbox.com" Jan 17 00:43:39.810186 kubelet[2303]: I0117 00:43:39.810012 2303 kubelet_node_status.go:75] "Attempting to register node" node="srv-jwpu3.gb1.brightbox.com" Jan 17 00:43:39.811466 kubelet[2303]: E0117 00:43:39.810513 2303 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.243.73.150:6443/api/v1/nodes\": dial tcp 10.243.73.150:6443: connect: connection refused" node="srv-jwpu3.gb1.brightbox.com" Jan 17 00:43:39.852295 containerd[1518]: time="2026-01-17T00:43:39.852187001Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-srv-jwpu3.gb1.brightbox.com,Uid:ae9fa1a6b093862ebc34c1c63cf388d9,Namespace:kube-system,Attempt:0,}" Jan 17 00:43:39.870522 containerd[1518]: time="2026-01-17T00:43:39.870396604Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-jwpu3.gb1.brightbox.com,Uid:231b10e49bb10aa3532fde9efe6b4d10,Namespace:kube-system,Attempt:0,}" Jan 17 00:43:39.870910 containerd[1518]: time="2026-01-17T00:43:39.870406046Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-jwpu3.gb1.brightbox.com,Uid:952893200a94248ca4e318c425630be2,Namespace:kube-system,Attempt:0,}" Jan 17 00:43:39.979973 kubelet[2303]: E0117 00:43:39.979915 2303 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.243.73.150:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-jwpu3.gb1.brightbox.com?timeout=10s\": dial tcp 10.243.73.150:6443: connect: connection refused" interval="800ms" Jan 17 00:43:40.035110 kubelet[2303]: E0117 00:43:40.034873 2303 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.243.73.150:6443/api/v1/namespaces/default/events\": dial tcp 10.243.73.150:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-jwpu3.gb1.brightbox.com.188b5dfce03b2166 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-jwpu3.gb1.brightbox.com,UID:srv-jwpu3.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-jwpu3.gb1.brightbox.com,},FirstTimestamp:2026-01-17 00:43:39.348164966 +0000 UTC m=+1.131332248,LastTimestamp:2026-01-17 00:43:39.348164966 +0000 UTC m=+1.131332248,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-jwpu3.gb1.brightbox.com,}" Jan 17 00:43:40.215029 
kubelet[2303]: I0117 00:43:40.214515 2303 kubelet_node_status.go:75] "Attempting to register node" node="srv-jwpu3.gb1.brightbox.com" Jan 17 00:43:40.215029 kubelet[2303]: E0117 00:43:40.214957 2303 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.243.73.150:6443/api/v1/nodes\": dial tcp 10.243.73.150:6443: connect: connection refused" node="srv-jwpu3.gb1.brightbox.com" Jan 17 00:43:40.253289 kubelet[2303]: W0117 00:43:40.253092 2303 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.243.73.150:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-jwpu3.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.243.73.150:6443: connect: connection refused Jan 17 00:43:40.253289 kubelet[2303]: E0117 00:43:40.253198 2303 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.243.73.150:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-jwpu3.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.243.73.150:6443: connect: connection refused" logger="UnhandledError" Jan 17 00:43:40.394699 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount470802997.mount: Deactivated successfully. 
Jan 17 00:43:40.403331 containerd[1518]: time="2026-01-17T00:43:40.402525475Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:43:40.404166 containerd[1518]: time="2026-01-17T00:43:40.404106734Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Jan 17 00:43:40.405909 containerd[1518]: time="2026-01-17T00:43:40.405843340Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:43:40.407907 containerd[1518]: time="2026-01-17T00:43:40.407874400Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:43:40.410468 containerd[1518]: time="2026-01-17T00:43:40.410435624Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:43:40.412352 containerd[1518]: time="2026-01-17T00:43:40.411163070Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 17 00:43:40.412352 containerd[1518]: time="2026-01-17T00:43:40.412113209Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:43:40.413484 containerd[1518]: time="2026-01-17T00:43:40.413395941Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 17 00:43:40.414662 
containerd[1518]: time="2026-01-17T00:43:40.414582637Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 562.096991ms" Jan 17 00:43:40.422106 containerd[1518]: time="2026-01-17T00:43:40.422058283Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 551.533655ms" Jan 17 00:43:40.432928 containerd[1518]: time="2026-01-17T00:43:40.432620011Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 561.660143ms" Jan 17 00:43:40.610078 kubelet[2303]: W0117 00:43:40.609901 2303 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.243.73.150:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.243.73.150:6443: connect: connection refused Jan 17 00:43:40.610078 kubelet[2303]: E0117 00:43:40.610008 2303 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.243.73.150:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.243.73.150:6443: connect: connection refused" logger="UnhandledError" Jan 17 00:43:40.717854 containerd[1518]: 
time="2026-01-17T00:43:40.717003056Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:43:40.717854 containerd[1518]: time="2026-01-17T00:43:40.717262366Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:43:40.717854 containerd[1518]: time="2026-01-17T00:43:40.717290050Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:43:40.717854 containerd[1518]: time="2026-01-17T00:43:40.717628868Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:43:40.730335 containerd[1518]: time="2026-01-17T00:43:40.727354836Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:43:40.730335 containerd[1518]: time="2026-01-17T00:43:40.728891831Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:43:40.730335 containerd[1518]: time="2026-01-17T00:43:40.729034042Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:43:40.730335 containerd[1518]: time="2026-01-17T00:43:40.729196569Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:43:40.744871 containerd[1518]: time="2026-01-17T00:43:40.740153675Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:43:40.744871 containerd[1518]: time="2026-01-17T00:43:40.744434393Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:43:40.744871 containerd[1518]: time="2026-01-17T00:43:40.744456618Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:43:40.744871 containerd[1518]: time="2026-01-17T00:43:40.744646079Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:43:40.756137 kubelet[2303]: W0117 00:43:40.756028 2303 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.243.73.150:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.243.73.150:6443: connect: connection refused Jan 17 00:43:40.756327 kubelet[2303]: E0117 00:43:40.756148 2303 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.243.73.150:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.243.73.150:6443: connect: connection refused" logger="UnhandledError" Jan 17 00:43:40.780873 kubelet[2303]: E0117 00:43:40.780803 2303 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.243.73.150:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-jwpu3.gb1.brightbox.com?timeout=10s\": dial tcp 10.243.73.150:6443: connect: connection refused" interval="1.6s" Jan 17 00:43:40.786254 kubelet[2303]: W0117 00:43:40.786185 2303 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.243.73.150:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.243.73.150:6443: connect: connection refused Jan 17 00:43:40.786374 kubelet[2303]: E0117 00:43:40.786257 2303 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch 
*v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.243.73.150:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.243.73.150:6443: connect: connection refused" logger="UnhandledError" Jan 17 00:43:40.812835 systemd[1]: Started cri-containerd-2f8424c770ac1d25ad2b50982d463e81a35f13ae1b97f9ec606980c1231cdc48.scope - libcontainer container 2f8424c770ac1d25ad2b50982d463e81a35f13ae1b97f9ec606980c1231cdc48. Jan 17 00:43:40.843262 systemd[1]: Started cri-containerd-a378189e3269840ee81bcfdd66441ba57e71ed7058b98b76f74fccbfb0a52b94.scope - libcontainer container a378189e3269840ee81bcfdd66441ba57e71ed7058b98b76f74fccbfb0a52b94. Jan 17 00:43:40.866649 systemd[1]: Started cri-containerd-0c8dcae295e3ea8c97f55a7005a058b037b60a5f6b1b516d9b15c9023d1f1fc8.scope - libcontainer container 0c8dcae295e3ea8c97f55a7005a058b037b60a5f6b1b516d9b15c9023d1f1fc8. Jan 17 00:43:40.982724 containerd[1518]: time="2026-01-17T00:43:40.982429790Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-jwpu3.gb1.brightbox.com,Uid:ae9fa1a6b093862ebc34c1c63cf388d9,Namespace:kube-system,Attempt:0,} returns sandbox id \"2f8424c770ac1d25ad2b50982d463e81a35f13ae1b97f9ec606980c1231cdc48\"" Jan 17 00:43:41.017945 containerd[1518]: time="2026-01-17T00:43:41.017141613Z" level=info msg="CreateContainer within sandbox \"2f8424c770ac1d25ad2b50982d463e81a35f13ae1b97f9ec606980c1231cdc48\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 17 00:43:41.021124 kubelet[2303]: I0117 00:43:41.020517 2303 kubelet_node_status.go:75] "Attempting to register node" node="srv-jwpu3.gb1.brightbox.com" Jan 17 00:43:41.021124 kubelet[2303]: E0117 00:43:41.021016 2303 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.243.73.150:6443/api/v1/nodes\": dial tcp 10.243.73.150:6443: connect: connection refused" node="srv-jwpu3.gb1.brightbox.com" Jan 17 00:43:41.038543 containerd[1518]: 
time="2026-01-17T00:43:41.038466634Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-jwpu3.gb1.brightbox.com,Uid:231b10e49bb10aa3532fde9efe6b4d10,Namespace:kube-system,Attempt:0,} returns sandbox id \"a378189e3269840ee81bcfdd66441ba57e71ed7058b98b76f74fccbfb0a52b94\"" Jan 17 00:43:41.038872 containerd[1518]: time="2026-01-17T00:43:41.038536284Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-jwpu3.gb1.brightbox.com,Uid:952893200a94248ca4e318c425630be2,Namespace:kube-system,Attempt:0,} returns sandbox id \"0c8dcae295e3ea8c97f55a7005a058b037b60a5f6b1b516d9b15c9023d1f1fc8\"" Jan 17 00:43:41.042705 containerd[1518]: time="2026-01-17T00:43:41.042544105Z" level=info msg="CreateContainer within sandbox \"2f8424c770ac1d25ad2b50982d463e81a35f13ae1b97f9ec606980c1231cdc48\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"12cc32db6bd1eec16529ab0305dfd6a218a80910186c3f2c00061b0963fa9da7\"" Jan 17 00:43:41.043894 containerd[1518]: time="2026-01-17T00:43:41.043861612Z" level=info msg="StartContainer for \"12cc32db6bd1eec16529ab0305dfd6a218a80910186c3f2c00061b0963fa9da7\"" Jan 17 00:43:41.044973 containerd[1518]: time="2026-01-17T00:43:41.044280254Z" level=info msg="CreateContainer within sandbox \"0c8dcae295e3ea8c97f55a7005a058b037b60a5f6b1b516d9b15c9023d1f1fc8\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 17 00:43:41.045259 containerd[1518]: time="2026-01-17T00:43:41.044591940Z" level=info msg="CreateContainer within sandbox \"a378189e3269840ee81bcfdd66441ba57e71ed7058b98b76f74fccbfb0a52b94\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 17 00:43:41.066438 containerd[1518]: time="2026-01-17T00:43:41.066284220Z" level=info msg="CreateContainer within sandbox \"0c8dcae295e3ea8c97f55a7005a058b037b60a5f6b1b516d9b15c9023d1f1fc8\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id 
\"abb42120dbbe30b1deff5cb35a5fc147f15ce1fd2bcff217e0caf86cbb4dbb7c\"" Jan 17 00:43:41.067745 containerd[1518]: time="2026-01-17T00:43:41.067638685Z" level=info msg="StartContainer for \"abb42120dbbe30b1deff5cb35a5fc147f15ce1fd2bcff217e0caf86cbb4dbb7c\"" Jan 17 00:43:41.076008 containerd[1518]: time="2026-01-17T00:43:41.075742022Z" level=info msg="CreateContainer within sandbox \"a378189e3269840ee81bcfdd66441ba57e71ed7058b98b76f74fccbfb0a52b94\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"48f22ea99a7c006c2f0d6dfd629e18b3e0d7d689553aef849f6467dcb8b2ed54\"" Jan 17 00:43:41.077120 containerd[1518]: time="2026-01-17T00:43:41.077087212Z" level=info msg="StartContainer for \"48f22ea99a7c006c2f0d6dfd629e18b3e0d7d689553aef849f6467dcb8b2ed54\"" Jan 17 00:43:41.102757 systemd[1]: Started cri-containerd-12cc32db6bd1eec16529ab0305dfd6a218a80910186c3f2c00061b0963fa9da7.scope - libcontainer container 12cc32db6bd1eec16529ab0305dfd6a218a80910186c3f2c00061b0963fa9da7. Jan 17 00:43:41.124621 systemd[1]: Started cri-containerd-abb42120dbbe30b1deff5cb35a5fc147f15ce1fd2bcff217e0caf86cbb4dbb7c.scope - libcontainer container abb42120dbbe30b1deff5cb35a5fc147f15ce1fd2bcff217e0caf86cbb4dbb7c. Jan 17 00:43:41.152708 systemd[1]: Started cri-containerd-48f22ea99a7c006c2f0d6dfd629e18b3e0d7d689553aef849f6467dcb8b2ed54.scope - libcontainer container 48f22ea99a7c006c2f0d6dfd629e18b3e0d7d689553aef849f6467dcb8b2ed54. 
Jan 17 00:43:41.228598 containerd[1518]: time="2026-01-17T00:43:41.228538448Z" level=info msg="StartContainer for \"12cc32db6bd1eec16529ab0305dfd6a218a80910186c3f2c00061b0963fa9da7\" returns successfully" Jan 17 00:43:41.256119 containerd[1518]: time="2026-01-17T00:43:41.256067486Z" level=info msg="StartContainer for \"abb42120dbbe30b1deff5cb35a5fc147f15ce1fd2bcff217e0caf86cbb4dbb7c\" returns successfully" Jan 17 00:43:41.303260 containerd[1518]: time="2026-01-17T00:43:41.303192231Z" level=info msg="StartContainer for \"48f22ea99a7c006c2f0d6dfd629e18b3e0d7d689553aef849f6467dcb8b2ed54\" returns successfully" Jan 17 00:43:41.329301 kubelet[2303]: E0117 00:43:41.328558 2303 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.243.73.150:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.243.73.150:6443: connect: connection refused" logger="UnhandledError" Jan 17 00:43:41.453830 kubelet[2303]: E0117 00:43:41.453703 2303 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-jwpu3.gb1.brightbox.com\" not found" node="srv-jwpu3.gb1.brightbox.com" Jan 17 00:43:41.462671 kubelet[2303]: E0117 00:43:41.462640 2303 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-jwpu3.gb1.brightbox.com\" not found" node="srv-jwpu3.gb1.brightbox.com" Jan 17 00:43:41.466100 kubelet[2303]: E0117 00:43:41.466065 2303 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-jwpu3.gb1.brightbox.com\" not found" node="srv-jwpu3.gb1.brightbox.com" Jan 17 00:43:42.469932 kubelet[2303]: E0117 00:43:42.469888 2303 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node 
\"srv-jwpu3.gb1.brightbox.com\" not found" node="srv-jwpu3.gb1.brightbox.com" Jan 17 00:43:42.470633 kubelet[2303]: E0117 00:43:42.470205 2303 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-jwpu3.gb1.brightbox.com\" not found" node="srv-jwpu3.gb1.brightbox.com" Jan 17 00:43:42.625021 kubelet[2303]: I0117 00:43:42.624981 2303 kubelet_node_status.go:75] "Attempting to register node" node="srv-jwpu3.gb1.brightbox.com" Jan 17 00:43:44.859825 kubelet[2303]: E0117 00:43:44.859769 2303 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"srv-jwpu3.gb1.brightbox.com\" not found" node="srv-jwpu3.gb1.brightbox.com" Jan 17 00:43:44.883292 kubelet[2303]: I0117 00:43:44.883148 2303 kubelet_node_status.go:78] "Successfully registered node" node="srv-jwpu3.gb1.brightbox.com" Jan 17 00:43:44.883493 kubelet[2303]: E0117 00:43:44.883302 2303 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"srv-jwpu3.gb1.brightbox.com\": node \"srv-jwpu3.gb1.brightbox.com\" not found" Jan 17 00:43:44.976938 kubelet[2303]: I0117 00:43:44.976422 2303 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-jwpu3.gb1.brightbox.com" Jan 17 00:43:44.983590 kubelet[2303]: E0117 00:43:44.983554 2303 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-jwpu3.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-srv-jwpu3.gb1.brightbox.com" Jan 17 00:43:44.983590 kubelet[2303]: I0117 00:43:44.983589 2303 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-jwpu3.gb1.brightbox.com" Jan 17 00:43:44.985573 kubelet[2303]: E0117 00:43:44.985527 2303 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-srv-jwpu3.gb1.brightbox.com\" is forbidden: no 
PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-srv-jwpu3.gb1.brightbox.com" Jan 17 00:43:44.985573 kubelet[2303]: I0117 00:43:44.985559 2303 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-jwpu3.gb1.brightbox.com" Jan 17 00:43:44.987427 kubelet[2303]: E0117 00:43:44.987386 2303 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-jwpu3.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-srv-jwpu3.gb1.brightbox.com" Jan 17 00:43:45.339055 kubelet[2303]: I0117 00:43:45.339007 2303 apiserver.go:52] "Watching apiserver" Jan 17 00:43:45.379395 kubelet[2303]: I0117 00:43:45.379043 2303 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 17 00:43:45.685466 kubelet[2303]: I0117 00:43:45.684847 2303 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-jwpu3.gb1.brightbox.com" Jan 17 00:43:45.689257 kubelet[2303]: E0117 00:43:45.689222 2303 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-jwpu3.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-srv-jwpu3.gb1.brightbox.com" Jan 17 00:43:47.245811 systemd[1]: Reloading requested from client PID 2583 ('systemctl') (unit session-11.scope)... Jan 17 00:43:47.246324 systemd[1]: Reloading... Jan 17 00:43:47.406373 zram_generator::config[2628]: No configuration found. Jan 17 00:43:47.568671 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 00:43:47.713271 systemd[1]: Reloading finished in 465 ms. Jan 17 00:43:47.785226 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... 
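The three "no PriorityClass with name system-node-critical was found" errors above are a normal bootstrap race: the static control-plane pods reference `system-node-critical`, but the apiserver only installs its default PriorityClasses after it is up, so early mirror-pod creation is rejected and retried. A hypothetical re-creation of that admission decision (not the apiserver's actual code) looks like:

```python
# Illustration of the rejection seen in the log: a pod referencing a
# PriorityClass that does not (yet) exist in the cluster is forbidden.
def admit_mirror_pod(pod_name: str, priority_class: str, known_classes: dict) -> str:
    if priority_class and priority_class not in known_classes:
        return (f'pods "{pod_name}" is forbidden: '
                f'no PriorityClass with name {priority_class} was found')
    return "admitted"
```

Once the apiserver's defaults sync (`system-node-critical`, `system-cluster-critical`), the same request is admitted, which matches the later successful mirror-pod activity in this log.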
Jan 17 00:43:47.806045 systemd[1]: kubelet.service: Deactivated successfully. Jan 17 00:43:47.806478 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:43:47.806571 systemd[1]: kubelet.service: Consumed 1.609s CPU time, 128.6M memory peak, 0B memory swap peak. Jan 17 00:43:47.819602 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:43:48.295632 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:43:48.296135 (kubelet)[2687]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 17 00:43:48.456671 sudo[2698]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 17 00:43:48.457426 sudo[2698]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 17 00:43:48.470585 kubelet[2687]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 00:43:48.470585 kubelet[2687]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 17 00:43:48.470585 kubelet[2687]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
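The kubelet deprecation warnings above all point the same direction: these flags should move into the `KubeletConfiguration` file passed via `--config`. A minimal sketch of a pre-flight check for a kubelet command line, assuming the flag-to-config mapping shown in the comments (the `containerRuntimeEndpoint` and `volumePluginDir` field names are the documented config-file equivalents; treat the mapping as illustrative):

```python
# Flags the kubelet reports as deprecated in the log above, mapped to where
# the value should live instead (KubeletConfiguration field, or the CRI).
DEPRECATED_KUBELET_FLAGS = {
    "--container-runtime-endpoint": "containerRuntimeEndpoint",
    "--pod-infra-container-image": "(removed in 1.35; sandbox image comes from the CRI)",
    "--volume-plugin-dir": "volumePluginDir",
}

def deprecated_flags_in(argv):
    """Return the deprecated flags present in a kubelet command line."""
    names = {arg.split("=", 1)[0] for arg in argv}
    return sorted(flag for flag in DEPRECATED_KUBELET_FLAGS if flag in names)
```

Running this against the unit's `ExecStart` arguments would flag exactly the three options the kubelet warns about here.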
Jan 17 00:43:48.470585 kubelet[2687]: I0117 00:43:48.468046 2687 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 17 00:43:48.488349 kubelet[2687]: I0117 00:43:48.488272 2687 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 17 00:43:48.490330 kubelet[2687]: I0117 00:43:48.488558 2687 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 17 00:43:48.490330 kubelet[2687]: I0117 00:43:48.489708 2687 server.go:954] "Client rotation is on, will bootstrap in background" Jan 17 00:43:48.498893 kubelet[2687]: I0117 00:43:48.497505 2687 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 17 00:43:48.517780 kubelet[2687]: I0117 00:43:48.515925 2687 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 00:43:48.525597 kubelet[2687]: E0117 00:43:48.525541 2687 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 17 00:43:48.525597 kubelet[2687]: I0117 00:43:48.525596 2687 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 17 00:43:48.532864 kubelet[2687]: I0117 00:43:48.532333 2687 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 17 00:43:48.533323 kubelet[2687]: I0117 00:43:48.532961 2687 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 17 00:43:48.533537 kubelet[2687]: I0117 00:43:48.533020 2687 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-jwpu3.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 17 00:43:48.533537 kubelet[2687]: I0117 00:43:48.533438 2687 topology_manager.go:138] "Creating topology manager 
with none policy" Jan 17 00:43:48.533537 kubelet[2687]: I0117 00:43:48.533457 2687 container_manager_linux.go:304] "Creating device plugin manager" Jan 17 00:43:48.533537 kubelet[2687]: I0117 00:43:48.533527 2687 state_mem.go:36] "Initialized new in-memory state store" Jan 17 00:43:48.533830 kubelet[2687]: I0117 00:43:48.533775 2687 kubelet.go:446] "Attempting to sync node with API server" Jan 17 00:43:48.539083 kubelet[2687]: I0117 00:43:48.534654 2687 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 17 00:43:48.539083 kubelet[2687]: I0117 00:43:48.534697 2687 kubelet.go:352] "Adding apiserver pod source" Jan 17 00:43:48.539083 kubelet[2687]: I0117 00:43:48.534714 2687 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 17 00:43:48.539083 kubelet[2687]: I0117 00:43:48.536743 2687 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 17 00:43:48.539083 kubelet[2687]: I0117 00:43:48.537270 2687 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 17 00:43:48.539083 kubelet[2687]: I0117 00:43:48.537872 2687 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 17 00:43:48.539083 kubelet[2687]: I0117 00:43:48.537923 2687 server.go:1287] "Started kubelet" Jan 17 00:43:48.559420 kubelet[2687]: I0117 00:43:48.556215 2687 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 17 00:43:48.572468 kubelet[2687]: I0117 00:43:48.572286 2687 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 17 00:43:48.581177 kubelet[2687]: I0117 00:43:48.580506 2687 server.go:479] "Adding debug handlers to kubelet server" Jan 17 00:43:48.598791 kubelet[2687]: I0117 00:43:48.595153 2687 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 17 00:43:48.604225 kubelet[2687]: I0117 
00:43:48.601103 2687 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 17 00:43:48.605448 kubelet[2687]: E0117 00:43:48.604708 2687 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-jwpu3.gb1.brightbox.com\" not found" Jan 17 00:43:48.609327 kubelet[2687]: I0117 00:43:48.608672 2687 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 17 00:43:48.611185 kubelet[2687]: I0117 00:43:48.609726 2687 reconciler.go:26] "Reconciler: start to sync state" Jan 17 00:43:48.615324 kubelet[2687]: I0117 00:43:48.612284 2687 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 17 00:43:48.615324 kubelet[2687]: I0117 00:43:48.613935 2687 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 17 00:43:48.615324 kubelet[2687]: I0117 00:43:48.613972 2687 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 17 00:43:48.615324 kubelet[2687]: I0117 00:43:48.614443 2687 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
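The Container Manager `nodeConfig` dump above includes the hard eviction thresholds the kubelet will enforce (`memory.available < 100Mi`, `nodefs.available < 10%`, and so on). A trimmed copy of that JSON can be evaluated directly; the evaluation below is a sketch of the threshold semantics, not kubelet's actual eviction-manager code:

```python
import json

# Trimmed from the HardEvictionThresholds logged in nodeConfig above.
node_config = json.loads("""
{"HardEvictionThresholds":[
  {"Signal":"memory.available","Operator":"LessThan",
   "Value":{"Quantity":"100Mi","Percentage":0}},
  {"Signal":"nodefs.available","Operator":"LessThan",
   "Value":{"Quantity":null,"Percentage":0.1}}
]}
""")

MI = 1024 * 1024

def memory_eviction_triggered(available_bytes: int) -> bool:
    """True when memory.available falls below the 100Mi hard threshold."""
    for t in node_config["HardEvictionThresholds"]:
        if t["Signal"] == "memory.available":
            limit = int(t["Value"]["Quantity"].rstrip("Mi")) * MI
            return available_bytes < limit
    return False
```

Percentage-based signals (`nodefs.available`, `imagefs.available`) are evaluated the same way against filesystem capacity rather than an absolute quantity.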
Jan 17 00:43:48.615324 kubelet[2687]: I0117 00:43:48.614463 2687 kubelet.go:2382] "Starting kubelet main sync loop" Jan 17 00:43:48.618908 kubelet[2687]: E0117 00:43:48.616761 2687 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 17 00:43:48.632075 kubelet[2687]: I0117 00:43:48.632025 2687 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 17 00:43:48.638017 kubelet[2687]: I0117 00:43:48.637662 2687 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 17 00:43:48.641479 kubelet[2687]: I0117 00:43:48.640964 2687 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 17 00:43:48.657237 kubelet[2687]: E0117 00:43:48.656909 2687 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 17 00:43:48.660774 kubelet[2687]: I0117 00:43:48.658290 2687 factory.go:221] Registration of the containerd container factory successfully Jan 17 00:43:48.660774 kubelet[2687]: I0117 00:43:48.658330 2687 factory.go:221] Registration of the systemd container factory successfully Jan 17 00:43:48.717857 kubelet[2687]: E0117 00:43:48.717373 2687 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 17 00:43:48.766222 kubelet[2687]: I0117 00:43:48.765026 2687 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 17 00:43:48.766222 kubelet[2687]: I0117 00:43:48.765166 2687 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 17 00:43:48.766222 kubelet[2687]: I0117 00:43:48.765203 2687 state_mem.go:36] "Initialized new in-memory state store" Jan 17 00:43:48.766222 kubelet[2687]: I0117 00:43:48.765755 2687 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 17 00:43:48.766222 kubelet[2687]: I0117 00:43:48.765777 2687 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 17 00:43:48.766222 kubelet[2687]: I0117 00:43:48.765861 2687 policy_none.go:49] "None policy: Start" Jan 17 00:43:48.766222 kubelet[2687]: I0117 00:43:48.765877 2687 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 17 00:43:48.766222 kubelet[2687]: I0117 00:43:48.765938 2687 state_mem.go:35] "Initializing new in-memory state store" Jan 17 00:43:48.766844 kubelet[2687]: I0117 00:43:48.766359 2687 state_mem.go:75] "Updated machine memory state" Jan 17 00:43:48.773282 kubelet[2687]: I0117 00:43:48.773252 2687 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 17 00:43:48.773558 kubelet[2687]: I0117 00:43:48.773526 2687 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 17 00:43:48.773631 kubelet[2687]: I0117 
00:43:48.773552 2687 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 17 00:43:48.776324 kubelet[2687]: I0117 00:43:48.776285 2687 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 17 00:43:48.785831 kubelet[2687]: E0117 00:43:48.784757 2687 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 17 00:43:48.902068 kubelet[2687]: I0117 00:43:48.901553 2687 kubelet_node_status.go:75] "Attempting to register node" node="srv-jwpu3.gb1.brightbox.com" Jan 17 00:43:48.920056 kubelet[2687]: I0117 00:43:48.919728 2687 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-jwpu3.gb1.brightbox.com" Jan 17 00:43:48.921751 kubelet[2687]: I0117 00:43:48.921687 2687 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-jwpu3.gb1.brightbox.com" Jan 17 00:43:48.925328 kubelet[2687]: I0117 00:43:48.921711 2687 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-jwpu3.gb1.brightbox.com" Jan 17 00:43:48.926285 kubelet[2687]: I0117 00:43:48.925746 2687 kubelet_node_status.go:124] "Node was previously registered" node="srv-jwpu3.gb1.brightbox.com" Jan 17 00:43:48.926285 kubelet[2687]: I0117 00:43:48.925878 2687 kubelet_node_status.go:78] "Successfully registered node" node="srv-jwpu3.gb1.brightbox.com" Jan 17 00:43:48.937137 kubelet[2687]: W0117 00:43:48.937091 2687 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 17 00:43:48.940672 kubelet[2687]: W0117 00:43:48.940649 2687 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 17 00:43:48.943643 kubelet[2687]: W0117 
00:43:48.940916 2687 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 17 00:43:49.013299 kubelet[2687]: I0117 00:43:49.013076 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ae9fa1a6b093862ebc34c1c63cf388d9-ca-certs\") pod \"kube-apiserver-srv-jwpu3.gb1.brightbox.com\" (UID: \"ae9fa1a6b093862ebc34c1c63cf388d9\") " pod="kube-system/kube-apiserver-srv-jwpu3.gb1.brightbox.com" Jan 17 00:43:49.014830 kubelet[2687]: I0117 00:43:49.014446 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ae9fa1a6b093862ebc34c1c63cf388d9-k8s-certs\") pod \"kube-apiserver-srv-jwpu3.gb1.brightbox.com\" (UID: \"ae9fa1a6b093862ebc34c1c63cf388d9\") " pod="kube-system/kube-apiserver-srv-jwpu3.gb1.brightbox.com" Jan 17 00:43:49.014830 kubelet[2687]: I0117 00:43:49.014490 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ae9fa1a6b093862ebc34c1c63cf388d9-usr-share-ca-certificates\") pod \"kube-apiserver-srv-jwpu3.gb1.brightbox.com\" (UID: \"ae9fa1a6b093862ebc34c1c63cf388d9\") " pod="kube-system/kube-apiserver-srv-jwpu3.gb1.brightbox.com" Jan 17 00:43:49.014830 kubelet[2687]: I0117 00:43:49.014526 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/952893200a94248ca4e318c425630be2-flexvolume-dir\") pod \"kube-controller-manager-srv-jwpu3.gb1.brightbox.com\" (UID: \"952893200a94248ca4e318c425630be2\") " pod="kube-system/kube-controller-manager-srv-jwpu3.gb1.brightbox.com" Jan 17 00:43:49.014830 kubelet[2687]: I0117 00:43:49.014614 2687 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/952893200a94248ca4e318c425630be2-k8s-certs\") pod \"kube-controller-manager-srv-jwpu3.gb1.brightbox.com\" (UID: \"952893200a94248ca4e318c425630be2\") " pod="kube-system/kube-controller-manager-srv-jwpu3.gb1.brightbox.com" Jan 17 00:43:49.014830 kubelet[2687]: I0117 00:43:49.014649 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/952893200a94248ca4e318c425630be2-kubeconfig\") pod \"kube-controller-manager-srv-jwpu3.gb1.brightbox.com\" (UID: \"952893200a94248ca4e318c425630be2\") " pod="kube-system/kube-controller-manager-srv-jwpu3.gb1.brightbox.com" Jan 17 00:43:49.015137 kubelet[2687]: I0117 00:43:49.014714 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/952893200a94248ca4e318c425630be2-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-jwpu3.gb1.brightbox.com\" (UID: \"952893200a94248ca4e318c425630be2\") " pod="kube-system/kube-controller-manager-srv-jwpu3.gb1.brightbox.com" Jan 17 00:43:49.015137 kubelet[2687]: I0117 00:43:49.014747 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/231b10e49bb10aa3532fde9efe6b4d10-kubeconfig\") pod \"kube-scheduler-srv-jwpu3.gb1.brightbox.com\" (UID: \"231b10e49bb10aa3532fde9efe6b4d10\") " pod="kube-system/kube-scheduler-srv-jwpu3.gb1.brightbox.com" Jan 17 00:43:49.015137 kubelet[2687]: I0117 00:43:49.014789 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/952893200a94248ca4e318c425630be2-ca-certs\") pod \"kube-controller-manager-srv-jwpu3.gb1.brightbox.com\" (UID: 
\"952893200a94248ca4e318c425630be2\") " pod="kube-system/kube-controller-manager-srv-jwpu3.gb1.brightbox.com" Jan 17 00:43:49.410631 sudo[2698]: pam_unix(sudo:session): session closed for user root Jan 17 00:43:49.536567 kubelet[2687]: I0117 00:43:49.536489 2687 apiserver.go:52] "Watching apiserver" Jan 17 00:43:49.610071 kubelet[2687]: I0117 00:43:49.609973 2687 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 17 00:43:49.695772 kubelet[2687]: I0117 00:43:49.695583 2687 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-jwpu3.gb1.brightbox.com" Jan 17 00:43:49.720139 kubelet[2687]: W0117 00:43:49.720018 2687 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 17 00:43:49.720399 kubelet[2687]: E0117 00:43:49.720184 2687 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-srv-jwpu3.gb1.brightbox.com\" already exists" pod="kube-system/kube-controller-manager-srv-jwpu3.gb1.brightbox.com" Jan 17 00:43:49.724683 kubelet[2687]: I0117 00:43:49.724474 2687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-srv-jwpu3.gb1.brightbox.com" podStartSLOduration=1.72434487 podStartE2EDuration="1.72434487s" podCreationTimestamp="2026-01-17 00:43:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:43:49.723428001 +0000 UTC m=+1.405953642" watchObservedRunningTime="2026-01-17 00:43:49.72434487 +0000 UTC m=+1.406870508" Jan 17 00:43:49.750691 kubelet[2687]: I0117 00:43:49.750538 2687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-srv-jwpu3.gb1.brightbox.com" podStartSLOduration=1.750509739 podStartE2EDuration="1.750509739s" 
podCreationTimestamp="2026-01-17 00:43:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:43:49.738715566 +0000 UTC m=+1.421241202" watchObservedRunningTime="2026-01-17 00:43:49.750509739 +0000 UTC m=+1.433035373" Jan 17 00:43:49.751032 kubelet[2687]: I0117 00:43:49.750851 2687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-srv-jwpu3.gb1.brightbox.com" podStartSLOduration=1.750840539 podStartE2EDuration="1.750840539s" podCreationTimestamp="2026-01-17 00:43:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:43:49.74897354 +0000 UTC m=+1.431499184" watchObservedRunningTime="2026-01-17 00:43:49.750840539 +0000 UTC m=+1.433366175" Jan 17 00:43:51.341515 sudo[1762]: pam_unix(sudo:session): session closed for user root Jan 17 00:43:51.434076 sshd[1759]: pam_unix(sshd:session): session closed for user core Jan 17 00:43:51.441027 systemd[1]: sshd@8-10.243.73.150:22-20.161.92.111:56628.service: Deactivated successfully. Jan 17 00:43:51.444262 systemd[1]: session-11.scope: Deactivated successfully. Jan 17 00:43:51.444871 systemd[1]: session-11.scope: Consumed 6.240s CPU time, 143.2M memory peak, 0B memory swap peak. Jan 17 00:43:51.446817 systemd-logind[1490]: Session 11 logged out. Waiting for processes to exit. Jan 17 00:43:51.449993 systemd-logind[1490]: Removed session 11. 
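The repeated `warnings.go` messages earlier ("metadata.name: ... a DNS label is recommended: [must not contain dots]") fire because the static pod names embed the node's fully qualified hostname, `srv-jwpu3.gb1.brightbox.com`, which is not a single RFC 1123 DNS label. A minimal sketch of that validation:

```python
import re

# RFC 1123 DNS label: lowercase alphanumerics and '-', 1-63 chars,
# starting and ending with an alphanumeric. Dots are not allowed,
# which is exactly why the kubelet warns about these pod names.
DNS1123_LABEL = re.compile(r"^[a-z0-9]([-a-z0-9]{0,61}[a-z0-9])?$")

def is_dns1123_label(name: str) -> bool:
    return bool(DNS1123_LABEL.match(name))
```

The warning is cosmetic here (the pods still run), but it explains why dotted node names produce surprising pod hostnames.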
Jan 17 00:43:52.262041 kubelet[2687]: I0117 00:43:52.261903 2687 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 17 00:43:52.265080 kubelet[2687]: I0117 00:43:52.264462 2687 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 17 00:43:52.265165 containerd[1518]: time="2026-01-17T00:43:52.264199769Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 17 00:43:53.190068 systemd[1]: Created slice kubepods-besteffort-podac309147_d447_420f_bece_9159e972bce0.slice - libcontainer container kubepods-besteffort-podac309147_d447_420f_bece_9159e972bce0.slice. Jan 17 00:43:53.233366 systemd[1]: Created slice kubepods-burstable-poda1a5e927_efee_4658_94f3_2f4ca8ae0b07.slice - libcontainer container kubepods-burstable-poda1a5e927_efee_4658_94f3_2f4ca8ae0b07.slice. Jan 17 00:43:53.248160 kubelet[2687]: I0117 00:43:53.248083 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a1a5e927-efee-4658-94f3-2f4ca8ae0b07-lib-modules\") pod \"cilium-dkkm5\" (UID: \"a1a5e927-efee-4658-94f3-2f4ca8ae0b07\") " pod="kube-system/cilium-dkkm5" Jan 17 00:43:53.248160 kubelet[2687]: I0117 00:43:53.248155 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a1a5e927-efee-4658-94f3-2f4ca8ae0b07-xtables-lock\") pod \"cilium-dkkm5\" (UID: \"a1a5e927-efee-4658-94f3-2f4ca8ae0b07\") " pod="kube-system/cilium-dkkm5" Jan 17 00:43:53.248464 kubelet[2687]: I0117 00:43:53.248227 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a1a5e927-efee-4658-94f3-2f4ca8ae0b07-host-proc-sys-kernel\") pod \"cilium-dkkm5\" (UID: 
\"a1a5e927-efee-4658-94f3-2f4ca8ae0b07\") " pod="kube-system/cilium-dkkm5" Jan 17 00:43:53.248464 kubelet[2687]: I0117 00:43:53.248263 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a1a5e927-efee-4658-94f3-2f4ca8ae0b07-clustermesh-secrets\") pod \"cilium-dkkm5\" (UID: \"a1a5e927-efee-4658-94f3-2f4ca8ae0b07\") " pod="kube-system/cilium-dkkm5" Jan 17 00:43:53.248464 kubelet[2687]: I0117 00:43:53.248296 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a1a5e927-efee-4658-94f3-2f4ca8ae0b07-cilium-run\") pod \"cilium-dkkm5\" (UID: \"a1a5e927-efee-4658-94f3-2f4ca8ae0b07\") " pod="kube-system/cilium-dkkm5" Jan 17 00:43:53.248464 kubelet[2687]: I0117 00:43:53.248385 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ac309147-d447-420f-bece-9159e972bce0-kube-proxy\") pod \"kube-proxy-p7xvr\" (UID: \"ac309147-d447-420f-bece-9159e972bce0\") " pod="kube-system/kube-proxy-p7xvr" Jan 17 00:43:53.248464 kubelet[2687]: I0117 00:43:53.248426 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tn6hw\" (UniqueName: \"kubernetes.io/projected/a1a5e927-efee-4658-94f3-2f4ca8ae0b07-kube-api-access-tn6hw\") pod \"cilium-dkkm5\" (UID: \"a1a5e927-efee-4658-94f3-2f4ca8ae0b07\") " pod="kube-system/cilium-dkkm5" Jan 17 00:43:53.248928 kubelet[2687]: I0117 00:43:53.248456 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ac309147-d447-420f-bece-9159e972bce0-lib-modules\") pod \"kube-proxy-p7xvr\" (UID: \"ac309147-d447-420f-bece-9159e972bce0\") " pod="kube-system/kube-proxy-p7xvr" Jan 17 00:43:53.248928 
kubelet[2687]: I0117 00:43:53.248533 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a1a5e927-efee-4658-94f3-2f4ca8ae0b07-hostproc\") pod \"cilium-dkkm5\" (UID: \"a1a5e927-efee-4658-94f3-2f4ca8ae0b07\") " pod="kube-system/cilium-dkkm5" Jan 17 00:43:53.248928 kubelet[2687]: I0117 00:43:53.248564 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a1a5e927-efee-4658-94f3-2f4ca8ae0b07-etc-cni-netd\") pod \"cilium-dkkm5\" (UID: \"a1a5e927-efee-4658-94f3-2f4ca8ae0b07\") " pod="kube-system/cilium-dkkm5" Jan 17 00:43:53.248928 kubelet[2687]: I0117 00:43:53.248593 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ac309147-d447-420f-bece-9159e972bce0-xtables-lock\") pod \"kube-proxy-p7xvr\" (UID: \"ac309147-d447-420f-bece-9159e972bce0\") " pod="kube-system/kube-proxy-p7xvr" Jan 17 00:43:53.248928 kubelet[2687]: I0117 00:43:53.248673 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a1a5e927-efee-4658-94f3-2f4ca8ae0b07-cilium-config-path\") pod \"cilium-dkkm5\" (UID: \"a1a5e927-efee-4658-94f3-2f4ca8ae0b07\") " pod="kube-system/cilium-dkkm5" Jan 17 00:43:53.249251 kubelet[2687]: I0117 00:43:53.248711 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hndpq\" (UniqueName: \"kubernetes.io/projected/ac309147-d447-420f-bece-9159e972bce0-kube-api-access-hndpq\") pod \"kube-proxy-p7xvr\" (UID: \"ac309147-d447-420f-bece-9159e972bce0\") " pod="kube-system/kube-proxy-p7xvr" Jan 17 00:43:53.249251 kubelet[2687]: I0117 00:43:53.248738 2687 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a1a5e927-efee-4658-94f3-2f4ca8ae0b07-bpf-maps\") pod \"cilium-dkkm5\" (UID: \"a1a5e927-efee-4658-94f3-2f4ca8ae0b07\") " pod="kube-system/cilium-dkkm5" Jan 17 00:43:53.249251 kubelet[2687]: I0117 00:43:53.248768 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a1a5e927-efee-4658-94f3-2f4ca8ae0b07-cilium-cgroup\") pod \"cilium-dkkm5\" (UID: \"a1a5e927-efee-4658-94f3-2f4ca8ae0b07\") " pod="kube-system/cilium-dkkm5" Jan 17 00:43:53.249251 kubelet[2687]: I0117 00:43:53.248797 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a1a5e927-efee-4658-94f3-2f4ca8ae0b07-cni-path\") pod \"cilium-dkkm5\" (UID: \"a1a5e927-efee-4658-94f3-2f4ca8ae0b07\") " pod="kube-system/cilium-dkkm5" Jan 17 00:43:53.249251 kubelet[2687]: I0117 00:43:53.248824 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a1a5e927-efee-4658-94f3-2f4ca8ae0b07-host-proc-sys-net\") pod \"cilium-dkkm5\" (UID: \"a1a5e927-efee-4658-94f3-2f4ca8ae0b07\") " pod="kube-system/cilium-dkkm5" Jan 17 00:43:53.249251 kubelet[2687]: I0117 00:43:53.248852 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a1a5e927-efee-4658-94f3-2f4ca8ae0b07-hubble-tls\") pod \"cilium-dkkm5\" (UID: \"a1a5e927-efee-4658-94f3-2f4ca8ae0b07\") " pod="kube-system/cilium-dkkm5" Jan 17 00:43:53.452259 kubelet[2687]: I0117 00:43:53.450785 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/4d615846-f58f-4539-bf98-cb835387934a-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-zvhls\" (UID: \"4d615846-f58f-4539-bf98-cb835387934a\") " pod="kube-system/cilium-operator-6c4d7847fc-zvhls" Jan 17 00:43:53.452259 kubelet[2687]: I0117 00:43:53.450845 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbz88\" (UniqueName: \"kubernetes.io/projected/4d615846-f58f-4539-bf98-cb835387934a-kube-api-access-vbz88\") pod \"cilium-operator-6c4d7847fc-zvhls\" (UID: \"4d615846-f58f-4539-bf98-cb835387934a\") " pod="kube-system/cilium-operator-6c4d7847fc-zvhls" Jan 17 00:43:53.459913 systemd[1]: Created slice kubepods-besteffort-pod4d615846_f58f_4539_bf98_cb835387934a.slice - libcontainer container kubepods-besteffort-pod4d615846_f58f_4539_bf98_cb835387934a.slice. Jan 17 00:43:53.501842 containerd[1518]: time="2026-01-17T00:43:53.501735208Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-p7xvr,Uid:ac309147-d447-420f-bece-9159e972bce0,Namespace:kube-system,Attempt:0,}" Jan 17 00:43:53.540997 containerd[1518]: time="2026-01-17T00:43:53.540452873Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dkkm5,Uid:a1a5e927-efee-4658-94f3-2f4ca8ae0b07,Namespace:kube-system,Attempt:0,}" Jan 17 00:43:53.591853 containerd[1518]: time="2026-01-17T00:43:53.591587119Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:43:53.598182 containerd[1518]: time="2026-01-17T00:43:53.598025148Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:43:53.598298 containerd[1518]: time="2026-01-17T00:43:53.598197011Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:43:53.598647 containerd[1518]: time="2026-01-17T00:43:53.598520493Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:43:53.623752 containerd[1518]: time="2026-01-17T00:43:53.623481220Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:43:53.626686 containerd[1518]: time="2026-01-17T00:43:53.623553070Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:43:53.626686 containerd[1518]: time="2026-01-17T00:43:53.624347153Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:43:53.626686 containerd[1518]: time="2026-01-17T00:43:53.624482021Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:43:53.632603 systemd[1]: Started cri-containerd-2a13c68203e6fe3302bdce2b8429b264b70026f9219539d92fac683025faeb06.scope - libcontainer container 2a13c68203e6fe3302bdce2b8429b264b70026f9219539d92fac683025faeb06. Jan 17 00:43:53.685415 systemd[1]: Started cri-containerd-106db877d3b5f25247225fe3d60f085e9883835a1a670a015b7d3c8cc5294ffc.scope - libcontainer container 106db877d3b5f25247225fe3d60f085e9883835a1a670a015b7d3c8cc5294ffc. 
Jan 17 00:43:53.692186 containerd[1518]: time="2026-01-17T00:43:53.692142635Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-p7xvr,Uid:ac309147-d447-420f-bece-9159e972bce0,Namespace:kube-system,Attempt:0,} returns sandbox id \"2a13c68203e6fe3302bdce2b8429b264b70026f9219539d92fac683025faeb06\"" Jan 17 00:43:53.701074 containerd[1518]: time="2026-01-17T00:43:53.701014731Z" level=info msg="CreateContainer within sandbox \"2a13c68203e6fe3302bdce2b8429b264b70026f9219539d92fac683025faeb06\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 17 00:43:53.744017 containerd[1518]: time="2026-01-17T00:43:53.743435996Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dkkm5,Uid:a1a5e927-efee-4658-94f3-2f4ca8ae0b07,Namespace:kube-system,Attempt:0,} returns sandbox id \"106db877d3b5f25247225fe3d60f085e9883835a1a670a015b7d3c8cc5294ffc\"" Jan 17 00:43:53.749078 containerd[1518]: time="2026-01-17T00:43:53.748672911Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 17 00:43:53.764490 containerd[1518]: time="2026-01-17T00:43:53.764436285Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-zvhls,Uid:4d615846-f58f-4539-bf98-cb835387934a,Namespace:kube-system,Attempt:0,}" Jan 17 00:43:53.765108 containerd[1518]: time="2026-01-17T00:43:53.765064234Z" level=info msg="CreateContainer within sandbox \"2a13c68203e6fe3302bdce2b8429b264b70026f9219539d92fac683025faeb06\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"20f046249f86257ebd2ff0ef39c3f1370486e8f83ef5b7e55ec9b518d5e5b593\"" Jan 17 00:43:53.765956 containerd[1518]: time="2026-01-17T00:43:53.765663650Z" level=info msg="StartContainer for \"20f046249f86257ebd2ff0ef39c3f1370486e8f83ef5b7e55ec9b518d5e5b593\"" Jan 17 00:43:53.812542 systemd[1]: Started cri-containerd-20f046249f86257ebd2ff0ef39c3f1370486e8f83ef5b7e55ec9b518d5e5b593.scope 
- libcontainer container 20f046249f86257ebd2ff0ef39c3f1370486e8f83ef5b7e55ec9b518d5e5b593. Jan 17 00:43:53.824528 containerd[1518]: time="2026-01-17T00:43:53.824401898Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:43:53.826473 containerd[1518]: time="2026-01-17T00:43:53.825813483Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:43:53.826950 containerd[1518]: time="2026-01-17T00:43:53.826276343Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:43:53.826950 containerd[1518]: time="2026-01-17T00:43:53.826412583Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:43:53.864557 systemd[1]: Started cri-containerd-25ee0f5b41c9cb4a65ca78a7105b4db9c56457d5199e7fbe5a13a5acf783ea83.scope - libcontainer container 25ee0f5b41c9cb4a65ca78a7105b4db9c56457d5199e7fbe5a13a5acf783ea83. 
Jan 17 00:43:53.910993 containerd[1518]: time="2026-01-17T00:43:53.909638019Z" level=info msg="StartContainer for \"20f046249f86257ebd2ff0ef39c3f1370486e8f83ef5b7e55ec9b518d5e5b593\" returns successfully" Jan 17 00:43:54.021031 containerd[1518]: time="2026-01-17T00:43:54.020842608Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-zvhls,Uid:4d615846-f58f-4539-bf98-cb835387934a,Namespace:kube-system,Attempt:0,} returns sandbox id \"25ee0f5b41c9cb4a65ca78a7105b4db9c56457d5199e7fbe5a13a5acf783ea83\"" Jan 17 00:43:54.761233 kubelet[2687]: I0117 00:43:54.761098 2687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-p7xvr" podStartSLOduration=1.760963255 podStartE2EDuration="1.760963255s" podCreationTimestamp="2026-01-17 00:43:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:43:54.760147648 +0000 UTC m=+6.442673290" watchObservedRunningTime="2026-01-17 00:43:54.760963255 +0000 UTC m=+6.443488882" Jan 17 00:44:01.333598 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1785841099.mount: Deactivated successfully. 
Jan 17 00:44:04.840636 containerd[1518]: time="2026-01-17T00:44:04.840255438Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:44:04.843628 containerd[1518]: time="2026-01-17T00:44:04.843552234Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jan 17 00:44:04.844467 containerd[1518]: time="2026-01-17T00:44:04.844406414Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:44:04.847780 containerd[1518]: time="2026-01-17T00:44:04.847744474Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 11.099009161s" Jan 17 00:44:04.848163 containerd[1518]: time="2026-01-17T00:44:04.847934367Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jan 17 00:44:04.850955 containerd[1518]: time="2026-01-17T00:44:04.850609730Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 17 00:44:04.854750 containerd[1518]: time="2026-01-17T00:44:04.854665843Z" level=info msg="CreateContainer within sandbox \"106db877d3b5f25247225fe3d60f085e9883835a1a670a015b7d3c8cc5294ffc\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 17 00:44:04.950302 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4105279137.mount: Deactivated successfully. Jan 17 00:44:04.954801 containerd[1518]: time="2026-01-17T00:44:04.954713501Z" level=info msg="CreateContainer within sandbox \"106db877d3b5f25247225fe3d60f085e9883835a1a670a015b7d3c8cc5294ffc\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"6b843a9e8de34ad353189ae4eac894a55170a35fd5beacefcc83de1ed1f77d1d\"" Jan 17 00:44:04.958868 containerd[1518]: time="2026-01-17T00:44:04.955573900Z" level=info msg="StartContainer for \"6b843a9e8de34ad353189ae4eac894a55170a35fd5beacefcc83de1ed1f77d1d\"" Jan 17 00:44:05.090576 systemd[1]: Started cri-containerd-6b843a9e8de34ad353189ae4eac894a55170a35fd5beacefcc83de1ed1f77d1d.scope - libcontainer container 6b843a9e8de34ad353189ae4eac894a55170a35fd5beacefcc83de1ed1f77d1d. Jan 17 00:44:05.139937 containerd[1518]: time="2026-01-17T00:44:05.139736269Z" level=info msg="StartContainer for \"6b843a9e8de34ad353189ae4eac894a55170a35fd5beacefcc83de1ed1f77d1d\" returns successfully" Jan 17 00:44:05.161923 systemd[1]: cri-containerd-6b843a9e8de34ad353189ae4eac894a55170a35fd5beacefcc83de1ed1f77d1d.scope: Deactivated successfully. 
Jan 17 00:44:05.429749 containerd[1518]: time="2026-01-17T00:44:05.421196560Z" level=info msg="shim disconnected" id=6b843a9e8de34ad353189ae4eac894a55170a35fd5beacefcc83de1ed1f77d1d namespace=k8s.io Jan 17 00:44:05.429749 containerd[1518]: time="2026-01-17T00:44:05.429423424Z" level=warning msg="cleaning up after shim disconnected" id=6b843a9e8de34ad353189ae4eac894a55170a35fd5beacefcc83de1ed1f77d1d namespace=k8s.io Jan 17 00:44:05.429749 containerd[1518]: time="2026-01-17T00:44:05.429453718Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:44:05.854504 containerd[1518]: time="2026-01-17T00:44:05.854455353Z" level=info msg="CreateContainer within sandbox \"106db877d3b5f25247225fe3d60f085e9883835a1a670a015b7d3c8cc5294ffc\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 17 00:44:05.875570 containerd[1518]: time="2026-01-17T00:44:05.875513915Z" level=info msg="CreateContainer within sandbox \"106db877d3b5f25247225fe3d60f085e9883835a1a670a015b7d3c8cc5294ffc\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"6ca082a7ac43da605f8b3c7d242799920790ef572f1a559be24f5b443295f9f6\"" Jan 17 00:44:05.877852 containerd[1518]: time="2026-01-17T00:44:05.876702374Z" level=info msg="StartContainer for \"6ca082a7ac43da605f8b3c7d242799920790ef572f1a559be24f5b443295f9f6\"" Jan 17 00:44:05.926594 systemd[1]: Started cri-containerd-6ca082a7ac43da605f8b3c7d242799920790ef572f1a559be24f5b443295f9f6.scope - libcontainer container 6ca082a7ac43da605f8b3c7d242799920790ef572f1a559be24f5b443295f9f6. Jan 17 00:44:05.944627 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6b843a9e8de34ad353189ae4eac894a55170a35fd5beacefcc83de1ed1f77d1d-rootfs.mount: Deactivated successfully. 
Jan 17 00:44:05.980843 containerd[1518]: time="2026-01-17T00:44:05.980643181Z" level=info msg="StartContainer for \"6ca082a7ac43da605f8b3c7d242799920790ef572f1a559be24f5b443295f9f6\" returns successfully" Jan 17 00:44:05.998906 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 17 00:44:06.000046 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 17 00:44:06.000282 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 17 00:44:06.010841 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 17 00:44:06.012056 systemd[1]: cri-containerd-6ca082a7ac43da605f8b3c7d242799920790ef572f1a559be24f5b443295f9f6.scope: Deactivated successfully. Jan 17 00:44:06.047521 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6ca082a7ac43da605f8b3c7d242799920790ef572f1a559be24f5b443295f9f6-rootfs.mount: Deactivated successfully. Jan 17 00:44:06.066149 containerd[1518]: time="2026-01-17T00:44:06.065752425Z" level=info msg="shim disconnected" id=6ca082a7ac43da605f8b3c7d242799920790ef572f1a559be24f5b443295f9f6 namespace=k8s.io Jan 17 00:44:06.066149 containerd[1518]: time="2026-01-17T00:44:06.065860641Z" level=warning msg="cleaning up after shim disconnected" id=6ca082a7ac43da605f8b3c7d242799920790ef572f1a559be24f5b443295f9f6 namespace=k8s.io Jan 17 00:44:06.066149 containerd[1518]: time="2026-01-17T00:44:06.065878241Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:44:06.084967 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 17 00:44:06.862607 containerd[1518]: time="2026-01-17T00:44:06.862526754Z" level=info msg="CreateContainer within sandbox \"106db877d3b5f25247225fe3d60f085e9883835a1a670a015b7d3c8cc5294ffc\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 17 00:44:06.913995 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1698442357.mount: Deactivated successfully. 
Jan 17 00:44:06.943461 containerd[1518]: time="2026-01-17T00:44:06.940244163Z" level=info msg="CreateContainer within sandbox \"106db877d3b5f25247225fe3d60f085e9883835a1a670a015b7d3c8cc5294ffc\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"a43b9272cc9bf7f91a56234bc984f4ffaf6fef1d425cc4e10a83a065be00d482\"" Jan 17 00:44:06.944423 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount947103273.mount: Deactivated successfully. Jan 17 00:44:06.957431 containerd[1518]: time="2026-01-17T00:44:06.957352011Z" level=info msg="StartContainer for \"a43b9272cc9bf7f91a56234bc984f4ffaf6fef1d425cc4e10a83a065be00d482\"" Jan 17 00:44:07.068114 systemd[1]: Started cri-containerd-a43b9272cc9bf7f91a56234bc984f4ffaf6fef1d425cc4e10a83a065be00d482.scope - libcontainer container a43b9272cc9bf7f91a56234bc984f4ffaf6fef1d425cc4e10a83a065be00d482. Jan 17 00:44:07.071532 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2875153620.mount: Deactivated successfully. Jan 17 00:44:07.139511 containerd[1518]: time="2026-01-17T00:44:07.139187932Z" level=info msg="StartContainer for \"a43b9272cc9bf7f91a56234bc984f4ffaf6fef1d425cc4e10a83a065be00d482\" returns successfully" Jan 17 00:44:07.149252 systemd[1]: cri-containerd-a43b9272cc9bf7f91a56234bc984f4ffaf6fef1d425cc4e10a83a065be00d482.scope: Deactivated successfully. 
Jan 17 00:44:07.222163 containerd[1518]: time="2026-01-17T00:44:07.221937987Z" level=info msg="shim disconnected" id=a43b9272cc9bf7f91a56234bc984f4ffaf6fef1d425cc4e10a83a065be00d482 namespace=k8s.io Jan 17 00:44:07.222740 containerd[1518]: time="2026-01-17T00:44:07.222574327Z" level=warning msg="cleaning up after shim disconnected" id=a43b9272cc9bf7f91a56234bc984f4ffaf6fef1d425cc4e10a83a065be00d482 namespace=k8s.io Jan 17 00:44:07.222740 containerd[1518]: time="2026-01-17T00:44:07.222608264Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:44:07.876925 containerd[1518]: time="2026-01-17T00:44:07.876418031Z" level=info msg="CreateContainer within sandbox \"106db877d3b5f25247225fe3d60f085e9883835a1a670a015b7d3c8cc5294ffc\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 17 00:44:07.903265 containerd[1518]: time="2026-01-17T00:44:07.902767948Z" level=info msg="CreateContainer within sandbox \"106db877d3b5f25247225fe3d60f085e9883835a1a670a015b7d3c8cc5294ffc\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"a0408f6ea9477d1021bcfc054462b5692fd6772c8580c60fb0e34e94d42d5de4\"" Jan 17 00:44:07.904202 containerd[1518]: time="2026-01-17T00:44:07.903957664Z" level=info msg="StartContainer for \"a0408f6ea9477d1021bcfc054462b5692fd6772c8580c60fb0e34e94d42d5de4\"" Jan 17 00:44:07.947044 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a43b9272cc9bf7f91a56234bc984f4ffaf6fef1d425cc4e10a83a065be00d482-rootfs.mount: Deactivated successfully. Jan 17 00:44:07.970428 systemd[1]: run-containerd-runc-k8s.io-a0408f6ea9477d1021bcfc054462b5692fd6772c8580c60fb0e34e94d42d5de4-runc.hwRZjt.mount: Deactivated successfully. Jan 17 00:44:07.982581 systemd[1]: Started cri-containerd-a0408f6ea9477d1021bcfc054462b5692fd6772c8580c60fb0e34e94d42d5de4.scope - libcontainer container a0408f6ea9477d1021bcfc054462b5692fd6772c8580c60fb0e34e94d42d5de4. 
Jan 17 00:44:08.000542 containerd[1518]: time="2026-01-17T00:44:08.000456019Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:44:08.002058 containerd[1518]: time="2026-01-17T00:44:08.001830296Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Jan 17 00:44:08.021364 containerd[1518]: time="2026-01-17T00:44:08.021257243Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:44:08.033550 containerd[1518]: time="2026-01-17T00:44:08.032805277Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 3.182140288s" Jan 17 00:44:08.033550 containerd[1518]: time="2026-01-17T00:44:08.032862712Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jan 17 00:44:08.040217 containerd[1518]: time="2026-01-17T00:44:08.039993139Z" level=info msg="CreateContainer within sandbox \"25ee0f5b41c9cb4a65ca78a7105b4db9c56457d5199e7fbe5a13a5acf783ea83\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 17 00:44:08.052602 systemd[1]: cri-containerd-a0408f6ea9477d1021bcfc054462b5692fd6772c8580c60fb0e34e94d42d5de4.scope: 
Deactivated successfully. Jan 17 00:44:08.067967 containerd[1518]: time="2026-01-17T00:44:08.067912613Z" level=info msg="StartContainer for \"a0408f6ea9477d1021bcfc054462b5692fd6772c8580c60fb0e34e94d42d5de4\" returns successfully" Jan 17 00:44:08.073660 containerd[1518]: time="2026-01-17T00:44:08.073606874Z" level=info msg="CreateContainer within sandbox \"25ee0f5b41c9cb4a65ca78a7105b4db9c56457d5199e7fbe5a13a5acf783ea83\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"42cf2012d4820c88b7d1ec26ae58457125390c83e4958c78c161ec521a4490d1\"" Jan 17 00:44:08.076479 containerd[1518]: time="2026-01-17T00:44:08.076441531Z" level=info msg="StartContainer for \"42cf2012d4820c88b7d1ec26ae58457125390c83e4958c78c161ec521a4490d1\"" Jan 17 00:44:08.125937 systemd[1]: Started cri-containerd-42cf2012d4820c88b7d1ec26ae58457125390c83e4958c78c161ec521a4490d1.scope - libcontainer container 42cf2012d4820c88b7d1ec26ae58457125390c83e4958c78c161ec521a4490d1. Jan 17 00:44:08.255479 containerd[1518]: time="2026-01-17T00:44:08.252893955Z" level=info msg="StartContainer for \"42cf2012d4820c88b7d1ec26ae58457125390c83e4958c78c161ec521a4490d1\" returns successfully" Jan 17 00:44:08.256508 containerd[1518]: time="2026-01-17T00:44:08.256244923Z" level=info msg="shim disconnected" id=a0408f6ea9477d1021bcfc054462b5692fd6772c8580c60fb0e34e94d42d5de4 namespace=k8s.io Jan 17 00:44:08.256508 containerd[1518]: time="2026-01-17T00:44:08.256331681Z" level=warning msg="cleaning up after shim disconnected" id=a0408f6ea9477d1021bcfc054462b5692fd6772c8580c60fb0e34e94d42d5de4 namespace=k8s.io Jan 17 00:44:08.256508 containerd[1518]: time="2026-01-17T00:44:08.256349460Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:44:08.885469 containerd[1518]: time="2026-01-17T00:44:08.885385076Z" level=info msg="CreateContainer within sandbox \"106db877d3b5f25247225fe3d60f085e9883835a1a670a015b7d3c8cc5294ffc\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 
17 00:44:08.930089 containerd[1518]: time="2026-01-17T00:44:08.930015416Z" level=info msg="CreateContainer within sandbox \"106db877d3b5f25247225fe3d60f085e9883835a1a670a015b7d3c8cc5294ffc\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"67a4887adc2d7e5b4afe805eab78999841fe7b795a160aa5f9f28e48216da164\"" Jan 17 00:44:08.931590 containerd[1518]: time="2026-01-17T00:44:08.931553922Z" level=info msg="StartContainer for \"67a4887adc2d7e5b4afe805eab78999841fe7b795a160aa5f9f28e48216da164\"" Jan 17 00:44:08.951629 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a0408f6ea9477d1021bcfc054462b5692fd6772c8580c60fb0e34e94d42d5de4-rootfs.mount: Deactivated successfully. Jan 17 00:44:09.034135 systemd[1]: Started cri-containerd-67a4887adc2d7e5b4afe805eab78999841fe7b795a160aa5f9f28e48216da164.scope - libcontainer container 67a4887adc2d7e5b4afe805eab78999841fe7b795a160aa5f9f28e48216da164. Jan 17 00:44:09.181941 containerd[1518]: time="2026-01-17T00:44:09.181277211Z" level=info msg="StartContainer for \"67a4887adc2d7e5b4afe805eab78999841fe7b795a160aa5f9f28e48216da164\" returns successfully" Jan 17 00:44:09.248870 systemd[1]: run-containerd-runc-k8s.io-67a4887adc2d7e5b4afe805eab78999841fe7b795a160aa5f9f28e48216da164-runc.WOCGIH.mount: Deactivated successfully. 
Jan 17 00:44:09.284878 kubelet[2687]: I0117 00:44:09.284730 2687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-zvhls" podStartSLOduration=2.27201555 podStartE2EDuration="16.283338229s" podCreationTimestamp="2026-01-17 00:43:53 +0000 UTC" firstStartedPulling="2026-01-17 00:43:54.025975442 +0000 UTC m=+5.708501063" lastFinishedPulling="2026-01-17 00:44:08.037298119 +0000 UTC m=+19.719823742" observedRunningTime="2026-01-17 00:44:09.102960232 +0000 UTC m=+20.785485877" watchObservedRunningTime="2026-01-17 00:44:09.283338229 +0000 UTC m=+20.965863866" Jan 17 00:44:09.633339 kubelet[2687]: I0117 00:44:09.631684 2687 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 17 00:44:09.846915 systemd[1]: Created slice kubepods-burstable-podc9f0f712_6574_4ec1_96f2_9aaea27d05c3.slice - libcontainer container kubepods-burstable-podc9f0f712_6574_4ec1_96f2_9aaea27d05c3.slice. Jan 17 00:44:09.865684 kubelet[2687]: I0117 00:44:09.865562 2687 status_manager.go:890] "Failed to get status for pod" podUID="c9f0f712-6574-4ec1-96f2-9aaea27d05c3" pod="kube-system/coredns-668d6bf9bc-n8gwq" err="pods \"coredns-668d6bf9bc-n8gwq\" is forbidden: User \"system:node:srv-jwpu3.gb1.brightbox.com\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'srv-jwpu3.gb1.brightbox.com' and this object" Jan 17 00:44:09.866483 kubelet[2687]: W0117 00:44:09.865906 2687 reflector.go:569] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:srv-jwpu3.gb1.brightbox.com" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'srv-jwpu3.gb1.brightbox.com' and this object Jan 17 00:44:09.868871 kubelet[2687]: E0117 00:44:09.867722 2687 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch 
*v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:srv-jwpu3.gb1.brightbox.com\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'srv-jwpu3.gb1.brightbox.com' and this object" logger="UnhandledError" Jan 17 00:44:09.894086 systemd[1]: Created slice kubepods-burstable-pod4e9ca063_1f2c_4cec_8914_a06c92c4ca31.slice - libcontainer container kubepods-burstable-pod4e9ca063_1f2c_4cec_8914_a06c92c4ca31.slice. Jan 17 00:44:10.003342 kubelet[2687]: I0117 00:44:10.000894 2687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-dkkm5" podStartSLOduration=5.897896509 podStartE2EDuration="17.00086472s" podCreationTimestamp="2026-01-17 00:43:53 +0000 UTC" firstStartedPulling="2026-01-17 00:43:53.74694762 +0000 UTC m=+5.429473245" lastFinishedPulling="2026-01-17 00:44:04.849915822 +0000 UTC m=+16.532441456" observedRunningTime="2026-01-17 00:44:09.998203108 +0000 UTC m=+21.680728742" watchObservedRunningTime="2026-01-17 00:44:10.00086472 +0000 UTC m=+21.683390350" Jan 17 00:44:10.008348 kubelet[2687]: I0117 00:44:10.007050 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c9f0f712-6574-4ec1-96f2-9aaea27d05c3-config-volume\") pod \"coredns-668d6bf9bc-n8gwq\" (UID: \"c9f0f712-6574-4ec1-96f2-9aaea27d05c3\") " pod="kube-system/coredns-668d6bf9bc-n8gwq" Jan 17 00:44:10.008348 kubelet[2687]: I0117 00:44:10.007151 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7r4ff\" (UniqueName: \"kubernetes.io/projected/4e9ca063-1f2c-4cec-8914-a06c92c4ca31-kube-api-access-7r4ff\") pod \"coredns-668d6bf9bc-nmcps\" (UID: \"4e9ca063-1f2c-4cec-8914-a06c92c4ca31\") " pod="kube-system/coredns-668d6bf9bc-nmcps" Jan 17 00:44:10.008348 kubelet[2687]: I0117 00:44:10.007263 
2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4e9ca063-1f2c-4cec-8914-a06c92c4ca31-config-volume\") pod \"coredns-668d6bf9bc-nmcps\" (UID: \"4e9ca063-1f2c-4cec-8914-a06c92c4ca31\") " pod="kube-system/coredns-668d6bf9bc-nmcps"
Jan 17 00:44:10.008348 kubelet[2687]: I0117 00:44:10.007324 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gplqv\" (UniqueName: \"kubernetes.io/projected/c9f0f712-6574-4ec1-96f2-9aaea27d05c3-kube-api-access-gplqv\") pod \"coredns-668d6bf9bc-n8gwq\" (UID: \"c9f0f712-6574-4ec1-96f2-9aaea27d05c3\") " pod="kube-system/coredns-668d6bf9bc-n8gwq"
Jan 17 00:44:11.056953 containerd[1518]: time="2026-01-17T00:44:11.056878393Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-n8gwq,Uid:c9f0f712-6574-4ec1-96f2-9aaea27d05c3,Namespace:kube-system,Attempt:0,}"
Jan 17 00:44:11.105896 containerd[1518]: time="2026-01-17T00:44:11.105370283Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-nmcps,Uid:4e9ca063-1f2c-4cec-8914-a06c92c4ca31,Namespace:kube-system,Attempt:0,}"
Jan 17 00:44:12.318083 systemd-networkd[1438]: cilium_host: Link UP
Jan 17 00:44:12.318991 systemd-networkd[1438]: cilium_net: Link UP
Jan 17 00:44:12.323542 systemd-networkd[1438]: cilium_net: Gained carrier
Jan 17 00:44:12.323860 systemd-networkd[1438]: cilium_host: Gained carrier
Jan 17 00:44:12.324110 systemd-networkd[1438]: cilium_net: Gained IPv6LL
Jan 17 00:44:12.327857 systemd-networkd[1438]: cilium_host: Gained IPv6LL
Jan 17 00:44:12.495235 systemd-networkd[1438]: cilium_vxlan: Link UP
Jan 17 00:44:12.495248 systemd-networkd[1438]: cilium_vxlan: Gained carrier
Jan 17 00:44:13.063396 kernel: NET: Registered PF_ALG protocol family
Jan 17 00:44:13.682110 systemd-networkd[1438]: cilium_vxlan: Gained IPv6LL
Jan 17 00:44:14.130409 systemd-networkd[1438]: lxc_health: Link UP
Jan 17 00:44:14.143967 systemd-networkd[1438]: lxc_health: Gained carrier
Jan 17 00:44:14.705102 systemd-networkd[1438]: lxcb4a98c7649d7: Link UP
Jan 17 00:44:14.715019 kernel: eth0: renamed from tmp48895
Jan 17 00:44:14.754015 systemd-networkd[1438]: lxc22c5a9a94093: Link UP
Jan 17 00:44:14.774800 kernel: eth0: renamed from tmp7ee49
Jan 17 00:44:14.779210 systemd-networkd[1438]: lxcb4a98c7649d7: Gained carrier
Jan 17 00:44:14.790923 systemd-networkd[1438]: lxc22c5a9a94093: Gained carrier
Jan 17 00:44:15.664663 systemd-networkd[1438]: lxc_health: Gained IPv6LL
Jan 17 00:44:16.176632 systemd-networkd[1438]: lxc22c5a9a94093: Gained IPv6LL
Jan 17 00:44:16.240690 systemd-networkd[1438]: lxcb4a98c7649d7: Gained IPv6LL
Jan 17 00:44:20.701431 containerd[1518]: time="2026-01-17T00:44:20.698853542Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 17 00:44:20.701431 containerd[1518]: time="2026-01-17T00:44:20.699104516Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 17 00:44:20.701431 containerd[1518]: time="2026-01-17T00:44:20.699747252Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 00:44:20.701431 containerd[1518]: time="2026-01-17T00:44:20.700128354Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 00:44:20.728912 containerd[1518]: time="2026-01-17T00:44:20.728733474Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 17 00:44:20.728912 containerd[1518]: time="2026-01-17T00:44:20.728839390Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 17 00:44:20.728912 containerd[1518]: time="2026-01-17T00:44:20.728877247Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 00:44:20.731663 containerd[1518]: time="2026-01-17T00:44:20.730945873Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 00:44:20.808699 systemd[1]: Started cri-containerd-4889582032ac15d182d3f490cae93ed409d50ec83fff1ecfd59762a1c51b9fe3.scope - libcontainer container 4889582032ac15d182d3f490cae93ed409d50ec83fff1ecfd59762a1c51b9fe3.
Jan 17 00:44:20.812326 systemd[1]: Started cri-containerd-7ee498d6a9db2d00cd7fa5750426d1f7c1f74bd6f743abd7301db2faed150d37.scope - libcontainer container 7ee498d6a9db2d00cd7fa5750426d1f7c1f74bd6f743abd7301db2faed150d37.
Jan 17 00:44:20.928723 containerd[1518]: time="2026-01-17T00:44:20.928624024Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-nmcps,Uid:4e9ca063-1f2c-4cec-8914-a06c92c4ca31,Namespace:kube-system,Attempt:0,} returns sandbox id \"7ee498d6a9db2d00cd7fa5750426d1f7c1f74bd6f743abd7301db2faed150d37\""
Jan 17 00:44:20.939238 containerd[1518]: time="2026-01-17T00:44:20.939177677Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-n8gwq,Uid:c9f0f712-6574-4ec1-96f2-9aaea27d05c3,Namespace:kube-system,Attempt:0,} returns sandbox id \"4889582032ac15d182d3f490cae93ed409d50ec83fff1ecfd59762a1c51b9fe3\""
Jan 17 00:44:20.942663 containerd[1518]: time="2026-01-17T00:44:20.942036574Z" level=info msg="CreateContainer within sandbox \"7ee498d6a9db2d00cd7fa5750426d1f7c1f74bd6f743abd7301db2faed150d37\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jan 17 00:44:20.950778 containerd[1518]: time="2026-01-17T00:44:20.950283945Z" level=info msg="CreateContainer within sandbox \"4889582032ac15d182d3f490cae93ed409d50ec83fff1ecfd59762a1c51b9fe3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jan 17 00:44:21.003226 containerd[1518]: time="2026-01-17T00:44:21.002934649Z" level=info msg="CreateContainer within sandbox \"4889582032ac15d182d3f490cae93ed409d50ec83fff1ecfd59762a1c51b9fe3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1fac20c974109e2e7d17b821e78cd02ba3bf39de626d937465814319814f6525\""
Jan 17 00:44:21.005280 containerd[1518]: time="2026-01-17T00:44:21.005097285Z" level=info msg="StartContainer for \"1fac20c974109e2e7d17b821e78cd02ba3bf39de626d937465814319814f6525\""
Jan 17 00:44:21.019337 containerd[1518]: time="2026-01-17T00:44:21.018629187Z" level=info msg="CreateContainer within sandbox \"7ee498d6a9db2d00cd7fa5750426d1f7c1f74bd6f743abd7301db2faed150d37\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"de973c8145f1cf36b76d28760ea662141bc98fca04277e1597ae720d11b0ca6c\""
Jan 17 00:44:21.026924 containerd[1518]: time="2026-01-17T00:44:21.026829350Z" level=info msg="StartContainer for \"de973c8145f1cf36b76d28760ea662141bc98fca04277e1597ae720d11b0ca6c\""
Jan 17 00:44:21.065808 systemd[1]: Started cri-containerd-1fac20c974109e2e7d17b821e78cd02ba3bf39de626d937465814319814f6525.scope - libcontainer container 1fac20c974109e2e7d17b821e78cd02ba3bf39de626d937465814319814f6525.
Jan 17 00:44:21.085535 systemd[1]: Started cri-containerd-de973c8145f1cf36b76d28760ea662141bc98fca04277e1597ae720d11b0ca6c.scope - libcontainer container de973c8145f1cf36b76d28760ea662141bc98fca04277e1597ae720d11b0ca6c.
Jan 17 00:44:21.144170 containerd[1518]: time="2026-01-17T00:44:21.144094612Z" level=info msg="StartContainer for \"de973c8145f1cf36b76d28760ea662141bc98fca04277e1597ae720d11b0ca6c\" returns successfully"
Jan 17 00:44:21.144634 containerd[1518]: time="2026-01-17T00:44:21.144600824Z" level=info msg="StartContainer for \"1fac20c974109e2e7d17b821e78cd02ba3bf39de626d937465814319814f6525\" returns successfully"
Jan 17 00:44:21.712244 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2553434378.mount: Deactivated successfully.
Jan 17 00:44:21.976453 kubelet[2687]: I0117 00:44:21.974037 2687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-nmcps" podStartSLOduration=28.97395908 podStartE2EDuration="28.97395908s" podCreationTimestamp="2026-01-17 00:43:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:44:21.972068026 +0000 UTC m=+33.654593662" watchObservedRunningTime="2026-01-17 00:44:21.97395908 +0000 UTC m=+33.656484709"
Jan 17 00:44:22.003352 kubelet[2687]: I0117 00:44:22.002642 2687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-n8gwq" podStartSLOduration=29.002618953 podStartE2EDuration="29.002618953s" podCreationTimestamp="2026-01-17 00:43:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:44:21.999565391 +0000 UTC m=+33.682091037" watchObservedRunningTime="2026-01-17 00:44:22.002618953 +0000 UTC m=+33.685144591"
Jan 17 00:44:54.511176 systemd[1]: Started sshd@9-10.243.73.150:22-20.161.92.111:47322.service - OpenSSH per-connection server daemon (20.161.92.111:47322).
Jan 17 00:44:55.128356 sshd[4065]: Accepted publickey for core from 20.161.92.111 port 47322 ssh2: RSA SHA256:e7YTQZHggQ0j4O1p7twKFyXfxguBGEIbATr9At9uxuc
Jan 17 00:44:55.131100 sshd[4065]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:44:55.141608 systemd-logind[1490]: New session 12 of user core.
Jan 17 00:44:55.149620 systemd[1]: Started session-12.scope - Session 12 of User core.
Jan 17 00:44:56.116989 sshd[4065]: pam_unix(sshd:session): session closed for user core
Jan 17 00:44:56.122757 systemd-logind[1490]: Session 12 logged out. Waiting for processes to exit.
Jan 17 00:44:56.123325 systemd[1]: sshd@9-10.243.73.150:22-20.161.92.111:47322.service: Deactivated successfully.
Jan 17 00:44:56.127060 systemd[1]: session-12.scope: Deactivated successfully.
Jan 17 00:44:56.129486 systemd-logind[1490]: Removed session 12.
Jan 17 00:45:01.226926 systemd[1]: Started sshd@10-10.243.73.150:22-20.161.92.111:47324.service - OpenSSH per-connection server daemon (20.161.92.111:47324).
Jan 17 00:45:01.802230 sshd[4079]: Accepted publickey for core from 20.161.92.111 port 47324 ssh2: RSA SHA256:e7YTQZHggQ0j4O1p7twKFyXfxguBGEIbATr9At9uxuc
Jan 17 00:45:01.804749 sshd[4079]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:45:01.813270 systemd-logind[1490]: New session 13 of user core.
Jan 17 00:45:01.818603 systemd[1]: Started session-13.scope - Session 13 of User core.
Jan 17 00:45:02.311544 sshd[4079]: pam_unix(sshd:session): session closed for user core
Jan 17 00:45:02.321019 systemd[1]: sshd@10-10.243.73.150:22-20.161.92.111:47324.service: Deactivated successfully.
Jan 17 00:45:02.324687 systemd[1]: session-13.scope: Deactivated successfully.
Jan 17 00:45:02.326891 systemd-logind[1490]: Session 13 logged out. Waiting for processes to exit.
Jan 17 00:45:02.328897 systemd-logind[1490]: Removed session 13.
Jan 17 00:45:07.418702 systemd[1]: Started sshd@11-10.243.73.150:22-20.161.92.111:45684.service - OpenSSH per-connection server daemon (20.161.92.111:45684).
Jan 17 00:45:07.995935 sshd[4093]: Accepted publickey for core from 20.161.92.111 port 45684 ssh2: RSA SHA256:e7YTQZHggQ0j4O1p7twKFyXfxguBGEIbATr9At9uxuc
Jan 17 00:45:07.998208 sshd[4093]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:45:08.005843 systemd-logind[1490]: New session 14 of user core.
Jan 17 00:45:08.017617 systemd[1]: Started session-14.scope - Session 14 of User core.
Jan 17 00:45:08.492393 sshd[4093]: pam_unix(sshd:session): session closed for user core
Jan 17 00:45:08.498490 systemd[1]: sshd@11-10.243.73.150:22-20.161.92.111:45684.service: Deactivated successfully.
Jan 17 00:45:08.498638 systemd-logind[1490]: Session 14 logged out. Waiting for processes to exit.
Jan 17 00:45:08.502487 systemd[1]: session-14.scope: Deactivated successfully.
Jan 17 00:45:08.504809 systemd-logind[1490]: Removed session 14.
Jan 17 00:45:13.602177 systemd[1]: Started sshd@12-10.243.73.150:22-20.161.92.111:48850.service - OpenSSH per-connection server daemon (20.161.92.111:48850).
Jan 17 00:45:14.197340 sshd[4106]: Accepted publickey for core from 20.161.92.111 port 48850 ssh2: RSA SHA256:e7YTQZHggQ0j4O1p7twKFyXfxguBGEIbATr9At9uxuc
Jan 17 00:45:14.199749 sshd[4106]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:45:14.206739 systemd-logind[1490]: New session 15 of user core.
Jan 17 00:45:14.215685 systemd[1]: Started session-15.scope - Session 15 of User core.
Jan 17 00:45:14.705673 sshd[4106]: pam_unix(sshd:session): session closed for user core
Jan 17 00:45:14.711932 systemd-logind[1490]: Session 15 logged out. Waiting for processes to exit.
Jan 17 00:45:14.713255 systemd[1]: sshd@12-10.243.73.150:22-20.161.92.111:48850.service: Deactivated successfully.
Jan 17 00:45:14.718628 systemd[1]: session-15.scope: Deactivated successfully.
Jan 17 00:45:14.720594 systemd-logind[1490]: Removed session 15.
Jan 17 00:45:14.808894 systemd[1]: Started sshd@13-10.243.73.150:22-20.161.92.111:48864.service - OpenSSH per-connection server daemon (20.161.92.111:48864).
Jan 17 00:45:15.379901 sshd[4119]: Accepted publickey for core from 20.161.92.111 port 48864 ssh2: RSA SHA256:e7YTQZHggQ0j4O1p7twKFyXfxguBGEIbATr9At9uxuc
Jan 17 00:45:15.382924 sshd[4119]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:45:15.390292 systemd-logind[1490]: New session 16 of user core.
Jan 17 00:45:15.397569 systemd[1]: Started session-16.scope - Session 16 of User core.
Jan 17 00:45:15.951294 sshd[4119]: pam_unix(sshd:session): session closed for user core
Jan 17 00:45:15.956502 systemd[1]: sshd@13-10.243.73.150:22-20.161.92.111:48864.service: Deactivated successfully.
Jan 17 00:45:15.959205 systemd[1]: session-16.scope: Deactivated successfully.
Jan 17 00:45:15.960385 systemd-logind[1490]: Session 16 logged out. Waiting for processes to exit.
Jan 17 00:45:15.962209 systemd-logind[1490]: Removed session 16.
Jan 17 00:45:16.059743 systemd[1]: Started sshd@14-10.243.73.150:22-20.161.92.111:48876.service - OpenSSH per-connection server daemon (20.161.92.111:48876).
Jan 17 00:45:16.637179 sshd[4129]: Accepted publickey for core from 20.161.92.111 port 48876 ssh2: RSA SHA256:e7YTQZHggQ0j4O1p7twKFyXfxguBGEIbATr9At9uxuc
Jan 17 00:45:16.639660 sshd[4129]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:45:16.647107 systemd-logind[1490]: New session 17 of user core.
Jan 17 00:45:16.654511 systemd[1]: Started session-17.scope - Session 17 of User core.
Jan 17 00:45:17.138464 sshd[4129]: pam_unix(sshd:session): session closed for user core
Jan 17 00:45:17.144883 systemd[1]: sshd@14-10.243.73.150:22-20.161.92.111:48876.service: Deactivated successfully.
Jan 17 00:45:17.147303 systemd[1]: session-17.scope: Deactivated successfully.
Jan 17 00:45:17.148464 systemd-logind[1490]: Session 17 logged out. Waiting for processes to exit.
Jan 17 00:45:17.150145 systemd-logind[1490]: Removed session 17.
Jan 17 00:45:22.249858 systemd[1]: Started sshd@15-10.243.73.150:22-20.161.92.111:48886.service - OpenSSH per-connection server daemon (20.161.92.111:48886).
Jan 17 00:45:22.823388 sshd[4141]: Accepted publickey for core from 20.161.92.111 port 48886 ssh2: RSA SHA256:e7YTQZHggQ0j4O1p7twKFyXfxguBGEIbATr9At9uxuc
Jan 17 00:45:22.825804 sshd[4141]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:45:22.834087 systemd-logind[1490]: New session 18 of user core.
Jan 17 00:45:22.846617 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 17 00:45:23.309425 sshd[4141]: pam_unix(sshd:session): session closed for user core
Jan 17 00:45:23.314076 systemd-logind[1490]: Session 18 logged out. Waiting for processes to exit.
Jan 17 00:45:23.314494 systemd[1]: sshd@15-10.243.73.150:22-20.161.92.111:48886.service: Deactivated successfully.
Jan 17 00:45:23.318504 systemd[1]: session-18.scope: Deactivated successfully.
Jan 17 00:45:23.320983 systemd-logind[1490]: Removed session 18.
Jan 17 00:45:28.422647 systemd[1]: Started sshd@16-10.243.73.150:22-20.161.92.111:52716.service - OpenSSH per-connection server daemon (20.161.92.111:52716).
Jan 17 00:45:28.993141 sshd[4155]: Accepted publickey for core from 20.161.92.111 port 52716 ssh2: RSA SHA256:e7YTQZHggQ0j4O1p7twKFyXfxguBGEIbATr9At9uxuc
Jan 17 00:45:28.995730 sshd[4155]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:45:29.004400 systemd-logind[1490]: New session 19 of user core.
Jan 17 00:45:29.014669 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 17 00:45:29.495075 sshd[4155]: pam_unix(sshd:session): session closed for user core
Jan 17 00:45:29.501293 systemd[1]: sshd@16-10.243.73.150:22-20.161.92.111:52716.service: Deactivated successfully.
Jan 17 00:45:29.504279 systemd[1]: session-19.scope: Deactivated successfully.
Jan 17 00:45:29.506458 systemd-logind[1490]: Session 19 logged out. Waiting for processes to exit.
Jan 17 00:45:29.509236 systemd-logind[1490]: Removed session 19.
Jan 17 00:45:29.602647 systemd[1]: Started sshd@17-10.243.73.150:22-20.161.92.111:52720.service - OpenSSH per-connection server daemon (20.161.92.111:52720).
Jan 17 00:45:30.176164 sshd[4168]: Accepted publickey for core from 20.161.92.111 port 52720 ssh2: RSA SHA256:e7YTQZHggQ0j4O1p7twKFyXfxguBGEIbATr9At9uxuc
Jan 17 00:45:30.178529 sshd[4168]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:45:30.186567 systemd-logind[1490]: New session 20 of user core.
Jan 17 00:45:30.191517 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 17 00:45:31.073172 sshd[4168]: pam_unix(sshd:session): session closed for user core
Jan 17 00:45:31.078618 systemd-logind[1490]: Session 20 logged out. Waiting for processes to exit.
Jan 17 00:45:31.079652 systemd[1]: sshd@17-10.243.73.150:22-20.161.92.111:52720.service: Deactivated successfully.
Jan 17 00:45:31.084017 systemd[1]: session-20.scope: Deactivated successfully.
Jan 17 00:45:31.087445 systemd-logind[1490]: Removed session 20.
Jan 17 00:45:31.180764 systemd[1]: Started sshd@18-10.243.73.150:22-20.161.92.111:52730.service - OpenSSH per-connection server daemon (20.161.92.111:52730).
Jan 17 00:45:31.766225 sshd[4180]: Accepted publickey for core from 20.161.92.111 port 52730 ssh2: RSA SHA256:e7YTQZHggQ0j4O1p7twKFyXfxguBGEIbATr9At9uxuc
Jan 17 00:45:31.767185 sshd[4180]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:45:31.774028 systemd-logind[1490]: New session 21 of user core.
Jan 17 00:45:31.783627 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 17 00:45:33.195420 sshd[4180]: pam_unix(sshd:session): session closed for user core
Jan 17 00:45:33.203813 systemd[1]: sshd@18-10.243.73.150:22-20.161.92.111:52730.service: Deactivated successfully.
Jan 17 00:45:33.207485 systemd[1]: session-21.scope: Deactivated successfully.
Jan 17 00:45:33.209717 systemd-logind[1490]: Session 21 logged out. Waiting for processes to exit.
Jan 17 00:45:33.212683 systemd-logind[1490]: Removed session 21.
Jan 17 00:45:33.303883 systemd[1]: Started sshd@19-10.243.73.150:22-20.161.92.111:36624.service - OpenSSH per-connection server daemon (20.161.92.111:36624).
Jan 17 00:45:33.891755 sshd[4198]: Accepted publickey for core from 20.161.92.111 port 36624 ssh2: RSA SHA256:e7YTQZHggQ0j4O1p7twKFyXfxguBGEIbATr9At9uxuc
Jan 17 00:45:33.894141 sshd[4198]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:45:33.902193 systemd-logind[1490]: New session 22 of user core.
Jan 17 00:45:33.908571 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 17 00:45:34.618123 sshd[4198]: pam_unix(sshd:session): session closed for user core
Jan 17 00:45:34.624877 systemd[1]: sshd@19-10.243.73.150:22-20.161.92.111:36624.service: Deactivated successfully.
Jan 17 00:45:34.628536 systemd[1]: session-22.scope: Deactivated successfully.
Jan 17 00:45:34.629787 systemd-logind[1490]: Session 22 logged out. Waiting for processes to exit.
Jan 17 00:45:34.631579 systemd-logind[1490]: Removed session 22.
Jan 17 00:45:34.718754 systemd[1]: Started sshd@20-10.243.73.150:22-20.161.92.111:36628.service - OpenSSH per-connection server daemon (20.161.92.111:36628).
Jan 17 00:45:35.289842 sshd[4209]: Accepted publickey for core from 20.161.92.111 port 36628 ssh2: RSA SHA256:e7YTQZHggQ0j4O1p7twKFyXfxguBGEIbATr9At9uxuc
Jan 17 00:45:35.292145 sshd[4209]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:45:35.299031 systemd-logind[1490]: New session 23 of user core.
Jan 17 00:45:35.310563 systemd[1]: Started session-23.scope - Session 23 of User core.
Jan 17 00:45:35.773952 sshd[4209]: pam_unix(sshd:session): session closed for user core
Jan 17 00:45:35.779530 systemd[1]: sshd@20-10.243.73.150:22-20.161.92.111:36628.service: Deactivated successfully.
Jan 17 00:45:35.782062 systemd[1]: session-23.scope: Deactivated successfully.
Jan 17 00:45:35.783353 systemd-logind[1490]: Session 23 logged out. Waiting for processes to exit.
Jan 17 00:45:35.784714 systemd-logind[1490]: Removed session 23.
Jan 17 00:45:40.889757 systemd[1]: Started sshd@21-10.243.73.150:22-20.161.92.111:36640.service - OpenSSH per-connection server daemon (20.161.92.111:36640).
Jan 17 00:45:41.471836 sshd[4222]: Accepted publickey for core from 20.161.92.111 port 36640 ssh2: RSA SHA256:e7YTQZHggQ0j4O1p7twKFyXfxguBGEIbATr9At9uxuc
Jan 17 00:45:41.474919 sshd[4222]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:45:41.482624 systemd-logind[1490]: New session 24 of user core.
Jan 17 00:45:41.490538 systemd[1]: Started session-24.scope - Session 24 of User core.
Jan 17 00:45:41.968383 sshd[4222]: pam_unix(sshd:session): session closed for user core
Jan 17 00:45:41.973785 systemd[1]: sshd@21-10.243.73.150:22-20.161.92.111:36640.service: Deactivated successfully.
Jan 17 00:45:41.976503 systemd[1]: session-24.scope: Deactivated successfully.
Jan 17 00:45:41.977466 systemd-logind[1490]: Session 24 logged out. Waiting for processes to exit.
Jan 17 00:45:41.979276 systemd-logind[1490]: Removed session 24.
Jan 17 00:45:47.082814 systemd[1]: Started sshd@22-10.243.73.150:22-20.161.92.111:51462.service - OpenSSH per-connection server daemon (20.161.92.111:51462).
Jan 17 00:45:47.645814 sshd[4237]: Accepted publickey for core from 20.161.92.111 port 51462 ssh2: RSA SHA256:e7YTQZHggQ0j4O1p7twKFyXfxguBGEIbATr9At9uxuc
Jan 17 00:45:47.648232 sshd[4237]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:45:47.655590 systemd-logind[1490]: New session 25 of user core.
Jan 17 00:45:47.661673 systemd[1]: Started session-25.scope - Session 25 of User core.
Jan 17 00:45:48.130758 sshd[4237]: pam_unix(sshd:session): session closed for user core
Jan 17 00:45:48.135226 systemd-logind[1490]: Session 25 logged out. Waiting for processes to exit.
Jan 17 00:45:48.136503 systemd[1]: sshd@22-10.243.73.150:22-20.161.92.111:51462.service: Deactivated successfully.
Jan 17 00:45:48.139506 systemd[1]: session-25.scope: Deactivated successfully.
Jan 17 00:45:48.141951 systemd-logind[1490]: Removed session 25.
Jan 17 00:45:53.235651 systemd[1]: Started sshd@23-10.243.73.150:22-20.161.92.111:46600.service - OpenSSH per-connection server daemon (20.161.92.111:46600).
Jan 17 00:45:53.809617 sshd[4253]: Accepted publickey for core from 20.161.92.111 port 46600 ssh2: RSA SHA256:e7YTQZHggQ0j4O1p7twKFyXfxguBGEIbATr9At9uxuc
Jan 17 00:45:53.812388 sshd[4253]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:45:53.821788 systemd-logind[1490]: New session 26 of user core.
Jan 17 00:45:53.831135 systemd[1]: Started session-26.scope - Session 26 of User core.
Jan 17 00:45:54.297274 sshd[4253]: pam_unix(sshd:session): session closed for user core
Jan 17 00:45:54.303015 systemd[1]: sshd@23-10.243.73.150:22-20.161.92.111:46600.service: Deactivated successfully.
Jan 17 00:45:54.307177 systemd[1]: session-26.scope: Deactivated successfully.
Jan 17 00:45:54.308800 systemd-logind[1490]: Session 26 logged out. Waiting for processes to exit.
Jan 17 00:45:54.310347 systemd-logind[1490]: Removed session 26.
Jan 17 00:45:54.408967 systemd[1]: Started sshd@24-10.243.73.150:22-20.161.92.111:46602.service - OpenSSH per-connection server daemon (20.161.92.111:46602).
Jan 17 00:45:54.969232 sshd[4266]: Accepted publickey for core from 20.161.92.111 port 46602 ssh2: RSA SHA256:e7YTQZHggQ0j4O1p7twKFyXfxguBGEIbATr9At9uxuc
Jan 17 00:45:54.971538 sshd[4266]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:45:54.979641 systemd-logind[1490]: New session 27 of user core.
Jan 17 00:45:54.985606 systemd[1]: Started session-27.scope - Session 27 of User core.
Jan 17 00:45:57.117727 systemd[1]: run-containerd-runc-k8s.io-67a4887adc2d7e5b4afe805eab78999841fe7b795a160aa5f9f28e48216da164-runc.HucVpz.mount: Deactivated successfully.
Jan 17 00:45:57.150614 containerd[1518]: time="2026-01-17T00:45:57.150511466Z" level=info msg="StopContainer for \"42cf2012d4820c88b7d1ec26ae58457125390c83e4958c78c161ec521a4490d1\" with timeout 30 (s)"
Jan 17 00:45:57.152425 containerd[1518]: time="2026-01-17T00:45:57.152221833Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 17 00:45:57.155714 containerd[1518]: time="2026-01-17T00:45:57.155489478Z" level=info msg="Stop container \"42cf2012d4820c88b7d1ec26ae58457125390c83e4958c78c161ec521a4490d1\" with signal terminated"
Jan 17 00:45:57.166198 containerd[1518]: time="2026-01-17T00:45:57.166130559Z" level=info msg="StopContainer for \"67a4887adc2d7e5b4afe805eab78999841fe7b795a160aa5f9f28e48216da164\" with timeout 2 (s)"
Jan 17 00:45:57.166717 containerd[1518]: time="2026-01-17T00:45:57.166678526Z" level=info msg="Stop container \"67a4887adc2d7e5b4afe805eab78999841fe7b795a160aa5f9f28e48216da164\" with signal terminated"
Jan 17 00:45:57.190601 systemd-networkd[1438]: lxc_health: Link DOWN
Jan 17 00:45:57.190616 systemd-networkd[1438]: lxc_health: Lost carrier
Jan 17 00:45:57.194564 systemd[1]: cri-containerd-42cf2012d4820c88b7d1ec26ae58457125390c83e4958c78c161ec521a4490d1.scope: Deactivated successfully.
Jan 17 00:45:57.223669 systemd[1]: cri-containerd-67a4887adc2d7e5b4afe805eab78999841fe7b795a160aa5f9f28e48216da164.scope: Deactivated successfully.
Jan 17 00:45:57.224579 systemd[1]: cri-containerd-67a4887adc2d7e5b4afe805eab78999841fe7b795a160aa5f9f28e48216da164.scope: Consumed 10.304s CPU time.
Jan 17 00:45:57.288660 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-42cf2012d4820c88b7d1ec26ae58457125390c83e4958c78c161ec521a4490d1-rootfs.mount: Deactivated successfully.
Jan 17 00:45:57.296144 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-67a4887adc2d7e5b4afe805eab78999841fe7b795a160aa5f9f28e48216da164-rootfs.mount: Deactivated successfully.
Jan 17 00:45:57.306326 containerd[1518]: time="2026-01-17T00:45:57.305935795Z" level=info msg="shim disconnected" id=67a4887adc2d7e5b4afe805eab78999841fe7b795a160aa5f9f28e48216da164 namespace=k8s.io
Jan 17 00:45:57.306982 containerd[1518]: time="2026-01-17T00:45:57.306568954Z" level=warning msg="cleaning up after shim disconnected" id=67a4887adc2d7e5b4afe805eab78999841fe7b795a160aa5f9f28e48216da164 namespace=k8s.io
Jan 17 00:45:57.306982 containerd[1518]: time="2026-01-17T00:45:57.306647287Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 00:45:57.307198 containerd[1518]: time="2026-01-17T00:45:57.306068615Z" level=info msg="shim disconnected" id=42cf2012d4820c88b7d1ec26ae58457125390c83e4958c78c161ec521a4490d1 namespace=k8s.io
Jan 17 00:45:57.307321 containerd[1518]: time="2026-01-17T00:45:57.307279126Z" level=warning msg="cleaning up after shim disconnected" id=42cf2012d4820c88b7d1ec26ae58457125390c83e4958c78c161ec521a4490d1 namespace=k8s.io
Jan 17 00:45:57.307467 containerd[1518]: time="2026-01-17T00:45:57.307441161Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 00:45:57.340579 containerd[1518]: time="2026-01-17T00:45:57.340507978Z" level=info msg="StopContainer for \"67a4887adc2d7e5b4afe805eab78999841fe7b795a160aa5f9f28e48216da164\" returns successfully"
Jan 17 00:45:57.352426 containerd[1518]: time="2026-01-17T00:45:57.352348350Z" level=info msg="StopContainer for \"42cf2012d4820c88b7d1ec26ae58457125390c83e4958c78c161ec521a4490d1\" returns successfully"
Jan 17 00:45:57.354189 containerd[1518]: time="2026-01-17T00:45:57.354125890Z" level=info msg="StopPodSandbox for \"25ee0f5b41c9cb4a65ca78a7105b4db9c56457d5199e7fbe5a13a5acf783ea83\""
Jan 17 00:45:57.354265 containerd[1518]: time="2026-01-17T00:45:57.354223415Z" level=info msg="Container to stop \"42cf2012d4820c88b7d1ec26ae58457125390c83e4958c78c161ec521a4490d1\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 17 00:45:57.356731 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-25ee0f5b41c9cb4a65ca78a7105b4db9c56457d5199e7fbe5a13a5acf783ea83-shm.mount: Deactivated successfully.
Jan 17 00:45:57.364859 containerd[1518]: time="2026-01-17T00:45:57.364801088Z" level=info msg="StopPodSandbox for \"106db877d3b5f25247225fe3d60f085e9883835a1a670a015b7d3c8cc5294ffc\""
Jan 17 00:45:57.367269 containerd[1518]: time="2026-01-17T00:45:57.365215037Z" level=info msg="Container to stop \"a0408f6ea9477d1021bcfc054462b5692fd6772c8580c60fb0e34e94d42d5de4\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 17 00:45:57.367269 containerd[1518]: time="2026-01-17T00:45:57.366397125Z" level=info msg="Container to stop \"67a4887adc2d7e5b4afe805eab78999841fe7b795a160aa5f9f28e48216da164\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 17 00:45:57.367269 containerd[1518]: time="2026-01-17T00:45:57.366426642Z" level=info msg="Container to stop \"6b843a9e8de34ad353189ae4eac894a55170a35fd5beacefcc83de1ed1f77d1d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 17 00:45:57.367269 containerd[1518]: time="2026-01-17T00:45:57.366445880Z" level=info msg="Container to stop \"a43b9272cc9bf7f91a56234bc984f4ffaf6fef1d425cc4e10a83a065be00d482\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 17 00:45:57.367269 containerd[1518]: time="2026-01-17T00:45:57.366462046Z" level=info msg="Container to stop \"6ca082a7ac43da605f8b3c7d242799920790ef572f1a559be24f5b443295f9f6\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 17 00:45:57.369568 systemd[1]: cri-containerd-25ee0f5b41c9cb4a65ca78a7105b4db9c56457d5199e7fbe5a13a5acf783ea83.scope: Deactivated successfully.
Jan 17 00:45:57.385919 systemd[1]: cri-containerd-106db877d3b5f25247225fe3d60f085e9883835a1a670a015b7d3c8cc5294ffc.scope: Deactivated successfully.
Jan 17 00:45:57.421112 containerd[1518]: time="2026-01-17T00:45:57.420543205Z" level=info msg="shim disconnected" id=25ee0f5b41c9cb4a65ca78a7105b4db9c56457d5199e7fbe5a13a5acf783ea83 namespace=k8s.io
Jan 17 00:45:57.421112 containerd[1518]: time="2026-01-17T00:45:57.421062648Z" level=warning msg="cleaning up after shim disconnected" id=25ee0f5b41c9cb4a65ca78a7105b4db9c56457d5199e7fbe5a13a5acf783ea83 namespace=k8s.io
Jan 17 00:45:57.421112 containerd[1518]: time="2026-01-17T00:45:57.421081878Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 00:45:57.460022 containerd[1518]: time="2026-01-17T00:45:57.459924131Z" level=info msg="shim disconnected" id=106db877d3b5f25247225fe3d60f085e9883835a1a670a015b7d3c8cc5294ffc namespace=k8s.io
Jan 17 00:45:57.460022 containerd[1518]: time="2026-01-17T00:45:57.460008791Z" level=warning msg="cleaning up after shim disconnected" id=106db877d3b5f25247225fe3d60f085e9883835a1a670a015b7d3c8cc5294ffc namespace=k8s.io
Jan 17 00:45:57.460022 containerd[1518]: time="2026-01-17T00:45:57.460026220Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 00:45:57.481483 containerd[1518]: time="2026-01-17T00:45:57.481236296Z" level=info msg="TearDown network for sandbox \"25ee0f5b41c9cb4a65ca78a7105b4db9c56457d5199e7fbe5a13a5acf783ea83\" successfully"
Jan 17 00:45:57.481483 containerd[1518]: time="2026-01-17T00:45:57.481284621Z" level=info msg="StopPodSandbox for \"25ee0f5b41c9cb4a65ca78a7105b4db9c56457d5199e7fbe5a13a5acf783ea83\" returns successfully"
Jan 17 00:45:57.485359 containerd[1518]: time="2026-01-17T00:45:57.484353133Z" level=warning msg="cleanup warnings time=\"2026-01-17T00:45:57Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jan 17 00:45:57.488344 containerd[1518]: time="2026-01-17T00:45:57.487932192Z" level=info msg="TearDown network for sandbox \"106db877d3b5f25247225fe3d60f085e9883835a1a670a015b7d3c8cc5294ffc\" successfully"
Jan 17 00:45:57.488344 containerd[1518]: time="2026-01-17T00:45:57.487978725Z" level=info msg="StopPodSandbox for \"106db877d3b5f25247225fe3d60f085e9883835a1a670a015b7d3c8cc5294ffc\" returns successfully"
Jan 17 00:45:57.686267 kubelet[2687]: I0117 00:45:57.685939 2687 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a1a5e927-efee-4658-94f3-2f4ca8ae0b07-xtables-lock\") pod \"a1a5e927-efee-4658-94f3-2f4ca8ae0b07\" (UID: \"a1a5e927-efee-4658-94f3-2f4ca8ae0b07\") "
Jan 17 00:45:57.686267 kubelet[2687]: I0117 00:45:57.686033 2687 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a1a5e927-efee-4658-94f3-2f4ca8ae0b07-clustermesh-secrets\") pod \"a1a5e927-efee-4658-94f3-2f4ca8ae0b07\" (UID: \"a1a5e927-efee-4658-94f3-2f4ca8ae0b07\") "
Jan 17 00:45:57.686267 kubelet[2687]: I0117 00:45:57.686074 2687 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a1a5e927-efee-4658-94f3-2f4ca8ae0b07-bpf-maps\") pod \"a1a5e927-efee-4658-94f3-2f4ca8ae0b07\" (UID: \"a1a5e927-efee-4658-94f3-2f4ca8ae0b07\") "
Jan 17 00:45:57.686267 kubelet[2687]: I0117 00:45:57.686098 2687 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a1a5e927-efee-4658-94f3-2f4ca8ae0b07-host-proc-sys-net\") pod \"a1a5e927-efee-4658-94f3-2f4ca8ae0b07\" (UID: \"a1a5e927-efee-4658-94f3-2f4ca8ae0b07\") "
Jan 17 00:45:57.686267 kubelet[2687]: I0117 00:45:57.686127 2687 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a1a5e927-efee-4658-94f3-2f4ca8ae0b07-host-proc-sys-kernel\") pod \"a1a5e927-efee-4658-94f3-2f4ca8ae0b07\" (UID: \"a1a5e927-efee-4658-94f3-2f4ca8ae0b07\") "
Jan 17 00:45:57.686267 kubelet[2687]: I0117 00:45:57.686158 2687 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vbz88\" (UniqueName: \"kubernetes.io/projected/4d615846-f58f-4539-bf98-cb835387934a-kube-api-access-vbz88\") pod \"4d615846-f58f-4539-bf98-cb835387934a\" (UID: \"4d615846-f58f-4539-bf98-cb835387934a\") "
Jan 17 00:45:57.687732 kubelet[2687]: I0117 00:45:57.686217 2687 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a1a5e927-efee-4658-94f3-2f4ca8ae0b07-hubble-tls\") pod \"a1a5e927-efee-4658-94f3-2f4ca8ae0b07\" (UID: \"a1a5e927-efee-4658-94f3-2f4ca8ae0b07\") "
Jan 17 00:45:57.687732 kubelet[2687]: I0117 00:45:57.686267 2687 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a1a5e927-efee-4658-94f3-2f4ca8ae0b07-hostproc\") pod \"a1a5e927-efee-4658-94f3-2f4ca8ae0b07\" (UID: \"a1a5e927-efee-4658-94f3-2f4ca8ae0b07\") "
Jan 17 00:45:57.687732 kubelet[2687]: I0117 00:45:57.686304 2687 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a1a5e927-efee-4658-94f3-2f4ca8ae0b07-cilium-cgroup\") pod \"a1a5e927-efee-4658-94f3-2f4ca8ae0b07\" (UID: \"a1a5e927-efee-4658-94f3-2f4ca8ae0b07\") "
Jan 17 00:45:57.687732 kubelet[2687]: I0117 00:45:57.686365 2687 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a1a5e927-efee-4658-94f3-2f4ca8ae0b07-lib-modules\") pod \"a1a5e927-efee-4658-94f3-2f4ca8ae0b07\" (UID: \"a1a5e927-efee-4658-94f3-2f4ca8ae0b07\") "
Jan 17 00:45:57.687732 kubelet[2687]: I0117 00:45:57.686389 2687 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a1a5e927-efee-4658-94f3-2f4ca8ae0b07-etc-cni-netd\") pod \"a1a5e927-efee-4658-94f3-2f4ca8ae0b07\" (UID: \"a1a5e927-efee-4658-94f3-2f4ca8ae0b07\") "
Jan 17 00:45:57.687732 kubelet[2687]: I0117 00:45:57.686418 2687 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4d615846-f58f-4539-bf98-cb835387934a-cilium-config-path\") pod \"4d615846-f58f-4539-bf98-cb835387934a\" (UID: \"4d615846-f58f-4539-bf98-cb835387934a\") "
Jan 17 00:45:57.688567 kubelet[2687]: I0117 00:45:57.686448 2687 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tn6hw\" (UniqueName: \"kubernetes.io/projected/a1a5e927-efee-4658-94f3-2f4ca8ae0b07-kube-api-access-tn6hw\") pod \"a1a5e927-efee-4658-94f3-2f4ca8ae0b07\" (UID: \"a1a5e927-efee-4658-94f3-2f4ca8ae0b07\") "
Jan 17 00:45:57.688567 kubelet[2687]: I0117 00:45:57.686487 2687 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a1a5e927-efee-4658-94f3-2f4ca8ae0b07-cilium-config-path\") pod \"a1a5e927-efee-4658-94f3-2f4ca8ae0b07\" (UID: \"a1a5e927-efee-4658-94f3-2f4ca8ae0b07\") "
Jan 17 00:45:57.688567 kubelet[2687]: I0117 00:45:57.686513 2687 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a1a5e927-efee-4658-94f3-2f4ca8ae0b07-cni-path\") pod \"a1a5e927-efee-4658-94f3-2f4ca8ae0b07\" (UID: \"a1a5e927-efee-4658-94f3-2f4ca8ae0b07\") "
Jan 17 00:45:57.688567 kubelet[2687]: I0117 00:45:57.686537 2687 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a1a5e927-efee-4658-94f3-2f4ca8ae0b07-cilium-run\") pod \"a1a5e927-efee-4658-94f3-2f4ca8ae0b07\" (UID: \"a1a5e927-efee-4658-94f3-2f4ca8ae0b07\") "
Jan 17 00:45:57.693817 kubelet[2687]: I0117 00:45:57.692654 2687 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a1a5e927-efee-4658-94f3-2f4ca8ae0b07-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "a1a5e927-efee-4658-94f3-2f4ca8ae0b07" (UID: "a1a5e927-efee-4658-94f3-2f4ca8ae0b07"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 17 00:45:57.694341 kubelet[2687]: I0117 00:45:57.692543 2687 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a1a5e927-efee-4658-94f3-2f4ca8ae0b07-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "a1a5e927-efee-4658-94f3-2f4ca8ae0b07" (UID: "a1a5e927-efee-4658-94f3-2f4ca8ae0b07"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 17 00:45:57.694417 kubelet[2687]: I0117 00:45:57.694374 2687 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a1a5e927-efee-4658-94f3-2f4ca8ae0b07-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "a1a5e927-efee-4658-94f3-2f4ca8ae0b07" (UID: "a1a5e927-efee-4658-94f3-2f4ca8ae0b07"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 17 00:45:57.694493 kubelet[2687]: I0117 00:45:57.694414 2687 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a1a5e927-efee-4658-94f3-2f4ca8ae0b07-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "a1a5e927-efee-4658-94f3-2f4ca8ae0b07" (UID: "a1a5e927-efee-4658-94f3-2f4ca8ae0b07"). InnerVolumeSpecName "bpf-maps".
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 17 00:45:57.694493 kubelet[2687]: I0117 00:45:57.694445 2687 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a1a5e927-efee-4658-94f3-2f4ca8ae0b07-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "a1a5e927-efee-4658-94f3-2f4ca8ae0b07" (UID: "a1a5e927-efee-4658-94f3-2f4ca8ae0b07"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 17 00:45:57.694493 kubelet[2687]: I0117 00:45:57.694474 2687 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a1a5e927-efee-4658-94f3-2f4ca8ae0b07-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "a1a5e927-efee-4658-94f3-2f4ca8ae0b07" (UID: "a1a5e927-efee-4658-94f3-2f4ca8ae0b07"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 17 00:45:57.705713 kubelet[2687]: I0117 00:45:57.705665 2687 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d615846-f58f-4539-bf98-cb835387934a-kube-api-access-vbz88" (OuterVolumeSpecName: "kube-api-access-vbz88") pod "4d615846-f58f-4539-bf98-cb835387934a" (UID: "4d615846-f58f-4539-bf98-cb835387934a"). InnerVolumeSpecName "kube-api-access-vbz88". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 17 00:45:57.706402 kubelet[2687]: I0117 00:45:57.705649 2687 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1a5e927-efee-4658-94f3-2f4ca8ae0b07-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "a1a5e927-efee-4658-94f3-2f4ca8ae0b07" (UID: "a1a5e927-efee-4658-94f3-2f4ca8ae0b07"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 17 00:45:57.706402 kubelet[2687]: I0117 00:45:57.705707 2687 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a1a5e927-efee-4658-94f3-2f4ca8ae0b07-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "a1a5e927-efee-4658-94f3-2f4ca8ae0b07" (UID: "a1a5e927-efee-4658-94f3-2f4ca8ae0b07"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 17 00:45:57.706402 kubelet[2687]: I0117 00:45:57.705726 2687 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a1a5e927-efee-4658-94f3-2f4ca8ae0b07-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "a1a5e927-efee-4658-94f3-2f4ca8ae0b07" (UID: "a1a5e927-efee-4658-94f3-2f4ca8ae0b07"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 17 00:45:57.709618 kubelet[2687]: I0117 00:45:57.709577 2687 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1a5e927-efee-4658-94f3-2f4ca8ae0b07-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "a1a5e927-efee-4658-94f3-2f4ca8ae0b07" (UID: "a1a5e927-efee-4658-94f3-2f4ca8ae0b07"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 17 00:45:57.709769 kubelet[2687]: I0117 00:45:57.709745 2687 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a1a5e927-efee-4658-94f3-2f4ca8ae0b07-hostproc" (OuterVolumeSpecName: "hostproc") pod "a1a5e927-efee-4658-94f3-2f4ca8ae0b07" (UID: "a1a5e927-efee-4658-94f3-2f4ca8ae0b07"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 17 00:45:57.710160 kubelet[2687]: I0117 00:45:57.710123 2687 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d615846-f58f-4539-bf98-cb835387934a-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "4d615846-f58f-4539-bf98-cb835387934a" (UID: "4d615846-f58f-4539-bf98-cb835387934a"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 17 00:45:57.710274 kubelet[2687]: I0117 00:45:57.710200 2687 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a1a5e927-efee-4658-94f3-2f4ca8ae0b07-cni-path" (OuterVolumeSpecName: "cni-path") pod "a1a5e927-efee-4658-94f3-2f4ca8ae0b07" (UID: "a1a5e927-efee-4658-94f3-2f4ca8ae0b07"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 17 00:45:57.714357 kubelet[2687]: I0117 00:45:57.713893 2687 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1a5e927-efee-4658-94f3-2f4ca8ae0b07-kube-api-access-tn6hw" (OuterVolumeSpecName: "kube-api-access-tn6hw") pod "a1a5e927-efee-4658-94f3-2f4ca8ae0b07" (UID: "a1a5e927-efee-4658-94f3-2f4ca8ae0b07"). InnerVolumeSpecName "kube-api-access-tn6hw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 17 00:45:57.715080 kubelet[2687]: I0117 00:45:57.715052 2687 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a1a5e927-efee-4658-94f3-2f4ca8ae0b07-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "a1a5e927-efee-4658-94f3-2f4ca8ae0b07" (UID: "a1a5e927-efee-4658-94f3-2f4ca8ae0b07"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 17 00:45:57.787647 kubelet[2687]: I0117 00:45:57.787579 2687 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tn6hw\" (UniqueName: \"kubernetes.io/projected/a1a5e927-efee-4658-94f3-2f4ca8ae0b07-kube-api-access-tn6hw\") on node \"srv-jwpu3.gb1.brightbox.com\" DevicePath \"\"" Jan 17 00:45:57.787647 kubelet[2687]: I0117 00:45:57.787634 2687 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a1a5e927-efee-4658-94f3-2f4ca8ae0b07-cilium-config-path\") on node \"srv-jwpu3.gb1.brightbox.com\" DevicePath \"\"" Jan 17 00:45:57.788007 kubelet[2687]: I0117 00:45:57.787652 2687 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a1a5e927-efee-4658-94f3-2f4ca8ae0b07-cni-path\") on node \"srv-jwpu3.gb1.brightbox.com\" DevicePath \"\"" Jan 17 00:45:57.788007 kubelet[2687]: I0117 00:45:57.787697 2687 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a1a5e927-efee-4658-94f3-2f4ca8ae0b07-cilium-run\") on node \"srv-jwpu3.gb1.brightbox.com\" DevicePath \"\"" Jan 17 00:45:57.788007 kubelet[2687]: I0117 00:45:57.787734 2687 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a1a5e927-efee-4658-94f3-2f4ca8ae0b07-host-proc-sys-net\") on node \"srv-jwpu3.gb1.brightbox.com\" DevicePath \"\"" Jan 17 00:45:57.788007 kubelet[2687]: I0117 00:45:57.787749 2687 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a1a5e927-efee-4658-94f3-2f4ca8ae0b07-xtables-lock\") on node \"srv-jwpu3.gb1.brightbox.com\" DevicePath \"\"" Jan 17 00:45:57.788007 kubelet[2687]: I0117 00:45:57.787763 2687 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/a1a5e927-efee-4658-94f3-2f4ca8ae0b07-clustermesh-secrets\") on node \"srv-jwpu3.gb1.brightbox.com\" DevicePath \"\"" Jan 17 00:45:57.788007 kubelet[2687]: I0117 00:45:57.787791 2687 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a1a5e927-efee-4658-94f3-2f4ca8ae0b07-bpf-maps\") on node \"srv-jwpu3.gb1.brightbox.com\" DevicePath \"\"" Jan 17 00:45:57.788007 kubelet[2687]: I0117 00:45:57.787813 2687 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a1a5e927-efee-4658-94f3-2f4ca8ae0b07-host-proc-sys-kernel\") on node \"srv-jwpu3.gb1.brightbox.com\" DevicePath \"\"" Jan 17 00:45:57.788007 kubelet[2687]: I0117 00:45:57.787828 2687 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vbz88\" (UniqueName: \"kubernetes.io/projected/4d615846-f58f-4539-bf98-cb835387934a-kube-api-access-vbz88\") on node \"srv-jwpu3.gb1.brightbox.com\" DevicePath \"\"" Jan 17 00:45:57.788513 kubelet[2687]: I0117 00:45:57.787842 2687 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a1a5e927-efee-4658-94f3-2f4ca8ae0b07-hubble-tls\") on node \"srv-jwpu3.gb1.brightbox.com\" DevicePath \"\"" Jan 17 00:45:57.788513 kubelet[2687]: I0117 00:45:57.787883 2687 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a1a5e927-efee-4658-94f3-2f4ca8ae0b07-hostproc\") on node \"srv-jwpu3.gb1.brightbox.com\" DevicePath \"\"" Jan 17 00:45:57.788513 kubelet[2687]: I0117 00:45:57.787898 2687 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a1a5e927-efee-4658-94f3-2f4ca8ae0b07-cilium-cgroup\") on node \"srv-jwpu3.gb1.brightbox.com\" DevicePath \"\"" Jan 17 00:45:57.788513 kubelet[2687]: I0117 00:45:57.787911 2687 reconciler_common.go:299] "Volume detached for volume 
\"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a1a5e927-efee-4658-94f3-2f4ca8ae0b07-lib-modules\") on node \"srv-jwpu3.gb1.brightbox.com\" DevicePath \"\"" Jan 17 00:45:57.788513 kubelet[2687]: I0117 00:45:57.787936 2687 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a1a5e927-efee-4658-94f3-2f4ca8ae0b07-etc-cni-netd\") on node \"srv-jwpu3.gb1.brightbox.com\" DevicePath \"\"" Jan 17 00:45:57.788513 kubelet[2687]: I0117 00:45:57.787953 2687 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4d615846-f58f-4539-bf98-cb835387934a-cilium-config-path\") on node \"srv-jwpu3.gb1.brightbox.com\" DevicePath \"\"" Jan 17 00:45:58.103648 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-25ee0f5b41c9cb4a65ca78a7105b4db9c56457d5199e7fbe5a13a5acf783ea83-rootfs.mount: Deactivated successfully. Jan 17 00:45:58.103838 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-106db877d3b5f25247225fe3d60f085e9883835a1a670a015b7d3c8cc5294ffc-rootfs.mount: Deactivated successfully. Jan 17 00:45:58.104006 systemd[1]: var-lib-kubelet-pods-4d615846\x2df58f\x2d4539\x2dbf98\x2dcb835387934a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvbz88.mount: Deactivated successfully. Jan 17 00:45:58.104236 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-106db877d3b5f25247225fe3d60f085e9883835a1a670a015b7d3c8cc5294ffc-shm.mount: Deactivated successfully. Jan 17 00:45:58.104398 systemd[1]: var-lib-kubelet-pods-a1a5e927\x2defee\x2d4658\x2d94f3\x2d2f4ca8ae0b07-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dtn6hw.mount: Deactivated successfully. Jan 17 00:45:58.104558 systemd[1]: var-lib-kubelet-pods-a1a5e927\x2defee\x2d4658\x2d94f3\x2d2f4ca8ae0b07-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Jan 17 00:45:58.104755 systemd[1]: var-lib-kubelet-pods-a1a5e927\x2defee\x2d4658\x2d94f3\x2d2f4ca8ae0b07-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 17 00:45:58.267657 kubelet[2687]: I0117 00:45:58.267068 2687 scope.go:117] "RemoveContainer" containerID="42cf2012d4820c88b7d1ec26ae58457125390c83e4958c78c161ec521a4490d1" Jan 17 00:45:58.268148 systemd[1]: Removed slice kubepods-besteffort-pod4d615846_f58f_4539_bf98_cb835387934a.slice - libcontainer container kubepods-besteffort-pod4d615846_f58f_4539_bf98_cb835387934a.slice. Jan 17 00:45:58.275914 containerd[1518]: time="2026-01-17T00:45:58.274802989Z" level=info msg="RemoveContainer for \"42cf2012d4820c88b7d1ec26ae58457125390c83e4958c78c161ec521a4490d1\"" Jan 17 00:45:58.291695 containerd[1518]: time="2026-01-17T00:45:58.291290276Z" level=info msg="RemoveContainer for \"42cf2012d4820c88b7d1ec26ae58457125390c83e4958c78c161ec521a4490d1\" returns successfully" Jan 17 00:45:58.292898 kubelet[2687]: I0117 00:45:58.292869 2687 scope.go:117] "RemoveContainer" containerID="42cf2012d4820c88b7d1ec26ae58457125390c83e4958c78c161ec521a4490d1" Jan 17 00:45:58.308702 systemd[1]: Removed slice kubepods-burstable-poda1a5e927_efee_4658_94f3_2f4ca8ae0b07.slice - libcontainer container kubepods-burstable-poda1a5e927_efee_4658_94f3_2f4ca8ae0b07.slice. Jan 17 00:45:58.313088 containerd[1518]: time="2026-01-17T00:45:58.302195146Z" level=error msg="ContainerStatus for \"42cf2012d4820c88b7d1ec26ae58457125390c83e4958c78c161ec521a4490d1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"42cf2012d4820c88b7d1ec26ae58457125390c83e4958c78c161ec521a4490d1\": not found" Jan 17 00:45:58.308837 systemd[1]: kubepods-burstable-poda1a5e927_efee_4658_94f3_2f4ca8ae0b07.slice: Consumed 10.431s CPU time. 
Jan 17 00:45:58.317269 kubelet[2687]: E0117 00:45:58.317172 2687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"42cf2012d4820c88b7d1ec26ae58457125390c83e4958c78c161ec521a4490d1\": not found" containerID="42cf2012d4820c88b7d1ec26ae58457125390c83e4958c78c161ec521a4490d1" Jan 17 00:45:58.322278 kubelet[2687]: I0117 00:45:58.321549 2687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"42cf2012d4820c88b7d1ec26ae58457125390c83e4958c78c161ec521a4490d1"} err="failed to get container status \"42cf2012d4820c88b7d1ec26ae58457125390c83e4958c78c161ec521a4490d1\": rpc error: code = NotFound desc = an error occurred when try to find container \"42cf2012d4820c88b7d1ec26ae58457125390c83e4958c78c161ec521a4490d1\": not found" Jan 17 00:45:58.322278 kubelet[2687]: I0117 00:45:58.321713 2687 scope.go:117] "RemoveContainer" containerID="67a4887adc2d7e5b4afe805eab78999841fe7b795a160aa5f9f28e48216da164" Jan 17 00:45:58.326177 containerd[1518]: time="2026-01-17T00:45:58.326043293Z" level=info msg="RemoveContainer for \"67a4887adc2d7e5b4afe805eab78999841fe7b795a160aa5f9f28e48216da164\"" Jan 17 00:45:58.330181 containerd[1518]: time="2026-01-17T00:45:58.330117441Z" level=info msg="RemoveContainer for \"67a4887adc2d7e5b4afe805eab78999841fe7b795a160aa5f9f28e48216da164\" returns successfully" Jan 17 00:45:58.330522 kubelet[2687]: I0117 00:45:58.330458 2687 scope.go:117] "RemoveContainer" containerID="a0408f6ea9477d1021bcfc054462b5692fd6772c8580c60fb0e34e94d42d5de4" Jan 17 00:45:58.333672 containerd[1518]: time="2026-01-17T00:45:58.333638215Z" level=info msg="RemoveContainer for \"a0408f6ea9477d1021bcfc054462b5692fd6772c8580c60fb0e34e94d42d5de4\"" Jan 17 00:45:58.343331 containerd[1518]: time="2026-01-17T00:45:58.343241777Z" level=info msg="RemoveContainer for \"a0408f6ea9477d1021bcfc054462b5692fd6772c8580c60fb0e34e94d42d5de4\" returns successfully" 
Jan 17 00:45:58.344344 kubelet[2687]: I0117 00:45:58.344153 2687 scope.go:117] "RemoveContainer" containerID="a43b9272cc9bf7f91a56234bc984f4ffaf6fef1d425cc4e10a83a065be00d482" Jan 17 00:45:58.352197 containerd[1518]: time="2026-01-17T00:45:58.351656774Z" level=info msg="RemoveContainer for \"a43b9272cc9bf7f91a56234bc984f4ffaf6fef1d425cc4e10a83a065be00d482\"" Jan 17 00:45:58.355634 containerd[1518]: time="2026-01-17T00:45:58.355512827Z" level=info msg="RemoveContainer for \"a43b9272cc9bf7f91a56234bc984f4ffaf6fef1d425cc4e10a83a065be00d482\" returns successfully" Jan 17 00:45:58.356792 kubelet[2687]: I0117 00:45:58.356728 2687 scope.go:117] "RemoveContainer" containerID="6ca082a7ac43da605f8b3c7d242799920790ef572f1a559be24f5b443295f9f6" Jan 17 00:45:58.358718 containerd[1518]: time="2026-01-17T00:45:58.358396395Z" level=info msg="RemoveContainer for \"6ca082a7ac43da605f8b3c7d242799920790ef572f1a559be24f5b443295f9f6\"" Jan 17 00:45:58.365886 containerd[1518]: time="2026-01-17T00:45:58.365847280Z" level=info msg="RemoveContainer for \"6ca082a7ac43da605f8b3c7d242799920790ef572f1a559be24f5b443295f9f6\" returns successfully" Jan 17 00:45:58.366504 kubelet[2687]: I0117 00:45:58.366441 2687 scope.go:117] "RemoveContainer" containerID="6b843a9e8de34ad353189ae4eac894a55170a35fd5beacefcc83de1ed1f77d1d" Jan 17 00:45:58.368263 containerd[1518]: time="2026-01-17T00:45:58.368229170Z" level=info msg="RemoveContainer for \"6b843a9e8de34ad353189ae4eac894a55170a35fd5beacefcc83de1ed1f77d1d\"" Jan 17 00:45:58.371085 containerd[1518]: time="2026-01-17T00:45:58.371015839Z" level=info msg="RemoveContainer for \"6b843a9e8de34ad353189ae4eac894a55170a35fd5beacefcc83de1ed1f77d1d\" returns successfully" Jan 17 00:45:58.371507 kubelet[2687]: I0117 00:45:58.371373 2687 scope.go:117] "RemoveContainer" containerID="67a4887adc2d7e5b4afe805eab78999841fe7b795a160aa5f9f28e48216da164" Jan 17 00:45:58.371841 containerd[1518]: time="2026-01-17T00:45:58.371668517Z" level=error msg="ContainerStatus for 
\"67a4887adc2d7e5b4afe805eab78999841fe7b795a160aa5f9f28e48216da164\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"67a4887adc2d7e5b4afe805eab78999841fe7b795a160aa5f9f28e48216da164\": not found" Jan 17 00:45:58.372128 kubelet[2687]: E0117 00:45:58.371995 2687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"67a4887adc2d7e5b4afe805eab78999841fe7b795a160aa5f9f28e48216da164\": not found" containerID="67a4887adc2d7e5b4afe805eab78999841fe7b795a160aa5f9f28e48216da164" Jan 17 00:45:58.372128 kubelet[2687]: I0117 00:45:58.372072 2687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"67a4887adc2d7e5b4afe805eab78999841fe7b795a160aa5f9f28e48216da164"} err="failed to get container status \"67a4887adc2d7e5b4afe805eab78999841fe7b795a160aa5f9f28e48216da164\": rpc error: code = NotFound desc = an error occurred when try to find container \"67a4887adc2d7e5b4afe805eab78999841fe7b795a160aa5f9f28e48216da164\": not found" Jan 17 00:45:58.372471 kubelet[2687]: I0117 00:45:58.372107 2687 scope.go:117] "RemoveContainer" containerID="a0408f6ea9477d1021bcfc054462b5692fd6772c8580c60fb0e34e94d42d5de4" Jan 17 00:45:58.373048 containerd[1518]: time="2026-01-17T00:45:58.372659632Z" level=error msg="ContainerStatus for \"a0408f6ea9477d1021bcfc054462b5692fd6772c8580c60fb0e34e94d42d5de4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a0408f6ea9477d1021bcfc054462b5692fd6772c8580c60fb0e34e94d42d5de4\": not found" Jan 17 00:45:58.373129 kubelet[2687]: E0117 00:45:58.372886 2687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a0408f6ea9477d1021bcfc054462b5692fd6772c8580c60fb0e34e94d42d5de4\": not found" 
containerID="a0408f6ea9477d1021bcfc054462b5692fd6772c8580c60fb0e34e94d42d5de4" Jan 17 00:45:58.373129 kubelet[2687]: I0117 00:45:58.372916 2687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a0408f6ea9477d1021bcfc054462b5692fd6772c8580c60fb0e34e94d42d5de4"} err="failed to get container status \"a0408f6ea9477d1021bcfc054462b5692fd6772c8580c60fb0e34e94d42d5de4\": rpc error: code = NotFound desc = an error occurred when try to find container \"a0408f6ea9477d1021bcfc054462b5692fd6772c8580c60fb0e34e94d42d5de4\": not found" Jan 17 00:45:58.373129 kubelet[2687]: I0117 00:45:58.372941 2687 scope.go:117] "RemoveContainer" containerID="a43b9272cc9bf7f91a56234bc984f4ffaf6fef1d425cc4e10a83a065be00d482" Jan 17 00:45:58.373519 containerd[1518]: time="2026-01-17T00:45:58.373455447Z" level=error msg="ContainerStatus for \"a43b9272cc9bf7f91a56234bc984f4ffaf6fef1d425cc4e10a83a065be00d482\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a43b9272cc9bf7f91a56234bc984f4ffaf6fef1d425cc4e10a83a065be00d482\": not found" Jan 17 00:45:58.373831 kubelet[2687]: E0117 00:45:58.373612 2687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a43b9272cc9bf7f91a56234bc984f4ffaf6fef1d425cc4e10a83a065be00d482\": not found" containerID="a43b9272cc9bf7f91a56234bc984f4ffaf6fef1d425cc4e10a83a065be00d482" Jan 17 00:45:58.373831 kubelet[2687]: I0117 00:45:58.373665 2687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a43b9272cc9bf7f91a56234bc984f4ffaf6fef1d425cc4e10a83a065be00d482"} err="failed to get container status \"a43b9272cc9bf7f91a56234bc984f4ffaf6fef1d425cc4e10a83a065be00d482\": rpc error: code = NotFound desc = an error occurred when try to find container \"a43b9272cc9bf7f91a56234bc984f4ffaf6fef1d425cc4e10a83a065be00d482\": not found" Jan 17 
00:45:58.373831 kubelet[2687]: I0117 00:45:58.373686 2687 scope.go:117] "RemoveContainer" containerID="6ca082a7ac43da605f8b3c7d242799920790ef572f1a559be24f5b443295f9f6" Jan 17 00:45:58.374618 containerd[1518]: time="2026-01-17T00:45:58.374240698Z" level=error msg="ContainerStatus for \"6ca082a7ac43da605f8b3c7d242799920790ef572f1a559be24f5b443295f9f6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6ca082a7ac43da605f8b3c7d242799920790ef572f1a559be24f5b443295f9f6\": not found" Jan 17 00:45:58.374718 kubelet[2687]: E0117 00:45:58.374428 2687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6ca082a7ac43da605f8b3c7d242799920790ef572f1a559be24f5b443295f9f6\": not found" containerID="6ca082a7ac43da605f8b3c7d242799920790ef572f1a559be24f5b443295f9f6" Jan 17 00:45:58.374718 kubelet[2687]: I0117 00:45:58.374510 2687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6ca082a7ac43da605f8b3c7d242799920790ef572f1a559be24f5b443295f9f6"} err="failed to get container status \"6ca082a7ac43da605f8b3c7d242799920790ef572f1a559be24f5b443295f9f6\": rpc error: code = NotFound desc = an error occurred when try to find container \"6ca082a7ac43da605f8b3c7d242799920790ef572f1a559be24f5b443295f9f6\": not found" Jan 17 00:45:58.374718 kubelet[2687]: I0117 00:45:58.374533 2687 scope.go:117] "RemoveContainer" containerID="6b843a9e8de34ad353189ae4eac894a55170a35fd5beacefcc83de1ed1f77d1d" Jan 17 00:45:58.374883 containerd[1518]: time="2026-01-17T00:45:58.374737014Z" level=error msg="ContainerStatus for \"6b843a9e8de34ad353189ae4eac894a55170a35fd5beacefcc83de1ed1f77d1d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6b843a9e8de34ad353189ae4eac894a55170a35fd5beacefcc83de1ed1f77d1d\": not found" Jan 17 00:45:58.375099 kubelet[2687]: E0117 00:45:58.375005 2687 
log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6b843a9e8de34ad353189ae4eac894a55170a35fd5beacefcc83de1ed1f77d1d\": not found" containerID="6b843a9e8de34ad353189ae4eac894a55170a35fd5beacefcc83de1ed1f77d1d" Jan 17 00:45:58.375099 kubelet[2687]: I0117 00:45:58.375036 2687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6b843a9e8de34ad353189ae4eac894a55170a35fd5beacefcc83de1ed1f77d1d"} err="failed to get container status \"6b843a9e8de34ad353189ae4eac894a55170a35fd5beacefcc83de1ed1f77d1d\": rpc error: code = NotFound desc = an error occurred when try to find container \"6b843a9e8de34ad353189ae4eac894a55170a35fd5beacefcc83de1ed1f77d1d\": not found" Jan 17 00:45:58.620446 kubelet[2687]: I0117 00:45:58.620070 2687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d615846-f58f-4539-bf98-cb835387934a" path="/var/lib/kubelet/pods/4d615846-f58f-4539-bf98-cb835387934a/volumes" Jan 17 00:45:58.622253 kubelet[2687]: I0117 00:45:58.622227 2687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1a5e927-efee-4658-94f3-2f4ca8ae0b07" path="/var/lib/kubelet/pods/a1a5e927-efee-4658-94f3-2f4ca8ae0b07/volumes" Jan 17 00:45:58.838021 kubelet[2687]: E0117 00:45:58.837925 2687 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 17 00:45:59.047120 sshd[4266]: pam_unix(sshd:session): session closed for user core Jan 17 00:45:59.057973 systemd[1]: sshd@24-10.243.73.150:22-20.161.92.111:46602.service: Deactivated successfully. Jan 17 00:45:59.061518 systemd[1]: session-27.scope: Deactivated successfully. Jan 17 00:45:59.061952 systemd[1]: session-27.scope: Consumed 1.061s CPU time. Jan 17 00:45:59.063707 systemd-logind[1490]: Session 27 logged out. 
Waiting for processes to exit. Jan 17 00:45:59.065695 systemd-logind[1490]: Removed session 27. Jan 17 00:45:59.159701 systemd[1]: Started sshd@25-10.243.73.150:22-20.161.92.111:46618.service - OpenSSH per-connection server daemon (20.161.92.111:46618). Jan 17 00:45:59.732841 sshd[4427]: Accepted publickey for core from 20.161.92.111 port 46618 ssh2: RSA SHA256:e7YTQZHggQ0j4O1p7twKFyXfxguBGEIbATr9At9uxuc Jan 17 00:45:59.735235 sshd[4427]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:45:59.743441 systemd-logind[1490]: New session 28 of user core. Jan 17 00:45:59.748630 systemd[1]: Started session-28.scope - Session 28 of User core. Jan 17 00:46:00.724156 kubelet[2687]: I0117 00:46:00.724068 2687 memory_manager.go:355] "RemoveStaleState removing state" podUID="4d615846-f58f-4539-bf98-cb835387934a" containerName="cilium-operator" Jan 17 00:46:00.724156 kubelet[2687]: I0117 00:46:00.724117 2687 memory_manager.go:355] "RemoveStaleState removing state" podUID="a1a5e927-efee-4658-94f3-2f4ca8ae0b07" containerName="cilium-agent" Jan 17 00:46:00.769322 sshd[4427]: pam_unix(sshd:session): session closed for user core Jan 17 00:46:00.781972 systemd[1]: sshd@25-10.243.73.150:22-20.161.92.111:46618.service: Deactivated successfully. Jan 17 00:46:00.788336 systemd[1]: session-28.scope: Deactivated successfully. Jan 17 00:46:00.800594 systemd-logind[1490]: Session 28 logged out. Waiting for processes to exit. Jan 17 00:46:00.801253 systemd[1]: Created slice kubepods-burstable-poda12bbe57_ea76_4eb9_951f_038a349b2003.slice - libcontainer container kubepods-burstable-poda12bbe57_ea76_4eb9_951f_038a349b2003.slice. Jan 17 00:46:00.803858 systemd-logind[1490]: Removed session 28. 
Jan 17 00:46:00.813849 kubelet[2687]: I0117 00:46:00.813637 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a12bbe57-ea76-4eb9-951f-038a349b2003-lib-modules\") pod \"cilium-k5tv5\" (UID: \"a12bbe57-ea76-4eb9-951f-038a349b2003\") " pod="kube-system/cilium-k5tv5"
Jan 17 00:46:00.813849 kubelet[2687]: I0117 00:46:00.813731 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a12bbe57-ea76-4eb9-951f-038a349b2003-xtables-lock\") pod \"cilium-k5tv5\" (UID: \"a12bbe57-ea76-4eb9-951f-038a349b2003\") " pod="kube-system/cilium-k5tv5"
Jan 17 00:46:00.814741 kubelet[2687]: I0117 00:46:00.813808 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/a12bbe57-ea76-4eb9-951f-038a349b2003-cilium-ipsec-secrets\") pod \"cilium-k5tv5\" (UID: \"a12bbe57-ea76-4eb9-951f-038a349b2003\") " pod="kube-system/cilium-k5tv5"
Jan 17 00:46:00.814919 kubelet[2687]: I0117 00:46:00.814871 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqfj8\" (UniqueName: \"kubernetes.io/projected/a12bbe57-ea76-4eb9-951f-038a349b2003-kube-api-access-hqfj8\") pod \"cilium-k5tv5\" (UID: \"a12bbe57-ea76-4eb9-951f-038a349b2003\") " pod="kube-system/cilium-k5tv5"
Jan 17 00:46:00.815056 kubelet[2687]: I0117 00:46:00.815033 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a12bbe57-ea76-4eb9-951f-038a349b2003-hostproc\") pod \"cilium-k5tv5\" (UID: \"a12bbe57-ea76-4eb9-951f-038a349b2003\") " pod="kube-system/cilium-k5tv5"
Jan 17 00:46:00.815212 kubelet[2687]: I0117 00:46:00.815189 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a12bbe57-ea76-4eb9-951f-038a349b2003-hubble-tls\") pod \"cilium-k5tv5\" (UID: \"a12bbe57-ea76-4eb9-951f-038a349b2003\") " pod="kube-system/cilium-k5tv5"
Jan 17 00:46:00.815555 kubelet[2687]: I0117 00:46:00.815502 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a12bbe57-ea76-4eb9-951f-038a349b2003-cni-path\") pod \"cilium-k5tv5\" (UID: \"a12bbe57-ea76-4eb9-951f-038a349b2003\") " pod="kube-system/cilium-k5tv5"
Jan 17 00:46:00.815945 kubelet[2687]: I0117 00:46:00.815715 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a12bbe57-ea76-4eb9-951f-038a349b2003-bpf-maps\") pod \"cilium-k5tv5\" (UID: \"a12bbe57-ea76-4eb9-951f-038a349b2003\") " pod="kube-system/cilium-k5tv5"
Jan 17 00:46:00.816286 kubelet[2687]: I0117 00:46:00.816097 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a12bbe57-ea76-4eb9-951f-038a349b2003-cilium-config-path\") pod \"cilium-k5tv5\" (UID: \"a12bbe57-ea76-4eb9-951f-038a349b2003\") " pod="kube-system/cilium-k5tv5"
Jan 17 00:46:00.816286 kubelet[2687]: I0117 00:46:00.816179 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a12bbe57-ea76-4eb9-951f-038a349b2003-clustermesh-secrets\") pod \"cilium-k5tv5\" (UID: \"a12bbe57-ea76-4eb9-951f-038a349b2003\") " pod="kube-system/cilium-k5tv5"
Jan 17 00:46:00.816286 kubelet[2687]: I0117 00:46:00.816210 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a12bbe57-ea76-4eb9-951f-038a349b2003-cilium-run\") pod \"cilium-k5tv5\" (UID: \"a12bbe57-ea76-4eb9-951f-038a349b2003\") " pod="kube-system/cilium-k5tv5"
Jan 17 00:46:00.816286 kubelet[2687]: I0117 00:46:00.816250 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a12bbe57-ea76-4eb9-951f-038a349b2003-etc-cni-netd\") pod \"cilium-k5tv5\" (UID: \"a12bbe57-ea76-4eb9-951f-038a349b2003\") " pod="kube-system/cilium-k5tv5"
Jan 17 00:46:00.817751 kubelet[2687]: I0117 00:46:00.816290 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a12bbe57-ea76-4eb9-951f-038a349b2003-cilium-cgroup\") pod \"cilium-k5tv5\" (UID: \"a12bbe57-ea76-4eb9-951f-038a349b2003\") " pod="kube-system/cilium-k5tv5"
Jan 17 00:46:00.817751 kubelet[2687]: I0117 00:46:00.816611 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a12bbe57-ea76-4eb9-951f-038a349b2003-host-proc-sys-net\") pod \"cilium-k5tv5\" (UID: \"a12bbe57-ea76-4eb9-951f-038a349b2003\") " pod="kube-system/cilium-k5tv5"
Jan 17 00:46:00.817751 kubelet[2687]: I0117 00:46:00.816663 2687 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a12bbe57-ea76-4eb9-951f-038a349b2003-host-proc-sys-kernel\") pod \"cilium-k5tv5\" (UID: \"a12bbe57-ea76-4eb9-951f-038a349b2003\") " pod="kube-system/cilium-k5tv5"
Jan 17 00:46:00.872843 systemd[1]: Started sshd@26-10.243.73.150:22-20.161.92.111:46634.service - OpenSSH per-connection server daemon (20.161.92.111:46634).
Jan 17 00:46:01.122039 containerd[1518]: time="2026-01-17T00:46:01.121892539Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-k5tv5,Uid:a12bbe57-ea76-4eb9-951f-038a349b2003,Namespace:kube-system,Attempt:0,}"
Jan 17 00:46:01.175247 containerd[1518]: time="2026-01-17T00:46:01.175113357Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 17 00:46:01.175511 containerd[1518]: time="2026-01-17T00:46:01.175265505Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 17 00:46:01.175511 containerd[1518]: time="2026-01-17T00:46:01.175349549Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 00:46:01.175738 containerd[1518]: time="2026-01-17T00:46:01.175562411Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 00:46:01.211580 systemd[1]: Started cri-containerd-09cb385ecfc9fe1546aef1980d2a2aad6cc56bce74e310c91dae7a57d76c03ac.scope - libcontainer container 09cb385ecfc9fe1546aef1980d2a2aad6cc56bce74e310c91dae7a57d76c03ac.
Jan 17 00:46:01.256110 containerd[1518]: time="2026-01-17T00:46:01.256045617Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-k5tv5,Uid:a12bbe57-ea76-4eb9-951f-038a349b2003,Namespace:kube-system,Attempt:0,} returns sandbox id \"09cb385ecfc9fe1546aef1980d2a2aad6cc56bce74e310c91dae7a57d76c03ac\""
Jan 17 00:46:01.273160 containerd[1518]: time="2026-01-17T00:46:01.272476501Z" level=info msg="CreateContainer within sandbox \"09cb385ecfc9fe1546aef1980d2a2aad6cc56bce74e310c91dae7a57d76c03ac\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 17 00:46:01.291382 containerd[1518]: time="2026-01-17T00:46:01.291327848Z" level=info msg="CreateContainer within sandbox \"09cb385ecfc9fe1546aef1980d2a2aad6cc56bce74e310c91dae7a57d76c03ac\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"8ac9ad1beb6dc153a33e086676cefafea2fcf30e3feb49cd6d60cd5db81d157e\""
Jan 17 00:46:01.294844 containerd[1518]: time="2026-01-17T00:46:01.293574945Z" level=info msg="StartContainer for \"8ac9ad1beb6dc153a33e086676cefafea2fcf30e3feb49cd6d60cd5db81d157e\""
Jan 17 00:46:01.342710 systemd[1]: Started cri-containerd-8ac9ad1beb6dc153a33e086676cefafea2fcf30e3feb49cd6d60cd5db81d157e.scope - libcontainer container 8ac9ad1beb6dc153a33e086676cefafea2fcf30e3feb49cd6d60cd5db81d157e.
Jan 17 00:46:01.382901 containerd[1518]: time="2026-01-17T00:46:01.381781913Z" level=info msg="StartContainer for \"8ac9ad1beb6dc153a33e086676cefafea2fcf30e3feb49cd6d60cd5db81d157e\" returns successfully"
Jan 17 00:46:01.406002 systemd[1]: cri-containerd-8ac9ad1beb6dc153a33e086676cefafea2fcf30e3feb49cd6d60cd5db81d157e.scope: Deactivated successfully.
Jan 17 00:46:01.442528 sshd[4439]: Accepted publickey for core from 20.161.92.111 port 46634 ssh2: RSA SHA256:e7YTQZHggQ0j4O1p7twKFyXfxguBGEIbATr9At9uxuc
Jan 17 00:46:01.446871 sshd[4439]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:46:01.455823 systemd-logind[1490]: New session 29 of user core.
Jan 17 00:46:01.460210 containerd[1518]: time="2026-01-17T00:46:01.459960445Z" level=info msg="shim disconnected" id=8ac9ad1beb6dc153a33e086676cefafea2fcf30e3feb49cd6d60cd5db81d157e namespace=k8s.io
Jan 17 00:46:01.460210 containerd[1518]: time="2026-01-17T00:46:01.460171527Z" level=warning msg="cleaning up after shim disconnected" id=8ac9ad1beb6dc153a33e086676cefafea2fcf30e3feb49cd6d60cd5db81d157e namespace=k8s.io
Jan 17 00:46:01.460556 containerd[1518]: time="2026-01-17T00:46:01.460431092Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 00:46:01.462509 systemd[1]: Started session-29.scope - Session 29 of User core.
Jan 17 00:46:01.845643 sshd[4439]: pam_unix(sshd:session): session closed for user core
Jan 17 00:46:01.852554 systemd-logind[1490]: Session 29 logged out. Waiting for processes to exit.
Jan 17 00:46:01.853833 systemd[1]: sshd@26-10.243.73.150:22-20.161.92.111:46634.service: Deactivated successfully.
Jan 17 00:46:01.857769 systemd[1]: session-29.scope: Deactivated successfully.
Jan 17 00:46:01.859506 systemd-logind[1490]: Removed session 29.
Jan 17 00:46:01.951710 systemd[1]: Started sshd@27-10.243.73.150:22-20.161.92.111:46640.service - OpenSSH per-connection server daemon (20.161.92.111:46640).
Jan 17 00:46:01.971375 kubelet[2687]: I0117 00:46:01.969850 2687 setters.go:602] "Node became not ready" node="srv-jwpu3.gb1.brightbox.com" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-17T00:46:01Z","lastTransitionTime":"2026-01-17T00:46:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jan 17 00:46:02.335959 containerd[1518]: time="2026-01-17T00:46:02.335893358Z" level=info msg="CreateContainer within sandbox \"09cb385ecfc9fe1546aef1980d2a2aad6cc56bce74e310c91dae7a57d76c03ac\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 17 00:46:02.364874 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount971523184.mount: Deactivated successfully.
Jan 17 00:46:02.367719 containerd[1518]: time="2026-01-17T00:46:02.367018893Z" level=info msg="CreateContainer within sandbox \"09cb385ecfc9fe1546aef1980d2a2aad6cc56bce74e310c91dae7a57d76c03ac\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"02620ac81836a939b3bcdba816ac8c43983facefce80dfe0bc80947befec1139\""
Jan 17 00:46:02.370922 containerd[1518]: time="2026-01-17T00:46:02.369167388Z" level=info msg="StartContainer for \"02620ac81836a939b3bcdba816ac8c43983facefce80dfe0bc80947befec1139\""
Jan 17 00:46:02.441024 systemd[1]: Started cri-containerd-02620ac81836a939b3bcdba816ac8c43983facefce80dfe0bc80947befec1139.scope - libcontainer container 02620ac81836a939b3bcdba816ac8c43983facefce80dfe0bc80947befec1139.
Jan 17 00:46:02.487439 containerd[1518]: time="2026-01-17T00:46:02.487149011Z" level=info msg="StartContainer for \"02620ac81836a939b3bcdba816ac8c43983facefce80dfe0bc80947befec1139\" returns successfully"
Jan 17 00:46:02.500258 systemd[1]: cri-containerd-02620ac81836a939b3bcdba816ac8c43983facefce80dfe0bc80947befec1139.scope: Deactivated successfully.
Jan 17 00:46:02.531678 containerd[1518]: time="2026-01-17T00:46:02.531384005Z" level=info msg="shim disconnected" id=02620ac81836a939b3bcdba816ac8c43983facefce80dfe0bc80947befec1139 namespace=k8s.io
Jan 17 00:46:02.531678 containerd[1518]: time="2026-01-17T00:46:02.531459929Z" level=warning msg="cleaning up after shim disconnected" id=02620ac81836a939b3bcdba816ac8c43983facefce80dfe0bc80947befec1139 namespace=k8s.io
Jan 17 00:46:02.531678 containerd[1518]: time="2026-01-17T00:46:02.531475868Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 00:46:02.541606 sshd[4553]: Accepted publickey for core from 20.161.92.111 port 46640 ssh2: RSA SHA256:e7YTQZHggQ0j4O1p7twKFyXfxguBGEIbATr9At9uxuc
Jan 17 00:46:02.544943 sshd[4553]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:46:02.557277 systemd-logind[1490]: New session 30 of user core.
Jan 17 00:46:02.561541 systemd[1]: Started session-30.scope - Session 30 of User core.
Jan 17 00:46:02.933932 systemd[1]: run-containerd-runc-k8s.io-02620ac81836a939b3bcdba816ac8c43983facefce80dfe0bc80947befec1139-runc.eLyQbU.mount: Deactivated successfully.
Jan 17 00:46:02.934133 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-02620ac81836a939b3bcdba816ac8c43983facefce80dfe0bc80947befec1139-rootfs.mount: Deactivated successfully.
Jan 17 00:46:03.333977 containerd[1518]: time="2026-01-17T00:46:03.333909437Z" level=info msg="CreateContainer within sandbox \"09cb385ecfc9fe1546aef1980d2a2aad6cc56bce74e310c91dae7a57d76c03ac\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 17 00:46:03.364428 containerd[1518]: time="2026-01-17T00:46:03.364354186Z" level=info msg="CreateContainer within sandbox \"09cb385ecfc9fe1546aef1980d2a2aad6cc56bce74e310c91dae7a57d76c03ac\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"c7acea901e98dafc600b705e2fe0ac503f4d7da8c9df1188c58c9b0f0111bde8\""
Jan 17 00:46:03.365775 containerd[1518]: time="2026-01-17T00:46:03.365339556Z" level=info msg="StartContainer for \"c7acea901e98dafc600b705e2fe0ac503f4d7da8c9df1188c58c9b0f0111bde8\""
Jan 17 00:46:03.427610 systemd[1]: Started cri-containerd-c7acea901e98dafc600b705e2fe0ac503f4d7da8c9df1188c58c9b0f0111bde8.scope - libcontainer container c7acea901e98dafc600b705e2fe0ac503f4d7da8c9df1188c58c9b0f0111bde8.
Jan 17 00:46:03.489489 containerd[1518]: time="2026-01-17T00:46:03.488954175Z" level=info msg="StartContainer for \"c7acea901e98dafc600b705e2fe0ac503f4d7da8c9df1188c58c9b0f0111bde8\" returns successfully"
Jan 17 00:46:03.498237 systemd[1]: cri-containerd-c7acea901e98dafc600b705e2fe0ac503f4d7da8c9df1188c58c9b0f0111bde8.scope: Deactivated successfully.
Jan 17 00:46:03.531424 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c7acea901e98dafc600b705e2fe0ac503f4d7da8c9df1188c58c9b0f0111bde8-rootfs.mount: Deactivated successfully.
Jan 17 00:46:03.534700 containerd[1518]: time="2026-01-17T00:46:03.534606086Z" level=info msg="shim disconnected" id=c7acea901e98dafc600b705e2fe0ac503f4d7da8c9df1188c58c9b0f0111bde8 namespace=k8s.io
Jan 17 00:46:03.534842 containerd[1518]: time="2026-01-17T00:46:03.534702211Z" level=warning msg="cleaning up after shim disconnected" id=c7acea901e98dafc600b705e2fe0ac503f4d7da8c9df1188c58c9b0f0111bde8 namespace=k8s.io
Jan 17 00:46:03.534842 containerd[1518]: time="2026-01-17T00:46:03.534718242Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 00:46:03.840211 kubelet[2687]: E0117 00:46:03.840093 2687 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 17 00:46:04.340634 containerd[1518]: time="2026-01-17T00:46:04.339607645Z" level=info msg="CreateContainer within sandbox \"09cb385ecfc9fe1546aef1980d2a2aad6cc56bce74e310c91dae7a57d76c03ac\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 17 00:46:04.366383 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3390637340.mount: Deactivated successfully.
Jan 17 00:46:04.376047 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4251433110.mount: Deactivated successfully.
Jan 17 00:46:04.384904 containerd[1518]: time="2026-01-17T00:46:04.384711338Z" level=info msg="CreateContainer within sandbox \"09cb385ecfc9fe1546aef1980d2a2aad6cc56bce74e310c91dae7a57d76c03ac\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"e90d6988f0593e9b7aaf03f624e8c7ec3091aa3a71b0956e104c44de7671132e\""
Jan 17 00:46:04.387468 containerd[1518]: time="2026-01-17T00:46:04.387254155Z" level=info msg="StartContainer for \"e90d6988f0593e9b7aaf03f624e8c7ec3091aa3a71b0956e104c44de7671132e\""
Jan 17 00:46:04.449683 systemd[1]: Started cri-containerd-e90d6988f0593e9b7aaf03f624e8c7ec3091aa3a71b0956e104c44de7671132e.scope - libcontainer container e90d6988f0593e9b7aaf03f624e8c7ec3091aa3a71b0956e104c44de7671132e.
Jan 17 00:46:04.493373 systemd[1]: cri-containerd-e90d6988f0593e9b7aaf03f624e8c7ec3091aa3a71b0956e104c44de7671132e.scope: Deactivated successfully.
Jan 17 00:46:04.503160 containerd[1518]: time="2026-01-17T00:46:04.499971881Z" level=info msg="StartContainer for \"e90d6988f0593e9b7aaf03f624e8c7ec3091aa3a71b0956e104c44de7671132e\" returns successfully"
Jan 17 00:46:04.504561 containerd[1518]: time="2026-01-17T00:46:04.495184789Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda12bbe57_ea76_4eb9_951f_038a349b2003.slice/cri-containerd-e90d6988f0593e9b7aaf03f624e8c7ec3091aa3a71b0956e104c44de7671132e.scope/memory.events\": no such file or directory"
Jan 17 00:46:04.537571 containerd[1518]: time="2026-01-17T00:46:04.537445222Z" level=info msg="shim disconnected" id=e90d6988f0593e9b7aaf03f624e8c7ec3091aa3a71b0956e104c44de7671132e namespace=k8s.io
Jan 17 00:46:04.537886 containerd[1518]: time="2026-01-17T00:46:04.537589212Z" level=warning msg="cleaning up after shim disconnected" id=e90d6988f0593e9b7aaf03f624e8c7ec3091aa3a71b0956e104c44de7671132e namespace=k8s.io
Jan 17 00:46:04.537886 containerd[1518]: time="2026-01-17T00:46:04.537610930Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 00:46:05.347294 containerd[1518]: time="2026-01-17T00:46:05.346983273Z" level=info msg="CreateContainer within sandbox \"09cb385ecfc9fe1546aef1980d2a2aad6cc56bce74e310c91dae7a57d76c03ac\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 17 00:46:05.363239 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e90d6988f0593e9b7aaf03f624e8c7ec3091aa3a71b0956e104c44de7671132e-rootfs.mount: Deactivated successfully.
Jan 17 00:46:05.407716 containerd[1518]: time="2026-01-17T00:46:05.407646246Z" level=info msg="CreateContainer within sandbox \"09cb385ecfc9fe1546aef1980d2a2aad6cc56bce74e310c91dae7a57d76c03ac\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"c882d34d26dcf57cc0b3c30b4e4e84428c05d6a6f774c6496a7d2d97a8ed7f86\""
Jan 17 00:46:05.410759 containerd[1518]: time="2026-01-17T00:46:05.410719245Z" level=info msg="StartContainer for \"c882d34d26dcf57cc0b3c30b4e4e84428c05d6a6f774c6496a7d2d97a8ed7f86\""
Jan 17 00:46:05.476697 systemd[1]: Started cri-containerd-c882d34d26dcf57cc0b3c30b4e4e84428c05d6a6f774c6496a7d2d97a8ed7f86.scope - libcontainer container c882d34d26dcf57cc0b3c30b4e4e84428c05d6a6f774c6496a7d2d97a8ed7f86.
Jan 17 00:46:05.531508 containerd[1518]: time="2026-01-17T00:46:05.530554428Z" level=info msg="StartContainer for \"c882d34d26dcf57cc0b3c30b4e4e84428c05d6a6f774c6496a7d2d97a8ed7f86\" returns successfully"
Jan 17 00:46:06.418610 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Jan 17 00:46:10.179718 systemd-networkd[1438]: lxc_health: Link UP
Jan 17 00:46:10.188999 systemd-networkd[1438]: lxc_health: Gained carrier
Jan 17 00:46:11.160371 kubelet[2687]: I0117 00:46:11.160196 2687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-k5tv5" podStartSLOduration=11.160125458 podStartE2EDuration="11.160125458s" podCreationTimestamp="2026-01-17 00:46:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:46:06.391277209 +0000 UTC m=+138.073802846" watchObservedRunningTime="2026-01-17 00:46:11.160125458 +0000 UTC m=+142.842651088"
Jan 17 00:46:11.634637 systemd-networkd[1438]: lxc_health: Gained IPv6LL
Jan 17 00:46:12.117274 systemd[1]: run-containerd-runc-k8s.io-c882d34d26dcf57cc0b3c30b4e4e84428c05d6a6f774c6496a7d2d97a8ed7f86-runc.uY5rqB.mount: Deactivated successfully.
Jan 17 00:46:16.848859 sshd[4553]: pam_unix(sshd:session): session closed for user core
Jan 17 00:46:16.863277 systemd[1]: sshd@27-10.243.73.150:22-20.161.92.111:46640.service: Deactivated successfully.
Jan 17 00:46:16.868914 systemd[1]: session-30.scope: Deactivated successfully.
Jan 17 00:46:16.871369 systemd-logind[1490]: Session 30 logged out. Waiting for processes to exit.
Jan 17 00:46:16.873285 systemd-logind[1490]: Removed session 30.
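The log above interleaves several short SSH sessions (27 through 30) with the cilium-k5tv5 pod's init-container lifecycle. A minimal sketch, not part of the log itself, of pairing the systemd-logind "New session"/"Removed session" events out of journal text of this shape; the helper name `session_events` and the sample lines are taken from the entries above:

```python
import re

# Patterns matching the systemd-logind lines seen in this journal.
OPEN_RE = re.compile(r"New session (\d+) of user (\w+)")
CLOSE_RE = re.compile(r"Removed session (\d+)")

def session_events(lines):
    """Return ('open'|'close', session_id, user_or_None) tuples in log order."""
    events = []
    for line in lines:
        m = OPEN_RE.search(line)
        if m:
            events.append(("open", int(m.group(1)), m.group(2)))
            continue
        m = CLOSE_RE.search(line)
        if m:
            events.append(("close", int(m.group(1)), None))
    return events

# Sample lines copied from the journal above.
sample = [
    "Jan 17 00:45:59.743441 systemd-logind[1490]: New session 28 of user core.",
    "Jan 17 00:46:00.803858 systemd-logind[1490]: Removed session 28.",
]
print(session_events(sample))  # [('open', 28, 'core'), ('close', 28, None)]
```

Events arriving in log order means open/close pairs can be matched by session id; here session 28 opens at 00:45:59 and closes about a second later, consistent with the short-lived sessions throughout this excerpt.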