Mar 14 00:49:54.980266 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Mar 13 22:25:24 -00 2026
Mar 14 00:49:54.980302 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=06bcfe4e320f7b61768d05159b69b4eeccebc9d161fb2cdaf8d6998ab1e14ac7
Mar 14 00:49:54.980313 kernel: BIOS-provided physical RAM map:
Mar 14 00:49:54.980324 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Mar 14 00:49:54.980331 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Mar 14 00:49:54.980338 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Mar 14 00:49:54.980347 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable
Mar 14 00:49:54.980355 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved
Mar 14 00:49:54.980362 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Mar 14 00:49:54.980370 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Mar 14 00:49:54.980377 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Mar 14 00:49:54.980385 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Mar 14 00:49:54.980396 kernel: NX (Execute Disable) protection: active
Mar 14 00:49:54.980404 kernel: APIC: Static calls initialized
Mar 14 00:49:54.980414 kernel: SMBIOS 2.8 present.
Mar 14 00:49:54.980422 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014
Mar 14 00:49:54.980431 kernel: Hypervisor detected: KVM
Mar 14 00:49:54.980443 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 14 00:49:54.980452 kernel: kvm-clock: using sched offset of 3883660904 cycles
Mar 14 00:49:54.980461 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 14 00:49:54.980470 kernel: tsc: Detected 2294.576 MHz processor
Mar 14 00:49:54.980479 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 14 00:49:54.980488 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 14 00:49:54.980497 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000
Mar 14 00:49:54.980506 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Mar 14 00:49:54.980514 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 14 00:49:54.980526 kernel: Using GB pages for direct mapping
Mar 14 00:49:54.980535 kernel: ACPI: Early table checksum verification disabled
Mar 14 00:49:54.980550 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Mar 14 00:49:54.980559 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 14 00:49:54.980568 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 14 00:49:54.980576 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 14 00:49:54.980585 kernel: ACPI: FACS 0x000000007FFDFD40 000040
Mar 14 00:49:54.980594 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 14 00:49:54.980602 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 14 00:49:54.980614 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 14 00:49:54.980623 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 14 00:49:54.980632 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480]
Mar 14 00:49:54.980640 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c]
Mar 14 00:49:54.980649 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f]
Mar 14 00:49:54.980662 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570]
Mar 14 00:49:54.980672 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740]
Mar 14 00:49:54.980684 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c]
Mar 14 00:49:54.983114 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4]
Mar 14 00:49:54.983166 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Mar 14 00:49:54.983195 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Mar 14 00:49:54.983241 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0
Mar 14 00:49:54.983315 kernel: SRAT: PXM 0 -> APIC 0x03 -> Node 0
Mar 14 00:49:54.983341 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0
Mar 14 00:49:54.983366 kernel: SRAT: PXM 0 -> APIC 0x05 -> Node 0
Mar 14 00:49:54.983437 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0
Mar 14 00:49:54.983463 kernel: SRAT: PXM 0 -> APIC 0x07 -> Node 0
Mar 14 00:49:54.983488 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0
Mar 14 00:49:54.983513 kernel: SRAT: PXM 0 -> APIC 0x09 -> Node 0
Mar 14 00:49:54.983538 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0
Mar 14 00:49:54.983598 kernel: SRAT: PXM 0 -> APIC 0x0b -> Node 0
Mar 14 00:49:54.983623 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0
Mar 14 00:49:54.983648 kernel: SRAT: PXM 0 -> APIC 0x0d -> Node 0
Mar 14 00:49:54.983673 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0
Mar 14 00:49:54.983769 kernel: SRAT: PXM 0 -> APIC 0x0f -> Node 0
Mar 14 00:49:54.983797 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Mar 14 00:49:54.983823 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Mar 14 00:49:54.983848 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug
Mar 14 00:49:54.983875 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00000000-0x7ffdbfff]
Mar 14 00:49:54.983901 kernel: NODE_DATA(0) allocated [mem 0x7ffd6000-0x7ffdbfff]
Mar 14 00:49:54.983928 kernel: Zone ranges:
Mar 14 00:49:54.983954 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 14 00:49:54.983979 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff]
Mar 14 00:49:54.984013 kernel: Normal empty
Mar 14 00:49:54.984038 kernel: Movable zone start for each node
Mar 14 00:49:54.984064 kernel: Early memory node ranges
Mar 14 00:49:54.984089 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Mar 14 00:49:54.984115 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff]
Mar 14 00:49:54.984140 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff]
Mar 14 00:49:54.984166 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 14 00:49:54.984191 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Mar 14 00:49:54.984217 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges
Mar 14 00:49:54.984243 kernel: ACPI: PM-Timer IO Port: 0x608
Mar 14 00:49:54.984280 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 14 00:49:54.984305 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Mar 14 00:49:54.984331 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Mar 14 00:49:54.984357 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 14 00:49:54.984382 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 14 00:49:54.984408 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 14 00:49:54.984433 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 14 00:49:54.984458 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 14 00:49:54.984484 kernel: TSC deadline timer available
Mar 14 00:49:54.984566 kernel: smpboot: Allowing 16 CPUs, 14 hotplug CPUs
Mar 14 00:49:54.984592 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Mar 14 00:49:54.984618 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Mar 14 00:49:54.984643 kernel: Booting paravirtualized kernel on KVM
Mar 14 00:49:54.984669 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 14 00:49:54.986984 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1
Mar 14 00:49:54.987031 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u262144
Mar 14 00:49:54.987059 kernel: pcpu-alloc: s196328 r8192 d28952 u262144 alloc=1*2097152
Mar 14 00:49:54.987085 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Mar 14 00:49:54.987121 kernel: kvm-guest: PV spinlocks enabled
Mar 14 00:49:54.987148 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Mar 14 00:49:54.987177 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=06bcfe4e320f7b61768d05159b69b4eeccebc9d161fb2cdaf8d6998ab1e14ac7
Mar 14 00:49:54.987206 kernel: random: crng init done
Mar 14 00:49:54.987232 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 14 00:49:54.987258 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Mar 14 00:49:54.987284 kernel: Fallback order for Node 0: 0
Mar 14 00:49:54.987310 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515804
Mar 14 00:49:54.987341 kernel: Policy zone: DMA32
Mar 14 00:49:54.987367 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 14 00:49:54.987393 kernel: software IO TLB: area num 16.
Mar 14 00:49:54.987420 kernel: Memory: 1901604K/2096616K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 194752K reserved, 0K cma-reserved)
Mar 14 00:49:54.987446 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Mar 14 00:49:54.987472 kernel: ftrace: allocating 37996 entries in 149 pages
Mar 14 00:49:54.987498 kernel: ftrace: allocated 149 pages with 4 groups
Mar 14 00:49:54.987523 kernel: Dynamic Preempt: voluntary
Mar 14 00:49:54.987564 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 14 00:49:54.987598 kernel: rcu: RCU event tracing is enabled.
Mar 14 00:49:54.987625 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Mar 14 00:49:54.987651 kernel: Trampoline variant of Tasks RCU enabled.
Mar 14 00:49:54.987677 kernel: Rude variant of Tasks RCU enabled.
Mar 14 00:49:54.987741 kernel: Tracing variant of Tasks RCU enabled.
Mar 14 00:49:54.987794 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 14 00:49:54.987822 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Mar 14 00:49:54.987849 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16
Mar 14 00:49:54.987876 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 14 00:49:54.987903 kernel: Console: colour VGA+ 80x25
Mar 14 00:49:54.987930 kernel: printk: console [tty0] enabled
Mar 14 00:49:54.987957 kernel: printk: console [ttyS0] enabled
Mar 14 00:49:54.987991 kernel: ACPI: Core revision 20230628
Mar 14 00:49:54.988019 kernel: APIC: Switch to symmetric I/O mode setup
Mar 14 00:49:54.988046 kernel: x2apic enabled
Mar 14 00:49:54.988073 kernel: APIC: Switched APIC routing to: physical x2apic
Mar 14 00:49:54.988102 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2113312ac93, max_idle_ns: 440795244843 ns
Mar 14 00:49:54.988135 kernel: Calibrating delay loop (skipped) preset value.. 4589.15 BogoMIPS (lpj=2294576)
Mar 14 00:49:54.988162 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Mar 14 00:49:54.988190 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Mar 14 00:49:54.988217 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Mar 14 00:49:54.988244 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 14 00:49:54.988271 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Mar 14 00:49:54.988297 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Mar 14 00:49:54.988325 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS
Mar 14 00:49:54.988352 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT
Mar 14 00:49:54.988379 kernel: RETBleed: Mitigation: Enhanced IBRS
Mar 14 00:49:54.988412 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Mar 14 00:49:54.988440 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Mar 14 00:49:54.988466 kernel: TAA: Mitigation: Clear CPU buffers
Mar 14 00:49:54.988493 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Mar 14 00:49:54.988520 kernel: GDS: Unknown: Dependent on hypervisor status
Mar 14 00:49:54.988591 kernel: active return thunk: its_return_thunk
Mar 14 00:49:54.988628 kernel: ITS: Mitigation: Aligned branch/return thunks
Mar 14 00:49:54.988654 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 14 00:49:54.988681 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 14 00:49:54.990158 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 14 00:49:54.990190 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Mar 14 00:49:54.990227 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Mar 14 00:49:54.990254 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Mar 14 00:49:54.990281 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Mar 14 00:49:54.990308 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 14 00:49:54.990335 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Mar 14 00:49:54.990362 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Mar 14 00:49:54.990389 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Mar 14 00:49:54.990415 kernel: x86/fpu: xstate_offset[9]: 2432, xstate_sizes[9]: 8
Mar 14 00:49:54.990442 kernel: x86/fpu: Enabled xstate features 0x2e7, context size is 2440 bytes, using 'compacted' format.
Mar 14 00:49:54.990470 kernel: Freeing SMP alternatives memory: 32K
Mar 14 00:49:54.990497 kernel: pid_max: default: 32768 minimum: 301
Mar 14 00:49:54.990524 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 14 00:49:54.990572 kernel: landlock: Up and running.
Mar 14 00:49:54.990599 kernel: SELinux: Initializing.
Mar 14 00:49:54.990626 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Mar 14 00:49:54.990653 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Mar 14 00:49:54.990680 kernel: smpboot: CPU0: Intel Xeon Processor (Cascadelake) (family: 0x6, model: 0x55, stepping: 0x6)
Mar 14 00:49:54.990733 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Mar 14 00:49:54.990761 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Mar 14 00:49:54.990789 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Mar 14 00:49:54.990817 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Mar 14 00:49:54.990932 kernel: signal: max sigframe size: 3632
Mar 14 00:49:54.990960 kernel: rcu: Hierarchical SRCU implementation.
Mar 14 00:49:54.990988 kernel: rcu: Max phase no-delay instances is 400.
Mar 14 00:49:54.991016 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Mar 14 00:49:54.991043 kernel: smp: Bringing up secondary CPUs ...
Mar 14 00:49:54.991070 kernel: smpboot: x86: Booting SMP configuration:
Mar 14 00:49:54.991097 kernel: .... node #0, CPUs: #1
Mar 14 00:49:54.991124 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1
Mar 14 00:49:54.991151 kernel: smp: Brought up 1 node, 2 CPUs
Mar 14 00:49:54.991178 kernel: smpboot: Max logical packages: 16
Mar 14 00:49:54.991212 kernel: smpboot: Total of 2 processors activated (9178.30 BogoMIPS)
Mar 14 00:49:54.991240 kernel: devtmpfs: initialized
Mar 14 00:49:54.991279 kernel: x86/mm: Memory block size: 128MB
Mar 14 00:49:54.991306 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 14 00:49:54.991333 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Mar 14 00:49:54.991360 kernel: pinctrl core: initialized pinctrl subsystem
Mar 14 00:49:54.991388 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 14 00:49:54.991415 kernel: audit: initializing netlink subsys (disabled)
Mar 14 00:49:54.991443 kernel: audit: type=2000 audit(1773449393.896:1): state=initialized audit_enabled=0 res=1
Mar 14 00:49:54.991476 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 14 00:49:54.991503 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 14 00:49:54.991530 kernel: cpuidle: using governor menu
Mar 14 00:49:54.991598 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 14 00:49:54.991626 kernel: dca service started, version 1.12.1
Mar 14 00:49:54.994778 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Mar 14 00:49:54.994810 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Mar 14 00:49:54.994838 kernel: PCI: Using configuration type 1 for base access
Mar 14 00:49:54.994866 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 14 00:49:54.994904 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 14 00:49:54.994931 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Mar 14 00:49:54.994958 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 14 00:49:54.994985 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Mar 14 00:49:54.995012 kernel: ACPI: Added _OSI(Module Device)
Mar 14 00:49:54.995039 kernel: ACPI: Added _OSI(Processor Device)
Mar 14 00:49:54.995067 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 14 00:49:54.995094 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 14 00:49:54.995121 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Mar 14 00:49:54.995155 kernel: ACPI: Interpreter enabled
Mar 14 00:49:54.995182 kernel: ACPI: PM: (supports S0 S5)
Mar 14 00:49:54.995208 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 14 00:49:54.995236 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 14 00:49:54.995263 kernel: PCI: Using E820 reservations for host bridge windows
Mar 14 00:49:54.995310 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Mar 14 00:49:54.995407 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 14 00:49:54.996029 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 14 00:49:54.996344 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Mar 14 00:49:54.996630 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Mar 14 00:49:54.996667 kernel: PCI host bridge to bus 0000:00
Mar 14 00:49:54.998867 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 14 00:49:54.998979 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 14 00:49:54.999081 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 14 00:49:54.999181 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Mar 14 00:49:54.999287 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Mar 14 00:49:54.999385 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window]
Mar 14 00:49:54.999484 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 14 00:49:54.999626 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Mar 14 00:49:54.999782 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000
Mar 14 00:49:54.999893 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfa000000-0xfbffffff pref]
Mar 14 00:49:55.000007 kernel: pci 0000:00:01.0: reg 0x14: [mem 0xfea50000-0xfea50fff]
Mar 14 00:49:55.000114 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea40000-0xfea4ffff pref]
Mar 14 00:49:55.000223 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Mar 14 00:49:55.000342 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Mar 14 00:49:55.000456 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea51000-0xfea51fff]
Mar 14 00:49:55.000582 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Mar 14 00:49:55.002768 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea52000-0xfea52fff]
Mar 14 00:49:55.002946 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Mar 14 00:49:55.003066 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea53000-0xfea53fff]
Mar 14 00:49:55.003191 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Mar 14 00:49:55.003311 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea54000-0xfea54fff]
Mar 14 00:49:55.003432 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Mar 14 00:49:55.003555 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea55000-0xfea55fff]
Mar 14 00:49:55.003680 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Mar 14 00:49:55.006597 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea56000-0xfea56fff]
Mar 14 00:49:55.006731 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Mar 14 00:49:55.006868 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea57000-0xfea57fff]
Mar 14 00:49:55.006993 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Mar 14 00:49:55.007102 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea58000-0xfea58fff]
Mar 14 00:49:55.007234 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Mar 14 00:49:55.007328 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0c0-0xc0df]
Mar 14 00:49:55.007420 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfea59000-0xfea59fff]
Mar 14 00:49:55.007514 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref]
Mar 14 00:49:55.007616 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfea00000-0xfea3ffff pref]
Mar 14 00:49:55.009764 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Mar 14 00:49:55.009881 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Mar 14 00:49:55.009986 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfea5a000-0xfea5afff]
Mar 14 00:49:55.010118 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfd004000-0xfd007fff 64bit pref]
Mar 14 00:49:55.010227 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Mar 14 00:49:55.010323 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Mar 14 00:49:55.010427 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Mar 14 00:49:55.010521 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0e0-0xc0ff]
Mar 14 00:49:55.010628 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea5b000-0xfea5bfff]
Mar 14 00:49:55.010748 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Mar 14 00:49:55.010845 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Mar 14 00:49:55.010951 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400
Mar 14 00:49:55.011050 kernel: pci 0000:01:00.0: reg 0x10: [mem 0xfda00000-0xfda000ff 64bit]
Mar 14 00:49:55.011149 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Mar 14 00:49:55.011249 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Mar 14 00:49:55.011344 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Mar 14 00:49:55.011451 kernel: pci_bus 0000:02: extended config space not accessible
Mar 14 00:49:55.011573 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000
Mar 14 00:49:55.011676 kernel: pci 0000:02:01.0: reg 0x10: [mem 0xfd800000-0xfd80000f]
Mar 14 00:49:55.013479 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Mar 14 00:49:55.013638 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Mar 14 00:49:55.013792 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330
Mar 14 00:49:55.013892 kernel: pci 0000:03:00.0: reg 0x10: [mem 0xfe800000-0xfe803fff 64bit]
Mar 14 00:49:55.013989 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Mar 14 00:49:55.014081 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Mar 14 00:49:55.014173 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Mar 14 00:49:55.014285 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00
Mar 14 00:49:55.014386 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref]
Mar 14 00:49:55.014487 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Mar 14 00:49:55.014588 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Mar 14 00:49:55.014682 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Mar 14 00:49:55.014852 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Mar 14 00:49:55.014946 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Mar 14 00:49:55.015038 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Mar 14 00:49:55.015133 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Mar 14 00:49:55.015225 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Mar 14 00:49:55.015322 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Mar 14 00:49:55.015416 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Mar 14 00:49:55.015510 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Mar 14 00:49:55.015618 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Mar 14 00:49:55.016792 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Mar 14 00:49:55.016903 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Mar 14 00:49:55.017000 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Mar 14 00:49:55.017106 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Mar 14 00:49:55.017206 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Mar 14 00:49:55.017298 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Mar 14 00:49:55.017312 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 14 00:49:55.017322 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 14 00:49:55.017332 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 14 00:49:55.017342 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 14 00:49:55.017352 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Mar 14 00:49:55.017362 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Mar 14 00:49:55.017372 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Mar 14 00:49:55.017385 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Mar 14 00:49:55.017395 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Mar 14 00:49:55.017405 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Mar 14 00:49:55.017415 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Mar 14 00:49:55.017425 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Mar 14 00:49:55.017435 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Mar 14 00:49:55.017445 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Mar 14 00:49:55.017455 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Mar 14 00:49:55.017465 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Mar 14 00:49:55.017477 kernel: iommu: Default domain type: Translated
Mar 14 00:49:55.017487 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 14 00:49:55.017497 kernel: PCI: Using ACPI for IRQ routing
Mar 14 00:49:55.017507 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 14 00:49:55.017517 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Mar 14 00:49:55.017527 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff]
Mar 14 00:49:55.017633 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Mar 14 00:49:55.017736 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Mar 14 00:49:55.017834 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Mar 14 00:49:55.017847 kernel: vgaarb: loaded
Mar 14 00:49:55.017857 kernel: clocksource: Switched to clocksource kvm-clock
Mar 14 00:49:55.017868 kernel: VFS: Disk quotas dquot_6.6.0
Mar 14 00:49:55.017924 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 14 00:49:55.017934 kernel: pnp: PnP ACPI init
Mar 14 00:49:55.018042 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Mar 14 00:49:55.018057 kernel: pnp: PnP ACPI: found 5 devices
Mar 14 00:49:55.018067 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 14 00:49:55.018082 kernel: NET: Registered PF_INET protocol family
Mar 14 00:49:55.018092 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 14 00:49:55.018102 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Mar 14 00:49:55.018113 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 14 00:49:55.018123 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Mar 14 00:49:55.018132 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Mar 14 00:49:55.018143 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Mar 14 00:49:55.018152 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Mar 14 00:49:55.018165 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Mar 14 00:49:55.018175 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 14 00:49:55.018185 kernel: NET: Registered PF_XDP protocol family
Mar 14 00:49:55.018282 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000
Mar 14 00:49:55.018380 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Mar 14 00:49:55.018479 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Mar 14 00:49:55.018585 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Mar 14 00:49:55.020760 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Mar 14 00:49:55.020894 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Mar 14 00:49:55.020997 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Mar 14 00:49:55.021102 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Mar 14 00:49:55.021221 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Mar 14 00:49:55.021317 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Mar 14 00:49:55.021414 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Mar 14 00:49:55.021513 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Mar 14 00:49:55.021645 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Mar 14 00:49:55.021769 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Mar 14 00:49:55.021923 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Mar 14 00:49:55.022019 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Mar 14 00:49:55.022163 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Mar 14 00:49:55.022262 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Mar 14 00:49:55.022357 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Mar 14 00:49:55.022456 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Mar 14 00:49:55.022558 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Mar 14 00:49:55.022656 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Mar 14 00:49:55.024793 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Mar 14 00:49:55.024899 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Mar 14 00:49:55.025037 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Mar 14 00:49:55.025142 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Mar 14 00:49:55.025239 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Mar 14 00:49:55.025334 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Mar 14 00:49:55.025428 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Mar 14 00:49:55.025528 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Mar 14 00:49:55.025633 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Mar 14 00:49:55.027790 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Mar 14 00:49:55.027901 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Mar 14 00:49:55.028005 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Mar 14 00:49:55.028102 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Mar 14 00:49:55.028196 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Mar 14 00:49:55.028291 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Mar 14 00:49:55.028385 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Mar 14 00:49:55.028478 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Mar 14 00:49:55.028588 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Mar 14 00:49:55.028687 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Mar 14 00:49:55.028805 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Mar 14 00:49:55.028900 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Mar 14 00:49:55.028998 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Mar 14 00:49:55.029092 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Mar 14 00:49:55.029185 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Mar 14 00:49:55.029280 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Mar 14 00:49:55.029376 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Mar 14 00:49:55.029469 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Mar 14 00:49:55.029573 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Mar 14 00:49:55.029670 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 14 00:49:55.029766 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 14 00:49:55.029851 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 14 00:49:55.029935 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Mar 14 00:49:55.030019 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Mar 14 00:49:55.030102 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window]
Mar 14 00:49:55.030206 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Mar 14 00:49:55.030296 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff]
Mar 14 00:49:55.030385 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Mar 14 00:49:55.030483 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff]
Mar 14 00:49:55.030590 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff]
Mar 14 00:49:55.030679 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff]
Mar 14 00:49:55.032839 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Mar 14 00:49:55.032941 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff]
Mar 14 00:49:55.033031 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff]
Mar 14 00:49:55.033118 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Mar 14 00:49:55.033218 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff]
Mar 14 00:49:55.033305 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff]
Mar 14 00:49:55.033391 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Mar 14 00:49:55.033494 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff]
Mar 14 00:49:55.033588 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff]
Mar 14 00:49:55.033683 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Mar 14 00:49:55.033793 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff]
Mar 14 00:49:55.033878 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff]
Mar 14 00:49:55.033964 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref]
Mar 14 00:49:55.034063 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff]
Mar 14 00:49:55.034155 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff]
Mar 14 00:49:55.034242 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref]
Mar 14 00:49:55.034336 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff]
Mar 14 00:49:55.034429 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff]
Mar 14 00:49:55.034530 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Mar 14 00:49:55.034551 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Mar 14 00:49:55.034562 kernel: PCI: CLS 0 bytes, default 64
Mar 14 00:49:55.034577 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Mar 14 00:49:55.034587 kernel: software IO TLB: mapped [mem
0x0000000079800000-0x000000007d800000] (64MB) Mar 14 00:49:55.034598 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Mar 14 00:49:55.034609 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2113312ac93, max_idle_ns: 440795244843 ns Mar 14 00:49:55.034619 kernel: Initialise system trusted keyrings Mar 14 00:49:55.034630 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Mar 14 00:49:55.034641 kernel: Key type asymmetric registered Mar 14 00:49:55.034651 kernel: Asymmetric key parser 'x509' registered Mar 14 00:49:55.034661 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Mar 14 00:49:55.034675 kernel: io scheduler mq-deadline registered Mar 14 00:49:55.034686 kernel: io scheduler kyber registered Mar 14 00:49:55.036764 kernel: io scheduler bfq registered Mar 14 00:49:55.036889 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Mar 14 00:49:55.036996 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Mar 14 00:49:55.037114 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 14 00:49:55.037213 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Mar 14 00:49:55.037312 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Mar 14 00:49:55.037406 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 14 00:49:55.037501 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Mar 14 00:49:55.037604 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Mar 14 00:49:55.037714 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 14 00:49:55.037810 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Mar 14 00:49:55.037908 kernel: pcieport 0000:00:02.3: AER: enabled 
with IRQ 27 Mar 14 00:49:55.038002 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 14 00:49:55.038095 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Mar 14 00:49:55.038188 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Mar 14 00:49:55.038281 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 14 00:49:55.040908 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Mar 14 00:49:55.041102 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Mar 14 00:49:55.041212 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 14 00:49:55.041319 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Mar 14 00:49:55.041418 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Mar 14 00:49:55.041516 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 14 00:49:55.041633 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Mar 14 00:49:55.041765 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Mar 14 00:49:55.041864 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 14 00:49:55.041879 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Mar 14 00:49:55.041892 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Mar 14 00:49:55.041903 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Mar 14 00:49:55.041915 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Mar 14 00:49:55.041926 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Mar 14 00:49:55.041937 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 
0x60,0x64 irq 1,12 Mar 14 00:49:55.041953 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Mar 14 00:49:55.041964 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Mar 14 00:49:55.041975 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Mar 14 00:49:55.042084 kernel: rtc_cmos 00:03: RTC can wake from S4 Mar 14 00:49:55.042176 kernel: rtc_cmos 00:03: registered as rtc0 Mar 14 00:49:55.042265 kernel: rtc_cmos 00:03: setting system clock to 2026-03-14T00:49:54 UTC (1773449394) Mar 14 00:49:55.042354 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Mar 14 00:49:55.042372 kernel: intel_pstate: CPU model not supported Mar 14 00:49:55.042383 kernel: NET: Registered PF_INET6 protocol family Mar 14 00:49:55.042395 kernel: Segment Routing with IPv6 Mar 14 00:49:55.042405 kernel: In-situ OAM (IOAM) with IPv6 Mar 14 00:49:55.042416 kernel: NET: Registered PF_PACKET protocol family Mar 14 00:49:55.042427 kernel: Key type dns_resolver registered Mar 14 00:49:55.042439 kernel: IPI shorthand broadcast: enabled Mar 14 00:49:55.042449 kernel: sched_clock: Marking stable (1006006278, 123066992)->(1354160914, -225087644) Mar 14 00:49:55.042460 kernel: registered taskstats version 1 Mar 14 00:49:55.042472 kernel: Loading compiled-in X.509 certificates Mar 14 00:49:55.042485 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: a10808ddb7a43f470807cfbbb5be2c08229c2dec' Mar 14 00:49:55.042496 kernel: Key type .fscrypt registered Mar 14 00:49:55.042507 kernel: Key type fscrypt-provisioning registered Mar 14 00:49:55.042517 kernel: ima: No TPM chip found, activating TPM-bypass! 
Mar 14 00:49:55.042528 kernel: ima: Allocated hash algorithm: sha1 Mar 14 00:49:55.042550 kernel: ima: No architecture policies found Mar 14 00:49:55.042561 kernel: clk: Disabling unused clocks Mar 14 00:49:55.042572 kernel: Freeing unused kernel image (initmem) memory: 42892K Mar 14 00:49:55.042582 kernel: Write protecting the kernel read-only data: 36864k Mar 14 00:49:55.042597 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K Mar 14 00:49:55.042608 kernel: Run /init as init process Mar 14 00:49:55.042619 kernel: with arguments: Mar 14 00:49:55.042630 kernel: /init Mar 14 00:49:55.042640 kernel: with environment: Mar 14 00:49:55.042650 kernel: HOME=/ Mar 14 00:49:55.042661 kernel: TERM=linux Mar 14 00:49:55.042674 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Mar 14 00:49:55.042703 systemd[1]: Detected virtualization kvm. Mar 14 00:49:55.042714 systemd[1]: Detected architecture x86-64. Mar 14 00:49:55.042725 systemd[1]: Running in initrd. Mar 14 00:49:55.042737 systemd[1]: No hostname configured, using default hostname. Mar 14 00:49:55.042747 systemd[1]: Hostname set to . Mar 14 00:49:55.042759 systemd[1]: Initializing machine ID from VM UUID. Mar 14 00:49:55.042770 systemd[1]: Queued start job for default target initrd.target. Mar 14 00:49:55.042781 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 14 00:49:55.042795 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 14 00:49:55.042807 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... 
Mar 14 00:49:55.042818 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 14 00:49:55.042829 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Mar 14 00:49:55.042840 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Mar 14 00:49:55.042853 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Mar 14 00:49:55.042868 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Mar 14 00:49:55.042880 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 14 00:49:55.042891 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 14 00:49:55.042902 systemd[1]: Reached target paths.target - Path Units. Mar 14 00:49:55.042913 systemd[1]: Reached target slices.target - Slice Units. Mar 14 00:49:55.042924 systemd[1]: Reached target swap.target - Swaps. Mar 14 00:49:55.042935 systemd[1]: Reached target timers.target - Timer Units. Mar 14 00:49:55.042946 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Mar 14 00:49:55.042957 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 14 00:49:55.042971 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Mar 14 00:49:55.042982 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Mar 14 00:49:55.042993 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 14 00:49:55.043004 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 14 00:49:55.043016 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 14 00:49:55.043027 systemd[1]: Reached target sockets.target - Socket Units. 
Mar 14 00:49:55.043038 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Mar 14 00:49:55.043049 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 14 00:49:55.043060 systemd[1]: Finished network-cleanup.service - Network Cleanup. Mar 14 00:49:55.043075 systemd[1]: Starting systemd-fsck-usr.service... Mar 14 00:49:55.043087 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 14 00:49:55.043098 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 14 00:49:55.043109 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 14 00:49:55.043153 systemd-journald[203]: Collecting audit messages is disabled. Mar 14 00:49:55.043184 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Mar 14 00:49:55.043195 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 14 00:49:55.043206 systemd[1]: Finished systemd-fsck-usr.service. Mar 14 00:49:55.043218 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 14 00:49:55.043233 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 14 00:49:55.043245 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 14 00:49:55.043257 systemd-journald[203]: Journal started Mar 14 00:49:55.043283 systemd-journald[203]: Runtime Journal (/run/log/journal/ca4d03657976429db4da754559e50472) is 4.7M, max 38.0M, 33.2M free. Mar 14 00:49:55.015738 systemd-modules-load[204]: Inserted module 'overlay' Mar 14 00:49:55.057295 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Mar 14 00:49:55.057337 systemd[1]: Started systemd-journald.service - Journal Service. 
Mar 14 00:49:55.058851 systemd-modules-load[204]: Inserted module 'br_netfilter' Mar 14 00:49:55.060314 kernel: Bridge firewalling registered Mar 14 00:49:55.059421 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 14 00:49:55.060047 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 14 00:49:55.070942 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 14 00:49:55.074894 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 14 00:49:55.077283 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 14 00:49:55.080397 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 14 00:49:55.099239 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 14 00:49:55.100497 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 14 00:49:55.108926 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 14 00:49:55.119994 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 14 00:49:55.122632 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Mar 14 00:49:55.151802 dracut-cmdline[237]: dracut-dracut-053 Mar 14 00:49:55.150320 systemd-resolved[232]: Positive Trust Anchors: Mar 14 00:49:55.150335 systemd-resolved[232]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 14 00:49:55.150375 systemd-resolved[232]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 14 00:49:55.159757 dracut-cmdline[237]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=06bcfe4e320f7b61768d05159b69b4eeccebc9d161fb2cdaf8d6998ab1e14ac7 Mar 14 00:49:55.157257 systemd-resolved[232]: Defaulting to hostname 'linux'. Mar 14 00:49:55.158678 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 14 00:49:55.159160 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 14 00:49:55.282732 kernel: SCSI subsystem initialized Mar 14 00:49:55.294739 kernel: Loading iSCSI transport class v2.0-870. Mar 14 00:49:55.306728 kernel: iscsi: registered transport (tcp) Mar 14 00:49:55.332748 kernel: iscsi: registered transport (qla4xxx) Mar 14 00:49:55.332861 kernel: QLogic iSCSI HBA Driver Mar 14 00:49:55.424594 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Mar 14 00:49:55.443207 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... 
Mar 14 00:49:55.480728 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Mar 14 00:49:55.480827 kernel: device-mapper: uevent: version 1.0.3 Mar 14 00:49:55.482727 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Mar 14 00:49:55.542771 kernel: raid6: avx512x4 gen() 28558 MB/s Mar 14 00:49:55.559819 kernel: raid6: avx512x2 gen() 28817 MB/s Mar 14 00:49:55.576827 kernel: raid6: avx512x1 gen() 27829 MB/s Mar 14 00:49:55.593766 kernel: raid6: avx2x4 gen() 21602 MB/s Mar 14 00:49:55.610774 kernel: raid6: avx2x2 gen() 21328 MB/s Mar 14 00:49:55.627788 kernel: raid6: avx2x1 gen() 18212 MB/s Mar 14 00:49:55.627928 kernel: raid6: using algorithm avx512x2 gen() 28817 MB/s Mar 14 00:49:55.645877 kernel: raid6: .... xor() 21694 MB/s, rmw enabled Mar 14 00:49:55.646017 kernel: raid6: using avx512x2 recovery algorithm Mar 14 00:49:55.669751 kernel: xor: automatically using best checksumming function avx Mar 14 00:49:55.843739 kernel: Btrfs loaded, zoned=no, fsverity=no Mar 14 00:49:55.861135 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Mar 14 00:49:55.868000 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 14 00:49:55.899233 systemd-udevd[420]: Using default interface naming scheme 'v255'. Mar 14 00:49:55.906583 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 14 00:49:55.913905 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Mar 14 00:49:55.938576 dracut-pre-trigger[424]: rd.md=0: removing MD RAID activation Mar 14 00:49:55.982334 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Mar 14 00:49:55.992134 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 14 00:49:56.075955 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. 
Mar 14 00:49:56.085897 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Mar 14 00:49:56.103238 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Mar 14 00:49:56.115936 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Mar 14 00:49:56.118105 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 14 00:49:56.119410 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 14 00:49:56.129209 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Mar 14 00:49:56.173516 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Mar 14 00:49:56.200731 kernel: cryptd: max_cpu_qlen set to 1000 Mar 14 00:49:56.206712 kernel: virtio_blk virtio1: 2/0/0 default/read/poll queues Mar 14 00:49:56.218901 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Mar 14 00:49:56.225730 kernel: AVX2 version of gcm_enc/dec engaged. Mar 14 00:49:56.225797 kernel: AES CTR mode by8 optimization enabled Mar 14 00:49:56.236746 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Mar 14 00:49:56.236814 kernel: GPT:17805311 != 125829119 Mar 14 00:49:56.236828 kernel: GPT:Alternate GPT header not at the end of the disk. Mar 14 00:49:56.237842 kernel: GPT:17805311 != 125829119 Mar 14 00:49:56.238962 kernel: GPT: Use GNU Parted to correct GPT errors. Mar 14 00:49:56.238982 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 14 00:49:56.241716 kernel: libata version 3.00 loaded. Mar 14 00:49:56.246798 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 14 00:49:56.246940 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 14 00:49:56.247603 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 14 00:49:56.250575 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Mar 14 00:49:56.250772 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 14 00:49:56.251226 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Mar 14 00:49:56.257077 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 14 00:49:56.285098 kernel: ahci 0000:00:1f.2: version 3.0 Mar 14 00:49:56.300753 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Mar 14 00:49:56.301708 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Mar 14 00:49:56.302125 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Mar 14 00:49:56.319720 kernel: ACPI: bus type USB registered Mar 14 00:49:56.319794 kernel: usbcore: registered new interface driver usbfs Mar 14 00:49:56.319810 kernel: usbcore: registered new interface driver hub Mar 14 00:49:56.319823 kernel: usbcore: registered new device driver usb Mar 14 00:49:56.322763 kernel: scsi host0: ahci Mar 14 00:49:56.323724 kernel: scsi host1: ahci Mar 14 00:49:56.325712 kernel: scsi host2: ahci Mar 14 00:49:56.326722 kernel: scsi host3: ahci Mar 14 00:49:56.327719 kernel: scsi host4: ahci Mar 14 00:49:56.328717 kernel: scsi host5: ahci Mar 14 00:49:56.328873 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 38 Mar 14 00:49:56.328889 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 38 Mar 14 00:49:56.328904 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 38 Mar 14 00:49:56.328926 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 38 Mar 14 00:49:56.328940 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 38 Mar 14 00:49:56.328952 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 38 Mar 14 00:49:56.347572 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. 
Mar 14 00:49:56.383296 kernel: BTRFS: device fsid cd4a88d6-c21b-44c8-aac6-68c13cee1def devid 1 transid 34 /dev/vda3 scanned by (udev-worker) (475) Mar 14 00:49:56.383334 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (479) Mar 14 00:49:56.383421 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 14 00:49:56.401338 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Mar 14 00:49:56.406586 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Mar 14 00:49:56.410841 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Mar 14 00:49:56.411406 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Mar 14 00:49:56.427160 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Mar 14 00:49:56.433174 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 14 00:49:56.437202 disk-uuid[567]: Primary Header is updated. Mar 14 00:49:56.437202 disk-uuid[567]: Secondary Entries is updated. Mar 14 00:49:56.437202 disk-uuid[567]: Secondary Header is updated. Mar 14 00:49:56.444771 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 14 00:49:56.453765 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 14 00:49:56.485204 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Mar 14 00:49:56.642620 kernel: ata2: SATA link down (SStatus 0 SControl 300) Mar 14 00:49:56.642803 kernel: ata4: SATA link down (SStatus 0 SControl 300) Mar 14 00:49:56.645765 kernel: ata6: SATA link down (SStatus 0 SControl 300) Mar 14 00:49:56.645875 kernel: ata1: SATA link down (SStatus 0 SControl 300) Mar 14 00:49:56.649172 kernel: ata3: SATA link down (SStatus 0 SControl 300) Mar 14 00:49:56.652712 kernel: ata5: SATA link down (SStatus 0 SControl 300) Mar 14 00:49:56.665076 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Mar 14 00:49:56.665379 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1 Mar 14 00:49:56.666973 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Mar 14 00:49:56.671631 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Mar 14 00:49:56.672179 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2 Mar 14 00:49:56.672546 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed Mar 14 00:49:56.672926 kernel: hub 1-0:1.0: USB hub found Mar 14 00:49:56.673766 kernel: hub 1-0:1.0: 4 ports detected Mar 14 00:49:56.674765 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. 
Mar 14 00:49:56.675728 kernel: hub 2-0:1.0: USB hub found Mar 14 00:49:56.676985 kernel: hub 2-0:1.0: 4 ports detected Mar 14 00:49:56.915927 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Mar 14 00:49:57.062794 kernel: hid: raw HID events driver (C) Jiri Kosina Mar 14 00:49:57.068729 kernel: usbcore: registered new interface driver usbhid Mar 14 00:49:57.068850 kernel: usbhid: USB HID core driver Mar 14 00:49:57.073854 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 Mar 14 00:49:57.073977 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0 Mar 14 00:49:57.464243 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 14 00:49:57.464358 disk-uuid[568]: The operation has completed successfully. Mar 14 00:49:57.501916 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 14 00:49:57.502062 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Mar 14 00:49:57.524171 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Mar 14 00:49:57.531843 sh[590]: Success Mar 14 00:49:57.556720 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Mar 14 00:49:57.636943 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Mar 14 00:49:57.637826 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Mar 14 00:49:57.640413 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... 
Mar 14 00:49:57.663753 kernel: BTRFS info (device dm-0): first mount of filesystem cd4a88d6-c21b-44c8-aac6-68c13cee1def Mar 14 00:49:57.666369 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Mar 14 00:49:57.666411 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Mar 14 00:49:57.666437 kernel: BTRFS info (device dm-0): disabling log replay at mount time Mar 14 00:49:57.667315 kernel: BTRFS info (device dm-0): using free space tree Mar 14 00:49:57.675964 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Mar 14 00:49:57.678208 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Mar 14 00:49:57.684031 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Mar 14 00:49:57.689023 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Mar 14 00:49:57.706731 kernel: BTRFS info (device vda6): first mount of filesystem 0ec14b75-fea9-4657-9245-934c6406ae1a Mar 14 00:49:57.706812 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 14 00:49:57.706829 kernel: BTRFS info (device vda6): using free space tree Mar 14 00:49:57.715771 kernel: BTRFS info (device vda6): auto enabling async discard Mar 14 00:49:57.736910 kernel: BTRFS info (device vda6): last unmount of filesystem 0ec14b75-fea9-4657-9245-934c6406ae1a Mar 14 00:49:57.736520 systemd[1]: mnt-oem.mount: Deactivated successfully. Mar 14 00:49:57.745018 systemd[1]: Finished ignition-setup.service - Ignition (setup). Mar 14 00:49:57.755005 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Mar 14 00:49:57.852015 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 14 00:49:57.860967 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Mar 14 00:49:57.885928 ignition[676]: Ignition 2.19.0 Mar 14 00:49:57.885943 ignition[676]: Stage: fetch-offline Mar 14 00:49:57.889053 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Mar 14 00:49:57.886011 ignition[676]: no configs at "/usr/lib/ignition/base.d" Mar 14 00:49:57.886024 ignition[676]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Mar 14 00:49:57.886196 ignition[676]: parsed url from cmdline: "" Mar 14 00:49:57.886199 ignition[676]: no config URL provided Mar 14 00:49:57.886205 ignition[676]: reading system config file "/usr/lib/ignition/user.ign" Mar 14 00:49:57.886214 ignition[676]: no config at "/usr/lib/ignition/user.ign" Mar 14 00:49:57.886219 ignition[676]: failed to fetch config: resource requires networking Mar 14 00:49:57.886676 ignition[676]: Ignition finished successfully Mar 14 00:49:57.896813 systemd-networkd[772]: lo: Link UP Mar 14 00:49:57.896827 systemd-networkd[772]: lo: Gained carrier Mar 14 00:49:57.898210 systemd-networkd[772]: Enumeration completed Mar 14 00:49:57.898610 systemd-networkd[772]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 14 00:49:57.898614 systemd-networkd[772]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 14 00:49:57.898719 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 14 00:49:57.899248 systemd[1]: Reached target network.target - Network. Mar 14 00:49:57.900227 systemd-networkd[772]: eth0: Link UP Mar 14 00:49:57.900232 systemd-networkd[772]: eth0: Gained carrier Mar 14 00:49:57.900241 systemd-networkd[772]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 14 00:49:57.907945 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Mar 14 00:49:57.909983 systemd-networkd[772]: eth0: DHCPv4 address 10.244.101.86/30, gateway 10.244.101.85 acquired from 10.244.101.85
Mar 14 00:49:57.934859 ignition[780]: Ignition 2.19.0
Mar 14 00:49:57.934876 ignition[780]: Stage: fetch
Mar 14 00:49:57.935087 ignition[780]: no configs at "/usr/lib/ignition/base.d"
Mar 14 00:49:57.935099 ignition[780]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Mar 14 00:49:57.935199 ignition[780]: parsed url from cmdline: ""
Mar 14 00:49:57.935203 ignition[780]: no config URL provided
Mar 14 00:49:57.935209 ignition[780]: reading system config file "/usr/lib/ignition/user.ign"
Mar 14 00:49:57.935218 ignition[780]: no config at "/usr/lib/ignition/user.ign"
Mar 14 00:49:57.935421 ignition[780]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1
Mar 14 00:49:57.935774 ignition[780]: config drive ("/dev/disk/by-label/config-2") not found. Waiting...
Mar 14 00:49:57.935799 ignition[780]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting...
Mar 14 00:49:57.950891 ignition[780]: GET result: OK
Mar 14 00:49:57.951711 ignition[780]: parsing config with SHA512: e7abf36d1001e7664ed892e683369a11bafad36665b7fcd11f2e1c6a20f1051c119aa04c75e91f11fd2251c03eef90bf40b2ce821ad5f20e994b2b59990b6010
Mar 14 00:49:57.956377 unknown[780]: fetched base config from "system"
Mar 14 00:49:57.956391 unknown[780]: fetched base config from "system"
Mar 14 00:49:57.956397 unknown[780]: fetched user config from "openstack"
Mar 14 00:49:57.958017 ignition[780]: fetch: fetch complete
Mar 14 00:49:57.958022 ignition[780]: fetch: fetch passed
Mar 14 00:49:57.958096 ignition[780]: Ignition finished successfully
Mar 14 00:49:57.961944 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Mar 14 00:49:57.965932 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 14 00:49:58.000969 ignition[787]: Ignition 2.19.0
Mar 14 00:49:58.000982 ignition[787]: Stage: kargs
Mar 14 00:49:58.001181 ignition[787]: no configs at "/usr/lib/ignition/base.d"
Mar 14 00:49:58.003769 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 14 00:49:58.001192 ignition[787]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Mar 14 00:49:58.002210 ignition[787]: kargs: kargs passed
Mar 14 00:49:58.002267 ignition[787]: Ignition finished successfully
Mar 14 00:49:58.010918 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 14 00:49:58.027140 ignition[793]: Ignition 2.19.0
Mar 14 00:49:58.027153 ignition[793]: Stage: disks
Mar 14 00:49:58.027373 ignition[793]: no configs at "/usr/lib/ignition/base.d"
Mar 14 00:49:58.027385 ignition[793]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Mar 14 00:49:58.029883 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 14 00:49:58.028448 ignition[793]: disks: disks passed
Mar 14 00:49:58.031917 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 14 00:49:58.028502 ignition[793]: Ignition finished successfully
Mar 14 00:49:58.033796 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 14 00:49:58.034221 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 14 00:49:58.035207 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 14 00:49:58.035783 systemd[1]: Reached target basic.target - Basic System.
Mar 14 00:49:58.041898 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 14 00:49:58.062781 systemd-fsck[802]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Mar 14 00:49:58.065788 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 14 00:49:58.074071 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 14 00:49:58.188743 kernel: EXT4-fs (vda9): mounted filesystem 08e1a4ba-bbe3-4d29-aaf8-5eb22e9a9bf3 r/w with ordered data mode. Quota mode: none.
Mar 14 00:49:58.189965 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 14 00:49:58.191745 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 14 00:49:58.198853 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 14 00:49:58.201812 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 14 00:49:58.203049 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Mar 14 00:49:58.205627 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent...
Mar 14 00:49:58.206155 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 14 00:49:58.206186 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 14 00:49:58.216511 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (810)
Mar 14 00:49:58.216542 kernel: BTRFS info (device vda6): first mount of filesystem 0ec14b75-fea9-4657-9245-934c6406ae1a
Mar 14 00:49:58.216557 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 14 00:49:58.216577 kernel: BTRFS info (device vda6): using free space tree
Mar 14 00:49:58.220714 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 14 00:49:58.222242 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 14 00:49:58.223890 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 14 00:49:58.231946 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 14 00:49:58.278083 initrd-setup-root[835]: cut: /sysroot/etc/passwd: No such file or directory
Mar 14 00:49:58.285739 initrd-setup-root[845]: cut: /sysroot/etc/group: No such file or directory
Mar 14 00:49:58.294858 initrd-setup-root[853]: cut: /sysroot/etc/shadow: No such file or directory
Mar 14 00:49:58.299195 initrd-setup-root[860]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 14 00:49:58.413485 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 14 00:49:58.423068 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 14 00:49:58.429397 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 14 00:49:58.440731 kernel: BTRFS info (device vda6): last unmount of filesystem 0ec14b75-fea9-4657-9245-934c6406ae1a
Mar 14 00:49:58.470679 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 14 00:49:58.475653 ignition[928]: INFO : Ignition 2.19.0
Mar 14 00:49:58.476788 ignition[928]: INFO : Stage: mount
Mar 14 00:49:58.476788 ignition[928]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 14 00:49:58.476788 ignition[928]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Mar 14 00:49:58.479639 ignition[928]: INFO : mount: mount passed
Mar 14 00:49:58.479639 ignition[928]: INFO : Ignition finished successfully
Mar 14 00:49:58.480052 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 14 00:49:58.664902 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 14 00:49:59.311448 systemd-networkd[772]: eth0: Gained IPv6LL
Mar 14 00:50:00.823925 systemd-networkd[772]: eth0: Ignoring DHCPv6 address 2a02:1348:17d:1955:24:19ff:fef4:6556/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:17d:1955:24:19ff:fef4:6556/64 assigned by NDisc.
Mar 14 00:50:00.823949 systemd-networkd[772]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no.
Mar 14 00:50:05.368771 coreos-metadata[812]: Mar 14 00:50:05.368 WARN failed to locate config-drive, using the metadata service API instead
Mar 14 00:50:05.386401 coreos-metadata[812]: Mar 14 00:50:05.386 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Mar 14 00:50:05.401197 coreos-metadata[812]: Mar 14 00:50:05.401 INFO Fetch successful
Mar 14 00:50:05.404086 coreos-metadata[812]: Mar 14 00:50:05.402 INFO wrote hostname srv-avwyp.gb1.brightbox.com to /sysroot/etc/hostname
Mar 14 00:50:05.407837 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully.
Mar 14 00:50:05.408003 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent.
Mar 14 00:50:05.416992 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 14 00:50:05.445202 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 14 00:50:05.456848 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (944)
Mar 14 00:50:05.459885 kernel: BTRFS info (device vda6): first mount of filesystem 0ec14b75-fea9-4657-9245-934c6406ae1a
Mar 14 00:50:05.459951 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 14 00:50:05.460821 kernel: BTRFS info (device vda6): using free space tree
Mar 14 00:50:05.464749 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 14 00:50:05.467517 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 14 00:50:05.507761 ignition[961]: INFO : Ignition 2.19.0
Mar 14 00:50:05.507761 ignition[961]: INFO : Stage: files
Mar 14 00:50:05.509221 ignition[961]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 14 00:50:05.509221 ignition[961]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Mar 14 00:50:05.509221 ignition[961]: DEBUG : files: compiled without relabeling support, skipping
Mar 14 00:50:05.511131 ignition[961]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 14 00:50:05.511131 ignition[961]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 14 00:50:05.512893 ignition[961]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 14 00:50:05.512893 ignition[961]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 14 00:50:05.514556 ignition[961]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 14 00:50:05.514395 unknown[961]: wrote ssh authorized keys file for user: core
Mar 14 00:50:05.517158 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 14 00:50:05.517158 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Mar 14 00:50:05.665884 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 14 00:50:06.021394 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 14 00:50:06.021394 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 14 00:50:06.021394 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Mar 14 00:50:06.276857 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Mar 14 00:50:06.670816 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 14 00:50:06.670816 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Mar 14 00:50:06.670816 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Mar 14 00:50:06.670816 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 14 00:50:06.670816 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 14 00:50:06.670816 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 14 00:50:06.670816 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 14 00:50:06.670816 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 14 00:50:06.670816 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 14 00:50:06.670816 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 14 00:50:06.689930 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 14 00:50:06.689930 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Mar 14 00:50:06.689930 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Mar 14 00:50:06.689930 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Mar 14 00:50:06.689930 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.4-x86-64.raw: attempt #1
Mar 14 00:50:06.974778 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Mar 14 00:50:08.584283 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Mar 14 00:50:08.584283 ignition[961]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Mar 14 00:50:08.587765 ignition[961]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 14 00:50:08.587765 ignition[961]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 14 00:50:08.587765 ignition[961]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Mar 14 00:50:08.587765 ignition[961]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Mar 14 00:50:08.587765 ignition[961]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Mar 14 00:50:08.587765 ignition[961]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 14 00:50:08.587765 ignition[961]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 14 00:50:08.587765 ignition[961]: INFO : files: files passed
Mar 14 00:50:08.587765 ignition[961]: INFO : Ignition finished successfully
Mar 14 00:50:08.588729 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 14 00:50:08.598020 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 14 00:50:08.599808 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 14 00:50:08.611321 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 14 00:50:08.611519 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 14 00:50:08.622683 initrd-setup-root-after-ignition[991]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 14 00:50:08.624045 initrd-setup-root-after-ignition[995]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 14 00:50:08.624728 initrd-setup-root-after-ignition[991]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 14 00:50:08.626272 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 14 00:50:08.627446 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 14 00:50:08.634148 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 14 00:50:08.682103 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 14 00:50:08.682223 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 14 00:50:08.683392 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 14 00:50:08.684450 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 14 00:50:08.685366 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 14 00:50:08.701997 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 14 00:50:08.717053 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 14 00:50:08.722877 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 14 00:50:08.735441 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 14 00:50:08.736715 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 14 00:50:08.737230 systemd[1]: Stopped target timers.target - Timer Units.
Mar 14 00:50:08.737666 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 14 00:50:08.737830 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 14 00:50:08.739688 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 14 00:50:08.740923 systemd[1]: Stopped target basic.target - Basic System.
Mar 14 00:50:08.742094 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 14 00:50:08.743271 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 14 00:50:08.744433 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 14 00:50:08.745525 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 14 00:50:08.746688 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 14 00:50:08.747950 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 14 00:50:08.748950 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 14 00:50:08.749793 systemd[1]: Stopped target swap.target - Swaps.
Mar 14 00:50:08.750518 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 14 00:50:08.750660 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 14 00:50:08.751661 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 14 00:50:08.752617 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 14 00:50:08.753418 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 14 00:50:08.753524 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 14 00:50:08.754248 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 14 00:50:08.754363 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 14 00:50:08.755387 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 14 00:50:08.755497 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 14 00:50:08.756499 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 14 00:50:08.756607 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 14 00:50:08.762953 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 14 00:50:08.764822 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 14 00:50:08.765718 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 14 00:50:08.765847 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 14 00:50:08.767957 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 14 00:50:08.769811 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 14 00:50:08.775965 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 14 00:50:08.776075 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 14 00:50:08.788309 ignition[1015]: INFO : Ignition 2.19.0
Mar 14 00:50:08.788309 ignition[1015]: INFO : Stage: umount
Mar 14 00:50:08.789319 ignition[1015]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 14 00:50:08.789319 ignition[1015]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Mar 14 00:50:08.791235 ignition[1015]: INFO : umount: umount passed
Mar 14 00:50:08.791235 ignition[1015]: INFO : Ignition finished successfully
Mar 14 00:50:08.793596 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 14 00:50:08.794038 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 14 00:50:08.795180 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 14 00:50:08.795231 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 14 00:50:08.795656 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 14 00:50:08.796124 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 14 00:50:08.796831 systemd[1]: ignition-fetch.service: Deactivated successfully.
Mar 14 00:50:08.796871 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Mar 14 00:50:08.797451 systemd[1]: Stopped target network.target - Network.
Mar 14 00:50:08.798552 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 14 00:50:08.798618 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 14 00:50:08.799249 systemd[1]: Stopped target paths.target - Path Units.
Mar 14 00:50:08.799591 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 14 00:50:08.804514 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 14 00:50:08.805535 systemd[1]: Stopped target slices.target - Slice Units.
Mar 14 00:50:08.806823 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 14 00:50:08.807863 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 14 00:50:08.807961 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 14 00:50:08.810118 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 14 00:50:08.810182 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 14 00:50:08.810948 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 14 00:50:08.811020 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 14 00:50:08.813589 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 14 00:50:08.813653 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 14 00:50:08.814617 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 14 00:50:08.816523 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 14 00:50:08.819799 systemd-networkd[772]: eth0: DHCPv6 lease lost
Mar 14 00:50:08.820606 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 14 00:50:08.822438 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 14 00:50:08.822591 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 14 00:50:08.824314 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 14 00:50:08.824370 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 14 00:50:08.834830 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 14 00:50:08.835898 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 14 00:50:08.836473 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 14 00:50:08.838860 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 14 00:50:08.845724 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 14 00:50:08.846353 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 14 00:50:08.859871 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 14 00:50:08.860255 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 14 00:50:08.863581 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 14 00:50:08.863814 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 14 00:50:08.867753 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 14 00:50:08.867970 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 14 00:50:08.874329 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 14 00:50:08.874459 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 14 00:50:08.876511 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 14 00:50:08.876600 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 14 00:50:08.878526 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 14 00:50:08.878659 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 14 00:50:08.879900 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 14 00:50:08.879947 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 14 00:50:08.880705 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 14 00:50:08.880755 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 14 00:50:08.882540 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 14 00:50:08.882612 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 14 00:50:08.890965 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 14 00:50:08.891436 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 14 00:50:08.891498 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 14 00:50:08.894558 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 14 00:50:08.894619 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 14 00:50:08.895626 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 14 00:50:08.895679 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 14 00:50:08.897501 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Mar 14 00:50:08.897560 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 14 00:50:08.898980 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 14 00:50:08.899023 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 14 00:50:08.901031 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 14 00:50:08.901090 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 14 00:50:08.901643 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 14 00:50:08.901708 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 14 00:50:08.902904 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 14 00:50:08.903040 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 14 00:50:08.904386 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 14 00:50:08.910940 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 14 00:50:08.930441 systemd[1]: Switching root.
Mar 14 00:50:08.971652 systemd-journald[203]: Journal stopped
Mar 14 00:50:10.060954 systemd-journald[203]: Received SIGTERM from PID 1 (systemd).
Mar 14 00:50:10.061056 kernel: SELinux: policy capability network_peer_controls=1
Mar 14 00:50:10.061075 kernel: SELinux: policy capability open_perms=1
Mar 14 00:50:10.061096 kernel: SELinux: policy capability extended_socket_class=1
Mar 14 00:50:10.061115 kernel: SELinux: policy capability always_check_network=0
Mar 14 00:50:10.061129 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 14 00:50:10.061148 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 14 00:50:10.061162 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 14 00:50:10.061184 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 14 00:50:10.061205 kernel: audit: type=1403 audit(1773449409.120:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 14 00:50:10.061221 systemd[1]: Successfully loaded SELinux policy in 61.324ms.
Mar 14 00:50:10.061245 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.984ms.
Mar 14 00:50:10.061260 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 14 00:50:10.061281 systemd[1]: Detected virtualization kvm.
Mar 14 00:50:10.061301 systemd[1]: Detected architecture x86-64.
Mar 14 00:50:10.061315 systemd[1]: Detected first boot.
Mar 14 00:50:10.061331 systemd[1]: Hostname set to .
Mar 14 00:50:10.061345 systemd[1]: Initializing machine ID from VM UUID.
Mar 14 00:50:10.061359 zram_generator::config[1058]: No configuration found.
Mar 14 00:50:10.061373 systemd[1]: Populated /etc with preset unit settings.
Mar 14 00:50:10.061387 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 14 00:50:10.061401 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 14 00:50:10.061414 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 14 00:50:10.061428 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 14 00:50:10.061444 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 14 00:50:10.061471 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 14 00:50:10.061485 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 14 00:50:10.061499 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 14 00:50:10.061513 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 14 00:50:10.061527 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 14 00:50:10.061542 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 14 00:50:10.061557 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 14 00:50:10.061571 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 14 00:50:10.061589 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 14 00:50:10.061603 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 14 00:50:10.061616 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 14 00:50:10.061630 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 14 00:50:10.061655 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Mar 14 00:50:10.061669 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 14 00:50:10.061688 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 14 00:50:10.064814 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 14 00:50:10.064838 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 14 00:50:10.064853 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 14 00:50:10.064868 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 14 00:50:10.064883 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 14 00:50:10.064911 systemd[1]: Reached target slices.target - Slice Units.
Mar 14 00:50:10.064927 systemd[1]: Reached target swap.target - Swaps.
Mar 14 00:50:10.064942 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 14 00:50:10.064956 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 14 00:50:10.064972 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 14 00:50:10.064987 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 14 00:50:10.065001 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 14 00:50:10.065016 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 14 00:50:10.065030 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 14 00:50:10.065049 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 14 00:50:10.065068 systemd[1]: Mounting media.mount - External Media Directory...
Mar 14 00:50:10.065083 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 14 00:50:10.065097 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 14 00:50:10.065111 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 14 00:50:10.065125 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 14 00:50:10.065140 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 14 00:50:10.065154 systemd[1]: Reached target machines.target - Containers.
Mar 14 00:50:10.065169 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 14 00:50:10.065188 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 14 00:50:10.065202 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 14 00:50:10.065217 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 14 00:50:10.065230 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 14 00:50:10.065244 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 14 00:50:10.065258 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 14 00:50:10.065273 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 14 00:50:10.065286 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 14 00:50:10.065303 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 14 00:50:10.065317 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 14 00:50:10.065333 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 14 00:50:10.065346 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 14 00:50:10.065361 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 14 00:50:10.065374 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 14 00:50:10.065388 kernel: fuse: init (API version 7.39)
Mar 14 00:50:10.065405 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 14 00:50:10.065419 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 14 00:50:10.065436 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 14 00:50:10.065450 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 14 00:50:10.065473 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 14 00:50:10.065488 systemd[1]: Stopped verity-setup.service.
Mar 14 00:50:10.065502 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 14 00:50:10.065516 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 14 00:50:10.065531 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 14 00:50:10.065548 systemd[1]: Mounted media.mount - External Media Directory.
Mar 14 00:50:10.065562 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 14 00:50:10.065584 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 14 00:50:10.065598 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 14 00:50:10.065613 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 14 00:50:10.065627 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 14 00:50:10.065645 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 14 00:50:10.065660 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 14 00:50:10.065677 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 14 00:50:10.065715 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 14 00:50:10.065768 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 14 00:50:10.065790 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 14 00:50:10.065805 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 14 00:50:10.065819 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 14 00:50:10.065834 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 14 00:50:10.065848 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 14 00:50:10.065864 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 14 00:50:10.065879 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 14 00:50:10.065928 systemd-journald[1144]: Collecting audit messages is disabled.
Mar 14 00:50:10.065968 kernel: ACPI: bus type drm_connector registered
Mar 14 00:50:10.065982 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 14 00:50:10.065999 systemd-journald[1144]: Journal started
Mar 14 00:50:10.066043 systemd-journald[1144]: Runtime Journal (/run/log/journal/ca4d03657976429db4da754559e50472) is 4.7M, max 38.0M, 33.2M free.
Mar 14 00:50:10.075787 kernel: loop: module loaded
Mar 14 00:50:09.713521 systemd[1]: Queued start job for default target multi-user.target.
Mar 14 00:50:09.735244 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Mar 14 00:50:09.735956 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 14 00:50:10.084266 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 14 00:50:10.084335 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 14 00:50:10.084365 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 14 00:50:10.084383 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Mar 14 00:50:10.094511 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 14 00:50:10.097117 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 14 00:50:10.102172 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 14 00:50:10.113586 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 14 00:50:10.113678 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 14 00:50:10.120764 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 14 00:50:10.121714 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 14 00:50:10.143141 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 14 00:50:10.143233 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 14 00:50:10.143254 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 14 00:50:10.146859 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 14 00:50:10.147048 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 14 00:50:10.147782 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 14 00:50:10.148756 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 14 00:50:10.149325 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 14 00:50:10.150203 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 14 00:50:10.151744 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 14 00:50:10.192963 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 14 00:50:10.193500 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 14 00:50:10.194779 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 14 00:50:10.197029 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 14 00:50:10.205972 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Mar 14 00:50:10.214722 kernel: loop0: detected capacity change from 0 to 140768
Mar 14 00:50:10.233812 systemd-journald[1144]: Time spent on flushing to /var/log/journal/ca4d03657976429db4da754559e50472 is 100.241ms for 1158 entries.
Mar 14 00:50:10.233812 systemd-journald[1144]: System Journal (/var/log/journal/ca4d03657976429db4da754559e50472) is 8.0M, max 584.8M, 576.8M free.
Mar 14 00:50:10.368508 systemd-journald[1144]: Received client request to flush runtime journal.
Mar 14 00:50:10.371401 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 14 00:50:10.371421 kernel: loop1: detected capacity change from 0 to 219192
Mar 14 00:50:10.371437 kernel: loop2: detected capacity change from 0 to 8
Mar 14 00:50:10.246101 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 14 00:50:10.303022 systemd-tmpfiles[1174]: ACLs are not supported, ignoring.
Mar 14 00:50:10.303039 systemd-tmpfiles[1174]: ACLs are not supported, ignoring.
Mar 14 00:50:10.320091 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 14 00:50:10.329108 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 14 00:50:10.330390 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 14 00:50:10.332206 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Mar 14 00:50:10.367028 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 14 00:50:10.377831 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Mar 14 00:50:10.378657 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 14 00:50:10.406828 kernel: loop3: detected capacity change from 0 to 142488
Mar 14 00:50:10.413905 udevadm[1211]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Mar 14 00:50:10.443337 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 14 00:50:10.453851 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 14 00:50:10.456714 kernel: loop4: detected capacity change from 0 to 140768
Mar 14 00:50:10.496870 kernel: loop5: detected capacity change from 0 to 219192
Mar 14 00:50:10.543789 systemd-tmpfiles[1216]: ACLs are not supported, ignoring.
Mar 14 00:50:10.552239 kernel: loop6: detected capacity change from 0 to 8
Mar 14 00:50:10.544272 systemd-tmpfiles[1216]: ACLs are not supported, ignoring.
Mar 14 00:50:10.567190 kernel: loop7: detected capacity change from 0 to 142488
Mar 14 00:50:10.570119 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 14 00:50:10.600002 (sd-merge)[1217]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'.
Mar 14 00:50:10.600997 (sd-merge)[1217]: Merged extensions into '/usr'.
Mar 14 00:50:10.609573 systemd[1]: Reloading requested from client PID 1173 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 14 00:50:10.609806 systemd[1]: Reloading...
Mar 14 00:50:10.704369 ldconfig[1169]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 14 00:50:10.738786 zram_generator::config[1246]: No configuration found.
Mar 14 00:50:10.892047 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 14 00:50:10.942507 systemd[1]: Reloading finished in 330 ms.
Mar 14 00:50:10.983682 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 14 00:50:10.985874 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 14 00:50:10.998359 systemd[1]: Starting ensure-sysext.service...
Mar 14 00:50:11.001866 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 14 00:50:11.012804 systemd[1]: Reloading requested from client PID 1302 ('systemctl') (unit ensure-sysext.service)...
Mar 14 00:50:11.012817 systemd[1]: Reloading...
Mar 14 00:50:11.048163 systemd-tmpfiles[1303]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 14 00:50:11.048947 systemd-tmpfiles[1303]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 14 00:50:11.050020 systemd-tmpfiles[1303]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 14 00:50:11.050388 systemd-tmpfiles[1303]: ACLs are not supported, ignoring.
Mar 14 00:50:11.050522 systemd-tmpfiles[1303]: ACLs are not supported, ignoring.
Mar 14 00:50:11.053437 systemd-tmpfiles[1303]: Detected autofs mount point /boot during canonicalization of boot.
Mar 14 00:50:11.053524 systemd-tmpfiles[1303]: Skipping /boot
Mar 14 00:50:11.064811 systemd-tmpfiles[1303]: Detected autofs mount point /boot during canonicalization of boot.
Mar 14 00:50:11.064931 systemd-tmpfiles[1303]: Skipping /boot
Mar 14 00:50:11.098742 zram_generator::config[1328]: No configuration found.
Mar 14 00:50:11.236528 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 14 00:50:11.286114 systemd[1]: Reloading finished in 272 ms.
Mar 14 00:50:11.312069 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 14 00:50:11.313249 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 14 00:50:11.327888 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Mar 14 00:50:11.332604 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 14 00:50:11.336870 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 14 00:50:11.342286 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 14 00:50:11.345197 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 14 00:50:11.358883 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 14 00:50:11.367282 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 14 00:50:11.367499 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 14 00:50:11.376974 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 14 00:50:11.379201 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 14 00:50:11.383941 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 14 00:50:11.384462 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 14 00:50:11.384577 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 14 00:50:11.388160 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 14 00:50:11.388352 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 14 00:50:11.388558 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 14 00:50:11.388665 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 14 00:50:11.392523 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 14 00:50:11.392772 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 14 00:50:11.398935 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 14 00:50:11.399582 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 14 00:50:11.399747 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 14 00:50:11.402901 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 14 00:50:11.405077 systemd[1]: Finished ensure-sysext.service.
Mar 14 00:50:11.416184 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Mar 14 00:50:11.416928 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 14 00:50:11.417765 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 14 00:50:11.423775 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 14 00:50:11.436897 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 14 00:50:11.444302 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 14 00:50:11.450036 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 14 00:50:11.450216 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 14 00:50:11.451477 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 14 00:50:11.457010 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 14 00:50:11.457821 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 14 00:50:11.462608 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 14 00:50:11.462797 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 14 00:50:11.463582 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 14 00:50:11.468522 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 14 00:50:11.470220 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 14 00:50:11.488573 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 14 00:50:11.490300 systemd-udevd[1393]: Using default interface naming scheme 'v255'.
Mar 14 00:50:11.493149 augenrules[1425]: No rules
Mar 14 00:50:11.494437 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Mar 14 00:50:11.502788 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Mar 14 00:50:11.525784 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 14 00:50:11.531931 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 14 00:50:11.621517 systemd-resolved[1391]: Positive Trust Anchors:
Mar 14 00:50:11.621553 systemd-resolved[1391]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 14 00:50:11.621594 systemd-resolved[1391]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 14 00:50:11.629395 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Mar 14 00:50:11.635718 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1448)
Mar 14 00:50:11.640629 systemd-resolved[1391]: Using system hostname 'srv-avwyp.gb1.brightbox.com'.
Mar 14 00:50:11.646160 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Mar 14 00:50:11.647256 systemd[1]: Reached target time-set.target - System Time Set.
Mar 14 00:50:11.650083 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 14 00:50:11.650593 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 14 00:50:11.696282 systemd-networkd[1439]: lo: Link UP
Mar 14 00:50:11.696291 systemd-networkd[1439]: lo: Gained carrier
Mar 14 00:50:11.699145 systemd-networkd[1439]: Enumeration completed
Mar 14 00:50:11.699627 systemd-networkd[1439]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 14 00:50:11.699635 systemd-networkd[1439]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 14 00:50:11.699823 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 14 00:50:11.700423 systemd[1]: Reached target network.target - Network.
Mar 14 00:50:11.702856 systemd-networkd[1439]: eth0: Link UP
Mar 14 00:50:11.702865 systemd-networkd[1439]: eth0: Gained carrier
Mar 14 00:50:11.702882 systemd-networkd[1439]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 14 00:50:11.706870 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Mar 14 00:50:11.745780 systemd-networkd[1439]: eth0: DHCPv4 address 10.244.101.86/30, gateway 10.244.101.85 acquired from 10.244.101.85
Mar 14 00:50:11.746629 systemd-timesyncd[1408]: Network configuration changed, trying to establish connection.
Mar 14 00:50:11.753711 systemd-networkd[1439]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 14 00:50:11.771724 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Mar 14 00:50:11.779379 kernel: mousedev: PS/2 mouse device common for all mice
Mar 14 00:50:11.780709 kernel: ACPI: button: Power Button [PWRF]
Mar 14 00:50:11.811730 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Mar 14 00:50:11.814027 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Mar 14 00:50:11.814273 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Mar 14 00:50:11.817755 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 14 00:50:11.818897 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Mar 14 00:50:11.825865 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Mar 14 00:50:11.854889 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Mar 14 00:50:11.904084 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 14 00:50:12.026605 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 14 00:50:12.057302 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Mar 14 00:50:12.068540 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Mar 14 00:50:12.087770 lvm[1475]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 14 00:50:12.121507 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Mar 14 00:50:12.122516 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 14 00:50:12.123113 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 14 00:50:12.123818 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Mar 14 00:50:12.124445 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Mar 14 00:50:12.125485 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Mar 14 00:50:12.126144 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Mar 14 00:50:12.126747 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Mar 14 00:50:12.127305 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 14 00:50:12.127354 systemd[1]: Reached target paths.target - Path Units.
Mar 14 00:50:12.127832 systemd[1]: Reached target timers.target - Timer Units.
Mar 14 00:50:12.130741 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Mar 14 00:50:12.132892 systemd[1]: Starting docker.socket - Docker Socket for the API...
Mar 14 00:50:12.140788 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Mar 14 00:50:12.142825 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Mar 14 00:50:12.143812 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Mar 14 00:50:12.144305 systemd[1]: Reached target sockets.target - Socket Units.
Mar 14 00:50:12.144719 systemd[1]: Reached target basic.target - Basic System.
Mar 14 00:50:12.145128 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Mar 14 00:50:12.145154 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Mar 14 00:50:12.146859 systemd[1]: Starting containerd.service - containerd container runtime...
Mar 14 00:50:12.150878 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Mar 14 00:50:12.155893 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Mar 14 00:50:12.160223 lvm[1479]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 14 00:50:12.165842 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Mar 14 00:50:12.169883 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Mar 14 00:50:12.170364 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Mar 14 00:50:12.173427 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Mar 14 00:50:12.177263 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Mar 14 00:50:12.182471 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Mar 14 00:50:12.185055 jq[1483]: false
Mar 14 00:50:12.192914 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Mar 14 00:50:12.199910 systemd[1]: Starting systemd-logind.service - User Login Management...
Mar 14 00:50:12.201362 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Mar 14 00:50:12.202429 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Mar 14 00:50:12.204904 systemd[1]: Starting update-engine.service - Update Engine...
Mar 14 00:50:12.209869 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Mar 14 00:50:12.211621 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Mar 14 00:50:12.216439 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Mar 14 00:50:12.216625 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Mar 14 00:50:12.225236 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Mar 14 00:50:12.226292 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Mar 14 00:50:12.231851 update_engine[1492]: I20260314 00:50:12.231750 1492 main.cc:92] Flatcar Update Engine starting
Mar 14 00:50:12.270595 jq[1493]: true
Mar 14 00:50:12.275142 extend-filesystems[1484]: Found loop4
Mar 14 00:50:12.278003 extend-filesystems[1484]: Found loop5
Mar 14 00:50:12.278003 extend-filesystems[1484]: Found loop6
Mar 14 00:50:12.278003 extend-filesystems[1484]: Found loop7
Mar 14 00:50:12.286844 extend-filesystems[1484]: Found vda
Mar 14 00:50:12.286844 extend-filesystems[1484]: Found vda1
Mar 14 00:50:12.286844 extend-filesystems[1484]: Found vda2
Mar 14 00:50:12.286844 extend-filesystems[1484]: Found vda3
Mar 14 00:50:12.286844 extend-filesystems[1484]: Found usr
Mar 14 00:50:12.286844 extend-filesystems[1484]: Found vda4
Mar 14 00:50:12.286844 extend-filesystems[1484]: Found vda6
Mar 14 00:50:12.286844 extend-filesystems[1484]: Found vda7
Mar 14 00:50:12.286844 extend-filesystems[1484]: Found vda9
Mar 14 00:50:12.286844 extend-filesystems[1484]: Checking size of /dev/vda9
Mar 14 00:50:12.321461 tar[1495]: linux-amd64/LICENSE
Mar 14 00:50:12.321461 tar[1495]: linux-amd64/helm
Mar 14 00:50:12.322791 update_engine[1492]: I20260314 00:50:12.299126 1492 update_check_scheduler.cc:74] Next update check in 2m19s
Mar 14 00:50:12.291527 dbus-daemon[1482]: [system] SELinux support is enabled
Mar 14 00:50:12.291912 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Mar 14 00:50:12.296222 dbus-daemon[1482]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1439 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Mar 14 00:50:12.306044 systemd[1]: motdgen.service: Deactivated successfully.
Mar 14 00:50:12.308991 dbus-daemon[1482]: [system] Successfully activated service 'org.freedesktop.systemd1'
Mar 14 00:50:12.306255 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Mar 14 00:50:12.312405 (ntainerd)[1512]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Mar 14 00:50:12.313022 systemd[1]: Started update-engine.service - Update Engine.
Mar 14 00:50:12.316556 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Mar 14 00:50:12.316583 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Mar 14 00:50:12.329870 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Mar 14 00:50:12.330963 extend-filesystems[1484]: Resized partition /dev/vda9
Mar 14 00:50:12.330381 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Mar 14 00:50:12.330410 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Mar 14 00:50:12.333966 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Mar 14 00:50:12.340718 jq[1514]: true
Mar 14 00:50:12.345724 extend-filesystems[1522]: resize2fs 1.47.1 (20-May-2024)
Mar 14 00:50:12.366097 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks
Mar 14 00:50:12.424337 systemd-logind[1491]: Watching system buttons on /dev/input/event2 (Power Button)
Mar 14 00:50:12.424674 systemd-logind[1491]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Mar 14 00:50:12.425890 systemd-logind[1491]: New seat seat0.
Mar 14 00:50:12.426752 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1444)
Mar 14 00:50:12.431577 systemd[1]: Started systemd-logind.service - User Login Management.
Mar 14 00:50:12.490102 bash[1539]: Updated "/home/core/.ssh/authorized_keys"
Mar 14 00:50:12.492566 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Mar 14 00:50:12.505985 systemd[1]: Starting sshkeys.service...
Mar 14 00:50:12.513745 kernel: EXT4-fs (vda9): resized filesystem to 15121403
Mar 14 00:50:12.534258 extend-filesystems[1522]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Mar 14 00:50:12.534258 extend-filesystems[1522]: old_desc_blocks = 1, new_desc_blocks = 8
Mar 14 00:50:12.534258 extend-filesystems[1522]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long.
Mar 14 00:50:12.539437 extend-filesystems[1484]: Resized filesystem in /dev/vda9
Mar 14 00:50:12.540340 systemd[1]: extend-filesystems.service: Deactivated successfully.
Mar 14 00:50:12.541177 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Mar 14 00:50:12.553122 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Mar 14 00:50:12.560027 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Mar 14 00:50:12.601309 dbus-daemon[1482]: [system] Successfully activated service 'org.freedesktop.hostname1'
Mar 14 00:50:12.601469 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Mar 14 00:50:12.605470 dbus-daemon[1482]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1521 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Mar 14 00:50:12.619866 systemd[1]: Starting polkit.service - Authorization Manager...
Mar 14 00:50:12.651455 polkitd[1552]: Started polkitd version 121
Mar 14 00:50:12.681941 polkitd[1552]: Loading rules from directory /etc/polkit-1/rules.d
Mar 14 00:50:12.682020 polkitd[1552]: Loading rules from directory /usr/share/polkit-1/rules.d
Mar 14 00:50:12.690859 polkitd[1552]: Finished loading, compiling and executing 2 rules
Mar 14 00:50:12.698578 dbus-daemon[1482]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Mar 14 00:50:12.701923 systemd[1]: Started polkit.service - Authorization Manager.
Mar 14 00:50:12.703495 polkitd[1552]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Mar 14 00:50:12.739328 systemd-hostnamed[1521]: Hostname set to (static)
Mar 14 00:50:12.797810 containerd[1512]: time="2026-03-14T00:50:12.797415766Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Mar 14 00:50:12.810898 locksmithd[1523]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Mar 14 00:50:12.874666 containerd[1512]: time="2026-03-14T00:50:12.874600888Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Mar 14 00:50:12.880574 containerd[1512]: time="2026-03-14T00:50:12.880527431Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Mar 14 00:50:12.880574 containerd[1512]: time="2026-03-14T00:50:12.880567867Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Mar 14 00:50:12.880712 containerd[1512]: time="2026-03-14T00:50:12.880586420Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Mar 14 00:50:12.881149 containerd[1512]: time="2026-03-14T00:50:12.880808923Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Mar 14 00:50:12.881149 containerd[1512]: time="2026-03-14T00:50:12.880840087Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Mar 14 00:50:12.881149 containerd[1512]: time="2026-03-14T00:50:12.880912859Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Mar 14 00:50:12.881149 containerd[1512]: time="2026-03-14T00:50:12.880925829Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Mar 14 00:50:12.881149 containerd[1512]: time="2026-03-14T00:50:12.881120348Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 14 00:50:12.881149 containerd[1512]: time="2026-03-14T00:50:12.881135320Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Mar 14 00:50:12.881149 containerd[1512]: time="2026-03-14T00:50:12.881148173Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Mar 14 00:50:12.881322 containerd[1512]: time="2026-03-14T00:50:12.881158592Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Mar 14 00:50:12.881322 containerd[1512]: time="2026-03-14T00:50:12.881222458Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Mar 14 00:50:12.881701 containerd[1512]: time="2026-03-14T00:50:12.881454045Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Mar 14 00:50:12.881701 containerd[1512]: time="2026-03-14T00:50:12.881581699Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 14 00:50:12.881701 containerd[1512]: time="2026-03-14T00:50:12.881597563Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Mar 14 00:50:12.881701 containerd[1512]: time="2026-03-14T00:50:12.881682635Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Mar 14 00:50:12.881815 containerd[1512]: time="2026-03-14T00:50:12.881747281Z" level=info msg="metadata content store policy set" policy=shared
Mar 14 00:50:12.886642 containerd[1512]: time="2026-03-14T00:50:12.886223550Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Mar 14 00:50:12.886642 containerd[1512]: time="2026-03-14T00:50:12.886305312Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Mar 14 00:50:12.886642 containerd[1512]: time="2026-03-14T00:50:12.886329692Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Mar 14 00:50:12.886642 containerd[1512]: time="2026-03-14T00:50:12.886372063Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Mar 14 00:50:12.886642 containerd[1512]: time="2026-03-14T00:50:12.886388451Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Mar 14 00:50:12.886642 containerd[1512]: time="2026-03-14T00:50:12.886532957Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Mar 14 00:50:12.886901 containerd[1512]: time="2026-03-14T00:50:12.886872170Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Mar 14 00:50:12.887022 containerd[1512]: time="2026-03-14T00:50:12.887007474Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Mar 14 00:50:12.887048 containerd[1512]: time="2026-03-14T00:50:12.887027800Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Mar 14 00:50:12.887048 containerd[1512]: time="2026-03-14T00:50:12.887043696Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Mar 14 00:50:12.887094 containerd[1512]: time="2026-03-14T00:50:12.887057561Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Mar 14 00:50:12.887094 containerd[1512]: time="2026-03-14T00:50:12.887071998Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Mar 14 00:50:12.887094 containerd[1512]: time="2026-03-14T00:50:12.887088603Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Mar 14 00:50:12.887165 containerd[1512]: time="2026-03-14T00:50:12.887103389Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Mar 14 00:50:12.887165 containerd[1512]: time="2026-03-14T00:50:12.887118480Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Mar 14 00:50:12.887165 containerd[1512]: time="2026-03-14T00:50:12.887131422Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Mar 14 00:50:12.887165 containerd[1512]: time="2026-03-14T00:50:12.887143577Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Mar 14 00:50:12.887165 containerd[1512]: time="2026-03-14T00:50:12.887157076Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Mar 14 00:50:12.887287 containerd[1512]: time="2026-03-14T00:50:12.887181330Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Mar 14 00:50:12.887287 containerd[1512]: time="2026-03-14T00:50:12.887197070Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Mar 14 00:50:12.887287 containerd[1512]: time="2026-03-14T00:50:12.887209266Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Mar 14 00:50:12.887287 containerd[1512]: time="2026-03-14T00:50:12.887222464Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Mar 14 00:50:12.887287 containerd[1512]: time="2026-03-14T00:50:12.887250096Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Mar 14 00:50:12.887287 containerd[1512]: time="2026-03-14T00:50:12.887275047Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Mar 14 00:50:12.887433 containerd[1512]: time="2026-03-14T00:50:12.887287972Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Mar 14 00:50:12.887433 containerd[1512]: time="2026-03-14T00:50:12.887308498Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Mar 14 00:50:12.887433 containerd[1512]: time="2026-03-14T00:50:12.887325830Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Mar 14 00:50:12.887433 containerd[1512]: time="2026-03-14T00:50:12.887343594Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Mar 14 00:50:12.887433 containerd[1512]: time="2026-03-14T00:50:12.887357079Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Mar 14 00:50:12.887433 containerd[1512]: time="2026-03-14T00:50:12.887369381Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Mar 14 00:50:12.887433 containerd[1512]: time="2026-03-14T00:50:12.887382260Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Mar 14 00:50:12.887433 containerd[1512]: time="2026-03-14T00:50:12.887398022Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Mar 14 00:50:12.887433 containerd[1512]: time="2026-03-14T00:50:12.887418958Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Mar 14 00:50:12.887433 containerd[1512]: time="2026-03-14T00:50:12.887431189Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Mar 14 00:50:12.887658 containerd[1512]: time="2026-03-14T00:50:12.887442610Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Mar 14 00:50:12.889522 containerd[1512]: time="2026-03-14T00:50:12.888718953Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Mar 14 00:50:12.889522 containerd[1512]: time="2026-03-14T00:50:12.888744187Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Mar 14 00:50:12.889522 containerd[1512]: time="2026-03-14T00:50:12.888756771Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Mar 14 00:50:12.889522 containerd[1512]: time="2026-03-14T00:50:12.888775315Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Mar 14 00:50:12.889522 containerd[1512]: time="2026-03-14T00:50:12.888796693Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Mar 14 00:50:12.889522 containerd[1512]: time="2026-03-14T00:50:12.888809068Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Mar 14 00:50:12.889522 containerd[1512]: time="2026-03-14T00:50:12.888819470Z" level=info msg="NRI interface is disabled by configuration."
Mar 14 00:50:12.889522 containerd[1512]: time="2026-03-14T00:50:12.888830966Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Mar 14 00:50:12.889764 containerd[1512]: time="2026-03-14T00:50:12.889122427Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Mar 14 00:50:12.889764 containerd[1512]: time="2026-03-14T00:50:12.889192856Z" level=info msg="Connect containerd service"
Mar 14 00:50:12.889764 containerd[1512]: time="2026-03-14T00:50:12.889250448Z" level=info msg="using legacy CRI server"
Mar 14 00:50:12.889764 containerd[1512]: time="2026-03-14T00:50:12.889258282Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Mar 14 00:50:12.889764 containerd[1512]: time="2026-03-14T00:50:12.889383350Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Mar 14 00:50:12.892742 containerd[1512]: time="2026-03-14T00:50:12.892140024Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 14 00:50:12.892742 containerd[1512]: time="2026-03-14T00:50:12.892287768Z" level=info msg="Start subscribing containerd event"
Mar 14 00:50:12.892742 containerd[1512]: time="2026-03-14T00:50:12.892337447Z" level=info msg="Start recovering state"
Mar 14 00:50:12.892742 containerd[1512]: time="2026-03-14T00:50:12.892414787Z" level=info msg="Start event monitor"
Mar 14 00:50:12.892742 containerd[1512]: time="2026-03-14T00:50:12.892440845Z" level=info msg="Start snapshots syncer"
Mar 14 00:50:12.892742 containerd[1512]: time="2026-03-14T00:50:12.892453148Z" level=info msg="Start cni network conf syncer for default"
Mar 14 00:50:12.892742 containerd[1512]: time="2026-03-14T00:50:12.892463303Z" level=info msg="Start streaming server"
Mar 14 00:50:12.896600 containerd[1512]: time="2026-03-14T00:50:12.894849929Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Mar 14 00:50:12.896600 containerd[1512]: time="2026-03-14T00:50:12.894911983Z" level=info msg=serving... address=/run/containerd/containerd.sock
Mar 14 00:50:12.896600 containerd[1512]: time="2026-03-14T00:50:12.895445542Z" level=info msg="containerd successfully booted in 0.100453s"
Mar 14 00:50:12.895570 systemd[1]: Started containerd.service - containerd container runtime.
Mar 14 00:50:12.961265 sshd_keygen[1517]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Mar 14 00:50:13.006282 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Mar 14 00:50:13.020110 systemd[1]: Starting issuegen.service - Generate /run/issue...
Mar 14 00:50:13.032259 systemd[1]: issuegen.service: Deactivated successfully.
Mar 14 00:50:13.032467 systemd[1]: Finished issuegen.service - Generate /run/issue.
Mar 14 00:50:13.038115 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Mar 14 00:50:13.053878 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Mar 14 00:50:13.060377 systemd[1]: Started getty@tty1.service - Getty on tty1.
Mar 14 00:50:13.065027 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Mar 14 00:50:13.065867 systemd[1]: Reached target getty.target - Login Prompts.
Mar 14 00:50:13.199478 systemd-networkd[1439]: eth0: Gained IPv6LL
Mar 14 00:50:13.201848 systemd-timesyncd[1408]: Network configuration changed, trying to establish connection.
Mar 14 00:50:13.205613 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Mar 14 00:50:13.211483 systemd[1]: Reached target network-online.target - Network is Online.
Mar 14 00:50:13.223025 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 14 00:50:13.225144 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Mar 14 00:50:13.278780 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Mar 14 00:50:13.281306 tar[1495]: linux-amd64/README.md
Mar 14 00:50:13.294001 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Mar 14 00:50:14.161004 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 14 00:50:14.162546 (kubelet)[1605]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 14 00:50:14.696503 kubelet[1605]: E0314 00:50:14.696398 1605 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 14 00:50:14.701788 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 14 00:50:14.702294 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 14 00:50:14.703175 systemd[1]: kubelet.service: Consumed 1.142s CPU time.
Mar 14 00:50:14.706215 systemd-timesyncd[1408]: Network configuration changed, trying to establish connection.
Mar 14 00:50:14.707460 systemd-networkd[1439]: eth0: Ignoring DHCPv6 address 2a02:1348:17d:1955:24:19ff:fef4:6556/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:17d:1955:24:19ff:fef4:6556/64 assigned by NDisc.
Mar 14 00:50:14.707464 systemd-networkd[1439]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no.
Mar 14 00:50:15.761517 systemd-timesyncd[1408]: Network configuration changed, trying to establish connection.
Mar 14 00:50:18.126856 login[1583]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying
Mar 14 00:50:18.127317 login[1582]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Mar 14 00:50:18.139319 systemd-logind[1491]: New session 2 of user core.
Mar 14 00:50:18.141453 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Mar 14 00:50:18.150119 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Mar 14 00:50:18.165634 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Mar 14 00:50:18.177612 systemd[1]: Starting user@500.service - User Manager for UID 500...
Mar 14 00:50:18.181549 (systemd)[1621]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Mar 14 00:50:18.290126 systemd[1621]: Queued start job for default target default.target.
Mar 14 00:50:18.300929 systemd[1621]: Created slice app.slice - User Application Slice.
Mar 14 00:50:18.301086 systemd[1621]: Reached target paths.target - Paths.
Mar 14 00:50:18.301105 systemd[1621]: Reached target timers.target - Timers.
Mar 14 00:50:18.303132 systemd[1621]: Starting dbus.socket - D-Bus User Message Bus Socket...
Mar 14 00:50:18.320301 systemd[1621]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Mar 14 00:50:18.320583 systemd[1621]: Reached target sockets.target - Sockets.
Mar 14 00:50:18.320646 systemd[1621]: Reached target basic.target - Basic System.
Mar 14 00:50:18.320802 systemd[1621]: Reached target default.target - Main User Target.
Mar 14 00:50:18.320915 systemd[1621]: Startup finished in 130ms.
Mar 14 00:50:18.320996 systemd[1]: Started user@500.service - User Manager for UID 500.
Mar 14 00:50:18.333072 systemd[1]: Started session-2.scope - Session 2 of User core.
Mar 14 00:50:19.129768 login[1583]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Mar 14 00:50:19.135962 systemd-logind[1491]: New session 1 of user core.
Mar 14 00:50:19.144043 systemd[1]: Started session-1.scope - Session 1 of User core.
Mar 14 00:50:19.292960 coreos-metadata[1481]: Mar 14 00:50:19.292 WARN failed to locate config-drive, using the metadata service API instead
Mar 14 00:50:19.316095 coreos-metadata[1481]: Mar 14 00:50:19.316 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1
Mar 14 00:50:19.327979 coreos-metadata[1481]: Mar 14 00:50:19.327 INFO Fetch failed with 404: resource not found
Mar 14 00:50:19.327979 coreos-metadata[1481]: Mar 14 00:50:19.327 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Mar 14 00:50:19.328758 coreos-metadata[1481]: Mar 14 00:50:19.328 INFO Fetch successful
Mar 14 00:50:19.328758 coreos-metadata[1481]: Mar 14 00:50:19.328 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1
Mar 14 00:50:19.342279 coreos-metadata[1481]: Mar 14 00:50:19.342 INFO Fetch successful
Mar 14 00:50:19.342279 coreos-metadata[1481]: Mar 14 00:50:19.342 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1
Mar 14 00:50:19.355560 coreos-metadata[1481]: Mar 14 00:50:19.355 INFO Fetch successful
Mar 14 00:50:19.355560 coreos-metadata[1481]: Mar 14 00:50:19.355 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1
Mar 14 00:50:19.394127 coreos-metadata[1481]: Mar 14 00:50:19.393 INFO Fetch successful
Mar 14 00:50:19.394317 coreos-metadata[1481]: Mar 14 00:50:19.394 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1
Mar 14 00:50:19.418461 coreos-metadata[1481]: Mar 14 00:50:19.418 INFO Fetch successful
Mar 14 00:50:19.448429 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Mar 14 00:50:19.449247 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Mar 14 00:50:19.693618 coreos-metadata[1545]: Mar 14 00:50:19.693 WARN failed to locate config-drive, using the metadata service API instead
Mar 14 00:50:19.717560 coreos-metadata[1545]: Mar 14 00:50:19.717 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1
Mar 14 00:50:19.749499 coreos-metadata[1545]: Mar 14 00:50:19.749 INFO Fetch successful
Mar 14 00:50:19.749822 coreos-metadata[1545]: Mar 14 00:50:19.749 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1
Mar 14 00:50:19.782798 coreos-metadata[1545]: Mar 14 00:50:19.782 INFO Fetch successful
Mar 14 00:50:19.784463 unknown[1545]: wrote ssh authorized keys file for user: core
Mar 14 00:50:19.818016 update-ssh-keys[1658]: Updated "/home/core/.ssh/authorized_keys"
Mar 14 00:50:19.818784 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Mar 14 00:50:19.822594 systemd[1]: Finished sshkeys.service.
Mar 14 00:50:19.824380 systemd[1]: Reached target multi-user.target - Multi-User System.
Mar 14 00:50:19.824874 systemd[1]: Startup finished in 1.177s (kernel) + 14.365s (initrd) + 10.762s (userspace) = 26.305s.
Mar 14 00:50:22.251811 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Mar 14 00:50:22.260406 systemd[1]: Started sshd@0-10.244.101.86:22-20.161.92.111:41838.service - OpenSSH per-connection server daemon (20.161.92.111:41838).
Mar 14 00:50:22.821399 sshd[1663]: Accepted publickey for core from 20.161.92.111 port 41838 ssh2: RSA SHA256:G3DxPtudQCSC+zb3xt9jRLB1yvq/SeDG59+4Mc6l5RQ
Mar 14 00:50:22.825129 sshd[1663]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:50:22.836035 systemd-logind[1491]: New session 3 of user core.
Mar 14 00:50:22.842937 systemd[1]: Started session-3.scope - Session 3 of User core.
Mar 14 00:50:23.323038 systemd[1]: Started sshd@1-10.244.101.86:22-20.161.92.111:41840.service - OpenSSH per-connection server daemon (20.161.92.111:41840).
Mar 14 00:50:24.454797 sshd[1668]: Accepted publickey for core from 20.161.92.111 port 41840 ssh2: RSA SHA256:G3DxPtudQCSC+zb3xt9jRLB1yvq/SeDG59+4Mc6l5RQ
Mar 14 00:50:24.460094 sshd[1668]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:50:24.467213 systemd-logind[1491]: New session 4 of user core.
Mar 14 00:50:24.475895 systemd[1]: Started session-4.scope - Session 4 of User core.
Mar 14 00:50:24.776262 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Mar 14 00:50:24.782993 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 14 00:50:24.933594 sshd[1668]: pam_unix(sshd:session): session closed for user core
Mar 14 00:50:24.939899 systemd[1]: sshd@1-10.244.101.86:22-20.161.92.111:41840.service: Deactivated successfully.
Mar 14 00:50:24.943142 systemd[1]: session-4.scope: Deactivated successfully.
Mar 14 00:50:24.945927 systemd-logind[1491]: Session 4 logged out. Waiting for processes to exit.
Mar 14 00:50:24.947455 systemd-logind[1491]: Removed session 4.
Mar 14 00:50:24.961663 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 14 00:50:24.977280 (kubelet)[1682]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 14 00:50:25.035999 systemd[1]: Started sshd@2-10.244.101.86:22-20.161.92.111:41848.service - OpenSSH per-connection server daemon (20.161.92.111:41848).
Mar 14 00:50:25.041430 kubelet[1682]: E0314 00:50:25.041237 1682 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 14 00:50:25.047965 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 14 00:50:25.048149 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 14 00:50:25.595651 sshd[1689]: Accepted publickey for core from 20.161.92.111 port 41848 ssh2: RSA SHA256:G3DxPtudQCSC+zb3xt9jRLB1yvq/SeDG59+4Mc6l5RQ Mar 14 00:50:25.601138 sshd[1689]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:50:25.611785 systemd-logind[1491]: New session 5 of user core. Mar 14 00:50:25.622912 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 14 00:50:25.989757 sshd[1689]: pam_unix(sshd:session): session closed for user core Mar 14 00:50:25.999320 systemd-logind[1491]: Session 5 logged out. Waiting for processes to exit. Mar 14 00:50:26.001142 systemd[1]: sshd@2-10.244.101.86:22-20.161.92.111:41848.service: Deactivated successfully. Mar 14 00:50:26.003904 systemd[1]: session-5.scope: Deactivated successfully. Mar 14 00:50:26.006270 systemd-logind[1491]: Removed session 5. Mar 14 00:50:26.098119 systemd[1]: Started sshd@3-10.244.101.86:22-20.161.92.111:41862.service - OpenSSH per-connection server daemon (20.161.92.111:41862). Mar 14 00:50:26.665077 sshd[1697]: Accepted publickey for core from 20.161.92.111 port 41862 ssh2: RSA SHA256:G3DxPtudQCSC+zb3xt9jRLB1yvq/SeDG59+4Mc6l5RQ Mar 14 00:50:26.668487 sshd[1697]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:50:26.680770 systemd-logind[1491]: New session 6 of user core. 
Mar 14 00:50:26.689960 systemd[1]: Started session-6.scope - Session 6 of User core. Mar 14 00:50:27.068141 sshd[1697]: pam_unix(sshd:session): session closed for user core Mar 14 00:50:27.079512 systemd-logind[1491]: Session 6 logged out. Waiting for processes to exit. Mar 14 00:50:27.080842 systemd[1]: sshd@3-10.244.101.86:22-20.161.92.111:41862.service: Deactivated successfully. Mar 14 00:50:27.084065 systemd[1]: session-6.scope: Deactivated successfully. Mar 14 00:50:27.085331 systemd-logind[1491]: Removed session 6. Mar 14 00:50:27.180137 systemd[1]: Started sshd@4-10.244.101.86:22-20.161.92.111:41866.service - OpenSSH per-connection server daemon (20.161.92.111:41866). Mar 14 00:50:27.753337 sshd[1704]: Accepted publickey for core from 20.161.92.111 port 41866 ssh2: RSA SHA256:G3DxPtudQCSC+zb3xt9jRLB1yvq/SeDG59+4Mc6l5RQ Mar 14 00:50:27.757324 sshd[1704]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:50:27.765531 systemd-logind[1491]: New session 7 of user core. Mar 14 00:50:27.775932 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 14 00:50:28.082336 sudo[1707]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 14 00:50:28.082683 sudo[1707]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 14 00:50:28.097167 sudo[1707]: pam_unix(sudo:session): session closed for user root Mar 14 00:50:28.187365 sshd[1704]: pam_unix(sshd:session): session closed for user core Mar 14 00:50:28.198083 systemd[1]: sshd@4-10.244.101.86:22-20.161.92.111:41866.service: Deactivated successfully. Mar 14 00:50:28.202309 systemd[1]: session-7.scope: Deactivated successfully. Mar 14 00:50:28.204354 systemd-logind[1491]: Session 7 logged out. Waiting for processes to exit. Mar 14 00:50:28.206319 systemd-logind[1491]: Removed session 7. 
Mar 14 00:50:28.292102 systemd[1]: Started sshd@5-10.244.101.86:22-20.161.92.111:41874.service - OpenSSH per-connection server daemon (20.161.92.111:41874). Mar 14 00:50:28.861663 sshd[1712]: Accepted publickey for core from 20.161.92.111 port 41874 ssh2: RSA SHA256:G3DxPtudQCSC+zb3xt9jRLB1yvq/SeDG59+4Mc6l5RQ Mar 14 00:50:28.862857 sshd[1712]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:50:28.869893 systemd-logind[1491]: New session 8 of user core. Mar 14 00:50:28.871900 systemd[1]: Started session-8.scope - Session 8 of User core. Mar 14 00:50:29.179075 sudo[1716]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 14 00:50:29.179453 sudo[1716]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 14 00:50:29.185781 sudo[1716]: pam_unix(sudo:session): session closed for user root Mar 14 00:50:29.192159 sudo[1715]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Mar 14 00:50:29.192463 sudo[1715]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 14 00:50:29.209983 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Mar 14 00:50:29.213658 auditctl[1719]: No rules Mar 14 00:50:29.214181 systemd[1]: audit-rules.service: Deactivated successfully. Mar 14 00:50:29.214395 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Mar 14 00:50:29.216982 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Mar 14 00:50:29.279467 augenrules[1737]: No rules Mar 14 00:50:29.282479 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. 
Mar 14 00:50:29.284858 sudo[1715]: pam_unix(sudo:session): session closed for user root Mar 14 00:50:29.374632 sshd[1712]: pam_unix(sshd:session): session closed for user core Mar 14 00:50:29.382423 systemd[1]: sshd@5-10.244.101.86:22-20.161.92.111:41874.service: Deactivated successfully. Mar 14 00:50:29.385671 systemd[1]: session-8.scope: Deactivated successfully. Mar 14 00:50:29.388157 systemd-logind[1491]: Session 8 logged out. Waiting for processes to exit. Mar 14 00:50:29.390437 systemd-logind[1491]: Removed session 8. Mar 14 00:50:29.486466 systemd[1]: Started sshd@6-10.244.101.86:22-20.161.92.111:41878.service - OpenSSH per-connection server daemon (20.161.92.111:41878). Mar 14 00:50:30.070747 sshd[1745]: Accepted publickey for core from 20.161.92.111 port 41878 ssh2: RSA SHA256:G3DxPtudQCSC+zb3xt9jRLB1yvq/SeDG59+4Mc6l5RQ Mar 14 00:50:30.071896 sshd[1745]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:50:30.079516 systemd-logind[1491]: New session 9 of user core. Mar 14 00:50:30.092982 systemd[1]: Started session-9.scope - Session 9 of User core. Mar 14 00:50:30.393916 sudo[1748]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 14 00:50:30.394272 sudo[1748]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 14 00:50:30.837252 (dockerd)[1763]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 14 00:50:30.837951 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 14 00:50:31.263564 dockerd[1763]: time="2026-03-14T00:50:31.263185114Z" level=info msg="Starting up" Mar 14 00:50:31.388372 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2793302897-merged.mount: Deactivated successfully. Mar 14 00:50:31.401559 systemd[1]: var-lib-docker-metacopy\x2dcheck2407014576-merged.mount: Deactivated successfully. 
Mar 14 00:50:31.420363 dockerd[1763]: time="2026-03-14T00:50:31.420122758Z" level=info msg="Loading containers: start." Mar 14 00:50:31.544772 kernel: Initializing XFRM netlink socket Mar 14 00:50:31.583102 systemd-timesyncd[1408]: Network configuration changed, trying to establish connection. Mar 14 00:50:31.665069 systemd-networkd[1439]: docker0: Link UP Mar 14 00:50:31.683232 dockerd[1763]: time="2026-03-14T00:50:31.683147924Z" level=info msg="Loading containers: done." Mar 14 00:50:31.706165 dockerd[1763]: time="2026-03-14T00:50:31.706076259Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 14 00:50:31.706480 dockerd[1763]: time="2026-03-14T00:50:31.706254529Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Mar 14 00:50:31.706480 dockerd[1763]: time="2026-03-14T00:50:31.706418723Z" level=info msg="Daemon has completed initialization" Mar 14 00:50:31.744018 dockerd[1763]: time="2026-03-14T00:50:31.743859355Z" level=info msg="API listen on /run/docker.sock" Mar 14 00:50:31.744124 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 14 00:50:32.507497 systemd-resolved[1391]: Clock change detected. Flushing caches. Mar 14 00:50:32.508163 systemd-timesyncd[1408]: Contacted time server [2a00:da00:f411:2900::123]:123 (2.flatcar.pool.ntp.org). Mar 14 00:50:32.508317 systemd-timesyncd[1408]: Initial clock synchronization to Sat 2026-03-14 00:50:32.506337 UTC. Mar 14 00:50:32.905893 containerd[1512]: time="2026-03-14T00:50:32.905704853Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.5\"" Mar 14 00:50:33.545295 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1856452379.mount: Deactivated successfully. 
Mar 14 00:50:35.314045 containerd[1512]: time="2026-03-14T00:50:35.313824260Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:50:35.316600 containerd[1512]: time="2026-03-14T00:50:35.316468437Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.5: active requests=0, bytes read=27074505" Mar 14 00:50:35.319287 containerd[1512]: time="2026-03-14T00:50:35.319229888Z" level=info msg="ImageCreate event name:\"sha256:364ea2876e41b29691964751b6217cd2e343433690fbe16a5c6a236042684df3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:50:35.322515 containerd[1512]: time="2026-03-14T00:50:35.322455174Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:c548633fcd3b4aad59b70815be4c8be54a0fddaddc3fcffa9371eedb0e96417a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:50:35.324667 containerd[1512]: time="2026-03-14T00:50:35.324166454Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.5\" with image id \"sha256:364ea2876e41b29691964751b6217cd2e343433690fbe16a5c6a236042684df3\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:c548633fcd3b4aad59b70815be4c8be54a0fddaddc3fcffa9371eedb0e96417a\", size \"27071096\" in 2.418372079s" Mar 14 00:50:35.324667 containerd[1512]: time="2026-03-14T00:50:35.324671179Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.5\" returns image reference \"sha256:364ea2876e41b29691964751b6217cd2e343433690fbe16a5c6a236042684df3\"" Mar 14 00:50:35.326900 containerd[1512]: time="2026-03-14T00:50:35.326487253Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.5\"" Mar 14 00:50:35.938165 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Mar 14 00:50:35.957561 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 14 00:50:36.112878 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 14 00:50:36.125627 (kubelet)[1971]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 14 00:50:36.181667 kubelet[1971]: E0314 00:50:36.181606 1971 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 14 00:50:36.184003 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 14 00:50:36.184170 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 14 00:50:38.127311 containerd[1512]: time="2026-03-14T00:50:38.125872586Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:50:38.130586 containerd[1512]: time="2026-03-14T00:50:38.130226932Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.5: active requests=0, bytes read=21165831" Mar 14 00:50:38.133268 containerd[1512]: time="2026-03-14T00:50:38.131970078Z" level=info msg="ImageCreate event name:\"sha256:8926c34822743bb97f9003f92c30127bfeaad8bed71cd36f1c861ed8fda2c154\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:50:38.136964 containerd[1512]: time="2026-03-14T00:50:38.136912509Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f0426100c873816560c520d542fa28999a98dad909edd04365f3b0eead790da3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:50:38.138171 containerd[1512]: time="2026-03-14T00:50:38.138134792Z" 
level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.5\" with image id \"sha256:8926c34822743bb97f9003f92c30127bfeaad8bed71cd36f1c861ed8fda2c154\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f0426100c873816560c520d542fa28999a98dad909edd04365f3b0eead790da3\", size \"22822771\" in 2.811610404s" Mar 14 00:50:38.138278 containerd[1512]: time="2026-03-14T00:50:38.138212646Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.5\" returns image reference \"sha256:8926c34822743bb97f9003f92c30127bfeaad8bed71cd36f1c861ed8fda2c154\"" Mar 14 00:50:38.140242 containerd[1512]: time="2026-03-14T00:50:38.139972894Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.5\"" Mar 14 00:50:39.706723 containerd[1512]: time="2026-03-14T00:50:39.706643273Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:50:39.708904 containerd[1512]: time="2026-03-14T00:50:39.708833241Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.5: active requests=0, bytes read=15729832" Mar 14 00:50:39.709297 containerd[1512]: time="2026-03-14T00:50:39.709258430Z" level=info msg="ImageCreate event name:\"sha256:f6b3520b1732b4980b2528fe5622e62be26bb6a8d38da81349cb6ccd3a1e6d65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:50:39.712970 containerd[1512]: time="2026-03-14T00:50:39.712855440Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b67b0d627c8e99ffa362bd4d9a60ca9a6c449e363a5f88d2aa8c224bd84ca51d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:50:39.715302 containerd[1512]: time="2026-03-14T00:50:39.715259829Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.5\" with image id 
\"sha256:f6b3520b1732b4980b2528fe5622e62be26bb6a8d38da81349cb6ccd3a1e6d65\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b67b0d627c8e99ffa362bd4d9a60ca9a6c449e363a5f88d2aa8c224bd84ca51d\", size \"17386790\" in 1.574712347s" Mar 14 00:50:39.715302 containerd[1512]: time="2026-03-14T00:50:39.715303988Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.5\" returns image reference \"sha256:f6b3520b1732b4980b2528fe5622e62be26bb6a8d38da81349cb6ccd3a1e6d65\"" Mar 14 00:50:39.716620 containerd[1512]: time="2026-03-14T00:50:39.716596083Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.5\"" Mar 14 00:50:40.947257 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount223344519.mount: Deactivated successfully. Mar 14 00:50:41.328608 containerd[1512]: time="2026-03-14T00:50:41.327947800Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:50:41.329584 containerd[1512]: time="2026-03-14T00:50:41.328775888Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.5: active requests=0, bytes read=25861778" Mar 14 00:50:41.330652 containerd[1512]: time="2026-03-14T00:50:41.330577028Z" level=info msg="ImageCreate event name:\"sha256:38728cde323c302ed9eca4f1b7c0080d17db50144e39398fcf901d9df13f0c3e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:50:41.332739 containerd[1512]: time="2026-03-14T00:50:41.332679479Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8a22a3bf452d07af3b5a3064b089d2ad6579d5dd3b850386e05cc0f36dc3f4cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:50:41.334144 containerd[1512]: time="2026-03-14T00:50:41.333953140Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.5\" with image id \"sha256:38728cde323c302ed9eca4f1b7c0080d17db50144e39398fcf901d9df13f0c3e\", repo tag 
\"registry.k8s.io/kube-proxy:v1.34.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:8a22a3bf452d07af3b5a3064b089d2ad6579d5dd3b850386e05cc0f36dc3f4cf\", size \"25860789\" in 1.617323614s" Mar 14 00:50:41.334144 containerd[1512]: time="2026-03-14T00:50:41.333993860Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.5\" returns image reference \"sha256:38728cde323c302ed9eca4f1b7c0080d17db50144e39398fcf901d9df13f0c3e\"" Mar 14 00:50:41.336378 containerd[1512]: time="2026-03-14T00:50:41.336254886Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Mar 14 00:50:41.968377 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1799235881.mount: Deactivated successfully. Mar 14 00:50:43.351214 containerd[1512]: time="2026-03-14T00:50:43.350800592Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:50:43.353238 containerd[1512]: time="2026-03-14T00:50:43.353141732Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388015" Mar 14 00:50:43.354406 containerd[1512]: time="2026-03-14T00:50:43.353925577Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:50:43.357470 containerd[1512]: time="2026-03-14T00:50:43.357426157Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:50:43.360829 containerd[1512]: time="2026-03-14T00:50:43.360781823Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 2.024487729s" Mar 14 00:50:43.361014 containerd[1512]: time="2026-03-14T00:50:43.360990309Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Mar 14 00:50:43.362691 containerd[1512]: time="2026-03-14T00:50:43.362658664Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Mar 14 00:50:43.895987 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1685840536.mount: Deactivated successfully. Mar 14 00:50:43.901635 containerd[1512]: time="2026-03-14T00:50:43.900598187Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:50:43.902086 containerd[1512]: time="2026-03-14T00:50:43.901971127Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321226" Mar 14 00:50:43.902916 containerd[1512]: time="2026-03-14T00:50:43.902862256Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:50:43.908433 containerd[1512]: time="2026-03-14T00:50:43.908339186Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:50:43.910176 containerd[1512]: time="2026-03-14T00:50:43.910103061Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 
547.401478ms" Mar 14 00:50:43.910500 containerd[1512]: time="2026-03-14T00:50:43.910456464Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Mar 14 00:50:43.911403 containerd[1512]: time="2026-03-14T00:50:43.911310354Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\"" Mar 14 00:50:44.475441 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4182094610.mount: Deactivated successfully. Mar 14 00:50:45.371002 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Mar 14 00:50:45.475293 containerd[1512]: time="2026-03-14T00:50:45.475240436Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:50:45.477209 containerd[1512]: time="2026-03-14T00:50:45.476343757Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.5-0: active requests=0, bytes read=22860682" Mar 14 00:50:45.477209 containerd[1512]: time="2026-03-14T00:50:45.476497448Z" level=info msg="ImageCreate event name:\"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:50:45.479459 containerd[1512]: time="2026-03-14T00:50:45.479412907Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:50:45.480803 containerd[1512]: time="2026-03-14T00:50:45.480580985Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.5-0\" with image id \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\", repo tag \"registry.k8s.io/etcd:3.6.5-0\", repo digest \"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\", size \"22871747\" in 1.569227979s" Mar 14 00:50:45.480803 containerd[1512]: 
time="2026-03-14T00:50:45.480621029Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\" returns image reference \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\"" Mar 14 00:50:46.303562 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Mar 14 00:50:46.312599 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 14 00:50:46.451353 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 14 00:50:46.457827 (kubelet)[2142]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 14 00:50:46.513464 kubelet[2142]: E0314 00:50:46.511297 2142 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 14 00:50:46.513018 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 14 00:50:46.513174 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 14 00:50:48.723139 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 14 00:50:48.738839 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 14 00:50:48.782903 systemd[1]: Reloading requested from client PID 2157 ('systemctl') (unit session-9.scope)... Mar 14 00:50:48.782928 systemd[1]: Reloading... Mar 14 00:50:48.907811 zram_generator::config[2197]: No configuration found. Mar 14 00:50:49.058135 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 14 00:50:49.138227 systemd[1]: Reloading finished in 354 ms. 
Mar 14 00:50:49.197893 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Mar 14 00:50:49.198215 systemd[1]: kubelet.service: Failed with result 'signal'. Mar 14 00:50:49.198551 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 14 00:50:49.204442 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 14 00:50:49.346394 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 14 00:50:49.358254 (kubelet)[2263]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 14 00:50:49.426376 kubelet[2263]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 14 00:50:49.426376 kubelet[2263]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 14 00:50:49.426825 kubelet[2263]: I0314 00:50:49.426410 2263 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 14 00:50:49.667905 kubelet[2263]: I0314 00:50:49.667743 2263 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Mar 14 00:50:49.667905 kubelet[2263]: I0314 00:50:49.667809 2263 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 14 00:50:49.674198 kubelet[2263]: I0314 00:50:49.674084 2263 watchdog_linux.go:95] "Systemd watchdog is not enabled" Mar 14 00:50:49.674355 kubelet[2263]: I0314 00:50:49.674223 2263 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Mar 14 00:50:49.674858 kubelet[2263]: I0314 00:50:49.674817 2263 server.go:956] "Client rotation is on, will bootstrap in background" Mar 14 00:50:49.684220 kubelet[2263]: E0314 00:50:49.683662 2263 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.244.101.86:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.244.101.86:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 14 00:50:49.686210 kubelet[2263]: I0314 00:50:49.686153 2263 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 14 00:50:49.698666 kubelet[2263]: E0314 00:50:49.698618 2263 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 14 00:50:49.698798 kubelet[2263]: I0314 00:50:49.698695 2263 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Mar 14 00:50:49.703839 kubelet[2263]: I0314 00:50:49.703812 2263 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Mar 14 00:50:49.706086 kubelet[2263]: I0314 00:50:49.705526 2263 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 14 00:50:49.706086 kubelet[2263]: I0314 00:50:49.705567 2263 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-avwyp.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 14 00:50:49.706086 kubelet[2263]: I0314 00:50:49.705766 2263 topology_manager.go:138] "Creating topology manager with none policy" Mar 14 
00:50:49.706086 kubelet[2263]: I0314 00:50:49.705777 2263 container_manager_linux.go:306] "Creating device plugin manager" Mar 14 00:50:49.706462 kubelet[2263]: I0314 00:50:49.705915 2263 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Mar 14 00:50:49.707200 kubelet[2263]: I0314 00:50:49.707158 2263 state_mem.go:36] "Initialized new in-memory state store" Mar 14 00:50:49.707520 kubelet[2263]: I0314 00:50:49.707509 2263 kubelet.go:475] "Attempting to sync node with API server" Mar 14 00:50:49.707605 kubelet[2263]: I0314 00:50:49.707597 2263 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 14 00:50:49.707676 kubelet[2263]: I0314 00:50:49.707670 2263 kubelet.go:387] "Adding apiserver pod source" Mar 14 00:50:49.709445 kubelet[2263]: I0314 00:50:49.709423 2263 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 14 00:50:49.712362 kubelet[2263]: E0314 00:50:49.711988 2263 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.244.101.86:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-avwyp.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.244.101.86:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 14 00:50:49.713207 kubelet[2263]: I0314 00:50:49.712564 2263 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 14 00:50:49.713207 kubelet[2263]: I0314 00:50:49.713158 2263 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 14 00:50:49.713345 kubelet[2263]: I0314 00:50:49.713335 2263 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Mar 14 00:50:49.713465 kubelet[2263]: W0314 
00:50:49.713455 2263 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Mar 14 00:50:49.716784 kubelet[2263]: I0314 00:50:49.716765 2263 server.go:1262] "Started kubelet" Mar 14 00:50:49.717048 kubelet[2263]: E0314 00:50:49.717029 2263 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.244.101.86:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.244.101.86:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 14 00:50:49.717604 kubelet[2263]: I0314 00:50:49.717580 2263 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 14 00:50:49.718725 kubelet[2263]: I0314 00:50:49.718704 2263 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 14 00:50:49.718863 kubelet[2263]: I0314 00:50:49.718838 2263 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 14 00:50:49.718959 kubelet[2263]: I0314 00:50:49.718946 2263 server_v1.go:49] "podresources" method="list" useActivePods=true Mar 14 00:50:49.719361 kubelet[2263]: I0314 00:50:49.719347 2263 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 14 00:50:49.722395 kubelet[2263]: I0314 00:50:49.718709 2263 server.go:310] "Adding debug handlers to kubelet server" Mar 14 00:50:49.732576 kubelet[2263]: E0314 00:50:49.729684 2263 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.244.101.86:6443/api/v1/namespaces/default/events\": dial tcp 10.244.101.86:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-avwyp.gb1.brightbox.com.189c8ee0dd7a4fc3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-avwyp.gb1.brightbox.com,UID:srv-avwyp.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-avwyp.gb1.brightbox.com,},FirstTimestamp:2026-03-14 00:50:49.716731843 +0000 UTC m=+0.345606470,LastTimestamp:2026-03-14 00:50:49.716731843 +0000 UTC m=+0.345606470,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-avwyp.gb1.brightbox.com,}" Mar 14 00:50:49.733431 kubelet[2263]: I0314 00:50:49.733393 2263 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 14 00:50:49.734213 kubelet[2263]: I0314 00:50:49.733845 2263 volume_manager.go:313] "Starting Kubelet Volume Manager" Mar 14 00:50:49.734213 kubelet[2263]: E0314 00:50:49.734070 2263 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"srv-avwyp.gb1.brightbox.com\" not found" Mar 14 00:50:49.738434 kubelet[2263]: E0314 00:50:49.738368 2263 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.101.86:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-avwyp.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.101.86:6443: connect: connection refused" interval="200ms" Mar 14 00:50:49.741216 kubelet[2263]: I0314 00:50:49.738571 2263 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 14 00:50:49.741216 kubelet[2263]: I0314 00:50:49.738636 2263 reconciler.go:29] "Reconciler: start to sync state" Mar 14 00:50:49.741216 kubelet[2263]: I0314 00:50:49.741069 2263 factory.go:223] Registration of the systemd container factory successfully Mar 14 00:50:49.741903 kubelet[2263]: I0314 00:50:49.741873 2263 factory.go:221] Registration of the crio container factory failed: 
Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 14 00:50:49.744589 kubelet[2263]: I0314 00:50:49.744571 2263 factory.go:223] Registration of the containerd container factory successfully Mar 14 00:50:49.753934 kubelet[2263]: I0314 00:50:49.753883 2263 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Mar 14 00:50:49.755478 kubelet[2263]: I0314 00:50:49.755445 2263 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Mar 14 00:50:49.755478 kubelet[2263]: I0314 00:50:49.755483 2263 status_manager.go:244] "Starting to sync pod status with apiserver" Mar 14 00:50:49.755599 kubelet[2263]: I0314 00:50:49.755513 2263 kubelet.go:2428] "Starting kubelet main sync loop" Mar 14 00:50:49.755599 kubelet[2263]: E0314 00:50:49.755557 2263 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 14 00:50:49.762894 kubelet[2263]: E0314 00:50:49.762857 2263 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.244.101.86:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.244.101.86:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 14 00:50:49.763027 kubelet[2263]: E0314 00:50:49.762975 2263 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.244.101.86:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.244.101.86:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 14 00:50:49.774215 kubelet[2263]: E0314 00:50:49.773954 2263 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 14 00:50:49.781951 kubelet[2263]: I0314 00:50:49.781926 2263 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 14 00:50:49.782745 kubelet[2263]: I0314 00:50:49.782520 2263 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 14 00:50:49.782894 kubelet[2263]: I0314 00:50:49.782883 2263 state_mem.go:36] "Initialized new in-memory state store" Mar 14 00:50:49.785816 kubelet[2263]: I0314 00:50:49.785801 2263 policy_none.go:49] "None policy: Start" Mar 14 00:50:49.785926 kubelet[2263]: I0314 00:50:49.785917 2263 memory_manager.go:187] "Starting memorymanager" policy="None" Mar 14 00:50:49.785987 kubelet[2263]: I0314 00:50:49.785979 2263 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Mar 14 00:50:49.786725 kubelet[2263]: I0314 00:50:49.786712 2263 policy_none.go:47] "Start" Mar 14 00:50:49.795065 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Mar 14 00:50:49.810292 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Mar 14 00:50:49.814878 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Mar 14 00:50:49.825228 kubelet[2263]: E0314 00:50:49.825111 2263 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 14 00:50:49.825463 kubelet[2263]: I0314 00:50:49.825373 2263 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 14 00:50:49.825463 kubelet[2263]: I0314 00:50:49.825390 2263 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 14 00:50:49.826051 kubelet[2263]: I0314 00:50:49.826035 2263 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 14 00:50:49.828621 kubelet[2263]: E0314 00:50:49.828601 2263 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 14 00:50:49.828727 kubelet[2263]: E0314 00:50:49.828653 2263 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"srv-avwyp.gb1.brightbox.com\" not found" Mar 14 00:50:49.875560 systemd[1]: Created slice kubepods-burstable-podb5f277d80d0bc595e7ac740533e0903f.slice - libcontainer container kubepods-burstable-podb5f277d80d0bc595e7ac740533e0903f.slice. Mar 14 00:50:49.894083 kubelet[2263]: E0314 00:50:49.893147 2263 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-avwyp.gb1.brightbox.com\" not found" node="srv-avwyp.gb1.brightbox.com" Mar 14 00:50:49.897109 systemd[1]: Created slice kubepods-burstable-podd394621b7b3113d0e913ba576a405934.slice - libcontainer container kubepods-burstable-podd394621b7b3113d0e913ba576a405934.slice. 
Mar 14 00:50:49.901575 kubelet[2263]: E0314 00:50:49.900759 2263 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-avwyp.gb1.brightbox.com\" not found" node="srv-avwyp.gb1.brightbox.com" Mar 14 00:50:49.904911 systemd[1]: Created slice kubepods-burstable-pode5f417885d6a95f6036a82e462786f8b.slice - libcontainer container kubepods-burstable-pode5f417885d6a95f6036a82e462786f8b.slice. Mar 14 00:50:49.907416 kubelet[2263]: E0314 00:50:49.907392 2263 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-avwyp.gb1.brightbox.com\" not found" node="srv-avwyp.gb1.brightbox.com" Mar 14 00:50:49.928851 kubelet[2263]: I0314 00:50:49.928716 2263 kubelet_node_status.go:75] "Attempting to register node" node="srv-avwyp.gb1.brightbox.com" Mar 14 00:50:49.929390 kubelet[2263]: E0314 00:50:49.929349 2263 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.244.101.86:6443/api/v1/nodes\": dial tcp 10.244.101.86:6443: connect: connection refused" node="srv-avwyp.gb1.brightbox.com" Mar 14 00:50:49.941422 kubelet[2263]: I0314 00:50:49.941215 2263 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b5f277d80d0bc595e7ac740533e0903f-usr-share-ca-certificates\") pod \"kube-apiserver-srv-avwyp.gb1.brightbox.com\" (UID: \"b5f277d80d0bc595e7ac740533e0903f\") " pod="kube-system/kube-apiserver-srv-avwyp.gb1.brightbox.com" Mar 14 00:50:49.941422 kubelet[2263]: I0314 00:50:49.941260 2263 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d394621b7b3113d0e913ba576a405934-kubeconfig\") pod \"kube-controller-manager-srv-avwyp.gb1.brightbox.com\" (UID: \"d394621b7b3113d0e913ba576a405934\") " 
pod="kube-system/kube-controller-manager-srv-avwyp.gb1.brightbox.com" Mar 14 00:50:49.941422 kubelet[2263]: I0314 00:50:49.941281 2263 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d394621b7b3113d0e913ba576a405934-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-avwyp.gb1.brightbox.com\" (UID: \"d394621b7b3113d0e913ba576a405934\") " pod="kube-system/kube-controller-manager-srv-avwyp.gb1.brightbox.com" Mar 14 00:50:49.941422 kubelet[2263]: I0314 00:50:49.941303 2263 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e5f417885d6a95f6036a82e462786f8b-kubeconfig\") pod \"kube-scheduler-srv-avwyp.gb1.brightbox.com\" (UID: \"e5f417885d6a95f6036a82e462786f8b\") " pod="kube-system/kube-scheduler-srv-avwyp.gb1.brightbox.com" Mar 14 00:50:49.941422 kubelet[2263]: E0314 00:50:49.941302 2263 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.101.86:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-avwyp.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.101.86:6443: connect: connection refused" interval="400ms" Mar 14 00:50:49.941697 kubelet[2263]: I0314 00:50:49.941321 2263 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d394621b7b3113d0e913ba576a405934-ca-certs\") pod \"kube-controller-manager-srv-avwyp.gb1.brightbox.com\" (UID: \"d394621b7b3113d0e913ba576a405934\") " pod="kube-system/kube-controller-manager-srv-avwyp.gb1.brightbox.com" Mar 14 00:50:49.941697 kubelet[2263]: I0314 00:50:49.941372 2263 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/d394621b7b3113d0e913ba576a405934-flexvolume-dir\") pod \"kube-controller-manager-srv-avwyp.gb1.brightbox.com\" (UID: \"d394621b7b3113d0e913ba576a405934\") " pod="kube-system/kube-controller-manager-srv-avwyp.gb1.brightbox.com" Mar 14 00:50:49.941697 kubelet[2263]: I0314 00:50:49.941391 2263 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d394621b7b3113d0e913ba576a405934-k8s-certs\") pod \"kube-controller-manager-srv-avwyp.gb1.brightbox.com\" (UID: \"d394621b7b3113d0e913ba576a405934\") " pod="kube-system/kube-controller-manager-srv-avwyp.gb1.brightbox.com" Mar 14 00:50:49.941697 kubelet[2263]: I0314 00:50:49.941413 2263 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b5f277d80d0bc595e7ac740533e0903f-ca-certs\") pod \"kube-apiserver-srv-avwyp.gb1.brightbox.com\" (UID: \"b5f277d80d0bc595e7ac740533e0903f\") " pod="kube-system/kube-apiserver-srv-avwyp.gb1.brightbox.com" Mar 14 00:50:49.941697 kubelet[2263]: I0314 00:50:49.941432 2263 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b5f277d80d0bc595e7ac740533e0903f-k8s-certs\") pod \"kube-apiserver-srv-avwyp.gb1.brightbox.com\" (UID: \"b5f277d80d0bc595e7ac740533e0903f\") " pod="kube-system/kube-apiserver-srv-avwyp.gb1.brightbox.com" Mar 14 00:50:50.135463 kubelet[2263]: I0314 00:50:50.134853 2263 kubelet_node_status.go:75] "Attempting to register node" node="srv-avwyp.gb1.brightbox.com" Mar 14 00:50:50.135738 kubelet[2263]: E0314 00:50:50.135601 2263 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.244.101.86:6443/api/v1/nodes\": dial tcp 10.244.101.86:6443: connect: connection refused" node="srv-avwyp.gb1.brightbox.com" Mar 14 00:50:50.198772 
containerd[1512]: time="2026-03-14T00:50:50.198615776Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-avwyp.gb1.brightbox.com,Uid:b5f277d80d0bc595e7ac740533e0903f,Namespace:kube-system,Attempt:0,}" Mar 14 00:50:50.205220 containerd[1512]: time="2026-03-14T00:50:50.205096748Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-avwyp.gb1.brightbox.com,Uid:d394621b7b3113d0e913ba576a405934,Namespace:kube-system,Attempt:0,}" Mar 14 00:50:50.209508 containerd[1512]: time="2026-03-14T00:50:50.209277338Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-avwyp.gb1.brightbox.com,Uid:e5f417885d6a95f6036a82e462786f8b,Namespace:kube-system,Attempt:0,}" Mar 14 00:50:50.342713 kubelet[2263]: E0314 00:50:50.342656 2263 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.101.86:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-avwyp.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.101.86:6443: connect: connection refused" interval="800ms" Mar 14 00:50:50.539067 kubelet[2263]: I0314 00:50:50.538897 2263 kubelet_node_status.go:75] "Attempting to register node" node="srv-avwyp.gb1.brightbox.com" Mar 14 00:50:50.539930 kubelet[2263]: E0314 00:50:50.539275 2263 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.244.101.86:6443/api/v1/nodes\": dial tcp 10.244.101.86:6443: connect: connection refused" node="srv-avwyp.gb1.brightbox.com" Mar 14 00:50:50.706911 kubelet[2263]: E0314 00:50:50.706832 2263 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.244.101.86:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.244.101.86:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 14 00:50:50.785950 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount4221716983.mount: Deactivated successfully. Mar 14 00:50:50.793554 containerd[1512]: time="2026-03-14T00:50:50.791062257Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 14 00:50:50.793554 containerd[1512]: time="2026-03-14T00:50:50.791710132Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 14 00:50:50.793554 containerd[1512]: time="2026-03-14T00:50:50.792293677Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Mar 14 00:50:50.793554 containerd[1512]: time="2026-03-14T00:50:50.792659047Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 14 00:50:50.793554 containerd[1512]: time="2026-03-14T00:50:50.793198797Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 14 00:50:50.794558 containerd[1512]: time="2026-03-14T00:50:50.794531043Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 14 00:50:50.798170 containerd[1512]: time="2026-03-14T00:50:50.798138661Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 14 00:50:50.799004 containerd[1512]: time="2026-03-14T00:50:50.798977309Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag 
\"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 593.687512ms" Mar 14 00:50:50.801158 containerd[1512]: time="2026-03-14T00:50:50.801107053Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 14 00:50:50.802566 containerd[1512]: time="2026-03-14T00:50:50.802528183Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 593.176084ms" Mar 14 00:50:50.804489 containerd[1512]: time="2026-03-14T00:50:50.804458911Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 605.565567ms" Mar 14 00:50:50.925726 kubelet[2263]: E0314 00:50:50.925672 2263 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.244.101.86:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-avwyp.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.244.101.86:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 14 00:50:50.974233 containerd[1512]: time="2026-03-14T00:50:50.974115202Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:50:50.974408 containerd[1512]: time="2026-03-14T00:50:50.974193556Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:50:50.974408 containerd[1512]: time="2026-03-14T00:50:50.974210440Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:50:50.974408 containerd[1512]: time="2026-03-14T00:50:50.974318297Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:50:50.977626 containerd[1512]: time="2026-03-14T00:50:50.977419963Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:50:50.977626 containerd[1512]: time="2026-03-14T00:50:50.977472040Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:50:50.977626 containerd[1512]: time="2026-03-14T00:50:50.977488092Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:50:50.977626 containerd[1512]: time="2026-03-14T00:50:50.977559800Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:50:50.978994 kubelet[2263]: E0314 00:50:50.978958 2263 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.244.101.86:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.244.101.86:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 14 00:50:50.987626 containerd[1512]: time="2026-03-14T00:50:50.987247037Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:50:50.987626 containerd[1512]: time="2026-03-14T00:50:50.987335549Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:50:50.987626 containerd[1512]: time="2026-03-14T00:50:50.987361191Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:50:50.987626 containerd[1512]: time="2026-03-14T00:50:50.987529735Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:50:51.012393 systemd[1]: Started cri-containerd-b26e3f96b3a37369ece6f6fe0b21e15eef1ea6b0caf4e5002c6ace25524aadf7.scope - libcontainer container b26e3f96b3a37369ece6f6fe0b21e15eef1ea6b0caf4e5002c6ace25524aadf7. Mar 14 00:50:51.017989 systemd[1]: Started cri-containerd-963943d252504ca6f84605ffe755234686c3c5beb96934a5ac8928639548d7c0.scope - libcontainer container 963943d252504ca6f84605ffe755234686c3c5beb96934a5ac8928639548d7c0. Mar 14 00:50:51.035435 systemd[1]: Started cri-containerd-d5417cee5355a1623cddfde7bcc0fe24ed5378b57fbe57bea20f69a7ff872e08.scope - libcontainer container d5417cee5355a1623cddfde7bcc0fe24ed5378b57fbe57bea20f69a7ff872e08. 
Mar 14 00:50:51.092879 containerd[1512]: time="2026-03-14T00:50:51.092564734Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-avwyp.gb1.brightbox.com,Uid:b5f277d80d0bc595e7ac740533e0903f,Namespace:kube-system,Attempt:0,} returns sandbox id \"963943d252504ca6f84605ffe755234686c3c5beb96934a5ac8928639548d7c0\"" Mar 14 00:50:51.112280 containerd[1512]: time="2026-03-14T00:50:51.112120975Z" level=info msg="CreateContainer within sandbox \"963943d252504ca6f84605ffe755234686c3c5beb96934a5ac8928639548d7c0\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 14 00:50:51.117446 containerd[1512]: time="2026-03-14T00:50:51.117305960Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-avwyp.gb1.brightbox.com,Uid:d394621b7b3113d0e913ba576a405934,Namespace:kube-system,Attempt:0,} returns sandbox id \"b26e3f96b3a37369ece6f6fe0b21e15eef1ea6b0caf4e5002c6ace25524aadf7\"" Mar 14 00:50:51.122939 containerd[1512]: time="2026-03-14T00:50:51.122727974Z" level=info msg="CreateContainer within sandbox \"b26e3f96b3a37369ece6f6fe0b21e15eef1ea6b0caf4e5002c6ace25524aadf7\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 14 00:50:51.127122 containerd[1512]: time="2026-03-14T00:50:51.126770303Z" level=info msg="CreateContainer within sandbox \"963943d252504ca6f84605ffe755234686c3c5beb96934a5ac8928639548d7c0\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"2bca4e3ef3b2f6a99e544a6251f41101b22ef6f1b97c2bcc350995b876fc927e\"" Mar 14 00:50:51.127723 containerd[1512]: time="2026-03-14T00:50:51.127696159Z" level=info msg="StartContainer for \"2bca4e3ef3b2f6a99e544a6251f41101b22ef6f1b97c2bcc350995b876fc927e\"" Mar 14 00:50:51.134165 containerd[1512]: time="2026-03-14T00:50:51.134122049Z" level=info msg="CreateContainer within sandbox \"b26e3f96b3a37369ece6f6fe0b21e15eef1ea6b0caf4e5002c6ace25524aadf7\" for 
&ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"d4dd396e6b2aeb87a27cbb7d7e869bb8ef6095377b01daf53230b95c174c0ecf\"" Mar 14 00:50:51.135035 containerd[1512]: time="2026-03-14T00:50:51.134997909Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-avwyp.gb1.brightbox.com,Uid:e5f417885d6a95f6036a82e462786f8b,Namespace:kube-system,Attempt:0,} returns sandbox id \"d5417cee5355a1623cddfde7bcc0fe24ed5378b57fbe57bea20f69a7ff872e08\"" Mar 14 00:50:51.135289 containerd[1512]: time="2026-03-14T00:50:51.135225370Z" level=info msg="StartContainer for \"d4dd396e6b2aeb87a27cbb7d7e869bb8ef6095377b01daf53230b95c174c0ecf\"" Mar 14 00:50:51.135593 kubelet[2263]: E0314 00:50:51.135561 2263 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.244.101.86:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.244.101.86:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 14 00:50:51.139203 containerd[1512]: time="2026-03-14T00:50:51.138608006Z" level=info msg="CreateContainer within sandbox \"d5417cee5355a1623cddfde7bcc0fe24ed5378b57fbe57bea20f69a7ff872e08\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 14 00:50:51.149656 kubelet[2263]: E0314 00:50:51.148993 2263 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.101.86:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-avwyp.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.101.86:6443: connect: connection refused" interval="1.6s" Mar 14 00:50:51.164885 containerd[1512]: time="2026-03-14T00:50:51.164826087Z" level=info msg="CreateContainer within sandbox \"d5417cee5355a1623cddfde7bcc0fe24ed5378b57fbe57bea20f69a7ff872e08\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id 
\"e43ac9fb4b60688308bf55c94b23ceaa41d20a429e6cee361b6f5d056c24320c\"" Mar 14 00:50:51.165983 containerd[1512]: time="2026-03-14T00:50:51.165777579Z" level=info msg="StartContainer for \"e43ac9fb4b60688308bf55c94b23ceaa41d20a429e6cee361b6f5d056c24320c\"" Mar 14 00:50:51.192406 systemd[1]: Started cri-containerd-2bca4e3ef3b2f6a99e544a6251f41101b22ef6f1b97c2bcc350995b876fc927e.scope - libcontainer container 2bca4e3ef3b2f6a99e544a6251f41101b22ef6f1b97c2bcc350995b876fc927e. Mar 14 00:50:51.194566 systemd[1]: Started cri-containerd-d4dd396e6b2aeb87a27cbb7d7e869bb8ef6095377b01daf53230b95c174c0ecf.scope - libcontainer container d4dd396e6b2aeb87a27cbb7d7e869bb8ef6095377b01daf53230b95c174c0ecf. Mar 14 00:50:51.222471 systemd[1]: Started cri-containerd-e43ac9fb4b60688308bf55c94b23ceaa41d20a429e6cee361b6f5d056c24320c.scope - libcontainer container e43ac9fb4b60688308bf55c94b23ceaa41d20a429e6cee361b6f5d056c24320c. Mar 14 00:50:51.286781 containerd[1512]: time="2026-03-14T00:50:51.286227786Z" level=info msg="StartContainer for \"2bca4e3ef3b2f6a99e544a6251f41101b22ef6f1b97c2bcc350995b876fc927e\" returns successfully" Mar 14 00:50:51.301346 containerd[1512]: time="2026-03-14T00:50:51.301293973Z" level=info msg="StartContainer for \"d4dd396e6b2aeb87a27cbb7d7e869bb8ef6095377b01daf53230b95c174c0ecf\" returns successfully" Mar 14 00:50:51.343394 kubelet[2263]: I0314 00:50:51.342013 2263 kubelet_node_status.go:75] "Attempting to register node" node="srv-avwyp.gb1.brightbox.com" Mar 14 00:50:51.343394 kubelet[2263]: E0314 00:50:51.342355 2263 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.244.101.86:6443/api/v1/nodes\": dial tcp 10.244.101.86:6443: connect: connection refused" node="srv-avwyp.gb1.brightbox.com" Mar 14 00:50:51.347732 containerd[1512]: time="2026-03-14T00:50:51.347684745Z" level=info msg="StartContainer for \"e43ac9fb4b60688308bf55c94b23ceaa41d20a429e6cee361b6f5d056c24320c\" returns successfully" Mar 14 00:50:51.794875 
kubelet[2263]: E0314 00:50:51.794843 2263 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-avwyp.gb1.brightbox.com\" not found" node="srv-avwyp.gb1.brightbox.com"
Mar 14 00:50:51.807042 kubelet[2263]: E0314 00:50:51.807008 2263 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-avwyp.gb1.brightbox.com\" not found" node="srv-avwyp.gb1.brightbox.com"
Mar 14 00:50:51.812738 kubelet[2263]: E0314 00:50:51.812718 2263 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-avwyp.gb1.brightbox.com\" not found" node="srv-avwyp.gb1.brightbox.com"
Mar 14 00:50:52.814902 kubelet[2263]: E0314 00:50:52.814862 2263 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-avwyp.gb1.brightbox.com\" not found" node="srv-avwyp.gb1.brightbox.com"
Mar 14 00:50:52.817202 kubelet[2263]: E0314 00:50:52.816422 2263 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-avwyp.gb1.brightbox.com\" not found" node="srv-avwyp.gb1.brightbox.com"
Mar 14 00:50:52.946089 kubelet[2263]: I0314 00:50:52.946040 2263 kubelet_node_status.go:75] "Attempting to register node" node="srv-avwyp.gb1.brightbox.com"
Mar 14 00:50:53.099691 kubelet[2263]: E0314 00:50:53.099393 2263 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-avwyp.gb1.brightbox.com\" not found" node="srv-avwyp.gb1.brightbox.com"
Mar 14 00:50:53.420245 kubelet[2263]: E0314 00:50:53.420103 2263 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"srv-avwyp.gb1.brightbox.com\" not found" node="srv-avwyp.gb1.brightbox.com"
Mar 14 00:50:53.504584 kubelet[2263]: I0314 00:50:53.504252 2263 kubelet_node_status.go:78] "Successfully registered node" node="srv-avwyp.gb1.brightbox.com"
Mar 14 00:50:53.534475 kubelet[2263]: I0314 00:50:53.534436 2263 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-avwyp.gb1.brightbox.com"
Mar 14 00:50:53.597371 kubelet[2263]: E0314 00:50:53.597175 2263 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-avwyp.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-srv-avwyp.gb1.brightbox.com"
Mar 14 00:50:53.597371 kubelet[2263]: I0314 00:50:53.597246 2263 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-avwyp.gb1.brightbox.com"
Mar 14 00:50:53.605219 kubelet[2263]: E0314 00:50:53.603202 2263 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-avwyp.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-srv-avwyp.gb1.brightbox.com"
Mar 14 00:50:53.605219 kubelet[2263]: I0314 00:50:53.603245 2263 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-avwyp.gb1.brightbox.com"
Mar 14 00:50:53.610545 kubelet[2263]: E0314 00:50:53.610499 2263 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-srv-avwyp.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-srv-avwyp.gb1.brightbox.com"
Mar 14 00:50:53.713811 kubelet[2263]: I0314 00:50:53.713429 2263 apiserver.go:52] "Watching apiserver"
Mar 14 00:50:53.740468 kubelet[2263]: I0314 00:50:53.740354 2263 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Mar 14 00:50:54.697664 kubelet[2263]: I0314 00:50:54.697617 2263 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-avwyp.gb1.brightbox.com"
Mar 14 00:50:54.705249 kubelet[2263]: I0314 00:50:54.705207 2263 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Mar 14 00:50:55.847795 systemd[1]: Reloading requested from client PID 2555 ('systemctl') (unit session-9.scope)...
Mar 14 00:50:55.847827 systemd[1]: Reloading...
Mar 14 00:50:55.964235 zram_generator::config[2594]: No configuration found.
Mar 14 00:50:56.135169 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 14 00:50:56.236649 systemd[1]: Reloading finished in 388 ms.
Mar 14 00:50:56.298053 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 14 00:50:56.314926 systemd[1]: kubelet.service: Deactivated successfully.
Mar 14 00:50:56.315789 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 14 00:50:56.323643 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 14 00:50:56.507366 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 14 00:50:56.516585 (kubelet)[2658]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 14 00:50:56.593827 kubelet[2658]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Mar 14 00:50:56.594225 kubelet[2658]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 14 00:50:56.594225 kubelet[2658]: I0314 00:50:56.594026 2658 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 14 00:50:56.605969 kubelet[2658]: I0314 00:50:56.605930 2658 server.go:529] "Kubelet version" kubeletVersion="v1.34.4"
Mar 14 00:50:56.605969 kubelet[2658]: I0314 00:50:56.605958 2658 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 14 00:50:56.608525 kubelet[2658]: I0314 00:50:56.608492 2658 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Mar 14 00:50:56.608525 kubelet[2658]: I0314 00:50:56.608520 2658 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Mar 14 00:50:56.608818 kubelet[2658]: I0314 00:50:56.608797 2658 server.go:956] "Client rotation is on, will bootstrap in background"
Mar 14 00:50:56.610282 kubelet[2658]: I0314 00:50:56.610262 2658 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Mar 14 00:50:56.614803 kubelet[2658]: I0314 00:50:56.614748 2658 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 14 00:50:56.619489 kubelet[2658]: E0314 00:50:56.619449 2658 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Mar 14 00:50:56.619607 kubelet[2658]: I0314 00:50:56.619507 2658 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config."
Mar 14 00:50:56.622998 kubelet[2658]: I0314 00:50:56.622321 2658 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Mar 14 00:50:56.622998 kubelet[2658]: I0314 00:50:56.622603 2658 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 14 00:50:56.622998 kubelet[2658]: I0314 00:50:56.622626 2658 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-avwyp.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 14 00:50:56.622998 kubelet[2658]: I0314 00:50:56.622811 2658 topology_manager.go:138] "Creating topology manager with none policy"
Mar 14 00:50:56.623350 kubelet[2658]: I0314 00:50:56.622822 2658 container_manager_linux.go:306] "Creating device plugin manager"
Mar 14 00:50:56.623350 kubelet[2658]: I0314 00:50:56.622856 2658 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Mar 14 00:50:56.623598 kubelet[2658]: I0314 00:50:56.623585 2658 state_mem.go:36] "Initialized new in-memory state store"
Mar 14 00:50:56.623847 kubelet[2658]: I0314 00:50:56.623824 2658 kubelet.go:475] "Attempting to sync node with API server"
Mar 14 00:50:56.623942 kubelet[2658]: I0314 00:50:56.623932 2658 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 14 00:50:56.624017 kubelet[2658]: I0314 00:50:56.624010 2658 kubelet.go:387] "Adding apiserver pod source"
Mar 14 00:50:56.624077 kubelet[2658]: I0314 00:50:56.624071 2658 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 14 00:50:56.639556 kubelet[2658]: I0314 00:50:56.639499 2658 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Mar 14 00:50:56.641108 kubelet[2658]: I0314 00:50:56.641058 2658 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Mar 14 00:50:56.641351 kubelet[2658]: I0314 00:50:56.641156 2658 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Mar 14 00:50:56.652786 kubelet[2658]: I0314 00:50:56.652107 2658 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 14 00:50:56.652786 kubelet[2658]: I0314 00:50:56.652203 2658 server.go:1262] "Started kubelet"
Mar 14 00:50:56.652786 kubelet[2658]: I0314 00:50:56.652270 2658 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Mar 14 00:50:56.653719 kubelet[2658]: I0314 00:50:56.653444 2658 server.go:310] "Adding debug handlers to kubelet server"
Mar 14 00:50:56.655745 kubelet[2658]: I0314 00:50:56.655628 2658 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 14 00:50:56.660543 kubelet[2658]: I0314 00:50:56.660520 2658 volume_manager.go:313] "Starting Kubelet Volume Manager"
Mar 14 00:50:56.661818 kubelet[2658]: I0314 00:50:56.661396 2658 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Mar 14 00:50:56.661818 kubelet[2658]: I0314 00:50:56.661697 2658 reconciler.go:29] "Reconciler: start to sync state"
Mar 14 00:50:56.663643 kubelet[2658]: I0314 00:50:56.663578 2658 factory.go:223] Registration of the systemd container factory successfully
Mar 14 00:50:56.664179 kubelet[2658]: I0314 00:50:56.663370 2658 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 14 00:50:56.664179 kubelet[2658]: I0314 00:50:56.663802 2658 server_v1.go:49] "podresources" method="list" useActivePods=true
Mar 14 00:50:56.664179 kubelet[2658]: I0314 00:50:56.663812 2658 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 14 00:50:56.666205 kubelet[2658]: I0314 00:50:56.664711 2658 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 14 00:50:56.674944 kubelet[2658]: I0314 00:50:56.674694 2658 factory.go:223] Registration of the containerd container factory successfully
Mar 14 00:50:56.680546 kubelet[2658]: I0314 00:50:56.680513 2658 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Mar 14 00:50:56.681581 kubelet[2658]: I0314 00:50:56.681551 2658 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Mar 14 00:50:56.681581 kubelet[2658]: I0314 00:50:56.681582 2658 status_manager.go:244] "Starting to sync pod status with apiserver"
Mar 14 00:50:56.681722 kubelet[2658]: I0314 00:50:56.681620 2658 kubelet.go:2428] "Starting kubelet main sync loop"
Mar 14 00:50:56.681722 kubelet[2658]: E0314 00:50:56.681665 2658 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 14 00:50:56.733356 kubelet[2658]: I0314 00:50:56.732378 2658 cpu_manager.go:221] "Starting CPU manager" policy="none"
Mar 14 00:50:56.733356 kubelet[2658]: I0314 00:50:56.732398 2658 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Mar 14 00:50:56.733356 kubelet[2658]: I0314 00:50:56.732417 2658 state_mem.go:36] "Initialized new in-memory state store"
Mar 14 00:50:56.733356 kubelet[2658]: I0314 00:50:56.732544 2658 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Mar 14 00:50:56.733356 kubelet[2658]: I0314 00:50:56.732553 2658 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Mar 14 00:50:56.733356 kubelet[2658]: I0314 00:50:56.732575 2658 policy_none.go:49] "None policy: Start"
Mar 14 00:50:56.733356 kubelet[2658]: I0314 00:50:56.732591 2658 memory_manager.go:187] "Starting memorymanager" policy="None"
Mar 14 00:50:56.733356 kubelet[2658]: I0314 00:50:56.732606 2658 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Mar 14 00:50:56.733356 kubelet[2658]: I0314 00:50:56.732718 2658 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint"
Mar 14 00:50:56.733356 kubelet[2658]: I0314 00:50:56.732737 2658 policy_none.go:47] "Start"
Mar 14 00:50:56.739843 kubelet[2658]: E0314 00:50:56.739820 2658 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Mar 14 00:50:56.741744 kubelet[2658]: I0314 00:50:56.741514 2658 eviction_manager.go:189] "Eviction manager: starting control loop"
Mar 14 00:50:56.742321 kubelet[2658]: I0314 00:50:56.742231 2658 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 14 00:50:56.743237 kubelet[2658]: I0314 00:50:56.743025 2658 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 14 00:50:56.747017 kubelet[2658]: E0314 00:50:56.746997 2658 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Mar 14 00:50:56.783946 kubelet[2658]: I0314 00:50:56.783620 2658 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-avwyp.gb1.brightbox.com"
Mar 14 00:50:56.784648 kubelet[2658]: I0314 00:50:56.784408 2658 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-avwyp.gb1.brightbox.com"
Mar 14 00:50:56.787014 kubelet[2658]: I0314 00:50:56.785819 2658 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-avwyp.gb1.brightbox.com"
Mar 14 00:50:56.797425 kubelet[2658]: I0314 00:50:56.797387 2658 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Mar 14 00:50:56.798322 kubelet[2658]: I0314 00:50:56.798005 2658 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Mar 14 00:50:56.801766 kubelet[2658]: I0314 00:50:56.801717 2658 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Mar 14 00:50:56.802504 kubelet[2658]: E0314 00:50:56.802027 2658 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-avwyp.gb1.brightbox.com\" already exists" pod="kube-system/kube-apiserver-srv-avwyp.gb1.brightbox.com"
Mar 14 00:50:56.854611 sudo[2697]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Mar 14 00:50:56.855040 sudo[2697]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Mar 14 00:50:56.864851 kubelet[2658]: I0314 00:50:56.864726 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b5f277d80d0bc595e7ac740533e0903f-usr-share-ca-certificates\") pod \"kube-apiserver-srv-avwyp.gb1.brightbox.com\" (UID: \"b5f277d80d0bc595e7ac740533e0903f\") " pod="kube-system/kube-apiserver-srv-avwyp.gb1.brightbox.com"
Mar 14 00:50:56.864851 kubelet[2658]: I0314 00:50:56.864830 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d394621b7b3113d0e913ba576a405934-ca-certs\") pod \"kube-controller-manager-srv-avwyp.gb1.brightbox.com\" (UID: \"d394621b7b3113d0e913ba576a405934\") " pod="kube-system/kube-controller-manager-srv-avwyp.gb1.brightbox.com"
Mar 14 00:50:56.865244 kubelet[2658]: I0314 00:50:56.864875 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d394621b7b3113d0e913ba576a405934-flexvolume-dir\") pod \"kube-controller-manager-srv-avwyp.gb1.brightbox.com\" (UID: \"d394621b7b3113d0e913ba576a405934\") " pod="kube-system/kube-controller-manager-srv-avwyp.gb1.brightbox.com"
Mar 14 00:50:56.865244 kubelet[2658]: I0314 00:50:56.864897 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d394621b7b3113d0e913ba576a405934-k8s-certs\") pod \"kube-controller-manager-srv-avwyp.gb1.brightbox.com\" (UID: \"d394621b7b3113d0e913ba576a405934\") " pod="kube-system/kube-controller-manager-srv-avwyp.gb1.brightbox.com"
Mar 14 00:50:56.865244 kubelet[2658]: I0314 00:50:56.864926 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d394621b7b3113d0e913ba576a405934-kubeconfig\") pod \"kube-controller-manager-srv-avwyp.gb1.brightbox.com\" (UID: \"d394621b7b3113d0e913ba576a405934\") " pod="kube-system/kube-controller-manager-srv-avwyp.gb1.brightbox.com"
Mar 14 00:50:56.865244 kubelet[2658]: I0314 00:50:56.864951 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e5f417885d6a95f6036a82e462786f8b-kubeconfig\") pod \"kube-scheduler-srv-avwyp.gb1.brightbox.com\" (UID: \"e5f417885d6a95f6036a82e462786f8b\") " pod="kube-system/kube-scheduler-srv-avwyp.gb1.brightbox.com"
Mar 14 00:50:56.865244 kubelet[2658]: I0314 00:50:56.864989 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b5f277d80d0bc595e7ac740533e0903f-k8s-certs\") pod \"kube-apiserver-srv-avwyp.gb1.brightbox.com\" (UID: \"b5f277d80d0bc595e7ac740533e0903f\") " pod="kube-system/kube-apiserver-srv-avwyp.gb1.brightbox.com"
Mar 14 00:50:56.865687 kubelet[2658]: I0314 00:50:56.865013 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d394621b7b3113d0e913ba576a405934-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-avwyp.gb1.brightbox.com\" (UID: \"d394621b7b3113d0e913ba576a405934\") " pod="kube-system/kube-controller-manager-srv-avwyp.gb1.brightbox.com"
Mar 14 00:50:56.865687 kubelet[2658]: I0314 00:50:56.865038 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b5f277d80d0bc595e7ac740533e0903f-ca-certs\") pod \"kube-apiserver-srv-avwyp.gb1.brightbox.com\" (UID: \"b5f277d80d0bc595e7ac740533e0903f\") " pod="kube-system/kube-apiserver-srv-avwyp.gb1.brightbox.com"
Mar 14 00:50:56.873816 kubelet[2658]: I0314 00:50:56.872302 2658 kubelet_node_status.go:75] "Attempting to register node" node="srv-avwyp.gb1.brightbox.com"
Mar 14 00:50:56.882203 kubelet[2658]: I0314 00:50:56.882162 2658 kubelet_node_status.go:124] "Node was previously registered" node="srv-avwyp.gb1.brightbox.com"
Mar 14 00:50:56.882308 kubelet[2658]: I0314 00:50:56.882263 2658 kubelet_node_status.go:78] "Successfully registered node" node="srv-avwyp.gb1.brightbox.com"
Mar 14 00:50:57.591382 sudo[2697]: pam_unix(sudo:session): session closed for user root
Mar 14 00:50:57.627212 kubelet[2658]: I0314 00:50:57.626508 2658 apiserver.go:52] "Watching apiserver"
Mar 14 00:50:57.661705 kubelet[2658]: I0314 00:50:57.661635 2658 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Mar 14 00:50:57.693833 kubelet[2658]: I0314 00:50:57.693731 2658 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-srv-avwyp.gb1.brightbox.com" podStartSLOduration=3.693705306 podStartE2EDuration="3.693705306s" podCreationTimestamp="2026-03-14 00:50:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:50:57.68223948 +0000 UTC m=+1.153484877" watchObservedRunningTime="2026-03-14 00:50:57.693705306 +0000 UTC m=+1.164950678"
Mar 14 00:50:57.694108 kubelet[2658]: I0314 00:50:57.694075 2658 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-srv-avwyp.gb1.brightbox.com" podStartSLOduration=1.694065131 podStartE2EDuration="1.694065131s" podCreationTimestamp="2026-03-14 00:50:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:50:57.693895431 +0000 UTC m=+1.165140804" watchObservedRunningTime="2026-03-14 00:50:57.694065131 +0000 UTC m=+1.165310518"
Mar 14 00:50:57.714588 kubelet[2658]: I0314 00:50:57.714550 2658 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-avwyp.gb1.brightbox.com"
Mar 14 00:50:57.716451 kubelet[2658]: I0314 00:50:57.716417 2658 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-avwyp.gb1.brightbox.com"
Mar 14 00:50:57.730022 kubelet[2658]: I0314 00:50:57.729515 2658 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Mar 14 00:50:57.730022 kubelet[2658]: E0314 00:50:57.729574 2658 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-avwyp.gb1.brightbox.com\" already exists" pod="kube-system/kube-scheduler-srv-avwyp.gb1.brightbox.com"
Mar 14 00:50:57.730333 kubelet[2658]: I0314 00:50:57.730319 2658 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Mar 14 00:50:57.730380 kubelet[2658]: E0314 00:50:57.730363 2658 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-avwyp.gb1.brightbox.com\" already exists" pod="kube-system/kube-apiserver-srv-avwyp.gb1.brightbox.com"
Mar 14 00:50:57.740656 kubelet[2658]: I0314 00:50:57.740602 2658 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-srv-avwyp.gb1.brightbox.com" podStartSLOduration=1.7405766329999999 podStartE2EDuration="1.740576633s" podCreationTimestamp="2026-03-14 00:50:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:50:57.70541658 +0000 UTC m=+1.176661976" watchObservedRunningTime="2026-03-14 00:50:57.740576633 +0000 UTC m=+1.211822026"
Mar 14 00:50:58.160452 update_engine[1492]: I20260314 00:50:58.160266 1492 update_attempter.cc:509] Updating boot flags...
Mar 14 00:50:58.223276 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2715)
Mar 14 00:50:58.279394 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2719)
Mar 14 00:50:59.130074 sudo[1748]: pam_unix(sudo:session): session closed for user root
Mar 14 00:50:59.224901 sshd[1745]: pam_unix(sshd:session): session closed for user core
Mar 14 00:50:59.233603 systemd[1]: sshd@6-10.244.101.86:22-20.161.92.111:41878.service: Deactivated successfully.
Mar 14 00:50:59.238398 systemd[1]: session-9.scope: Deactivated successfully.
Mar 14 00:50:59.238783 systemd[1]: session-9.scope: Consumed 5.857s CPU time, 153.5M memory peak, 0B memory swap peak.
Mar 14 00:50:59.241441 systemd-logind[1491]: Session 9 logged out. Waiting for processes to exit.
Mar 14 00:50:59.244894 systemd-logind[1491]: Removed session 9.
Mar 14 00:51:01.264496 kubelet[2658]: I0314 00:51:01.264449 2658 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Mar 14 00:51:01.265457 containerd[1512]: time="2026-03-14T00:51:01.265285708Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Mar 14 00:51:01.266338 kubelet[2658]: I0314 00:51:01.266068 2658 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Mar 14 00:51:02.407828 systemd[1]: Created slice kubepods-besteffort-pod7fd96e9a_53d5_4df9_840b_05c9cc4a8695.slice - libcontainer container kubepods-besteffort-pod7fd96e9a_53d5_4df9_840b_05c9cc4a8695.slice.
Mar 14 00:51:02.425981 systemd[1]: Created slice kubepods-burstable-pod20b24e53_7a9b_4af8_96bc_13b79ff21e88.slice - libcontainer container kubepods-burstable-pod20b24e53_7a9b_4af8_96bc_13b79ff21e88.slice.
Mar 14 00:51:02.502769 kubelet[2658]: I0314 00:51:02.502264 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/20b24e53-7a9b-4af8-96bc-13b79ff21e88-cilium-config-path\") pod \"cilium-pp7w5\" (UID: \"20b24e53-7a9b-4af8-96bc-13b79ff21e88\") " pod="kube-system/cilium-pp7w5"
Mar 14 00:51:02.502769 kubelet[2658]: I0314 00:51:02.502319 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/20b24e53-7a9b-4af8-96bc-13b79ff21e88-host-proc-sys-kernel\") pod \"cilium-pp7w5\" (UID: \"20b24e53-7a9b-4af8-96bc-13b79ff21e88\") " pod="kube-system/cilium-pp7w5"
Mar 14 00:51:02.502769 kubelet[2658]: I0314 00:51:02.502340 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7fd96e9a-53d5-4df9-840b-05c9cc4a8695-kube-proxy\") pod \"kube-proxy-vcf7p\" (UID: \"7fd96e9a-53d5-4df9-840b-05c9cc4a8695\") " pod="kube-system/kube-proxy-vcf7p"
Mar 14 00:51:02.502769 kubelet[2658]: I0314 00:51:02.502355 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/20b24e53-7a9b-4af8-96bc-13b79ff21e88-hostproc\") pod \"cilium-pp7w5\" (UID: \"20b24e53-7a9b-4af8-96bc-13b79ff21e88\") " pod="kube-system/cilium-pp7w5"
Mar 14 00:51:02.502769 kubelet[2658]: I0314 00:51:02.502371 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/20b24e53-7a9b-4af8-96bc-13b79ff21e88-cni-path\") pod \"cilium-pp7w5\" (UID: \"20b24e53-7a9b-4af8-96bc-13b79ff21e88\") " pod="kube-system/cilium-pp7w5"
Mar 14 00:51:02.502769 kubelet[2658]: I0314 00:51:02.502385 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/20b24e53-7a9b-4af8-96bc-13b79ff21e88-etc-cni-netd\") pod \"cilium-pp7w5\" (UID: \"20b24e53-7a9b-4af8-96bc-13b79ff21e88\") " pod="kube-system/cilium-pp7w5"
Mar 14 00:51:02.504372 kubelet[2658]: I0314 00:51:02.502400 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7fd96e9a-53d5-4df9-840b-05c9cc4a8695-xtables-lock\") pod \"kube-proxy-vcf7p\" (UID: \"7fd96e9a-53d5-4df9-840b-05c9cc4a8695\") " pod="kube-system/kube-proxy-vcf7p"
Mar 14 00:51:02.504372 kubelet[2658]: I0314 00:51:02.502417 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dz8rx\" (UniqueName: \"kubernetes.io/projected/7fd96e9a-53d5-4df9-840b-05c9cc4a8695-kube-api-access-dz8rx\") pod \"kube-proxy-vcf7p\" (UID: \"7fd96e9a-53d5-4df9-840b-05c9cc4a8695\") " pod="kube-system/kube-proxy-vcf7p"
Mar 14 00:51:02.504372 kubelet[2658]: I0314 00:51:02.502443 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/20b24e53-7a9b-4af8-96bc-13b79ff21e88-lib-modules\") pod \"cilium-pp7w5\" (UID: \"20b24e53-7a9b-4af8-96bc-13b79ff21e88\") " pod="kube-system/cilium-pp7w5"
Mar 14 00:51:02.504372 kubelet[2658]: I0314 00:51:02.502459 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/20b24e53-7a9b-4af8-96bc-13b79ff21e88-xtables-lock\") pod \"cilium-pp7w5\" (UID: \"20b24e53-7a9b-4af8-96bc-13b79ff21e88\") " pod="kube-system/cilium-pp7w5"
Mar 14 00:51:02.504372 kubelet[2658]: I0314 00:51:02.502473 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/20b24e53-7a9b-4af8-96bc-13b79ff21e88-host-proc-sys-net\") pod \"cilium-pp7w5\" (UID: \"20b24e53-7a9b-4af8-96bc-13b79ff21e88\") " pod="kube-system/cilium-pp7w5"
Mar 14 00:51:02.504544 kubelet[2658]: I0314 00:51:02.502488 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bb7wh\" (UniqueName: \"kubernetes.io/projected/20b24e53-7a9b-4af8-96bc-13b79ff21e88-kube-api-access-bb7wh\") pod \"cilium-pp7w5\" (UID: \"20b24e53-7a9b-4af8-96bc-13b79ff21e88\") " pod="kube-system/cilium-pp7w5"
Mar 14 00:51:02.504544 kubelet[2658]: I0314 00:51:02.502505 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7fd96e9a-53d5-4df9-840b-05c9cc4a8695-lib-modules\") pod \"kube-proxy-vcf7p\" (UID: \"7fd96e9a-53d5-4df9-840b-05c9cc4a8695\") " pod="kube-system/kube-proxy-vcf7p"
Mar 14 00:51:02.504544 kubelet[2658]: I0314 00:51:02.502518 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/20b24e53-7a9b-4af8-96bc-13b79ff21e88-clustermesh-secrets\") pod \"cilium-pp7w5\" (UID: \"20b24e53-7a9b-4af8-96bc-13b79ff21e88\") " pod="kube-system/cilium-pp7w5"
Mar 14 00:51:02.504544 kubelet[2658]: I0314 00:51:02.502532 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/20b24e53-7a9b-4af8-96bc-13b79ff21e88-hubble-tls\") pod \"cilium-pp7w5\" (UID: \"20b24e53-7a9b-4af8-96bc-13b79ff21e88\") " pod="kube-system/cilium-pp7w5"
Mar 14 00:51:02.504544 kubelet[2658]: I0314 00:51:02.502547 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/20b24e53-7a9b-4af8-96bc-13b79ff21e88-cilium-run\") pod \"cilium-pp7w5\" (UID: \"20b24e53-7a9b-4af8-96bc-13b79ff21e88\") " pod="kube-system/cilium-pp7w5"
Mar 14 00:51:02.504544 kubelet[2658]: I0314 00:51:02.502562 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/20b24e53-7a9b-4af8-96bc-13b79ff21e88-bpf-maps\") pod \"cilium-pp7w5\" (UID: \"20b24e53-7a9b-4af8-96bc-13b79ff21e88\") " pod="kube-system/cilium-pp7w5"
Mar 14 00:51:02.504720 kubelet[2658]: I0314 00:51:02.502609 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/20b24e53-7a9b-4af8-96bc-13b79ff21e88-cilium-cgroup\") pod \"cilium-pp7w5\" (UID: \"20b24e53-7a9b-4af8-96bc-13b79ff21e88\") " pod="kube-system/cilium-pp7w5"
Mar 14 00:51:02.537326 systemd[1]: Created slice kubepods-besteffort-pod0eb78056_43fe_4df1_a0e1_de68ef72e1a0.slice - libcontainer container kubepods-besteffort-pod0eb78056_43fe_4df1_a0e1_de68ef72e1a0.slice.
Mar 14 00:51:02.604243 kubelet[2658]: I0314 00:51:02.603276 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0eb78056-43fe-4df1-a0e1-de68ef72e1a0-cilium-config-path\") pod \"cilium-operator-6f9c7c5859-fwcgl\" (UID: \"0eb78056-43fe-4df1-a0e1-de68ef72e1a0\") " pod="kube-system/cilium-operator-6f9c7c5859-fwcgl"
Mar 14 00:51:02.604243 kubelet[2658]: I0314 00:51:02.603390 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwcxk\" (UniqueName: \"kubernetes.io/projected/0eb78056-43fe-4df1-a0e1-de68ef72e1a0-kube-api-access-wwcxk\") pod \"cilium-operator-6f9c7c5859-fwcgl\" (UID: \"0eb78056-43fe-4df1-a0e1-de68ef72e1a0\") " pod="kube-system/cilium-operator-6f9c7c5859-fwcgl"
Mar 14 00:51:02.721527 containerd[1512]: time="2026-03-14T00:51:02.721478689Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vcf7p,Uid:7fd96e9a-53d5-4df9-840b-05c9cc4a8695,Namespace:kube-system,Attempt:0,}"
Mar 14 00:51:02.733513 containerd[1512]: time="2026-03-14T00:51:02.732552083Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pp7w5,Uid:20b24e53-7a9b-4af8-96bc-13b79ff21e88,Namespace:kube-system,Attempt:0,}"
Mar 14 00:51:02.766902 containerd[1512]: time="2026-03-14T00:51:02.766811741Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 14 00:51:02.767223 containerd[1512]: time="2026-03-14T00:51:02.767156238Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 14 00:51:02.767367 containerd[1512]: time="2026-03-14T00:51:02.767337967Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:51:02.767749 containerd[1512]: time="2026-03-14T00:51:02.767707546Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:51:02.774587 containerd[1512]: time="2026-03-14T00:51:02.774508537Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 14 00:51:02.774718 containerd[1512]: time="2026-03-14T00:51:02.774558198Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 14 00:51:02.774718 containerd[1512]: time="2026-03-14T00:51:02.774587838Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:51:02.774718 containerd[1512]: time="2026-03-14T00:51:02.774664484Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:51:02.796395 systemd[1]: Started cri-containerd-81bc488b1ec2d652df9e45d41ce5b738ed5f99a18989488aee1f89d3b269e42a.scope - libcontainer container 81bc488b1ec2d652df9e45d41ce5b738ed5f99a18989488aee1f89d3b269e42a.
Mar 14 00:51:02.814415 systemd[1]: Started cri-containerd-cfc25fae69bc1db0dd5dc9657401040a8df6b9f8849460ffe729edb0747a3544.scope - libcontainer container cfc25fae69bc1db0dd5dc9657401040a8df6b9f8849460ffe729edb0747a3544.
Mar 14 00:51:02.843719 containerd[1512]: time="2026-03-14T00:51:02.843445014Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vcf7p,Uid:7fd96e9a-53d5-4df9-840b-05c9cc4a8695,Namespace:kube-system,Attempt:0,} returns sandbox id \"81bc488b1ec2d652df9e45d41ce5b738ed5f99a18989488aee1f89d3b269e42a\""
Mar 14 00:51:02.843919 containerd[1512]: time="2026-03-14T00:51:02.843892591Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-fwcgl,Uid:0eb78056-43fe-4df1-a0e1-de68ef72e1a0,Namespace:kube-system,Attempt:0,}"
Mar 14 00:51:02.851317 containerd[1512]: time="2026-03-14T00:51:02.851146702Z" level=info msg="CreateContainer within sandbox \"81bc488b1ec2d652df9e45d41ce5b738ed5f99a18989488aee1f89d3b269e42a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Mar 14 00:51:02.860658 containerd[1512]: time="2026-03-14T00:51:02.860029340Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pp7w5,Uid:20b24e53-7a9b-4af8-96bc-13b79ff21e88,Namespace:kube-system,Attempt:0,} returns sandbox id \"cfc25fae69bc1db0dd5dc9657401040a8df6b9f8849460ffe729edb0747a3544\""
Mar 14 00:51:02.863362 containerd[1512]: time="2026-03-14T00:51:02.862699972Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Mar 14 00:51:02.865883 containerd[1512]: time="2026-03-14T00:51:02.865851740Z" level=info msg="CreateContainer within sandbox \"81bc488b1ec2d652df9e45d41ce5b738ed5f99a18989488aee1f89d3b269e42a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"24d66b4ab938132e8defb82379fe63c5ccd3bc206f2b3861b773b3e291dd52e1\""
Mar 14 00:51:02.868269 containerd[1512]: time="2026-03-14T00:51:02.868171637Z" level=info msg="StartContainer for \"24d66b4ab938132e8defb82379fe63c5ccd3bc206f2b3861b773b3e291dd52e1\""
Mar 14 00:51:02.890074 containerd[1512]: time="2026-03-14T00:51:02.889270346Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 14 00:51:02.890074 containerd[1512]: time="2026-03-14T00:51:02.889368585Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 14 00:51:02.890074 containerd[1512]: time="2026-03-14T00:51:02.889385827Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:51:02.890074 containerd[1512]: time="2026-03-14T00:51:02.889498568Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:51:02.906379 systemd[1]: Started cri-containerd-24d66b4ab938132e8defb82379fe63c5ccd3bc206f2b3861b773b3e291dd52e1.scope - libcontainer container 24d66b4ab938132e8defb82379fe63c5ccd3bc206f2b3861b773b3e291dd52e1.
Mar 14 00:51:02.916262 systemd[1]: Started cri-containerd-65a5baeb48766ef4dc1aa79b8048cf6a9b7a9648962526e947b1629096403612.scope - libcontainer container 65a5baeb48766ef4dc1aa79b8048cf6a9b7a9648962526e947b1629096403612.
Mar 14 00:51:02.952635 containerd[1512]: time="2026-03-14T00:51:02.952590201Z" level=info msg="StartContainer for \"24d66b4ab938132e8defb82379fe63c5ccd3bc206f2b3861b773b3e291dd52e1\" returns successfully"
Mar 14 00:51:02.981557 containerd[1512]: time="2026-03-14T00:51:02.981375075Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-fwcgl,Uid:0eb78056-43fe-4df1-a0e1-de68ef72e1a0,Namespace:kube-system,Attempt:0,} returns sandbox id \"65a5baeb48766ef4dc1aa79b8048cf6a9b7a9648962526e947b1629096403612\""
Mar 14 00:51:03.759930 kubelet[2658]: I0314 00:51:03.759820 2658 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-vcf7p" podStartSLOduration=1.759786362 podStartE2EDuration="1.759786362s" podCreationTimestamp="2026-03-14 00:51:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:51:03.759769536 +0000 UTC m=+7.231014976" watchObservedRunningTime="2026-03-14 00:51:03.759786362 +0000 UTC m=+7.231031758"
Mar 14 00:51:09.343693 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2016197303.mount: Deactivated successfully.
Mar 14 00:51:10.639525 systemd[1]: Started sshd@7-10.244.101.86:22-46.32.174.108:43842.service - OpenSSH per-connection server daemon (46.32.174.108:43842).
Mar 14 00:51:11.266070 sshd[3071]: Invalid user gittest from 46.32.174.108 port 43842
Mar 14 00:51:11.376365 sshd[3071]: Received disconnect from 46.32.174.108 port 43842:11: Bye Bye [preauth]
Mar 14 00:51:11.376365 sshd[3071]: Disconnected from invalid user gittest 46.32.174.108 port 43842 [preauth]
Mar 14 00:51:11.377511 systemd[1]: sshd@7-10.244.101.86:22-46.32.174.108:43842.service: Deactivated successfully.
Mar 14 00:51:11.638923 containerd[1512]: time="2026-03-14T00:51:11.624531542Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503"
Mar 14 00:51:11.686559 containerd[1512]: time="2026-03-14T00:51:11.686271113Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:51:11.699438 containerd[1512]: time="2026-03-14T00:51:11.699371103Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 8.833278243s"
Mar 14 00:51:11.699438 containerd[1512]: time="2026-03-14T00:51:11.699431000Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Mar 14 00:51:11.699964 containerd[1512]: time="2026-03-14T00:51:11.699939874Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:51:11.702638 containerd[1512]: time="2026-03-14T00:51:11.702600831Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Mar 14 00:51:11.706925 containerd[1512]: time="2026-03-14T00:51:11.706889757Z" level=info msg="CreateContainer within sandbox \"cfc25fae69bc1db0dd5dc9657401040a8df6b9f8849460ffe729edb0747a3544\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Mar 14 00:51:11.785375 containerd[1512]: time="2026-03-14T00:51:11.785254127Z" level=info msg="CreateContainer within sandbox \"cfc25fae69bc1db0dd5dc9657401040a8df6b9f8849460ffe729edb0747a3544\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"bddbfe2005eb4836cbbccdaf52127cf988b4d20f79bc3de1392fe4492ff67a77\""
Mar 14 00:51:11.787389 containerd[1512]: time="2026-03-14T00:51:11.787216723Z" level=info msg="StartContainer for \"bddbfe2005eb4836cbbccdaf52127cf988b4d20f79bc3de1392fe4492ff67a77\""
Mar 14 00:51:11.884463 systemd[1]: Started cri-containerd-bddbfe2005eb4836cbbccdaf52127cf988b4d20f79bc3de1392fe4492ff67a77.scope - libcontainer container bddbfe2005eb4836cbbccdaf52127cf988b4d20f79bc3de1392fe4492ff67a77.
Mar 14 00:51:11.929698 containerd[1512]: time="2026-03-14T00:51:11.928314436Z" level=info msg="StartContainer for \"bddbfe2005eb4836cbbccdaf52127cf988b4d20f79bc3de1392fe4492ff67a77\" returns successfully"
Mar 14 00:51:11.942596 systemd[1]: cri-containerd-bddbfe2005eb4836cbbccdaf52127cf988b4d20f79bc3de1392fe4492ff67a77.scope: Deactivated successfully.
Mar 14 00:51:12.033972 containerd[1512]: time="2026-03-14T00:51:12.018962308Z" level=info msg="shim disconnected" id=bddbfe2005eb4836cbbccdaf52127cf988b4d20f79bc3de1392fe4492ff67a77 namespace=k8s.io
Mar 14 00:51:12.033972 containerd[1512]: time="2026-03-14T00:51:12.033965473Z" level=warning msg="cleaning up after shim disconnected" id=bddbfe2005eb4836cbbccdaf52127cf988b4d20f79bc3de1392fe4492ff67a77 namespace=k8s.io
Mar 14 00:51:12.034296 containerd[1512]: time="2026-03-14T00:51:12.033987989Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 14 00:51:12.047158 containerd[1512]: time="2026-03-14T00:51:12.047093594Z" level=warning msg="cleanup warnings time=\"2026-03-14T00:51:12Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Mar 14 00:51:12.771959 containerd[1512]: time="2026-03-14T00:51:12.771864124Z" level=info msg="CreateContainer within sandbox \"cfc25fae69bc1db0dd5dc9657401040a8df6b9f8849460ffe729edb0747a3544\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Mar 14 00:51:12.775801 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bddbfe2005eb4836cbbccdaf52127cf988b4d20f79bc3de1392fe4492ff67a77-rootfs.mount: Deactivated successfully.
Mar 14 00:51:12.807227 containerd[1512]: time="2026-03-14T00:51:12.805124211Z" level=info msg="CreateContainer within sandbox \"cfc25fae69bc1db0dd5dc9657401040a8df6b9f8849460ffe729edb0747a3544\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"8b29ebac1b14c195d76950052028ed3989e19bca87e4fd225de773afe46a3d85\""
Mar 14 00:51:12.806276 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount339562528.mount: Deactivated successfully.
Mar 14 00:51:12.809532 containerd[1512]: time="2026-03-14T00:51:12.808112793Z" level=info msg="StartContainer for \"8b29ebac1b14c195d76950052028ed3989e19bca87e4fd225de773afe46a3d85\""
Mar 14 00:51:12.848890 systemd[1]: Started cri-containerd-8b29ebac1b14c195d76950052028ed3989e19bca87e4fd225de773afe46a3d85.scope - libcontainer container 8b29ebac1b14c195d76950052028ed3989e19bca87e4fd225de773afe46a3d85.
Mar 14 00:51:12.888143 containerd[1512]: time="2026-03-14T00:51:12.887842572Z" level=info msg="StartContainer for \"8b29ebac1b14c195d76950052028ed3989e19bca87e4fd225de773afe46a3d85\" returns successfully"
Mar 14 00:51:12.904377 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 14 00:51:12.904649 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 14 00:51:12.904747 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Mar 14 00:51:12.913561 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 14 00:51:12.913838 systemd[1]: cri-containerd-8b29ebac1b14c195d76950052028ed3989e19bca87e4fd225de773afe46a3d85.scope: Deactivated successfully.
Mar 14 00:51:12.941617 containerd[1512]: time="2026-03-14T00:51:12.941484453Z" level=info msg="shim disconnected" id=8b29ebac1b14c195d76950052028ed3989e19bca87e4fd225de773afe46a3d85 namespace=k8s.io
Mar 14 00:51:12.942138 containerd[1512]: time="2026-03-14T00:51:12.941574245Z" level=warning msg="cleaning up after shim disconnected" id=8b29ebac1b14c195d76950052028ed3989e19bca87e4fd225de773afe46a3d85 namespace=k8s.io
Mar 14 00:51:12.942138 containerd[1512]: time="2026-03-14T00:51:12.941911811Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 14 00:51:12.958719 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 14 00:51:13.782979 containerd[1512]: time="2026-03-14T00:51:13.782928617Z" level=info msg="CreateContainer within sandbox \"cfc25fae69bc1db0dd5dc9657401040a8df6b9f8849460ffe729edb0747a3544\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Mar 14 00:51:13.822985 containerd[1512]: time="2026-03-14T00:51:13.821156180Z" level=info msg="CreateContainer within sandbox \"cfc25fae69bc1db0dd5dc9657401040a8df6b9f8849460ffe729edb0747a3544\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"7ad0ff3b6ebc87468db0007a85fd396f66c726e70b2f3abeb774bc697f937b45\""
Mar 14 00:51:13.822985 containerd[1512]: time="2026-03-14T00:51:13.821787595Z" level=info msg="StartContainer for \"7ad0ff3b6ebc87468db0007a85fd396f66c726e70b2f3abeb774bc697f937b45\""
Mar 14 00:51:13.883375 systemd[1]: Started cri-containerd-7ad0ff3b6ebc87468db0007a85fd396f66c726e70b2f3abeb774bc697f937b45.scope - libcontainer container 7ad0ff3b6ebc87468db0007a85fd396f66c726e70b2f3abeb774bc697f937b45.
Mar 14 00:51:13.932025 containerd[1512]: time="2026-03-14T00:51:13.931855539Z" level=info msg="StartContainer for \"7ad0ff3b6ebc87468db0007a85fd396f66c726e70b2f3abeb774bc697f937b45\" returns successfully"
Mar 14 00:51:13.943384 systemd[1]: cri-containerd-7ad0ff3b6ebc87468db0007a85fd396f66c726e70b2f3abeb774bc697f937b45.scope: Deactivated successfully.
Mar 14 00:51:14.018561 containerd[1512]: time="2026-03-14T00:51:14.018140702Z" level=info msg="shim disconnected" id=7ad0ff3b6ebc87468db0007a85fd396f66c726e70b2f3abeb774bc697f937b45 namespace=k8s.io
Mar 14 00:51:14.018561 containerd[1512]: time="2026-03-14T00:51:14.018519194Z" level=warning msg="cleaning up after shim disconnected" id=7ad0ff3b6ebc87468db0007a85fd396f66c726e70b2f3abeb774bc697f937b45 namespace=k8s.io
Mar 14 00:51:14.018561 containerd[1512]: time="2026-03-14T00:51:14.018536324Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 14 00:51:14.081539 containerd[1512]: time="2026-03-14T00:51:14.080175406Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:51:14.081539 containerd[1512]: time="2026-03-14T00:51:14.080549865Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
Mar 14 00:51:14.081539 containerd[1512]: time="2026-03-14T00:51:14.081295086Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:51:14.082758 containerd[1512]: time="2026-03-14T00:51:14.082723126Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.380075273s"
Mar 14 00:51:14.082758 containerd[1512]: time="2026-03-14T00:51:14.082759015Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Mar 14 00:51:14.088432 containerd[1512]: time="2026-03-14T00:51:14.088395071Z" level=info msg="CreateContainer within sandbox \"65a5baeb48766ef4dc1aa79b8048cf6a9b7a9648962526e947b1629096403612\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Mar 14 00:51:14.096138 containerd[1512]: time="2026-03-14T00:51:14.096106281Z" level=info msg="CreateContainer within sandbox \"65a5baeb48766ef4dc1aa79b8048cf6a9b7a9648962526e947b1629096403612\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"2f035ba4378faf7a8e11768844f013a5c49eec99af72c2d7738d0099d92c2da1\""
Mar 14 00:51:14.097555 containerd[1512]: time="2026-03-14T00:51:14.097249449Z" level=info msg="StartContainer for \"2f035ba4378faf7a8e11768844f013a5c49eec99af72c2d7738d0099d92c2da1\""
Mar 14 00:51:14.136647 systemd[1]: Started cri-containerd-2f035ba4378faf7a8e11768844f013a5c49eec99af72c2d7738d0099d92c2da1.scope - libcontainer container 2f035ba4378faf7a8e11768844f013a5c49eec99af72c2d7738d0099d92c2da1.
Mar 14 00:51:14.175039 containerd[1512]: time="2026-03-14T00:51:14.174976826Z" level=info msg="StartContainer for \"2f035ba4378faf7a8e11768844f013a5c49eec99af72c2d7738d0099d92c2da1\" returns successfully"
Mar 14 00:51:14.780088 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7ad0ff3b6ebc87468db0007a85fd396f66c726e70b2f3abeb774bc697f937b45-rootfs.mount: Deactivated successfully.
Mar 14 00:51:14.789164 containerd[1512]: time="2026-03-14T00:51:14.789019564Z" level=info msg="CreateContainer within sandbox \"cfc25fae69bc1db0dd5dc9657401040a8df6b9f8849460ffe729edb0747a3544\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Mar 14 00:51:14.798325 containerd[1512]: time="2026-03-14T00:51:14.798252460Z" level=info msg="CreateContainer within sandbox \"cfc25fae69bc1db0dd5dc9657401040a8df6b9f8849460ffe729edb0747a3544\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"9f396d38d521f587c18c0a41cf8e0c1f94ec64d8586e659b1200e5d8484360ce\""
Mar 14 00:51:14.798762 containerd[1512]: time="2026-03-14T00:51:14.798736890Z" level=info msg="StartContainer for \"9f396d38d521f587c18c0a41cf8e0c1f94ec64d8586e659b1200e5d8484360ce\""
Mar 14 00:51:14.835354 kubelet[2658]: I0314 00:51:14.835273 2658 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6f9c7c5859-fwcgl" podStartSLOduration=1.73527765 podStartE2EDuration="12.835209331s" podCreationTimestamp="2026-03-14 00:51:02 +0000 UTC" firstStartedPulling="2026-03-14 00:51:02.984003611 +0000 UTC m=+6.455248984" lastFinishedPulling="2026-03-14 00:51:14.083935293 +0000 UTC m=+17.555180665" observedRunningTime="2026-03-14 00:51:14.835068386 +0000 UTC m=+18.306313763" watchObservedRunningTime="2026-03-14 00:51:14.835209331 +0000 UTC m=+18.306454705"
Mar 14 00:51:14.867372 systemd[1]: Started cri-containerd-9f396d38d521f587c18c0a41cf8e0c1f94ec64d8586e659b1200e5d8484360ce.scope - libcontainer container 9f396d38d521f587c18c0a41cf8e0c1f94ec64d8586e659b1200e5d8484360ce.
Mar 14 00:51:14.956381 containerd[1512]: time="2026-03-14T00:51:14.956340993Z" level=info msg="StartContainer for \"9f396d38d521f587c18c0a41cf8e0c1f94ec64d8586e659b1200e5d8484360ce\" returns successfully"
Mar 14 00:51:14.961660 systemd[1]: cri-containerd-9f396d38d521f587c18c0a41cf8e0c1f94ec64d8586e659b1200e5d8484360ce.scope: Deactivated successfully.
Mar 14 00:51:14.993937 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9f396d38d521f587c18c0a41cf8e0c1f94ec64d8586e659b1200e5d8484360ce-rootfs.mount: Deactivated successfully.
Mar 14 00:51:14.999780 containerd[1512]: time="2026-03-14T00:51:14.999670574Z" level=info msg="shim disconnected" id=9f396d38d521f587c18c0a41cf8e0c1f94ec64d8586e659b1200e5d8484360ce namespace=k8s.io
Mar 14 00:51:14.999780 containerd[1512]: time="2026-03-14T00:51:14.999755632Z" level=warning msg="cleaning up after shim disconnected" id=9f396d38d521f587c18c0a41cf8e0c1f94ec64d8586e659b1200e5d8484360ce namespace=k8s.io
Mar 14 00:51:15.000431 containerd[1512]: time="2026-03-14T00:51:14.999764781Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 14 00:51:15.794461 containerd[1512]: time="2026-03-14T00:51:15.794411104Z" level=info msg="CreateContainer within sandbox \"cfc25fae69bc1db0dd5dc9657401040a8df6b9f8849460ffe729edb0747a3544\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 14 00:51:15.849374 containerd[1512]: time="2026-03-14T00:51:15.849143308Z" level=info msg="CreateContainer within sandbox \"cfc25fae69bc1db0dd5dc9657401040a8df6b9f8849460ffe729edb0747a3544\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"1f7fb6e2e1e5e9576e5dac6cc30a31b9abe78f5420ab832aafeac058469fad63\""
Mar 14 00:51:15.852304 containerd[1512]: time="2026-03-14T00:51:15.849553647Z" level=info msg="StartContainer for \"1f7fb6e2e1e5e9576e5dac6cc30a31b9abe78f5420ab832aafeac058469fad63\""
Mar 14 00:51:15.899545 systemd[1]: Started cri-containerd-1f7fb6e2e1e5e9576e5dac6cc30a31b9abe78f5420ab832aafeac058469fad63.scope - libcontainer container 1f7fb6e2e1e5e9576e5dac6cc30a31b9abe78f5420ab832aafeac058469fad63.
Mar 14 00:51:15.942497 containerd[1512]: time="2026-03-14T00:51:15.942371227Z" level=info msg="StartContainer for \"1f7fb6e2e1e5e9576e5dac6cc30a31b9abe78f5420ab832aafeac058469fad63\" returns successfully"
Mar 14 00:51:16.167507 kubelet[2658]: I0314 00:51:16.166657 2658 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
Mar 14 00:51:16.209759 systemd[1]: Created slice kubepods-burstable-pod9c2ffd1f_3348_47d0_95c8_11e157021e69.slice - libcontainer container kubepods-burstable-pod9c2ffd1f_3348_47d0_95c8_11e157021e69.slice.
Mar 14 00:51:16.220104 systemd[1]: Created slice kubepods-burstable-pod1c6c55f7_f4fc_46c7_9de3_5d60a6e07c7f.slice - libcontainer container kubepods-burstable-pod1c6c55f7_f4fc_46c7_9de3_5d60a6e07c7f.slice.
Mar 14 00:51:16.315362 kubelet[2658]: I0314 00:51:16.315321 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1c6c55f7-f4fc-46c7-9de3-5d60a6e07c7f-config-volume\") pod \"coredns-66bc5c9577-g9vjd\" (UID: \"1c6c55f7-f4fc-46c7-9de3-5d60a6e07c7f\") " pod="kube-system/coredns-66bc5c9577-g9vjd"
Mar 14 00:51:16.315554 kubelet[2658]: I0314 00:51:16.315399 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9c2ffd1f-3348-47d0-95c8-11e157021e69-config-volume\") pod \"coredns-66bc5c9577-xs94q\" (UID: \"9c2ffd1f-3348-47d0-95c8-11e157021e69\") " pod="kube-system/coredns-66bc5c9577-xs94q"
Mar 14 00:51:16.315554 kubelet[2658]: I0314 00:51:16.315423 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lx6dk\" (UniqueName: \"kubernetes.io/projected/9c2ffd1f-3348-47d0-95c8-11e157021e69-kube-api-access-lx6dk\") pod \"coredns-66bc5c9577-xs94q\" (UID: \"9c2ffd1f-3348-47d0-95c8-11e157021e69\") " pod="kube-system/coredns-66bc5c9577-xs94q"
Mar 14 00:51:16.315554 kubelet[2658]: I0314 00:51:16.315464 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkvgv\" (UniqueName: \"kubernetes.io/projected/1c6c55f7-f4fc-46c7-9de3-5d60a6e07c7f-kube-api-access-bkvgv\") pod \"coredns-66bc5c9577-g9vjd\" (UID: \"1c6c55f7-f4fc-46c7-9de3-5d60a6e07c7f\") " pod="kube-system/coredns-66bc5c9577-g9vjd"
Mar 14 00:51:16.518308 containerd[1512]: time="2026-03-14T00:51:16.518054035Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-xs94q,Uid:9c2ffd1f-3348-47d0-95c8-11e157021e69,Namespace:kube-system,Attempt:0,}"
Mar 14 00:51:16.526955 containerd[1512]: time="2026-03-14T00:51:16.525901600Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-g9vjd,Uid:1c6c55f7-f4fc-46c7-9de3-5d60a6e07c7f,Namespace:kube-system,Attempt:0,}"
Mar 14 00:51:18.294367 systemd-networkd[1439]: cilium_host: Link UP
Mar 14 00:51:18.296483 systemd-networkd[1439]: cilium_net: Link UP
Mar 14 00:51:18.297144 systemd-networkd[1439]: cilium_net: Gained carrier
Mar 14 00:51:18.297326 systemd-networkd[1439]: cilium_host: Gained carrier
Mar 14 00:51:18.448036 systemd-networkd[1439]: cilium_vxlan: Link UP
Mar 14 00:51:18.448045 systemd-networkd[1439]: cilium_vxlan: Gained carrier
Mar 14 00:51:18.817310 kernel: NET: Registered PF_ALG protocol family
Mar 14 00:51:19.245561 systemd-networkd[1439]: cilium_net: Gained IPv6LL
Mar 14 00:51:19.309397 systemd-networkd[1439]: cilium_host: Gained IPv6LL
Mar 14 00:51:19.692221 systemd-networkd[1439]: lxc_health: Link UP
Mar 14 00:51:19.697294 systemd-networkd[1439]: lxc_health: Gained carrier
Mar 14 00:51:20.120294 systemd-networkd[1439]: lxc33e112a7f69d: Link UP
Mar 14 00:51:20.129642 systemd-networkd[1439]: lxc2b8d78e044e0: Link UP
Mar 14 00:51:20.138241 kernel: eth0: renamed from tmp50644
Mar 14 00:51:20.142215 kernel: eth0: renamed from tmp0b03e
Mar 14 00:51:20.145837 systemd-networkd[1439]: lxc2b8d78e044e0: Gained carrier
Mar 14 00:51:20.147382 systemd-networkd[1439]: lxc33e112a7f69d: Gained carrier
Mar 14 00:51:20.461320 systemd-networkd[1439]: cilium_vxlan: Gained IPv6LL
Mar 14 00:51:20.763384 kubelet[2658]: I0314 00:51:20.762231 2658 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-pp7w5" podStartSLOduration=9.922719415 podStartE2EDuration="18.762154912s" podCreationTimestamp="2026-03-14 00:51:02 +0000 UTC" firstStartedPulling="2026-03-14 00:51:02.862317564 +0000 UTC m=+6.333562937" lastFinishedPulling="2026-03-14 00:51:11.701753062 +0000 UTC m=+15.172998434" observedRunningTime="2026-03-14 00:51:16.825571086 +0000 UTC m=+20.296816496" watchObservedRunningTime="2026-03-14 00:51:20.762154912 +0000 UTC m=+24.233400286"
Mar 14 00:51:21.293633 systemd-networkd[1439]: lxc_health: Gained IPv6LL
Mar 14 00:51:21.357398 systemd-networkd[1439]: lxc33e112a7f69d: Gained IPv6LL
Mar 14 00:51:21.391406 systemd[1]: Started sshd@8-10.244.101.86:22-119.96.224.54:56448.service - OpenSSH per-connection server daemon (119.96.224.54:56448).
Mar 14 00:51:22.125423 systemd-networkd[1439]: lxc2b8d78e044e0: Gained IPv6LL
Mar 14 00:51:24.403693 containerd[1512]: time="2026-03-14T00:51:24.403485576Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 14 00:51:24.407229 containerd[1512]: time="2026-03-14T00:51:24.405171018Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 14 00:51:24.407229 containerd[1512]: time="2026-03-14T00:51:24.405247421Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:51:24.407229 containerd[1512]: time="2026-03-14T00:51:24.405400821Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:51:24.443241 systemd[1]: run-containerd-runc-k8s.io-5064434e6dfb8cc603fd558b4b5b98422397bfb0f09d1e8493dba6e4fa4ac3dd-runc.dk3QNc.mount: Deactivated successfully.
Mar 14 00:51:24.444193 containerd[1512]: time="2026-03-14T00:51:24.442448210Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 14 00:51:24.444193 containerd[1512]: time="2026-03-14T00:51:24.444035401Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 14 00:51:24.444193 containerd[1512]: time="2026-03-14T00:51:24.444097873Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:51:24.444468 containerd[1512]: time="2026-03-14T00:51:24.444281440Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:51:24.468850 systemd[1]: Started cri-containerd-5064434e6dfb8cc603fd558b4b5b98422397bfb0f09d1e8493dba6e4fa4ac3dd.scope - libcontainer container 5064434e6dfb8cc603fd558b4b5b98422397bfb0f09d1e8493dba6e4fa4ac3dd.
Mar 14 00:51:24.482352 systemd[1]: Started cri-containerd-0b03e874c36bdc3cbffb3aff8bc8acffecd3063b79348939a6980b3dd549fcd7.scope - libcontainer container 0b03e874c36bdc3cbffb3aff8bc8acffecd3063b79348939a6980b3dd549fcd7.
Mar 14 00:51:24.568954 containerd[1512]: time="2026-03-14T00:51:24.568012804Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-xs94q,Uid:9c2ffd1f-3348-47d0-95c8-11e157021e69,Namespace:kube-system,Attempt:0,} returns sandbox id \"0b03e874c36bdc3cbffb3aff8bc8acffecd3063b79348939a6980b3dd549fcd7\""
Mar 14 00:51:24.579866 containerd[1512]: time="2026-03-14T00:51:24.579774508Z" level=info msg="CreateContainer within sandbox \"0b03e874c36bdc3cbffb3aff8bc8acffecd3063b79348939a6980b3dd549fcd7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 14 00:51:24.583969 containerd[1512]: time="2026-03-14T00:51:24.583932181Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-g9vjd,Uid:1c6c55f7-f4fc-46c7-9de3-5d60a6e07c7f,Namespace:kube-system,Attempt:0,} returns sandbox id \"5064434e6dfb8cc603fd558b4b5b98422397bfb0f09d1e8493dba6e4fa4ac3dd\""
Mar 14 00:51:24.590255 containerd[1512]: time="2026-03-14T00:51:24.590220322Z" level=info msg="CreateContainer within sandbox \"5064434e6dfb8cc603fd558b4b5b98422397bfb0f09d1e8493dba6e4fa4ac3dd\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 14 00:51:24.599159 containerd[1512]: time="2026-03-14T00:51:24.599035540Z" level=info msg="CreateContainer within sandbox \"5064434e6dfb8cc603fd558b4b5b98422397bfb0f09d1e8493dba6e4fa4ac3dd\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c9bfdeb7f7985a1fc0a5b6a50db6e75c9c612634777a32ec73f43472d2a01d59\""
Mar 14 00:51:24.602218 containerd[1512]: time="2026-03-14T00:51:24.600770608Z" level=info msg="StartContainer for \"c9bfdeb7f7985a1fc0a5b6a50db6e75c9c612634777a32ec73f43472d2a01d59\""
Mar 14 00:51:24.604660 containerd[1512]: time="2026-03-14T00:51:24.604635424Z" level=info msg="CreateContainer within sandbox \"0b03e874c36bdc3cbffb3aff8bc8acffecd3063b79348939a6980b3dd549fcd7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"841fceea800a2771d2994f645f6b015564564be82c6c790d06ba006320270a6c\""
Mar 14 00:51:24.605290 containerd[1512]: time="2026-03-14T00:51:24.605259224Z" level=info msg="StartContainer for \"841fceea800a2771d2994f645f6b015564564be82c6c790d06ba006320270a6c\""
Mar 14 00:51:24.651351 systemd[1]: Started cri-containerd-c9bfdeb7f7985a1fc0a5b6a50db6e75c9c612634777a32ec73f43472d2a01d59.scope - libcontainer container c9bfdeb7f7985a1fc0a5b6a50db6e75c9c612634777a32ec73f43472d2a01d59.
Mar 14 00:51:24.656461 systemd[1]: Started cri-containerd-841fceea800a2771d2994f645f6b015564564be82c6c790d06ba006320270a6c.scope - libcontainer container 841fceea800a2771d2994f645f6b015564564be82c6c790d06ba006320270a6c.
Mar 14 00:51:24.695055 containerd[1512]: time="2026-03-14T00:51:24.694884255Z" level=info msg="StartContainer for \"c9bfdeb7f7985a1fc0a5b6a50db6e75c9c612634777a32ec73f43472d2a01d59\" returns successfully"
Mar 14 00:51:24.698545 containerd[1512]: time="2026-03-14T00:51:24.698433887Z" level=info msg="StartContainer for \"841fceea800a2771d2994f645f6b015564564be82c6c790d06ba006320270a6c\" returns successfully"
Mar 14 00:51:24.853020 kubelet[2658]: I0314 00:51:24.852835 2658 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-g9vjd" podStartSLOduration=22.852814076 podStartE2EDuration="22.852814076s" podCreationTimestamp="2026-03-14 00:51:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:51:24.85261504 +0000 UTC m=+28.323860420" watchObservedRunningTime="2026-03-14 00:51:24.852814076 +0000 UTC m=+28.324059468"
Mar 14 00:51:24.873835 kubelet[2658]: I0314 00:51:24.873505 2658 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-xs94q" podStartSLOduration=22.873405185 podStartE2EDuration="22.873405185s" podCreationTimestamp="2026-03-14 00:51:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:51:24.868965963 +0000 UTC m=+28.340211509" watchObservedRunningTime="2026-03-14 00:51:24.873405185 +0000 UTC m=+28.344650580"
Mar 14 00:51:57.598603 systemd[1]: Started sshd@9-10.244.101.86:22-20.161.92.111:37214.service - OpenSSH per-connection server daemon (20.161.92.111:37214).
Mar 14 00:51:58.198341 sshd[4071]: Accepted publickey for core from 20.161.92.111 port 37214 ssh2: RSA SHA256:G3DxPtudQCSC+zb3xt9jRLB1yvq/SeDG59+4Mc6l5RQ
Mar 14 00:51:58.200530 sshd[4071]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:51:58.211261 systemd-logind[1491]: New session 10 of user core.
Mar 14 00:51:58.221375 systemd[1]: Started session-10.scope - Session 10 of User core.
Mar 14 00:51:59.093871 sshd[4071]: pam_unix(sshd:session): session closed for user core
Mar 14 00:51:59.109717 systemd[1]: sshd@9-10.244.101.86:22-20.161.92.111:37214.service: Deactivated successfully.
Mar 14 00:51:59.115052 systemd[1]: session-10.scope: Deactivated successfully.
Mar 14 00:51:59.117116 systemd-logind[1491]: Session 10 logged out. Waiting for processes to exit.
Mar 14 00:51:59.118789 systemd-logind[1491]: Removed session 10.
Mar 14 00:52:04.202926 systemd[1]: Started sshd@10-10.244.101.86:22-20.161.92.111:54708.service - OpenSSH per-connection server daemon (20.161.92.111:54708).
Mar 14 00:52:04.425518 systemd[1]: Started sshd@11-10.244.101.86:22-4.213.160.153:52046.service - OpenSSH per-connection server daemon (4.213.160.153:52046).
Mar 14 00:52:04.836084 sshd[4088]: Accepted publickey for core from 20.161.92.111 port 54708 ssh2: RSA SHA256:G3DxPtudQCSC+zb3xt9jRLB1yvq/SeDG59+4Mc6l5RQ
Mar 14 00:52:04.839035 sshd[4088]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:52:04.845518 systemd-logind[1491]: New session 11 of user core.
Mar 14 00:52:04.853394 systemd[1]: Started session-11.scope - Session 11 of User core.
Mar 14 00:52:05.278863 sshd[4091]: Invalid user ais from 4.213.160.153 port 52046
Mar 14 00:52:05.325367 sshd[4088]: pam_unix(sshd:session): session closed for user core
Mar 14 00:52:05.334938 systemd[1]: sshd@10-10.244.101.86:22-20.161.92.111:54708.service: Deactivated successfully.
Mar 14 00:52:05.338537 systemd[1]: session-11.scope: Deactivated successfully.
Mar 14 00:52:05.339536 systemd-logind[1491]: Session 11 logged out. Waiting for processes to exit.
Mar 14 00:52:05.340671 systemd-logind[1491]: Removed session 11.
Mar 14 00:52:05.434336 sshd[4091]: Received disconnect from 4.213.160.153 port 52046:11: Bye Bye [preauth]
Mar 14 00:52:05.435221 sshd[4091]: Disconnected from invalid user ais 4.213.160.153 port 52046 [preauth]
Mar 14 00:52:05.437987 systemd[1]: sshd@11-10.244.101.86:22-4.213.160.153:52046.service: Deactivated successfully.
Mar 14 00:52:10.440888 systemd[1]: Started sshd@12-10.244.101.86:22-20.161.92.111:51162.service - OpenSSH per-connection server daemon (20.161.92.111:51162).
Mar 14 00:52:11.014762 sshd[4108]: Accepted publickey for core from 20.161.92.111 port 51162 ssh2: RSA SHA256:G3DxPtudQCSC+zb3xt9jRLB1yvq/SeDG59+4Mc6l5RQ
Mar 14 00:52:11.016407 sshd[4108]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:52:11.025683 systemd-logind[1491]: New session 12 of user core.
Mar 14 00:52:11.036621 systemd[1]: Started session-12.scope - Session 12 of User core.
Mar 14 00:52:11.518092 sshd[4108]: pam_unix(sshd:session): session closed for user core
Mar 14 00:52:11.528498 systemd[1]: sshd@12-10.244.101.86:22-20.161.92.111:51162.service: Deactivated successfully.
Mar 14 00:52:11.532151 systemd[1]: session-12.scope: Deactivated successfully.
Mar 14 00:52:11.534068 systemd-logind[1491]: Session 12 logged out. Waiting for processes to exit.
Mar 14 00:52:11.536919 systemd-logind[1491]: Removed session 12.
Mar 14 00:52:11.632689 systemd[1]: Started sshd@13-10.244.101.86:22-20.161.92.111:51168.service - OpenSSH per-connection server daemon (20.161.92.111:51168).
Mar 14 00:52:12.220462 sshd[4122]: Accepted publickey for core from 20.161.92.111 port 51168 ssh2: RSA SHA256:G3DxPtudQCSC+zb3xt9jRLB1yvq/SeDG59+4Mc6l5RQ
Mar 14 00:52:12.225005 sshd[4122]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:52:12.236334 systemd-logind[1491]: New session 13 of user core.
Mar 14 00:52:12.244459 systemd[1]: Started session-13.scope - Session 13 of User core.
Mar 14 00:52:12.801463 sshd[4122]: pam_unix(sshd:session): session closed for user core
Mar 14 00:52:12.807497 systemd-logind[1491]: Session 13 logged out. Waiting for processes to exit.
Mar 14 00:52:12.808110 systemd[1]: sshd@13-10.244.101.86:22-20.161.92.111:51168.service: Deactivated successfully.
Mar 14 00:52:12.812757 systemd[1]: session-13.scope: Deactivated successfully.
Mar 14 00:52:12.815696 systemd-logind[1491]: Removed session 13.
Mar 14 00:52:12.902591 systemd[1]: Started sshd@14-10.244.101.86:22-20.161.92.111:51182.service - OpenSSH per-connection server daemon (20.161.92.111:51182).
Mar 14 00:52:13.459843 sshd[4133]: Accepted publickey for core from 20.161.92.111 port 51182 ssh2: RSA SHA256:G3DxPtudQCSC+zb3xt9jRLB1yvq/SeDG59+4Mc6l5RQ
Mar 14 00:52:13.466672 sshd[4133]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:52:13.477470 systemd-logind[1491]: New session 14 of user core.
Mar 14 00:52:13.486710 systemd[1]: Started session-14.scope - Session 14 of User core.
Mar 14 00:52:13.944847 sshd[4133]: pam_unix(sshd:session): session closed for user core
Mar 14 00:52:13.954079 systemd[1]: sshd@14-10.244.101.86:22-20.161.92.111:51182.service: Deactivated successfully.
Mar 14 00:52:13.957761 systemd[1]: session-14.scope: Deactivated successfully.
Mar 14 00:52:13.960782 systemd-logind[1491]: Session 14 logged out. Waiting for processes to exit.
Mar 14 00:52:13.962274 systemd-logind[1491]: Removed session 14.
Mar 14 00:52:19.050489 systemd[1]: Started sshd@15-10.244.101.86:22-20.161.92.111:51188.service - OpenSSH per-connection server daemon (20.161.92.111:51188).
Mar 14 00:52:19.613910 sshd[4146]: Accepted publickey for core from 20.161.92.111 port 51188 ssh2: RSA SHA256:G3DxPtudQCSC+zb3xt9jRLB1yvq/SeDG59+4Mc6l5RQ
Mar 14 00:52:19.617923 sshd[4146]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:52:19.628087 systemd-logind[1491]: New session 15 of user core.
Mar 14 00:52:19.636422 systemd[1]: Started session-15.scope - Session 15 of User core.
Mar 14 00:52:20.086753 sshd[4146]: pam_unix(sshd:session): session closed for user core
Mar 14 00:52:20.096404 systemd[1]: sshd@15-10.244.101.86:22-20.161.92.111:51188.service: Deactivated successfully.
Mar 14 00:52:20.100691 systemd[1]: session-15.scope: Deactivated successfully.
Mar 14 00:52:20.103123 systemd-logind[1491]: Session 15 logged out. Waiting for processes to exit.
Mar 14 00:52:20.104515 systemd-logind[1491]: Removed session 15.
Mar 14 00:52:25.208648 systemd[1]: Started sshd@16-10.244.101.86:22-20.161.92.111:52712.service - OpenSSH per-connection server daemon (20.161.92.111:52712).
Mar 14 00:52:25.789758 sshd[4160]: Accepted publickey for core from 20.161.92.111 port 52712 ssh2: RSA SHA256:G3DxPtudQCSC+zb3xt9jRLB1yvq/SeDG59+4Mc6l5RQ
Mar 14 00:52:25.793019 sshd[4160]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:52:25.799282 systemd-logind[1491]: New session 16 of user core.
Mar 14 00:52:25.806501 systemd[1]: Started session-16.scope - Session 16 of User core.
Mar 14 00:52:26.290964 sshd[4160]: pam_unix(sshd:session): session closed for user core
Mar 14 00:52:26.302423 systemd[1]: sshd@16-10.244.101.86:22-20.161.92.111:52712.service: Deactivated successfully.
Mar 14 00:52:26.308696 systemd[1]: session-16.scope: Deactivated successfully.
Mar 14 00:52:26.312259 systemd-logind[1491]: Session 16 logged out. Waiting for processes to exit.
Mar 14 00:52:26.314016 systemd-logind[1491]: Removed session 16.
Mar 14 00:52:26.401679 systemd[1]: Started sshd@17-10.244.101.86:22-20.161.92.111:52714.service - OpenSSH per-connection server daemon (20.161.92.111:52714).
Mar 14 00:52:26.962796 sshd[4173]: Accepted publickey for core from 20.161.92.111 port 52714 ssh2: RSA SHA256:G3DxPtudQCSC+zb3xt9jRLB1yvq/SeDG59+4Mc6l5RQ
Mar 14 00:52:26.963535 sshd[4173]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:52:26.971925 systemd-logind[1491]: New session 17 of user core.
Mar 14 00:52:26.979408 systemd[1]: Started session-17.scope - Session 17 of User core.
Mar 14 00:52:27.676109 sshd[4173]: pam_unix(sshd:session): session closed for user core
Mar 14 00:52:27.693325 systemd[1]: sshd@17-10.244.101.86:22-20.161.92.111:52714.service: Deactivated successfully.
Mar 14 00:52:27.696665 systemd[1]: session-17.scope: Deactivated successfully.
Mar 14 00:52:27.698134 systemd-logind[1491]: Session 17 logged out. Waiting for processes to exit.
Mar 14 00:52:27.700145 systemd-logind[1491]: Removed session 17.
Mar 14 00:52:27.786618 systemd[1]: Started sshd@18-10.244.101.86:22-20.161.92.111:52722.service - OpenSSH per-connection server daemon (20.161.92.111:52722).
Mar 14 00:52:28.377911 sshd[4184]: Accepted publickey for core from 20.161.92.111 port 52722 ssh2: RSA SHA256:G3DxPtudQCSC+zb3xt9jRLB1yvq/SeDG59+4Mc6l5RQ
Mar 14 00:52:28.379905 sshd[4184]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:52:28.390479 systemd-logind[1491]: New session 18 of user core.
Mar 14 00:52:28.395412 systemd[1]: Started session-18.scope - Session 18 of User core.
Mar 14 00:52:29.554996 sshd[4184]: pam_unix(sshd:session): session closed for user core
Mar 14 00:52:29.580046 systemd[1]: sshd@18-10.244.101.86:22-20.161.92.111:52722.service: Deactivated successfully.
Mar 14 00:52:29.584384 systemd[1]: session-18.scope: Deactivated successfully.
Mar 14 00:52:29.586433 systemd-logind[1491]: Session 18 logged out. Waiting for processes to exit.
Mar 14 00:52:29.589034 systemd-logind[1491]: Removed session 18.
Mar 14 00:52:29.657487 systemd[1]: Started sshd@19-10.244.101.86:22-20.161.92.111:52732.service - OpenSSH per-connection server daemon (20.161.92.111:52732).
Mar 14 00:52:30.221316 sshd[4203]: Accepted publickey for core from 20.161.92.111 port 52732 ssh2: RSA SHA256:G3DxPtudQCSC+zb3xt9jRLB1yvq/SeDG59+4Mc6l5RQ
Mar 14 00:52:30.224591 sshd[4203]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:52:30.233172 systemd-logind[1491]: New session 19 of user core.
Mar 14 00:52:30.245343 systemd[1]: Started session-19.scope - Session 19 of User core.
Mar 14 00:52:30.852663 sshd[4203]: pam_unix(sshd:session): session closed for user core
Mar 14 00:52:30.864090 systemd[1]: sshd@19-10.244.101.86:22-20.161.92.111:52732.service: Deactivated successfully.
Mar 14 00:52:30.867689 systemd[1]: session-19.scope: Deactivated successfully.
Mar 14 00:52:30.870986 systemd-logind[1491]: Session 19 logged out. Waiting for processes to exit.
Mar 14 00:52:30.872967 systemd-logind[1491]: Removed session 19.
Mar 14 00:52:30.966702 systemd[1]: Started sshd@20-10.244.101.86:22-20.161.92.111:34502.service - OpenSSH per-connection server daemon (20.161.92.111:34502).
Mar 14 00:52:31.638313 sshd[4216]: Accepted publickey for core from 20.161.92.111 port 34502 ssh2: RSA SHA256:G3DxPtudQCSC+zb3xt9jRLB1yvq/SeDG59+4Mc6l5RQ
Mar 14 00:52:31.642161 sshd[4216]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:52:31.648749 systemd-logind[1491]: New session 20 of user core.
Mar 14 00:52:31.657355 systemd[1]: Started session-20.scope - Session 20 of User core.
Mar 14 00:52:32.128990 sshd[4216]: pam_unix(sshd:session): session closed for user core
Mar 14 00:52:32.139939 systemd[1]: sshd@20-10.244.101.86:22-20.161.92.111:34502.service: Deactivated successfully.
Mar 14 00:52:32.143579 systemd[1]: session-20.scope: Deactivated successfully.
Mar 14 00:52:32.145324 systemd-logind[1491]: Session 20 logged out. Waiting for processes to exit.
Mar 14 00:52:32.148098 systemd-logind[1491]: Removed session 20.
Mar 14 00:52:32.197109 update_engine[1492]: I20260314 00:52:32.196739 1492 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Mar 14 00:52:32.197109 update_engine[1492]: I20260314 00:52:32.196821 1492 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Mar 14 00:52:32.203612 systemd[1]: Started sshd@21-10.244.101.86:22-154.221.27.234:45546.service - OpenSSH per-connection server daemon (154.221.27.234:45546).
Mar 14 00:52:32.206305 update_engine[1492]: I20260314 00:52:32.205425 1492 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Mar 14 00:52:32.206305 update_engine[1492]: I20260314 00:52:32.206103 1492 omaha_request_params.cc:62] Current group set to lts
Mar 14 00:52:32.206914 update_engine[1492]: I20260314 00:52:32.206538 1492 update_attempter.cc:499] Already updated boot flags. Skipping.
Mar 14 00:52:32.206914 update_engine[1492]: I20260314 00:52:32.206557 1492 update_attempter.cc:643] Scheduling an action processor start.
Mar 14 00:52:32.206914 update_engine[1492]: I20260314 00:52:32.206580 1492 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Mar 14 00:52:32.206914 update_engine[1492]: I20260314 00:52:32.206633 1492 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Mar 14 00:52:32.206914 update_engine[1492]: I20260314 00:52:32.206712 1492 omaha_request_action.cc:271] Posting an Omaha request to disabled
Mar 14 00:52:32.206914 update_engine[1492]: I20260314 00:52:32.206720 1492 omaha_request_action.cc:272] Request:
Mar 14 00:52:32.206914 update_engine[1492]:
Mar 14 00:52:32.206914 update_engine[1492]:
Mar 14 00:52:32.206914 update_engine[1492]:
Mar 14 00:52:32.206914 update_engine[1492]:
Mar 14 00:52:32.206914 update_engine[1492]:
Mar 14 00:52:32.206914 update_engine[1492]:
Mar 14 00:52:32.206914 update_engine[1492]:
Mar 14 00:52:32.206914 update_engine[1492]:
Mar 14 00:52:32.206914 update_engine[1492]: I20260314 00:52:32.206728 1492 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Mar 14 00:52:32.210725 update_engine[1492]: I20260314 00:52:32.210693 1492 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Mar 14 00:52:32.211171 update_engine[1492]: I20260314 00:52:32.211134 1492 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Mar 14 00:52:32.224789 update_engine[1492]: E20260314 00:52:32.224740 1492 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Mar 14 00:52:32.225331 update_engine[1492]: I20260314 00:52:32.225275 1492 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Mar 14 00:52:32.231633 locksmithd[1523]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Mar 14 00:52:33.324572 sshd[4228]: Invalid user george from 154.221.27.234 port 45546
Mar 14 00:52:33.543083 sshd[4228]: Received disconnect from 154.221.27.234 port 45546:11: Bye Bye [preauth]
Mar 14 00:52:33.543083 sshd[4228]: Disconnected from invalid user george 154.221.27.234 port 45546 [preauth]
Mar 14 00:52:33.548676 systemd[1]: sshd@21-10.244.101.86:22-154.221.27.234:45546.service: Deactivated successfully.
Mar 14 00:52:37.244609 systemd[1]: Started sshd@22-10.244.101.86:22-20.161.92.111:34508.service - OpenSSH per-connection server daemon (20.161.92.111:34508).
Mar 14 00:52:37.809227 sshd[4237]: Accepted publickey for core from 20.161.92.111 port 34508 ssh2: RSA SHA256:G3DxPtudQCSC+zb3xt9jRLB1yvq/SeDG59+4Mc6l5RQ
Mar 14 00:52:37.812484 sshd[4237]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:52:37.820973 systemd-logind[1491]: New session 21 of user core.
Mar 14 00:52:37.831408 systemd[1]: Started session-21.scope - Session 21 of User core.
Mar 14 00:52:38.281437 sshd[4237]: pam_unix(sshd:session): session closed for user core
Mar 14 00:52:38.292387 systemd[1]: sshd@22-10.244.101.86:22-20.161.92.111:34508.service: Deactivated successfully.
Mar 14 00:52:38.297022 systemd[1]: session-21.scope: Deactivated successfully.
Mar 14 00:52:38.300510 systemd-logind[1491]: Session 21 logged out. Waiting for processes to exit.
Mar 14 00:52:38.301661 systemd-logind[1491]: Removed session 21.
Mar 14 00:52:42.162250 update_engine[1492]: I20260314 00:52:42.160481 1492 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Mar 14 00:52:42.162250 update_engine[1492]: I20260314 00:52:42.161492 1492 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Mar 14 00:52:42.163438 update_engine[1492]: I20260314 00:52:42.163368 1492 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Mar 14 00:52:42.164040 update_engine[1492]: E20260314 00:52:42.163980 1492 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Mar 14 00:52:42.164404 update_engine[1492]: I20260314 00:52:42.164336 1492 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Mar 14 00:52:43.390911 systemd[1]: Started sshd@23-10.244.101.86:22-20.161.92.111:59684.service - OpenSSH per-connection server daemon (20.161.92.111:59684).
Mar 14 00:52:43.979254 sshd[4249]: Accepted publickey for core from 20.161.92.111 port 59684 ssh2: RSA SHA256:G3DxPtudQCSC+zb3xt9jRLB1yvq/SeDG59+4Mc6l5RQ
Mar 14 00:52:43.983059 sshd[4249]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:52:43.993414 systemd-logind[1491]: New session 22 of user core.
Mar 14 00:52:44.004577 systemd[1]: Started session-22.scope - Session 22 of User core.
Mar 14 00:52:44.472757 sshd[4249]: pam_unix(sshd:session): session closed for user core
Mar 14 00:52:44.479864 systemd-logind[1491]: Session 22 logged out. Waiting for processes to exit.
Mar 14 00:52:44.480744 systemd[1]: sshd@23-10.244.101.86:22-20.161.92.111:59684.service: Deactivated successfully.
Mar 14 00:52:44.485006 systemd[1]: session-22.scope: Deactivated successfully.
Mar 14 00:52:44.488760 systemd-logind[1491]: Removed session 22.
Mar 14 00:52:44.578804 systemd[1]: Started sshd@24-10.244.101.86:22-20.161.92.111:59688.service - OpenSSH per-connection server daemon (20.161.92.111:59688).
Mar 14 00:52:45.147945 sshd[4262]: Accepted publickey for core from 20.161.92.111 port 59688 ssh2: RSA SHA256:G3DxPtudQCSC+zb3xt9jRLB1yvq/SeDG59+4Mc6l5RQ
Mar 14 00:52:45.152394 sshd[4262]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:52:45.159700 systemd-logind[1491]: New session 23 of user core.
Mar 14 00:52:45.171408 systemd[1]: Started session-23.scope - Session 23 of User core.
Mar 14 00:52:46.858684 containerd[1512]: time="2026-03-14T00:52:46.858597242Z" level=info msg="StopContainer for \"2f035ba4378faf7a8e11768844f013a5c49eec99af72c2d7738d0099d92c2da1\" with timeout 30 (s)"
Mar 14 00:52:46.859353 systemd[1]: run-containerd-runc-k8s.io-1f7fb6e2e1e5e9576e5dac6cc30a31b9abe78f5420ab832aafeac058469fad63-runc.HAR64R.mount: Deactivated successfully.
Mar 14 00:52:46.862580 containerd[1512]: time="2026-03-14T00:52:46.861380932Z" level=info msg="Stop container \"2f035ba4378faf7a8e11768844f013a5c49eec99af72c2d7738d0099d92c2da1\" with signal terminated"
Mar 14 00:52:46.897631 systemd[1]: cri-containerd-2f035ba4378faf7a8e11768844f013a5c49eec99af72c2d7738d0099d92c2da1.scope: Deactivated successfully.
Mar 14 00:52:46.899758 containerd[1512]: time="2026-03-14T00:52:46.899531847Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 14 00:52:46.916555 containerd[1512]: time="2026-03-14T00:52:46.916513638Z" level=info msg="StopContainer for \"1f7fb6e2e1e5e9576e5dac6cc30a31b9abe78f5420ab832aafeac058469fad63\" with timeout 2 (s)"
Mar 14 00:52:46.916902 containerd[1512]: time="2026-03-14T00:52:46.916828298Z" level=info msg="Stop container \"1f7fb6e2e1e5e9576e5dac6cc30a31b9abe78f5420ab832aafeac058469fad63\" with signal terminated"
Mar 14 00:52:46.931355 systemd-networkd[1439]: lxc_health: Link DOWN
Mar 14 00:52:46.931363 systemd-networkd[1439]: lxc_health: Lost carrier
Mar 14 00:52:46.947539 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2f035ba4378faf7a8e11768844f013a5c49eec99af72c2d7738d0099d92c2da1-rootfs.mount: Deactivated successfully.
Mar 14 00:52:46.950608 containerd[1512]: time="2026-03-14T00:52:46.950461132Z" level=info msg="shim disconnected" id=2f035ba4378faf7a8e11768844f013a5c49eec99af72c2d7738d0099d92c2da1 namespace=k8s.io
Mar 14 00:52:46.950842 containerd[1512]: time="2026-03-14T00:52:46.950820991Z" level=warning msg="cleaning up after shim disconnected" id=2f035ba4378faf7a8e11768844f013a5c49eec99af72c2d7738d0099d92c2da1 namespace=k8s.io
Mar 14 00:52:46.950983 containerd[1512]: time="2026-03-14T00:52:46.950875727Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 14 00:52:46.956643 systemd[1]: cri-containerd-1f7fb6e2e1e5e9576e5dac6cc30a31b9abe78f5420ab832aafeac058469fad63.scope: Deactivated successfully.
Mar 14 00:52:46.956908 systemd[1]: cri-containerd-1f7fb6e2e1e5e9576e5dac6cc30a31b9abe78f5420ab832aafeac058469fad63.scope: Consumed 7.817s CPU time.
Mar 14 00:52:46.980856 containerd[1512]: time="2026-03-14T00:52:46.980823436Z" level=info msg="StopContainer for \"2f035ba4378faf7a8e11768844f013a5c49eec99af72c2d7738d0099d92c2da1\" returns successfully"
Mar 14 00:52:46.981995 containerd[1512]: time="2026-03-14T00:52:46.981968960Z" level=info msg="StopPodSandbox for \"65a5baeb48766ef4dc1aa79b8048cf6a9b7a9648962526e947b1629096403612\""
Mar 14 00:52:46.982122 containerd[1512]: time="2026-03-14T00:52:46.982108708Z" level=info msg="Container to stop \"2f035ba4378faf7a8e11768844f013a5c49eec99af72c2d7738d0099d92c2da1\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 14 00:52:46.983969 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-65a5baeb48766ef4dc1aa79b8048cf6a9b7a9648962526e947b1629096403612-shm.mount: Deactivated successfully.
Mar 14 00:52:46.990092 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1f7fb6e2e1e5e9576e5dac6cc30a31b9abe78f5420ab832aafeac058469fad63-rootfs.mount: Deactivated successfully.
Mar 14 00:52:46.995335 containerd[1512]: time="2026-03-14T00:52:46.995277090Z" level=info msg="shim disconnected" id=1f7fb6e2e1e5e9576e5dac6cc30a31b9abe78f5420ab832aafeac058469fad63 namespace=k8s.io
Mar 14 00:52:46.995648 containerd[1512]: time="2026-03-14T00:52:46.995494984Z" level=warning msg="cleaning up after shim disconnected" id=1f7fb6e2e1e5e9576e5dac6cc30a31b9abe78f5420ab832aafeac058469fad63 namespace=k8s.io
Mar 14 00:52:46.995648 containerd[1512]: time="2026-03-14T00:52:46.995509783Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 14 00:52:46.996847 systemd[1]: cri-containerd-65a5baeb48766ef4dc1aa79b8048cf6a9b7a9648962526e947b1629096403612.scope: Deactivated successfully.
Mar 14 00:52:47.015219 containerd[1512]: time="2026-03-14T00:52:47.015117102Z" level=info msg="StopContainer for \"1f7fb6e2e1e5e9576e5dac6cc30a31b9abe78f5420ab832aafeac058469fad63\" returns successfully"
Mar 14 00:52:47.016374 containerd[1512]: time="2026-03-14T00:52:47.016171332Z" level=info msg="StopPodSandbox for \"cfc25fae69bc1db0dd5dc9657401040a8df6b9f8849460ffe729edb0747a3544\""
Mar 14 00:52:47.016510 containerd[1512]: time="2026-03-14T00:52:47.016493807Z" level=info msg="Container to stop \"bddbfe2005eb4836cbbccdaf52127cf988b4d20f79bc3de1392fe4492ff67a77\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 14 00:52:47.017034 containerd[1512]: time="2026-03-14T00:52:47.017012200Z" level=info msg="Container to stop \"8b29ebac1b14c195d76950052028ed3989e19bca87e4fd225de773afe46a3d85\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 14 00:52:47.017112 containerd[1512]: time="2026-03-14T00:52:47.017101148Z" level=info msg="Container to stop \"7ad0ff3b6ebc87468db0007a85fd396f66c726e70b2f3abeb774bc697f937b45\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 14 00:52:47.017238 containerd[1512]: time="2026-03-14T00:52:47.017226642Z" level=info msg="Container to stop \"9f396d38d521f587c18c0a41cf8e0c1f94ec64d8586e659b1200e5d8484360ce\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 14 00:52:47.017414 containerd[1512]: time="2026-03-14T00:52:47.017326623Z" level=info msg="Container to stop \"1f7fb6e2e1e5e9576e5dac6cc30a31b9abe78f5420ab832aafeac058469fad63\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 14 00:52:47.027994 systemd[1]: cri-containerd-cfc25fae69bc1db0dd5dc9657401040a8df6b9f8849460ffe729edb0747a3544.scope: Deactivated successfully.
Mar 14 00:52:47.040670 containerd[1512]: time="2026-03-14T00:52:47.040514405Z" level=info msg="shim disconnected" id=65a5baeb48766ef4dc1aa79b8048cf6a9b7a9648962526e947b1629096403612 namespace=k8s.io
Mar 14 00:52:47.040670 containerd[1512]: time="2026-03-14T00:52:47.040561431Z" level=warning msg="cleaning up after shim disconnected" id=65a5baeb48766ef4dc1aa79b8048cf6a9b7a9648962526e947b1629096403612 namespace=k8s.io
Mar 14 00:52:47.040670 containerd[1512]: time="2026-03-14T00:52:47.040569052Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 14 00:52:47.058222 containerd[1512]: time="2026-03-14T00:52:47.058002510Z" level=info msg="shim disconnected" id=cfc25fae69bc1db0dd5dc9657401040a8df6b9f8849460ffe729edb0747a3544 namespace=k8s.io
Mar 14 00:52:47.058222 containerd[1512]: time="2026-03-14T00:52:47.058071396Z" level=warning msg="cleaning up after shim disconnected" id=cfc25fae69bc1db0dd5dc9657401040a8df6b9f8849460ffe729edb0747a3544 namespace=k8s.io
Mar 14 00:52:47.058222 containerd[1512]: time="2026-03-14T00:52:47.058080644Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 14 00:52:47.078441 containerd[1512]: time="2026-03-14T00:52:47.077846107Z" level=info msg="TearDown network for sandbox \"65a5baeb48766ef4dc1aa79b8048cf6a9b7a9648962526e947b1629096403612\" successfully"
Mar 14 00:52:47.078441 containerd[1512]: time="2026-03-14T00:52:47.077891565Z" level=info msg="StopPodSandbox for \"65a5baeb48766ef4dc1aa79b8048cf6a9b7a9648962526e947b1629096403612\" returns successfully"
Mar 14 00:52:47.097277 containerd[1512]: time="2026-03-14T00:52:47.097207260Z" level=info msg="TearDown network for sandbox \"cfc25fae69bc1db0dd5dc9657401040a8df6b9f8849460ffe729edb0747a3544\" successfully"
Mar 14 00:52:47.097567 containerd[1512]: time="2026-03-14T00:52:47.097547037Z" level=info msg="StopPodSandbox for \"cfc25fae69bc1db0dd5dc9657401040a8df6b9f8849460ffe729edb0747a3544\" returns successfully"
Mar 14 00:52:47.142362 kubelet[2658]: I0314 00:52:47.139605 2658 scope.go:117] "RemoveContainer" containerID="2f035ba4378faf7a8e11768844f013a5c49eec99af72c2d7738d0099d92c2da1"
Mar 14 00:52:47.146536 containerd[1512]: time="2026-03-14T00:52:47.146066296Z" level=info msg="RemoveContainer for \"2f035ba4378faf7a8e11768844f013a5c49eec99af72c2d7738d0099d92c2da1\""
Mar 14 00:52:47.150866 containerd[1512]: time="2026-03-14T00:52:47.150745970Z" level=info msg="RemoveContainer for \"2f035ba4378faf7a8e11768844f013a5c49eec99af72c2d7738d0099d92c2da1\" returns successfully"
Mar 14 00:52:47.150977 kubelet[2658]: I0314 00:52:47.150952 2658 scope.go:117] "RemoveContainer" containerID="2f035ba4378faf7a8e11768844f013a5c49eec99af72c2d7738d0099d92c2da1"
Mar 14 00:52:47.158917 containerd[1512]: time="2026-03-14T00:52:47.151161963Z" level=error msg="ContainerStatus for \"2f035ba4378faf7a8e11768844f013a5c49eec99af72c2d7738d0099d92c2da1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2f035ba4378faf7a8e11768844f013a5c49eec99af72c2d7738d0099d92c2da1\": not found"
Mar 14 00:52:47.159038 kubelet[2658]: E0314 00:52:47.158946 2658 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2f035ba4378faf7a8e11768844f013a5c49eec99af72c2d7738d0099d92c2da1\": not found" containerID="2f035ba4378faf7a8e11768844f013a5c49eec99af72c2d7738d0099d92c2da1"
Mar 14 00:52:47.159038 kubelet[2658]: I0314 00:52:47.158989 2658 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2f035ba4378faf7a8e11768844f013a5c49eec99af72c2d7738d0099d92c2da1"} err="failed to get container status \"2f035ba4378faf7a8e11768844f013a5c49eec99af72c2d7738d0099d92c2da1\": rpc error: code = NotFound desc = an error occurred when try to find container \"2f035ba4378faf7a8e11768844f013a5c49eec99af72c2d7738d0099d92c2da1\": not found"
Mar 14 00:52:47.161694 kubelet[2658]: I0314 00:52:47.159047 2658 scope.go:117] "RemoveContainer" containerID="1f7fb6e2e1e5e9576e5dac6cc30a31b9abe78f5420ab832aafeac058469fad63"
Mar 14 00:52:47.163314 containerd[1512]: time="2026-03-14T00:52:47.163277724Z" level=info msg="RemoveContainer for \"1f7fb6e2e1e5e9576e5dac6cc30a31b9abe78f5420ab832aafeac058469fad63\""
Mar 14 00:52:47.169246 containerd[1512]: time="2026-03-14T00:52:47.169140481Z" level=info msg="RemoveContainer for \"1f7fb6e2e1e5e9576e5dac6cc30a31b9abe78f5420ab832aafeac058469fad63\" returns successfully"
Mar 14 00:52:47.169903 kubelet[2658]: I0314 00:52:47.169529 2658 scope.go:117] "RemoveContainer" containerID="9f396d38d521f587c18c0a41cf8e0c1f94ec64d8586e659b1200e5d8484360ce"
Mar 14 00:52:47.172139 containerd[1512]: time="2026-03-14T00:52:47.172094495Z" level=info msg="RemoveContainer for \"9f396d38d521f587c18c0a41cf8e0c1f94ec64d8586e659b1200e5d8484360ce\""
Mar 14 00:52:47.175523 containerd[1512]: time="2026-03-14T00:52:47.175457594Z" level=info msg="RemoveContainer for \"9f396d38d521f587c18c0a41cf8e0c1f94ec64d8586e659b1200e5d8484360ce\" returns successfully"
Mar 14 00:52:47.175801 kubelet[2658]: I0314 00:52:47.175748 2658 scope.go:117] "RemoveContainer" containerID="7ad0ff3b6ebc87468db0007a85fd396f66c726e70b2f3abeb774bc697f937b45"
Mar 14 00:52:47.178356 containerd[1512]: time="2026-03-14T00:52:47.178269940Z" level=info msg="RemoveContainer for \"7ad0ff3b6ebc87468db0007a85fd396f66c726e70b2f3abeb774bc697f937b45\""
Mar 14 00:52:47.181225 containerd[1512]: time="2026-03-14T00:52:47.181145674Z" level=info msg="RemoveContainer for \"7ad0ff3b6ebc87468db0007a85fd396f66c726e70b2f3abeb774bc697f937b45\" returns successfully"
Mar 14 00:52:47.181871 kubelet[2658]: I0314 00:52:47.181519 2658 scope.go:117] "RemoveContainer" containerID="8b29ebac1b14c195d76950052028ed3989e19bca87e4fd225de773afe46a3d85"
Mar 14 00:52:47.182932 containerd[1512]: time="2026-03-14T00:52:47.182904550Z" level=info msg="RemoveContainer for \"8b29ebac1b14c195d76950052028ed3989e19bca87e4fd225de773afe46a3d85\""
Mar 14 00:52:47.185777 containerd[1512]: time="2026-03-14T00:52:47.185675583Z" level=info msg="RemoveContainer for \"8b29ebac1b14c195d76950052028ed3989e19bca87e4fd225de773afe46a3d85\" returns successfully"
Mar 14 00:52:47.185890 kubelet[2658]: I0314 00:52:47.185847 2658 scope.go:117] "RemoveContainer" containerID="bddbfe2005eb4836cbbccdaf52127cf988b4d20f79bc3de1392fe4492ff67a77"
Mar 14 00:52:47.187049 containerd[1512]: time="2026-03-14T00:52:47.187019032Z" level=info msg="RemoveContainer for \"bddbfe2005eb4836cbbccdaf52127cf988b4d20f79bc3de1392fe4492ff67a77\""
Mar 14 00:52:47.188924 containerd[1512]: time="2026-03-14T00:52:47.188894968Z" level=info msg="RemoveContainer for \"bddbfe2005eb4836cbbccdaf52127cf988b4d20f79bc3de1392fe4492ff67a77\" returns successfully"
Mar 14 00:52:47.189320 containerd[1512]: time="2026-03-14T00:52:47.189242071Z" level=error msg="ContainerStatus for \"1f7fb6e2e1e5e9576e5dac6cc30a31b9abe78f5420ab832aafeac058469fad63\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1f7fb6e2e1e5e9576e5dac6cc30a31b9abe78f5420ab832aafeac058469fad63\": not found"
Mar 14 00:52:47.189366 kubelet[2658]: I0314 00:52:47.189030 2658 scope.go:117] "RemoveContainer" containerID="1f7fb6e2e1e5e9576e5dac6cc30a31b9abe78f5420ab832aafeac058469fad63"
Mar 14 00:52:47.189621 kubelet[2658]: E0314 00:52:47.189498 2658 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1f7fb6e2e1e5e9576e5dac6cc30a31b9abe78f5420ab832aafeac058469fad63\": not found" containerID="1f7fb6e2e1e5e9576e5dac6cc30a31b9abe78f5420ab832aafeac058469fad63"
Mar 14 00:52:47.189621 kubelet[2658]: I0314 00:52:47.189537 2658 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1f7fb6e2e1e5e9576e5dac6cc30a31b9abe78f5420ab832aafeac058469fad63"} err="failed to get container status \"1f7fb6e2e1e5e9576e5dac6cc30a31b9abe78f5420ab832aafeac058469fad63\": rpc error: code = NotFound desc = an error occurred when try to find container \"1f7fb6e2e1e5e9576e5dac6cc30a31b9abe78f5420ab832aafeac058469fad63\": not found"
Mar 14 00:52:47.189621 kubelet[2658]: I0314 00:52:47.189558 2658 scope.go:117] "RemoveContainer" containerID="9f396d38d521f587c18c0a41cf8e0c1f94ec64d8586e659b1200e5d8484360ce"
Mar 14 00:52:47.189749 containerd[1512]: time="2026-03-14T00:52:47.189711534Z" level=error msg="ContainerStatus for \"9f396d38d521f587c18c0a41cf8e0c1f94ec64d8586e659b1200e5d8484360ce\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9f396d38d521f587c18c0a41cf8e0c1f94ec64d8586e659b1200e5d8484360ce\": not found"
Mar 14 00:52:47.190024 kubelet[2658]: E0314 00:52:47.189911 2658 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9f396d38d521f587c18c0a41cf8e0c1f94ec64d8586e659b1200e5d8484360ce\": not found" containerID="9f396d38d521f587c18c0a41cf8e0c1f94ec64d8586e659b1200e5d8484360ce"
Mar 14 00:52:47.190024 kubelet[2658]: I0314 00:52:47.189931 2658 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9f396d38d521f587c18c0a41cf8e0c1f94ec64d8586e659b1200e5d8484360ce"} err="failed to get container status \"9f396d38d521f587c18c0a41cf8e0c1f94ec64d8586e659b1200e5d8484360ce\": rpc error: code = NotFound desc = an error occurred when try to find container \"9f396d38d521f587c18c0a41cf8e0c1f94ec64d8586e659b1200e5d8484360ce\": not found"
Mar 14 00:52:47.190024 kubelet[2658]: I0314 00:52:47.189944 2658 scope.go:117] "RemoveContainer" containerID="7ad0ff3b6ebc87468db0007a85fd396f66c726e70b2f3abeb774bc697f937b45"
Mar 14 00:52:47.190300 containerd[1512]: time="2026-03-14T00:52:47.190095065Z" level=error msg="ContainerStatus for \"7ad0ff3b6ebc87468db0007a85fd396f66c726e70b2f3abeb774bc697f937b45\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7ad0ff3b6ebc87468db0007a85fd396f66c726e70b2f3abeb774bc697f937b45\": not found"
Mar 14 00:52:47.190535 kubelet[2658]: E0314 00:52:47.190386 2658 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7ad0ff3b6ebc87468db0007a85fd396f66c726e70b2f3abeb774bc697f937b45\": not found" containerID="7ad0ff3b6ebc87468db0007a85fd396f66c726e70b2f3abeb774bc697f937b45"
Mar 14 00:52:47.190535 kubelet[2658]: I0314 00:52:47.190422 2658 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7ad0ff3b6ebc87468db0007a85fd396f66c726e70b2f3abeb774bc697f937b45"} err="failed to get container status \"7ad0ff3b6ebc87468db0007a85fd396f66c726e70b2f3abeb774bc697f937b45\": rpc error: code = NotFound desc = an error occurred when try to find container \"7ad0ff3b6ebc87468db0007a85fd396f66c726e70b2f3abeb774bc697f937b45\": not found"
Mar 14 00:52:47.190535 kubelet[2658]: I0314 00:52:47.190436 2658 scope.go:117] "RemoveContainer" containerID="8b29ebac1b14c195d76950052028ed3989e19bca87e4fd225de773afe46a3d85"
Mar 14 00:52:47.190947 containerd[1512]: time="2026-03-14T00:52:47.190842991Z" level=error msg="ContainerStatus for \"8b29ebac1b14c195d76950052028ed3989e19bca87e4fd225de773afe46a3d85\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8b29ebac1b14c195d76950052028ed3989e19bca87e4fd225de773afe46a3d85\": not found"
Mar 14 00:52:47.191148 kubelet[2658]: E0314 00:52:47.191044 2658 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8b29ebac1b14c195d76950052028ed3989e19bca87e4fd225de773afe46a3d85\": not found" containerID="8b29ebac1b14c195d76950052028ed3989e19bca87e4fd225de773afe46a3d85"
Mar 14 00:52:47.191148 kubelet[2658]: I0314 00:52:47.191062 2658 pod_container_deletor.go:53] "DeleteContainer returned error"
containerID={"Type":"containerd","ID":"8b29ebac1b14c195d76950052028ed3989e19bca87e4fd225de773afe46a3d85"} err="failed to get container status \"8b29ebac1b14c195d76950052028ed3989e19bca87e4fd225de773afe46a3d85\": rpc error: code = NotFound desc = an error occurred when try to find container \"8b29ebac1b14c195d76950052028ed3989e19bca87e4fd225de773afe46a3d85\": not found" Mar 14 00:52:47.191148 kubelet[2658]: I0314 00:52:47.191076 2658 scope.go:117] "RemoveContainer" containerID="bddbfe2005eb4836cbbccdaf52127cf988b4d20f79bc3de1392fe4492ff67a77" Mar 14 00:52:47.191565 containerd[1512]: time="2026-03-14T00:52:47.191390071Z" level=error msg="ContainerStatus for \"bddbfe2005eb4836cbbccdaf52127cf988b4d20f79bc3de1392fe4492ff67a77\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bddbfe2005eb4836cbbccdaf52127cf988b4d20f79bc3de1392fe4492ff67a77\": not found" Mar 14 00:52:47.191628 kubelet[2658]: E0314 00:52:47.191506 2658 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bddbfe2005eb4836cbbccdaf52127cf988b4d20f79bc3de1392fe4492ff67a77\": not found" containerID="bddbfe2005eb4836cbbccdaf52127cf988b4d20f79bc3de1392fe4492ff67a77" Mar 14 00:52:47.191628 kubelet[2658]: I0314 00:52:47.191532 2658 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bddbfe2005eb4836cbbccdaf52127cf988b4d20f79bc3de1392fe4492ff67a77"} err="failed to get container status \"bddbfe2005eb4836cbbccdaf52127cf988b4d20f79bc3de1392fe4492ff67a77\": rpc error: code = NotFound desc = an error occurred when try to find container \"bddbfe2005eb4836cbbccdaf52127cf988b4d20f79bc3de1392fe4492ff67a77\": not found" Mar 14 00:52:47.234972 kubelet[2658]: I0314 00:52:47.233395 2658 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/20b24e53-7a9b-4af8-96bc-13b79ff21e88-cilium-config-path\") pod \"20b24e53-7a9b-4af8-96bc-13b79ff21e88\" (UID: \"20b24e53-7a9b-4af8-96bc-13b79ff21e88\") " Mar 14 00:52:47.234972 kubelet[2658]: I0314 00:52:47.233494 2658 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/20b24e53-7a9b-4af8-96bc-13b79ff21e88-host-proc-sys-kernel\") pod \"20b24e53-7a9b-4af8-96bc-13b79ff21e88\" (UID: \"20b24e53-7a9b-4af8-96bc-13b79ff21e88\") " Mar 14 00:52:47.234972 kubelet[2658]: I0314 00:52:47.233549 2658 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/20b24e53-7a9b-4af8-96bc-13b79ff21e88-cilium-cgroup\") pod \"20b24e53-7a9b-4af8-96bc-13b79ff21e88\" (UID: \"20b24e53-7a9b-4af8-96bc-13b79ff21e88\") " Mar 14 00:52:47.234972 kubelet[2658]: I0314 00:52:47.233592 2658 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/20b24e53-7a9b-4af8-96bc-13b79ff21e88-clustermesh-secrets\") pod \"20b24e53-7a9b-4af8-96bc-13b79ff21e88\" (UID: \"20b24e53-7a9b-4af8-96bc-13b79ff21e88\") " Mar 14 00:52:47.234972 kubelet[2658]: I0314 00:52:47.233636 2658 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/20b24e53-7a9b-4af8-96bc-13b79ff21e88-xtables-lock\") pod \"20b24e53-7a9b-4af8-96bc-13b79ff21e88\" (UID: \"20b24e53-7a9b-4af8-96bc-13b79ff21e88\") " Mar 14 00:52:47.234972 kubelet[2658]: I0314 00:52:47.233668 2658 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/20b24e53-7a9b-4af8-96bc-13b79ff21e88-host-proc-sys-net\") pod \"20b24e53-7a9b-4af8-96bc-13b79ff21e88\" (UID: \"20b24e53-7a9b-4af8-96bc-13b79ff21e88\") " Mar 14 00:52:47.235882 kubelet[2658]: I0314 
00:52:47.233711 2658 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/20b24e53-7a9b-4af8-96bc-13b79ff21e88-hubble-tls\") pod \"20b24e53-7a9b-4af8-96bc-13b79ff21e88\" (UID: \"20b24e53-7a9b-4af8-96bc-13b79ff21e88\") " Mar 14 00:52:47.235882 kubelet[2658]: I0314 00:52:47.233743 2658 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/20b24e53-7a9b-4af8-96bc-13b79ff21e88-cilium-run\") pod \"20b24e53-7a9b-4af8-96bc-13b79ff21e88\" (UID: \"20b24e53-7a9b-4af8-96bc-13b79ff21e88\") " Mar 14 00:52:47.235882 kubelet[2658]: I0314 00:52:47.233784 2658 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0eb78056-43fe-4df1-a0e1-de68ef72e1a0-cilium-config-path\") pod \"0eb78056-43fe-4df1-a0e1-de68ef72e1a0\" (UID: \"0eb78056-43fe-4df1-a0e1-de68ef72e1a0\") " Mar 14 00:52:47.235882 kubelet[2658]: I0314 00:52:47.233819 2658 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/20b24e53-7a9b-4af8-96bc-13b79ff21e88-lib-modules\") pod \"20b24e53-7a9b-4af8-96bc-13b79ff21e88\" (UID: \"20b24e53-7a9b-4af8-96bc-13b79ff21e88\") " Mar 14 00:52:47.235882 kubelet[2658]: I0314 00:52:47.233869 2658 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bb7wh\" (UniqueName: \"kubernetes.io/projected/20b24e53-7a9b-4af8-96bc-13b79ff21e88-kube-api-access-bb7wh\") pod \"20b24e53-7a9b-4af8-96bc-13b79ff21e88\" (UID: \"20b24e53-7a9b-4af8-96bc-13b79ff21e88\") " Mar 14 00:52:47.235882 kubelet[2658]: I0314 00:52:47.233925 2658 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/20b24e53-7a9b-4af8-96bc-13b79ff21e88-hostproc\") pod \"20b24e53-7a9b-4af8-96bc-13b79ff21e88\" 
(UID: \"20b24e53-7a9b-4af8-96bc-13b79ff21e88\") " Mar 14 00:52:47.236641 kubelet[2658]: I0314 00:52:47.233968 2658 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/20b24e53-7a9b-4af8-96bc-13b79ff21e88-bpf-maps\") pod \"20b24e53-7a9b-4af8-96bc-13b79ff21e88\" (UID: \"20b24e53-7a9b-4af8-96bc-13b79ff21e88\") " Mar 14 00:52:47.236641 kubelet[2658]: I0314 00:52:47.234012 2658 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wwcxk\" (UniqueName: \"kubernetes.io/projected/0eb78056-43fe-4df1-a0e1-de68ef72e1a0-kube-api-access-wwcxk\") pod \"0eb78056-43fe-4df1-a0e1-de68ef72e1a0\" (UID: \"0eb78056-43fe-4df1-a0e1-de68ef72e1a0\") " Mar 14 00:52:47.236641 kubelet[2658]: I0314 00:52:47.234052 2658 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/20b24e53-7a9b-4af8-96bc-13b79ff21e88-cni-path\") pod \"20b24e53-7a9b-4af8-96bc-13b79ff21e88\" (UID: \"20b24e53-7a9b-4af8-96bc-13b79ff21e88\") " Mar 14 00:52:47.236641 kubelet[2658]: I0314 00:52:47.234086 2658 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/20b24e53-7a9b-4af8-96bc-13b79ff21e88-etc-cni-netd\") pod \"20b24e53-7a9b-4af8-96bc-13b79ff21e88\" (UID: \"20b24e53-7a9b-4af8-96bc-13b79ff21e88\") " Mar 14 00:52:47.236641 kubelet[2658]: I0314 00:52:47.234409 2658 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/20b24e53-7a9b-4af8-96bc-13b79ff21e88-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "20b24e53-7a9b-4af8-96bc-13b79ff21e88" (UID: "20b24e53-7a9b-4af8-96bc-13b79ff21e88"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 14 00:52:47.237211 kubelet[2658]: I0314 00:52:47.235392 2658 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/20b24e53-7a9b-4af8-96bc-13b79ff21e88-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "20b24e53-7a9b-4af8-96bc-13b79ff21e88" (UID: "20b24e53-7a9b-4af8-96bc-13b79ff21e88"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 14 00:52:47.237211 kubelet[2658]: I0314 00:52:47.235494 2658 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/20b24e53-7a9b-4af8-96bc-13b79ff21e88-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "20b24e53-7a9b-4af8-96bc-13b79ff21e88" (UID: "20b24e53-7a9b-4af8-96bc-13b79ff21e88"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 14 00:52:47.240299 kubelet[2658]: I0314 00:52:47.240015 2658 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/20b24e53-7a9b-4af8-96bc-13b79ff21e88-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "20b24e53-7a9b-4af8-96bc-13b79ff21e88" (UID: "20b24e53-7a9b-4af8-96bc-13b79ff21e88"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 14 00:52:47.240299 kubelet[2658]: I0314 00:52:47.240118 2658 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/20b24e53-7a9b-4af8-96bc-13b79ff21e88-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "20b24e53-7a9b-4af8-96bc-13b79ff21e88" (UID: "20b24e53-7a9b-4af8-96bc-13b79ff21e88"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 14 00:52:47.246019 kubelet[2658]: I0314 00:52:47.245935 2658 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20b24e53-7a9b-4af8-96bc-13b79ff21e88-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "20b24e53-7a9b-4af8-96bc-13b79ff21e88" (UID: "20b24e53-7a9b-4af8-96bc-13b79ff21e88"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 14 00:52:47.247425 kubelet[2658]: I0314 00:52:47.247392 2658 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/20b24e53-7a9b-4af8-96bc-13b79ff21e88-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "20b24e53-7a9b-4af8-96bc-13b79ff21e88" (UID: "20b24e53-7a9b-4af8-96bc-13b79ff21e88"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 14 00:52:47.248446 kubelet[2658]: I0314 00:52:47.248300 2658 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b24e53-7a9b-4af8-96bc-13b79ff21e88-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "20b24e53-7a9b-4af8-96bc-13b79ff21e88" (UID: "20b24e53-7a9b-4af8-96bc-13b79ff21e88"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 14 00:52:47.248446 kubelet[2658]: I0314 00:52:47.248422 2658 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/20b24e53-7a9b-4af8-96bc-13b79ff21e88-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "20b24e53-7a9b-4af8-96bc-13b79ff21e88" (UID: "20b24e53-7a9b-4af8-96bc-13b79ff21e88"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 14 00:52:47.248907 kubelet[2658]: I0314 00:52:47.248720 2658 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/20b24e53-7a9b-4af8-96bc-13b79ff21e88-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "20b24e53-7a9b-4af8-96bc-13b79ff21e88" (UID: "20b24e53-7a9b-4af8-96bc-13b79ff21e88"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 14 00:52:47.250024 kubelet[2658]: I0314 00:52:47.249974 2658 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0eb78056-43fe-4df1-a0e1-de68ef72e1a0-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "0eb78056-43fe-4df1-a0e1-de68ef72e1a0" (UID: "0eb78056-43fe-4df1-a0e1-de68ef72e1a0"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 14 00:52:47.254291 kubelet[2658]: I0314 00:52:47.254254 2658 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/20b24e53-7a9b-4af8-96bc-13b79ff21e88-hostproc" (OuterVolumeSpecName: "hostproc") pod "20b24e53-7a9b-4af8-96bc-13b79ff21e88" (UID: "20b24e53-7a9b-4af8-96bc-13b79ff21e88"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 14 00:52:47.254672 kubelet[2658]: I0314 00:52:47.254413 2658 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/20b24e53-7a9b-4af8-96bc-13b79ff21e88-cni-path" (OuterVolumeSpecName: "cni-path") pod "20b24e53-7a9b-4af8-96bc-13b79ff21e88" (UID: "20b24e53-7a9b-4af8-96bc-13b79ff21e88"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 14 00:52:47.254895 kubelet[2658]: I0314 00:52:47.254866 2658 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b24e53-7a9b-4af8-96bc-13b79ff21e88-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "20b24e53-7a9b-4af8-96bc-13b79ff21e88" (UID: "20b24e53-7a9b-4af8-96bc-13b79ff21e88"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 14 00:52:47.255009 kubelet[2658]: I0314 00:52:47.254991 2658 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b24e53-7a9b-4af8-96bc-13b79ff21e88-kube-api-access-bb7wh" (OuterVolumeSpecName: "kube-api-access-bb7wh") pod "20b24e53-7a9b-4af8-96bc-13b79ff21e88" (UID: "20b24e53-7a9b-4af8-96bc-13b79ff21e88"). InnerVolumeSpecName "kube-api-access-bb7wh". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 14 00:52:47.263032 kubelet[2658]: I0314 00:52:47.263000 2658 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0eb78056-43fe-4df1-a0e1-de68ef72e1a0-kube-api-access-wwcxk" (OuterVolumeSpecName: "kube-api-access-wwcxk") pod "0eb78056-43fe-4df1-a0e1-de68ef72e1a0" (UID: "0eb78056-43fe-4df1-a0e1-de68ef72e1a0"). InnerVolumeSpecName "kube-api-access-wwcxk". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 14 00:52:47.334500 kubelet[2658]: I0314 00:52:47.334441 2658 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/20b24e53-7a9b-4af8-96bc-13b79ff21e88-cilium-config-path\") on node \"srv-avwyp.gb1.brightbox.com\" DevicePath \"\"" Mar 14 00:52:47.334986 kubelet[2658]: I0314 00:52:47.334755 2658 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/20b24e53-7a9b-4af8-96bc-13b79ff21e88-host-proc-sys-kernel\") on node \"srv-avwyp.gb1.brightbox.com\" DevicePath \"\"" Mar 14 00:52:47.334986 kubelet[2658]: I0314 00:52:47.334783 2658 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/20b24e53-7a9b-4af8-96bc-13b79ff21e88-cilium-cgroup\") on node \"srv-avwyp.gb1.brightbox.com\" DevicePath \"\"" Mar 14 00:52:47.334986 kubelet[2658]: I0314 00:52:47.334798 2658 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/20b24e53-7a9b-4af8-96bc-13b79ff21e88-clustermesh-secrets\") on node \"srv-avwyp.gb1.brightbox.com\" DevicePath \"\"" Mar 14 00:52:47.334986 kubelet[2658]: I0314 00:52:47.334813 2658 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/20b24e53-7a9b-4af8-96bc-13b79ff21e88-xtables-lock\") on node \"srv-avwyp.gb1.brightbox.com\" DevicePath \"\"" Mar 14 00:52:47.334986 kubelet[2658]: I0314 00:52:47.334827 2658 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/20b24e53-7a9b-4af8-96bc-13b79ff21e88-host-proc-sys-net\") on node \"srv-avwyp.gb1.brightbox.com\" DevicePath \"\"" Mar 14 00:52:47.334986 kubelet[2658]: I0314 00:52:47.334840 2658 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/20b24e53-7a9b-4af8-96bc-13b79ff21e88-hubble-tls\") on node \"srv-avwyp.gb1.brightbox.com\" DevicePath \"\"" Mar 14 00:52:47.334986 kubelet[2658]: I0314 00:52:47.334853 2658 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/20b24e53-7a9b-4af8-96bc-13b79ff21e88-cilium-run\") on node \"srv-avwyp.gb1.brightbox.com\" DevicePath \"\"" Mar 14 00:52:47.334986 kubelet[2658]: I0314 00:52:47.334868 2658 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0eb78056-43fe-4df1-a0e1-de68ef72e1a0-cilium-config-path\") on node \"srv-avwyp.gb1.brightbox.com\" DevicePath \"\"" Mar 14 00:52:47.335722 kubelet[2658]: I0314 00:52:47.334881 2658 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/20b24e53-7a9b-4af8-96bc-13b79ff21e88-lib-modules\") on node \"srv-avwyp.gb1.brightbox.com\" DevicePath \"\"" Mar 14 00:52:47.335722 kubelet[2658]: I0314 00:52:47.334894 2658 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-bb7wh\" (UniqueName: \"kubernetes.io/projected/20b24e53-7a9b-4af8-96bc-13b79ff21e88-kube-api-access-bb7wh\") on node \"srv-avwyp.gb1.brightbox.com\" DevicePath \"\"" Mar 14 00:52:47.335722 kubelet[2658]: I0314 00:52:47.334909 2658 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/20b24e53-7a9b-4af8-96bc-13b79ff21e88-hostproc\") on node \"srv-avwyp.gb1.brightbox.com\" DevicePath \"\"" Mar 14 00:52:47.335722 kubelet[2658]: I0314 00:52:47.334921 2658 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/20b24e53-7a9b-4af8-96bc-13b79ff21e88-bpf-maps\") on node \"srv-avwyp.gb1.brightbox.com\" DevicePath \"\"" Mar 14 00:52:47.335722 kubelet[2658]: I0314 00:52:47.334934 2658 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wwcxk\" 
(UniqueName: \"kubernetes.io/projected/0eb78056-43fe-4df1-a0e1-de68ef72e1a0-kube-api-access-wwcxk\") on node \"srv-avwyp.gb1.brightbox.com\" DevicePath \"\"" Mar 14 00:52:47.335722 kubelet[2658]: I0314 00:52:47.334947 2658 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/20b24e53-7a9b-4af8-96bc-13b79ff21e88-cni-path\") on node \"srv-avwyp.gb1.brightbox.com\" DevicePath \"\"" Mar 14 00:52:47.335722 kubelet[2658]: I0314 00:52:47.334963 2658 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/20b24e53-7a9b-4af8-96bc-13b79ff21e88-etc-cni-netd\") on node \"srv-avwyp.gb1.brightbox.com\" DevicePath \"\"" Mar 14 00:52:47.452733 systemd[1]: Removed slice kubepods-besteffort-pod0eb78056_43fe_4df1_a0e1_de68ef72e1a0.slice - libcontainer container kubepods-besteffort-pod0eb78056_43fe_4df1_a0e1_de68ef72e1a0.slice. Mar 14 00:52:47.460510 systemd[1]: Removed slice kubepods-burstable-pod20b24e53_7a9b_4af8_96bc_13b79ff21e88.slice - libcontainer container kubepods-burstable-pod20b24e53_7a9b_4af8_96bc_13b79ff21e88.slice. Mar 14 00:52:47.460785 systemd[1]: kubepods-burstable-pod20b24e53_7a9b_4af8_96bc_13b79ff21e88.slice: Consumed 7.916s CPU time. Mar 14 00:52:47.846514 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-65a5baeb48766ef4dc1aa79b8048cf6a9b7a9648962526e947b1629096403612-rootfs.mount: Deactivated successfully. Mar 14 00:52:47.846724 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cfc25fae69bc1db0dd5dc9657401040a8df6b9f8849460ffe729edb0747a3544-rootfs.mount: Deactivated successfully. Mar 14 00:52:47.846812 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-cfc25fae69bc1db0dd5dc9657401040a8df6b9f8849460ffe729edb0747a3544-shm.mount: Deactivated successfully. 
Mar 14 00:52:47.846892 systemd[1]: var-lib-kubelet-pods-0eb78056\x2d43fe\x2d4df1\x2da0e1\x2dde68ef72e1a0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwwcxk.mount: Deactivated successfully. Mar 14 00:52:47.846969 systemd[1]: var-lib-kubelet-pods-20b24e53\x2d7a9b\x2d4af8\x2d96bc\x2d13b79ff21e88-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbb7wh.mount: Deactivated successfully. Mar 14 00:52:47.847059 systemd[1]: var-lib-kubelet-pods-20b24e53\x2d7a9b\x2d4af8\x2d96bc\x2d13b79ff21e88-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Mar 14 00:52:47.847142 systemd[1]: var-lib-kubelet-pods-20b24e53\x2d7a9b\x2d4af8\x2d96bc\x2d13b79ff21e88-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Mar 14 00:52:48.686247 kubelet[2658]: I0314 00:52:48.686123 2658 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0eb78056-43fe-4df1-a0e1-de68ef72e1a0" path="/var/lib/kubelet/pods/0eb78056-43fe-4df1-a0e1-de68ef72e1a0/volumes" Mar 14 00:52:48.686944 kubelet[2658]: I0314 00:52:48.686816 2658 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b24e53-7a9b-4af8-96bc-13b79ff21e88" path="/var/lib/kubelet/pods/20b24e53-7a9b-4af8-96bc-13b79ff21e88/volumes" Mar 14 00:52:48.819360 sshd[4262]: pam_unix(sshd:session): session closed for user core Mar 14 00:52:48.827177 systemd[1]: sshd@24-10.244.101.86:22-20.161.92.111:59688.service: Deactivated successfully. Mar 14 00:52:48.830749 systemd[1]: session-23.scope: Deactivated successfully. Mar 14 00:52:48.832439 systemd-logind[1491]: Session 23 logged out. Waiting for processes to exit. Mar 14 00:52:48.834120 systemd-logind[1491]: Removed session 23. Mar 14 00:52:48.925659 systemd[1]: Started sshd@25-10.244.101.86:22-20.161.92.111:59698.service - OpenSSH per-connection server daemon (20.161.92.111:59698). 
Mar 14 00:52:49.496463 sshd[4424]: Accepted publickey for core from 20.161.92.111 port 59698 ssh2: RSA SHA256:G3DxPtudQCSC+zb3xt9jRLB1yvq/SeDG59+4Mc6l5RQ Mar 14 00:52:49.501824 sshd[4424]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:52:49.514089 systemd-logind[1491]: New session 24 of user core. Mar 14 00:52:49.524356 systemd[1]: Started session-24.scope - Session 24 of User core. Mar 14 00:52:50.313003 systemd[1]: Created slice kubepods-burstable-podb9685c1e_771e_4902_a1b0_fe76ddd359e0.slice - libcontainer container kubepods-burstable-podb9685c1e_771e_4902_a1b0_fe76ddd359e0.slice. Mar 14 00:52:50.349698 sshd[4424]: pam_unix(sshd:session): session closed for user core Mar 14 00:52:50.359725 systemd[1]: sshd@25-10.244.101.86:22-20.161.92.111:59698.service: Deactivated successfully. Mar 14 00:52:50.363316 systemd[1]: session-24.scope: Deactivated successfully. Mar 14 00:52:50.364494 systemd-logind[1491]: Session 24 logged out. Waiting for processes to exit. Mar 14 00:52:50.365945 systemd-logind[1491]: Removed session 24. 
Mar 14 00:52:50.457930 kubelet[2658]: I0314 00:52:50.457416 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b9685c1e-771e-4902-a1b0-fe76ddd359e0-hostproc\") pod \"cilium-rqsfc\" (UID: \"b9685c1e-771e-4902-a1b0-fe76ddd359e0\") " pod="kube-system/cilium-rqsfc" Mar 14 00:52:50.457930 kubelet[2658]: I0314 00:52:50.457461 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b9685c1e-771e-4902-a1b0-fe76ddd359e0-cni-path\") pod \"cilium-rqsfc\" (UID: \"b9685c1e-771e-4902-a1b0-fe76ddd359e0\") " pod="kube-system/cilium-rqsfc" Mar 14 00:52:50.457930 kubelet[2658]: I0314 00:52:50.457489 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b9685c1e-771e-4902-a1b0-fe76ddd359e0-xtables-lock\") pod \"cilium-rqsfc\" (UID: \"b9685c1e-771e-4902-a1b0-fe76ddd359e0\") " pod="kube-system/cilium-rqsfc" Mar 14 00:52:50.457930 kubelet[2658]: I0314 00:52:50.457520 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b9685c1e-771e-4902-a1b0-fe76ddd359e0-cilium-run\") pod \"cilium-rqsfc\" (UID: \"b9685c1e-771e-4902-a1b0-fe76ddd359e0\") " pod="kube-system/cilium-rqsfc" Mar 14 00:52:50.457930 kubelet[2658]: I0314 00:52:50.457548 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b9685c1e-771e-4902-a1b0-fe76ddd359e0-cilium-cgroup\") pod \"cilium-rqsfc\" (UID: \"b9685c1e-771e-4902-a1b0-fe76ddd359e0\") " pod="kube-system/cilium-rqsfc" Mar 14 00:52:50.457930 kubelet[2658]: I0314 00:52:50.457577 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/b9685c1e-771e-4902-a1b0-fe76ddd359e0-cilium-ipsec-secrets\") pod \"cilium-rqsfc\" (UID: \"b9685c1e-771e-4902-a1b0-fe76ddd359e0\") " pod="kube-system/cilium-rqsfc"
Mar 14 00:52:50.461741 kubelet[2658]: I0314 00:52:50.457599 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b9685c1e-771e-4902-a1b0-fe76ddd359e0-etc-cni-netd\") pod \"cilium-rqsfc\" (UID: \"b9685c1e-771e-4902-a1b0-fe76ddd359e0\") " pod="kube-system/cilium-rqsfc"
Mar 14 00:52:50.461741 kubelet[2658]: I0314 00:52:50.457622 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jqr5\" (UniqueName: \"kubernetes.io/projected/b9685c1e-771e-4902-a1b0-fe76ddd359e0-kube-api-access-6jqr5\") pod \"cilium-rqsfc\" (UID: \"b9685c1e-771e-4902-a1b0-fe76ddd359e0\") " pod="kube-system/cilium-rqsfc"
Mar 14 00:52:50.461741 kubelet[2658]: I0314 00:52:50.457654 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b9685c1e-771e-4902-a1b0-fe76ddd359e0-bpf-maps\") pod \"cilium-rqsfc\" (UID: \"b9685c1e-771e-4902-a1b0-fe76ddd359e0\") " pod="kube-system/cilium-rqsfc"
Mar 14 00:52:50.461741 kubelet[2658]: I0314 00:52:50.457682 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b9685c1e-771e-4902-a1b0-fe76ddd359e0-lib-modules\") pod \"cilium-rqsfc\" (UID: \"b9685c1e-771e-4902-a1b0-fe76ddd359e0\") " pod="kube-system/cilium-rqsfc"
Mar 14 00:52:50.461741 kubelet[2658]: I0314 00:52:50.457710 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b9685c1e-771e-4902-a1b0-fe76ddd359e0-clustermesh-secrets\") pod \"cilium-rqsfc\" (UID: \"b9685c1e-771e-4902-a1b0-fe76ddd359e0\") " pod="kube-system/cilium-rqsfc"
Mar 14 00:52:50.461741 kubelet[2658]: I0314 00:52:50.457730 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b9685c1e-771e-4902-a1b0-fe76ddd359e0-cilium-config-path\") pod \"cilium-rqsfc\" (UID: \"b9685c1e-771e-4902-a1b0-fe76ddd359e0\") " pod="kube-system/cilium-rqsfc"
Mar 14 00:52:50.459849 systemd[1]: Started sshd@26-10.244.101.86:22-20.161.92.111:32782.service - OpenSSH per-connection server daemon (20.161.92.111:32782).
Mar 14 00:52:50.462079 kubelet[2658]: I0314 00:52:50.457753 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b9685c1e-771e-4902-a1b0-fe76ddd359e0-host-proc-sys-net\") pod \"cilium-rqsfc\" (UID: \"b9685c1e-771e-4902-a1b0-fe76ddd359e0\") " pod="kube-system/cilium-rqsfc"
Mar 14 00:52:50.462079 kubelet[2658]: I0314 00:52:50.457776 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b9685c1e-771e-4902-a1b0-fe76ddd359e0-host-proc-sys-kernel\") pod \"cilium-rqsfc\" (UID: \"b9685c1e-771e-4902-a1b0-fe76ddd359e0\") " pod="kube-system/cilium-rqsfc"
Mar 14 00:52:50.462079 kubelet[2658]: I0314 00:52:50.457798 2658 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b9685c1e-771e-4902-a1b0-fe76ddd359e0-hubble-tls\") pod \"cilium-rqsfc\" (UID: \"b9685c1e-771e-4902-a1b0-fe76ddd359e0\") " pod="kube-system/cilium-rqsfc"
Mar 14 00:52:50.621139 containerd[1512]: time="2026-03-14T00:52:50.620930307Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rqsfc,Uid:b9685c1e-771e-4902-a1b0-fe76ddd359e0,Namespace:kube-system,Attempt:0,}"
Mar 14 00:52:50.650175 containerd[1512]: time="2026-03-14T00:52:50.649741379Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 14 00:52:50.650175 containerd[1512]: time="2026-03-14T00:52:50.649846649Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 14 00:52:50.650175 containerd[1512]: time="2026-03-14T00:52:50.649861563Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:52:50.650175 containerd[1512]: time="2026-03-14T00:52:50.649975794Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:52:50.676565 systemd[1]: Started cri-containerd-6b8dbb1d6575d8258cdae8d43524d1bd92435d839e52d282c2505faf295c5212.scope - libcontainer container 6b8dbb1d6575d8258cdae8d43524d1bd92435d839e52d282c2505faf295c5212.
Mar 14 00:52:50.709452 containerd[1512]: time="2026-03-14T00:52:50.709201601Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rqsfc,Uid:b9685c1e-771e-4902-a1b0-fe76ddd359e0,Namespace:kube-system,Attempt:0,} returns sandbox id \"6b8dbb1d6575d8258cdae8d43524d1bd92435d839e52d282c2505faf295c5212\""
Mar 14 00:52:50.716497 containerd[1512]: time="2026-03-14T00:52:50.716331322Z" level=info msg="CreateContainer within sandbox \"6b8dbb1d6575d8258cdae8d43524d1bd92435d839e52d282c2505faf295c5212\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Mar 14 00:52:50.740407 containerd[1512]: time="2026-03-14T00:52:50.740366027Z" level=info msg="CreateContainer within sandbox \"6b8dbb1d6575d8258cdae8d43524d1bd92435d839e52d282c2505faf295c5212\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"bad6468a328eb2de19ba6aa4356fef226835ac4dd9cb732aa1adf7f8135507cb\""
Mar 14 00:52:50.743218 containerd[1512]: time="2026-03-14T00:52:50.742368503Z" level=info msg="StartContainer for \"bad6468a328eb2de19ba6aa4356fef226835ac4dd9cb732aa1adf7f8135507cb\""
Mar 14 00:52:50.781437 systemd[1]: Started cri-containerd-bad6468a328eb2de19ba6aa4356fef226835ac4dd9cb732aa1adf7f8135507cb.scope - libcontainer container bad6468a328eb2de19ba6aa4356fef226835ac4dd9cb732aa1adf7f8135507cb.
Mar 14 00:52:50.817004 containerd[1512]: time="2026-03-14T00:52:50.816961824Z" level=info msg="StartContainer for \"bad6468a328eb2de19ba6aa4356fef226835ac4dd9cb732aa1adf7f8135507cb\" returns successfully"
Mar 14 00:52:50.842067 systemd[1]: cri-containerd-bad6468a328eb2de19ba6aa4356fef226835ac4dd9cb732aa1adf7f8135507cb.scope: Deactivated successfully.
Mar 14 00:52:50.872456 containerd[1512]: time="2026-03-14T00:52:50.872298933Z" level=info msg="shim disconnected" id=bad6468a328eb2de19ba6aa4356fef226835ac4dd9cb732aa1adf7f8135507cb namespace=k8s.io
Mar 14 00:52:50.872456 containerd[1512]: time="2026-03-14T00:52:50.872382750Z" level=warning msg="cleaning up after shim disconnected" id=bad6468a328eb2de19ba6aa4356fef226835ac4dd9cb732aa1adf7f8135507cb namespace=k8s.io
Mar 14 00:52:50.872456 containerd[1512]: time="2026-03-14T00:52:50.872391833Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 14 00:52:51.020230 sshd[4436]: Accepted publickey for core from 20.161.92.111 port 32782 ssh2: RSA SHA256:G3DxPtudQCSC+zb3xt9jRLB1yvq/SeDG59+4Mc6l5RQ
Mar 14 00:52:51.023525 sshd[4436]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:52:51.034710 systemd-logind[1491]: New session 25 of user core.
Mar 14 00:52:51.045370 systemd[1]: Started session-25.scope - Session 25 of User core.
Mar 14 00:52:51.176600 containerd[1512]: time="2026-03-14T00:52:51.176300573Z" level=info msg="CreateContainer within sandbox \"6b8dbb1d6575d8258cdae8d43524d1bd92435d839e52d282c2505faf295c5212\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Mar 14 00:52:51.184623 containerd[1512]: time="2026-03-14T00:52:51.184505693Z" level=info msg="CreateContainer within sandbox \"6b8dbb1d6575d8258cdae8d43524d1bd92435d839e52d282c2505faf295c5212\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"e3be97debff050220c2c9c683d797ae58cbc52aeae071eca93ec160e6231b0e6\""
Mar 14 00:52:51.186093 containerd[1512]: time="2026-03-14T00:52:51.185356238Z" level=info msg="StartContainer for \"e3be97debff050220c2c9c683d797ae58cbc52aeae071eca93ec160e6231b0e6\""
Mar 14 00:52:51.222367 systemd[1]: Started cri-containerd-e3be97debff050220c2c9c683d797ae58cbc52aeae071eca93ec160e6231b0e6.scope - libcontainer container e3be97debff050220c2c9c683d797ae58cbc52aeae071eca93ec160e6231b0e6.
Mar 14 00:52:51.251651 containerd[1512]: time="2026-03-14T00:52:51.251476751Z" level=info msg="StartContainer for \"e3be97debff050220c2c9c683d797ae58cbc52aeae071eca93ec160e6231b0e6\" returns successfully"
Mar 14 00:52:51.271298 systemd[1]: cri-containerd-e3be97debff050220c2c9c683d797ae58cbc52aeae071eca93ec160e6231b0e6.scope: Deactivated successfully.
Mar 14 00:52:51.305061 containerd[1512]: time="2026-03-14T00:52:51.304975230Z" level=info msg="shim disconnected" id=e3be97debff050220c2c9c683d797ae58cbc52aeae071eca93ec160e6231b0e6 namespace=k8s.io
Mar 14 00:52:51.305061 containerd[1512]: time="2026-03-14T00:52:51.305039877Z" level=warning msg="cleaning up after shim disconnected" id=e3be97debff050220c2c9c683d797ae58cbc52aeae071eca93ec160e6231b0e6 namespace=k8s.io
Mar 14 00:52:51.305061 containerd[1512]: time="2026-03-14T00:52:51.305049451Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 14 00:52:51.411230 sshd[4436]: pam_unix(sshd:session): session closed for user core
Mar 14 00:52:51.415328 systemd[1]: sshd@26-10.244.101.86:22-20.161.92.111:32782.service: Deactivated successfully.
Mar 14 00:52:51.418319 systemd[1]: session-25.scope: Deactivated successfully.
Mar 14 00:52:51.419901 systemd-logind[1491]: Session 25 logged out. Waiting for processes to exit.
Mar 14 00:52:51.421554 systemd-logind[1491]: Removed session 25.
Mar 14 00:52:51.520050 systemd[1]: Started sshd@27-10.244.101.86:22-20.161.92.111:32794.service - OpenSSH per-connection server daemon (20.161.92.111:32794).
Mar 14 00:52:51.787617 kubelet[2658]: E0314 00:52:51.787133 2658 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Mar 14 00:52:52.092456 sshd[4617]: Accepted publickey for core from 20.161.92.111 port 32794 ssh2: RSA SHA256:G3DxPtudQCSC+zb3xt9jRLB1yvq/SeDG59+4Mc6l5RQ
Mar 14 00:52:52.094266 sshd[4617]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:52:52.101177 systemd-logind[1491]: New session 26 of user core.
Mar 14 00:52:52.109444 systemd[1]: Started session-26.scope - Session 26 of User core.
Mar 14 00:52:52.159317 update_engine[1492]: I20260314 00:52:52.158902 1492 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Mar 14 00:52:52.160499 update_engine[1492]: I20260314 00:52:52.159552 1492 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Mar 14 00:52:52.160499 update_engine[1492]: I20260314 00:52:52.159901 1492 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Mar 14 00:52:52.160665 update_engine[1492]: E20260314 00:52:52.160522 1492 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Mar 14 00:52:52.160665 update_engine[1492]: I20260314 00:52:52.160614 1492 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Mar 14 00:52:52.180484 containerd[1512]: time="2026-03-14T00:52:52.180382399Z" level=info msg="CreateContainer within sandbox \"6b8dbb1d6575d8258cdae8d43524d1bd92435d839e52d282c2505faf295c5212\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Mar 14 00:52:52.204905 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1349347609.mount: Deactivated successfully.
Mar 14 00:52:52.209660 containerd[1512]: time="2026-03-14T00:52:52.209567269Z" level=info msg="CreateContainer within sandbox \"6b8dbb1d6575d8258cdae8d43524d1bd92435d839e52d282c2505faf295c5212\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"0b215dd1801e3ff876e895522fa27a88904538aa1995070342981ef995b2a20a\""
Mar 14 00:52:52.212034 containerd[1512]: time="2026-03-14T00:52:52.210583585Z" level=info msg="StartContainer for \"0b215dd1801e3ff876e895522fa27a88904538aa1995070342981ef995b2a20a\""
Mar 14 00:52:52.262503 systemd[1]: Started cri-containerd-0b215dd1801e3ff876e895522fa27a88904538aa1995070342981ef995b2a20a.scope - libcontainer container 0b215dd1801e3ff876e895522fa27a88904538aa1995070342981ef995b2a20a.
Mar 14 00:52:52.314772 containerd[1512]: time="2026-03-14T00:52:52.313906246Z" level=info msg="StartContainer for \"0b215dd1801e3ff876e895522fa27a88904538aa1995070342981ef995b2a20a\" returns successfully"
Mar 14 00:52:52.325799 systemd[1]: cri-containerd-0b215dd1801e3ff876e895522fa27a88904538aa1995070342981ef995b2a20a.scope: Deactivated successfully.
Mar 14 00:52:52.360608 containerd[1512]: time="2026-03-14T00:52:52.360223186Z" level=info msg="shim disconnected" id=0b215dd1801e3ff876e895522fa27a88904538aa1995070342981ef995b2a20a namespace=k8s.io
Mar 14 00:52:52.360608 containerd[1512]: time="2026-03-14T00:52:52.360292834Z" level=warning msg="cleaning up after shim disconnected" id=0b215dd1801e3ff876e895522fa27a88904538aa1995070342981ef995b2a20a namespace=k8s.io
Mar 14 00:52:52.360608 containerd[1512]: time="2026-03-14T00:52:52.360300942Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 14 00:52:52.573481 systemd[1]: run-containerd-runc-k8s.io-0b215dd1801e3ff876e895522fa27a88904538aa1995070342981ef995b2a20a-runc.MFPj7L.mount: Deactivated successfully.
Mar 14 00:52:52.575428 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0b215dd1801e3ff876e895522fa27a88904538aa1995070342981ef995b2a20a-rootfs.mount: Deactivated successfully.
Mar 14 00:52:53.202892 containerd[1512]: time="2026-03-14T00:52:53.201962305Z" level=info msg="CreateContainer within sandbox \"6b8dbb1d6575d8258cdae8d43524d1bd92435d839e52d282c2505faf295c5212\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Mar 14 00:52:53.221286 containerd[1512]: time="2026-03-14T00:52:53.220937989Z" level=info msg="CreateContainer within sandbox \"6b8dbb1d6575d8258cdae8d43524d1bd92435d839e52d282c2505faf295c5212\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"1a5a9761e7f42de0a1823a57ff2f64be5be13f2a6d092131d2eb54bbfd5a8f05\""
Mar 14 00:52:53.227239 containerd[1512]: time="2026-03-14T00:52:53.226458682Z" level=info msg="StartContainer for \"1a5a9761e7f42de0a1823a57ff2f64be5be13f2a6d092131d2eb54bbfd5a8f05\""
Mar 14 00:52:53.282404 systemd[1]: Started cri-containerd-1a5a9761e7f42de0a1823a57ff2f64be5be13f2a6d092131d2eb54bbfd5a8f05.scope - libcontainer container 1a5a9761e7f42de0a1823a57ff2f64be5be13f2a6d092131d2eb54bbfd5a8f05.
Mar 14 00:52:53.323456 systemd[1]: cri-containerd-1a5a9761e7f42de0a1823a57ff2f64be5be13f2a6d092131d2eb54bbfd5a8f05.scope: Deactivated successfully.
Mar 14 00:52:53.327843 containerd[1512]: time="2026-03-14T00:52:53.327339020Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb9685c1e_771e_4902_a1b0_fe76ddd359e0.slice/cri-containerd-1a5a9761e7f42de0a1823a57ff2f64be5be13f2a6d092131d2eb54bbfd5a8f05.scope/memory.events\": no such file or directory"
Mar 14 00:52:53.331467 containerd[1512]: time="2026-03-14T00:52:53.330499133Z" level=info msg="StartContainer for \"1a5a9761e7f42de0a1823a57ff2f64be5be13f2a6d092131d2eb54bbfd5a8f05\" returns successfully"
Mar 14 00:52:53.361887 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1a5a9761e7f42de0a1823a57ff2f64be5be13f2a6d092131d2eb54bbfd5a8f05-rootfs.mount: Deactivated successfully.
Mar 14 00:52:53.363662 containerd[1512]: time="2026-03-14T00:52:53.363540561Z" level=info msg="shim disconnected" id=1a5a9761e7f42de0a1823a57ff2f64be5be13f2a6d092131d2eb54bbfd5a8f05 namespace=k8s.io
Mar 14 00:52:53.363807 containerd[1512]: time="2026-03-14T00:52:53.363778587Z" level=warning msg="cleaning up after shim disconnected" id=1a5a9761e7f42de0a1823a57ff2f64be5be13f2a6d092131d2eb54bbfd5a8f05 namespace=k8s.io
Mar 14 00:52:53.363807 containerd[1512]: time="2026-03-14T00:52:53.363804931Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 14 00:52:53.380168 containerd[1512]: time="2026-03-14T00:52:53.380090556Z" level=warning msg="cleanup warnings time=\"2026-03-14T00:52:53Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Mar 14 00:52:53.683507 kubelet[2658]: E0314 00:52:53.682560 2658 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-66bc5c9577-xs94q" podUID="9c2ffd1f-3348-47d0-95c8-11e157021e69"
Mar 14 00:52:54.211295 containerd[1512]: time="2026-03-14T00:52:54.211092583Z" level=info msg="CreateContainer within sandbox \"6b8dbb1d6575d8258cdae8d43524d1bd92435d839e52d282c2505faf295c5212\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 14 00:52:54.228257 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount250512575.mount: Deactivated successfully.
Mar 14 00:52:54.241614 containerd[1512]: time="2026-03-14T00:52:54.240642124Z" level=info msg="CreateContainer within sandbox \"6b8dbb1d6575d8258cdae8d43524d1bd92435d839e52d282c2505faf295c5212\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"c20f675f7df1c8cf8d9626617ccd15d732039d6a688e6007d0cf48b69a6ca93d\""
Mar 14 00:52:54.250665 containerd[1512]: time="2026-03-14T00:52:54.250623304Z" level=info msg="StartContainer for \"c20f675f7df1c8cf8d9626617ccd15d732039d6a688e6007d0cf48b69a6ca93d\""
Mar 14 00:52:54.297703 systemd[1]: Started cri-containerd-c20f675f7df1c8cf8d9626617ccd15d732039d6a688e6007d0cf48b69a6ca93d.scope - libcontainer container c20f675f7df1c8cf8d9626617ccd15d732039d6a688e6007d0cf48b69a6ca93d.
Mar 14 00:52:54.337259 containerd[1512]: time="2026-03-14T00:52:54.337154541Z" level=info msg="StartContainer for \"c20f675f7df1c8cf8d9626617ccd15d732039d6a688e6007d0cf48b69a6ca93d\" returns successfully"
Mar 14 00:52:54.808703 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Mar 14 00:52:55.232221 systemd[1]: run-containerd-runc-k8s.io-c20f675f7df1c8cf8d9626617ccd15d732039d6a688e6007d0cf48b69a6ca93d-runc.teoBrA.mount: Deactivated successfully.
Mar 14 00:52:55.239574 kubelet[2658]: I0314 00:52:55.239321 2658 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-rqsfc" podStartSLOduration=5.23913779 podStartE2EDuration="5.23913779s" podCreationTimestamp="2026-03-14 00:52:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:52:55.236142377 +0000 UTC m=+118.707387773" watchObservedRunningTime="2026-03-14 00:52:55.23913779 +0000 UTC m=+118.710383200"
Mar 14 00:52:55.685402 kubelet[2658]: E0314 00:52:55.682591 2658 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-66bc5c9577-xs94q" podUID="9c2ffd1f-3348-47d0-95c8-11e157021e69"
Mar 14 00:52:56.706371 containerd[1512]: time="2026-03-14T00:52:56.706161221Z" level=info msg="StopPodSandbox for \"65a5baeb48766ef4dc1aa79b8048cf6a9b7a9648962526e947b1629096403612\""
Mar 14 00:52:56.706371 containerd[1512]: time="2026-03-14T00:52:56.706285287Z" level=info msg="TearDown network for sandbox \"65a5baeb48766ef4dc1aa79b8048cf6a9b7a9648962526e947b1629096403612\" successfully"
Mar 14 00:52:56.706371 containerd[1512]: time="2026-03-14T00:52:56.706297703Z" level=info msg="StopPodSandbox for \"65a5baeb48766ef4dc1aa79b8048cf6a9b7a9648962526e947b1629096403612\" returns successfully"
Mar 14 00:52:56.709644 containerd[1512]: time="2026-03-14T00:52:56.707814307Z" level=info msg="RemovePodSandbox for \"65a5baeb48766ef4dc1aa79b8048cf6a9b7a9648962526e947b1629096403612\""
Mar 14 00:52:56.709644 containerd[1512]: time="2026-03-14T00:52:56.707860558Z" level=info msg="Forcibly stopping sandbox \"65a5baeb48766ef4dc1aa79b8048cf6a9b7a9648962526e947b1629096403612\""
Mar 14 00:52:56.709644 containerd[1512]: time="2026-03-14T00:52:56.707917022Z" level=info msg="TearDown network for sandbox \"65a5baeb48766ef4dc1aa79b8048cf6a9b7a9648962526e947b1629096403612\" successfully"
Mar 14 00:52:56.712577 containerd[1512]: time="2026-03-14T00:52:56.712430625Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"65a5baeb48766ef4dc1aa79b8048cf6a9b7a9648962526e947b1629096403612\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 14 00:52:56.712577 containerd[1512]: time="2026-03-14T00:52:56.712500337Z" level=info msg="RemovePodSandbox \"65a5baeb48766ef4dc1aa79b8048cf6a9b7a9648962526e947b1629096403612\" returns successfully"
Mar 14 00:52:56.721863 containerd[1512]: time="2026-03-14T00:52:56.719979047Z" level=info msg="StopPodSandbox for \"cfc25fae69bc1db0dd5dc9657401040a8df6b9f8849460ffe729edb0747a3544\""
Mar 14 00:52:56.721863 containerd[1512]: time="2026-03-14T00:52:56.720063402Z" level=info msg="TearDown network for sandbox \"cfc25fae69bc1db0dd5dc9657401040a8df6b9f8849460ffe729edb0747a3544\" successfully"
Mar 14 00:52:56.721863 containerd[1512]: time="2026-03-14T00:52:56.720084552Z" level=info msg="StopPodSandbox for \"cfc25fae69bc1db0dd5dc9657401040a8df6b9f8849460ffe729edb0747a3544\" returns successfully"
Mar 14 00:52:56.721863 containerd[1512]: time="2026-03-14T00:52:56.720772925Z" level=info msg="RemovePodSandbox for \"cfc25fae69bc1db0dd5dc9657401040a8df6b9f8849460ffe729edb0747a3544\""
Mar 14 00:52:56.721863 containerd[1512]: time="2026-03-14T00:52:56.720810741Z" level=info msg="Forcibly stopping sandbox \"cfc25fae69bc1db0dd5dc9657401040a8df6b9f8849460ffe729edb0747a3544\""
Mar 14 00:52:56.721863 containerd[1512]: time="2026-03-14T00:52:56.720866728Z" level=info msg="TearDown network for sandbox \"cfc25fae69bc1db0dd5dc9657401040a8df6b9f8849460ffe729edb0747a3544\" successfully"
Mar 14 00:52:56.725761 containerd[1512]: time="2026-03-14T00:52:56.725722296Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cfc25fae69bc1db0dd5dc9657401040a8df6b9f8849460ffe729edb0747a3544\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 14 00:52:56.726332 containerd[1512]: time="2026-03-14T00:52:56.726310043Z" level=info msg="RemovePodSandbox \"cfc25fae69bc1db0dd5dc9657401040a8df6b9f8849460ffe729edb0747a3544\" returns successfully"
Mar 14 00:52:58.341320 systemd-networkd[1439]: lxc_health: Link UP
Mar 14 00:52:58.387447 systemd-networkd[1439]: lxc_health: Gained carrier
Mar 14 00:52:59.135666 systemd[1]: run-containerd-runc-k8s.io-c20f675f7df1c8cf8d9626617ccd15d732039d6a688e6007d0cf48b69a6ca93d-runc.meJk4J.mount: Deactivated successfully.
Mar 14 00:53:00.365556 systemd-networkd[1439]: lxc_health: Gained IPv6LL
Mar 14 00:53:02.166987 update_engine[1492]: I20260314 00:53:02.166738 1492 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Mar 14 00:53:02.169200 update_engine[1492]: I20260314 00:53:02.167977 1492 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Mar 14 00:53:02.169200 update_engine[1492]: I20260314 00:53:02.168756 1492 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Mar 14 00:53:02.169467 update_engine[1492]: E20260314 00:53:02.169420 1492 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Mar 14 00:53:02.169567 update_engine[1492]: I20260314 00:53:02.169538 1492 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Mar 14 00:53:02.169603 update_engine[1492]: I20260314 00:53:02.169576 1492 omaha_request_action.cc:617] Omaha request response:
Mar 14 00:53:02.171196 update_engine[1492]: E20260314 00:53:02.169809 1492 omaha_request_action.cc:636] Omaha request network transfer failed.
Mar 14 00:53:02.171196 update_engine[1492]: I20260314 00:53:02.170107 1492 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Mar 14 00:53:02.171196 update_engine[1492]: I20260314 00:53:02.170135 1492 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Mar 14 00:53:02.171196 update_engine[1492]: I20260314 00:53:02.170151 1492 update_attempter.cc:306] Processing Done.
Mar 14 00:53:02.171196 update_engine[1492]: E20260314 00:53:02.170244 1492 update_attempter.cc:619] Update failed.
Mar 14 00:53:02.174616 update_engine[1492]: I20260314 00:53:02.174546 1492 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Mar 14 00:53:02.174693 update_engine[1492]: I20260314 00:53:02.174610 1492 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Mar 14 00:53:02.174693 update_engine[1492]: I20260314 00:53:02.174633 1492 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Mar 14 00:53:02.175153 update_engine[1492]: I20260314 00:53:02.174988 1492 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Mar 14 00:53:02.175153 update_engine[1492]: I20260314 00:53:02.175089 1492 omaha_request_action.cc:271] Posting an Omaha request to disabled
Mar 14 00:53:02.175153 update_engine[1492]: I20260314 00:53:02.175120 1492 omaha_request_action.cc:272] Request:
Mar 14 00:53:02.175153 update_engine[1492]:
Mar 14 00:53:02.175153 update_engine[1492]:
Mar 14 00:53:02.175153 update_engine[1492]:
Mar 14 00:53:02.175153 update_engine[1492]:
Mar 14 00:53:02.175153 update_engine[1492]:
Mar 14 00:53:02.175153 update_engine[1492]:
Mar 14 00:53:02.175453 update_engine[1492]: I20260314 00:53:02.175163 1492 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Mar 14 00:53:02.177237 update_engine[1492]: I20260314 00:53:02.175661 1492 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Mar 14 00:53:02.177237 update_engine[1492]: I20260314 00:53:02.176048 1492 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Mar 14 00:53:02.181121 locksmithd[1523]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Mar 14 00:53:02.181554 update_engine[1492]: E20260314 00:53:02.181228 1492 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Mar 14 00:53:02.181554 update_engine[1492]: I20260314 00:53:02.181295 1492 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Mar 14 00:53:02.181554 update_engine[1492]: I20260314 00:53:02.181510 1492 omaha_request_action.cc:617] Omaha request response:
Mar 14 00:53:02.181554 update_engine[1492]: I20260314 00:53:02.181531 1492 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Mar 14 00:53:02.181554 update_engine[1492]: I20260314 00:53:02.181540 1492 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Mar 14 00:53:02.181554 update_engine[1492]: I20260314 00:53:02.181545 1492 update_attempter.cc:306] Processing Done.
Mar 14 00:53:02.181554 update_engine[1492]: I20260314 00:53:02.181555 1492 update_attempter.cc:310] Error event sent.
Mar 14 00:53:02.181827 update_engine[1492]: I20260314 00:53:02.181573 1492 update_check_scheduler.cc:74] Next update check in 46m48s
Mar 14 00:53:02.182452 locksmithd[1523]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Mar 14 00:53:03.471517 systemd[1]: run-containerd-runc-k8s.io-c20f675f7df1c8cf8d9626617ccd15d732039d6a688e6007d0cf48b69a6ca93d-runc.1FhZ2U.mount: Deactivated successfully.
Mar 14 00:53:03.634124 sshd[4617]: pam_unix(sshd:session): session closed for user core
Mar 14 00:53:03.654702 systemd[1]: sshd@27-10.244.101.86:22-20.161.92.111:32794.service: Deactivated successfully.
Mar 14 00:53:03.659458 systemd[1]: session-26.scope: Deactivated successfully.
Mar 14 00:53:03.661679 systemd-logind[1491]: Session 26 logged out. Waiting for processes to exit.
Mar 14 00:53:03.663944 systemd-logind[1491]: Removed session 26.