Mar 4 01:36:10.031312 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Mar 3 22:42:33 -00 2026
Mar 4 01:36:10.031348 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=cfbb17c272ffeca64391861cc763ec4868ca597850b31cbd6ed67c590a72edc7
Mar 4 01:36:10.031362 kernel: BIOS-provided physical RAM map:
Mar 4 01:36:10.033912 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Mar 4 01:36:10.033924 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Mar 4 01:36:10.033934 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Mar 4 01:36:10.033946 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable
Mar 4 01:36:10.033956 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved
Mar 4 01:36:10.033966 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Mar 4 01:36:10.033977 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Mar 4 01:36:10.033987 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Mar 4 01:36:10.034004 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Mar 4 01:36:10.034020 kernel: NX (Execute Disable) protection: active
Mar 4 01:36:10.034031 kernel: APIC: Static calls initialized
Mar 4 01:36:10.034043 kernel: SMBIOS 2.8 present.
Mar 4 01:36:10.034055 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014
Mar 4 01:36:10.034066 kernel: Hypervisor detected: KVM
Mar 4 01:36:10.034090 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 4 01:36:10.034102 kernel: kvm-clock: using sched offset of 4473595841 cycles
Mar 4 01:36:10.034113 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 4 01:36:10.034125 kernel: tsc: Detected 2499.998 MHz processor
Mar 4 01:36:10.034144 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 4 01:36:10.034156 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 4 01:36:10.034167 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000
Mar 4 01:36:10.034178 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Mar 4 01:36:10.034190 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 4 01:36:10.034206 kernel: Using GB pages for direct mapping
Mar 4 01:36:10.034218 kernel: ACPI: Early table checksum verification disabled
Mar 4 01:36:10.034241 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Mar 4 01:36:10.034253 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 4 01:36:10.034264 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 4 01:36:10.034276 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 4 01:36:10.034287 kernel: ACPI: FACS 0x000000007FFDFD40 000040
Mar 4 01:36:10.034298 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 4 01:36:10.034310 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 4 01:36:10.034327 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 4 01:36:10.034339 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001)
Mar 4 01:36:10.034350 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480]
Mar 4 01:36:10.034374 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c]
Mar 4 01:36:10.034390 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f]
Mar 4 01:36:10.034409 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570]
Mar 4 01:36:10.034422 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740]
Mar 4 01:36:10.034438 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c]
Mar 4 01:36:10.034450 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4]
Mar 4 01:36:10.034462 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Mar 4 01:36:10.034474 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Mar 4 01:36:10.034486 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0
Mar 4 01:36:10.034498 kernel: SRAT: PXM 0 -> APIC 0x03 -> Node 0
Mar 4 01:36:10.034509 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0
Mar 4 01:36:10.034526 kernel: SRAT: PXM 0 -> APIC 0x05 -> Node 0
Mar 4 01:36:10.034538 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0
Mar 4 01:36:10.034549 kernel: SRAT: PXM 0 -> APIC 0x07 -> Node 0
Mar 4 01:36:10.034561 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0
Mar 4 01:36:10.034573 kernel: SRAT: PXM 0 -> APIC 0x09 -> Node 0
Mar 4 01:36:10.034585 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0
Mar 4 01:36:10.034596 kernel: SRAT: PXM 0 -> APIC 0x0b -> Node 0
Mar 4 01:36:10.034608 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0
Mar 4 01:36:10.034620 kernel: SRAT: PXM 0 -> APIC 0x0d -> Node 0
Mar 4 01:36:10.034636 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0
Mar 4 01:36:10.034653 kernel: SRAT: PXM 0 -> APIC 0x0f -> Node 0
Mar 4 01:36:10.034665 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Mar 4 01:36:10.034677 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Mar 4 01:36:10.034689 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug
Mar 4 01:36:10.034701 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00000000-0x7ffdbfff]
Mar 4 01:36:10.034713 kernel: NODE_DATA(0) allocated [mem 0x7ffd6000-0x7ffdbfff]
Mar 4 01:36:10.034725 kernel: Zone ranges:
Mar 4 01:36:10.034737 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 4 01:36:10.034749 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff]
Mar 4 01:36:10.034769 kernel: Normal empty
Mar 4 01:36:10.034781 kernel: Movable zone start for each node
Mar 4 01:36:10.034793 kernel: Early memory node ranges
Mar 4 01:36:10.034805 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Mar 4 01:36:10.034817 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff]
Mar 4 01:36:10.034832 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff]
Mar 4 01:36:10.034844 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 4 01:36:10.034855 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Mar 4 01:36:10.034867 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges
Mar 4 01:36:10.034879 kernel: ACPI: PM-Timer IO Port: 0x608
Mar 4 01:36:10.034896 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 4 01:36:10.034908 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Mar 4 01:36:10.034919 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Mar 4 01:36:10.034931 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 4 01:36:10.034943 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 4 01:36:10.034955 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 4 01:36:10.034967 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 4 01:36:10.034979 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 4 01:36:10.034990 kernel: TSC deadline timer available
Mar 4 01:36:10.035007 kernel: smpboot: Allowing 16 CPUs, 14 hotplug CPUs
Mar 4 01:36:10.035019 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Mar 4 01:36:10.035031 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Mar 4 01:36:10.035043 kernel: Booting paravirtualized kernel on KVM
Mar 4 01:36:10.035055 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 4 01:36:10.035067 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1
Mar 4 01:36:10.035079 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u262144
Mar 4 01:36:10.035091 kernel: pcpu-alloc: s196328 r8192 d28952 u262144 alloc=1*2097152
Mar 4 01:36:10.035102 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Mar 4 01:36:10.035119 kernel: kvm-guest: PV spinlocks enabled
Mar 4 01:36:10.035131 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Mar 4 01:36:10.035145 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=cfbb17c272ffeca64391861cc763ec4868ca597850b31cbd6ed67c590a72edc7
Mar 4 01:36:10.035157 kernel: random: crng init done
Mar 4 01:36:10.035169 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 4 01:36:10.035181 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Mar 4 01:36:10.035192 kernel: Fallback order for Node 0: 0
Mar 4 01:36:10.035204 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515804
Mar 4 01:36:10.035232 kernel: Policy zone: DMA32
Mar 4 01:36:10.035246 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 4 01:36:10.035258 kernel: software IO TLB: area num 16.
Mar 4 01:36:10.035270 kernel: Memory: 1901588K/2096616K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 194768K reserved, 0K cma-reserved)
Mar 4 01:36:10.035282 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Mar 4 01:36:10.035294 kernel: Kernel/User page tables isolation: enabled
Mar 4 01:36:10.035306 kernel: ftrace: allocating 37996 entries in 149 pages
Mar 4 01:36:10.035318 kernel: ftrace: allocated 149 pages with 4 groups
Mar 4 01:36:10.035330 kernel: Dynamic Preempt: voluntary
Mar 4 01:36:10.035348 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 4 01:36:10.035361 kernel: rcu: RCU event tracing is enabled.
Mar 4 01:36:10.036155 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Mar 4 01:36:10.036168 kernel: Trampoline variant of Tasks RCU enabled.
Mar 4 01:36:10.036181 kernel: Rude variant of Tasks RCU enabled.
Mar 4 01:36:10.036208 kernel: Tracing variant of Tasks RCU enabled.
Mar 4 01:36:10.036245 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 4 01:36:10.036258 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Mar 4 01:36:10.036271 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16
Mar 4 01:36:10.036283 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 4 01:36:10.036295 kernel: Console: colour VGA+ 80x25
Mar 4 01:36:10.036308 kernel: printk: console [tty0] enabled
Mar 4 01:36:10.036326 kernel: printk: console [ttyS0] enabled
Mar 4 01:36:10.036339 kernel: ACPI: Core revision 20230628
Mar 4 01:36:10.036351 kernel: APIC: Switch to symmetric I/O mode setup
Mar 4 01:36:10.036432 kernel: x2apic enabled
Mar 4 01:36:10.036450 kernel: APIC: Switched APIC routing to: physical x2apic
Mar 4 01:36:10.036471 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Mar 4 01:36:10.036484 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998)
Mar 4 01:36:10.036497 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Mar 4 01:36:10.036509 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Mar 4 01:36:10.036522 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Mar 4 01:36:10.036534 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 4 01:36:10.036547 kernel: Spectre V2 : Mitigation: Retpolines
Mar 4 01:36:10.036559 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Mar 4 01:36:10.036572 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Mar 4 01:36:10.036584 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Mar 4 01:36:10.036602 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Mar 4 01:36:10.036614 kernel: MDS: Mitigation: Clear CPU buffers
Mar 4 01:36:10.036626 kernel: MMIO Stale Data: Unknown: No mitigations
Mar 4 01:36:10.036639 kernel: SRBDS: Unknown: Dependent on hypervisor status
Mar 4 01:36:10.036651 kernel: active return thunk: its_return_thunk
Mar 4 01:36:10.036663 kernel: ITS: Mitigation: Aligned branch/return thunks
Mar 4 01:36:10.036676 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 4 01:36:10.036689 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 4 01:36:10.036701 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 4 01:36:10.036714 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 4 01:36:10.036726 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Mar 4 01:36:10.036744 kernel: Freeing SMP alternatives memory: 32K
Mar 4 01:36:10.036756 kernel: pid_max: default: 32768 minimum: 301
Mar 4 01:36:10.036769 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 4 01:36:10.036781 kernel: landlock: Up and running.
Mar 4 01:36:10.036794 kernel: SELinux: Initializing.
Mar 4 01:36:10.036806 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Mar 4 01:36:10.036819 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Mar 4 01:36:10.036831 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9)
Mar 4 01:36:10.036844 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Mar 4 01:36:10.036857 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Mar 4 01:36:10.036875 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Mar 4 01:36:10.036888 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only.
Mar 4 01:36:10.036900 kernel: signal: max sigframe size: 1776
Mar 4 01:36:10.036913 kernel: rcu: Hierarchical SRCU implementation.
Mar 4 01:36:10.036926 kernel: rcu: Max phase no-delay instances is 400.
Mar 4 01:36:10.036938 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Mar 4 01:36:10.036951 kernel: smp: Bringing up secondary CPUs ...
Mar 4 01:36:10.036968 kernel: smpboot: x86: Booting SMP configuration:
Mar 4 01:36:10.036980 kernel: .... node #0, CPUs: #1
Mar 4 01:36:10.036998 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1
Mar 4 01:36:10.037011 kernel: smp: Brought up 1 node, 2 CPUs
Mar 4 01:36:10.037023 kernel: smpboot: Max logical packages: 16
Mar 4 01:36:10.037036 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS)
Mar 4 01:36:10.037048 kernel: devtmpfs: initialized
Mar 4 01:36:10.037061 kernel: x86/mm: Memory block size: 128MB
Mar 4 01:36:10.037073 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 4 01:36:10.037086 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Mar 4 01:36:10.037098 kernel: pinctrl core: initialized pinctrl subsystem
Mar 4 01:36:10.037111 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 4 01:36:10.037129 kernel: audit: initializing netlink subsys (disabled)
Mar 4 01:36:10.037142 kernel: audit: type=2000 audit(1772588168.176:1): state=initialized audit_enabled=0 res=1
Mar 4 01:36:10.037154 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 4 01:36:10.037166 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 4 01:36:10.037179 kernel: cpuidle: using governor menu
Mar 4 01:36:10.037191 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 4 01:36:10.037204 kernel: dca service started, version 1.12.1
Mar 4 01:36:10.037217 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Mar 4 01:36:10.037248 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Mar 4 01:36:10.037261 kernel: PCI: Using configuration type 1 for base access
Mar 4 01:36:10.037274 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 4 01:36:10.037286 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 4 01:36:10.037299 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Mar 4 01:36:10.037312 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 4 01:36:10.037324 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Mar 4 01:36:10.037337 kernel: ACPI: Added _OSI(Module Device)
Mar 4 01:36:10.037349 kernel: ACPI: Added _OSI(Processor Device)
Mar 4 01:36:10.037388 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 4 01:36:10.037402 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 4 01:36:10.037415 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Mar 4 01:36:10.037432 kernel: ACPI: Interpreter enabled
Mar 4 01:36:10.037445 kernel: ACPI: PM: (supports S0 S5)
Mar 4 01:36:10.037457 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 4 01:36:10.037470 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 4 01:36:10.037482 kernel: PCI: Using E820 reservations for host bridge windows
Mar 4 01:36:10.037495 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Mar 4 01:36:10.037507 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 4 01:36:10.037788 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 4 01:36:10.037980 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Mar 4 01:36:10.038155 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Mar 4 01:36:10.038174 kernel: PCI host bridge to bus 0000:00
Mar 4 01:36:10.040294 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 4 01:36:10.040502 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 4 01:36:10.040687 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 4 01:36:10.040856 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Mar 4 01:36:10.041025 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Mar 4 01:36:10.041186 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window]
Mar 4 01:36:10.041361 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 4 01:36:10.041598 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Mar 4 01:36:10.041800 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000
Mar 4 01:36:10.042004 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfa000000-0xfbffffff pref]
Mar 4 01:36:10.042195 kernel: pci 0000:00:01.0: reg 0x14: [mem 0xfea50000-0xfea50fff]
Mar 4 01:36:10.042742 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea40000-0xfea4ffff pref]
Mar 4 01:36:10.042933 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Mar 4 01:36:10.043143 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Mar 4 01:36:10.043339 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea51000-0xfea51fff]
Mar 4 01:36:10.044781 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Mar 4 01:36:10.044967 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea52000-0xfea52fff]
Mar 4 01:36:10.045165 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Mar 4 01:36:10.045389 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea53000-0xfea53fff]
Mar 4 01:36:10.045597 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Mar 4 01:36:10.045779 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea54000-0xfea54fff]
Mar 4 01:36:10.046076 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Mar 4 01:36:10.046282 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea55000-0xfea55fff]
Mar 4 01:36:10.047554 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Mar 4 01:36:10.047741 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea56000-0xfea56fff]
Mar 4 01:36:10.047929 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Mar 4 01:36:10.048110 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea57000-0xfea57fff]
Mar 4 01:36:10.048320 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Mar 4 01:36:10.050555 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea58000-0xfea58fff]
Mar 4 01:36:10.050753 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Mar 4 01:36:10.050934 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0c0-0xc0df]
Mar 4 01:36:10.051118 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfea59000-0xfea59fff]
Mar 4 01:36:10.051309 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref]
Mar 4 01:36:10.051617 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfea00000-0xfea3ffff pref]
Mar 4 01:36:10.051821 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Mar 4 01:36:10.051997 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Mar 4 01:36:10.052172 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfea5a000-0xfea5afff]
Mar 4 01:36:10.052361 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfd004000-0xfd007fff 64bit pref]
Mar 4 01:36:10.053599 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Mar 4 01:36:10.053797 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Mar 4 01:36:10.053999 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Mar 4 01:36:10.054214 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0e0-0xc0ff]
Mar 4 01:36:10.054443 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea5b000-0xfea5bfff]
Mar 4 01:36:10.054629 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Mar 4 01:36:10.054804 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Mar 4 01:36:10.055002 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400
Mar 4 01:36:10.055183 kernel: pci 0000:01:00.0: reg 0x10: [mem 0xfda00000-0xfda000ff 64bit]
Mar 4 01:36:10.058458 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Mar 4 01:36:10.058650 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Mar 4 01:36:10.058828 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Mar 4 01:36:10.059017 kernel: pci_bus 0000:02: extended config space not accessible
Mar 4 01:36:10.059233 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000
Mar 4 01:36:10.059478 kernel: pci 0000:02:01.0: reg 0x10: [mem 0xfd800000-0xfd80000f]
Mar 4 01:36:10.059678 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Mar 4 01:36:10.059859 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Mar 4 01:36:10.060048 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330
Mar 4 01:36:10.060240 kernel: pci 0000:03:00.0: reg 0x10: [mem 0xfe800000-0xfe803fff 64bit]
Mar 4 01:36:10.062043 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Mar 4 01:36:10.062250 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Mar 4 01:36:10.062461 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Mar 4 01:36:10.062677 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00
Mar 4 01:36:10.062877 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref]
Mar 4 01:36:10.063060 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Mar 4 01:36:10.063264 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Mar 4 01:36:10.063472 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Mar 4 01:36:10.063671 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Mar 4 01:36:10.063872 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Mar 4 01:36:10.064069 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Mar 4 01:36:10.064285 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Mar 4 01:36:10.064526 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Mar 4 01:36:10.064727 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Mar 4 01:36:10.064922 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Mar 4 01:36:10.065124 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Mar 4 01:36:10.065324 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Mar 4 01:36:10.065579 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Mar 4 01:36:10.065754 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Mar 4 01:36:10.065937 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Mar 4 01:36:10.066112 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Mar 4 01:36:10.066302 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Mar 4 01:36:10.066496 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Mar 4 01:36:10.066516 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 4 01:36:10.066530 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 4 01:36:10.066543 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 4 01:36:10.066556 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 4 01:36:10.066569 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Mar 4 01:36:10.066590 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Mar 4 01:36:10.066603 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Mar 4 01:36:10.066615 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Mar 4 01:36:10.066628 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Mar 4 01:36:10.066640 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Mar 4 01:36:10.066653 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Mar 4 01:36:10.066665 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Mar 4 01:36:10.066678 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Mar 4 01:36:10.066690 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Mar 4 01:36:10.066708 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Mar 4 01:36:10.066721 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Mar 4 01:36:10.066734 kernel: iommu: Default domain type: Translated
Mar 4 01:36:10.066746 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 4 01:36:10.066759 kernel: PCI: Using ACPI for IRQ routing
Mar 4 01:36:10.066779 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 4 01:36:10.066791 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Mar 4 01:36:10.066803 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff]
Mar 4 01:36:10.066987 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Mar 4 01:36:10.067176 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Mar 4 01:36:10.067417 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Mar 4 01:36:10.067439 kernel: vgaarb: loaded
Mar 4 01:36:10.067452 kernel: clocksource: Switched to clocksource kvm-clock
Mar 4 01:36:10.067465 kernel: VFS: Disk quotas dquot_6.6.0
Mar 4 01:36:10.067478 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 4 01:36:10.067490 kernel: pnp: PnP ACPI init
Mar 4 01:36:10.067687 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Mar 4 01:36:10.067722 kernel: pnp: PnP ACPI: found 5 devices
Mar 4 01:36:10.067735 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 4 01:36:10.067748 kernel: NET: Registered PF_INET protocol family
Mar 4 01:36:10.067761 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 4 01:36:10.067774 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Mar 4 01:36:10.067787 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 4 01:36:10.067800 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Mar 4 01:36:10.067818 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Mar 4 01:36:10.067837 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Mar 4 01:36:10.067850 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Mar 4 01:36:10.067863 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Mar 4 01:36:10.067875 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 4 01:36:10.067888 kernel: NET: Registered PF_XDP protocol family
Mar 4 01:36:10.068060 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000
Mar 4 01:36:10.068249 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Mar 4 01:36:10.068442 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Mar 4 01:36:10.068628 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Mar 4 01:36:10.068804 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Mar 4 01:36:10.068978 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Mar 4 01:36:10.069152 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Mar 4 01:36:10.069342 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Mar 4 01:36:10.069561 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Mar 4 01:36:10.069746 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Mar 4 01:36:10.069918 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Mar 4 01:36:10.070090 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Mar 4 01:36:10.070276 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Mar 4 01:36:10.070470 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Mar 4 01:36:10.070644 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Mar 4 01:36:10.070816 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Mar 4 01:36:10.070996 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Mar 4 01:36:10.071207 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Mar 4 01:36:10.071420 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Mar 4 01:36:10.071600 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Mar 4 01:36:10.071775 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Mar 4 01:36:10.071950 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Mar 4 01:36:10.072125 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Mar 4 01:36:10.072316 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Mar 4 01:36:10.072543 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Mar 4 01:36:10.072718 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Mar 4 01:36:10.072900 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Mar 4 01:36:10.073074 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Mar 4 01:36:10.073262 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Mar 4 01:36:10.073464 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Mar 4 01:36:10.073639 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Mar 4 01:36:10.073822 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Mar 4 01:36:10.073997 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Mar 4 01:36:10.074172 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Mar 4 01:36:10.074361 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Mar 4 01:36:10.074568 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Mar 4 01:36:10.074758 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Mar 4 01:36:10.074947 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Mar 4 01:36:10.075144 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Mar 4 01:36:10.075344 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Mar 4 01:36:10.075556 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Mar 4 01:36:10.075756 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Mar 4 01:36:10.075961 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Mar 4 01:36:10.076149 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Mar 4 01:36:10.076349 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Mar 4 01:36:10.076582 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Mar 4 01:36:10.076756 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Mar 4 01:36:10.076929 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Mar 4 01:36:10.077101 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Mar 4 01:36:10.077289 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Mar 4 01:36:10.077483 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 4 01:36:10.077653 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 4 01:36:10.077811 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 4 01:36:10.077968 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Mar 4 01:36:10.078135 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Mar 4 01:36:10.078310 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window]
Mar 4 01:36:10.078516 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Mar 4 01:36:10.078686 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff]
Mar 4 01:36:10.078854 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Mar 4 01:36:10.079041 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff]
Mar 4 01:36:10.079253 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff]
Mar 4 01:36:10.079527 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff]
Mar 4 01:36:10.079697 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Mar 4 01:36:10.079870 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff]
Mar 4 01:36:10.080036 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff]
Mar 4 01:36:10.080201 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Mar 4 01:36:10.080408 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff]
Mar 4 01:36:10.080586 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff]
Mar 4 01:36:10.080753 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Mar 4 01:36:10.080936 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff]
Mar 4 01:36:10.081103 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff]
Mar 4 01:36:10.081281 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Mar 4 01:36:10.081501 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff]
Mar 4 01:36:10.081669 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff]
Mar 4 01:36:10.081842 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref]
Mar 4 01:36:10.082014 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff]
Mar 4 01:36:10.082179 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff]
Mar 4 01:36:10.082360 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref]
Mar 4 01:36:10.082555 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff]
Mar 4 01:36:10.082722 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff]
Mar 4 01:36:10.082888 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Mar 4 01:36:10.082916 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Mar 4 01:36:10.082931 kernel: PCI: CLS 0 bytes, default 64
Mar 4 01:36:10.082944 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Mar 4 01:36:10.082958 kernel: software IO TLB: mapped [mem 0x0000000079800000-0x000000007d800000] (64MB)
Mar 4 01:36:10.082971 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Mar 4 01:36:10.082985 kernel: clocksource: tsc: mask: 0xffffffffffffffff
max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns Mar 4 01:36:10.082998 kernel: Initialise system trusted keyrings Mar 4 01:36:10.083011 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Mar 4 01:36:10.083031 kernel: Key type asymmetric registered Mar 4 01:36:10.083045 kernel: Asymmetric key parser 'x509' registered Mar 4 01:36:10.083058 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Mar 4 01:36:10.083071 kernel: io scheduler mq-deadline registered Mar 4 01:36:10.083084 kernel: io scheduler kyber registered Mar 4 01:36:10.083097 kernel: io scheduler bfq registered Mar 4 01:36:10.083288 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Mar 4 01:36:10.083486 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Mar 4 01:36:10.083665 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 4 01:36:10.083852 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Mar 4 01:36:10.084028 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Mar 4 01:36:10.084204 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 4 01:36:10.084411 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Mar 4 01:36:10.084590 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Mar 4 01:36:10.084766 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 4 01:36:10.084952 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Mar 4 01:36:10.085129 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Mar 4 01:36:10.085319 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 4 01:36:10.085516 kernel: pcieport 0000:00:02.4: PME: Signaling 
with IRQ 28 Mar 4 01:36:10.085712 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Mar 4 01:36:10.085890 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 4 01:36:10.086076 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Mar 4 01:36:10.086268 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Mar 4 01:36:10.086511 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 4 01:36:10.086689 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Mar 4 01:36:10.086862 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Mar 4 01:36:10.087035 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 4 01:36:10.087226 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Mar 4 01:36:10.087423 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Mar 4 01:36:10.087607 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 4 01:36:10.087628 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Mar 4 01:36:10.087648 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Mar 4 01:36:10.087676 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Mar 4 01:36:10.087699 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Mar 4 01:36:10.087720 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Mar 4 01:36:10.087734 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Mar 4 01:36:10.087747 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Mar 4 01:36:10.087770 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Mar 4 01:36:10.087784 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 
Mar 4 01:36:10.087979 kernel: rtc_cmos 00:03: RTC can wake from S4 Mar 4 01:36:10.088153 kernel: rtc_cmos 00:03: registered as rtc0 Mar 4 01:36:10.088332 kernel: rtc_cmos 00:03: setting system clock to 2026-03-04T01:36:09 UTC (1772588169) Mar 4 01:36:10.088540 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Mar 4 01:36:10.088561 kernel: intel_pstate: CPU model not supported Mar 4 01:36:10.088575 kernel: NET: Registered PF_INET6 protocol family Mar 4 01:36:10.088588 kernel: Segment Routing with IPv6 Mar 4 01:36:10.088601 kernel: In-situ OAM (IOAM) with IPv6 Mar 4 01:36:10.088615 kernel: NET: Registered PF_PACKET protocol family Mar 4 01:36:10.088628 kernel: Key type dns_resolver registered Mar 4 01:36:10.088641 kernel: IPI shorthand broadcast: enabled Mar 4 01:36:10.088655 kernel: sched_clock: Marking stable (1263004219, 236315881)->(1624843569, -125523469) Mar 4 01:36:10.088677 kernel: registered taskstats version 1 Mar 4 01:36:10.088690 kernel: Loading compiled-in X.509 certificates Mar 4 01:36:10.088703 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: be1dcbe3e3dee66976c19d61f4b179b405e1c498' Mar 4 01:36:10.088716 kernel: Key type .fscrypt registered Mar 4 01:36:10.088729 kernel: Key type fscrypt-provisioning registered Mar 4 01:36:10.088742 kernel: ima: No TPM chip found, activating TPM-bypass! 
Mar 4 01:36:10.088756 kernel: ima: Allocated hash algorithm: sha1 Mar 4 01:36:10.088769 kernel: ima: No architecture policies found Mar 4 01:36:10.088782 kernel: clk: Disabling unused clocks Mar 4 01:36:10.088801 kernel: Freeing unused kernel image (initmem) memory: 42892K Mar 4 01:36:10.088815 kernel: Write protecting the kernel read-only data: 36864k Mar 4 01:36:10.088828 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K Mar 4 01:36:10.088841 kernel: Run /init as init process Mar 4 01:36:10.088854 kernel: with arguments: Mar 4 01:36:10.088868 kernel: /init Mar 4 01:36:10.088880 kernel: with environment: Mar 4 01:36:10.088893 kernel: HOME=/ Mar 4 01:36:10.088906 kernel: TERM=linux Mar 4 01:36:10.088935 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Mar 4 01:36:10.088952 systemd[1]: Detected virtualization kvm. Mar 4 01:36:10.088967 systemd[1]: Detected architecture x86-64. Mar 4 01:36:10.088981 systemd[1]: Running in initrd. Mar 4 01:36:10.088995 systemd[1]: No hostname configured, using default hostname. Mar 4 01:36:10.089008 systemd[1]: Hostname set to . Mar 4 01:36:10.089023 systemd[1]: Initializing machine ID from VM UUID. Mar 4 01:36:10.089043 systemd[1]: Queued start job for default target initrd.target. Mar 4 01:36:10.089057 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 4 01:36:10.089071 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 4 01:36:10.089086 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... 
Mar 4 01:36:10.089101 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 4 01:36:10.089124 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Mar 4 01:36:10.089139 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Mar 4 01:36:10.089160 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Mar 4 01:36:10.089176 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Mar 4 01:36:10.089190 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 4 01:36:10.089204 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 4 01:36:10.089230 systemd[1]: Reached target paths.target - Path Units. Mar 4 01:36:10.089253 systemd[1]: Reached target slices.target - Slice Units. Mar 4 01:36:10.089267 systemd[1]: Reached target swap.target - Swaps. Mar 4 01:36:10.089282 systemd[1]: Reached target timers.target - Timer Units. Mar 4 01:36:10.089301 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Mar 4 01:36:10.089315 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 4 01:36:10.089330 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Mar 4 01:36:10.089344 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Mar 4 01:36:10.089359 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 4 01:36:10.089439 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 4 01:36:10.089454 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 4 01:36:10.089468 systemd[1]: Reached target sockets.target - Socket Units. Mar 4 01:36:10.089483 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... 
Mar 4 01:36:10.089505 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 4 01:36:10.089519 systemd[1]: Finished network-cleanup.service - Network Cleanup. Mar 4 01:36:10.089534 systemd[1]: Starting systemd-fsck-usr.service... Mar 4 01:36:10.089548 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 4 01:36:10.089563 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 4 01:36:10.089578 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 4 01:36:10.089635 systemd-journald[202]: Collecting audit messages is disabled. Mar 4 01:36:10.089674 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Mar 4 01:36:10.089689 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 4 01:36:10.089703 systemd[1]: Finished systemd-fsck-usr.service. Mar 4 01:36:10.089724 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 4 01:36:10.089739 systemd-journald[202]: Journal started Mar 4 01:36:10.089765 systemd-journald[202]: Runtime Journal (/run/log/journal/3bfb9d4d2f0f4fa082cf801b57a73ee6) is 4.7M, max 38.0M, 33.2M free. Mar 4 01:36:10.042729 systemd-modules-load[203]: Inserted module 'overlay' Mar 4 01:36:10.145353 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Mar 4 01:36:10.145423 kernel: Bridge firewalling registered Mar 4 01:36:10.099943 systemd-modules-load[203]: Inserted module 'br_netfilter' Mar 4 01:36:10.154141 systemd[1]: Started systemd-journald.service - Journal Service. Mar 4 01:36:10.154155 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 4 01:36:10.155207 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Mar 4 01:36:10.158783 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 4 01:36:10.165554 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 4 01:36:10.177654 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 4 01:36:10.182564 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 4 01:36:10.187347 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 4 01:36:10.200624 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 4 01:36:10.209416 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 4 01:36:10.213793 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 4 01:36:10.222947 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Mar 4 01:36:10.227536 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 4 01:36:10.228704 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 4 01:36:10.240418 dracut-cmdline[237]: dracut-dracut-053 Mar 4 01:36:10.245010 dracut-cmdline[237]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=cfbb17c272ffeca64391861cc763ec4868ca597850b31cbd6ed67c590a72edc7 Mar 4 01:36:10.280056 systemd-resolved[240]: Positive Trust Anchors: Mar 4 01:36:10.280079 systemd-resolved[240]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 4 01:36:10.280122 systemd-resolved[240]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 4 01:36:10.289196 systemd-resolved[240]: Defaulting to hostname 'linux'. Mar 4 01:36:10.292337 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 4 01:36:10.294026 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 4 01:36:10.352390 kernel: SCSI subsystem initialized Mar 4 01:36:10.363392 kernel: Loading iSCSI transport class v2.0-870. Mar 4 01:36:10.376398 kernel: iscsi: registered transport (tcp) Mar 4 01:36:10.403152 kernel: iscsi: registered transport (qla4xxx) Mar 4 01:36:10.403220 kernel: QLogic iSCSI HBA Driver Mar 4 01:36:10.459189 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Mar 4 01:36:10.466560 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Mar 4 01:36:10.499996 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Mar 4 01:36:10.500068 kernel: device-mapper: uevent: version 1.0.3 Mar 4 01:36:10.502386 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Mar 4 01:36:10.550422 kernel: raid6: sse2x4 gen() 13757 MB/s Mar 4 01:36:10.568397 kernel: raid6: sse2x2 gen() 9603 MB/s Mar 4 01:36:10.587110 kernel: raid6: sse2x1 gen() 10144 MB/s Mar 4 01:36:10.587152 kernel: raid6: using algorithm sse2x4 gen() 13757 MB/s Mar 4 01:36:10.606053 kernel: raid6: .... xor() 7707 MB/s, rmw enabled Mar 4 01:36:10.606109 kernel: raid6: using ssse3x2 recovery algorithm Mar 4 01:36:10.632407 kernel: xor: automatically using best checksumming function avx Mar 4 01:36:10.823421 kernel: Btrfs loaded, zoned=no, fsverity=no Mar 4 01:36:10.838512 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Mar 4 01:36:10.846618 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 4 01:36:10.877506 systemd-udevd[424]: Using default interface naming scheme 'v255'. Mar 4 01:36:10.884864 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 4 01:36:10.894761 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Mar 4 01:36:10.914908 dracut-pre-trigger[429]: rd.md=0: removing MD RAID activation Mar 4 01:36:10.956233 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Mar 4 01:36:10.963567 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 4 01:36:11.078964 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 4 01:36:11.085761 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Mar 4 01:36:11.120210 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Mar 4 01:36:11.121738 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Mar 4 01:36:11.123127 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 4 01:36:11.125554 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 4 01:36:11.135222 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Mar 4 01:36:11.159426 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Mar 4 01:36:11.206857 kernel: virtio_blk virtio1: 2/0/0 default/read/poll queues Mar 4 01:36:11.223557 kernel: cryptd: max_cpu_qlen set to 1000 Mar 4 01:36:11.227419 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Mar 4 01:36:11.246075 kernel: AVX version of gcm_enc/dec engaged. Mar 4 01:36:11.246141 kernel: AES CTR mode by8 optimization enabled Mar 4 01:36:11.258625 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Mar 4 01:36:11.258686 kernel: GPT:17805311 != 125829119 Mar 4 01:36:11.258707 kernel: GPT:Alternate GPT header not at the end of the disk. Mar 4 01:36:11.258724 kernel: GPT:17805311 != 125829119 Mar 4 01:36:11.258740 kernel: GPT: Use GNU Parted to correct GPT errors. Mar 4 01:36:11.262404 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 4 01:36:11.262838 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 4 01:36:11.263021 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 4 01:36:11.263995 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 4 01:36:11.267573 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 4 01:36:11.267751 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 4 01:36:11.268523 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Mar 4 01:36:11.278742 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Mar 4 01:36:11.290413 kernel: ACPI: bus type USB registered Mar 4 01:36:11.305421 kernel: usbcore: registered new interface driver usbfs Mar 4 01:36:11.305481 kernel: usbcore: registered new interface driver hub Mar 4 01:36:11.305511 kernel: usbcore: registered new device driver usb Mar 4 01:36:11.328392 kernel: libata version 3.00 loaded. Mar 4 01:36:11.351397 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (468) Mar 4 01:36:11.360386 kernel: BTRFS: device fsid 251c1416-ef37-47f1-be3f-832af5870605 devid 1 transid 40 /dev/vda3 scanned by (udev-worker) (482) Mar 4 01:36:11.370387 kernel: ahci 0000:00:1f.2: version 3.0 Mar 4 01:36:11.370664 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Mar 4 01:36:11.372396 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Mar 4 01:36:11.372636 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Mar 4 01:36:11.403948 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Mar 4 01:36:11.424816 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Mar 4 01:36:11.426953 kernel: scsi host0: ahci Mar 4 01:36:11.427209 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1 Mar 4 01:36:11.427454 kernel: scsi host1: ahci Mar 4 01:36:11.427682 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Mar 4 01:36:11.427906 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Mar 4 01:36:11.431381 kernel: scsi host2: ahci Mar 4 01:36:11.431443 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2 Mar 4 01:36:11.431675 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed Mar 4 01:36:11.435926 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. 
Mar 4 01:36:11.448956 kernel: hub 1-0:1.0: USB hub found Mar 4 01:36:11.449254 kernel: hub 1-0:1.0: 4 ports detected Mar 4 01:36:11.449512 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Mar 4 01:36:11.449742 kernel: scsi host3: ahci Mar 4 01:36:11.449947 kernel: hub 2-0:1.0: USB hub found Mar 4 01:36:11.450171 kernel: scsi host4: ahci Mar 4 01:36:11.454451 kernel: hub 2-0:1.0: 4 ports detected Mar 4 01:36:11.454684 kernel: scsi host5: ahci Mar 4 01:36:11.454906 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 38 Mar 4 01:36:11.454928 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 38 Mar 4 01:36:11.451511 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 4 01:36:11.467876 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 38 Mar 4 01:36:11.467913 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 38 Mar 4 01:36:11.467932 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 38 Mar 4 01:36:11.467949 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 38 Mar 4 01:36:11.475376 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Mar 4 01:36:11.476268 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Mar 4 01:36:11.484533 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Mar 4 01:36:11.499665 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Mar 4 01:36:11.502549 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 4 01:36:11.513739 disk-uuid[564]: Primary Header is updated. Mar 4 01:36:11.513739 disk-uuid[564]: Secondary Entries is updated. 
Mar 4 01:36:11.513739 disk-uuid[564]: Secondary Header is updated. Mar 4 01:36:11.519699 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 4 01:36:11.531390 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 4 01:36:11.538242 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 4 01:36:11.539326 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 4 01:36:11.681486 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Mar 4 01:36:11.773905 kernel: ata5: SATA link down (SStatus 0 SControl 300) Mar 4 01:36:11.773996 kernel: ata3: SATA link down (SStatus 0 SControl 300) Mar 4 01:36:11.774387 kernel: ata6: SATA link down (SStatus 0 SControl 300) Mar 4 01:36:11.777397 kernel: ata1: SATA link down (SStatus 0 SControl 300) Mar 4 01:36:11.780830 kernel: ata2: SATA link down (SStatus 0 SControl 300) Mar 4 01:36:11.780884 kernel: ata4: SATA link down (SStatus 0 SControl 300) Mar 4 01:36:11.822393 kernel: hid: raw HID events driver (C) Jiri Kosina Mar 4 01:36:11.829107 kernel: usbcore: registered new interface driver usbhid Mar 4 01:36:11.829147 kernel: usbhid: USB HID core driver Mar 4 01:36:11.838690 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 Mar 4 01:36:11.838728 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0 Mar 4 01:36:12.540543 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 4 01:36:12.541421 disk-uuid[565]: The operation has completed successfully. Mar 4 01:36:12.592034 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 4 01:36:12.592244 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Mar 4 01:36:12.617601 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... 
Mar 4 01:36:12.623022 sh[586]: Success Mar 4 01:36:12.639390 kernel: device-mapper: verity: sha256 using implementation "sha256-avx" Mar 4 01:36:12.696218 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Mar 4 01:36:12.699507 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Mar 4 01:36:12.703437 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Mar 4 01:36:12.730434 kernel: BTRFS info (device dm-0): first mount of filesystem 251c1416-ef37-47f1-be3f-832af5870605 Mar 4 01:36:12.730488 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Mar 4 01:36:12.732600 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Mar 4 01:36:12.736233 kernel: BTRFS info (device dm-0): disabling log replay at mount time Mar 4 01:36:12.736271 kernel: BTRFS info (device dm-0): using free space tree Mar 4 01:36:12.746360 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Mar 4 01:36:12.748599 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Mar 4 01:36:12.764592 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Mar 4 01:36:12.768566 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Mar 4 01:36:12.793806 kernel: BTRFS info (device vda6): first mount of filesystem 71a972ce-abd4-4705-b1cd-2b663b77d747 Mar 4 01:36:12.793878 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 4 01:36:12.793908 kernel: BTRFS info (device vda6): using free space tree Mar 4 01:36:12.800391 kernel: BTRFS info (device vda6): auto enabling async discard Mar 4 01:36:12.814482 systemd[1]: mnt-oem.mount: Deactivated successfully. 
Mar 4 01:36:12.818054 kernel: BTRFS info (device vda6): last unmount of filesystem 71a972ce-abd4-4705-b1cd-2b663b77d747 Mar 4 01:36:12.823130 systemd[1]: Finished ignition-setup.service - Ignition (setup). Mar 4 01:36:12.832658 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Mar 4 01:36:12.913741 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 4 01:36:12.923599 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 4 01:36:12.967458 systemd-networkd[769]: lo: Link UP Mar 4 01:36:12.970487 systemd-networkd[769]: lo: Gained carrier Mar 4 01:36:12.974166 systemd-networkd[769]: Enumeration completed Mar 4 01:36:12.975582 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 4 01:36:12.976960 systemd-networkd[769]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 4 01:36:12.976965 systemd-networkd[769]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 4 01:36:12.978351 systemd-networkd[769]: eth0: Link UP Mar 4 01:36:12.978357 systemd-networkd[769]: eth0: Gained carrier Mar 4 01:36:12.978628 systemd[1]: Reached target network.target - Network. Mar 4 01:36:12.981094 systemd-networkd[769]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Mar 4 01:36:12.990720 ignition[694]: Ignition 2.19.0 Mar 4 01:36:12.990744 ignition[694]: Stage: fetch-offline Mar 4 01:36:12.990832 ignition[694]: no configs at "/usr/lib/ignition/base.d" Mar 4 01:36:12.990857 ignition[694]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Mar 4 01:36:12.991043 ignition[694]: parsed url from cmdline: "" Mar 4 01:36:12.991050 ignition[694]: no config URL provided Mar 4 01:36:12.991060 ignition[694]: reading system config file "/usr/lib/ignition/user.ign" Mar 4 01:36:12.991076 ignition[694]: no config at "/usr/lib/ignition/user.ign" Mar 4 01:36:12.991085 ignition[694]: failed to fetch config: resource requires networking Mar 4 01:36:12.997569 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Mar 4 01:36:12.993766 ignition[694]: Ignition finished successfully Mar 4 01:36:12.999525 systemd-networkd[769]: eth0: DHCPv4 address 10.230.15.118/30, gateway 10.230.15.117 acquired from 10.230.15.117 Mar 4 01:36:13.005593 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Mar 4 01:36:13.041698 ignition[777]: Ignition 2.19.0 Mar 4 01:36:13.041723 ignition[777]: Stage: fetch Mar 4 01:36:13.042042 ignition[777]: no configs at "/usr/lib/ignition/base.d" Mar 4 01:36:13.042069 ignition[777]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Mar 4 01:36:13.042226 ignition[777]: parsed url from cmdline: "" Mar 4 01:36:13.042233 ignition[777]: no config URL provided Mar 4 01:36:13.042243 ignition[777]: reading system config file "/usr/lib/ignition/user.ign" Mar 4 01:36:13.042260 ignition[777]: no config at "/usr/lib/ignition/user.ign" Mar 4 01:36:13.043449 ignition[777]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Mar 4 01:36:13.043467 ignition[777]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Mar 4 01:36:13.043505 ignition[777]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... 
Mar 4 01:36:13.061205 ignition[777]: GET result: OK
Mar 4 01:36:13.061341 ignition[777]: parsing config with SHA512: f6369118b07ef003f064e0ab08abd9699ec4ecb3c9bb83b1611395d90381376d3055bb991863e6e8797eda5f428a84a59bb20c5174ebe6b444b1b6d4b53b6b53
Mar 4 01:36:13.066750 unknown[777]: fetched base config from "system"
Mar 4 01:36:13.066766 unknown[777]: fetched base config from "system"
Mar 4 01:36:13.067266 ignition[777]: fetch: fetch complete
Mar 4 01:36:13.066776 unknown[777]: fetched user config from "openstack"
Mar 4 01:36:13.067274 ignition[777]: fetch: fetch passed
Mar 4 01:36:13.069300 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Mar 4 01:36:13.067339 ignition[777]: Ignition finished successfully
Mar 4 01:36:13.085527 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 4 01:36:13.103428 ignition[784]: Ignition 2.19.0
Mar 4 01:36:13.103449 ignition[784]: Stage: kargs
Mar 4 01:36:13.103685 ignition[784]: no configs at "/usr/lib/ignition/base.d"
Mar 4 01:36:13.106118 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 4 01:36:13.103705 ignition[784]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Mar 4 01:36:13.104858 ignition[784]: kargs: kargs passed
Mar 4 01:36:13.104925 ignition[784]: Ignition finished successfully
Mar 4 01:36:13.121044 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 4 01:36:13.141184 ignition[790]: Ignition 2.19.0
Mar 4 01:36:13.142239 ignition[790]: Stage: disks
Mar 4 01:36:13.142493 ignition[790]: no configs at "/usr/lib/ignition/base.d"
Mar 4 01:36:13.142515 ignition[790]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Mar 4 01:36:13.145402 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 4 01:36:13.143664 ignition[790]: disks: disks passed
Mar 4 01:36:13.143741 ignition[790]: Ignition finished successfully
Mar 4 01:36:13.147683 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 4 01:36:13.149116 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 4 01:36:13.150620 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 4 01:36:13.152106 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 4 01:36:13.153719 systemd[1]: Reached target basic.target - Basic System.
Mar 4 01:36:13.163602 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 4 01:36:13.181723 systemd-fsck[799]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Mar 4 01:36:13.185230 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 4 01:36:13.191526 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 4 01:36:13.314390 kernel: EXT4-fs (vda9): mounted filesystem 77c4d29a-0423-4e33-8b82-61754d97532c r/w with ordered data mode. Quota mode: none.
Mar 4 01:36:13.315264 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 4 01:36:13.316667 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 4 01:36:13.330531 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 4 01:36:13.334550 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 4 01:36:13.335712 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Mar 4 01:36:13.337582 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent...
Mar 4 01:36:13.339172 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 4 01:36:13.339212 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 4 01:36:13.350417 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (807)
Mar 4 01:36:13.356826 kernel: BTRFS info (device vda6): first mount of filesystem 71a972ce-abd4-4705-b1cd-2b663b77d747
Mar 4 01:36:13.356867 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 4 01:36:13.356888 kernel: BTRFS info (device vda6): using free space tree
Mar 4 01:36:13.358994 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 4 01:36:13.366612 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 4 01:36:13.374397 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 4 01:36:13.377502 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 4 01:36:13.448815 initrd-setup-root[835]: cut: /sysroot/etc/passwd: No such file or directory
Mar 4 01:36:13.455986 initrd-setup-root[842]: cut: /sysroot/etc/group: No such file or directory
Mar 4 01:36:13.463142 initrd-setup-root[849]: cut: /sysroot/etc/shadow: No such file or directory
Mar 4 01:36:13.471955 initrd-setup-root[856]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 4 01:36:13.577438 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 4 01:36:13.587489 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 4 01:36:13.590554 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 4 01:36:13.601414 kernel: BTRFS info (device vda6): last unmount of filesystem 71a972ce-abd4-4705-b1cd-2b663b77d747
Mar 4 01:36:13.637460 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 4 01:36:13.646155 ignition[925]: INFO : Ignition 2.19.0
Mar 4 01:36:13.646155 ignition[925]: INFO : Stage: mount
Mar 4 01:36:13.648546 ignition[925]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 4 01:36:13.648546 ignition[925]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Mar 4 01:36:13.648546 ignition[925]: INFO : mount: mount passed
Mar 4 01:36:13.648546 ignition[925]: INFO : Ignition finished successfully
Mar 4 01:36:13.648723 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 4 01:36:13.728823 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 4 01:36:14.819747 systemd-networkd[769]: eth0: Gained IPv6LL
Mar 4 01:36:16.102620 systemd-networkd[769]: eth0: Ignoring DHCPv6 address 2a02:1348:179:83dd:24:19ff:fee6:f76/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:83dd:24:19ff:fee6:f76/64 assigned by NDisc.
Mar 4 01:36:16.102635 systemd-networkd[769]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no.
Mar 4 01:36:20.521731 coreos-metadata[809]: Mar 04 01:36:20.521 WARN failed to locate config-drive, using the metadata service API instead
Mar 4 01:36:20.546790 coreos-metadata[809]: Mar 04 01:36:20.546 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Mar 4 01:36:20.563848 coreos-metadata[809]: Mar 04 01:36:20.563 INFO Fetch successful
Mar 4 01:36:20.564812 coreos-metadata[809]: Mar 04 01:36:20.564 INFO wrote hostname srv-g1uyu.gb1.brightbox.com to /sysroot/etc/hostname
Mar 4 01:36:20.566585 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully.
Mar 4 01:36:20.566793 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent.
Mar 4 01:36:20.584597 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 4 01:36:20.600597 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 4 01:36:20.613437 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (940)
Mar 4 01:36:20.624594 kernel: BTRFS info (device vda6): first mount of filesystem 71a972ce-abd4-4705-b1cd-2b663b77d747
Mar 4 01:36:20.624643 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 4 01:36:20.624671 kernel: BTRFS info (device vda6): using free space tree
Mar 4 01:36:20.630389 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 4 01:36:20.633790 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 4 01:36:20.674023 ignition[958]: INFO : Ignition 2.19.0
Mar 4 01:36:20.675218 ignition[958]: INFO : Stage: files
Mar 4 01:36:20.677723 ignition[958]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 4 01:36:20.677723 ignition[958]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Mar 4 01:36:20.679756 ignition[958]: DEBUG : files: compiled without relabeling support, skipping
Mar 4 01:36:20.681956 ignition[958]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 4 01:36:20.681956 ignition[958]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 4 01:36:20.694017 ignition[958]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 4 01:36:20.695353 ignition[958]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 4 01:36:20.695353 ignition[958]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 4 01:36:20.695208 unknown[958]: wrote ssh authorized keys file for user: core
Mar 4 01:36:20.698548 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 4 01:36:20.698548 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Mar 4 01:36:20.877599 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 4 01:36:21.174189 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 4 01:36:21.175672 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 4 01:36:21.175672 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Mar 4 01:36:21.613684 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Mar 4 01:36:21.895620 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 4 01:36:21.895620 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Mar 4 01:36:21.900250 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Mar 4 01:36:21.900250 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 4 01:36:21.900250 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 4 01:36:21.900250 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 4 01:36:21.900250 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 4 01:36:21.900250 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 4 01:36:21.900250 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 4 01:36:21.900250 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 4 01:36:21.900250 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 4 01:36:21.900250 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Mar 4 01:36:21.900250 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Mar 4 01:36:21.900250 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Mar 4 01:36:21.900250 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-x86-64.raw: attempt #1
Mar 4 01:36:22.149035 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Mar 4 01:36:23.467013 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Mar 4 01:36:23.467013 ignition[958]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Mar 4 01:36:23.473354 ignition[958]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 4 01:36:23.473354 ignition[958]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 4 01:36:23.473354 ignition[958]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Mar 4 01:36:23.473354 ignition[958]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Mar 4 01:36:23.473354 ignition[958]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Mar 4 01:36:23.473354 ignition[958]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 4 01:36:23.473354 ignition[958]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 4 01:36:23.473354 ignition[958]: INFO : files: files passed
Mar 4 01:36:23.473354 ignition[958]: INFO : Ignition finished successfully
Mar 4 01:36:23.472925 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 4 01:36:23.482654 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 4 01:36:23.493967 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 4 01:36:23.498394 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 4 01:36:23.499360 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 4 01:36:23.510019 initrd-setup-root-after-ignition[986]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 4 01:36:23.511437 initrd-setup-root-after-ignition[986]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 4 01:36:23.512998 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 4 01:36:23.515645 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 4 01:36:23.517882 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 4 01:36:23.536701 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 4 01:36:23.583350 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 4 01:36:23.584448 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 4 01:36:23.586838 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 4 01:36:23.587609 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 4 01:36:23.589315 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 4 01:36:23.605691 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 4 01:36:23.624503 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 4 01:36:23.637677 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 4 01:36:23.653174 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 4 01:36:23.654125 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 4 01:36:23.657977 systemd[1]: Stopped target timers.target - Timer Units.
Mar 4 01:36:23.660274 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 4 01:36:23.660503 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 4 01:36:23.662418 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 4 01:36:23.664194 systemd[1]: Stopped target basic.target - Basic System.
Mar 4 01:36:23.665789 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 4 01:36:23.666633 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 4 01:36:23.668210 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 4 01:36:23.672509 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 4 01:36:23.673715 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 4 01:36:23.675630 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 4 01:36:23.676545 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 4 01:36:23.678224 systemd[1]: Stopped target swap.target - Swaps.
Mar 4 01:36:23.679707 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 4 01:36:23.679913 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 4 01:36:23.681798 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 4 01:36:23.682742 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 4 01:36:23.684100 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 4 01:36:23.684277 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 4 01:36:23.685577 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 4 01:36:23.685755 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 4 01:36:23.687740 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 4 01:36:23.687954 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 4 01:36:23.689767 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 4 01:36:23.689959 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 4 01:36:23.697653 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 4 01:36:23.699400 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 4 01:36:23.700763 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 4 01:36:23.700969 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 4 01:36:23.705910 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 4 01:36:23.706092 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 4 01:36:23.723806 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 4 01:36:23.725203 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 4 01:36:23.726984 ignition[1010]: INFO : Ignition 2.19.0
Mar 4 01:36:23.726984 ignition[1010]: INFO : Stage: umount
Mar 4 01:36:23.726984 ignition[1010]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 4 01:36:23.726984 ignition[1010]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Mar 4 01:36:23.733047 ignition[1010]: INFO : umount: umount passed
Mar 4 01:36:23.733047 ignition[1010]: INFO : Ignition finished successfully
Mar 4 01:36:23.729283 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 4 01:36:23.729482 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 4 01:36:23.730862 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 4 01:36:23.730997 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 4 01:36:23.732194 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 4 01:36:23.732284 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 4 01:36:23.733781 systemd[1]: ignition-fetch.service: Deactivated successfully.
Mar 4 01:36:23.733892 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Mar 4 01:36:23.735113 systemd[1]: Stopped target network.target - Network.
Mar 4 01:36:23.736388 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 4 01:36:23.736491 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 4 01:36:23.737863 systemd[1]: Stopped target paths.target - Path Units.
Mar 4 01:36:23.739158 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 4 01:36:23.744427 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 4 01:36:23.745259 systemd[1]: Stopped target slices.target - Slice Units.
Mar 4 01:36:23.746743 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 4 01:36:23.748467 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 4 01:36:23.748541 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 4 01:36:23.749784 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 4 01:36:23.749862 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 4 01:36:23.751265 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 4 01:36:23.751350 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 4 01:36:23.752999 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 4 01:36:23.753088 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 4 01:36:23.754891 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 4 01:36:23.756468 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 4 01:36:23.762328 systemd-networkd[769]: eth0: DHCPv6 lease lost
Mar 4 01:36:23.765691 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 4 01:36:23.765873 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 4 01:36:23.767725 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 4 01:36:23.767910 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 4 01:36:23.772011 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 4 01:36:23.772357 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 4 01:36:23.777553 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 4 01:36:23.778324 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 4 01:36:23.778427 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 4 01:36:23.780603 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 4 01:36:23.780690 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 4 01:36:23.784756 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 4 01:36:23.784827 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 4 01:36:23.785897 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 4 01:36:23.785966 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 4 01:36:23.788454 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 4 01:36:23.799210 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 4 01:36:23.800538 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 4 01:36:23.805624 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 4 01:36:23.805715 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 4 01:36:23.808529 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 4 01:36:23.808605 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 4 01:36:23.810174 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 4 01:36:23.810247 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 4 01:36:23.812343 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 4 01:36:23.812432 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 4 01:36:23.813889 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 4 01:36:23.813970 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 4 01:36:23.827635 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 4 01:36:23.830758 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 4 01:36:23.830862 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 4 01:36:23.832487 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Mar 4 01:36:23.832558 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 4 01:36:23.834031 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 4 01:36:23.834100 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 4 01:36:23.836532 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 4 01:36:23.836607 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 4 01:36:23.839985 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 4 01:36:23.840129 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 4 01:36:23.841424 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 4 01:36:23.841573 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 4 01:36:23.880125 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 4 01:36:23.885590 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 4 01:36:23.885795 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 4 01:36:23.887457 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 4 01:36:23.888605 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 4 01:36:23.888683 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 4 01:36:23.900593 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 4 01:36:23.909569 systemd[1]: Switching root.
Mar 4 01:36:23.938875 systemd-journald[202]: Journal stopped
Mar 4 01:36:25.539304 systemd-journald[202]: Received SIGTERM from PID 1 (systemd).
Mar 4 01:36:25.539553 kernel: SELinux: policy capability network_peer_controls=1
Mar 4 01:36:25.539604 kernel: SELinux: policy capability open_perms=1
Mar 4 01:36:25.539633 kernel: SELinux: policy capability extended_socket_class=1
Mar 4 01:36:25.539661 kernel: SELinux: policy capability always_check_network=0
Mar 4 01:36:25.539690 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 4 01:36:25.539716 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 4 01:36:25.539741 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 4 01:36:25.539761 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 4 01:36:25.539790 systemd[1]: Successfully loaded SELinux policy in 50.331ms.
Mar 4 01:36:25.539855 kernel: audit: type=1403 audit(1772588184.215:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 4 01:36:25.539898 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 23.517ms.
Mar 4 01:36:25.539928 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 4 01:36:25.539952 systemd[1]: Detected virtualization kvm.
Mar 4 01:36:25.539973 systemd[1]: Detected architecture x86-64.
Mar 4 01:36:25.540013 systemd[1]: Detected first boot.
Mar 4 01:36:25.540042 systemd[1]: Hostname set to .
Mar 4 01:36:25.540089 systemd[1]: Initializing machine ID from VM UUID.
Mar 4 01:36:25.540116 zram_generator::config[1055]: No configuration found.
Mar 4 01:36:25.540164 systemd[1]: Populated /etc with preset unit settings.
Mar 4 01:36:25.540186 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 4 01:36:25.540207 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 4 01:36:25.540228 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 4 01:36:25.540254 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 4 01:36:25.540284 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 4 01:36:25.540310 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 4 01:36:25.540332 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 4 01:36:25.544568 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 4 01:36:25.544607 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 4 01:36:25.544639 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 4 01:36:25.544670 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 4 01:36:25.544698 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 4 01:36:25.544730 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 4 01:36:25.544752 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 4 01:36:25.544773 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 4 01:36:25.544818 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 4 01:36:25.544853 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 4 01:36:25.544875 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Mar 4 01:36:25.544895 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 4 01:36:25.544916 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 4 01:36:25.544944 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 4 01:36:25.544980 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 4 01:36:25.545004 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 4 01:36:25.545035 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 4 01:36:25.545058 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 4 01:36:25.545085 systemd[1]: Reached target slices.target - Slice Units.
Mar 4 01:36:25.545106 systemd[1]: Reached target swap.target - Swaps.
Mar 4 01:36:25.545139 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 4 01:36:25.545178 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 4 01:36:25.545214 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 4 01:36:25.545242 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 4 01:36:25.545265 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 4 01:36:25.545285 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 4 01:36:25.545312 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 4 01:36:25.545334 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 4 01:36:25.545355 systemd[1]: Mounting media.mount - External Media Directory...
Mar 4 01:36:25.555327 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 4 01:36:25.555421 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 4 01:36:25.555449 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 4 01:36:25.555471 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 4 01:36:25.555502 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 4 01:36:25.555531 systemd[1]: Reached target machines.target - Containers.
Mar 4 01:36:25.555559 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 4 01:36:25.555582 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 4 01:36:25.555619 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 4 01:36:25.555655 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 4 01:36:25.555687 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 4 01:36:25.555708 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 4 01:36:25.555738 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 4 01:36:25.555761 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 4 01:36:25.555788 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 4 01:36:25.555829 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 4 01:36:25.555853 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 4 01:36:25.555887 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 4 01:36:25.555910 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 4 01:36:25.555937 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 4 01:36:25.555968 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 4 01:36:25.555991 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 4 01:36:25.556012 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 4 01:36:25.556033 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 4 01:36:25.556054 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 4 01:36:25.556080 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 4 01:36:25.556118 systemd[1]: Stopped verity-setup.service.
Mar 4 01:36:25.556149 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 4 01:36:25.556172 kernel: loop: module loaded
Mar 4 01:36:25.556192 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 4 01:36:25.556213 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 4 01:36:25.556246 systemd[1]: Mounted media.mount - External Media Directory.
Mar 4 01:36:25.556267 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 4 01:36:25.556306 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 4 01:36:25.556328 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 4 01:36:25.556355 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 4 01:36:25.556486 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 4 01:36:25.556523 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 4 01:36:25.556552 kernel: fuse: init (API version 7.39)
Mar 4 01:36:25.556574 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 4 01:36:25.556618 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 4 01:36:25.556653 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 4 01:36:25.556683 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 4 01:36:25.556705 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 4 01:36:25.556760 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 4 01:36:25.556782 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 4 01:36:25.556870 systemd-journald[1151]: Collecting audit messages is disabled.
Mar 4 01:36:25.556936 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 4 01:36:25.556961 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 4 01:36:25.556998 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 4 01:36:25.557027 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 4 01:36:25.557061 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 4 01:36:25.557085 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 4 01:36:25.557106 systemd-journald[1151]: Journal started
Mar 4 01:36:25.557139 systemd-journald[1151]: Runtime Journal (/run/log/journal/3bfb9d4d2f0f4fa082cf801b57a73ee6) is 4.7M, max 38.0M, 33.2M free.
Mar 4 01:36:25.566556 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 4 01:36:25.085132 systemd[1]: Queued start job for default target multi-user.target.
Mar 4 01:36:25.107655 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Mar 4 01:36:25.108391 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 4 01:36:25.592636 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 4 01:36:25.592720 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 4 01:36:25.595896 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 4 01:36:25.603410 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Mar 4 01:36:25.609384 kernel: ACPI: bus type drm_connector registered
Mar 4 01:36:25.615432 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 4 01:36:25.623629 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 4 01:36:25.623704 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 4 01:36:25.639305 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 4 01:36:25.639360 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 4 01:36:25.645597 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 4 01:36:25.645631 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 4 01:36:25.656440 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 4 01:36:25.671411 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 4 01:36:25.680648 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 4 01:36:25.686438 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 4 01:36:25.689285 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 4 01:36:25.689622 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 4 01:36:25.690638 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 4 01:36:25.692985 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 4 01:36:25.695479 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 4 01:36:25.733467 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 4 01:36:25.755457 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 4 01:36:25.762114 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 4 01:36:25.773580 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 4 01:36:25.781571 kernel: loop0: detected capacity change from 0 to 140768
Mar 4 01:36:25.784653 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Mar 4 01:36:25.797448 systemd-journald[1151]: Time spent on flushing to /var/log/journal/3bfb9d4d2f0f4fa082cf801b57a73ee6 is 53.229ms for 1148 entries.
Mar 4 01:36:25.797448 systemd-journald[1151]: System Journal (/var/log/journal/3bfb9d4d2f0f4fa082cf801b57a73ee6) is 8.0M, max 584.8M, 576.8M free.
Mar 4 01:36:25.886029 systemd-journald[1151]: Received client request to flush runtime journal.
Mar 4 01:36:25.886094 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 4 01:36:25.843229 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 4 01:36:25.894252 kernel: loop1: detected capacity change from 0 to 8
Mar 4 01:36:25.844746 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Mar 4 01:36:25.854800 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 4 01:36:25.862516 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Mar 4 01:36:25.891988 systemd-tmpfiles[1171]: ACLs are not supported, ignoring.
Mar 4 01:36:25.892009 systemd-tmpfiles[1171]: ACLs are not supported, ignoring.
Mar 4 01:36:25.896814 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 4 01:36:25.910137 udevadm[1202]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Mar 4 01:36:25.914325 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 4 01:36:25.925566 kernel: loop2: detected capacity change from 0 to 228704
Mar 4 01:36:25.927677 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 4 01:36:25.969903 kernel: loop3: detected capacity change from 0 to 142488
Mar 4 01:36:26.035767 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 4 01:36:26.037420 kernel: loop4: detected capacity change from 0 to 140768
Mar 4 01:36:26.051574 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 4 01:36:26.077396 kernel: loop5: detected capacity change from 0 to 8
Mar 4 01:36:26.089456 kernel: loop6: detected capacity change from 0 to 228704
Mar 4 01:36:26.110171 kernel: loop7: detected capacity change from 0 to 142488
Mar 4 01:36:26.118067 systemd-tmpfiles[1214]: ACLs are not supported, ignoring.
Mar 4 01:36:26.118986 systemd-tmpfiles[1214]: ACLs are not supported, ignoring.
Mar 4 01:36:26.136178 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 4 01:36:26.143807 (sd-merge)[1212]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'.
Mar 4 01:36:26.146669 (sd-merge)[1212]: Merged extensions into '/usr'.
Mar 4 01:36:26.156540 systemd[1]: Reloading requested from client PID 1170 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 4 01:36:26.156583 systemd[1]: Reloading...
Mar 4 01:36:26.273416 zram_generator::config[1239]: No configuration found.
Mar 4 01:36:26.501735 ldconfig[1166]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 4 01:36:26.506605 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 4 01:36:26.574224 systemd[1]: Reloading finished in 416 ms.
Mar 4 01:36:26.602145 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 4 01:36:26.610431 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 4 01:36:26.619631 systemd[1]: Starting ensure-sysext.service...
Mar 4 01:36:26.630724 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 4 01:36:26.652861 systemd[1]: Reloading requested from client PID 1298 ('systemctl') (unit ensure-sysext.service)...
Mar 4 01:36:26.652885 systemd[1]: Reloading...
Mar 4 01:36:26.673131 systemd-tmpfiles[1299]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 4 01:36:26.676539 systemd-tmpfiles[1299]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 4 01:36:26.679238 systemd-tmpfiles[1299]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 4 01:36:26.680749 systemd-tmpfiles[1299]: ACLs are not supported, ignoring.
Mar 4 01:36:26.681606 systemd-tmpfiles[1299]: ACLs are not supported, ignoring.
Mar 4 01:36:26.690845 systemd-tmpfiles[1299]: Detected autofs mount point /boot during canonicalization of boot.
Mar 4 01:36:26.691007 systemd-tmpfiles[1299]: Skipping /boot
Mar 4 01:36:26.724169 systemd-tmpfiles[1299]: Detected autofs mount point /boot during canonicalization of boot.
Mar 4 01:36:26.724585 systemd-tmpfiles[1299]: Skipping /boot
Mar 4 01:36:26.782395 zram_generator::config[1335]: No configuration found.
Mar 4 01:36:26.956329 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 4 01:36:27.024421 systemd[1]: Reloading finished in 370 ms.
Mar 4 01:36:27.046231 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 4 01:36:27.054007 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 4 01:36:27.063669 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Mar 4 01:36:27.068572 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 4 01:36:27.079598 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 4 01:36:27.085641 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 4 01:36:27.093561 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 4 01:36:27.103234 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 4 01:36:27.119765 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 4 01:36:27.124682 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 4 01:36:27.124965 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 4 01:36:27.130236 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 4 01:36:27.137705 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 4 01:36:27.150286 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 4 01:36:27.153567 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 4 01:36:27.153747 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 4 01:36:27.155167 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 4 01:36:27.167740 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 4 01:36:27.178091 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 4 01:36:27.178906 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 4 01:36:27.179283 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 4 01:36:27.179528 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 4 01:36:27.181357 systemd-udevd[1395]: Using default interface naming scheme 'v255'.
Mar 4 01:36:27.189630 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 4 01:36:27.190050 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 4 01:36:27.197980 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 4 01:36:27.200927 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 4 01:36:27.201123 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 4 01:36:27.203314 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 4 01:36:27.208124 systemd[1]: Finished ensure-sysext.service.
Mar 4 01:36:27.209993 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 4 01:36:27.214613 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 4 01:36:27.231041 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Mar 4 01:36:27.235091 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 4 01:36:27.235724 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 4 01:36:27.237896 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 4 01:36:27.238532 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 4 01:36:27.245287 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 4 01:36:27.245581 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 4 01:36:27.247496 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 4 01:36:27.247620 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 4 01:36:27.252793 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 4 01:36:27.265605 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 4 01:36:27.267896 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 4 01:36:27.273415 augenrules[1419]: No rules
Mar 4 01:36:27.275354 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Mar 4 01:36:27.300494 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Mar 4 01:36:27.326245 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 4 01:36:27.328888 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 4 01:36:27.439349 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Mar 4 01:36:27.440573 systemd[1]: Reached target time-set.target - System Time Set.
Mar 4 01:36:27.452701 systemd-networkd[1421]: lo: Link UP
Mar 4 01:36:27.453194 systemd-networkd[1421]: lo: Gained carrier
Mar 4 01:36:27.454805 systemd-timesyncd[1414]: No network connectivity, watching for changes.
Mar 4 01:36:27.455566 systemd-networkd[1421]: Enumeration completed
Mar 4 01:36:27.455789 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 4 01:36:27.463631 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Mar 4 01:36:27.485953 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Mar 4 01:36:27.500589 systemd-resolved[1391]: Positive Trust Anchors:
Mar 4 01:36:27.500616 systemd-resolved[1391]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 4 01:36:27.500663 systemd-resolved[1391]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 4 01:36:27.511858 systemd-resolved[1391]: Using system hostname 'srv-g1uyu.gb1.brightbox.com'.
Mar 4 01:36:27.514193 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 4 01:36:27.525325 systemd[1]: Reached target network.target - Network.
Mar 4 01:36:27.526063 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 4 01:36:27.564394 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1425)
Mar 4 01:36:27.584941 systemd-networkd[1421]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 4 01:36:27.584955 systemd-networkd[1421]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 4 01:36:27.588318 systemd-networkd[1421]: eth0: Link UP
Mar 4 01:36:27.588330 systemd-networkd[1421]: eth0: Gained carrier
Mar 4 01:36:27.588348 systemd-networkd[1421]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 4 01:36:27.611335 systemd-networkd[1421]: eth0: DHCPv4 address 10.230.15.118/30, gateway 10.230.15.117 acquired from 10.230.15.117
Mar 4 01:36:27.613211 systemd-timesyncd[1414]: Network configuration changed, trying to establish connection.
Mar 4 01:36:27.647419 kernel: mousedev: PS/2 mouse device common for all mice
Mar 4 01:36:27.649393 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Mar 4 01:36:27.658398 kernel: ACPI: button: Power Button [PWRF]
Mar 4 01:36:27.701398 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Mar 4 01:36:27.706792 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Mar 4 01:36:27.707086 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Mar 4 01:36:27.729328 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 4 01:36:27.740295 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Mar 4 01:36:27.757392 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Mar 4 01:36:27.780883 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 4 01:36:27.785790 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Mar 4 01:36:28.024328 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 4 01:36:28.026012 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Mar 4 01:36:28.035661 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Mar 4 01:36:28.055391 lvm[1471]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 4 01:36:28.090751 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Mar 4 01:36:28.092125 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 4 01:36:28.093165 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 4 01:36:28.094195 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Mar 4 01:36:28.095317 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Mar 4 01:36:28.096591 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Mar 4 01:36:28.097653 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Mar 4 01:36:28.098560 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Mar 4 01:36:28.099500 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 4 01:36:28.099542 systemd[1]: Reached target paths.target - Path Units.
Mar 4 01:36:28.100421 systemd[1]: Reached target timers.target - Timer Units.
Mar 4 01:36:28.102536 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Mar 4 01:36:28.105490 systemd[1]: Starting docker.socket - Docker Socket for the API...
Mar 4 01:36:28.110961 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Mar 4 01:36:28.113820 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Mar 4 01:36:28.115367 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Mar 4 01:36:28.116358 systemd[1]: Reached target sockets.target - Socket Units.
Mar 4 01:36:28.117173 systemd[1]: Reached target basic.target - Basic System.
Mar 4 01:36:28.118053 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Mar 4 01:36:28.118219 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Mar 4 01:36:28.120294 systemd[1]: Starting containerd.service - containerd container runtime...
Mar 4 01:36:28.127401 lvm[1475]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 4 01:36:28.125475 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Mar 4 01:36:28.137238 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Mar 4 01:36:28.143479 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Mar 4 01:36:28.147579 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Mar 4 01:36:28.150456 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Mar 4 01:36:28.156797 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Mar 4 01:36:28.164314 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Mar 4 01:36:28.170598 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Mar 4 01:36:28.179170 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Mar 4 01:36:28.181482 jq[1479]: false
Mar 4 01:36:28.198381 systemd[1]: Starting systemd-logind.service - User Login Management...
Mar 4 01:36:28.200212 dbus-daemon[1478]: [system] SELinux support is enabled
Mar 4 01:36:28.201242 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Mar 4 01:36:28.204508 dbus-daemon[1478]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1421 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Mar 4 01:36:28.202426 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Mar 4 01:36:28.211583 systemd[1]: Starting update-engine.service - Update Engine...
Mar 4 01:36:28.217520 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Mar 4 01:36:28.220336 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Mar 4 01:36:28.226473 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Mar 4 01:36:28.232847 extend-filesystems[1480]: Found loop4
Mar 4 01:36:28.232847 extend-filesystems[1480]: Found loop5
Mar 4 01:36:28.232847 extend-filesystems[1480]: Found loop6
Mar 4 01:36:28.232847 extend-filesystems[1480]: Found loop7
Mar 4 01:36:28.232847 extend-filesystems[1480]: Found vda
Mar 4 01:36:28.232847 extend-filesystems[1480]: Found vda1
Mar 4 01:36:28.232847 extend-filesystems[1480]: Found vda2
Mar 4 01:36:28.232847 extend-filesystems[1480]: Found vda3
Mar 4 01:36:28.232847 extend-filesystems[1480]: Found usr
Mar 4 01:36:28.232847 extend-filesystems[1480]: Found vda4
Mar 4 01:36:28.232847 extend-filesystems[1480]: Found vda6
Mar 4 01:36:28.232847 extend-filesystems[1480]: Found vda7
Mar 4 01:36:28.232847 extend-filesystems[1480]: Found vda9
Mar 4 01:36:28.232847 extend-filesystems[1480]: Checking size of /dev/vda9
Mar 4 01:36:28.343921 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks
Mar 4 01:36:28.343985 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1439)
Mar 4 01:36:28.272424 dbus-daemon[1478]: [system] Successfully activated service 'org.freedesktop.systemd1'
Mar 4 01:36:28.344181 extend-filesystems[1480]: Resized partition /dev/vda9
Mar 4 01:36:28.235033 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Mar 4 01:36:28.350783 extend-filesystems[1514]: resize2fs 1.47.1 (20-May-2024)
Mar 4 01:36:28.237453 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Mar 4 01:36:28.248785 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Mar 4 01:36:28.364772 update_engine[1488]: I20260304 01:36:28.280318 1488 main.cc:92] Flatcar Update Engine starting
Mar 4 01:36:28.364772 update_engine[1488]: I20260304 01:36:28.282327 1488 update_check_scheduler.cc:74] Next update check in 11m44s
Mar 4 01:36:28.249060 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Mar 4 01:36:28.376438 jq[1490]: true
Mar 4 01:36:28.265079 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Mar 4 01:36:28.266893 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Mar 4 01:36:28.266936 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Mar 4 01:36:28.377292 tar[1497]: linux-amd64/LICENSE
Mar 4 01:36:28.377292 tar[1497]: linux-amd64/helm
Mar 4 01:36:28.268580 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Mar 4 01:36:28.388908 jq[1510]: true
Mar 4 01:36:28.268611 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Mar 4 01:36:28.288609 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Mar 4 01:36:28.294775 systemd[1]: Started update-engine.service - Update Engine.
Mar 4 01:36:28.298659 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Mar 4 01:36:28.367007 systemd[1]: motdgen.service: Deactivated successfully.
Mar 4 01:36:28.367265 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Mar 4 01:36:28.369930 (ntainerd)[1511]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Mar 4 01:36:28.558445 systemd-logind[1487]: Watching system buttons on /dev/input/event2 (Power Button)
Mar 4 01:36:28.558899 systemd-logind[1487]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Mar 4 01:36:28.559405 systemd-logind[1487]: New seat seat0.
Mar 4 01:36:28.562052 systemd[1]: Started systemd-logind.service - User Login Management.
Mar 4 01:36:28.604065 dbus-daemon[1478]: [system] Successfully activated service 'org.freedesktop.hostname1'
Mar 4 01:36:28.607823 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Mar 4 01:36:28.623401 kernel: EXT4-fs (vda9): resized filesystem to 15121403
Mar 4 01:36:28.622892 dbus-daemon[1478]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1505 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Mar 4 01:36:28.623529 bash[1536]: Updated "/home/core/.ssh/authorized_keys"
Mar 4 01:36:28.624247 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Mar 4 01:36:28.643049 systemd[1]: Starting polkit.service - Authorization Manager...
Mar 4 01:36:28.657908 systemd[1]: Starting sshkeys.service...
Mar 4 01:36:28.689872 systemd-timesyncd[1414]: Contacted time server 178.79.138.215:123 (2.flatcar.pool.ntp.org).
Mar 4 01:36:28.690177 systemd-timesyncd[1414]: Initial clock synchronization to Wed 2026-03-04 01:36:28.868340 UTC.
Mar 4 01:36:28.702233 extend-filesystems[1514]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Mar 4 01:36:28.702233 extend-filesystems[1514]: old_desc_blocks = 1, new_desc_blocks = 8
Mar 4 01:36:28.702233 extend-filesystems[1514]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long.
Mar 4 01:36:28.696560 systemd[1]: extend-filesystems.service: Deactivated successfully.
Mar 4 01:36:28.711839 extend-filesystems[1480]: Resized filesystem in /dev/vda9
Mar 4 01:36:28.696884 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Mar 4 01:36:28.717118 polkitd[1543]: Started polkitd version 121
Mar 4 01:36:28.755799 polkitd[1543]: Loading rules from directory /etc/polkit-1/rules.d
Mar 4 01:36:28.755916 polkitd[1543]: Loading rules from directory /usr/share/polkit-1/rules.d
Mar 4 01:36:28.756585 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Mar 4 01:36:28.764417 polkitd[1543]: Finished loading, compiling and executing 2 rules
Mar 4 01:36:28.767844 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Mar 4 01:36:28.770448 dbus-daemon[1478]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Mar 4 01:36:28.771164 systemd[1]: Started polkit.service - Authorization Manager.
Mar 4 01:36:28.771885 polkitd[1543]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Mar 4 01:36:28.810241 systemd-hostnamed[1505]: Hostname set to (static)
Mar 4 01:36:28.872740 containerd[1511]: time="2026-03-04T01:36:28.872509781Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Mar 4 01:36:28.912107 locksmithd[1512]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Mar 4 01:36:28.925075 sshd_keygen[1513]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Mar 4 01:36:28.943541 containerd[1511]: time="2026-03-04T01:36:28.943454776Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Mar 4 01:36:28.948500 containerd[1511]: time="2026-03-04T01:36:28.948456390Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Mar 4 01:36:28.948563 containerd[1511]: time="2026-03-04T01:36:28.948500714Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Mar 4 01:36:28.948563 containerd[1511]: time="2026-03-04T01:36:28.948524559Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Mar 4 01:36:28.948852 containerd[1511]: time="2026-03-04T01:36:28.948821815Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Mar 4 01:36:28.948907 containerd[1511]: time="2026-03-04T01:36:28.948861994Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Mar 4 01:36:28.949012 containerd[1511]: time="2026-03-04T01:36:28.948974244Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Mar 4 01:36:28.949012 containerd[1511]: time="2026-03-04T01:36:28.949006115Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Mar 4 01:36:28.949300 containerd[1511]: time="2026-03-04T01:36:28.949267750Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 4 01:36:28.949360 containerd[1511]: time="2026-03-04T01:36:28.949299668Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Mar 4 01:36:28.949360 containerd[1511]: time="2026-03-04T01:36:28.949328587Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Mar 4 01:36:28.949360 containerd[1511]: time="2026-03-04T01:36:28.949347735Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Mar 4 01:36:28.949640 containerd[1511]: time="2026-03-04T01:36:28.949524148Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Mar 4 01:36:28.950027 containerd[1511]: time="2026-03-04T01:36:28.949926454Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Mar 4 01:36:28.950386 containerd[1511]: time="2026-03-04T01:36:28.950090485Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 4 01:36:28.950386 containerd[1511]: time="2026-03-04T01:36:28.950121380Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Mar 4 01:36:28.950386 containerd[1511]: time="2026-03-04T01:36:28.950246960Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Mar 4 01:36:28.950386 containerd[1511]: time="2026-03-04T01:36:28.950332428Z" level=info msg="metadata content store policy set" policy=shared
Mar 4 01:36:28.955558 containerd[1511]: time="2026-03-04T01:36:28.955508847Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Mar 4 01:36:28.955740 containerd[1511]: time="2026-03-04T01:36:28.955592792Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Mar 4 01:36:28.955740 containerd[1511]: time="2026-03-04T01:36:28.955621072Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Mar 4 01:36:28.955740 containerd[1511]: time="2026-03-04T01:36:28.955687625Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Mar 4 01:36:28.956089 containerd[1511]: time="2026-03-04T01:36:28.955742480Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Mar 4 01:36:28.956089 containerd[1511]: time="2026-03-04T01:36:28.955945420Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Mar 4 01:36:28.957059 containerd[1511]: time="2026-03-04T01:36:28.956819407Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Mar 4 01:36:28.957059 containerd[1511]: time="2026-03-04T01:36:28.957016186Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Mar 4 01:36:28.957059 containerd[1511]: time="2026-03-04T01:36:28.957044554Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Mar 4 01:36:28.957317 containerd[1511]: time="2026-03-04T01:36:28.957065018Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Mar 4 01:36:28.957317 containerd[1511]: time="2026-03-04T01:36:28.957096367Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Mar 4 01:36:28.957317 containerd[1511]: time="2026-03-04T01:36:28.957122031Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Mar 4 01:36:28.957317 containerd[1511]: time="2026-03-04T01:36:28.957142621Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Mar 4 01:36:28.957317 containerd[1511]: time="2026-03-04T01:36:28.957162943Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Mar 4 01:36:28.957317 containerd[1511]: time="2026-03-04T01:36:28.957182744Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Mar 4 01:36:28.957317 containerd[1511]: time="2026-03-04T01:36:28.957201262Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Mar 4 01:36:28.957317 containerd[1511]: time="2026-03-04T01:36:28.957219561Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Mar 4 01:36:28.957317 containerd[1511]: time="2026-03-04T01:36:28.957236125Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Mar 4 01:36:28.957317 containerd[1511]: time="2026-03-04T01:36:28.957274085Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Mar 4 01:36:28.957317 containerd[1511]: time="2026-03-04T01:36:28.957311866Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Mar 4 01:36:28.958205 containerd[1511]: time="2026-03-04T01:36:28.957331358Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Mar 4 01:36:28.958205 containerd[1511]: time="2026-03-04T01:36:28.957351546Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Mar 4 01:36:28.958205 containerd[1511]: time="2026-03-04T01:36:28.957390446Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Mar 4 01:36:28.958205 containerd[1511]: time="2026-03-04T01:36:28.957413233Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Mar 4 01:36:28.958205 containerd[1511]: time="2026-03-04T01:36:28.957431846Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Mar 4 01:36:28.958205 containerd[1511]: time="2026-03-04T01:36:28.957449890Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Mar 4 01:36:28.958205 containerd[1511]: time="2026-03-04T01:36:28.957468248Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Mar 4 01:36:28.958205 containerd[1511]: time="2026-03-04T01:36:28.957488648Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Mar 4 01:36:28.958205 containerd[1511]: time="2026-03-04T01:36:28.957506627Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Mar 4 01:36:28.958205 containerd[1511]: time="2026-03-04T01:36:28.957526743Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Mar 4 01:36:28.958205 containerd[1511]: time="2026-03-04T01:36:28.957544570Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Mar 4 01:36:28.958205 containerd[1511]: time="2026-03-04T01:36:28.957565299Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Mar 4 01:36:28.958205 containerd[1511]: time="2026-03-04T01:36:28.957627179Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Mar 4 01:36:28.958205 containerd[1511]: time="2026-03-04T01:36:28.957650836Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Mar 4 01:36:28.958205 containerd[1511]: time="2026-03-04T01:36:28.957674200Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Mar 4 01:36:28.959655 containerd[1511]: time="2026-03-04T01:36:28.957794061Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Mar 4 01:36:28.959655 containerd[1511]: time="2026-03-04T01:36:28.957932761Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Mar 4 01:36:28.959655 containerd[1511]: time="2026-03-04T01:36:28.957957951Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Mar 4 01:36:28.959655 containerd[1511]: time="2026-03-04T01:36:28.957981382Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Mar 4 01:36:28.959655 containerd[1511]: time="2026-03-04T01:36:28.957997716Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Mar 4 01:36:28.959655 containerd[1511]: time="2026-03-04T01:36:28.958015977Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Mar 4 01:36:28.959655 containerd[1511]: time="2026-03-04T01:36:28.958046196Z" level=info msg="NRI interface is disabled by configuration."
Mar 4 01:36:28.959655 containerd[1511]: time="2026-03-04T01:36:28.958065591Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Mar 4 01:36:28.959944 containerd[1511]: time="2026-03-04T01:36:28.958457643Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Mar 4 01:36:28.959944 containerd[1511]: time="2026-03-04T01:36:28.958548397Z" level=info msg="Connect containerd service"
Mar 4 01:36:28.959944 containerd[1511]: time="2026-03-04T01:36:28.958607142Z" level=info msg="using legacy CRI server"
Mar 4 01:36:28.959944 containerd[1511]: time="2026-03-04T01:36:28.958623425Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Mar 4 01:36:28.959944 containerd[1511]: time="2026-03-04T01:36:28.958803591Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Mar 4 01:36:28.959944 containerd[1511]: time="2026-03-04T01:36:28.959740948Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 4 01:36:28.961226 containerd[1511]: time="2026-03-04T01:36:28.959902257Z" level=info msg="Start subscribing containerd event"
Mar 4 01:36:28.961226 containerd[1511]: time="2026-03-04T01:36:28.960004635Z" level=info msg="Start recovering state"
Mar 4 01:36:28.961226 containerd[1511]: time="2026-03-04T01:36:28.960135760Z" level=info msg="Start event monitor"
Mar 4 01:36:28.961226 containerd[1511]: time="2026-03-04T01:36:28.960163847Z" level=info msg="Start snapshots syncer"
Mar 4 01:36:28.961226 containerd[1511]: time="2026-03-04T01:36:28.960180427Z" level=info msg="Start cni network conf syncer for default"
Mar 4 01:36:28.961226 containerd[1511]: time="2026-03-04T01:36:28.960192604Z" level=info msg="Start streaming server"
Mar 4 01:36:28.962803 containerd[1511]: time="2026-03-04T01:36:28.962678863Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Mar 4 01:36:28.964647 containerd[1511]: time="2026-03-04T01:36:28.962878673Z" level=info msg=serving... address=/run/containerd/containerd.sock
Mar 4 01:36:28.963088 systemd[1]: Started containerd.service - containerd container runtime.
Mar 4 01:36:28.965060 containerd[1511]: time="2026-03-04T01:36:28.964921211Z" level=info msg="containerd successfully booted in 0.096064s"
Mar 4 01:36:28.990047 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Mar 4 01:36:29.002221 systemd[1]: Starting issuegen.service - Generate /run/issue...
Mar 4 01:36:29.006163 systemd[1]: Started sshd@0-10.230.15.118:22-20.161.92.111:37330.service - OpenSSH per-connection server daemon (20.161.92.111:37330).
Mar 4 01:36:29.023804 systemd[1]: issuegen.service: Deactivated successfully.
Mar 4 01:36:29.024096 systemd[1]: Finished issuegen.service - Generate /run/issue.
Mar 4 01:36:29.040909 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Mar 4 01:36:29.073694 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Mar 4 01:36:29.084721 systemd[1]: Started getty@tty1.service - Getty on tty1.
Mar 4 01:36:29.094662 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Mar 4 01:36:29.096614 systemd[1]: Reached target getty.target - Login Prompts.
Mar 4 01:36:29.219951 systemd-networkd[1421]: eth0: Gained IPv6LL
Mar 4 01:36:29.226440 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Mar 4 01:36:29.230949 systemd[1]: Reached target network-online.target - Network is Online.
Mar 4 01:36:29.240759 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 4 01:36:29.251584 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Mar 4 01:36:29.301705 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Mar 4 01:36:29.409535 tar[1497]: linux-amd64/README.md
Mar 4 01:36:29.425080 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Mar 4 01:36:29.627335 sshd[1576]: Accepted publickey for core from 20.161.92.111 port 37330 ssh2: RSA SHA256:phL7137i5y6DHtmwXYw8sU0DtZKGvJBo2Tpr6jEeFOI
Mar 4 01:36:29.629278 sshd[1576]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 01:36:29.648425 systemd-logind[1487]: New session 1 of user core.
Mar 4 01:36:29.652158 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Mar 4 01:36:29.670029 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Mar 4 01:36:29.695799 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Mar 4 01:36:29.705946 systemd[1]: Starting user@500.service - User Manager for UID 500...
Mar 4 01:36:29.724225 (systemd)[1602]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Mar 4 01:36:29.871056 systemd[1602]: Queued start job for default target default.target.
Mar 4 01:36:29.878075 systemd[1602]: Created slice app.slice - User Application Slice.
Mar 4 01:36:29.878118 systemd[1602]: Reached target paths.target - Paths.
Mar 4 01:36:29.878157 systemd[1602]: Reached target timers.target - Timers.
Mar 4 01:36:29.880827 systemd[1602]: Starting dbus.socket - D-Bus User Message Bus Socket...
Mar 4 01:36:29.908857 systemd[1602]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Mar 4 01:36:29.909077 systemd[1602]: Reached target sockets.target - Sockets.
Mar 4 01:36:29.909112 systemd[1602]: Reached target basic.target - Basic System.
Mar 4 01:36:29.909190 systemd[1602]: Reached target default.target - Main User Target.
Mar 4 01:36:29.909257 systemd[1602]: Startup finished in 173ms.
Mar 4 01:36:29.909533 systemd[1]: Started user@500.service - User Manager for UID 500.
Mar 4 01:36:29.924764 systemd[1]: Started session-1.scope - Session 1 of User core.
Mar 4 01:36:30.356910 systemd[1]: Started sshd@1-10.230.15.118:22-20.161.92.111:57328.service - OpenSSH per-connection server daemon (20.161.92.111:57328).
Mar 4 01:36:30.466510 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 4 01:36:30.492118 (kubelet)[1620]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 4 01:36:30.729052 systemd-networkd[1421]: eth0: Ignoring DHCPv6 address 2a02:1348:179:83dd:24:19ff:fee6:f76/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:83dd:24:19ff:fee6:f76/64 assigned by NDisc.
Mar 4 01:36:30.729066 systemd-networkd[1421]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no.
Mar 4 01:36:30.933862 sshd[1614]: Accepted publickey for core from 20.161.92.111 port 57328 ssh2: RSA SHA256:phL7137i5y6DHtmwXYw8sU0DtZKGvJBo2Tpr6jEeFOI
Mar 4 01:36:30.935933 sshd[1614]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 01:36:30.946816 systemd-logind[1487]: New session 2 of user core.
Mar 4 01:36:30.951683 systemd[1]: Started session-2.scope - Session 2 of User core.
Mar 4 01:36:31.245233 kubelet[1620]: E0304 01:36:31.244959 1620 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 4 01:36:31.249511 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 4 01:36:31.249757 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 4 01:36:31.250167 systemd[1]: kubelet.service: Consumed 1.059s CPU time.
Mar 4 01:36:31.351012 sshd[1614]: pam_unix(sshd:session): session closed for user core
Mar 4 01:36:31.355669 systemd[1]: sshd@1-10.230.15.118:22-20.161.92.111:57328.service: Deactivated successfully.
Mar 4 01:36:31.358082 systemd[1]: session-2.scope: Deactivated successfully.
Mar 4 01:36:31.359459 systemd-logind[1487]: Session 2 logged out. Waiting for processes to exit.
Mar 4 01:36:31.360851 systemd-logind[1487]: Removed session 2.
Mar 4 01:36:31.463932 systemd[1]: Started sshd@2-10.230.15.118:22-20.161.92.111:57338.service - OpenSSH per-connection server daemon (20.161.92.111:57338).
Mar 4 01:36:32.037206 sshd[1634]: Accepted publickey for core from 20.161.92.111 port 57338 ssh2: RSA SHA256:phL7137i5y6DHtmwXYw8sU0DtZKGvJBo2Tpr6jEeFOI
Mar 4 01:36:32.039966 sshd[1634]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 01:36:32.047809 systemd-logind[1487]: New session 3 of user core.
Mar 4 01:36:32.058845 systemd[1]: Started session-3.scope - Session 3 of User core.
Mar 4 01:36:32.448239 sshd[1634]: pam_unix(sshd:session): session closed for user core
Mar 4 01:36:32.453808 systemd-logind[1487]: Session 3 logged out. Waiting for processes to exit.
Mar 4 01:36:32.454658 systemd[1]: sshd@2-10.230.15.118:22-20.161.92.111:57338.service: Deactivated successfully.
Mar 4 01:36:32.457030 systemd[1]: session-3.scope: Deactivated successfully.
Mar 4 01:36:32.459397 systemd-logind[1487]: Removed session 3.
Mar 4 01:36:34.135282 login[1584]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Mar 4 01:36:34.148217 systemd-logind[1487]: New session 4 of user core.
Mar 4 01:36:34.161019 systemd[1]: Started session-4.scope - Session 4 of User core.
Mar 4 01:36:34.163082 login[1583]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Mar 4 01:36:34.175086 systemd-logind[1487]: New session 5 of user core.
Mar 4 01:36:34.187737 systemd[1]: Started session-5.scope - Session 5 of User core.
Mar 4 01:36:35.273978 coreos-metadata[1477]: Mar 04 01:36:35.273 WARN failed to locate config-drive, using the metadata service API instead
Mar 4 01:36:35.299972 coreos-metadata[1477]: Mar 04 01:36:35.299 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1
Mar 4 01:36:35.307270 coreos-metadata[1477]: Mar 04 01:36:35.307 INFO Fetch failed with 404: resource not found
Mar 4 01:36:35.307270 coreos-metadata[1477]: Mar 04 01:36:35.307 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Mar 4 01:36:35.308044 coreos-metadata[1477]: Mar 04 01:36:35.308 INFO Fetch successful
Mar 4 01:36:35.308231 coreos-metadata[1477]: Mar 04 01:36:35.308 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1
Mar 4 01:36:35.326601 coreos-metadata[1477]: Mar 04 01:36:35.326 INFO Fetch successful
Mar 4 01:36:35.326968 coreos-metadata[1477]: Mar 04 01:36:35.326 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1
Mar 4 01:36:35.343171 coreos-metadata[1477]: Mar 04 01:36:35.343 INFO Fetch successful
Mar 4 01:36:35.343664 coreos-metadata[1477]: Mar 04 01:36:35.343 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1
Mar 4 01:36:35.358381 coreos-metadata[1477]: Mar 04 01:36:35.358 INFO Fetch successful
Mar 4 01:36:35.358800 coreos-metadata[1477]: Mar 04 01:36:35.358 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1
Mar 4 01:36:35.378650 coreos-metadata[1477]: Mar 04 01:36:35.378 INFO Fetch successful
Mar 4 01:36:35.417993 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Mar 4 01:36:35.419821 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Mar 4 01:36:35.908819 coreos-metadata[1554]: Mar 04 01:36:35.908 WARN failed to locate config-drive, using the metadata service API instead
Mar 4 01:36:35.932498 coreos-metadata[1554]: Mar 04 01:36:35.932 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1
Mar 4 01:36:35.957469 coreos-metadata[1554]: Mar 04 01:36:35.957 INFO Fetch successful
Mar 4 01:36:35.957717 coreos-metadata[1554]: Mar 04 01:36:35.957 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1
Mar 4 01:36:35.984521 coreos-metadata[1554]: Mar 04 01:36:35.984 INFO Fetch successful
Mar 4 01:36:35.986826 unknown[1554]: wrote ssh authorized keys file for user: core
Mar 4 01:36:36.023912 update-ssh-keys[1675]: Updated "/home/core/.ssh/authorized_keys"
Mar 4 01:36:36.024807 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Mar 4 01:36:36.027738 systemd[1]: Finished sshkeys.service.
Mar 4 01:36:36.030618 systemd[1]: Reached target multi-user.target - Multi-User System.
Mar 4 01:36:36.033557 systemd[1]: Startup finished in 1.441s (kernel) + 14.462s (initrd) + 11.867s (userspace) = 27.771s.
Mar 4 01:36:41.501352 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Mar 4 01:36:41.512809 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 4 01:36:41.703664 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 4 01:36:41.719111 (kubelet)[1686]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 4 01:36:41.817671 kubelet[1686]: E0304 01:36:41.817413 1686 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 4 01:36:41.822279 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 4 01:36:41.822617 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 4 01:36:42.648770 systemd[1]: Started sshd@3-10.230.15.118:22-20.161.92.111:49612.service - OpenSSH per-connection server daemon (20.161.92.111:49612).
Mar 4 01:36:43.225750 systemd[1]: Started sshd@4-10.230.15.118:22-45.78.206.111:58540.service - OpenSSH per-connection server daemon (45.78.206.111:58540).
Mar 4 01:36:43.249568 sshd[1695]: Accepted publickey for core from 20.161.92.111 port 49612 ssh2: RSA SHA256:phL7137i5y6DHtmwXYw8sU0DtZKGvJBo2Tpr6jEeFOI
Mar 4 01:36:43.251882 sshd[1695]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 01:36:43.258762 systemd-logind[1487]: New session 6 of user core.
Mar 4 01:36:43.274988 systemd[1]: Started session-6.scope - Session 6 of User core.
Mar 4 01:36:43.326718 systemd[1]: Started sshd@5-10.230.15.118:22-103.189.208.13:36278.service - OpenSSH per-connection server daemon (103.189.208.13:36278).
Mar 4 01:36:43.675780 sshd[1695]: pam_unix(sshd:session): session closed for user core
Mar 4 01:36:43.681457 systemd-logind[1487]: Session 6 logged out. Waiting for processes to exit.
Mar 4 01:36:43.682000 systemd[1]: sshd@3-10.230.15.118:22-20.161.92.111:49612.service: Deactivated successfully.
Mar 4 01:36:43.684504 systemd[1]: session-6.scope: Deactivated successfully.
Mar 4 01:36:43.686775 systemd-logind[1487]: Removed session 6.
Mar 4 01:36:43.777775 systemd[1]: Started sshd@6-10.230.15.118:22-20.161.92.111:49616.service - OpenSSH per-connection server daemon (20.161.92.111:49616).
Mar 4 01:36:44.381624 sshd[1708]: Accepted publickey for core from 20.161.92.111 port 49616 ssh2: RSA SHA256:phL7137i5y6DHtmwXYw8sU0DtZKGvJBo2Tpr6jEeFOI
Mar 4 01:36:44.383968 sshd[1708]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 01:36:44.390726 systemd-logind[1487]: New session 7 of user core.
Mar 4 01:36:44.406741 systemd[1]: Started session-7.scope - Session 7 of User core.
Mar 4 01:36:44.519949 sshd[1702]: Received disconnect from 103.189.208.13 port 36278:11: Bye Bye [preauth]
Mar 4 01:36:44.519949 sshd[1702]: Disconnected from authenticating user root 103.189.208.13 port 36278 [preauth]
Mar 4 01:36:44.522611 systemd[1]: sshd@5-10.230.15.118:22-103.189.208.13:36278.service: Deactivated successfully.
Mar 4 01:36:44.796149 sshd[1708]: pam_unix(sshd:session): session closed for user core
Mar 4 01:36:44.801690 systemd[1]: sshd@6-10.230.15.118:22-20.161.92.111:49616.service: Deactivated successfully.
Mar 4 01:36:44.804510 systemd[1]: session-7.scope: Deactivated successfully.
Mar 4 01:36:44.807130 systemd-logind[1487]: Session 7 logged out. Waiting for processes to exit.
Mar 4 01:36:44.808941 systemd-logind[1487]: Removed session 7.
Mar 4 01:36:44.909719 systemd[1]: Started sshd@7-10.230.15.118:22-20.161.92.111:49620.service - OpenSSH per-connection server daemon (20.161.92.111:49620).
Mar 4 01:36:45.512418 sshd[1717]: Accepted publickey for core from 20.161.92.111 port 49620 ssh2: RSA SHA256:phL7137i5y6DHtmwXYw8sU0DtZKGvJBo2Tpr6jEeFOI
Mar 4 01:36:45.513330 sshd[1717]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 01:36:45.520087 systemd-logind[1487]: New session 8 of user core.
Mar 4 01:36:45.531590 systemd[1]: Started session-8.scope - Session 8 of User core.
Mar 4 01:36:45.933357 sshd[1717]: pam_unix(sshd:session): session closed for user core
Mar 4 01:36:45.939276 systemd[1]: sshd@7-10.230.15.118:22-20.161.92.111:49620.service: Deactivated successfully.
Mar 4 01:36:45.942092 systemd[1]: session-8.scope: Deactivated successfully.
Mar 4 01:36:45.944762 systemd-logind[1487]: Session 8 logged out. Waiting for processes to exit.
Mar 4 01:36:45.946322 systemd-logind[1487]: Removed session 8.
Mar 4 01:36:46.035837 systemd[1]: Started sshd@8-10.230.15.118:22-20.161.92.111:49630.service - OpenSSH per-connection server daemon (20.161.92.111:49630).
Mar 4 01:36:46.600414 sshd[1724]: Accepted publickey for core from 20.161.92.111 port 49630 ssh2: RSA SHA256:phL7137i5y6DHtmwXYw8sU0DtZKGvJBo2Tpr6jEeFOI
Mar 4 01:36:46.602492 sshd[1724]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 01:36:46.609959 systemd-logind[1487]: New session 9 of user core.
Mar 4 01:36:46.623637 systemd[1]: Started session-9.scope - Session 9 of User core.
Mar 4 01:36:46.930919 sudo[1727]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Mar 4 01:36:46.931476 sudo[1727]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 4 01:36:46.950511 sudo[1727]: pam_unix(sudo:session): session closed for user root
Mar 4 01:36:47.041197 sshd[1724]: pam_unix(sshd:session): session closed for user core
Mar 4 01:36:47.046025 systemd-logind[1487]: Session 9 logged out. Waiting for processes to exit.
Mar 4 01:36:47.046960 systemd[1]: sshd@8-10.230.15.118:22-20.161.92.111:49630.service: Deactivated successfully.
Mar 4 01:36:47.050139 systemd[1]: session-9.scope: Deactivated successfully.
Mar 4 01:36:47.052515 systemd-logind[1487]: Removed session 9.
Mar 4 01:36:47.157836 systemd[1]: Started sshd@9-10.230.15.118:22-20.161.92.111:49632.service - OpenSSH per-connection server daemon (20.161.92.111:49632).
Mar 4 01:36:47.407802 systemd[1]: Started sshd@10-10.230.15.118:22-202.125.94.71:60880.service - OpenSSH per-connection server daemon (202.125.94.71:60880).
Mar 4 01:36:47.760415 sshd[1732]: Accepted publickey for core from 20.161.92.111 port 49632 ssh2: RSA SHA256:phL7137i5y6DHtmwXYw8sU0DtZKGvJBo2Tpr6jEeFOI
Mar 4 01:36:47.761162 sshd[1732]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 01:36:47.767877 systemd-logind[1487]: New session 10 of user core.
Mar 4 01:36:47.778019 systemd[1]: Started session-10.scope - Session 10 of User core.
Mar 4 01:36:48.088310 sudo[1739]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Mar 4 01:36:48.089477 sudo[1739]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 4 01:36:48.095954 sudo[1739]: pam_unix(sudo:session): session closed for user root
Mar 4 01:36:48.103941 sudo[1738]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Mar 4 01:36:48.104435 sudo[1738]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 4 01:36:48.131254 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Mar 4 01:36:48.133283 auditctl[1742]: No rules
Mar 4 01:36:48.133827 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 4 01:36:48.134628 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Mar 4 01:36:48.145263 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Mar 4 01:36:48.177304 augenrules[1760]: No rules
Mar 4 01:36:48.178319 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Mar 4 01:36:48.179698 sudo[1738]: pam_unix(sudo:session): session closed for user root
Mar 4 01:36:48.274308 sshd[1732]: pam_unix(sshd:session): session closed for user core
Mar 4 01:36:48.279097 systemd[1]: sshd@9-10.230.15.118:22-20.161.92.111:49632.service: Deactivated successfully.
Mar 4 01:36:48.281518 systemd[1]: session-10.scope: Deactivated successfully.
Mar 4 01:36:48.282549 systemd-logind[1487]: Session 10 logged out. Waiting for processes to exit.
Mar 4 01:36:48.284069 systemd-logind[1487]: Removed session 10.
Mar 4 01:36:48.387743 systemd[1]: Started sshd@11-10.230.15.118:22-20.161.92.111:49644.service - OpenSSH per-connection server daemon (20.161.92.111:49644).
Mar 4 01:36:48.589771 sshd[1735]: Received disconnect from 202.125.94.71 port 60880:11: Bye Bye [preauth]
Mar 4 01:36:48.590518 sshd[1735]: Disconnected from authenticating user root 202.125.94.71 port 60880 [preauth]
Mar 4 01:36:48.592102 systemd[1]: sshd@10-10.230.15.118:22-202.125.94.71:60880.service: Deactivated successfully.
Mar 4 01:36:49.008200 sshd[1768]: Accepted publickey for core from 20.161.92.111 port 49644 ssh2: RSA SHA256:phL7137i5y6DHtmwXYw8sU0DtZKGvJBo2Tpr6jEeFOI
Mar 4 01:36:49.010764 sshd[1768]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 01:36:49.018416 systemd-logind[1487]: New session 11 of user core.
Mar 4 01:36:49.028591 systemd[1]: Started session-11.scope - Session 11 of User core.
Mar 4 01:36:49.323728 sudo[1773]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Mar 4 01:36:49.324184 sudo[1773]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 4 01:36:49.798764 systemd[1]: Starting docker.service - Docker Application Container Engine...
Mar 4 01:36:49.801554 (dockerd)[1789]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Mar 4 01:36:50.242311 dockerd[1789]: time="2026-03-04T01:36:50.241998915Z" level=info msg="Starting up"
Mar 4 01:36:50.400455 dockerd[1789]: time="2026-03-04T01:36:50.400088453Z" level=info msg="Loading containers: start."
Mar 4 01:36:50.570764 kernel: Initializing XFRM netlink socket
Mar 4 01:36:50.673381 systemd-networkd[1421]: docker0: Link UP
Mar 4 01:36:50.702256 dockerd[1789]: time="2026-03-04T01:36:50.702192509Z" level=info msg="Loading containers: done."
Mar 4 01:36:50.721099 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1570508558-merged.mount: Deactivated successfully.
Mar 4 01:36:50.727399 dockerd[1789]: time="2026-03-04T01:36:50.726936900Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Mar 4 01:36:50.727399 dockerd[1789]: time="2026-03-04T01:36:50.727096055Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Mar 4 01:36:50.727399 dockerd[1789]: time="2026-03-04T01:36:50.727269388Z" level=info msg="Daemon has completed initialization"
Mar 4 01:36:50.770403 dockerd[1789]: time="2026-03-04T01:36:50.768943751Z" level=info msg="API listen on /run/docker.sock"
Mar 4 01:36:50.770727 systemd[1]: Started docker.service - Docker Application Container Engine.
Mar 4 01:36:51.465145 containerd[1511]: time="2026-03-04T01:36:51.465067126Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\""
Mar 4 01:36:52.073090 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Mar 4 01:36:52.088245 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 4 01:36:52.249681 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 4 01:36:52.259828 (kubelet)[1940]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 4 01:36:52.353946 kubelet[1940]: E0304 01:36:52.353793 1940 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 4 01:36:52.357674 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 4 01:36:52.357946 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 4 01:36:52.503184 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2888125656.mount: Deactivated successfully.
Mar 4 01:36:54.465889 containerd[1511]: time="2026-03-04T01:36:54.465673086Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 01:36:54.468110 containerd[1511]: time="2026-03-04T01:36:54.467384556Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.9: active requests=0, bytes read=30116194"
Mar 4 01:36:54.468988 containerd[1511]: time="2026-03-04T01:36:54.468893411Z" level=info msg="ImageCreate event name:\"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 01:36:54.474151 containerd[1511]: time="2026-03-04T01:36:54.474092993Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 01:36:54.476700 containerd[1511]: time="2026-03-04T01:36:54.476446171Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.9\" with image id \"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\", size \"30112785\" in 3.011284563s"
Mar 4 01:36:54.476700 containerd[1511]: time="2026-03-04T01:36:54.476501779Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\" returns image reference \"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\""
Mar 4 01:36:54.477616 containerd[1511]: time="2026-03-04T01:36:54.477563109Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\""
Mar 4 01:36:56.877097 containerd[1511]: time="2026-03-04T01:36:56.876953150Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 01:36:56.880441 containerd[1511]: time="2026-03-04T01:36:56.879649557Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.9: active requests=0, bytes read=26021818"
Mar 4 01:36:56.880541 containerd[1511]: time="2026-03-04T01:36:56.880488033Z" level=info msg="ImageCreate event name:\"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 01:36:56.886181 containerd[1511]: time="2026-03-04T01:36:56.886096165Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 01:36:56.889069 containerd[1511]: time="2026-03-04T01:36:56.887725575Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.9\" with image id \"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\", size \"27678758\" in 2.410000004s"
Mar 4 01:36:56.889069 containerd[1511]: time="2026-03-04T01:36:56.887797268Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\" returns image reference \"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\""
Mar 4 01:36:56.889786 containerd[1511]: time="2026-03-04T01:36:56.889753612Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\""
Mar 4 01:36:58.458456 containerd[1511]: time="2026-03-04T01:36:58.458110677Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 01:36:58.459865 containerd[1511]: time="2026-03-04T01:36:58.459816468Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.9: active requests=0, bytes read=20162754"
Mar 4 01:36:58.460516 containerd[1511]: time="2026-03-04T01:36:58.460460981Z" level=info msg="ImageCreate event name:\"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 01:36:58.465184 containerd[1511]: time="2026-03-04T01:36:58.465085253Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 01:36:58.467206 containerd[1511]: time="2026-03-04T01:36:58.466948873Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.9\" with image id \"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\", size \"21819712\" in 1.577030861s"
Mar 4 01:36:58.467206 containerd[1511]: time="2026-03-04T01:36:58.467013080Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\" returns image reference \"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\""
Mar 4 01:36:58.468232 containerd[1511]: time="2026-03-04T01:36:58.468200199Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\""
Mar 4 01:37:00.056632 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1333763343.mount: Deactivated successfully.
Mar 4 01:37:00.746534 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Mar 4 01:37:00.814515 containerd[1511]: time="2026-03-04T01:37:00.814407998Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 01:37:00.816444 containerd[1511]: time="2026-03-04T01:37:00.816157931Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.9: active requests=0, bytes read=31828655"
Mar 4 01:37:00.817405 containerd[1511]: time="2026-03-04T01:37:00.817239419Z" level=info msg="ImageCreate event name:\"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 01:37:00.820203 containerd[1511]: time="2026-03-04T01:37:00.820132915Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 01:37:00.821554 containerd[1511]: time="2026-03-04T01:37:00.821502992Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.9\" with image id \"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\", repo tag \"registry.k8s.io/kube-proxy:v1.33.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\", size \"31827666\" in 2.353247072s"
Mar 4 01:37:00.821640 containerd[1511]: time="2026-03-04T01:37:00.821580079Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\" returns image reference \"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\""
Mar 4 01:37:00.823045 containerd[1511]: time="2026-03-04T01:37:00.823012397Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Mar 4 01:37:00.984440 systemd[1]: Started sshd@12-10.230.15.118:22-128.199.136.229:47284.service - OpenSSH per-connection server daemon (128.199.136.229:47284).
Mar 4 01:37:01.396748 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3941729883.mount: Deactivated successfully.
Mar 4 01:37:02.121590 sshd[2026]: Received disconnect from 128.199.136.229 port 47284:11: Bye Bye [preauth]
Mar 4 01:37:02.121590 sshd[2026]: Disconnected from authenticating user root 128.199.136.229 port 47284 [preauth]
Mar 4 01:37:02.123928 systemd[1]: sshd@12-10.230.15.118:22-128.199.136.229:47284.service: Deactivated successfully.
Mar 4 01:37:02.470146 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Mar 4 01:37:02.478649 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 4 01:37:02.672496 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 4 01:37:02.687631 (kubelet)[2090]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 4 01:37:02.781491 kubelet[2090]: E0304 01:37:02.780799 2090 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 4 01:37:02.784509 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 4 01:37:02.784771 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 4 01:37:03.192455 containerd[1511]: time="2026-03-04T01:37:03.192289077Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 01:37:03.194006 containerd[1511]: time="2026-03-04T01:37:03.193961348Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942246"
Mar 4 01:37:03.195423 containerd[1511]: time="2026-03-04T01:37:03.194608247Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 01:37:03.198799 containerd[1511]: time="2026-03-04T01:37:03.198760357Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 01:37:03.202099 containerd[1511]: time="2026-03-04T01:37:03.201906588Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 2.378847408s"
Mar 4 01:37:03.202099 containerd[1511]: time="2026-03-04T01:37:03.201952561Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\""
Mar 4 01:37:03.203119 containerd[1511]: time="2026-03-04T01:37:03.203088035Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Mar 4 01:37:03.221784 systemd[1]: Started sshd@13-10.230.15.118:22-220.149.212.190:44144.service - OpenSSH per-connection server daemon (220.149.212.190:44144).
Mar 4 01:37:03.737089 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2183715823.mount: Deactivated successfully.
Mar 4 01:37:03.744306 containerd[1511]: time="2026-03-04T01:37:03.744240171Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 01:37:03.745407 containerd[1511]: time="2026-03-04T01:37:03.745328451Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146"
Mar 4 01:37:03.748331 containerd[1511]: time="2026-03-04T01:37:03.746253682Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 01:37:03.749391 containerd[1511]: time="2026-03-04T01:37:03.749287356Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 01:37:03.751281 containerd[1511]: time="2026-03-04T01:37:03.750519293Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 547.292645ms"
Mar 4 01:37:03.751281 containerd[1511]: time="2026-03-04T01:37:03.750595786Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Mar 4 01:37:03.751281 containerd[1511]: time="2026-03-04T01:37:03.751125004Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\""
Mar 4 01:37:04.401838 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1697292731.mount: Deactivated successfully.
Mar 4 01:37:06.178228 containerd[1511]: time="2026-03-04T01:37:06.177915000Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 01:37:06.181142 containerd[1511]: time="2026-03-04T01:37:06.181060332Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=23718848"
Mar 4 01:37:06.182807 containerd[1511]: time="2026-03-04T01:37:06.182766767Z" level=info msg="ImageCreate event name:\"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 01:37:06.189257 containerd[1511]: time="2026-03-04T01:37:06.189184122Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 01:37:06.191687 containerd[1511]: time="2026-03-04T01:37:06.191428907Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"23716032\" in 2.440267077s"
Mar 4 01:37:06.191687 containerd[1511]: time="2026-03-04T01:37:06.191493955Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\""
Mar 4 01:37:06.204686 sshd[1698]: Received disconnect from 45.78.206.111 port 58540:11: Bye Bye [preauth]
Mar 4 01:37:06.204686 sshd[1698]: Disconnected from 45.78.206.111 port 58540 [preauth]
Mar 4 01:37:06.208619 systemd[1]: sshd@4-10.230.15.118:22-45.78.206.111:58540.service: Deactivated successfully.
Mar 4 01:37:09.591063 sshd[2099]: Received disconnect from 220.149.212.190 port 44144:11: Bye Bye [preauth]
Mar 4 01:37:09.591063 sshd[2099]: Disconnected from authenticating user root 220.149.212.190 port 44144 [preauth]
Mar 4 01:37:09.595294 systemd[1]: sshd@13-10.230.15.118:22-220.149.212.190:44144.service: Deactivated successfully.
Mar 4 01:37:11.627360 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 4 01:37:11.637854 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 4 01:37:11.674702 systemd[1]: Reloading requested from client PID 2200 ('systemctl') (unit session-11.scope)...
Mar 4 01:37:11.674751 systemd[1]: Reloading...
Mar 4 01:37:11.878405 zram_generator::config[2248]: No configuration found.
Mar 4 01:37:12.006316 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 4 01:37:12.117965 systemd[1]: Reloading finished in 442 ms.
Mar 4 01:37:12.194610 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 4 01:37:12.199480 systemd[1]: kubelet.service: Deactivated successfully.
Mar 4 01:37:12.199783 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 4 01:37:12.205745 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 4 01:37:12.357231 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 4 01:37:12.379917 (kubelet)[2307]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 4 01:37:12.470205 kubelet[2307]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 4 01:37:12.470205 kubelet[2307]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Mar 4 01:37:12.470205 kubelet[2307]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 4 01:37:12.471133 kubelet[2307]: I0304 01:37:12.471060 2307 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 4 01:37:13.077496 kubelet[2307]: I0304 01:37:13.077258 2307 server.go:530] "Kubelet version" kubeletVersion="v1.33.8"
Mar 4 01:37:13.077496 kubelet[2307]: I0304 01:37:13.077338 2307 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 4 01:37:13.079400 kubelet[2307]: I0304 01:37:13.078934 2307 server.go:956] "Client rotation is on, will bootstrap in background"
Mar 4 01:37:13.121906 kubelet[2307]: E0304 01:37:13.121442 2307 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.230.15.118:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.230.15.118:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Mar 4 01:37:13.122310 kubelet[2307]: I0304 01:37:13.122127 2307 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 4 01:37:13.133825 kubelet[2307]: E0304 01:37:13.133402 2307 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Mar 4 01:37:13.133825 kubelet[2307]: I0304 01:37:13.133465 2307 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Mar 4 01:37:13.142095 kubelet[2307]: I0304 01:37:13.142065 2307 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Mar 4 01:37:13.146303 kubelet[2307]: I0304 01:37:13.146247 2307 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 4 01:37:13.150121 kubelet[2307]: I0304 01:37:13.146500 2307 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-g1uyu.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 4 01:37:13.151075 kubelet[2307]: I0304 01:37:13.150432 2307 topology_manager.go:138] "Creating topology manager with none policy"
Mar 4 01:37:13.151075 kubelet[2307]: I0304 01:37:13.150457 2307 container_manager_linux.go:303] "Creating device plugin manager"
Mar 4 01:37:13.151075 kubelet[2307]: I0304 01:37:13.150681 2307 state_mem.go:36] "Initialized new in-memory state store"
Mar 4 01:37:13.156890 kubelet[2307]: I0304 01:37:13.156861 2307 kubelet.go:480] "Attempting to sync node with API server"
Mar 4 01:37:13.157076 kubelet[2307]: I0304 01:37:13.157053 2307 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 4 01:37:13.157246 kubelet[2307]: I0304 01:37:13.157225 2307 kubelet.go:386] "Adding apiserver pod source"
Mar 4 01:37:13.157430 kubelet[2307]: I0304 01:37:13.157409 2307 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 4 01:37:13.164296 kubelet[2307]: E0304 01:37:13.163944 2307 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.230.15.118:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-g1uyu.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.230.15.118:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Mar 4 01:37:13.165777 kubelet[2307]: E0304 01:37:13.165622 2307 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.230.15.118:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.230.15.118:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Mar 4 01:37:13.166178 kubelet[2307]: I0304 01:37:13.166145 2307 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Mar 4 01:37:13.167388 kubelet[2307]: I0304 01:37:13.167000 2307 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Mar 4 01:37:13.167894 kubelet[2307]: W0304 01:37:13.167866 2307 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Mar 4 01:37:13.177924 kubelet[2307]: I0304 01:37:13.177887 2307 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Mar 4 01:37:13.178088 kubelet[2307]: I0304 01:37:13.177956 2307 server.go:1289] "Started kubelet"
Mar 4 01:37:13.181226 kubelet[2307]: I0304 01:37:13.181161 2307 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Mar 4 01:37:13.182649 kubelet[2307]: I0304 01:37:13.182625 2307 server.go:317] "Adding debug handlers to kubelet server"
Mar 4 01:37:13.184409 kubelet[2307]: I0304 01:37:13.183212 2307 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 4 01:37:13.184409 kubelet[2307]: I0304 01:37:13.184043 2307 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 4 01:37:13.186756 kubelet[2307]: E0304 01:37:13.184206 2307 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.230.15.118:6443/api/v1/namespaces/default/events\": dial tcp 10.230.15.118:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-g1uyu.gb1.brightbox.com.18997f9b439b6298 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-g1uyu.gb1.brightbox.com,UID:srv-g1uyu.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-g1uyu.gb1.brightbox.com,},FirstTimestamp:2026-03-04 01:37:13.177916056 +0000 UTC m=+0.792266140,LastTimestamp:2026-03-04 01:37:13.177916056 +0000 UTC m=+0.792266140,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-g1uyu.gb1.brightbox.com,}"
Mar 4 01:37:13.188883 kubelet[2307]: I0304 01:37:13.188856 2307 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 4 01:37:13.197895 kubelet[2307]: I0304 01:37:13.196904 2307 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 4 01:37:13.204580 kubelet[2307]: I0304 01:37:13.204550 2307 volume_manager.go:297] "Starting Kubelet Volume Manager"
Mar 4 01:37:13.205130 kubelet[2307]: E0304 01:37:13.205087 2307 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-g1uyu.gb1.brightbox.com\" not found"
Mar 4 01:37:13.207849 kubelet[2307]: I0304 01:37:13.207773 2307 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Mar 4 01:37:13.207951 kubelet[2307]: I0304 01:37:13.207874 2307 reconciler.go:26] "Reconciler: start to sync state"
Mar 4 01:37:13.211599 kubelet[2307]: E0304 01:37:13.211561 2307 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.230.15.118:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.230.15.118:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Mar 4 01:37:13.211946 kubelet[2307]: E0304 01:37:13.211677 2307 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.15.118:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-g1uyu.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.15.118:6443: connect: connection refused" interval="200ms"
Mar 4 01:37:13.214217 kubelet[2307]: I0304 01:37:13.213833 2307 factory.go:223] Registration of the containerd container factory successfully
Mar 4 01:37:13.214217 kubelet[2307]: I0304 01:37:13.213861 2307 factory.go:223] Registration of the systemd container factory successfully
Mar 4 01:37:13.214217 kubelet[2307]: I0304 01:37:13.213984 2307 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 4 01:37:13.225331 kubelet[2307]: E0304 01:37:13.225295 2307 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 4 01:37:13.235645 kubelet[2307]: I0304 01:37:13.235486 2307 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Mar 4 01:37:13.238174 kubelet[2307]: I0304 01:37:13.238148 2307 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Mar 4 01:37:13.238289 kubelet[2307]: I0304 01:37:13.238191 2307 status_manager.go:230] "Starting to sync pod status with apiserver"
Mar 4 01:37:13.238289 kubelet[2307]: I0304 01:37:13.238231 2307 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Mar 4 01:37:13.238289 kubelet[2307]: I0304 01:37:13.238251 2307 kubelet.go:2436] "Starting kubelet main sync loop"
Mar 4 01:37:13.238499 kubelet[2307]: E0304 01:37:13.238308 2307 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 4 01:37:13.248472 kubelet[2307]: E0304 01:37:13.248345 2307 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.230.15.118:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.230.15.118:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Mar 4 01:37:13.249770 kubelet[2307]: I0304 01:37:13.249737 2307 cpu_manager.go:221] "Starting CPU manager" policy="none"
Mar 4 01:37:13.250474 kubelet[2307]: I0304 01:37:13.250441 2307 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Mar 4 01:37:13.250557
kubelet[2307]: I0304 01:37:13.250484 2307 state_mem.go:36] "Initialized new in-memory state store" Mar 4 01:37:13.254036 kubelet[2307]: I0304 01:37:13.253715 2307 policy_none.go:49] "None policy: Start" Mar 4 01:37:13.254036 kubelet[2307]: I0304 01:37:13.253753 2307 memory_manager.go:186] "Starting memorymanager" policy="None" Mar 4 01:37:13.254036 kubelet[2307]: I0304 01:37:13.253783 2307 state_mem.go:35] "Initializing new in-memory state store" Mar 4 01:37:13.262733 update_engine[1488]: I20260304 01:37:13.262599 1488 update_attempter.cc:509] Updating boot flags... Mar 4 01:37:13.265495 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Mar 4 01:37:13.278984 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Mar 4 01:37:13.295424 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Mar 4 01:37:13.304270 kubelet[2307]: E0304 01:37:13.303979 2307 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 4 01:37:13.305062 kubelet[2307]: I0304 01:37:13.304998 2307 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 4 01:37:13.307679 kubelet[2307]: E0304 01:37:13.305154 2307 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-g1uyu.gb1.brightbox.com\" not found" Mar 4 01:37:13.307679 kubelet[2307]: I0304 01:37:13.305691 2307 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 4 01:37:13.309642 kubelet[2307]: I0304 01:37:13.309482 2307 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 4 01:37:13.310519 kubelet[2307]: E0304 01:37:13.310433 2307 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Mar 4 01:37:13.310519 kubelet[2307]: E0304 01:37:13.310504 2307 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"srv-g1uyu.gb1.brightbox.com\" not found" Mar 4 01:37:13.332065 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2353) Mar 4 01:37:13.400264 systemd[1]: Created slice kubepods-burstable-podbe04d1abf83a4a5c00066e4f8d7a16bb.slice - libcontainer container kubepods-burstable-podbe04d1abf83a4a5c00066e4f8d7a16bb.slice. Mar 4 01:37:13.411893 kubelet[2307]: I0304 01:37:13.410430 2307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/be04d1abf83a4a5c00066e4f8d7a16bb-ca-certs\") pod \"kube-apiserver-srv-g1uyu.gb1.brightbox.com\" (UID: \"be04d1abf83a4a5c00066e4f8d7a16bb\") " pod="kube-system/kube-apiserver-srv-g1uyu.gb1.brightbox.com" Mar 4 01:37:13.413947 kubelet[2307]: E0304 01:37:13.413898 2307 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.15.118:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-g1uyu.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.15.118:6443: connect: connection refused" interval="400ms" Mar 4 01:37:13.414331 kubelet[2307]: I0304 01:37:13.414305 2307 kubelet_node_status.go:75] "Attempting to register node" node="srv-g1uyu.gb1.brightbox.com" Mar 4 01:37:13.415142 kubelet[2307]: E0304 01:37:13.415108 2307 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.15.118:6443/api/v1/nodes\": dial tcp 10.230.15.118:6443: connect: connection refused" node="srv-g1uyu.gb1.brightbox.com" Mar 4 01:37:13.419389 kubelet[2307]: E0304 01:37:13.419138 2307 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-g1uyu.gb1.brightbox.com\" not found" 
node="srv-g1uyu.gb1.brightbox.com" Mar 4 01:37:13.437386 systemd[1]: Created slice kubepods-burstable-pod9a1a49f7d75d7adcd47f1670f63eaa58.slice - libcontainer container kubepods-burstable-pod9a1a49f7d75d7adcd47f1670f63eaa58.slice. Mar 4 01:37:13.450391 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2355) Mar 4 01:37:13.466605 kubelet[2307]: E0304 01:37:13.466566 2307 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-g1uyu.gb1.brightbox.com\" not found" node="srv-g1uyu.gb1.brightbox.com" Mar 4 01:37:13.488083 systemd[1]: Created slice kubepods-burstable-pod3a7323e1fc0057b5ae8dd7267d11937a.slice - libcontainer container kubepods-burstable-pod3a7323e1fc0057b5ae8dd7267d11937a.slice. Mar 4 01:37:13.509921 kubelet[2307]: E0304 01:37:13.509878 2307 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-g1uyu.gb1.brightbox.com\" not found" node="srv-g1uyu.gb1.brightbox.com" Mar 4 01:37:13.511106 kubelet[2307]: I0304 01:37:13.511065 2307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/be04d1abf83a4a5c00066e4f8d7a16bb-usr-share-ca-certificates\") pod \"kube-apiserver-srv-g1uyu.gb1.brightbox.com\" (UID: \"be04d1abf83a4a5c00066e4f8d7a16bb\") " pod="kube-system/kube-apiserver-srv-g1uyu.gb1.brightbox.com" Mar 4 01:37:13.511320 kubelet[2307]: I0304 01:37:13.511290 2307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9a1a49f7d75d7adcd47f1670f63eaa58-ca-certs\") pod \"kube-controller-manager-srv-g1uyu.gb1.brightbox.com\" (UID: \"9a1a49f7d75d7adcd47f1670f63eaa58\") " pod="kube-system/kube-controller-manager-srv-g1uyu.gb1.brightbox.com" Mar 4 01:37:13.513387 kubelet[2307]: I0304 01:37:13.513146 2307 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9a1a49f7d75d7adcd47f1670f63eaa58-kubeconfig\") pod \"kube-controller-manager-srv-g1uyu.gb1.brightbox.com\" (UID: \"9a1a49f7d75d7adcd47f1670f63eaa58\") " pod="kube-system/kube-controller-manager-srv-g1uyu.gb1.brightbox.com" Mar 4 01:37:13.513387 kubelet[2307]: I0304 01:37:13.513200 2307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3a7323e1fc0057b5ae8dd7267d11937a-kubeconfig\") pod \"kube-scheduler-srv-g1uyu.gb1.brightbox.com\" (UID: \"3a7323e1fc0057b5ae8dd7267d11937a\") " pod="kube-system/kube-scheduler-srv-g1uyu.gb1.brightbox.com" Mar 4 01:37:13.513387 kubelet[2307]: I0304 01:37:13.513276 2307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/be04d1abf83a4a5c00066e4f8d7a16bb-k8s-certs\") pod \"kube-apiserver-srv-g1uyu.gb1.brightbox.com\" (UID: \"be04d1abf83a4a5c00066e4f8d7a16bb\") " pod="kube-system/kube-apiserver-srv-g1uyu.gb1.brightbox.com" Mar 4 01:37:13.513387 kubelet[2307]: I0304 01:37:13.513316 2307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9a1a49f7d75d7adcd47f1670f63eaa58-flexvolume-dir\") pod \"kube-controller-manager-srv-g1uyu.gb1.brightbox.com\" (UID: \"9a1a49f7d75d7adcd47f1670f63eaa58\") " pod="kube-system/kube-controller-manager-srv-g1uyu.gb1.brightbox.com" Mar 4 01:37:13.513387 kubelet[2307]: I0304 01:37:13.513344 2307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9a1a49f7d75d7adcd47f1670f63eaa58-k8s-certs\") pod \"kube-controller-manager-srv-g1uyu.gb1.brightbox.com\" (UID: 
\"9a1a49f7d75d7adcd47f1670f63eaa58\") " pod="kube-system/kube-controller-manager-srv-g1uyu.gb1.brightbox.com" Mar 4 01:37:13.513801 kubelet[2307]: I0304 01:37:13.513745 2307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9a1a49f7d75d7adcd47f1670f63eaa58-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-g1uyu.gb1.brightbox.com\" (UID: \"9a1a49f7d75d7adcd47f1670f63eaa58\") " pod="kube-system/kube-controller-manager-srv-g1uyu.gb1.brightbox.com" Mar 4 01:37:13.540482 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2355) Mar 4 01:37:13.618529 kubelet[2307]: I0304 01:37:13.618491 2307 kubelet_node_status.go:75] "Attempting to register node" node="srv-g1uyu.gb1.brightbox.com" Mar 4 01:37:13.619268 kubelet[2307]: E0304 01:37:13.619236 2307 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.15.118:6443/api/v1/nodes\": dial tcp 10.230.15.118:6443: connect: connection refused" node="srv-g1uyu.gb1.brightbox.com" Mar 4 01:37:13.721845 containerd[1511]: time="2026-03-04T01:37:13.721742447Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-g1uyu.gb1.brightbox.com,Uid:be04d1abf83a4a5c00066e4f8d7a16bb,Namespace:kube-system,Attempt:0,}" Mar 4 01:37:13.776080 containerd[1511]: time="2026-03-04T01:37:13.775974469Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-g1uyu.gb1.brightbox.com,Uid:9a1a49f7d75d7adcd47f1670f63eaa58,Namespace:kube-system,Attempt:0,}" Mar 4 01:37:13.812034 containerd[1511]: time="2026-03-04T01:37:13.811981781Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-g1uyu.gb1.brightbox.com,Uid:3a7323e1fc0057b5ae8dd7267d11937a,Namespace:kube-system,Attempt:0,}" Mar 4 01:37:13.814542 kubelet[2307]: E0304 01:37:13.814433 2307 controller.go:145] "Failed to 
ensure lease exists, will retry" err="Get \"https://10.230.15.118:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-g1uyu.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.15.118:6443: connect: connection refused" interval="800ms" Mar 4 01:37:14.023427 kubelet[2307]: I0304 01:37:14.023278 2307 kubelet_node_status.go:75] "Attempting to register node" node="srv-g1uyu.gb1.brightbox.com" Mar 4 01:37:14.024100 kubelet[2307]: E0304 01:37:14.024028 2307 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.15.118:6443/api/v1/nodes\": dial tcp 10.230.15.118:6443: connect: connection refused" node="srv-g1uyu.gb1.brightbox.com" Mar 4 01:37:14.105159 kubelet[2307]: E0304 01:37:14.105066 2307 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.230.15.118:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.230.15.118:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 4 01:37:14.249729 kubelet[2307]: E0304 01:37:14.249613 2307 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.230.15.118:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.230.15.118:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 4 01:37:14.262256 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount490992948.mount: Deactivated successfully. 
Mar 4 01:37:14.271393 containerd[1511]: time="2026-03-04T01:37:14.270124001Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 4 01:37:14.271539 containerd[1511]: time="2026-03-04T01:37:14.271346184Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 4 01:37:14.272736 containerd[1511]: time="2026-03-04T01:37:14.272678029Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Mar 4 01:37:14.272987 containerd[1511]: time="2026-03-04T01:37:14.272934195Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 4 01:37:14.273950 containerd[1511]: time="2026-03-04T01:37:14.273649163Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 4 01:37:14.275685 containerd[1511]: time="2026-03-04T01:37:14.275403723Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 4 01:37:14.275685 containerd[1511]: time="2026-03-04T01:37:14.275634040Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 4 01:37:14.279475 containerd[1511]: time="2026-03-04T01:37:14.279434927Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 4 01:37:14.286400 
containerd[1511]: time="2026-03-04T01:37:14.285882738Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 473.791646ms" Mar 4 01:37:14.293400 containerd[1511]: time="2026-03-04T01:37:14.293194093Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 516.885183ms" Mar 4 01:37:14.301417 containerd[1511]: time="2026-03-04T01:37:14.301358126Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 579.440053ms" Mar 4 01:37:14.451008 kubelet[2307]: E0304 01:37:14.450862 2307 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.230.15.118:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.230.15.118:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 4 01:37:14.503618 containerd[1511]: time="2026-03-04T01:37:14.502840135Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 4 01:37:14.503618 containerd[1511]: time="2026-03-04T01:37:14.502924392Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 4 01:37:14.503618 containerd[1511]: time="2026-03-04T01:37:14.502979065Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:37:14.503618 containerd[1511]: time="2026-03-04T01:37:14.503113376Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:37:14.518242 containerd[1511]: time="2026-03-04T01:37:14.517789819Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 4 01:37:14.518242 containerd[1511]: time="2026-03-04T01:37:14.517890233Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 4 01:37:14.518242 containerd[1511]: time="2026-03-04T01:37:14.517910312Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:37:14.518242 containerd[1511]: time="2026-03-04T01:37:14.518062173Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:37:14.520616 containerd[1511]: time="2026-03-04T01:37:14.520330149Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 4 01:37:14.520616 containerd[1511]: time="2026-03-04T01:37:14.520579842Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 4 01:37:14.520910 containerd[1511]: time="2026-03-04T01:37:14.520631418Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:37:14.523411 containerd[1511]: time="2026-03-04T01:37:14.520855877Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:37:14.549525 kubelet[2307]: E0304 01:37:14.548430 2307 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.230.15.118:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-g1uyu.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.230.15.118:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 4 01:37:14.552618 systemd[1]: Started cri-containerd-972964a6910bc17acd3b39fe2a4601a511ab085c4ced1cfb3346fd77604a0cab.scope - libcontainer container 972964a6910bc17acd3b39fe2a4601a511ab085c4ced1cfb3346fd77604a0cab. Mar 4 01:37:14.589613 systemd[1]: Started cri-containerd-0cb8a3a34ff3c4b5e2855be859886a8536176e379ea63c5724c94d330937283a.scope - libcontainer container 0cb8a3a34ff3c4b5e2855be859886a8536176e379ea63c5724c94d330937283a. Mar 4 01:37:14.592719 systemd[1]: Started cri-containerd-1022400fa5e2f3a01e09262c0d5c30fc980fa9484a1965a44ee3c3207f501533.scope - libcontainer container 1022400fa5e2f3a01e09262c0d5c30fc980fa9484a1965a44ee3c3207f501533. 
Mar 4 01:37:14.616590 kubelet[2307]: E0304 01:37:14.616534 2307 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.15.118:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-g1uyu.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.15.118:6443: connect: connection refused" interval="1.6s" Mar 4 01:37:14.675346 containerd[1511]: time="2026-03-04T01:37:14.675274067Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-g1uyu.gb1.brightbox.com,Uid:9a1a49f7d75d7adcd47f1670f63eaa58,Namespace:kube-system,Attempt:0,} returns sandbox id \"972964a6910bc17acd3b39fe2a4601a511ab085c4ced1cfb3346fd77604a0cab\"" Mar 4 01:37:14.680131 kubelet[2307]: E0304 01:37:14.679991 2307 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.230.15.118:6443/api/v1/namespaces/default/events\": dial tcp 10.230.15.118:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-g1uyu.gb1.brightbox.com.18997f9b439b6298 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-g1uyu.gb1.brightbox.com,UID:srv-g1uyu.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-g1uyu.gb1.brightbox.com,},FirstTimestamp:2026-03-04 01:37:13.177916056 +0000 UTC m=+0.792266140,LastTimestamp:2026-03-04 01:37:13.177916056 +0000 UTC m=+0.792266140,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-g1uyu.gb1.brightbox.com,}" Mar 4 01:37:14.692967 containerd[1511]: time="2026-03-04T01:37:14.692584712Z" level=info msg="CreateContainer within sandbox \"972964a6910bc17acd3b39fe2a4601a511ab085c4ced1cfb3346fd77604a0cab\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 4 01:37:14.713962 containerd[1511]: 
time="2026-03-04T01:37:14.713811199Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-g1uyu.gb1.brightbox.com,Uid:be04d1abf83a4a5c00066e4f8d7a16bb,Namespace:kube-system,Attempt:0,} returns sandbox id \"1022400fa5e2f3a01e09262c0d5c30fc980fa9484a1965a44ee3c3207f501533\"" Mar 4 01:37:14.720877 containerd[1511]: time="2026-03-04T01:37:14.720272470Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-g1uyu.gb1.brightbox.com,Uid:3a7323e1fc0057b5ae8dd7267d11937a,Namespace:kube-system,Attempt:0,} returns sandbox id \"0cb8a3a34ff3c4b5e2855be859886a8536176e379ea63c5724c94d330937283a\"" Mar 4 01:37:14.720877 containerd[1511]: time="2026-03-04T01:37:14.720698414Z" level=info msg="CreateContainer within sandbox \"1022400fa5e2f3a01e09262c0d5c30fc980fa9484a1965a44ee3c3207f501533\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 4 01:37:14.726573 containerd[1511]: time="2026-03-04T01:37:14.726537691Z" level=info msg="CreateContainer within sandbox \"972964a6910bc17acd3b39fe2a4601a511ab085c4ced1cfb3346fd77604a0cab\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"dc22b8c9613db15796320f2a47afd0013ad8a9b53037463b97b9474fe6548ba0\"" Mar 4 01:37:14.728383 containerd[1511]: time="2026-03-04T01:37:14.727772048Z" level=info msg="StartContainer for \"dc22b8c9613db15796320f2a47afd0013ad8a9b53037463b97b9474fe6548ba0\"" Mar 4 01:37:14.740992 containerd[1511]: time="2026-03-04T01:37:14.740942044Z" level=info msg="CreateContainer within sandbox \"0cb8a3a34ff3c4b5e2855be859886a8536176e379ea63c5724c94d330937283a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 4 01:37:14.758792 containerd[1511]: time="2026-03-04T01:37:14.758748774Z" level=info msg="CreateContainer within sandbox \"1022400fa5e2f3a01e09262c0d5c30fc980fa9484a1965a44ee3c3207f501533\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id 
\"e125cc1f7f1b6426e0130aa438f97d77329a21b1e590f092ff5dbce1d2224f48\"" Mar 4 01:37:14.759733 containerd[1511]: time="2026-03-04T01:37:14.759656301Z" level=info msg="StartContainer for \"e125cc1f7f1b6426e0130aa438f97d77329a21b1e590f092ff5dbce1d2224f48\"" Mar 4 01:37:14.768750 systemd[1]: Started cri-containerd-dc22b8c9613db15796320f2a47afd0013ad8a9b53037463b97b9474fe6548ba0.scope - libcontainer container dc22b8c9613db15796320f2a47afd0013ad8a9b53037463b97b9474fe6548ba0. Mar 4 01:37:14.777248 containerd[1511]: time="2026-03-04T01:37:14.777199408Z" level=info msg="CreateContainer within sandbox \"0cb8a3a34ff3c4b5e2855be859886a8536176e379ea63c5724c94d330937283a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"3f0705cafc927452abef17784e979fa06e3f038a6d80ffc37eafd0b88392f9de\"" Mar 4 01:37:14.778915 containerd[1511]: time="2026-03-04T01:37:14.778880949Z" level=info msg="StartContainer for \"3f0705cafc927452abef17784e979fa06e3f038a6d80ffc37eafd0b88392f9de\"" Mar 4 01:37:14.826515 systemd[1]: Started cri-containerd-3f0705cafc927452abef17784e979fa06e3f038a6d80ffc37eafd0b88392f9de.scope - libcontainer container 3f0705cafc927452abef17784e979fa06e3f038a6d80ffc37eafd0b88392f9de. Mar 4 01:37:14.838073 kubelet[2307]: I0304 01:37:14.837237 2307 kubelet_node_status.go:75] "Attempting to register node" node="srv-g1uyu.gb1.brightbox.com" Mar 4 01:37:14.838901 kubelet[2307]: E0304 01:37:14.838713 2307 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.15.118:6443/api/v1/nodes\": dial tcp 10.230.15.118:6443: connect: connection refused" node="srv-g1uyu.gb1.brightbox.com" Mar 4 01:37:14.842967 systemd[1]: Started cri-containerd-e125cc1f7f1b6426e0130aa438f97d77329a21b1e590f092ff5dbce1d2224f48.scope - libcontainer container e125cc1f7f1b6426e0130aa438f97d77329a21b1e590f092ff5dbce1d2224f48. 
Mar 4 01:37:14.871800 containerd[1511]: time="2026-03-04T01:37:14.871621757Z" level=info msg="StartContainer for \"dc22b8c9613db15796320f2a47afd0013ad8a9b53037463b97b9474fe6548ba0\" returns successfully" Mar 4 01:37:14.940303 containerd[1511]: time="2026-03-04T01:37:14.940152972Z" level=info msg="StartContainer for \"e125cc1f7f1b6426e0130aa438f97d77329a21b1e590f092ff5dbce1d2224f48\" returns successfully" Mar 4 01:37:14.963026 containerd[1511]: time="2026-03-04T01:37:14.962969475Z" level=info msg="StartContainer for \"3f0705cafc927452abef17784e979fa06e3f038a6d80ffc37eafd0b88392f9de\" returns successfully" Mar 4 01:37:15.234922 kubelet[2307]: E0304 01:37:15.234876 2307 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.230.15.118:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.230.15.118:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 4 01:37:15.264444 kubelet[2307]: E0304 01:37:15.262979 2307 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-g1uyu.gb1.brightbox.com\" not found" node="srv-g1uyu.gb1.brightbox.com" Mar 4 01:37:15.267407 kubelet[2307]: E0304 01:37:15.264851 2307 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-g1uyu.gb1.brightbox.com\" not found" node="srv-g1uyu.gb1.brightbox.com" Mar 4 01:37:15.273026 kubelet[2307]: E0304 01:37:15.273000 2307 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-g1uyu.gb1.brightbox.com\" not found" node="srv-g1uyu.gb1.brightbox.com" Mar 4 01:37:15.372732 systemd[1]: Started sshd@14-10.230.15.118:22-172.249.150.82:56808.service - OpenSSH per-connection server daemon (172.249.150.82:56808). 
Mar 4 01:37:16.276267 kubelet[2307]: E0304 01:37:16.275316 2307 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-g1uyu.gb1.brightbox.com\" not found" node="srv-g1uyu.gb1.brightbox.com" Mar 4 01:37:16.278005 kubelet[2307]: E0304 01:37:16.277595 2307 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-g1uyu.gb1.brightbox.com\" not found" node="srv-g1uyu.gb1.brightbox.com" Mar 4 01:37:16.441914 sshd[2607]: Received disconnect from 172.249.150.82 port 56808:11: Bye Bye [preauth] Mar 4 01:37:16.445602 sshd[2607]: Disconnected from authenticating user root 172.249.150.82 port 56808 [preauth] Mar 4 01:37:16.445664 kubelet[2307]: I0304 01:37:16.442291 2307 kubelet_node_status.go:75] "Attempting to register node" node="srv-g1uyu.gb1.brightbox.com" Mar 4 01:37:16.447677 systemd[1]: sshd@14-10.230.15.118:22-172.249.150.82:56808.service: Deactivated successfully. Mar 4 01:37:17.550861 kubelet[2307]: E0304 01:37:17.550420 2307 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-g1uyu.gb1.brightbox.com\" not found" node="srv-g1uyu.gb1.brightbox.com" Mar 4 01:37:18.097426 kubelet[2307]: I0304 01:37:18.094699 2307 kubelet_node_status.go:78] "Successfully registered node" node="srv-g1uyu.gb1.brightbox.com" Mar 4 01:37:18.108400 kubelet[2307]: I0304 01:37:18.107767 2307 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-g1uyu.gb1.brightbox.com" Mar 4 01:37:18.166160 kubelet[2307]: E0304 01:37:18.166102 2307 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-g1uyu.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-srv-g1uyu.gb1.brightbox.com" Mar 4 01:37:18.166425 kubelet[2307]: I0304 01:37:18.166397 2307 kubelet.go:3309] "Creating a mirror pod for static 
pod" pod="kube-system/kube-controller-manager-srv-g1uyu.gb1.brightbox.com" Mar 4 01:37:18.166709 kubelet[2307]: I0304 01:37:18.166687 2307 apiserver.go:52] "Watching apiserver" Mar 4 01:37:18.173592 kubelet[2307]: E0304 01:37:18.173526 2307 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-srv-g1uyu.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-srv-g1uyu.gb1.brightbox.com" Mar 4 01:37:18.174560 kubelet[2307]: I0304 01:37:18.174535 2307 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-g1uyu.gb1.brightbox.com" Mar 4 01:37:18.178671 kubelet[2307]: E0304 01:37:18.178641 2307 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-g1uyu.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-srv-g1uyu.gb1.brightbox.com" Mar 4 01:37:18.208903 kubelet[2307]: I0304 01:37:18.208854 2307 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Mar 4 01:37:20.426999 systemd[1]: Reloading requested from client PID 2617 ('systemctl') (unit session-11.scope)... Mar 4 01:37:20.427624 systemd[1]: Reloading... Mar 4 01:37:20.454807 kubelet[2307]: I0304 01:37:20.454770 2307 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-g1uyu.gb1.brightbox.com" Mar 4 01:37:20.469625 kubelet[2307]: I0304 01:37:20.468984 2307 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 4 01:37:20.520451 zram_generator::config[2653]: No configuration found. 
Mar 4 01:37:20.738672 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 4 01:37:20.868660 systemd[1]: Reloading finished in 440 ms. Mar 4 01:37:20.938801 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 4 01:37:20.954377 systemd[1]: kubelet.service: Deactivated successfully. Mar 4 01:37:20.954963 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 4 01:37:20.955171 systemd[1]: kubelet.service: Consumed 1.310s CPU time, 131.7M memory peak, 0B memory swap peak. Mar 4 01:37:20.961674 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 4 01:37:21.011495 systemd[1]: Started sshd@15-10.230.15.118:22-42.200.66.164:34276.service - OpenSSH per-connection server daemon (42.200.66.164:34276). Mar 4 01:37:21.256087 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 4 01:37:21.267846 (kubelet)[2723]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 4 01:37:21.344522 kubelet[2723]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 4 01:37:21.344522 kubelet[2723]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 4 01:37:21.344522 kubelet[2723]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 4 01:37:21.345106 kubelet[2723]: I0304 01:37:21.344627 2723 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 4 01:37:21.355402 kubelet[2723]: I0304 01:37:21.353793 2723 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Mar 4 01:37:21.355402 kubelet[2723]: I0304 01:37:21.353822 2723 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 4 01:37:21.355402 kubelet[2723]: I0304 01:37:21.354091 2723 server.go:956] "Client rotation is on, will bootstrap in background" Mar 4 01:37:21.356326 kubelet[2723]: I0304 01:37:21.356303 2723 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Mar 4 01:37:21.360771 kubelet[2723]: I0304 01:37:21.360718 2723 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 4 01:37:21.370044 kubelet[2723]: E0304 01:37:21.370003 2723 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 4 01:37:21.370044 kubelet[2723]: I0304 01:37:21.370042 2723 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Mar 4 01:37:21.377580 kubelet[2723]: I0304 01:37:21.377542 2723 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 4 01:37:21.378041 kubelet[2723]: I0304 01:37:21.377987 2723 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 4 01:37:21.378274 kubelet[2723]: I0304 01:37:21.378036 2723 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-g1uyu.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 4 01:37:21.378468 kubelet[2723]: I0304 01:37:21.378292 2723 topology_manager.go:138] "Creating topology manager with none policy" Mar 4 
01:37:21.378468 kubelet[2723]: I0304 01:37:21.378310 2723 container_manager_linux.go:303] "Creating device plugin manager" Mar 4 01:37:21.378468 kubelet[2723]: I0304 01:37:21.378440 2723 state_mem.go:36] "Initialized new in-memory state store" Mar 4 01:37:21.378735 kubelet[2723]: I0304 01:37:21.378715 2723 kubelet.go:480] "Attempting to sync node with API server" Mar 4 01:37:21.379041 kubelet[2723]: I0304 01:37:21.378754 2723 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 4 01:37:21.383559 kubelet[2723]: I0304 01:37:21.383529 2723 kubelet.go:386] "Adding apiserver pod source" Mar 4 01:37:21.388274 kubelet[2723]: I0304 01:37:21.387401 2723 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 4 01:37:21.397416 kubelet[2723]: I0304 01:37:21.396318 2723 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 4 01:37:21.397416 kubelet[2723]: I0304 01:37:21.397013 2723 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 4 01:37:21.411254 kubelet[2723]: I0304 01:37:21.409254 2723 watchdog_linux.go:99] "Systemd watchdog is not enabled" Mar 4 01:37:21.411254 kubelet[2723]: I0304 01:37:21.409349 2723 server.go:1289] "Started kubelet" Mar 4 01:37:21.414773 kubelet[2723]: I0304 01:37:21.411520 2723 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 4 01:37:21.414773 kubelet[2723]: I0304 01:37:21.412696 2723 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 4 01:37:21.414773 kubelet[2723]: I0304 01:37:21.413170 2723 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 4 01:37:21.418396 kubelet[2723]: I0304 01:37:21.418050 2723 server.go:317] "Adding debug handlers to kubelet server" Mar 4 01:37:21.429390 
kubelet[2723]: I0304 01:37:21.428836 2723 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 4 01:37:21.432106 kubelet[2723]: I0304 01:37:21.431566 2723 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 4 01:37:21.444258 kubelet[2723]: I0304 01:37:21.443730 2723 volume_manager.go:297] "Starting Kubelet Volume Manager" Mar 4 01:37:21.444258 kubelet[2723]: I0304 01:37:21.443871 2723 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Mar 4 01:37:21.444258 kubelet[2723]: I0304 01:37:21.444089 2723 reconciler.go:26] "Reconciler: start to sync state" Mar 4 01:37:21.456463 kubelet[2723]: I0304 01:37:21.456423 2723 factory.go:223] Registration of the systemd container factory successfully Mar 4 01:37:21.456988 kubelet[2723]: I0304 01:37:21.456949 2723 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 4 01:37:21.467399 kubelet[2723]: E0304 01:37:21.464724 2723 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 4 01:37:21.467399 kubelet[2723]: I0304 01:37:21.465102 2723 factory.go:223] Registration of the containerd container factory successfully Mar 4 01:37:21.477527 sudo[2741]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Mar 4 01:37:21.478095 sudo[2741]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Mar 4 01:37:21.483556 kubelet[2723]: I0304 01:37:21.483439 2723 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Mar 4 01:37:21.517503 kubelet[2723]: I0304 01:37:21.517463 2723 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv4" Mar 4 01:37:21.524861 kubelet[2723]: I0304 01:37:21.524733 2723 status_manager.go:230] "Starting to sync pod status with apiserver" Mar 4 01:37:21.524861 kubelet[2723]: I0304 01:37:21.524782 2723 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 4 01:37:21.524861 kubelet[2723]: I0304 01:37:21.524798 2723 kubelet.go:2436] "Starting kubelet main sync loop" Mar 4 01:37:21.529396 kubelet[2723]: E0304 01:37:21.527443 2723 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 4 01:37:21.573408 kubelet[2723]: I0304 01:37:21.573035 2723 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 4 01:37:21.573408 kubelet[2723]: I0304 01:37:21.573064 2723 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 4 01:37:21.573408 kubelet[2723]: I0304 01:37:21.573097 2723 state_mem.go:36] "Initialized new in-memory state store" Mar 4 01:37:21.573408 kubelet[2723]: I0304 01:37:21.573329 2723 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 4 01:37:21.573408 kubelet[2723]: I0304 01:37:21.573349 2723 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 4 01:37:21.573408 kubelet[2723]: I0304 01:37:21.573412 2723 policy_none.go:49] "None policy: Start" Mar 4 01:37:21.573743 kubelet[2723]: I0304 01:37:21.573441 2723 memory_manager.go:186] "Starting memorymanager" policy="None" Mar 4 01:37:21.573743 kubelet[2723]: I0304 01:37:21.573472 2723 state_mem.go:35] "Initializing new in-memory state store" Mar 4 01:37:21.573743 kubelet[2723]: I0304 01:37:21.573627 2723 state_mem.go:75] "Updated machine memory state" Mar 4 01:37:21.587770 kubelet[2723]: E0304 01:37:21.587712 2723 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 4 01:37:21.588023 kubelet[2723]: I0304 01:37:21.587972 
2723 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 4 01:37:21.588023 kubelet[2723]: I0304 01:37:21.588001 2723 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 4 01:37:21.591883 kubelet[2723]: I0304 01:37:21.589644 2723 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 4 01:37:21.595968 kubelet[2723]: E0304 01:37:21.595939 2723 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 4 01:37:21.638080 kubelet[2723]: I0304 01:37:21.628645 2723 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-g1uyu.gb1.brightbox.com" Mar 4 01:37:21.638080 kubelet[2723]: I0304 01:37:21.631242 2723 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-g1uyu.gb1.brightbox.com" Mar 4 01:37:21.638080 kubelet[2723]: I0304 01:37:21.634635 2723 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-g1uyu.gb1.brightbox.com" Mar 4 01:37:21.650781 kubelet[2723]: I0304 01:37:21.650744 2723 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 4 01:37:21.650965 kubelet[2723]: E0304 01:37:21.650841 2723 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-g1uyu.gb1.brightbox.com\" already exists" pod="kube-system/kube-scheduler-srv-g1uyu.gb1.brightbox.com" Mar 4 01:37:21.651527 kubelet[2723]: I0304 01:37:21.651427 2723 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 4 01:37:21.656227 kubelet[2723]: I0304 01:37:21.655421 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9a1a49f7d75d7adcd47f1670f63eaa58-ca-certs\") pod \"kube-controller-manager-srv-g1uyu.gb1.brightbox.com\" (UID: \"9a1a49f7d75d7adcd47f1670f63eaa58\") " pod="kube-system/kube-controller-manager-srv-g1uyu.gb1.brightbox.com" Mar 4 01:37:21.656227 kubelet[2723]: I0304 01:37:21.655508 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9a1a49f7d75d7adcd47f1670f63eaa58-k8s-certs\") pod \"kube-controller-manager-srv-g1uyu.gb1.brightbox.com\" (UID: \"9a1a49f7d75d7adcd47f1670f63eaa58\") " pod="kube-system/kube-controller-manager-srv-g1uyu.gb1.brightbox.com" Mar 4 01:37:21.656227 kubelet[2723]: I0304 01:37:21.655588 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9a1a49f7d75d7adcd47f1670f63eaa58-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-g1uyu.gb1.brightbox.com\" (UID: \"9a1a49f7d75d7adcd47f1670f63eaa58\") " pod="kube-system/kube-controller-manager-srv-g1uyu.gb1.brightbox.com" Mar 4 01:37:21.656227 kubelet[2723]: I0304 01:37:21.655678 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/be04d1abf83a4a5c00066e4f8d7a16bb-ca-certs\") pod \"kube-apiserver-srv-g1uyu.gb1.brightbox.com\" (UID: \"be04d1abf83a4a5c00066e4f8d7a16bb\") " pod="kube-system/kube-apiserver-srv-g1uyu.gb1.brightbox.com" Mar 4 01:37:21.656227 kubelet[2723]: I0304 01:37:21.655854 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/be04d1abf83a4a5c00066e4f8d7a16bb-usr-share-ca-certificates\") pod \"kube-apiserver-srv-g1uyu.gb1.brightbox.com\" (UID: \"be04d1abf83a4a5c00066e4f8d7a16bb\") " 
pod="kube-system/kube-apiserver-srv-g1uyu.gb1.brightbox.com" Mar 4 01:37:21.656578 kubelet[2723]: I0304 01:37:21.656331 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9a1a49f7d75d7adcd47f1670f63eaa58-flexvolume-dir\") pod \"kube-controller-manager-srv-g1uyu.gb1.brightbox.com\" (UID: \"9a1a49f7d75d7adcd47f1670f63eaa58\") " pod="kube-system/kube-controller-manager-srv-g1uyu.gb1.brightbox.com" Mar 4 01:37:21.658138 kubelet[2723]: I0304 01:37:21.657947 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9a1a49f7d75d7adcd47f1670f63eaa58-kubeconfig\") pod \"kube-controller-manager-srv-g1uyu.gb1.brightbox.com\" (UID: \"9a1a49f7d75d7adcd47f1670f63eaa58\") " pod="kube-system/kube-controller-manager-srv-g1uyu.gb1.brightbox.com" Mar 4 01:37:21.658138 kubelet[2723]: I0304 01:37:21.658012 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3a7323e1fc0057b5ae8dd7267d11937a-kubeconfig\") pod \"kube-scheduler-srv-g1uyu.gb1.brightbox.com\" (UID: \"3a7323e1fc0057b5ae8dd7267d11937a\") " pod="kube-system/kube-scheduler-srv-g1uyu.gb1.brightbox.com" Mar 4 01:37:21.658138 kubelet[2723]: I0304 01:37:21.658050 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/be04d1abf83a4a5c00066e4f8d7a16bb-k8s-certs\") pod \"kube-apiserver-srv-g1uyu.gb1.brightbox.com\" (UID: \"be04d1abf83a4a5c00066e4f8d7a16bb\") " pod="kube-system/kube-apiserver-srv-g1uyu.gb1.brightbox.com" Mar 4 01:37:21.658138 kubelet[2723]: I0304 01:37:21.657609 2723 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not 
contain dots]" Mar 4 01:37:21.708835 kubelet[2723]: I0304 01:37:21.708775 2723 kubelet_node_status.go:75] "Attempting to register node" node="srv-g1uyu.gb1.brightbox.com" Mar 4 01:37:21.728319 kubelet[2723]: I0304 01:37:21.728246 2723 kubelet_node_status.go:124] "Node was previously registered" node="srv-g1uyu.gb1.brightbox.com" Mar 4 01:37:21.728832 kubelet[2723]: I0304 01:37:21.728787 2723 kubelet_node_status.go:78] "Successfully registered node" node="srv-g1uyu.gb1.brightbox.com" Mar 4 01:37:22.274346 sudo[2741]: pam_unix(sudo:session): session closed for user root Mar 4 01:37:22.388691 kubelet[2723]: I0304 01:37:22.388456 2723 apiserver.go:52] "Watching apiserver" Mar 4 01:37:22.444952 kubelet[2723]: I0304 01:37:22.444858 2723 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Mar 4 01:37:22.619158 sshd[2716]: Received disconnect from 42.200.66.164 port 34276:11: Bye Bye [preauth] Mar 4 01:37:22.619158 sshd[2716]: Disconnected from authenticating user root 42.200.66.164 port 34276 [preauth] Mar 4 01:37:22.632139 systemd[1]: sshd@15-10.230.15.118:22-42.200.66.164:34276.service: Deactivated successfully. 
Mar 4 01:37:22.647406 kubelet[2723]: I0304 01:37:22.646878 2723 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-srv-g1uyu.gb1.brightbox.com" podStartSLOduration=1.646764463 podStartE2EDuration="1.646764463s" podCreationTimestamp="2026-03-04 01:37:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-04 01:37:22.606182688 +0000 UTC m=+1.328337132" watchObservedRunningTime="2026-03-04 01:37:22.646764463 +0000 UTC m=+1.368918897" Mar 4 01:37:22.672899 kubelet[2723]: I0304 01:37:22.672815 2723 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-srv-g1uyu.gb1.brightbox.com" podStartSLOduration=1.672792114 podStartE2EDuration="1.672792114s" podCreationTimestamp="2026-03-04 01:37:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-04 01:37:22.649401434 +0000 UTC m=+1.371555871" watchObservedRunningTime="2026-03-04 01:37:22.672792114 +0000 UTC m=+1.394946553" Mar 4 01:37:22.699271 kubelet[2723]: I0304 01:37:22.697909 2723 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-srv-g1uyu.gb1.brightbox.com" podStartSLOduration=2.697889147 podStartE2EDuration="2.697889147s" podCreationTimestamp="2026-03-04 01:37:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-04 01:37:22.674848382 +0000 UTC m=+1.397002825" watchObservedRunningTime="2026-03-04 01:37:22.697889147 +0000 UTC m=+1.420043591" Mar 4 01:37:24.262861 sudo[1773]: pam_unix(sudo:session): session closed for user root Mar 4 01:37:24.355461 sshd[1768]: pam_unix(sshd:session): session closed for user core Mar 4 01:37:24.362512 systemd-logind[1487]: Session 11 logged out. 
Waiting for processes to exit. Mar 4 01:37:24.363033 systemd[1]: sshd@11-10.230.15.118:22-20.161.92.111:49644.service: Deactivated successfully. Mar 4 01:37:24.366156 systemd[1]: session-11.scope: Deactivated successfully. Mar 4 01:37:24.366489 systemd[1]: session-11.scope: Consumed 8.319s CPU time, 151.7M memory peak, 0B memory swap peak. Mar 4 01:37:24.368498 systemd-logind[1487]: Removed session 11. Mar 4 01:37:25.150391 kubelet[2723]: I0304 01:37:25.149420 2723 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 4 01:37:25.151521 containerd[1511]: time="2026-03-04T01:37:25.150197691Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 4 01:37:25.151992 kubelet[2723]: I0304 01:37:25.150513 2723 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 4 01:37:26.194302 systemd[1]: Created slice kubepods-besteffort-podacfab755_df2d_4f31_905f_4916a0366729.slice - libcontainer container kubepods-besteffort-podacfab755_df2d_4f31_905f_4916a0366729.slice. Mar 4 01:37:26.212940 systemd[1]: Created slice kubepods-burstable-podab2081b0_842a_4e4a_9e87_fbf4ac660aa8.slice - libcontainer container kubepods-burstable-podab2081b0_842a_4e4a_9e87_fbf4ac660aa8.slice. Mar 4 01:37:26.251329 systemd[1]: Created slice kubepods-besteffort-pod144fd3d6_851b_491a_baeb_e95e71ed270e.slice - libcontainer container kubepods-besteffort-pod144fd3d6_851b_491a_baeb_e95e71ed270e.slice. 
Mar 4 01:37:26.290564 kubelet[2723]: I0304 01:37:26.289637 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/acfab755-df2d-4f31-905f-4916a0366729-lib-modules\") pod \"kube-proxy-zsb2f\" (UID: \"acfab755-df2d-4f31-905f-4916a0366729\") " pod="kube-system/kube-proxy-zsb2f" Mar 4 01:37:26.290564 kubelet[2723]: I0304 01:37:26.289697 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ab2081b0-842a-4e4a-9e87-fbf4ac660aa8-bpf-maps\") pod \"cilium-8wrwg\" (UID: \"ab2081b0-842a-4e4a-9e87-fbf4ac660aa8\") " pod="kube-system/cilium-8wrwg" Mar 4 01:37:26.290564 kubelet[2723]: I0304 01:37:26.289774 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ab2081b0-842a-4e4a-9e87-fbf4ac660aa8-xtables-lock\") pod \"cilium-8wrwg\" (UID: \"ab2081b0-842a-4e4a-9e87-fbf4ac660aa8\") " pod="kube-system/cilium-8wrwg" Mar 4 01:37:26.290564 kubelet[2723]: I0304 01:37:26.289803 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5cdm\" (UniqueName: \"kubernetes.io/projected/ab2081b0-842a-4e4a-9e87-fbf4ac660aa8-kube-api-access-j5cdm\") pod \"cilium-8wrwg\" (UID: \"ab2081b0-842a-4e4a-9e87-fbf4ac660aa8\") " pod="kube-system/cilium-8wrwg" Mar 4 01:37:26.290564 kubelet[2723]: I0304 01:37:26.289834 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ab2081b0-842a-4e4a-9e87-fbf4ac660aa8-cilium-run\") pod \"cilium-8wrwg\" (UID: \"ab2081b0-842a-4e4a-9e87-fbf4ac660aa8\") " pod="kube-system/cilium-8wrwg" Mar 4 01:37:26.290564 kubelet[2723]: I0304 01:37:26.289859 2723 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ab2081b0-842a-4e4a-9e87-fbf4ac660aa8-clustermesh-secrets\") pod \"cilium-8wrwg\" (UID: \"ab2081b0-842a-4e4a-9e87-fbf4ac660aa8\") " pod="kube-system/cilium-8wrwg" Mar 4 01:37:26.292398 kubelet[2723]: I0304 01:37:26.289893 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/144fd3d6-851b-491a-baeb-e95e71ed270e-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-dxx78\" (UID: \"144fd3d6-851b-491a-baeb-e95e71ed270e\") " pod="kube-system/cilium-operator-6c4d7847fc-dxx78" Mar 4 01:37:26.292398 kubelet[2723]: I0304 01:37:26.289922 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/acfab755-df2d-4f31-905f-4916a0366729-kube-proxy\") pod \"kube-proxy-zsb2f\" (UID: \"acfab755-df2d-4f31-905f-4916a0366729\") " pod="kube-system/kube-proxy-zsb2f" Mar 4 01:37:26.292398 kubelet[2723]: I0304 01:37:26.289949 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ab2081b0-842a-4e4a-9e87-fbf4ac660aa8-cni-path\") pod \"cilium-8wrwg\" (UID: \"ab2081b0-842a-4e4a-9e87-fbf4ac660aa8\") " pod="kube-system/cilium-8wrwg" Mar 4 01:37:26.292398 kubelet[2723]: I0304 01:37:26.289973 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ab2081b0-842a-4e4a-9e87-fbf4ac660aa8-host-proc-sys-net\") pod \"cilium-8wrwg\" (UID: \"ab2081b0-842a-4e4a-9e87-fbf4ac660aa8\") " pod="kube-system/cilium-8wrwg" Mar 4 01:37:26.292398 kubelet[2723]: I0304 01:37:26.289996 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ab2081b0-842a-4e4a-9e87-fbf4ac660aa8-cilium-cgroup\") pod \"cilium-8wrwg\" (UID: \"ab2081b0-842a-4e4a-9e87-fbf4ac660aa8\") " pod="kube-system/cilium-8wrwg" Mar 4 01:37:26.292644 kubelet[2723]: I0304 01:37:26.290020 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ab2081b0-842a-4e4a-9e87-fbf4ac660aa8-host-proc-sys-kernel\") pod \"cilium-8wrwg\" (UID: \"ab2081b0-842a-4e4a-9e87-fbf4ac660aa8\") " pod="kube-system/cilium-8wrwg" Mar 4 01:37:26.292644 kubelet[2723]: I0304 01:37:26.290046 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/acfab755-df2d-4f31-905f-4916a0366729-xtables-lock\") pod \"kube-proxy-zsb2f\" (UID: \"acfab755-df2d-4f31-905f-4916a0366729\") " pod="kube-system/kube-proxy-zsb2f" Mar 4 01:37:26.292644 kubelet[2723]: I0304 01:37:26.290096 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ab2081b0-842a-4e4a-9e87-fbf4ac660aa8-hostproc\") pod \"cilium-8wrwg\" (UID: \"ab2081b0-842a-4e4a-9e87-fbf4ac660aa8\") " pod="kube-system/cilium-8wrwg" Mar 4 01:37:26.292644 kubelet[2723]: I0304 01:37:26.290130 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ab2081b0-842a-4e4a-9e87-fbf4ac660aa8-etc-cni-netd\") pod \"cilium-8wrwg\" (UID: \"ab2081b0-842a-4e4a-9e87-fbf4ac660aa8\") " pod="kube-system/cilium-8wrwg" Mar 4 01:37:26.292644 kubelet[2723]: I0304 01:37:26.290157 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zj8bx\" (UniqueName: \"kubernetes.io/projected/144fd3d6-851b-491a-baeb-e95e71ed270e-kube-api-access-zj8bx\") pod 
\"cilium-operator-6c4d7847fc-dxx78\" (UID: \"144fd3d6-851b-491a-baeb-e95e71ed270e\") " pod="kube-system/cilium-operator-6c4d7847fc-dxx78"
Mar 4 01:37:26.292918 kubelet[2723]: I0304 01:37:26.290186 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvx72\" (UniqueName: \"kubernetes.io/projected/acfab755-df2d-4f31-905f-4916a0366729-kube-api-access-dvx72\") pod \"kube-proxy-zsb2f\" (UID: \"acfab755-df2d-4f31-905f-4916a0366729\") " pod="kube-system/kube-proxy-zsb2f"
Mar 4 01:37:26.292918 kubelet[2723]: I0304 01:37:26.290241 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ab2081b0-842a-4e4a-9e87-fbf4ac660aa8-cilium-config-path\") pod \"cilium-8wrwg\" (UID: \"ab2081b0-842a-4e4a-9e87-fbf4ac660aa8\") " pod="kube-system/cilium-8wrwg"
Mar 4 01:37:26.292918 kubelet[2723]: I0304 01:37:26.290270 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ab2081b0-842a-4e4a-9e87-fbf4ac660aa8-hubble-tls\") pod \"cilium-8wrwg\" (UID: \"ab2081b0-842a-4e4a-9e87-fbf4ac660aa8\") " pod="kube-system/cilium-8wrwg"
Mar 4 01:37:26.292918 kubelet[2723]: I0304 01:37:26.290294 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ab2081b0-842a-4e4a-9e87-fbf4ac660aa8-lib-modules\") pod \"cilium-8wrwg\" (UID: \"ab2081b0-842a-4e4a-9e87-fbf4ac660aa8\") " pod="kube-system/cilium-8wrwg"
Mar 4 01:37:26.508076 containerd[1511]: time="2026-03-04T01:37:26.508016868Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zsb2f,Uid:acfab755-df2d-4f31-905f-4916a0366729,Namespace:kube-system,Attempt:0,}"
Mar 4 01:37:26.523092 containerd[1511]: time="2026-03-04T01:37:26.520982048Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8wrwg,Uid:ab2081b0-842a-4e4a-9e87-fbf4ac660aa8,Namespace:kube-system,Attempt:0,}"
Mar 4 01:37:26.551846 containerd[1511]: time="2026-03-04T01:37:26.551662552Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 4 01:37:26.552477 containerd[1511]: time="2026-03-04T01:37:26.551822748Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 4 01:37:26.552477 containerd[1511]: time="2026-03-04T01:37:26.552455150Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 4 01:37:26.552898 containerd[1511]: time="2026-03-04T01:37:26.552832593Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 4 01:37:26.559116 containerd[1511]: time="2026-03-04T01:37:26.557904630Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-dxx78,Uid:144fd3d6-851b-491a-baeb-e95e71ed270e,Namespace:kube-system,Attempt:0,}"
Mar 4 01:37:26.578183 containerd[1511]: time="2026-03-04T01:37:26.577534758Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 4 01:37:26.578183 containerd[1511]: time="2026-03-04T01:37:26.577645701Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 4 01:37:26.578183 containerd[1511]: time="2026-03-04T01:37:26.577669025Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 4 01:37:26.578183 containerd[1511]: time="2026-03-04T01:37:26.577798229Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 4 01:37:26.602900 systemd[1]: Started cri-containerd-8105fe56286ad199fb58fc585a916db2e8c689246caefc2de94635f3ebe12040.scope - libcontainer container 8105fe56286ad199fb58fc585a916db2e8c689246caefc2de94635f3ebe12040.
Mar 4 01:37:26.625563 systemd[1]: Started cri-containerd-db35b47098b64e16163fcfca42a16eb39f8ab54796b2750f13ca796d10ab17f5.scope - libcontainer container db35b47098b64e16163fcfca42a16eb39f8ab54796b2750f13ca796d10ab17f5.
Mar 4 01:37:26.680844 containerd[1511]: time="2026-03-04T01:37:26.680510071Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 4 01:37:26.680844 containerd[1511]: time="2026-03-04T01:37:26.680601197Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 4 01:37:26.680844 containerd[1511]: time="2026-03-04T01:37:26.680635799Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 4 01:37:26.680844 containerd[1511]: time="2026-03-04T01:37:26.680767995Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 4 01:37:26.688301 containerd[1511]: time="2026-03-04T01:37:26.687922934Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zsb2f,Uid:acfab755-df2d-4f31-905f-4916a0366729,Namespace:kube-system,Attempt:0,} returns sandbox id \"8105fe56286ad199fb58fc585a916db2e8c689246caefc2de94635f3ebe12040\""
Mar 4 01:37:26.695699 containerd[1511]: time="2026-03-04T01:37:26.695640420Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8wrwg,Uid:ab2081b0-842a-4e4a-9e87-fbf4ac660aa8,Namespace:kube-system,Attempt:0,} returns sandbox id \"db35b47098b64e16163fcfca42a16eb39f8ab54796b2750f13ca796d10ab17f5\""
Mar 4 01:37:26.699824 containerd[1511]: time="2026-03-04T01:37:26.699480265Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Mar 4 01:37:26.700043 containerd[1511]: time="2026-03-04T01:37:26.700019847Z" level=info msg="CreateContainer within sandbox \"8105fe56286ad199fb58fc585a916db2e8c689246caefc2de94635f3ebe12040\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Mar 4 01:37:26.723209 containerd[1511]: time="2026-03-04T01:37:26.723165508Z" level=info msg="CreateContainer within sandbox \"8105fe56286ad199fb58fc585a916db2e8c689246caefc2de94635f3ebe12040\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"52ea819c81fc518d764d95278095ca0ace15f07bd34aeb0b4351a52a01155419\""
Mar 4 01:37:26.724618 containerd[1511]: time="2026-03-04T01:37:26.724253107Z" level=info msg="StartContainer for \"52ea819c81fc518d764d95278095ca0ace15f07bd34aeb0b4351a52a01155419\""
Mar 4 01:37:26.726595 systemd[1]: Started cri-containerd-81038f357f34e963d0c31f3295e02db3494cca623068eef8402de4857d67cecd.scope - libcontainer container 81038f357f34e963d0c31f3295e02db3494cca623068eef8402de4857d67cecd.
Mar 4 01:37:26.779586 systemd[1]: Started cri-containerd-52ea819c81fc518d764d95278095ca0ace15f07bd34aeb0b4351a52a01155419.scope - libcontainer container 52ea819c81fc518d764d95278095ca0ace15f07bd34aeb0b4351a52a01155419.
Mar 4 01:37:26.808231 containerd[1511]: time="2026-03-04T01:37:26.808183186Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-dxx78,Uid:144fd3d6-851b-491a-baeb-e95e71ed270e,Namespace:kube-system,Attempt:0,} returns sandbox id \"81038f357f34e963d0c31f3295e02db3494cca623068eef8402de4857d67cecd\""
Mar 4 01:37:26.841016 containerd[1511]: time="2026-03-04T01:37:26.840931758Z" level=info msg="StartContainer for \"52ea819c81fc518d764d95278095ca0ace15f07bd34aeb0b4351a52a01155419\" returns successfully"
Mar 4 01:37:29.291142 kubelet[2723]: I0304 01:37:29.290352 2723 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-zsb2f" podStartSLOduration=3.290312228 podStartE2EDuration="3.290312228s" podCreationTimestamp="2026-03-04 01:37:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-04 01:37:27.603045425 +0000 UTC m=+6.325199869" watchObservedRunningTime="2026-03-04 01:37:29.290312228 +0000 UTC m=+8.012466667"
Mar 4 01:37:34.716623 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1231954974.mount: Deactivated successfully.
Mar 4 01:37:38.055874 containerd[1511]: time="2026-03-04T01:37:38.055711234Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 01:37:38.058563 containerd[1511]: time="2026-03-04T01:37:38.058479922Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503"
Mar 4 01:37:38.059851 containerd[1511]: time="2026-03-04T01:37:38.059776268Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 01:37:38.061981 containerd[1511]: time="2026-03-04T01:37:38.061894409Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 11.362353551s"
Mar 4 01:37:38.061981 containerd[1511]: time="2026-03-04T01:37:38.061963287Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Mar 4 01:37:38.066609 containerd[1511]: time="2026-03-04T01:37:38.066562351Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Mar 4 01:37:38.072993 containerd[1511]: time="2026-03-04T01:37:38.072299162Z" level=info msg="CreateContainer within sandbox \"db35b47098b64e16163fcfca42a16eb39f8ab54796b2750f13ca796d10ab17f5\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Mar 4 01:37:38.170960 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3962716890.mount: Deactivated successfully.
Mar 4 01:37:38.174656 containerd[1511]: time="2026-03-04T01:37:38.174613239Z" level=info msg="CreateContainer within sandbox \"db35b47098b64e16163fcfca42a16eb39f8ab54796b2750f13ca796d10ab17f5\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"df4befd8f9474823acd7491d8e7a281893a78c1f35bd133e4569b98ef105ffa3\""
Mar 4 01:37:38.175736 containerd[1511]: time="2026-03-04T01:37:38.175662875Z" level=info msg="StartContainer for \"df4befd8f9474823acd7491d8e7a281893a78c1f35bd133e4569b98ef105ffa3\""
Mar 4 01:37:38.418959 systemd[1]: Started cri-containerd-df4befd8f9474823acd7491d8e7a281893a78c1f35bd133e4569b98ef105ffa3.scope - libcontainer container df4befd8f9474823acd7491d8e7a281893a78c1f35bd133e4569b98ef105ffa3.
Mar 4 01:37:38.465802 containerd[1511]: time="2026-03-04T01:37:38.465758289Z" level=info msg="StartContainer for \"df4befd8f9474823acd7491d8e7a281893a78c1f35bd133e4569b98ef105ffa3\" returns successfully"
Mar 4 01:37:38.490745 systemd[1]: cri-containerd-df4befd8f9474823acd7491d8e7a281893a78c1f35bd133e4569b98ef105ffa3.scope: Deactivated successfully.
Mar 4 01:37:38.813629 containerd[1511]: time="2026-03-04T01:37:38.793847360Z" level=info msg="shim disconnected" id=df4befd8f9474823acd7491d8e7a281893a78c1f35bd133e4569b98ef105ffa3 namespace=k8s.io
Mar 4 01:37:38.813629 containerd[1511]: time="2026-03-04T01:37:38.813619474Z" level=warning msg="cleaning up after shim disconnected" id=df4befd8f9474823acd7491d8e7a281893a78c1f35bd133e4569b98ef105ffa3 namespace=k8s.io
Mar 4 01:37:38.813629 containerd[1511]: time="2026-03-04T01:37:38.813639266Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 4 01:37:39.160823 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-df4befd8f9474823acd7491d8e7a281893a78c1f35bd133e4569b98ef105ffa3-rootfs.mount: Deactivated successfully.
Mar 4 01:37:39.533855 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2104390418.mount: Deactivated successfully.
Mar 4 01:37:39.628306 containerd[1511]: time="2026-03-04T01:37:39.628115494Z" level=info msg="CreateContainer within sandbox \"db35b47098b64e16163fcfca42a16eb39f8ab54796b2750f13ca796d10ab17f5\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Mar 4 01:37:39.661164 containerd[1511]: time="2026-03-04T01:37:39.660478994Z" level=info msg="CreateContainer within sandbox \"db35b47098b64e16163fcfca42a16eb39f8ab54796b2750f13ca796d10ab17f5\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"ac81cc410e7bbbcddad1421e5c76d3fcb57398f9707d8fc5de65418dd1393fdd\""
Mar 4 01:37:39.662313 containerd[1511]: time="2026-03-04T01:37:39.662283235Z" level=info msg="StartContainer for \"ac81cc410e7bbbcddad1421e5c76d3fcb57398f9707d8fc5de65418dd1393fdd\""
Mar 4 01:37:39.729626 systemd[1]: Started cri-containerd-ac81cc410e7bbbcddad1421e5c76d3fcb57398f9707d8fc5de65418dd1393fdd.scope - libcontainer container ac81cc410e7bbbcddad1421e5c76d3fcb57398f9707d8fc5de65418dd1393fdd.
Mar 4 01:37:39.776879 containerd[1511]: time="2026-03-04T01:37:39.776314359Z" level=info msg="StartContainer for \"ac81cc410e7bbbcddad1421e5c76d3fcb57398f9707d8fc5de65418dd1393fdd\" returns successfully"
Mar 4 01:37:39.829290 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 4 01:37:39.830841 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 4 01:37:39.831003 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Mar 4 01:37:39.840358 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 4 01:37:39.840752 systemd[1]: cri-containerd-ac81cc410e7bbbcddad1421e5c76d3fcb57398f9707d8fc5de65418dd1393fdd.scope: Deactivated successfully.
Mar 4 01:37:39.932605 containerd[1511]: time="2026-03-04T01:37:39.931508032Z" level=info msg="shim disconnected" id=ac81cc410e7bbbcddad1421e5c76d3fcb57398f9707d8fc5de65418dd1393fdd namespace=k8s.io
Mar 4 01:37:39.932605 containerd[1511]: time="2026-03-04T01:37:39.931587216Z" level=warning msg="cleaning up after shim disconnected" id=ac81cc410e7bbbcddad1421e5c76d3fcb57398f9707d8fc5de65418dd1393fdd namespace=k8s.io
Mar 4 01:37:39.932605 containerd[1511]: time="2026-03-04T01:37:39.931616483Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 4 01:37:39.935604 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 4 01:37:40.529927 containerd[1511]: time="2026-03-04T01:37:40.528892561Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 01:37:40.529927 containerd[1511]: time="2026-03-04T01:37:40.529815633Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
Mar 4 01:37:40.531034 containerd[1511]: time="2026-03-04T01:37:40.530592159Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 01:37:40.533890 containerd[1511]: time="2026-03-04T01:37:40.533851886Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.467235875s"
Mar 4 01:37:40.534044 containerd[1511]: time="2026-03-04T01:37:40.534016490Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Mar 4 01:37:40.539115 containerd[1511]: time="2026-03-04T01:37:40.539065544Z" level=info msg="CreateContainer within sandbox \"81038f357f34e963d0c31f3295e02db3494cca623068eef8402de4857d67cecd\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Mar 4 01:37:40.553708 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1168378531.mount: Deactivated successfully.
Mar 4 01:37:40.559881 containerd[1511]: time="2026-03-04T01:37:40.559832646Z" level=info msg="CreateContainer within sandbox \"81038f357f34e963d0c31f3295e02db3494cca623068eef8402de4857d67cecd\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"ab53b79a3646f3addd0ae7522fcc65f7da84fe33597ac6cf34bd755e68e3b2f4\""
Mar 4 01:37:40.561096 containerd[1511]: time="2026-03-04T01:37:40.561030013Z" level=info msg="StartContainer for \"ab53b79a3646f3addd0ae7522fcc65f7da84fe33597ac6cf34bd755e68e3b2f4\""
Mar 4 01:37:40.622617 systemd[1]: Started cri-containerd-ab53b79a3646f3addd0ae7522fcc65f7da84fe33597ac6cf34bd755e68e3b2f4.scope - libcontainer container ab53b79a3646f3addd0ae7522fcc65f7da84fe33597ac6cf34bd755e68e3b2f4.
Mar 4 01:37:40.654939 containerd[1511]: time="2026-03-04T01:37:40.653282745Z" level=info msg="CreateContainer within sandbox \"db35b47098b64e16163fcfca42a16eb39f8ab54796b2750f13ca796d10ab17f5\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Mar 4 01:37:40.687291 containerd[1511]: time="2026-03-04T01:37:40.687245215Z" level=info msg="CreateContainer within sandbox \"db35b47098b64e16163fcfca42a16eb39f8ab54796b2750f13ca796d10ab17f5\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"a90f84c9fc869192174d1c2636f8beb9192ce33a810d5e247ec775eb97e89b38\""
Mar 4 01:37:40.688591 containerd[1511]: time="2026-03-04T01:37:40.688435431Z" level=info msg="StartContainer for \"a90f84c9fc869192174d1c2636f8beb9192ce33a810d5e247ec775eb97e89b38\""
Mar 4 01:37:40.702304 containerd[1511]: time="2026-03-04T01:37:40.702189761Z" level=info msg="StartContainer for \"ab53b79a3646f3addd0ae7522fcc65f7da84fe33597ac6cf34bd755e68e3b2f4\" returns successfully"
Mar 4 01:37:40.746608 systemd[1]: Started cri-containerd-a90f84c9fc869192174d1c2636f8beb9192ce33a810d5e247ec775eb97e89b38.scope - libcontainer container a90f84c9fc869192174d1c2636f8beb9192ce33a810d5e247ec775eb97e89b38.
Mar 4 01:37:40.811168 containerd[1511]: time="2026-03-04T01:37:40.810996707Z" level=info msg="StartContainer for \"a90f84c9fc869192174d1c2636f8beb9192ce33a810d5e247ec775eb97e89b38\" returns successfully"
Mar 4 01:37:40.817265 systemd[1]: cri-containerd-a90f84c9fc869192174d1c2636f8beb9192ce33a810d5e247ec775eb97e89b38.scope: Deactivated successfully.
Mar 4 01:37:40.963179 containerd[1511]: time="2026-03-04T01:37:40.963079842Z" level=info msg="shim disconnected" id=a90f84c9fc869192174d1c2636f8beb9192ce33a810d5e247ec775eb97e89b38 namespace=k8s.io
Mar 4 01:37:40.963179 containerd[1511]: time="2026-03-04T01:37:40.963154256Z" level=warning msg="cleaning up after shim disconnected" id=a90f84c9fc869192174d1c2636f8beb9192ce33a810d5e247ec775eb97e89b38 namespace=k8s.io
Mar 4 01:37:40.963179 containerd[1511]: time="2026-03-04T01:37:40.963175295Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 4 01:37:41.687563 containerd[1511]: time="2026-03-04T01:37:41.687435753Z" level=info msg="CreateContainer within sandbox \"db35b47098b64e16163fcfca42a16eb39f8ab54796b2750f13ca796d10ab17f5\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Mar 4 01:37:41.708051 containerd[1511]: time="2026-03-04T01:37:41.706580221Z" level=info msg="CreateContainer within sandbox \"db35b47098b64e16163fcfca42a16eb39f8ab54796b2750f13ca796d10ab17f5\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"f9b75e55e69682593db323e9789c2697c3247f553a133a06205d64c18de674f9\""
Mar 4 01:37:41.709979 containerd[1511]: time="2026-03-04T01:37:41.708855714Z" level=info msg="StartContainer for \"f9b75e55e69682593db323e9789c2697c3247f553a133a06205d64c18de674f9\""
Mar 4 01:37:41.795584 systemd[1]: Started cri-containerd-f9b75e55e69682593db323e9789c2697c3247f553a133a06205d64c18de674f9.scope - libcontainer container f9b75e55e69682593db323e9789c2697c3247f553a133a06205d64c18de674f9.
Mar 4 01:37:41.882569 kubelet[2723]: I0304 01:37:41.878073 2723 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-dxx78" podStartSLOduration=2.152865955 podStartE2EDuration="15.878005228s" podCreationTimestamp="2026-03-04 01:37:26 +0000 UTC" firstStartedPulling="2026-03-04 01:37:26.809912932 +0000 UTC m=+5.532067369" lastFinishedPulling="2026-03-04 01:37:40.535052208 +0000 UTC m=+19.257206642" observedRunningTime="2026-03-04 01:37:41.695755695 +0000 UTC m=+20.417910156" watchObservedRunningTime="2026-03-04 01:37:41.878005228 +0000 UTC m=+20.600159708"
Mar 4 01:37:41.896555 systemd[1]: cri-containerd-f9b75e55e69682593db323e9789c2697c3247f553a133a06205d64c18de674f9.scope: Deactivated successfully.
Mar 4 01:37:41.925140 containerd[1511]: time="2026-03-04T01:37:41.925085360Z" level=info msg="StartContainer for \"f9b75e55e69682593db323e9789c2697c3247f553a133a06205d64c18de674f9\" returns successfully"
Mar 4 01:37:41.990395 containerd[1511]: time="2026-03-04T01:37:41.990245416Z" level=info msg="shim disconnected" id=f9b75e55e69682593db323e9789c2697c3247f553a133a06205d64c18de674f9 namespace=k8s.io
Mar 4 01:37:41.990395 containerd[1511]: time="2026-03-04T01:37:41.990318886Z" level=warning msg="cleaning up after shim disconnected" id=f9b75e55e69682593db323e9789c2697c3247f553a133a06205d64c18de674f9 namespace=k8s.io
Mar 4 01:37:41.990395 containerd[1511]: time="2026-03-04T01:37:41.990336563Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 4 01:37:42.162009 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f9b75e55e69682593db323e9789c2697c3247f553a133a06205d64c18de674f9-rootfs.mount: Deactivated successfully.
Mar 4 01:37:42.670329 containerd[1511]: time="2026-03-04T01:37:42.670226161Z" level=info msg="CreateContainer within sandbox \"db35b47098b64e16163fcfca42a16eb39f8ab54796b2750f13ca796d10ab17f5\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 4 01:37:42.710454 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount782867355.mount: Deactivated successfully.
Mar 4 01:37:42.717428 containerd[1511]: time="2026-03-04T01:37:42.717332867Z" level=info msg="CreateContainer within sandbox \"db35b47098b64e16163fcfca42a16eb39f8ab54796b2750f13ca796d10ab17f5\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"867a55202450012d71a2fbaccd9a14de867da352c80f64cf94c15c2d94512040\""
Mar 4 01:37:42.720419 containerd[1511]: time="2026-03-04T01:37:42.719925613Z" level=info msg="StartContainer for \"867a55202450012d71a2fbaccd9a14de867da352c80f64cf94c15c2d94512040\""
Mar 4 01:37:42.776627 systemd[1]: Started cri-containerd-867a55202450012d71a2fbaccd9a14de867da352c80f64cf94c15c2d94512040.scope - libcontainer container 867a55202450012d71a2fbaccd9a14de867da352c80f64cf94c15c2d94512040.
Mar 4 01:37:42.871716 containerd[1511]: time="2026-03-04T01:37:42.871660481Z" level=info msg="StartContainer for \"867a55202450012d71a2fbaccd9a14de867da352c80f64cf94c15c2d94512040\" returns successfully"
Mar 4 01:37:43.181416 kubelet[2723]: I0304 01:37:43.181339 2723 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Mar 4 01:37:43.242155 systemd[1]: Created slice kubepods-burstable-pod5dd01954_11bf_46d1_a0ef_9e456c99fa26.slice - libcontainer container kubepods-burstable-pod5dd01954_11bf_46d1_a0ef_9e456c99fa26.slice.
Mar 4 01:37:43.252852 systemd[1]: Created slice kubepods-burstable-pod8e622b2a_f5f8_4425_8c54_db58a14745e1.slice - libcontainer container kubepods-burstable-pod8e622b2a_f5f8_4425_8c54_db58a14745e1.slice.
Mar 4 01:37:43.320762 kubelet[2723]: I0304 01:37:43.320689 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8e622b2a-f5f8-4425-8c54-db58a14745e1-config-volume\") pod \"coredns-674b8bbfcf-fz6t9\" (UID: \"8e622b2a-f5f8-4425-8c54-db58a14745e1\") " pod="kube-system/coredns-674b8bbfcf-fz6t9"
Mar 4 01:37:43.321256 kubelet[2723]: I0304 01:37:43.321024 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hsnwl\" (UniqueName: \"kubernetes.io/projected/8e622b2a-f5f8-4425-8c54-db58a14745e1-kube-api-access-hsnwl\") pod \"coredns-674b8bbfcf-fz6t9\" (UID: \"8e622b2a-f5f8-4425-8c54-db58a14745e1\") " pod="kube-system/coredns-674b8bbfcf-fz6t9"
Mar 4 01:37:43.321256 kubelet[2723]: I0304 01:37:43.321089 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5dd01954-11bf-46d1-a0ef-9e456c99fa26-config-volume\") pod \"coredns-674b8bbfcf-xpj48\" (UID: \"5dd01954-11bf-46d1-a0ef-9e456c99fa26\") " pod="kube-system/coredns-674b8bbfcf-xpj48"
Mar 4 01:37:43.321256 kubelet[2723]: I0304 01:37:43.321137 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6clz\" (UniqueName: \"kubernetes.io/projected/5dd01954-11bf-46d1-a0ef-9e456c99fa26-kube-api-access-s6clz\") pod \"coredns-674b8bbfcf-xpj48\" (UID: \"5dd01954-11bf-46d1-a0ef-9e456c99fa26\") " pod="kube-system/coredns-674b8bbfcf-xpj48"
Mar 4 01:37:43.551790 containerd[1511]: time="2026-03-04T01:37:43.551723861Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-xpj48,Uid:5dd01954-11bf-46d1-a0ef-9e456c99fa26,Namespace:kube-system,Attempt:0,}"
Mar 4 01:37:43.558169 containerd[1511]: time="2026-03-04T01:37:43.557660558Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fz6t9,Uid:8e622b2a-f5f8-4425-8c54-db58a14745e1,Namespace:kube-system,Attempt:0,}"
Mar 4 01:37:43.773351 kubelet[2723]: I0304 01:37:43.772351 2723 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-8wrwg" podStartSLOduration=6.405804348 podStartE2EDuration="17.77232383s" podCreationTimestamp="2026-03-04 01:37:26 +0000 UTC" firstStartedPulling="2026-03-04 01:37:26.698485114 +0000 UTC m=+5.420639544" lastFinishedPulling="2026-03-04 01:37:38.065004581 +0000 UTC m=+16.787159026" observedRunningTime="2026-03-04 01:37:43.76927225 +0000 UTC m=+22.491426703" watchObservedRunningTime="2026-03-04 01:37:43.77232383 +0000 UTC m=+22.494478283"
Mar 4 01:37:45.737766 systemd-networkd[1421]: cilium_host: Link UP
Mar 4 01:37:45.738814 systemd-networkd[1421]: cilium_net: Link UP
Mar 4 01:37:45.740568 systemd-networkd[1421]: cilium_net: Gained carrier
Mar 4 01:37:45.741277 systemd-networkd[1421]: cilium_host: Gained carrier
Mar 4 01:37:45.899604 systemd-networkd[1421]: cilium_net: Gained IPv6LL
Mar 4 01:37:45.940036 systemd-networkd[1421]: cilium_vxlan: Link UP
Mar 4 01:37:45.940047 systemd-networkd[1421]: cilium_vxlan: Gained carrier
Mar 4 01:37:46.092621 systemd-networkd[1421]: cilium_host: Gained IPv6LL
Mar 4 01:37:46.513416 kernel: NET: Registered PF_ALG protocol family
Mar 4 01:37:47.171663 systemd-networkd[1421]: cilium_vxlan: Gained IPv6LL
Mar 4 01:37:47.622824 systemd-networkd[1421]: lxc_health: Link UP
Mar 4 01:37:47.631363 systemd-networkd[1421]: lxc_health: Gained carrier
Mar 4 01:37:48.217108 systemd-networkd[1421]: lxc3ac444273bd3: Link UP
Mar 4 01:37:48.224395 kernel: eth0: renamed from tmpac58c
Mar 4 01:37:48.233590 systemd-networkd[1421]: lxc3ac444273bd3: Gained carrier
Mar 4 01:37:48.253562 kernel: eth0: renamed from tmp768ce
Mar 4 01:37:48.256644 systemd-networkd[1421]: lxcdd4795e5279c: Link UP
Mar 4 01:37:48.267556 systemd-networkd[1421]: lxcdd4795e5279c: Gained carrier
Mar 4 01:37:48.964664 systemd-networkd[1421]: lxc_health: Gained IPv6LL
Mar 4 01:37:49.603589 systemd-networkd[1421]: lxcdd4795e5279c: Gained IPv6LL
Mar 4 01:37:50.116534 systemd-networkd[1421]: lxc3ac444273bd3: Gained IPv6LL
Mar 4 01:37:54.156458 containerd[1511]: time="2026-03-04T01:37:54.156145002Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 4 01:37:54.156458 containerd[1511]: time="2026-03-04T01:37:54.156258805Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 4 01:37:54.156458 containerd[1511]: time="2026-03-04T01:37:54.156319077Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 4 01:37:54.157943 containerd[1511]: time="2026-03-04T01:37:54.156503923Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 4 01:37:54.190652 containerd[1511]: time="2026-03-04T01:37:54.188685490Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 4 01:37:54.190652 containerd[1511]: time="2026-03-04T01:37:54.188812933Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 4 01:37:54.190652 containerd[1511]: time="2026-03-04T01:37:54.188838351Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 4 01:37:54.190652 containerd[1511]: time="2026-03-04T01:37:54.188980265Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 4 01:37:54.242308 systemd[1]: run-containerd-runc-k8s.io-768ce113e7ee484de77b185d649362192a3131a72f0490f93d5fb36d42b28e4a-runc.jDlyIv.mount: Deactivated successfully.
Mar 4 01:37:54.267446 systemd[1]: Started cri-containerd-768ce113e7ee484de77b185d649362192a3131a72f0490f93d5fb36d42b28e4a.scope - libcontainer container 768ce113e7ee484de77b185d649362192a3131a72f0490f93d5fb36d42b28e4a.
Mar 4 01:37:54.275641 systemd[1]: Started cri-containerd-ac58c9f9237e80b882ece5ff03b85f1646e6b073b479721050bbb215326e4b62.scope - libcontainer container ac58c9f9237e80b882ece5ff03b85f1646e6b073b479721050bbb215326e4b62.
Mar 4 01:37:54.438633 containerd[1511]: time="2026-03-04T01:37:54.438441465Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-xpj48,Uid:5dd01954-11bf-46d1-a0ef-9e456c99fa26,Namespace:kube-system,Attempt:0,} returns sandbox id \"768ce113e7ee484de77b185d649362192a3131a72f0490f93d5fb36d42b28e4a\""
Mar 4 01:37:54.449973 containerd[1511]: time="2026-03-04T01:37:54.449921280Z" level=info msg="CreateContainer within sandbox \"768ce113e7ee484de77b185d649362192a3131a72f0490f93d5fb36d42b28e4a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 4 01:37:54.460420 containerd[1511]: time="2026-03-04T01:37:54.459799372Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fz6t9,Uid:8e622b2a-f5f8-4425-8c54-db58a14745e1,Namespace:kube-system,Attempt:0,} returns sandbox id \"ac58c9f9237e80b882ece5ff03b85f1646e6b073b479721050bbb215326e4b62\""
Mar 4 01:37:54.475147 containerd[1511]: time="2026-03-04T01:37:54.474890752Z" level=info msg="CreateContainer within sandbox \"ac58c9f9237e80b882ece5ff03b85f1646e6b073b479721050bbb215326e4b62\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 4 01:37:54.507598 containerd[1511]: time="2026-03-04T01:37:54.507390449Z" level=info msg="CreateContainer within sandbox \"768ce113e7ee484de77b185d649362192a3131a72f0490f93d5fb36d42b28e4a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5fbcb8bb8db42fe84bb68ebdb113a2d63ab9099502697f253a545f21362d6dba\""
Mar 4 01:37:54.509289 containerd[1511]: time="2026-03-04T01:37:54.508604372Z" level=info msg="StartContainer for \"5fbcb8bb8db42fe84bb68ebdb113a2d63ab9099502697f253a545f21362d6dba\""
Mar 4 01:37:54.509974 containerd[1511]: time="2026-03-04T01:37:54.509783834Z" level=info msg="CreateContainer within sandbox \"ac58c9f9237e80b882ece5ff03b85f1646e6b073b479721050bbb215326e4b62\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1af9d4324a1ffd8345686d795cc92b7fdbcf6e0d2dab0b06f2fa6e70bacc37a1\""
Mar 4 01:37:54.510271 containerd[1511]: time="2026-03-04T01:37:54.510240671Z" level=info msg="StartContainer for \"1af9d4324a1ffd8345686d795cc92b7fdbcf6e0d2dab0b06f2fa6e70bacc37a1\""
Mar 4 01:37:54.552579 systemd[1]: Started cri-containerd-1af9d4324a1ffd8345686d795cc92b7fdbcf6e0d2dab0b06f2fa6e70bacc37a1.scope - libcontainer container 1af9d4324a1ffd8345686d795cc92b7fdbcf6e0d2dab0b06f2fa6e70bacc37a1.
Mar 4 01:37:54.569554 systemd[1]: Started cri-containerd-5fbcb8bb8db42fe84bb68ebdb113a2d63ab9099502697f253a545f21362d6dba.scope - libcontainer container 5fbcb8bb8db42fe84bb68ebdb113a2d63ab9099502697f253a545f21362d6dba.
Mar 4 01:37:54.635910 containerd[1511]: time="2026-03-04T01:37:54.635842424Z" level=info msg="StartContainer for \"1af9d4324a1ffd8345686d795cc92b7fdbcf6e0d2dab0b06f2fa6e70bacc37a1\" returns successfully"
Mar 4 01:37:54.636443 containerd[1511]: time="2026-03-04T01:37:54.636256233Z" level=info msg="StartContainer for \"5fbcb8bb8db42fe84bb68ebdb113a2d63ab9099502697f253a545f21362d6dba\" returns successfully"
Mar 4 01:37:54.801715 kubelet[2723]: I0304 01:37:54.801269 2723 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-fz6t9" podStartSLOduration=28.80122139 podStartE2EDuration="28.80122139s" podCreationTimestamp="2026-03-04 01:37:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-04 01:37:54.798199932 +0000 UTC m=+33.520354385" watchObservedRunningTime="2026-03-04 01:37:54.80122139 +0000 UTC m=+33.523375828"
Mar 4 01:37:54.829055 kubelet[2723]: I0304 01:37:54.828888 2723 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-xpj48" podStartSLOduration=28.82886685 podStartE2EDuration="28.82886685s" podCreationTimestamp="2026-03-04 01:37:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-04 01:37:54.826474598 +0000 UTC m=+33.548629054" watchObservedRunningTime="2026-03-04 01:37:54.82886685 +0000 UTC m=+33.551021284"
Mar 4 01:38:26.059632 systemd[1]: Started sshd@16-10.230.15.118:22-20.161.92.111:50214.service - OpenSSH per-connection server daemon (20.161.92.111:50214).
Mar 4 01:38:26.699516 sshd[4121]: Accepted publickey for core from 20.161.92.111 port 50214 ssh2: RSA SHA256:phL7137i5y6DHtmwXYw8sU0DtZKGvJBo2Tpr6jEeFOI
Mar 4 01:38:26.703086 sshd[4121]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 01:38:26.710882 systemd-logind[1487]: New session 12 of user core.
Mar 4 01:38:26.725694 systemd[1]: Started session-12.scope - Session 12 of User core.
Mar 4 01:38:27.741445 sshd[4121]: pam_unix(sshd:session): session closed for user core
Mar 4 01:38:27.749875 systemd[1]: sshd@16-10.230.15.118:22-20.161.92.111:50214.service: Deactivated successfully.
Mar 4 01:38:27.753201 systemd[1]: session-12.scope: Deactivated successfully.
Mar 4 01:38:27.755341 systemd-logind[1487]: Session 12 logged out. Waiting for processes to exit.
Mar 4 01:38:27.757444 systemd-logind[1487]: Removed session 12.
Mar 4 01:38:32.851734 systemd[1]: Started sshd@17-10.230.15.118:22-20.161.92.111:36436.service - OpenSSH per-connection server daemon (20.161.92.111:36436).
Mar 4 01:38:33.467632 sshd[4140]: Accepted publickey for core from 20.161.92.111 port 36436 ssh2: RSA SHA256:phL7137i5y6DHtmwXYw8sU0DtZKGvJBo2Tpr6jEeFOI
Mar 4 01:38:33.469836 sshd[4140]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 01:38:33.478632 systemd-logind[1487]: New session 13 of user core.
Mar 4 01:38:33.485598 systemd[1]: Started session-13.scope - Session 13 of User core.
Mar 4 01:38:33.996399 sshd[4140]: pam_unix(sshd:session): session closed for user core
Mar 4 01:38:34.001528 systemd[1]: sshd@17-10.230.15.118:22-20.161.92.111:36436.service: Deactivated successfully.
Mar 4 01:38:34.004973 systemd[1]: session-13.scope: Deactivated successfully.
Mar 4 01:38:34.006539 systemd-logind[1487]: Session 13 logged out. Waiting for processes to exit.
Mar 4 01:38:34.007861 systemd-logind[1487]: Removed session 13.
Mar 4 01:38:39.105776 systemd[1]: Started sshd@18-10.230.15.118:22-20.161.92.111:36444.service - OpenSSH per-connection server daemon (20.161.92.111:36444).
Mar 4 01:38:39.445803 systemd[1]: Started sshd@19-10.230.15.118:22-103.189.235.30:38136.service - OpenSSH per-connection server daemon (103.189.235.30:38136).
Mar 4 01:38:39.684722 sshd[4154]: Accepted publickey for core from 20.161.92.111 port 36444 ssh2: RSA SHA256:phL7137i5y6DHtmwXYw8sU0DtZKGvJBo2Tpr6jEeFOI
Mar 4 01:38:39.687202 sshd[4154]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 01:38:39.700144 systemd-logind[1487]: New session 14 of user core.
Mar 4 01:38:39.706919 systemd[1]: Started session-14.scope - Session 14 of User core.
Mar 4 01:38:40.179788 sshd[4154]: pam_unix(sshd:session): session closed for user core
Mar 4 01:38:40.183991 systemd[1]: sshd@18-10.230.15.118:22-20.161.92.111:36444.service: Deactivated successfully.
Mar 4 01:38:40.187326 systemd[1]: session-14.scope: Deactivated successfully.
Mar 4 01:38:40.189453 systemd-logind[1487]: Session 14 logged out. Waiting for processes to exit.
Mar 4 01:38:40.190848 systemd-logind[1487]: Removed session 14.
Mar 4 01:38:40.786116 sshd[4157]: Received disconnect from 103.189.235.30 port 38136:11: Bye Bye [preauth]
Mar 4 01:38:40.786116 sshd[4157]: Disconnected from authenticating user root 103.189.235.30 port 38136 [preauth]
Mar 4 01:38:40.789039 systemd[1]: sshd@19-10.230.15.118:22-103.189.235.30:38136.service: Deactivated successfully.
Mar 4 01:38:42.385790 systemd[1]: Started sshd@20-10.230.15.118:22-222.108.0.231:49572.service - OpenSSH per-connection server daemon (222.108.0.231:49572).
Mar 4 01:38:44.080775 sshd[4173]: Received disconnect from 222.108.0.231 port 49572:11: Bye Bye [preauth]
Mar 4 01:38:44.080775 sshd[4173]: Disconnected from authenticating user root 222.108.0.231 port 49572 [preauth]
Mar 4 01:38:44.084028 systemd[1]: sshd@20-10.230.15.118:22-222.108.0.231:49572.service: Deactivated successfully.
Mar 4 01:38:45.291804 systemd[1]: Started sshd@21-10.230.15.118:22-20.161.92.111:40432.service - OpenSSH per-connection server daemon (20.161.92.111:40432).
Mar 4 01:38:45.890818 sshd[4179]: Accepted publickey for core from 20.161.92.111 port 40432 ssh2: RSA SHA256:phL7137i5y6DHtmwXYw8sU0DtZKGvJBo2Tpr6jEeFOI
Mar 4 01:38:45.895891 sshd[4179]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 01:38:45.905389 systemd-logind[1487]: New session 15 of user core.
Mar 4 01:38:45.915658 systemd[1]: Started session-15.scope - Session 15 of User core.
Mar 4 01:38:46.401172 sshd[4179]: pam_unix(sshd:session): session closed for user core
Mar 4 01:38:46.406529 systemd[1]: sshd@21-10.230.15.118:22-20.161.92.111:40432.service: Deactivated successfully.
Mar 4 01:38:46.409348 systemd[1]: session-15.scope: Deactivated successfully.
Mar 4 01:38:46.411011 systemd-logind[1487]: Session 15 logged out. Waiting for processes to exit.
Mar 4 01:38:46.413359 systemd-logind[1487]: Removed session 15.
Mar 4 01:38:46.519763 systemd[1]: Started sshd@22-10.230.15.118:22-20.161.92.111:40442.service - OpenSSH per-connection server daemon (20.161.92.111:40442).
Mar 4 01:38:47.104712 sshd[4192]: Accepted publickey for core from 20.161.92.111 port 40442 ssh2: RSA SHA256:phL7137i5y6DHtmwXYw8sU0DtZKGvJBo2Tpr6jEeFOI
Mar 4 01:38:47.106854 sshd[4192]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 01:38:47.113497 systemd-logind[1487]: New session 16 of user core.
Mar 4 01:38:47.118581 systemd[1]: Started session-16.scope - Session 16 of User core.
Mar 4 01:38:47.680641 sshd[4192]: pam_unix(sshd:session): session closed for user core
Mar 4 01:38:47.686917 systemd[1]: sshd@22-10.230.15.118:22-20.161.92.111:40442.service: Deactivated successfully.
Mar 4 01:38:47.690011 systemd[1]: session-16.scope: Deactivated successfully.
Mar 4 01:38:47.691248 systemd-logind[1487]: Session 16 logged out. Waiting for processes to exit.
Mar 4 01:38:47.692784 systemd-logind[1487]: Removed session 16.
Mar 4 01:38:47.789753 systemd[1]: Started sshd@23-10.230.15.118:22-20.161.92.111:40448.service - OpenSSH per-connection server daemon (20.161.92.111:40448).
Mar 4 01:38:48.374886 sshd[4203]: Accepted publickey for core from 20.161.92.111 port 40448 ssh2: RSA SHA256:phL7137i5y6DHtmwXYw8sU0DtZKGvJBo2Tpr6jEeFOI
Mar 4 01:38:48.375817 sshd[4203]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 01:38:48.384529 systemd-logind[1487]: New session 17 of user core.
Mar 4 01:38:48.391570 systemd[1]: Started session-17.scope - Session 17 of User core.
Mar 4 01:38:48.884912 sshd[4203]: pam_unix(sshd:session): session closed for user core
Mar 4 01:38:48.890091 systemd[1]: sshd@23-10.230.15.118:22-20.161.92.111:40448.service: Deactivated successfully.
Mar 4 01:38:48.892476 systemd[1]: session-17.scope: Deactivated successfully.
Mar 4 01:38:48.893777 systemd-logind[1487]: Session 17 logged out. Waiting for processes to exit.
Mar 4 01:38:48.895130 systemd-logind[1487]: Removed session 17.
Mar 4 01:38:53.997771 systemd[1]: Started sshd@24-10.230.15.118:22-20.161.92.111:54502.service - OpenSSH per-connection server daemon (20.161.92.111:54502).
Mar 4 01:38:54.612345 sshd[4216]: Accepted publickey for core from 20.161.92.111 port 54502 ssh2: RSA SHA256:phL7137i5y6DHtmwXYw8sU0DtZKGvJBo2Tpr6jEeFOI
Mar 4 01:38:54.613786 sshd[4216]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 01:38:54.621815 systemd-logind[1487]: New session 18 of user core.
Mar 4 01:38:54.628853 systemd[1]: Started session-18.scope - Session 18 of User core.
Mar 4 01:38:55.125640 sshd[4216]: pam_unix(sshd:session): session closed for user core
Mar 4 01:38:55.129430 systemd[1]: sshd@24-10.230.15.118:22-20.161.92.111:54502.service: Deactivated successfully.
Mar 4 01:38:55.132258 systemd[1]: session-18.scope: Deactivated successfully.
Mar 4 01:38:55.134592 systemd-logind[1487]: Session 18 logged out. Waiting for processes to exit.
Mar 4 01:38:55.136542 systemd-logind[1487]: Removed session 18.
Mar 4 01:38:55.220969 systemd[1]: Started sshd@25-10.230.15.118:22-20.161.92.111:54514.service - OpenSSH per-connection server daemon (20.161.92.111:54514).
Mar 4 01:38:55.798406 sshd[4229]: Accepted publickey for core from 20.161.92.111 port 54514 ssh2: RSA SHA256:phL7137i5y6DHtmwXYw8sU0DtZKGvJBo2Tpr6jEeFOI
Mar 4 01:38:55.800402 sshd[4229]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 01:38:55.807198 systemd-logind[1487]: New session 19 of user core.
Mar 4 01:38:55.812584 systemd[1]: Started session-19.scope - Session 19 of User core.
Mar 4 01:38:56.561549 sshd[4229]: pam_unix(sshd:session): session closed for user core
Mar 4 01:38:56.575034 systemd[1]: sshd@25-10.230.15.118:22-20.161.92.111:54514.service: Deactivated successfully.
Mar 4 01:38:56.577067 systemd-logind[1487]: Session 19 logged out. Waiting for processes to exit.
Mar 4 01:38:56.581346 systemd[1]: session-19.scope: Deactivated successfully.
Mar 4 01:38:56.589140 systemd-logind[1487]: Removed session 19.
Mar 4 01:38:56.680205 systemd[1]: Started sshd@26-10.230.15.118:22-20.161.92.111:54518.service - OpenSSH per-connection server daemon (20.161.92.111:54518).
Mar 4 01:38:57.286766 sshd[4240]: Accepted publickey for core from 20.161.92.111 port 54518 ssh2: RSA SHA256:phL7137i5y6DHtmwXYw8sU0DtZKGvJBo2Tpr6jEeFOI
Mar 4 01:38:57.288985 sshd[4240]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 01:38:57.297904 systemd-logind[1487]: New session 20 of user core.
Mar 4 01:38:57.303553 systemd[1]: Started session-20.scope - Session 20 of User core.
Mar 4 01:38:58.531281 sshd[4240]: pam_unix(sshd:session): session closed for user core
Mar 4 01:38:58.544041 systemd[1]: sshd@26-10.230.15.118:22-20.161.92.111:54518.service: Deactivated successfully.
Mar 4 01:38:58.548297 systemd[1]: session-20.scope: Deactivated successfully.
Mar 4 01:38:58.549754 systemd-logind[1487]: Session 20 logged out. Waiting for processes to exit.
Mar 4 01:38:58.551291 systemd-logind[1487]: Removed session 20.
Mar 4 01:38:58.635736 systemd[1]: Started sshd@27-10.230.15.118:22-20.161.92.111:54530.service - OpenSSH per-connection server daemon (20.161.92.111:54530).
Mar 4 01:38:59.201631 sshd[4260]: Accepted publickey for core from 20.161.92.111 port 54530 ssh2: RSA SHA256:phL7137i5y6DHtmwXYw8sU0DtZKGvJBo2Tpr6jEeFOI
Mar 4 01:38:59.203775 sshd[4260]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 01:38:59.210618 systemd-logind[1487]: New session 21 of user core.
Mar 4 01:38:59.218628 systemd[1]: Started session-21.scope - Session 21 of User core.
Mar 4 01:38:59.885787 sshd[4260]: pam_unix(sshd:session): session closed for user core
Mar 4 01:38:59.891331 systemd[1]: sshd@27-10.230.15.118:22-20.161.92.111:54530.service: Deactivated successfully.
Mar 4 01:38:59.893684 systemd[1]: session-21.scope: Deactivated successfully.
Mar 4 01:38:59.894762 systemd-logind[1487]: Session 21 logged out. Waiting for processes to exit.
Mar 4 01:38:59.896359 systemd-logind[1487]: Removed session 21.
Mar 4 01:38:59.995548 systemd[1]: Started sshd@28-10.230.15.118:22-20.161.92.111:54544.service - OpenSSH per-connection server daemon (20.161.92.111:54544).
Mar 4 01:39:00.659637 sshd[4271]: Accepted publickey for core from 20.161.92.111 port 54544 ssh2: RSA SHA256:phL7137i5y6DHtmwXYw8sU0DtZKGvJBo2Tpr6jEeFOI
Mar 4 01:39:00.660196 sshd[4271]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 01:39:00.668499 systemd-logind[1487]: New session 22 of user core.
Mar 4 01:39:00.674588 systemd[1]: Started session-22.scope - Session 22 of User core.
Mar 4 01:39:01.162677 sshd[4271]: pam_unix(sshd:session): session closed for user core
Mar 4 01:39:01.167836 systemd[1]: sshd@28-10.230.15.118:22-20.161.92.111:54544.service: Deactivated successfully.
Mar 4 01:39:01.171582 systemd[1]: session-22.scope: Deactivated successfully.
Mar 4 01:39:01.173450 systemd-logind[1487]: Session 22 logged out. Waiting for processes to exit.
Mar 4 01:39:01.174906 systemd-logind[1487]: Removed session 22.
Mar 4 01:39:06.267767 systemd[1]: Started sshd@29-10.230.15.118:22-20.161.92.111:33874.service - OpenSSH per-connection server daemon (20.161.92.111:33874).
Mar 4 01:39:06.849424 sshd[4285]: Accepted publickey for core from 20.161.92.111 port 33874 ssh2: RSA SHA256:phL7137i5y6DHtmwXYw8sU0DtZKGvJBo2Tpr6jEeFOI
Mar 4 01:39:06.851782 sshd[4285]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 01:39:06.859002 systemd-logind[1487]: New session 23 of user core.
Mar 4 01:39:06.865633 systemd[1]: Started session-23.scope - Session 23 of User core.
Mar 4 01:39:07.346596 sshd[4285]: pam_unix(sshd:session): session closed for user core
Mar 4 01:39:07.352742 systemd[1]: sshd@29-10.230.15.118:22-20.161.92.111:33874.service: Deactivated successfully.
Mar 4 01:39:07.355572 systemd[1]: session-23.scope: Deactivated successfully.
Mar 4 01:39:07.356818 systemd-logind[1487]: Session 23 logged out. Waiting for processes to exit.
Mar 4 01:39:07.359761 systemd-logind[1487]: Removed session 23.
Mar 4 01:39:12.459731 systemd[1]: Started sshd@30-10.230.15.118:22-20.161.92.111:34010.service - OpenSSH per-connection server daemon (20.161.92.111:34010).
Mar 4 01:39:13.060463 sshd[4298]: Accepted publickey for core from 20.161.92.111 port 34010 ssh2: RSA SHA256:phL7137i5y6DHtmwXYw8sU0DtZKGvJBo2Tpr6jEeFOI
Mar 4 01:39:13.063424 sshd[4298]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 01:39:13.070843 systemd-logind[1487]: New session 24 of user core.
Mar 4 01:39:13.074669 systemd[1]: Started session-24.scope - Session 24 of User core.
Mar 4 01:39:13.569125 sshd[4298]: pam_unix(sshd:session): session closed for user core
Mar 4 01:39:13.580412 systemd[1]: sshd@30-10.230.15.118:22-20.161.92.111:34010.service: Deactivated successfully.
Mar 4 01:39:13.583835 systemd[1]: session-24.scope: Deactivated successfully.
Mar 4 01:39:13.585295 systemd-logind[1487]: Session 24 logged out. Waiting for processes to exit.
Mar 4 01:39:13.587048 systemd-logind[1487]: Removed session 24.
Mar 4 01:39:13.677751 systemd[1]: Started sshd@31-10.230.15.118:22-20.161.92.111:34016.service - OpenSSH per-connection server daemon (20.161.92.111:34016).
Mar 4 01:39:14.270521 sshd[4310]: Accepted publickey for core from 20.161.92.111 port 34016 ssh2: RSA SHA256:phL7137i5y6DHtmwXYw8sU0DtZKGvJBo2Tpr6jEeFOI
Mar 4 01:39:14.272689 sshd[4310]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 01:39:14.279726 systemd-logind[1487]: New session 25 of user core.
Mar 4 01:39:14.290662 systemd[1]: Started session-25.scope - Session 25 of User core.
Mar 4 01:39:16.400424 containerd[1511]: time="2026-03-04T01:39:16.399647753Z" level=info msg="StopContainer for \"ab53b79a3646f3addd0ae7522fcc65f7da84fe33597ac6cf34bd755e68e3b2f4\" with timeout 30 (s)"
Mar 4 01:39:16.404078 containerd[1511]: time="2026-03-04T01:39:16.403672397Z" level=info msg="Stop container \"ab53b79a3646f3addd0ae7522fcc65f7da84fe33597ac6cf34bd755e68e3b2f4\" with signal terminated"
Mar 4 01:39:16.437741 systemd[1]: cri-containerd-ab53b79a3646f3addd0ae7522fcc65f7da84fe33597ac6cf34bd755e68e3b2f4.scope: Deactivated successfully.
Mar 4 01:39:16.483804 containerd[1511]: time="2026-03-04T01:39:16.483711192Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 4 01:39:16.490424 containerd[1511]: time="2026-03-04T01:39:16.490287422Z" level=info msg="StopContainer for \"867a55202450012d71a2fbaccd9a14de867da352c80f64cf94c15c2d94512040\" with timeout 2 (s)"
Mar 4 01:39:16.490800 containerd[1511]: time="2026-03-04T01:39:16.490770920Z" level=info msg="Stop container \"867a55202450012d71a2fbaccd9a14de867da352c80f64cf94c15c2d94512040\" with signal terminated"
Mar 4 01:39:16.497607 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ab53b79a3646f3addd0ae7522fcc65f7da84fe33597ac6cf34bd755e68e3b2f4-rootfs.mount: Deactivated successfully.
Mar 4 01:39:16.504424 systemd-networkd[1421]: lxc_health: Link DOWN
Mar 4 01:39:16.506446 systemd-networkd[1421]: lxc_health: Lost carrier
Mar 4 01:39:16.514989 containerd[1511]: time="2026-03-04T01:39:16.513464610Z" level=info msg="shim disconnected" id=ab53b79a3646f3addd0ae7522fcc65f7da84fe33597ac6cf34bd755e68e3b2f4 namespace=k8s.io
Mar 4 01:39:16.514989 containerd[1511]: time="2026-03-04T01:39:16.513599751Z" level=warning msg="cleaning up after shim disconnected" id=ab53b79a3646f3addd0ae7522fcc65f7da84fe33597ac6cf34bd755e68e3b2f4 namespace=k8s.io
Mar 4 01:39:16.514989 containerd[1511]: time="2026-03-04T01:39:16.513625309Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 4 01:39:16.536921 systemd[1]: cri-containerd-867a55202450012d71a2fbaccd9a14de867da352c80f64cf94c15c2d94512040.scope: Deactivated successfully.
Mar 4 01:39:16.537313 systemd[1]: cri-containerd-867a55202450012d71a2fbaccd9a14de867da352c80f64cf94c15c2d94512040.scope: Consumed 10.443s CPU time.
Mar 4 01:39:16.556714 containerd[1511]: time="2026-03-04T01:39:16.556659529Z" level=info msg="StopContainer for \"ab53b79a3646f3addd0ae7522fcc65f7da84fe33597ac6cf34bd755e68e3b2f4\" returns successfully"
Mar 4 01:39:16.561424 containerd[1511]: time="2026-03-04T01:39:16.560511184Z" level=info msg="StopPodSandbox for \"81038f357f34e963d0c31f3295e02db3494cca623068eef8402de4857d67cecd\""
Mar 4 01:39:16.561424 containerd[1511]: time="2026-03-04T01:39:16.560587067Z" level=info msg="Container to stop \"ab53b79a3646f3addd0ae7522fcc65f7da84fe33597ac6cf34bd755e68e3b2f4\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 4 01:39:16.568908 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-81038f357f34e963d0c31f3295e02db3494cca623068eef8402de4857d67cecd-shm.mount: Deactivated successfully.
Mar 4 01:39:16.587084 systemd[1]: cri-containerd-81038f357f34e963d0c31f3295e02db3494cca623068eef8402de4857d67cecd.scope: Deactivated successfully.
Mar 4 01:39:16.593231 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-867a55202450012d71a2fbaccd9a14de867da352c80f64cf94c15c2d94512040-rootfs.mount: Deactivated successfully.
Mar 4 01:39:16.607127 containerd[1511]: time="2026-03-04T01:39:16.606815416Z" level=info msg="shim disconnected" id=867a55202450012d71a2fbaccd9a14de867da352c80f64cf94c15c2d94512040 namespace=k8s.io
Mar 4 01:39:16.607127 containerd[1511]: time="2026-03-04T01:39:16.606882732Z" level=warning msg="cleaning up after shim disconnected" id=867a55202450012d71a2fbaccd9a14de867da352c80f64cf94c15c2d94512040 namespace=k8s.io
Mar 4 01:39:16.607127 containerd[1511]: time="2026-03-04T01:39:16.606900424Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 4 01:39:16.632975 containerd[1511]: time="2026-03-04T01:39:16.632916331Z" level=warning msg="cleanup warnings time=\"2026-03-04T01:39:16Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Mar 4 01:39:16.636911 containerd[1511]: time="2026-03-04T01:39:16.636784163Z" level=info msg="StopContainer for \"867a55202450012d71a2fbaccd9a14de867da352c80f64cf94c15c2d94512040\" returns successfully"
Mar 4 01:39:16.638941 containerd[1511]: time="2026-03-04T01:39:16.638703673Z" level=info msg="StopPodSandbox for \"db35b47098b64e16163fcfca42a16eb39f8ab54796b2750f13ca796d10ab17f5\""
Mar 4 01:39:16.638941 containerd[1511]: time="2026-03-04T01:39:16.638744667Z" level=info msg="Container to stop \"ac81cc410e7bbbcddad1421e5c76d3fcb57398f9707d8fc5de65418dd1393fdd\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 4 01:39:16.638941 containerd[1511]: time="2026-03-04T01:39:16.638764193Z" level=info msg="Container to stop \"a90f84c9fc869192174d1c2636f8beb9192ce33a810d5e247ec775eb97e89b38\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 4 01:39:16.638941 containerd[1511]: time="2026-03-04T01:39:16.638779624Z" level=info msg="Container to stop \"867a55202450012d71a2fbaccd9a14de867da352c80f64cf94c15c2d94512040\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 4 01:39:16.638941 containerd[1511]: time="2026-03-04T01:39:16.638796001Z" level=info msg="Container to stop \"df4befd8f9474823acd7491d8e7a281893a78c1f35bd133e4569b98ef105ffa3\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 4 01:39:16.638941 containerd[1511]: time="2026-03-04T01:39:16.638811043Z" level=info msg="Container to stop \"f9b75e55e69682593db323e9789c2697c3247f553a133a06205d64c18de674f9\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 4 01:39:16.642860 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-db35b47098b64e16163fcfca42a16eb39f8ab54796b2750f13ca796d10ab17f5-shm.mount: Deactivated successfully.
Mar 4 01:39:16.657725 kubelet[2723]: E0304 01:39:16.656952 2723 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Mar 4 01:39:16.665259 containerd[1511]: time="2026-03-04T01:39:16.665175907Z" level=info msg="shim disconnected" id=81038f357f34e963d0c31f3295e02db3494cca623068eef8402de4857d67cecd namespace=k8s.io
Mar 4 01:39:16.665724 containerd[1511]: time="2026-03-04T01:39:16.665595544Z" level=warning msg="cleaning up after shim disconnected" id=81038f357f34e963d0c31f3295e02db3494cca623068eef8402de4857d67cecd namespace=k8s.io
Mar 4 01:39:16.665724 containerd[1511]: time="2026-03-04T01:39:16.665620561Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 4 01:39:16.668922 systemd[1]: cri-containerd-db35b47098b64e16163fcfca42a16eb39f8ab54796b2750f13ca796d10ab17f5.scope: Deactivated successfully.
Mar 4 01:39:16.709070 containerd[1511]: time="2026-03-04T01:39:16.708864460Z" level=info msg="TearDown network for sandbox \"81038f357f34e963d0c31f3295e02db3494cca623068eef8402de4857d67cecd\" successfully"
Mar 4 01:39:16.709070 containerd[1511]: time="2026-03-04T01:39:16.708950275Z" level=info msg="StopPodSandbox for \"81038f357f34e963d0c31f3295e02db3494cca623068eef8402de4857d67cecd\" returns successfully"
Mar 4 01:39:16.716861 containerd[1511]: time="2026-03-04T01:39:16.714997346Z" level=info msg="shim disconnected" id=db35b47098b64e16163fcfca42a16eb39f8ab54796b2750f13ca796d10ab17f5 namespace=k8s.io
Mar 4 01:39:16.717005 containerd[1511]: time="2026-03-04T01:39:16.715617170Z" level=warning msg="cleaning up after shim disconnected" id=db35b47098b64e16163fcfca42a16eb39f8ab54796b2750f13ca796d10ab17f5 namespace=k8s.io
Mar 4 01:39:16.717127 containerd[1511]: time="2026-03-04T01:39:16.717102524Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 4 01:39:16.743760 containerd[1511]: time="2026-03-04T01:39:16.743710741Z" level=info msg="TearDown network for sandbox \"db35b47098b64e16163fcfca42a16eb39f8ab54796b2750f13ca796d10ab17f5\" successfully"
Mar 4 01:39:16.744089 containerd[1511]: time="2026-03-04T01:39:16.744061664Z" level=info msg="StopPodSandbox for \"db35b47098b64e16163fcfca42a16eb39f8ab54796b2750f13ca796d10ab17f5\" returns successfully"
Mar 4 01:39:16.757857 kubelet[2723]: I0304 01:39:16.757819 2723 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/144fd3d6-851b-491a-baeb-e95e71ed270e-cilium-config-path\") pod \"144fd3d6-851b-491a-baeb-e95e71ed270e\" (UID: \"144fd3d6-851b-491a-baeb-e95e71ed270e\") "
Mar 4 01:39:16.758289 kubelet[2723]: I0304 01:39:16.758091 2723 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zj8bx\" (UniqueName: \"kubernetes.io/projected/144fd3d6-851b-491a-baeb-e95e71ed270e-kube-api-access-zj8bx\") pod \"144fd3d6-851b-491a-baeb-e95e71ed270e\" (UID: \"144fd3d6-851b-491a-baeb-e95e71ed270e\") "
Mar 4 01:39:16.773158 kubelet[2723]: I0304 01:39:16.772973 2723 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/144fd3d6-851b-491a-baeb-e95e71ed270e-kube-api-access-zj8bx" (OuterVolumeSpecName: "kube-api-access-zj8bx") pod "144fd3d6-851b-491a-baeb-e95e71ed270e" (UID: "144fd3d6-851b-491a-baeb-e95e71ed270e"). InnerVolumeSpecName "kube-api-access-zj8bx". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 4 01:39:16.773604 kubelet[2723]: I0304 01:39:16.770587 2723 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/144fd3d6-851b-491a-baeb-e95e71ed270e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "144fd3d6-851b-491a-baeb-e95e71ed270e" (UID: "144fd3d6-851b-491a-baeb-e95e71ed270e"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Mar 4 01:39:16.859417 kubelet[2723]: I0304 01:39:16.859161 2723 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ab2081b0-842a-4e4a-9e87-fbf4ac660aa8-xtables-lock\") pod \"ab2081b0-842a-4e4a-9e87-fbf4ac660aa8\" (UID: \"ab2081b0-842a-4e4a-9e87-fbf4ac660aa8\") "
Mar 4 01:39:16.859417 kubelet[2723]: I0304 01:39:16.859223 2723 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ab2081b0-842a-4e4a-9e87-fbf4ac660aa8-cilium-run\") pod \"ab2081b0-842a-4e4a-9e87-fbf4ac660aa8\" (UID: \"ab2081b0-842a-4e4a-9e87-fbf4ac660aa8\") "
Mar 4 01:39:16.859417 kubelet[2723]: I0304 01:39:16.859279 2723 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ab2081b0-842a-4e4a-9e87-fbf4ac660aa8-hubble-tls\") pod \"ab2081b0-842a-4e4a-9e87-fbf4ac660aa8\" (UID: \"ab2081b0-842a-4e4a-9e87-fbf4ac660aa8\") "
Mar 4 01:39:16.859417 kubelet[2723]: I0304 01:39:16.859281 2723 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ab2081b0-842a-4e4a-9e87-fbf4ac660aa8-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "ab2081b0-842a-4e4a-9e87-fbf4ac660aa8" (UID: "ab2081b0-842a-4e4a-9e87-fbf4ac660aa8"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 4 01:39:16.859417 kubelet[2723]: I0304 01:39:16.859309 2723 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ab2081b0-842a-4e4a-9e87-fbf4ac660aa8-lib-modules\") pod \"ab2081b0-842a-4e4a-9e87-fbf4ac660aa8\" (UID: \"ab2081b0-842a-4e4a-9e87-fbf4ac660aa8\") "
Mar 4 01:39:16.859417 kubelet[2723]: I0304 01:39:16.859334 2723 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ab2081b0-842a-4e4a-9e87-fbf4ac660aa8-cilium-cgroup\") pod \"ab2081b0-842a-4e4a-9e87-fbf4ac660aa8\" (UID: \"ab2081b0-842a-4e4a-9e87-fbf4ac660aa8\") "
Mar 4 01:39:16.860168 kubelet[2723]: I0304 01:39:16.859359 2723 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ab2081b0-842a-4e4a-9e87-fbf4ac660aa8-cni-path\") pod \"ab2081b0-842a-4e4a-9e87-fbf4ac660aa8\" (UID: \"ab2081b0-842a-4e4a-9e87-fbf4ac660aa8\") "
Mar 4 01:39:16.860168 kubelet[2723]: I0304 01:39:16.859414 2723 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ab2081b0-842a-4e4a-9e87-fbf4ac660aa8-host-proc-sys-net\") pod \"ab2081b0-842a-4e4a-9e87-fbf4ac660aa8\" (UID: \"ab2081b0-842a-4e4a-9e87-fbf4ac660aa8\") "
Mar 4 01:39:16.860168 kubelet[2723]: I0304 01:39:16.859452 2723 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ab2081b0-842a-4e4a-9e87-fbf4ac660aa8-host-proc-sys-kernel\") pod \"ab2081b0-842a-4e4a-9e87-fbf4ac660aa8\" (UID: \"ab2081b0-842a-4e4a-9e87-fbf4ac660aa8\") "
Mar 4 01:39:16.860168 kubelet[2723]: I0304 01:39:16.859492 2723 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ab2081b0-842a-4e4a-9e87-fbf4ac660aa8-cilium-config-path\") pod \"ab2081b0-842a-4e4a-9e87-fbf4ac660aa8\" (UID: \"ab2081b0-842a-4e4a-9e87-fbf4ac660aa8\") "
Mar 4 01:39:16.860168 kubelet[2723]: I0304 01:39:16.859522 2723 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j5cdm\" (UniqueName: \"kubernetes.io/projected/ab2081b0-842a-4e4a-9e87-fbf4ac660aa8-kube-api-access-j5cdm\") pod \"ab2081b0-842a-4e4a-9e87-fbf4ac660aa8\" (UID: \"ab2081b0-842a-4e4a-9e87-fbf4ac660aa8\") "
Mar 4 01:39:16.860168 kubelet[2723]: I0304 01:39:16.859561 2723 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ab2081b0-842a-4e4a-9e87-fbf4ac660aa8-hostproc\") pod \"ab2081b0-842a-4e4a-9e87-fbf4ac660aa8\" (UID: \"ab2081b0-842a-4e4a-9e87-fbf4ac660aa8\") "
Mar 4 01:39:16.860491 kubelet[2723]: I0304 01:39:16.859589 2723 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ab2081b0-842a-4e4a-9e87-fbf4ac660aa8-etc-cni-netd\") pod \"ab2081b0-842a-4e4a-9e87-fbf4ac660aa8\" (UID: \"ab2081b0-842a-4e4a-9e87-fbf4ac660aa8\") "
Mar 4 01:39:16.860491 kubelet[2723]: I0304 01:39:16.859622 2723 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ab2081b0-842a-4e4a-9e87-fbf4ac660aa8-bpf-maps\") pod \"ab2081b0-842a-4e4a-9e87-fbf4ac660aa8\" (UID: \"ab2081b0-842a-4e4a-9e87-fbf4ac660aa8\") "
Mar 4 01:39:16.860491 kubelet[2723]: I0304 01:39:16.859658 2723 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ab2081b0-842a-4e4a-9e87-fbf4ac660aa8-clustermesh-secrets\") pod \"ab2081b0-842a-4e4a-9e87-fbf4ac660aa8\" (UID: \"ab2081b0-842a-4e4a-9e87-fbf4ac660aa8\") "
Mar 4 01:39:16.862394 kubelet[2723]: I0304 01:39:16.861096 2723 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zj8bx\" (UniqueName: \"kubernetes.io/projected/144fd3d6-851b-491a-baeb-e95e71ed270e-kube-api-access-zj8bx\") on node \"srv-g1uyu.gb1.brightbox.com\" DevicePath \"\""
Mar 4 01:39:16.862394 kubelet[2723]: I0304 01:39:16.861134 2723 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ab2081b0-842a-4e4a-9e87-fbf4ac660aa8-xtables-lock\") on node \"srv-g1uyu.gb1.brightbox.com\" DevicePath \"\""
Mar 4 01:39:16.862394 kubelet[2723]: I0304 01:39:16.861166 2723 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/144fd3d6-851b-491a-baeb-e95e71ed270e-cilium-config-path\") on node \"srv-g1uyu.gb1.brightbox.com\" DevicePath \"\""
Mar 4 01:39:16.862394 kubelet[2723]: I0304 01:39:16.861669 2723 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ab2081b0-842a-4e4a-9e87-fbf4ac660aa8-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "ab2081b0-842a-4e4a-9e87-fbf4ac660aa8" (UID: "ab2081b0-842a-4e4a-9e87-fbf4ac660aa8"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 4 01:39:16.862394 kubelet[2723]: I0304 01:39:16.861725 2723 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ab2081b0-842a-4e4a-9e87-fbf4ac660aa8-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "ab2081b0-842a-4e4a-9e87-fbf4ac660aa8" (UID: "ab2081b0-842a-4e4a-9e87-fbf4ac660aa8"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 4 01:39:16.862394 kubelet[2723]: I0304 01:39:16.861763 2723 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ab2081b0-842a-4e4a-9e87-fbf4ac660aa8-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "ab2081b0-842a-4e4a-9e87-fbf4ac660aa8" (UID: "ab2081b0-842a-4e4a-9e87-fbf4ac660aa8"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 4 01:39:16.862753 kubelet[2723]: I0304 01:39:16.861791 2723 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ab2081b0-842a-4e4a-9e87-fbf4ac660aa8-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "ab2081b0-842a-4e4a-9e87-fbf4ac660aa8" (UID: "ab2081b0-842a-4e4a-9e87-fbf4ac660aa8"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 4 01:39:16.862753 kubelet[2723]: I0304 01:39:16.861819 2723 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ab2081b0-842a-4e4a-9e87-fbf4ac660aa8-cni-path" (OuterVolumeSpecName: "cni-path") pod "ab2081b0-842a-4e4a-9e87-fbf4ac660aa8" (UID: "ab2081b0-842a-4e4a-9e87-fbf4ac660aa8"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 4 01:39:16.862753 kubelet[2723]: I0304 01:39:16.861844 2723 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ab2081b0-842a-4e4a-9e87-fbf4ac660aa8-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "ab2081b0-842a-4e4a-9e87-fbf4ac660aa8" (UID: "ab2081b0-842a-4e4a-9e87-fbf4ac660aa8"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 4 01:39:16.862753 kubelet[2723]: I0304 01:39:16.861872 2723 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ab2081b0-842a-4e4a-9e87-fbf4ac660aa8-hostproc" (OuterVolumeSpecName: "hostproc") pod "ab2081b0-842a-4e4a-9e87-fbf4ac660aa8" (UID: "ab2081b0-842a-4e4a-9e87-fbf4ac660aa8"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 4 01:39:16.864574 kubelet[2723]: I0304 01:39:16.864538 2723 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ab2081b0-842a-4e4a-9e87-fbf4ac660aa8-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "ab2081b0-842a-4e4a-9e87-fbf4ac660aa8" (UID: "ab2081b0-842a-4e4a-9e87-fbf4ac660aa8"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 4 01:39:16.864762 kubelet[2723]: I0304 01:39:16.864708 2723 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ab2081b0-842a-4e4a-9e87-fbf4ac660aa8-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "ab2081b0-842a-4e4a-9e87-fbf4ac660aa8" (UID: "ab2081b0-842a-4e4a-9e87-fbf4ac660aa8"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 4 01:39:16.866334 kubelet[2723]: I0304 01:39:16.866300 2723 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab2081b0-842a-4e4a-9e87-fbf4ac660aa8-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "ab2081b0-842a-4e4a-9e87-fbf4ac660aa8" (UID: "ab2081b0-842a-4e4a-9e87-fbf4ac660aa8"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Mar 4 01:39:16.868875 kubelet[2723]: I0304 01:39:16.868805 2723 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab2081b0-842a-4e4a-9e87-fbf4ac660aa8-kube-api-access-j5cdm" (OuterVolumeSpecName: "kube-api-access-j5cdm") pod "ab2081b0-842a-4e4a-9e87-fbf4ac660aa8" (UID: "ab2081b0-842a-4e4a-9e87-fbf4ac660aa8"). InnerVolumeSpecName "kube-api-access-j5cdm". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 4 01:39:16.869601 kubelet[2723]: I0304 01:39:16.869506 2723 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab2081b0-842a-4e4a-9e87-fbf4ac660aa8-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "ab2081b0-842a-4e4a-9e87-fbf4ac660aa8" (UID: "ab2081b0-842a-4e4a-9e87-fbf4ac660aa8"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 4 01:39:16.870085 kubelet[2723]: I0304 01:39:16.870010 2723 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab2081b0-842a-4e4a-9e87-fbf4ac660aa8-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "ab2081b0-842a-4e4a-9e87-fbf4ac660aa8" (UID: "ab2081b0-842a-4e4a-9e87-fbf4ac660aa8"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Mar 4 01:39:16.961781 kubelet[2723]: I0304 01:39:16.961352 2723 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ab2081b0-842a-4e4a-9e87-fbf4ac660aa8-cni-path\") on node \"srv-g1uyu.gb1.brightbox.com\" DevicePath \"\""
Mar 4 01:39:16.961781 kubelet[2723]: I0304 01:39:16.961466 2723 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ab2081b0-842a-4e4a-9e87-fbf4ac660aa8-host-proc-sys-net\") on node \"srv-g1uyu.gb1.brightbox.com\" DevicePath \"\""
Mar 4 01:39:16.961781 kubelet[2723]: I0304 01:39:16.961486 2723 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ab2081b0-842a-4e4a-9e87-fbf4ac660aa8-host-proc-sys-kernel\") on node \"srv-g1uyu.gb1.brightbox.com\" DevicePath \"\""
Mar 4 01:39:16.961781 kubelet[2723]: I0304 01:39:16.961504 2723 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ab2081b0-842a-4e4a-9e87-fbf4ac660aa8-cilium-config-path\") on node \"srv-g1uyu.gb1.brightbox.com\" DevicePath \"\""
Mar 4 01:39:16.961781 kubelet[2723]: I0304 01:39:16.961522 2723 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-j5cdm\" (UniqueName: \"kubernetes.io/projected/ab2081b0-842a-4e4a-9e87-fbf4ac660aa8-kube-api-access-j5cdm\") on node \"srv-g1uyu.gb1.brightbox.com\" DevicePath \"\""
Mar 4 01:39:16.961781 kubelet[2723]: I0304 01:39:16.961537 2723 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ab2081b0-842a-4e4a-9e87-fbf4ac660aa8-hostproc\") on node \"srv-g1uyu.gb1.brightbox.com\" DevicePath \"\""
Mar 4 01:39:16.961781 kubelet[2723]: I0304 01:39:16.961551 2723 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName:
\"kubernetes.io/host-path/ab2081b0-842a-4e4a-9e87-fbf4ac660aa8-etc-cni-netd\") on node \"srv-g1uyu.gb1.brightbox.com\" DevicePath \"\"" Mar 4 01:39:16.961781 kubelet[2723]: I0304 01:39:16.961565 2723 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ab2081b0-842a-4e4a-9e87-fbf4ac660aa8-bpf-maps\") on node \"srv-g1uyu.gb1.brightbox.com\" DevicePath \"\"" Mar 4 01:39:16.962346 kubelet[2723]: I0304 01:39:16.961579 2723 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ab2081b0-842a-4e4a-9e87-fbf4ac660aa8-clustermesh-secrets\") on node \"srv-g1uyu.gb1.brightbox.com\" DevicePath \"\"" Mar 4 01:39:16.962346 kubelet[2723]: I0304 01:39:16.961593 2723 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ab2081b0-842a-4e4a-9e87-fbf4ac660aa8-cilium-run\") on node \"srv-g1uyu.gb1.brightbox.com\" DevicePath \"\"" Mar 4 01:39:16.962346 kubelet[2723]: I0304 01:39:16.961608 2723 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ab2081b0-842a-4e4a-9e87-fbf4ac660aa8-hubble-tls\") on node \"srv-g1uyu.gb1.brightbox.com\" DevicePath \"\"" Mar 4 01:39:16.962346 kubelet[2723]: I0304 01:39:16.961622 2723 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ab2081b0-842a-4e4a-9e87-fbf4ac660aa8-lib-modules\") on node \"srv-g1uyu.gb1.brightbox.com\" DevicePath \"\"" Mar 4 01:39:16.962346 kubelet[2723]: I0304 01:39:16.961639 2723 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ab2081b0-842a-4e4a-9e87-fbf4ac660aa8-cilium-cgroup\") on node \"srv-g1uyu.gb1.brightbox.com\" DevicePath \"\"" Mar 4 01:39:17.007919 systemd[1]: Removed slice kubepods-besteffort-pod144fd3d6_851b_491a_baeb_e95e71ed270e.slice - libcontainer container 
kubepods-besteffort-pod144fd3d6_851b_491a_baeb_e95e71ed270e.slice. Mar 4 01:39:17.024452 kubelet[2723]: I0304 01:39:17.017405 2723 scope.go:117] "RemoveContainer" containerID="ab53b79a3646f3addd0ae7522fcc65f7da84fe33597ac6cf34bd755e68e3b2f4" Mar 4 01:39:17.030284 containerd[1511]: time="2026-03-04T01:39:17.029614931Z" level=info msg="RemoveContainer for \"ab53b79a3646f3addd0ae7522fcc65f7da84fe33597ac6cf34bd755e68e3b2f4\"" Mar 4 01:39:17.037754 systemd[1]: Removed slice kubepods-burstable-podab2081b0_842a_4e4a_9e87_fbf4ac660aa8.slice - libcontainer container kubepods-burstable-podab2081b0_842a_4e4a_9e87_fbf4ac660aa8.slice. Mar 4 01:39:17.037918 systemd[1]: kubepods-burstable-podab2081b0_842a_4e4a_9e87_fbf4ac660aa8.slice: Consumed 10.576s CPU time. Mar 4 01:39:17.040949 containerd[1511]: time="2026-03-04T01:39:17.040462020Z" level=info msg="RemoveContainer for \"ab53b79a3646f3addd0ae7522fcc65f7da84fe33597ac6cf34bd755e68e3b2f4\" returns successfully" Mar 4 01:39:17.050183 kubelet[2723]: I0304 01:39:17.049886 2723 scope.go:117] "RemoveContainer" containerID="ab53b79a3646f3addd0ae7522fcc65f7da84fe33597ac6cf34bd755e68e3b2f4" Mar 4 01:39:17.062493 containerd[1511]: time="2026-03-04T01:39:17.053946471Z" level=error msg="ContainerStatus for \"ab53b79a3646f3addd0ae7522fcc65f7da84fe33597ac6cf34bd755e68e3b2f4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ab53b79a3646f3addd0ae7522fcc65f7da84fe33597ac6cf34bd755e68e3b2f4\": not found" Mar 4 01:39:17.071832 kubelet[2723]: E0304 01:39:17.071631 2723 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ab53b79a3646f3addd0ae7522fcc65f7da84fe33597ac6cf34bd755e68e3b2f4\": not found" containerID="ab53b79a3646f3addd0ae7522fcc65f7da84fe33597ac6cf34bd755e68e3b2f4" Mar 4 01:39:17.083272 kubelet[2723]: I0304 01:39:17.071729 2723 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"ab53b79a3646f3addd0ae7522fcc65f7da84fe33597ac6cf34bd755e68e3b2f4"} err="failed to get container status \"ab53b79a3646f3addd0ae7522fcc65f7da84fe33597ac6cf34bd755e68e3b2f4\": rpc error: code = NotFound desc = an error occurred when try to find container \"ab53b79a3646f3addd0ae7522fcc65f7da84fe33597ac6cf34bd755e68e3b2f4\": not found" Mar 4 01:39:17.083272 kubelet[2723]: I0304 01:39:17.083147 2723 scope.go:117] "RemoveContainer" containerID="867a55202450012d71a2fbaccd9a14de867da352c80f64cf94c15c2d94512040" Mar 4 01:39:17.088184 containerd[1511]: time="2026-03-04T01:39:17.088119003Z" level=info msg="RemoveContainer for \"867a55202450012d71a2fbaccd9a14de867da352c80f64cf94c15c2d94512040\"" Mar 4 01:39:17.092409 containerd[1511]: time="2026-03-04T01:39:17.092095750Z" level=info msg="RemoveContainer for \"867a55202450012d71a2fbaccd9a14de867da352c80f64cf94c15c2d94512040\" returns successfully" Mar 4 01:39:17.092511 kubelet[2723]: I0304 01:39:17.092263 2723 scope.go:117] "RemoveContainer" containerID="f9b75e55e69682593db323e9789c2697c3247f553a133a06205d64c18de674f9" Mar 4 01:39:17.096208 containerd[1511]: time="2026-03-04T01:39:17.096154819Z" level=info msg="RemoveContainer for \"f9b75e55e69682593db323e9789c2697c3247f553a133a06205d64c18de674f9\"" Mar 4 01:39:17.099715 containerd[1511]: time="2026-03-04T01:39:17.099682328Z" level=info msg="RemoveContainer for \"f9b75e55e69682593db323e9789c2697c3247f553a133a06205d64c18de674f9\" returns successfully" Mar 4 01:39:17.100128 kubelet[2723]: I0304 01:39:17.100100 2723 scope.go:117] "RemoveContainer" containerID="a90f84c9fc869192174d1c2636f8beb9192ce33a810d5e247ec775eb97e89b38" Mar 4 01:39:17.101691 containerd[1511]: time="2026-03-04T01:39:17.101652488Z" level=info msg="RemoveContainer for \"a90f84c9fc869192174d1c2636f8beb9192ce33a810d5e247ec775eb97e89b38\"" Mar 4 01:39:17.111964 containerd[1511]: time="2026-03-04T01:39:17.111923134Z" level=info msg="RemoveContainer for 
\"a90f84c9fc869192174d1c2636f8beb9192ce33a810d5e247ec775eb97e89b38\" returns successfully" Mar 4 01:39:17.112463 kubelet[2723]: I0304 01:39:17.112249 2723 scope.go:117] "RemoveContainer" containerID="ac81cc410e7bbbcddad1421e5c76d3fcb57398f9707d8fc5de65418dd1393fdd" Mar 4 01:39:17.114415 containerd[1511]: time="2026-03-04T01:39:17.114121793Z" level=info msg="RemoveContainer for \"ac81cc410e7bbbcddad1421e5c76d3fcb57398f9707d8fc5de65418dd1393fdd\"" Mar 4 01:39:17.117233 containerd[1511]: time="2026-03-04T01:39:17.117189327Z" level=info msg="RemoveContainer for \"ac81cc410e7bbbcddad1421e5c76d3fcb57398f9707d8fc5de65418dd1393fdd\" returns successfully" Mar 4 01:39:17.117419 kubelet[2723]: I0304 01:39:17.117383 2723 scope.go:117] "RemoveContainer" containerID="df4befd8f9474823acd7491d8e7a281893a78c1f35bd133e4569b98ef105ffa3" Mar 4 01:39:17.118922 containerd[1511]: time="2026-03-04T01:39:17.118890619Z" level=info msg="RemoveContainer for \"df4befd8f9474823acd7491d8e7a281893a78c1f35bd133e4569b98ef105ffa3\"" Mar 4 01:39:17.122116 containerd[1511]: time="2026-03-04T01:39:17.122084844Z" level=info msg="RemoveContainer for \"df4befd8f9474823acd7491d8e7a281893a78c1f35bd133e4569b98ef105ffa3\" returns successfully" Mar 4 01:39:17.122445 kubelet[2723]: I0304 01:39:17.122418 2723 scope.go:117] "RemoveContainer" containerID="867a55202450012d71a2fbaccd9a14de867da352c80f64cf94c15c2d94512040" Mar 4 01:39:17.122740 containerd[1511]: time="2026-03-04T01:39:17.122646454Z" level=error msg="ContainerStatus for \"867a55202450012d71a2fbaccd9a14de867da352c80f64cf94c15c2d94512040\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"867a55202450012d71a2fbaccd9a14de867da352c80f64cf94c15c2d94512040\": not found" Mar 4 01:39:17.122849 kubelet[2723]: E0304 01:39:17.122812 2723 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container 
\"867a55202450012d71a2fbaccd9a14de867da352c80f64cf94c15c2d94512040\": not found" containerID="867a55202450012d71a2fbaccd9a14de867da352c80f64cf94c15c2d94512040" Mar 4 01:39:17.122922 kubelet[2723]: I0304 01:39:17.122855 2723 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"867a55202450012d71a2fbaccd9a14de867da352c80f64cf94c15c2d94512040"} err="failed to get container status \"867a55202450012d71a2fbaccd9a14de867da352c80f64cf94c15c2d94512040\": rpc error: code = NotFound desc = an error occurred when try to find container \"867a55202450012d71a2fbaccd9a14de867da352c80f64cf94c15c2d94512040\": not found" Mar 4 01:39:17.122922 kubelet[2723]: I0304 01:39:17.122884 2723 scope.go:117] "RemoveContainer" containerID="f9b75e55e69682593db323e9789c2697c3247f553a133a06205d64c18de674f9" Mar 4 01:39:17.123442 containerd[1511]: time="2026-03-04T01:39:17.123267003Z" level=error msg="ContainerStatus for \"f9b75e55e69682593db323e9789c2697c3247f553a133a06205d64c18de674f9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f9b75e55e69682593db323e9789c2697c3247f553a133a06205d64c18de674f9\": not found" Mar 4 01:39:17.123561 kubelet[2723]: E0304 01:39:17.123523 2723 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f9b75e55e69682593db323e9789c2697c3247f553a133a06205d64c18de674f9\": not found" containerID="f9b75e55e69682593db323e9789c2697c3247f553a133a06205d64c18de674f9" Mar 4 01:39:17.123653 kubelet[2723]: I0304 01:39:17.123565 2723 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f9b75e55e69682593db323e9789c2697c3247f553a133a06205d64c18de674f9"} err="failed to get container status \"f9b75e55e69682593db323e9789c2697c3247f553a133a06205d64c18de674f9\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"f9b75e55e69682593db323e9789c2697c3247f553a133a06205d64c18de674f9\": not found" Mar 4 01:39:17.123653 kubelet[2723]: I0304 01:39:17.123587 2723 scope.go:117] "RemoveContainer" containerID="a90f84c9fc869192174d1c2636f8beb9192ce33a810d5e247ec775eb97e89b38" Mar 4 01:39:17.124114 containerd[1511]: time="2026-03-04T01:39:17.123919540Z" level=error msg="ContainerStatus for \"a90f84c9fc869192174d1c2636f8beb9192ce33a810d5e247ec775eb97e89b38\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a90f84c9fc869192174d1c2636f8beb9192ce33a810d5e247ec775eb97e89b38\": not found" Mar 4 01:39:17.124262 kubelet[2723]: E0304 01:39:17.124228 2723 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a90f84c9fc869192174d1c2636f8beb9192ce33a810d5e247ec775eb97e89b38\": not found" containerID="a90f84c9fc869192174d1c2636f8beb9192ce33a810d5e247ec775eb97e89b38" Mar 4 01:39:17.124324 kubelet[2723]: I0304 01:39:17.124287 2723 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a90f84c9fc869192174d1c2636f8beb9192ce33a810d5e247ec775eb97e89b38"} err="failed to get container status \"a90f84c9fc869192174d1c2636f8beb9192ce33a810d5e247ec775eb97e89b38\": rpc error: code = NotFound desc = an error occurred when try to find container \"a90f84c9fc869192174d1c2636f8beb9192ce33a810d5e247ec775eb97e89b38\": not found" Mar 4 01:39:17.124324 kubelet[2723]: I0304 01:39:17.124314 2723 scope.go:117] "RemoveContainer" containerID="ac81cc410e7bbbcddad1421e5c76d3fcb57398f9707d8fc5de65418dd1393fdd" Mar 4 01:39:17.124608 containerd[1511]: time="2026-03-04T01:39:17.124567087Z" level=error msg="ContainerStatus for \"ac81cc410e7bbbcddad1421e5c76d3fcb57398f9707d8fc5de65418dd1393fdd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"ac81cc410e7bbbcddad1421e5c76d3fcb57398f9707d8fc5de65418dd1393fdd\": not found" Mar 4 01:39:17.124851 kubelet[2723]: E0304 01:39:17.124822 2723 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ac81cc410e7bbbcddad1421e5c76d3fcb57398f9707d8fc5de65418dd1393fdd\": not found" containerID="ac81cc410e7bbbcddad1421e5c76d3fcb57398f9707d8fc5de65418dd1393fdd" Mar 4 01:39:17.125101 kubelet[2723]: I0304 01:39:17.124961 2723 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ac81cc410e7bbbcddad1421e5c76d3fcb57398f9707d8fc5de65418dd1393fdd"} err="failed to get container status \"ac81cc410e7bbbcddad1421e5c76d3fcb57398f9707d8fc5de65418dd1393fdd\": rpc error: code = NotFound desc = an error occurred when try to find container \"ac81cc410e7bbbcddad1421e5c76d3fcb57398f9707d8fc5de65418dd1393fdd\": not found" Mar 4 01:39:17.125101 kubelet[2723]: I0304 01:39:17.124997 2723 scope.go:117] "RemoveContainer" containerID="df4befd8f9474823acd7491d8e7a281893a78c1f35bd133e4569b98ef105ffa3" Mar 4 01:39:17.125497 containerd[1511]: time="2026-03-04T01:39:17.125357208Z" level=error msg="ContainerStatus for \"df4befd8f9474823acd7491d8e7a281893a78c1f35bd133e4569b98ef105ffa3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"df4befd8f9474823acd7491d8e7a281893a78c1f35bd133e4569b98ef105ffa3\": not found" Mar 4 01:39:17.125627 kubelet[2723]: E0304 01:39:17.125584 2723 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"df4befd8f9474823acd7491d8e7a281893a78c1f35bd133e4569b98ef105ffa3\": not found" containerID="df4befd8f9474823acd7491d8e7a281893a78c1f35bd133e4569b98ef105ffa3" Mar 4 01:39:17.125627 kubelet[2723]: I0304 01:39:17.125612 2723 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"df4befd8f9474823acd7491d8e7a281893a78c1f35bd133e4569b98ef105ffa3"} err="failed to get container status \"df4befd8f9474823acd7491d8e7a281893a78c1f35bd133e4569b98ef105ffa3\": rpc error: code = NotFound desc = an error occurred when try to find container \"df4befd8f9474823acd7491d8e7a281893a78c1f35bd133e4569b98ef105ffa3\": not found" Mar 4 01:39:17.452927 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-81038f357f34e963d0c31f3295e02db3494cca623068eef8402de4857d67cecd-rootfs.mount: Deactivated successfully. Mar 4 01:39:17.453132 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-db35b47098b64e16163fcfca42a16eb39f8ab54796b2750f13ca796d10ab17f5-rootfs.mount: Deactivated successfully. Mar 4 01:39:17.453257 systemd[1]: var-lib-kubelet-pods-144fd3d6\x2d851b\x2d491a\x2dbaeb\x2de95e71ed270e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dzj8bx.mount: Deactivated successfully. Mar 4 01:39:17.453422 systemd[1]: var-lib-kubelet-pods-ab2081b0\x2d842a\x2d4e4a\x2d9e87\x2dfbf4ac660aa8-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Mar 4 01:39:17.453566 systemd[1]: var-lib-kubelet-pods-ab2081b0\x2d842a\x2d4e4a\x2d9e87\x2dfbf4ac660aa8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dj5cdm.mount: Deactivated successfully. Mar 4 01:39:17.453699 systemd[1]: var-lib-kubelet-pods-ab2081b0\x2d842a\x2d4e4a\x2d9e87\x2dfbf4ac660aa8-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Mar 4 01:39:17.529093 kubelet[2723]: I0304 01:39:17.529026 2723 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="144fd3d6-851b-491a-baeb-e95e71ed270e" path="/var/lib/kubelet/pods/144fd3d6-851b-491a-baeb-e95e71ed270e/volumes" Mar 4 01:39:17.530074 kubelet[2723]: I0304 01:39:17.530044 2723 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab2081b0-842a-4e4a-9e87-fbf4ac660aa8" path="/var/lib/kubelet/pods/ab2081b0-842a-4e4a-9e87-fbf4ac660aa8/volumes" Mar 4 01:39:18.416150 sshd[4310]: pam_unix(sshd:session): session closed for user core Mar 4 01:39:18.421809 systemd[1]: sshd@31-10.230.15.118:22-20.161.92.111:34016.service: Deactivated successfully. Mar 4 01:39:18.425647 systemd[1]: session-25.scope: Deactivated successfully. Mar 4 01:39:18.426213 systemd[1]: session-25.scope: Consumed 1.118s CPU time. Mar 4 01:39:18.428707 systemd-logind[1487]: Session 25 logged out. Waiting for processes to exit. Mar 4 01:39:18.430883 systemd-logind[1487]: Removed session 25. Mar 4 01:39:18.528696 systemd[1]: Started sshd@32-10.230.15.118:22-20.161.92.111:34028.service - OpenSSH per-connection server daemon (20.161.92.111:34028). Mar 4 01:39:19.137917 sshd[4476]: Accepted publickey for core from 20.161.92.111 port 34028 ssh2: RSA SHA256:phL7137i5y6DHtmwXYw8sU0DtZKGvJBo2Tpr6jEeFOI Mar 4 01:39:19.140017 sshd[4476]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 01:39:19.148424 systemd-logind[1487]: New session 26 of user core. Mar 4 01:39:19.156639 systemd[1]: Started session-26.scope - Session 26 of User core. Mar 4 01:39:20.511230 systemd[1]: Created slice kubepods-burstable-podb3e8403b_f94b_4b9c_b337_0bc75585e8a5.slice - libcontainer container kubepods-burstable-podb3e8403b_f94b_4b9c_b337_0bc75585e8a5.slice. Mar 4 01:39:20.558843 sshd[4476]: pam_unix(sshd:session): session closed for user core Mar 4 01:39:20.568845 systemd-logind[1487]: Session 26 logged out. Waiting for processes to exit. 
Mar 4 01:39:20.570442 systemd[1]: sshd@32-10.230.15.118:22-20.161.92.111:34028.service: Deactivated successfully. Mar 4 01:39:20.577739 systemd[1]: session-26.scope: Deactivated successfully. Mar 4 01:39:20.582566 systemd-logind[1487]: Removed session 26. Mar 4 01:39:20.585879 kubelet[2723]: I0304 01:39:20.585821 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b3e8403b-f94b-4b9c-b337-0bc75585e8a5-hostproc\") pod \"cilium-pd7fw\" (UID: \"b3e8403b-f94b-4b9c-b337-0bc75585e8a5\") " pod="kube-system/cilium-pd7fw" Mar 4 01:39:20.586537 kubelet[2723]: I0304 01:39:20.585884 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b3e8403b-f94b-4b9c-b337-0bc75585e8a5-cilium-run\") pod \"cilium-pd7fw\" (UID: \"b3e8403b-f94b-4b9c-b337-0bc75585e8a5\") " pod="kube-system/cilium-pd7fw" Mar 4 01:39:20.586537 kubelet[2723]: I0304 01:39:20.585922 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b3e8403b-f94b-4b9c-b337-0bc75585e8a5-xtables-lock\") pod \"cilium-pd7fw\" (UID: \"b3e8403b-f94b-4b9c-b337-0bc75585e8a5\") " pod="kube-system/cilium-pd7fw" Mar 4 01:39:20.586537 kubelet[2723]: I0304 01:39:20.585947 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b3e8403b-f94b-4b9c-b337-0bc75585e8a5-clustermesh-secrets\") pod \"cilium-pd7fw\" (UID: \"b3e8403b-f94b-4b9c-b337-0bc75585e8a5\") " pod="kube-system/cilium-pd7fw" Mar 4 01:39:20.586537 kubelet[2723]: I0304 01:39:20.585971 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/b3e8403b-f94b-4b9c-b337-0bc75585e8a5-cilium-cgroup\") pod \"cilium-pd7fw\" (UID: \"b3e8403b-f94b-4b9c-b337-0bc75585e8a5\") " pod="kube-system/cilium-pd7fw" Mar 4 01:39:20.586537 kubelet[2723]: I0304 01:39:20.586008 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b3e8403b-f94b-4b9c-b337-0bc75585e8a5-cilium-config-path\") pod \"cilium-pd7fw\" (UID: \"b3e8403b-f94b-4b9c-b337-0bc75585e8a5\") " pod="kube-system/cilium-pd7fw" Mar 4 01:39:20.586537 kubelet[2723]: I0304 01:39:20.586035 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dw5gq\" (UniqueName: \"kubernetes.io/projected/b3e8403b-f94b-4b9c-b337-0bc75585e8a5-kube-api-access-dw5gq\") pod \"cilium-pd7fw\" (UID: \"b3e8403b-f94b-4b9c-b337-0bc75585e8a5\") " pod="kube-system/cilium-pd7fw" Mar 4 01:39:20.587115 kubelet[2723]: I0304 01:39:20.586104 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b3e8403b-f94b-4b9c-b337-0bc75585e8a5-cni-path\") pod \"cilium-pd7fw\" (UID: \"b3e8403b-f94b-4b9c-b337-0bc75585e8a5\") " pod="kube-system/cilium-pd7fw" Mar 4 01:39:20.587115 kubelet[2723]: I0304 01:39:20.586133 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b3e8403b-f94b-4b9c-b337-0bc75585e8a5-host-proc-sys-net\") pod \"cilium-pd7fw\" (UID: \"b3e8403b-f94b-4b9c-b337-0bc75585e8a5\") " pod="kube-system/cilium-pd7fw" Mar 4 01:39:20.587115 kubelet[2723]: I0304 01:39:20.586159 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b3e8403b-f94b-4b9c-b337-0bc75585e8a5-bpf-maps\") pod \"cilium-pd7fw\" (UID: 
\"b3e8403b-f94b-4b9c-b337-0bc75585e8a5\") " pod="kube-system/cilium-pd7fw" Mar 4 01:39:20.587115 kubelet[2723]: I0304 01:39:20.586189 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b3e8403b-f94b-4b9c-b337-0bc75585e8a5-hubble-tls\") pod \"cilium-pd7fw\" (UID: \"b3e8403b-f94b-4b9c-b337-0bc75585e8a5\") " pod="kube-system/cilium-pd7fw" Mar 4 01:39:20.587115 kubelet[2723]: I0304 01:39:20.586218 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b3e8403b-f94b-4b9c-b337-0bc75585e8a5-etc-cni-netd\") pod \"cilium-pd7fw\" (UID: \"b3e8403b-f94b-4b9c-b337-0bc75585e8a5\") " pod="kube-system/cilium-pd7fw" Mar 4 01:39:20.587115 kubelet[2723]: I0304 01:39:20.586249 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/b3e8403b-f94b-4b9c-b337-0bc75585e8a5-cilium-ipsec-secrets\") pod \"cilium-pd7fw\" (UID: \"b3e8403b-f94b-4b9c-b337-0bc75585e8a5\") " pod="kube-system/cilium-pd7fw" Mar 4 01:39:20.587464 kubelet[2723]: I0304 01:39:20.586276 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b3e8403b-f94b-4b9c-b337-0bc75585e8a5-lib-modules\") pod \"cilium-pd7fw\" (UID: \"b3e8403b-f94b-4b9c-b337-0bc75585e8a5\") " pod="kube-system/cilium-pd7fw" Mar 4 01:39:20.587464 kubelet[2723]: I0304 01:39:20.586301 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b3e8403b-f94b-4b9c-b337-0bc75585e8a5-host-proc-sys-kernel\") pod \"cilium-pd7fw\" (UID: \"b3e8403b-f94b-4b9c-b337-0bc75585e8a5\") " pod="kube-system/cilium-pd7fw" Mar 4 01:39:20.657244 systemd[1]: Started 
sshd@33-10.230.15.118:22-20.161.92.111:56308.service - OpenSSH per-connection server daemon (20.161.92.111:56308). Mar 4 01:39:20.820309 containerd[1511]: time="2026-03-04T01:39:20.819499787Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pd7fw,Uid:b3e8403b-f94b-4b9c-b337-0bc75585e8a5,Namespace:kube-system,Attempt:0,}" Mar 4 01:39:20.853020 containerd[1511]: time="2026-03-04T01:39:20.852454851Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 4 01:39:20.853020 containerd[1511]: time="2026-03-04T01:39:20.852553167Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 4 01:39:20.853020 containerd[1511]: time="2026-03-04T01:39:20.852576977Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:39:20.853020 containerd[1511]: time="2026-03-04T01:39:20.852751642Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:39:20.883629 systemd[1]: Started cri-containerd-561ed967cf284c242164fb53044febbcee088f225a513487d10740349554b4ef.scope - libcontainer container 561ed967cf284c242164fb53044febbcee088f225a513487d10740349554b4ef. 
Mar 4 01:39:20.917818 containerd[1511]: time="2026-03-04T01:39:20.917761575Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pd7fw,Uid:b3e8403b-f94b-4b9c-b337-0bc75585e8a5,Namespace:kube-system,Attempt:0,} returns sandbox id \"561ed967cf284c242164fb53044febbcee088f225a513487d10740349554b4ef\"" Mar 4 01:39:20.925353 containerd[1511]: time="2026-03-04T01:39:20.925207840Z" level=info msg="CreateContainer within sandbox \"561ed967cf284c242164fb53044febbcee088f225a513487d10740349554b4ef\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 4 01:39:20.946970 containerd[1511]: time="2026-03-04T01:39:20.946895752Z" level=info msg="CreateContainer within sandbox \"561ed967cf284c242164fb53044febbcee088f225a513487d10740349554b4ef\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"96e6e266840ef7ad0d458ccc1e09cc43c1a38fb41d3ec30a531a7833b8dd6586\"" Mar 4 01:39:20.947928 containerd[1511]: time="2026-03-04T01:39:20.947895011Z" level=info msg="StartContainer for \"96e6e266840ef7ad0d458ccc1e09cc43c1a38fb41d3ec30a531a7833b8dd6586\"" Mar 4 01:39:20.984593 systemd[1]: Started cri-containerd-96e6e266840ef7ad0d458ccc1e09cc43c1a38fb41d3ec30a531a7833b8dd6586.scope - libcontainer container 96e6e266840ef7ad0d458ccc1e09cc43c1a38fb41d3ec30a531a7833b8dd6586. Mar 4 01:39:21.025009 containerd[1511]: time="2026-03-04T01:39:21.024826801Z" level=info msg="StartContainer for \"96e6e266840ef7ad0d458ccc1e09cc43c1a38fb41d3ec30a531a7833b8dd6586\" returns successfully" Mar 4 01:39:21.027786 systemd[1]: Started sshd@34-10.230.15.118:22-191.37.78.62:44074.service - OpenSSH per-connection server daemon (191.37.78.62:44074). Mar 4 01:39:21.048932 systemd[1]: cri-containerd-96e6e266840ef7ad0d458ccc1e09cc43c1a38fb41d3ec30a531a7833b8dd6586.scope: Deactivated successfully. 
Mar 4 01:39:21.113021 containerd[1511]: time="2026-03-04T01:39:21.112615724Z" level=info msg="shim disconnected" id=96e6e266840ef7ad0d458ccc1e09cc43c1a38fb41d3ec30a531a7833b8dd6586 namespace=k8s.io
Mar 4 01:39:21.113021 containerd[1511]: time="2026-03-04T01:39:21.112711592Z" level=warning msg="cleaning up after shim disconnected" id=96e6e266840ef7ad0d458ccc1e09cc43c1a38fb41d3ec30a531a7833b8dd6586 namespace=k8s.io
Mar 4 01:39:21.113021 containerd[1511]: time="2026-03-04T01:39:21.112730756Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 4 01:39:21.240090 sshd[4488]: Accepted publickey for core from 20.161.92.111 port 56308 ssh2: RSA SHA256:phL7137i5y6DHtmwXYw8sU0DtZKGvJBo2Tpr6jEeFOI
Mar 4 01:39:21.242347 sshd[4488]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 01:39:21.250466 systemd-logind[1487]: New session 27 of user core.
Mar 4 01:39:21.264562 systemd[1]: Started session-27.scope - Session 27 of User core.
Mar 4 01:39:21.456415 containerd[1511]: time="2026-03-04T01:39:21.456043455Z" level=info msg="StopPodSandbox for \"db35b47098b64e16163fcfca42a16eb39f8ab54796b2750f13ca796d10ab17f5\""
Mar 4 01:39:21.456415 containerd[1511]: time="2026-03-04T01:39:21.456192319Z" level=info msg="TearDown network for sandbox \"db35b47098b64e16163fcfca42a16eb39f8ab54796b2750f13ca796d10ab17f5\" successfully"
Mar 4 01:39:21.456415 containerd[1511]: time="2026-03-04T01:39:21.456213576Z" level=info msg="StopPodSandbox for \"db35b47098b64e16163fcfca42a16eb39f8ab54796b2750f13ca796d10ab17f5\" returns successfully"
Mar 4 01:39:21.457145 containerd[1511]: time="2026-03-04T01:39:21.456883954Z" level=info msg="RemovePodSandbox for \"db35b47098b64e16163fcfca42a16eb39f8ab54796b2750f13ca796d10ab17f5\""
Mar 4 01:39:21.457145 containerd[1511]: time="2026-03-04T01:39:21.456941144Z" level=info msg="Forcibly stopping sandbox \"db35b47098b64e16163fcfca42a16eb39f8ab54796b2750f13ca796d10ab17f5\""
Mar 4 01:39:21.457145 containerd[1511]: time="2026-03-04T01:39:21.457018737Z" level=info msg="TearDown network for sandbox \"db35b47098b64e16163fcfca42a16eb39f8ab54796b2750f13ca796d10ab17f5\" successfully"
Mar 4 01:39:21.460568 containerd[1511]: time="2026-03-04T01:39:21.460527691Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"db35b47098b64e16163fcfca42a16eb39f8ab54796b2750f13ca796d10ab17f5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 4 01:39:21.460772 containerd[1511]: time="2026-03-04T01:39:21.460587100Z" level=info msg="RemovePodSandbox \"db35b47098b64e16163fcfca42a16eb39f8ab54796b2750f13ca796d10ab17f5\" returns successfully"
Mar 4 01:39:21.461458 containerd[1511]: time="2026-03-04T01:39:21.461229007Z" level=info msg="StopPodSandbox for \"81038f357f34e963d0c31f3295e02db3494cca623068eef8402de4857d67cecd\""
Mar 4 01:39:21.461458 containerd[1511]: time="2026-03-04T01:39:21.461318222Z" level=info msg="TearDown network for sandbox \"81038f357f34e963d0c31f3295e02db3494cca623068eef8402de4857d67cecd\" successfully"
Mar 4 01:39:21.461458 containerd[1511]: time="2026-03-04T01:39:21.461338197Z" level=info msg="StopPodSandbox for \"81038f357f34e963d0c31f3295e02db3494cca623068eef8402de4857d67cecd\" returns successfully"
Mar 4 01:39:21.463192 containerd[1511]: time="2026-03-04T01:39:21.462080934Z" level=info msg="RemovePodSandbox for \"81038f357f34e963d0c31f3295e02db3494cca623068eef8402de4857d67cecd\""
Mar 4 01:39:21.463192 containerd[1511]: time="2026-03-04T01:39:21.462114796Z" level=info msg="Forcibly stopping sandbox \"81038f357f34e963d0c31f3295e02db3494cca623068eef8402de4857d67cecd\""
Mar 4 01:39:21.463192 containerd[1511]: time="2026-03-04T01:39:21.462180195Z" level=info msg="TearDown network for sandbox \"81038f357f34e963d0c31f3295e02db3494cca623068eef8402de4857d67cecd\" successfully"
Mar 4 01:39:21.465530 containerd[1511]: time="2026-03-04T01:39:21.465497199Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"81038f357f34e963d0c31f3295e02db3494cca623068eef8402de4857d67cecd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 4 01:39:21.465688 containerd[1511]: time="2026-03-04T01:39:21.465660082Z" level=info msg="RemovePodSandbox \"81038f357f34e963d0c31f3295e02db3494cca623068eef8402de4857d67cecd\" returns successfully"
Mar 4 01:39:21.641632 sshd[4488]: pam_unix(sshd:session): session closed for user core
Mar 4 01:39:21.645752 systemd[1]: sshd@33-10.230.15.118:22-20.161.92.111:56308.service: Deactivated successfully.
Mar 4 01:39:21.648278 systemd[1]: session-27.scope: Deactivated successfully.
Mar 4 01:39:21.650728 systemd-logind[1487]: Session 27 logged out. Waiting for processes to exit.
Mar 4 01:39:21.652278 systemd-logind[1487]: Removed session 27.
Mar 4 01:39:21.659049 kubelet[2723]: E0304 01:39:21.658962 2723 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Mar 4 01:39:21.753716 systemd[1]: Started sshd@35-10.230.15.118:22-20.161.92.111:56318.service - OpenSSH per-connection server daemon (20.161.92.111:56318).
Mar 4 01:39:22.063639 containerd[1511]: time="2026-03-04T01:39:22.063293130Z" level=info msg="CreateContainer within sandbox \"561ed967cf284c242164fb53044febbcee088f225a513487d10740349554b4ef\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Mar 4 01:39:22.091862 containerd[1511]: time="2026-03-04T01:39:22.091811540Z" level=info msg="CreateContainer within sandbox \"561ed967cf284c242164fb53044febbcee088f225a513487d10740349554b4ef\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"394c55d0450375dfac3f95d5d3b08d5d824dd379fee8254faa1ad6ce4224b0d2\""
Mar 4 01:39:22.093267 containerd[1511]: time="2026-03-04T01:39:22.092979264Z" level=info msg="StartContainer for \"394c55d0450375dfac3f95d5d3b08d5d824dd379fee8254faa1ad6ce4224b0d2\""
Mar 4 01:39:22.137623 systemd[1]: Started cri-containerd-394c55d0450375dfac3f95d5d3b08d5d824dd379fee8254faa1ad6ce4224b0d2.scope - libcontainer container 394c55d0450375dfac3f95d5d3b08d5d824dd379fee8254faa1ad6ce4224b0d2.
Mar 4 01:39:22.179556 containerd[1511]: time="2026-03-04T01:39:22.179496833Z" level=info msg="StartContainer for \"394c55d0450375dfac3f95d5d3b08d5d824dd379fee8254faa1ad6ce4224b0d2\" returns successfully"
Mar 4 01:39:22.195247 systemd[1]: cri-containerd-394c55d0450375dfac3f95d5d3b08d5d824dd379fee8254faa1ad6ce4224b0d2.scope: Deactivated successfully.
Mar 4 01:39:22.224736 containerd[1511]: time="2026-03-04T01:39:22.224602652Z" level=info msg="shim disconnected" id=394c55d0450375dfac3f95d5d3b08d5d824dd379fee8254faa1ad6ce4224b0d2 namespace=k8s.io
Mar 4 01:39:22.224736 containerd[1511]: time="2026-03-04T01:39:22.224688147Z" level=warning msg="cleaning up after shim disconnected" id=394c55d0450375dfac3f95d5d3b08d5d824dd379fee8254faa1ad6ce4224b0d2 namespace=k8s.io
Mar 4 01:39:22.224736 containerd[1511]: time="2026-03-04T01:39:22.224706266Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 4 01:39:22.358177 sshd[4566]: Received disconnect from 191.37.78.62 port 44074:11: Bye Bye [preauth]
Mar 4 01:39:22.358177 sshd[4566]: Disconnected from authenticating user root 191.37.78.62 port 44074 [preauth]
Mar 4 01:39:22.359933 systemd[1]: sshd@34-10.230.15.118:22-191.37.78.62:44074.service: Deactivated successfully.
Mar 4 01:39:22.368975 sshd[4607]: Accepted publickey for core from 20.161.92.111 port 56318 ssh2: RSA SHA256:phL7137i5y6DHtmwXYw8sU0DtZKGvJBo2Tpr6jEeFOI
Mar 4 01:39:22.371063 sshd[4607]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 01:39:22.377417 systemd-logind[1487]: New session 28 of user core.
Mar 4 01:39:22.385571 systemd[1]: Started session-28.scope - Session 28 of User core.
Mar 4 01:39:22.706192 systemd[1]: run-containerd-runc-k8s.io-394c55d0450375dfac3f95d5d3b08d5d824dd379fee8254faa1ad6ce4224b0d2-runc.WlKGzm.mount: Deactivated successfully.
Mar 4 01:39:22.706385 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-394c55d0450375dfac3f95d5d3b08d5d824dd379fee8254faa1ad6ce4224b0d2-rootfs.mount: Deactivated successfully.
Mar 4 01:39:23.083207 containerd[1511]: time="2026-03-04T01:39:23.080809243Z" level=info msg="CreateContainer within sandbox \"561ed967cf284c242164fb53044febbcee088f225a513487d10740349554b4ef\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Mar 4 01:39:23.105519 containerd[1511]: time="2026-03-04T01:39:23.104912816Z" level=info msg="CreateContainer within sandbox \"561ed967cf284c242164fb53044febbcee088f225a513487d10740349554b4ef\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"f9fadf96ba0c857215e49c347da48053d2cb440dfcc638c96bfda662d5cde004\""
Mar 4 01:39:23.105302 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount402244194.mount: Deactivated successfully.
Mar 4 01:39:23.109574 containerd[1511]: time="2026-03-04T01:39:23.109521600Z" level=info msg="StartContainer for \"f9fadf96ba0c857215e49c347da48053d2cb440dfcc638c96bfda662d5cde004\""
Mar 4 01:39:23.164610 systemd[1]: Started cri-containerd-f9fadf96ba0c857215e49c347da48053d2cb440dfcc638c96bfda662d5cde004.scope - libcontainer container f9fadf96ba0c857215e49c347da48053d2cb440dfcc638c96bfda662d5cde004.
Mar 4 01:39:23.207809 containerd[1511]: time="2026-03-04T01:39:23.207736198Z" level=info msg="StartContainer for \"f9fadf96ba0c857215e49c347da48053d2cb440dfcc638c96bfda662d5cde004\" returns successfully"
Mar 4 01:39:23.219071 systemd[1]: cri-containerd-f9fadf96ba0c857215e49c347da48053d2cb440dfcc638c96bfda662d5cde004.scope: Deactivated successfully.
Mar 4 01:39:23.253873 containerd[1511]: time="2026-03-04T01:39:23.253800479Z" level=info msg="shim disconnected" id=f9fadf96ba0c857215e49c347da48053d2cb440dfcc638c96bfda662d5cde004 namespace=k8s.io
Mar 4 01:39:23.254527 containerd[1511]: time="2026-03-04T01:39:23.254214784Z" level=warning msg="cleaning up after shim disconnected" id=f9fadf96ba0c857215e49c347da48053d2cb440dfcc638c96bfda662d5cde004 namespace=k8s.io
Mar 4 01:39:23.254527 containerd[1511]: time="2026-03-04T01:39:23.254237780Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 4 01:39:23.706731 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f9fadf96ba0c857215e49c347da48053d2cb440dfcc638c96bfda662d5cde004-rootfs.mount: Deactivated successfully.
Mar 4 01:39:23.887273 kubelet[2723]: I0304 01:39:23.885627 2723 setters.go:618] "Node became not ready" node="srv-g1uyu.gb1.brightbox.com" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-04T01:39:23Z","lastTransitionTime":"2026-03-04T01:39:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Mar 4 01:39:24.077312 containerd[1511]: time="2026-03-04T01:39:24.077245624Z" level=info msg="CreateContainer within sandbox \"561ed967cf284c242164fb53044febbcee088f225a513487d10740349554b4ef\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Mar 4 01:39:24.096732 containerd[1511]: time="2026-03-04T01:39:24.096663213Z" level=info msg="CreateContainer within sandbox \"561ed967cf284c242164fb53044febbcee088f225a513487d10740349554b4ef\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"97f699a142cd5cd4608a85dc61244d8ed040909afde1295f9d9272032e0cf1ff\""
Mar 4 01:39:24.099861 containerd[1511]: time="2026-03-04T01:39:24.098661672Z" level=info msg="StartContainer for \"97f699a142cd5cd4608a85dc61244d8ed040909afde1295f9d9272032e0cf1ff\""
Mar 4 01:39:24.160679 systemd[1]: Started cri-containerd-97f699a142cd5cd4608a85dc61244d8ed040909afde1295f9d9272032e0cf1ff.scope - libcontainer container 97f699a142cd5cd4608a85dc61244d8ed040909afde1295f9d9272032e0cf1ff.
Mar 4 01:39:24.199832 systemd[1]: cri-containerd-97f699a142cd5cd4608a85dc61244d8ed040909afde1295f9d9272032e0cf1ff.scope: Deactivated successfully.
Mar 4 01:39:24.202794 containerd[1511]: time="2026-03-04T01:39:24.202743455Z" level=info msg="StartContainer for \"97f699a142cd5cd4608a85dc61244d8ed040909afde1295f9d9272032e0cf1ff\" returns successfully"
Mar 4 01:39:24.236621 containerd[1511]: time="2026-03-04T01:39:24.236532108Z" level=info msg="shim disconnected" id=97f699a142cd5cd4608a85dc61244d8ed040909afde1295f9d9272032e0cf1ff namespace=k8s.io
Mar 4 01:39:24.236621 containerd[1511]: time="2026-03-04T01:39:24.236614170Z" level=warning msg="cleaning up after shim disconnected" id=97f699a142cd5cd4608a85dc61244d8ed040909afde1295f9d9272032e0cf1ff namespace=k8s.io
Mar 4 01:39:24.236621 containerd[1511]: time="2026-03-04T01:39:24.236631510Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 4 01:39:24.706982 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-97f699a142cd5cd4608a85dc61244d8ed040909afde1295f9d9272032e0cf1ff-rootfs.mount: Deactivated successfully.
Mar 4 01:39:25.084120 containerd[1511]: time="2026-03-04T01:39:25.083524626Z" level=info msg="CreateContainer within sandbox \"561ed967cf284c242164fb53044febbcee088f225a513487d10740349554b4ef\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 4 01:39:25.112268 containerd[1511]: time="2026-03-04T01:39:25.112072966Z" level=info msg="CreateContainer within sandbox \"561ed967cf284c242164fb53044febbcee088f225a513487d10740349554b4ef\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"4fbf21faeccb2d5163430d4075fc8e96ffd39e2409f0e56b8eb8fb99ad195485\""
Mar 4 01:39:25.114295 containerd[1511]: time="2026-03-04T01:39:25.113115052Z" level=info msg="StartContainer for \"4fbf21faeccb2d5163430d4075fc8e96ffd39e2409f0e56b8eb8fb99ad195485\""
Mar 4 01:39:25.166662 systemd[1]: Started cri-containerd-4fbf21faeccb2d5163430d4075fc8e96ffd39e2409f0e56b8eb8fb99ad195485.scope - libcontainer container 4fbf21faeccb2d5163430d4075fc8e96ffd39e2409f0e56b8eb8fb99ad195485.
Mar 4 01:39:25.214641 containerd[1511]: time="2026-03-04T01:39:25.214026910Z" level=info msg="StartContainer for \"4fbf21faeccb2d5163430d4075fc8e96ffd39e2409f0e56b8eb8fb99ad195485\" returns successfully"
Mar 4 01:39:25.421113 systemd[1]: Started sshd@36-10.230.15.118:22-202.125.94.71:53056.service - OpenSSH per-connection server daemon (202.125.94.71:53056).
Mar 4 01:39:25.635473 systemd[1]: Started sshd@37-10.230.15.118:22-103.189.208.13:41724.service - OpenSSH per-connection server daemon (103.189.208.13:41724).
Mar 4 01:39:26.001487 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Mar 4 01:39:26.115343 kubelet[2723]: I0304 01:39:26.114961 2723 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-pd7fw" podStartSLOduration=6.114894486 podStartE2EDuration="6.114894486s" podCreationTimestamp="2026-03-04 01:39:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-04 01:39:26.110211893 +0000 UTC m=+124.832366351" watchObservedRunningTime="2026-03-04 01:39:26.114894486 +0000 UTC m=+124.837048923"
Mar 4 01:39:26.636418 sshd[4859]: Received disconnect from 202.125.94.71 port 53056:11: Bye Bye [preauth]
Mar 4 01:39:26.636418 sshd[4859]: Disconnected from authenticating user root 202.125.94.71 port 53056 [preauth]
Mar 4 01:39:26.640593 systemd[1]: sshd@36-10.230.15.118:22-202.125.94.71:53056.service: Deactivated successfully.
Mar 4 01:39:26.856657 sshd[4865]: Received disconnect from 103.189.208.13 port 41724:11: Bye Bye [preauth]
Mar 4 01:39:26.856657 sshd[4865]: Disconnected from authenticating user root 103.189.208.13 port 41724 [preauth]
Mar 4 01:39:26.858243 systemd[1]: sshd@37-10.230.15.118:22-103.189.208.13:41724.service: Deactivated successfully.
Mar 4 01:39:29.796504 systemd-networkd[1421]: lxc_health: Link UP
Mar 4 01:39:29.812180 systemd-networkd[1421]: lxc_health: Gained carrier
Mar 4 01:39:30.979697 systemd-networkd[1421]: lxc_health: Gained IPv6LL
Mar 4 01:39:31.690622 systemd[1]: run-containerd-runc-k8s.io-4fbf21faeccb2d5163430d4075fc8e96ffd39e2409f0e56b8eb8fb99ad195485-runc.1PfjZT.mount: Deactivated successfully.
Mar 4 01:39:33.965616 systemd[1]: run-containerd-runc-k8s.io-4fbf21faeccb2d5163430d4075fc8e96ffd39e2409f0e56b8eb8fb99ad195485-runc.KP0krK.mount: Deactivated successfully.
Mar 4 01:39:36.228502 systemd[1]: run-containerd-runc-k8s.io-4fbf21faeccb2d5163430d4075fc8e96ffd39e2409f0e56b8eb8fb99ad195485-runc.nxRiGY.mount: Deactivated successfully.
Mar 4 01:39:36.399460 sshd[4607]: pam_unix(sshd:session): session closed for user core
Mar 4 01:39:36.407329 systemd[1]: sshd@35-10.230.15.118:22-20.161.92.111:56318.service: Deactivated successfully.
Mar 4 01:39:36.411236 systemd[1]: session-28.scope: Deactivated successfully.
Mar 4 01:39:36.414506 systemd-logind[1487]: Session 28 logged out. Waiting for processes to exit.
Mar 4 01:39:36.416861 systemd-logind[1487]: Removed session 28.
Mar 4 01:39:38.981729 systemd[1]: Started sshd@38-10.230.15.118:22-172.249.150.82:56056.service - OpenSSH per-connection server daemon (172.249.150.82:56056).