Feb 13 21:23:35.940598 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Feb 13 17:41:03 -00 2025
Feb 13 21:23:35.940654 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=015d1d9e5e601f6a4e226c935072d3d0819e7eb2da20e68715973498f21aa3fe
Feb 13 21:23:35.940665 kernel: BIOS-provided physical RAM map:
Feb 13 21:23:35.940675 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Feb 13 21:23:35.940682 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Feb 13 21:23:35.940690 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Feb 13 21:23:35.940699 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable
Feb 13 21:23:35.940706 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved
Feb 13 21:23:35.940713 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Feb 13 21:23:35.940721 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Feb 13 21:23:35.940728 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Feb 13 21:23:35.940736 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Feb 13 21:23:35.940746 kernel: NX (Execute Disable) protection: active
Feb 13 21:23:35.940754 kernel: APIC: Static calls initialized
Feb 13 21:23:35.940763 kernel: SMBIOS 2.8 present.
Feb 13 21:23:35.940772 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014
Feb 13 21:23:35.940780 kernel: Hypervisor detected: KVM
Feb 13 21:23:35.940791 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Feb 13 21:23:35.940800 kernel: kvm-clock: using sched offset of 3900194567 cycles
Feb 13 21:23:35.940809 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Feb 13 21:23:35.940818 kernel: tsc: Detected 2294.576 MHz processor
Feb 13 21:23:35.940827 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 13 21:23:35.940836 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 13 21:23:35.940845 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000
Feb 13 21:23:35.940853 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Feb 13 21:23:35.940862 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 13 21:23:35.940873 kernel: Using GB pages for direct mapping
Feb 13 21:23:35.940882 kernel: ACPI: Early table checksum verification disabled
Feb 13 21:23:35.940890 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Feb 13 21:23:35.940899 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 21:23:35.940907 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 21:23:35.940916 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 21:23:35.940924 kernel: ACPI: FACS 0x000000007FFDFD40 000040
Feb 13 21:23:35.940933 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 21:23:35.940941 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 21:23:35.940952 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 21:23:35.940961 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 21:23:35.940970 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480]
Feb 13 21:23:35.940978 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c]
Feb 13 21:23:35.940987 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f]
Feb 13 21:23:35.940999 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570]
Feb 13 21:23:35.941008 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740]
Feb 13 21:23:35.941019 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c]
Feb 13 21:23:35.941029 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4]
Feb 13 21:23:35.941038 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Feb 13 21:23:35.941047 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Feb 13 21:23:35.941056 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0
Feb 13 21:23:35.941064 kernel: SRAT: PXM 0 -> APIC 0x03 -> Node 0
Feb 13 21:23:35.941073 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0
Feb 13 21:23:35.941082 kernel: SRAT: PXM 0 -> APIC 0x05 -> Node 0
Feb 13 21:23:35.941093 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0
Feb 13 21:23:35.941102 kernel: SRAT: PXM 0 -> APIC 0x07 -> Node 0
Feb 13 21:23:35.941111 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0
Feb 13 21:23:35.941120 kernel: SRAT: PXM 0 -> APIC 0x09 -> Node 0
Feb 13 21:23:35.941129 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0
Feb 13 21:23:35.941138 kernel: SRAT: PXM 0 -> APIC 0x0b -> Node 0
Feb 13 21:23:35.941147 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0
Feb 13 21:23:35.941155 kernel: SRAT: PXM 0 -> APIC 0x0d -> Node 0
Feb 13 21:23:35.941164 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0
Feb 13 21:23:35.941175 kernel: SRAT: PXM 0 -> APIC 0x0f -> Node 0
Feb 13 21:23:35.941184 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Feb 13 21:23:35.941193 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Feb 13 21:23:35.941202 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug
Feb 13 21:23:35.941212 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00000000-0x7ffdbfff]
Feb 13 21:23:35.941221 kernel: NODE_DATA(0) allocated [mem 0x7ffd6000-0x7ffdbfff]
Feb 13 21:23:35.941230 kernel: Zone ranges:
Feb 13 21:23:35.941239 kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Feb 13 21:23:35.941247 kernel:   DMA32    [mem 0x0000000001000000-0x000000007ffdbfff]
Feb 13 21:23:35.941256 kernel:   Normal   empty
Feb 13 21:23:35.941268 kernel: Movable zone start for each node
Feb 13 21:23:35.941277 kernel: Early memory node ranges
Feb 13 21:23:35.941286 kernel:   node   0: [mem 0x0000000000001000-0x000000000009efff]
Feb 13 21:23:35.941295 kernel:   node   0: [mem 0x0000000000100000-0x000000007ffdbfff]
Feb 13 21:23:35.941304 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff]
Feb 13 21:23:35.941313 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 13 21:23:35.941322 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Feb 13 21:23:35.941331 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges
Feb 13 21:23:35.941340 kernel: ACPI: PM-Timer IO Port: 0x608
Feb 13 21:23:35.941351 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Feb 13 21:23:35.941360 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Feb 13 21:23:35.941369 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Feb 13 21:23:35.941378 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Feb 13 21:23:35.941387 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 13 21:23:35.941396 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Feb 13 21:23:35.941405 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Feb 13 21:23:35.941414 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 13 21:23:35.941423 kernel: TSC deadline timer available
Feb 13 21:23:35.941434 kernel: smpboot: Allowing 16 CPUs, 14 hotplug CPUs
Feb 13 21:23:35.941443 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Feb 13 21:23:35.941452 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Feb 13 21:23:35.941461 kernel: Booting paravirtualized kernel on KVM
Feb 13 21:23:35.941470 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 13 21:23:35.941480 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1
Feb 13 21:23:35.941489 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u262144
Feb 13 21:23:35.941498 kernel: pcpu-alloc: s197032 r8192 d32344 u262144 alloc=1*2097152
Feb 13 21:23:35.941507 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Feb 13 21:23:35.941518 kernel: kvm-guest: PV spinlocks enabled
Feb 13 21:23:35.941527 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Feb 13 21:23:35.941537 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=015d1d9e5e601f6a4e226c935072d3d0819e7eb2da20e68715973498f21aa3fe
Feb 13 21:23:35.941547 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 21:23:35.941556 kernel: random: crng init done
Feb 13 21:23:35.941565 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 21:23:35.941574 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Feb 13 21:23:35.941583 kernel: Fallback order for Node 0: 0
Feb 13 21:23:35.941594 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515804
Feb 13 21:23:35.941701 kernel: Policy zone: DMA32
Feb 13 21:23:35.941712 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 21:23:35.941721 kernel: software IO TLB: area num 16.
Feb 13 21:23:35.941731 kernel: Memory: 1899480K/2096616K available (14336K kernel code, 2301K rwdata, 22800K rodata, 43320K init, 1752K bss, 196876K reserved, 0K cma-reserved)
Feb 13 21:23:35.941740 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Feb 13 21:23:35.941749 kernel: ftrace: allocating 37893 entries in 149 pages
Feb 13 21:23:35.941758 kernel: ftrace: allocated 149 pages with 4 groups
Feb 13 21:23:35.941767 kernel: Dynamic Preempt: voluntary
Feb 13 21:23:35.941780 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 21:23:35.941791 kernel: rcu: RCU event tracing is enabled.
Feb 13 21:23:35.941800 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Feb 13 21:23:35.941809 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 21:23:35.941819 kernel: Rude variant of Tasks RCU enabled.
Feb 13 21:23:35.941836 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 21:23:35.941848 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 21:23:35.941857 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Feb 13 21:23:35.941867 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16
Feb 13 21:23:35.941877 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 21:23:35.941886 kernel: Console: colour VGA+ 80x25
Feb 13 21:23:35.941895 kernel: printk: console [tty0] enabled
Feb 13 21:23:35.941907 kernel: printk: console [ttyS0] enabled
Feb 13 21:23:35.941917 kernel: ACPI: Core revision 20230628
Feb 13 21:23:35.941927 kernel: APIC: Switch to symmetric I/O mode setup
Feb 13 21:23:35.941937 kernel: x2apic enabled
Feb 13 21:23:35.941946 kernel: APIC: Switched APIC routing to: physical x2apic
Feb 13 21:23:35.941958 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2113312ac93, max_idle_ns: 440795244843 ns
Feb 13 21:23:35.941968 kernel: Calibrating delay loop (skipped) preset value.. 4589.15 BogoMIPS (lpj=2294576)
Feb 13 21:23:35.941978 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Feb 13 21:23:35.941988 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Feb 13 21:23:35.941997 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Feb 13 21:23:35.942006 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 13 21:23:35.942016 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Feb 13 21:23:35.942025 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Feb 13 21:23:35.942035 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS
Feb 13 21:23:35.942044 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 13 21:23:35.942056 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT
Feb 13 21:23:35.942065 kernel: RETBleed: Mitigation: Enhanced IBRS
Feb 13 21:23:35.942075 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Feb 13 21:23:35.942084 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Feb 13 21:23:35.942094 kernel: TAA: Mitigation: Clear CPU buffers
Feb 13 21:23:35.942103 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Feb 13 21:23:35.942113 kernel: GDS: Unknown: Dependent on hypervisor status
Feb 13 21:23:35.942122 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 13 21:23:35.942132 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 13 21:23:35.942141 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 13 21:23:35.942150 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Feb 13 21:23:35.942162 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Feb 13 21:23:35.942172 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Feb 13 21:23:35.942181 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Feb 13 21:23:35.942191 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Feb 13 21:23:35.942200 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Feb 13 21:23:35.942209 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Feb 13 21:23:35.942219 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Feb 13 21:23:35.942228 kernel: x86/fpu: xstate_offset[9]: 2432, xstate_sizes[9]: 8
Feb 13 21:23:35.942238 kernel: x86/fpu: Enabled xstate features 0x2e7, context size is 2440 bytes, using 'compacted' format.
Feb 13 21:23:35.942247 kernel: Freeing SMP alternatives memory: 32K
Feb 13 21:23:35.942256 kernel: pid_max: default: 32768 minimum: 301
Feb 13 21:23:35.942268 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 21:23:35.942278 kernel: landlock: Up and running.
Feb 13 21:23:35.942287 kernel: SELinux: Initializing.
Feb 13 21:23:35.942297 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Feb 13 21:23:35.942306 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Feb 13 21:23:35.942316 kernel: smpboot: CPU0: Intel Xeon Processor (Cascadelake) (family: 0x6, model: 0x55, stepping: 0x6)
Feb 13 21:23:35.942325 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Feb 13 21:23:35.942335 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Feb 13 21:23:35.942345 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Feb 13 21:23:35.942354 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Feb 13 21:23:35.942367 kernel: signal: max sigframe size: 3632
Feb 13 21:23:35.942376 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 21:23:35.942386 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 21:23:35.942396 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Feb 13 21:23:35.942405 kernel: smp: Bringing up secondary CPUs ...
Feb 13 21:23:35.942415 kernel: smpboot: x86: Booting SMP configuration:
Feb 13 21:23:35.942425 kernel: .... node #0, CPUs: #1
Feb 13 21:23:35.942434 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1
Feb 13 21:23:35.942444 kernel: smp: Brought up 1 node, 2 CPUs
Feb 13 21:23:35.942454 kernel: smpboot: Max logical packages: 16
Feb 13 21:23:35.942466 kernel: smpboot: Total of 2 processors activated (9178.30 BogoMIPS)
Feb 13 21:23:35.942476 kernel: devtmpfs: initialized
Feb 13 21:23:35.942485 kernel: x86/mm: Memory block size: 128MB
Feb 13 21:23:35.942495 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 21:23:35.942505 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Feb 13 21:23:35.942515 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 21:23:35.942524 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 21:23:35.942534 kernel: audit: initializing netlink subsys (disabled)
Feb 13 21:23:35.942544 kernel: audit: type=2000 audit(1739481815.269:1): state=initialized audit_enabled=0 res=1
Feb 13 21:23:35.942556 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 21:23:35.942565 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 13 21:23:35.942575 kernel: cpuidle: using governor menu
Feb 13 21:23:35.942585 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 21:23:35.942594 kernel: dca service started, version 1.12.1
Feb 13 21:23:35.942624 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Feb 13 21:23:35.942634 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Feb 13 21:23:35.942644 kernel: PCI: Using configuration type 1 for base access
Feb 13 21:23:35.942657 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 13 21:23:35.942667 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 21:23:35.942677 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 21:23:35.942687 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 21:23:35.942696 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 21:23:35.942706 kernel: ACPI: Added _OSI(Module Device)
Feb 13 21:23:35.942715 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 21:23:35.942725 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 21:23:35.942735 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 21:23:35.942747 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 13 21:23:35.942756 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Feb 13 21:23:35.942766 kernel: ACPI: Interpreter enabled
Feb 13 21:23:35.942775 kernel: ACPI: PM: (supports S0 S5)
Feb 13 21:23:35.942785 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 13 21:23:35.942795 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 13 21:23:35.942805 kernel: PCI: Using E820 reservations for host bridge windows
Feb 13 21:23:35.942814 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Feb 13 21:23:35.942824 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 13 21:23:35.942994 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 21:23:35.943099 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Feb 13 21:23:35.943189 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Feb 13 21:23:35.943202 kernel: PCI host bridge to bus 0000:00
Feb 13 21:23:35.943313 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Feb 13 21:23:35.943404 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Feb 13 21:23:35.943493 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb 13 21:23:35.943582 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Feb 13 21:23:35.943683 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Feb 13 21:23:35.943765 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window]
Feb 13 21:23:35.943846 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 13 21:23:35.943960 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Feb 13 21:23:35.944069 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000
Feb 13 21:23:35.944167 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfa000000-0xfbffffff pref]
Feb 13 21:23:35.944259 kernel: pci 0000:00:01.0: reg 0x14: [mem 0xfea50000-0xfea50fff]
Feb 13 21:23:35.944349 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea40000-0xfea4ffff pref]
Feb 13 21:23:35.944439 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Feb 13 21:23:35.944546 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Feb 13 21:23:35.944674 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea51000-0xfea51fff]
Feb 13 21:23:35.944792 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Feb 13 21:23:35.944891 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea52000-0xfea52fff]
Feb 13 21:23:35.944991 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Feb 13 21:23:35.945084 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea53000-0xfea53fff]
Feb 13 21:23:35.945189 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Feb 13 21:23:35.945282 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea54000-0xfea54fff]
Feb 13 21:23:35.945386 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Feb 13 21:23:35.945482 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea55000-0xfea55fff]
Feb 13 21:23:35.945589 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Feb 13 21:23:35.945699 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea56000-0xfea56fff]
Feb 13 21:23:35.945802 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Feb 13 21:23:35.945895 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea57000-0xfea57fff]
Feb 13 21:23:35.945994 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Feb 13 21:23:35.946091 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea58000-0xfea58fff]
Feb 13 21:23:35.946194 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Feb 13 21:23:35.946286 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0c0-0xc0df]
Feb 13 21:23:35.946378 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfea59000-0xfea59fff]
Feb 13 21:23:35.946470 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref]
Feb 13 21:23:35.946562 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfea00000-0xfea3ffff pref]
Feb 13 21:23:35.946690 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Feb 13 21:23:35.946789 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Feb 13 21:23:35.946882 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfea5a000-0xfea5afff]
Feb 13 21:23:35.946974 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfd004000-0xfd007fff 64bit pref]
Feb 13 21:23:35.947077 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Feb 13 21:23:35.947170 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Feb 13 21:23:35.947270 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Feb 13 21:23:35.947361 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0e0-0xc0ff]
Feb 13 21:23:35.947458 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea5b000-0xfea5bfff]
Feb 13 21:23:35.947561 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Feb 13 21:23:35.947744 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Feb 13 21:23:35.947847 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400
Feb 13 21:23:35.947941 kernel: pci 0000:01:00.0: reg 0x10: [mem 0xfda00000-0xfda000ff 64bit]
Feb 13 21:23:35.948039 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Feb 13 21:23:35.948129 kernel: pci 0000:00:02.0:   bridge window [mem 0xfd800000-0xfdbfffff]
Feb 13 21:23:35.948219 kernel: pci 0000:00:02.0:   bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Feb 13 21:23:35.948328 kernel: pci_bus 0000:02: extended config space not accessible
Feb 13 21:23:35.948433 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000
Feb 13 21:23:35.948534 kernel: pci 0000:02:01.0: reg 0x10: [mem 0xfd800000-0xfd80000f]
Feb 13 21:23:35.948657 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Feb 13 21:23:35.948759 kernel: pci 0000:01:00.0:   bridge window [mem 0xfd800000-0xfd9fffff]
Feb 13 21:23:35.948870 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330
Feb 13 21:23:35.948966 kernel: pci 0000:03:00.0: reg 0x10: [mem 0xfe800000-0xfe803fff 64bit]
Feb 13 21:23:35.949059 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Feb 13 21:23:35.949150 kernel: pci 0000:00:02.1:   bridge window [mem 0xfe800000-0xfe9fffff]
Feb 13 21:23:35.949242 kernel: pci 0000:00:02.1:   bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Feb 13 21:23:35.949348 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00
Feb 13 21:23:35.949449 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref]
Feb 13 21:23:35.949543 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Feb 13 21:23:35.949660 kernel: pci 0000:00:02.2:   bridge window [mem 0xfe600000-0xfe7fffff]
Feb 13 21:23:35.949752 kernel: pci 0000:00:02.2:   bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Feb 13 21:23:35.949845 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Feb 13 21:23:35.949936 kernel: pci 0000:00:02.3:   bridge window [mem 0xfe400000-0xfe5fffff]
Feb 13 21:23:35.950026 kernel: pci 0000:00:02.3:   bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Feb 13 21:23:35.950118 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Feb 13 21:23:35.950213 kernel: pci 0000:00:02.4:   bridge window [mem 0xfe200000-0xfe3fffff]
Feb 13 21:23:35.950304 kernel: pci 0000:00:02.4:   bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Feb 13 21:23:35.950396 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Feb 13 21:23:35.950486 kernel: pci 0000:00:02.5:   bridge window [mem 0xfe000000-0xfe1fffff]
Feb 13 21:23:35.950577 kernel: pci 0000:00:02.5:   bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Feb 13 21:23:35.950747 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Feb 13 21:23:35.950839 kernel: pci 0000:00:02.6:   bridge window [mem 0xfde00000-0xfdffffff]
Feb 13 21:23:35.950928 kernel: pci 0000:00:02.6:   bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Feb 13 21:23:35.951026 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Feb 13 21:23:35.951116 kernel: pci 0000:00:02.7:   bridge window [mem 0xfdc00000-0xfddfffff]
Feb 13 21:23:35.951206 kernel: pci 0000:00:02.7:   bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Feb 13 21:23:35.951219 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Feb 13 21:23:35.951229 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Feb 13 21:23:35.951239 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Feb 13 21:23:35.951249 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Feb 13 21:23:35.951259 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Feb 13 21:23:35.951269 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Feb 13 21:23:35.951282 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Feb 13 21:23:35.951292 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Feb 13 21:23:35.951315 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Feb 13 21:23:35.951326 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Feb 13 21:23:35.951336 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Feb 13 21:23:35.951345 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Feb 13 21:23:35.951355 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Feb 13 21:23:35.951365 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Feb 13 21:23:35.951375 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Feb 13 21:23:35.951388 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Feb 13 21:23:35.951398 kernel: iommu: Default domain type: Translated
Feb 13 21:23:35.951407 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 13 21:23:35.951417 kernel: PCI: Using ACPI for IRQ routing
Feb 13 21:23:35.951427 kernel: PCI: pci_cache_line_size set to 64 bytes
Feb 13 21:23:35.951436 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Feb 13 21:23:35.951446 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff]
Feb 13 21:23:35.951538 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Feb 13 21:23:35.951665 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Feb 13 21:23:35.951757 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Feb 13 21:23:35.951770 kernel: vgaarb: loaded
Feb 13 21:23:35.951780 kernel: clocksource: Switched to clocksource kvm-clock
Feb 13 21:23:35.951790 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 21:23:35.951800 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 21:23:35.951810 kernel: pnp: PnP ACPI init
Feb 13 21:23:35.951909 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Feb 13 21:23:35.951927 kernel: pnp: PnP ACPI: found 5 devices
Feb 13 21:23:35.951937 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 13 21:23:35.951947 kernel: NET: Registered PF_INET protocol family
Feb 13 21:23:35.951957 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 21:23:35.951967 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Feb 13 21:23:35.951977 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 21:23:35.951987 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 13 21:23:35.951997 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Feb 13 21:23:35.952006 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Feb 13 21:23:35.952019 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Feb 13 21:23:35.952029 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Feb 13 21:23:35.952039 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 21:23:35.952049 kernel: NET: Registered PF_XDP protocol family
Feb 13 21:23:35.952138 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000
Feb 13 21:23:35.952232 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Feb 13 21:23:35.952323 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Feb 13 21:23:35.952416 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Feb 13 21:23:35.952512 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Feb 13 21:23:35.952602 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Feb 13 21:23:35.952732 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Feb 13 21:23:35.952845 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Feb 13 21:23:35.952935 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Feb 13 21:23:35.953029 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Feb 13 21:23:35.953119 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Feb 13 21:23:35.953211 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Feb 13 21:23:35.953315 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Feb 13 21:23:35.953407 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Feb 13 21:23:35.953498 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Feb 13 21:23:35.953589 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Feb 13 21:23:35.953701 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Feb 13 21:23:35.953797 kernel: pci 0000:01:00.0:   bridge window [mem 0xfd800000-0xfd9fffff]
Feb 13 21:23:35.953893 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Feb 13 21:23:35.953983 kernel: pci 0000:00:02.0:   bridge window [io 0x1000-0x1fff]
Feb 13 21:23:35.954073 kernel: pci 0000:00:02.0:   bridge window [mem 0xfd800000-0xfdbfffff]
Feb 13 21:23:35.954169 kernel: pci 0000:00:02.0:   bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Feb 13 21:23:35.954264 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Feb 13 21:23:35.954358 kernel: pci 0000:00:02.1:   bridge window [io 0x2000-0x2fff]
Feb 13 21:23:35.954448 kernel: pci 0000:00:02.1:   bridge window [mem 0xfe800000-0xfe9fffff]
Feb 13 21:23:35.954539 kernel: pci 0000:00:02.1:   bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Feb 13 21:23:35.954655 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Feb 13 21:23:35.954747 kernel: pci 0000:00:02.2:   bridge window [io 0x3000-0x3fff]
Feb 13 21:23:35.954837 kernel: pci 0000:00:02.2:   bridge window [mem 0xfe600000-0xfe7fffff]
Feb 13 21:23:35.954931 kernel: pci 0000:00:02.2:   bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Feb 13 21:23:35.955023 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Feb 13 21:23:35.955114 kernel: pci 0000:00:02.3:   bridge window [io 0x4000-0x4fff]
Feb 13 21:23:35.955209 kernel: pci 0000:00:02.3:   bridge window [mem 0xfe400000-0xfe5fffff]
Feb 13 21:23:35.955299 kernel: pci 0000:00:02.3:   bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Feb 13 21:23:35.955390 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Feb 13 21:23:35.955480 kernel: pci 0000:00:02.4:   bridge window [io 0x5000-0x5fff]
Feb 13 21:23:35.955572 kernel: pci 0000:00:02.4:   bridge window [mem 0xfe200000-0xfe3fffff]
Feb 13 21:23:35.955710 kernel: pci 0000:00:02.4:   bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Feb 13 21:23:35.955802 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Feb 13 21:23:35.955892 kernel: pci 0000:00:02.5:   bridge window [io 0x6000-0x6fff]
Feb 13 21:23:35.955983 kernel: pci 0000:00:02.5:   bridge window [mem 0xfe000000-0xfe1fffff]
Feb 13 21:23:35.956079 kernel: pci 0000:00:02.5:   bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Feb 13 21:23:35.956169 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Feb 13 21:23:35.956261 kernel: pci 0000:00:02.6:   bridge window [io 0x7000-0x7fff]
Feb 13 21:23:35.956353 kernel: pci 0000:00:02.6:   bridge window [mem 0xfde00000-0xfdffffff]
Feb 13 21:23:35.956447 kernel: pci 0000:00:02.6:   bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Feb 13 21:23:35.956542 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Feb 13 21:23:35.956656 kernel: pci 0000:00:02.7:   bridge window [io 0x8000-0x8fff]
Feb 13 21:23:35.956747 kernel: pci 0000:00:02.7:   bridge window [mem 0xfdc00000-0xfddfffff]
Feb 13 21:23:35.956838 kernel: pci 0000:00:02.7:   bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Feb 13 21:23:35.956926 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Feb 13 21:23:35.957009 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Feb 13 21:23:35.957092 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Feb 13 21:23:35.957173 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Feb 13 21:23:35.957256 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Feb 13 21:23:35.957342 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window]
Feb 13 21:23:35.957438 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Feb 13 21:23:35.957526 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff]
Feb 13 21:23:35.957691 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Feb 13 21:23:35.957788 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff]
Feb 13 21:23:35.957889 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff]
Feb 13 21:23:35.957979 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff]
Feb 13 21:23:35.958064 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Feb 13 21:23:35.958153 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff]
Feb 13 21:23:35.958237 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff]
Feb 13 21:23:35.958321 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Feb 13 21:23:35.958413 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff]
Feb 13 21:23:35.958498 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff]
Feb 13 21:23:35.958587 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Feb 13 21:23:35.958705 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff]
Feb 13 21:23:35.958792 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff]
Feb 13 21:23:35.958877 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Feb 13 21:23:35.958968 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff]
Feb 13 21:23:35.959054 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff]
Feb 13 21:23:35.959138 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref]
Feb 13 21:23:35.959237 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff]
Feb 13 21:23:35.959323 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff]
Feb 13 21:23:35.959407 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref]
Feb 13 21:23:35.959522 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff]
Feb 13 21:23:35.961674 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff]
Feb 13 21:23:35.961792 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Feb 13 21:23:35.961808 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Feb 13 21:23:35.961825 kernel: PCI: CLS 0 bytes, default 64
Feb 13 21:23:35.961836 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Feb 
13 21:23:35.961847 kernel: software IO TLB: mapped [mem 0x0000000079800000-0x000000007d800000] (64MB) Feb 13 21:23:35.961858 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Feb 13 21:23:35.961868 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2113312ac93, max_idle_ns: 440795244843 ns Feb 13 21:23:35.961879 kernel: Initialise system trusted keyrings Feb 13 21:23:35.961890 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Feb 13 21:23:35.961900 kernel: Key type asymmetric registered Feb 13 21:23:35.961911 kernel: Asymmetric key parser 'x509' registered Feb 13 21:23:35.961924 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Feb 13 21:23:35.961935 kernel: io scheduler mq-deadline registered Feb 13 21:23:35.961945 kernel: io scheduler kyber registered Feb 13 21:23:35.961955 kernel: io scheduler bfq registered Feb 13 21:23:35.962059 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Feb 13 21:23:35.962157 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Feb 13 21:23:35.962254 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 21:23:35.962352 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Feb 13 21:23:35.962449 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Feb 13 21:23:35.962867 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 21:23:35.962973 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Feb 13 21:23:35.963067 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Feb 13 21:23:35.963161 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 21:23:35.963261 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Feb 13 
21:23:35.963363 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Feb 13 21:23:35.963460 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 21:23:35.963557 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Feb 13 21:23:35.964740 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Feb 13 21:23:35.964847 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 21:23:35.964945 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Feb 13 21:23:35.965095 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Feb 13 21:23:35.965202 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 21:23:35.965302 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Feb 13 21:23:35.965398 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Feb 13 21:23:35.965494 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 21:23:35.965597 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Feb 13 21:23:35.965720 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Feb 13 21:23:35.965817 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 21:23:35.965831 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Feb 13 21:23:35.965843 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Feb 13 21:23:35.965854 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Feb 13 21:23:35.965865 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 13 21:23:35.965875 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Feb 13 21:23:35.965890 kernel: 
i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Feb 13 21:23:35.965900 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Feb 13 21:23:35.965911 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Feb 13 21:23:35.965922 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Feb 13 21:23:35.966026 kernel: rtc_cmos 00:03: RTC can wake from S4 Feb 13 21:23:35.966116 kernel: rtc_cmos 00:03: registered as rtc0 Feb 13 21:23:35.966206 kernel: rtc_cmos 00:03: setting system clock to 2025-02-13T21:23:35 UTC (1739481815) Feb 13 21:23:35.966298 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Feb 13 21:23:35.966311 kernel: intel_pstate: CPU model not supported Feb 13 21:23:35.966322 kernel: NET: Registered PF_INET6 protocol family Feb 13 21:23:35.966332 kernel: Segment Routing with IPv6 Feb 13 21:23:35.966343 kernel: In-situ OAM (IOAM) with IPv6 Feb 13 21:23:35.966353 kernel: NET: Registered PF_PACKET protocol family Feb 13 21:23:35.966364 kernel: Key type dns_resolver registered Feb 13 21:23:35.966374 kernel: IPI shorthand broadcast: enabled Feb 13 21:23:35.966385 kernel: sched_clock: Marking stable (967004123, 127797913)->(1192512962, -97710926) Feb 13 21:23:35.966396 kernel: registered taskstats version 1 Feb 13 21:23:35.966409 kernel: Loading compiled-in X.509 certificates Feb 13 21:23:35.966420 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: b3acedbed401b3cd9632ee9302ddcce254d8924d' Feb 13 21:23:35.966430 kernel: Key type .fscrypt registered Feb 13 21:23:35.966440 kernel: Key type fscrypt-provisioning registered Feb 13 21:23:35.966450 kernel: ima: No TPM chip found, activating TPM-bypass! 
Feb 13 21:23:35.966461 kernel: ima: Allocated hash algorithm: sha1
Feb 13 21:23:35.966471 kernel: ima: No architecture policies found
Feb 13 21:23:35.966482 kernel: clk: Disabling unused clocks
Feb 13 21:23:35.966495 kernel: Freeing unused kernel image (initmem) memory: 43320K
Feb 13 21:23:35.966505 kernel: Write protecting the kernel read-only data: 38912k
Feb 13 21:23:35.966516 kernel: Freeing unused kernel image (rodata/data gap) memory: 1776K
Feb 13 21:23:35.966526 kernel: Run /init as init process
Feb 13 21:23:35.966536 kernel: with arguments:
Feb 13 21:23:35.966547 kernel: /init
Feb 13 21:23:35.966557 kernel: with environment:
Feb 13 21:23:35.966567 kernel: HOME=/
Feb 13 21:23:35.966578 kernel: TERM=linux
Feb 13 21:23:35.966588 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 21:23:35.967811 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 21:23:35.967835 systemd[1]: Detected virtualization kvm.
Feb 13 21:23:35.967847 systemd[1]: Detected architecture x86-64.
Feb 13 21:23:35.967858 systemd[1]: Running in initrd.
Feb 13 21:23:35.967869 systemd[1]: No hostname configured, using default hostname.
Feb 13 21:23:35.967879 systemd[1]: Hostname set to .
Feb 13 21:23:35.967891 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 21:23:35.967906 systemd[1]: Queued start job for default target initrd.target.
Feb 13 21:23:35.967917 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 21:23:35.967929 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 21:23:35.967941 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 21:23:35.967952 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 21:23:35.967963 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 21:23:35.967974 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 21:23:35.967989 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 21:23:35.968000 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 21:23:35.968011 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 21:23:35.968022 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 21:23:35.968033 systemd[1]: Reached target paths.target - Path Units.
Feb 13 21:23:35.968044 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 21:23:35.968055 systemd[1]: Reached target swap.target - Swaps.
Feb 13 21:23:35.968066 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 21:23:35.968080 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 21:23:35.968091 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 21:23:35.968102 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 21:23:35.968113 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Feb 13 21:23:35.968124 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 21:23:35.968135 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 21:23:35.968146 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 21:23:35.968157 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 21:23:35.968170 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 21:23:35.968181 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 21:23:35.968192 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 21:23:35.968202 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 21:23:35.968213 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 21:23:35.968224 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 21:23:35.968235 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 21:23:35.968275 systemd-journald[201]: Collecting audit messages is disabled.
Feb 13 21:23:35.968306 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 21:23:35.968317 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 21:23:35.968328 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 21:23:35.968342 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 21:23:35.968358 systemd-journald[201]: Journal started
Feb 13 21:23:35.968382 systemd-journald[201]: Runtime Journal (/run/log/journal/a598b8e6b8e546f09678fe7f3022c50e) is 4.7M, max 37.9M, 33.2M free.
Feb 13 21:23:35.962883 systemd-modules-load[202]: Inserted module 'overlay'
Feb 13 21:23:35.980746 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 21:23:35.997640 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 21:23:35.999054 systemd-modules-load[202]: Inserted module 'br_netfilter'
Feb 13 21:23:36.018248 kernel: Bridge firewalling registered
Feb 13 21:23:36.018886 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 21:23:36.020387 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 21:23:36.021159 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 21:23:36.032873 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 21:23:36.035468 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 21:23:36.038786 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 21:23:36.041470 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 21:23:36.055795 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 21:23:36.058972 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 21:23:36.063310 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 21:23:36.069783 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 21:23:36.071024 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 21:23:36.074752 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 21:23:36.082787 dracut-cmdline[234]: dracut-dracut-053
Feb 13 21:23:36.088328 dracut-cmdline[234]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=015d1d9e5e601f6a4e226c935072d3d0819e7eb2da20e68715973498f21aa3fe
Feb 13 21:23:36.119727 systemd-resolved[239]: Positive Trust Anchors:
Feb 13 21:23:36.119745 systemd-resolved[239]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 21:23:36.119787 systemd-resolved[239]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 21:23:36.126049 systemd-resolved[239]: Defaulting to hostname 'linux'.
Feb 13 21:23:36.128100 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 21:23:36.129530 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 21:23:36.184653 kernel: SCSI subsystem initialized
Feb 13 21:23:36.194657 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 21:23:36.205674 kernel: iscsi: registered transport (tcp)
Feb 13 21:23:36.228790 kernel: iscsi: registered transport (qla4xxx)
Feb 13 21:23:36.228889 kernel: QLogic iSCSI HBA Driver
Feb 13 21:23:36.300025 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Feb 13 21:23:36.306760 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 13 21:23:36.334794 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 21:23:36.334874 kernel: device-mapper: uevent: version 1.0.3
Feb 13 21:23:36.336597 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 13 21:23:36.391768 kernel: raid6: avx512x4 gen() 27673 MB/s
Feb 13 21:23:36.408730 kernel: raid6: avx512x2 gen() 26780 MB/s
Feb 13 21:23:36.425662 kernel: raid6: avx512x1 gen() 26030 MB/s
Feb 13 21:23:36.442684 kernel: raid6: avx2x4 gen() 18459 MB/s
Feb 13 21:23:36.459690 kernel: raid6: avx2x2 gen() 20147 MB/s
Feb 13 21:23:36.476725 kernel: raid6: avx2x1 gen() 17601 MB/s
Feb 13 21:23:36.476868 kernel: raid6: using algorithm avx512x4 gen() 27673 MB/s
Feb 13 21:23:36.494797 kernel: raid6: .... xor() 5912 MB/s, rmw enabled
Feb 13 21:23:36.494943 kernel: raid6: using avx512x2 recovery algorithm
Feb 13 21:23:36.526651 kernel: xor: automatically using best checksumming function avx
Feb 13 21:23:36.697690 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 13 21:23:36.715738 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 21:23:36.721947 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 21:23:36.737261 systemd-udevd[420]: Using default interface naming scheme 'v255'.
Feb 13 21:23:36.742331 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 21:23:36.752736 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 13 21:23:36.775953 dracut-pre-trigger[428]: rd.md=0: removing MD RAID activation
Feb 13 21:23:36.821883 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 21:23:36.832970 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 21:23:36.905540 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 21:23:36.918199 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 13 21:23:36.937300 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 13 21:23:36.938475 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 21:23:36.939494 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 21:23:36.939916 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 21:23:36.944210 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 13 21:23:36.966991 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 21:23:37.006620 kernel: virtio_blk virtio1: 2/0/0 default/read/poll queues
Feb 13 21:23:37.051345 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Feb 13 21:23:37.051506 kernel: cryptd: max_cpu_qlen set to 1000
Feb 13 21:23:37.051523 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 13 21:23:37.051561 kernel: GPT:17805311 != 125829119
Feb 13 21:23:37.051576 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 13 21:23:37.051591 kernel: GPT:17805311 != 125829119
Feb 13 21:23:37.051629 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 13 21:23:37.051645 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 21:23:37.051660 kernel: libata version 3.00 loaded.
Feb 13 21:23:37.051675 kernel: AVX2 version of gcm_enc/dec engaged.
Feb 13 21:23:37.023561 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 21:23:37.023704 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 21:23:37.024252 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 21:23:37.024665 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 21:23:37.024781 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 21:23:37.025191 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 21:23:37.037837 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 21:23:37.061625 kernel: AES CTR mode by8 optimization enabled
Feb 13 21:23:37.080817 kernel: ahci 0000:00:1f.2: version 3.0
Feb 13 21:23:37.090486 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Feb 13 21:23:37.090516 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Feb 13 21:23:37.090715 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Feb 13 21:23:37.091891 kernel: scsi host0: ahci
Feb 13 21:23:37.092041 kernel: scsi host1: ahci
Feb 13 21:23:37.092180 kernel: ACPI: bus type USB registered
Feb 13 21:23:37.092196 kernel: scsi host2: ahci
Feb 13 21:23:37.092322 kernel: usbcore: registered new interface driver usbfs
Feb 13 21:23:37.092338 kernel: scsi host3: ahci
Feb 13 21:23:37.092464 kernel: scsi host4: ahci
Feb 13 21:23:37.094837 kernel: scsi host5: ahci
Feb 13 21:23:37.094980 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 38
Feb 13 21:23:37.095004 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 38
Feb 13 21:23:37.095018 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 38
Feb 13 21:23:37.095032 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 38
Feb 13 21:23:37.095046 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 38
Feb 13 21:23:37.095060 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 38
Feb 13 21:23:37.095074 kernel: usbcore: registered new interface driver hub
Feb 13 21:23:37.096646 kernel: usbcore: registered new device driver usb
Feb 13 21:23:37.120627 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (482)
Feb 13 21:23:37.138628 kernel: BTRFS: device fsid c7adc9b8-df7f-4a5f-93bf-204def2767a9 devid 1 transid 39 /dev/vda3 scanned by (udev-worker) (474)
Feb 13 21:23:37.142656 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 21:23:37.149739 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Feb 13 21:23:37.155644 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Feb 13 21:23:37.161423 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Feb 13 21:23:37.166367 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Feb 13 21:23:37.167686 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Feb 13 21:23:37.173984 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Feb 13 21:23:37.176841 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 21:23:37.186088 disk-uuid[558]: Primary Header is updated.
Feb 13 21:23:37.186088 disk-uuid[558]: Secondary Entries is updated.
Feb 13 21:23:37.186088 disk-uuid[558]: Secondary Header is updated.
Feb 13 21:23:37.190261 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 21:23:37.193622 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 21:23:37.215097 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 21:23:37.398743 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Feb 13 21:23:37.398823 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Feb 13 21:23:37.399643 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Feb 13 21:23:37.410695 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Feb 13 21:23:37.410775 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Feb 13 21:23:37.411683 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Feb 13 21:23:37.443634 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller
Feb 13 21:23:37.453180 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1
Feb 13 21:23:37.453328 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
Feb 13 21:23:37.453450 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller
Feb 13 21:23:37.453583 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2
Feb 13 21:23:37.454001 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed
Feb 13 21:23:37.454123 kernel: hub 1-0:1.0: USB hub found
Feb 13 21:23:37.454253 kernel: hub 1-0:1.0: 4 ports detected
Feb 13 21:23:37.454364 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Feb 13 21:23:37.454494 kernel: hub 2-0:1.0: USB hub found
Feb 13 21:23:37.454644 kernel: hub 2-0:1.0: 4 ports detected
Feb 13 21:23:37.689742 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd
Feb 13 21:23:37.836777 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 13 21:23:37.840812 kernel: usbcore: registered new interface driver usbhid
Feb 13 21:23:37.840865 kernel: usbhid: USB HID core driver
Feb 13 21:23:37.845963 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2
Feb 13 21:23:37.846005 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0
Feb 13 21:23:38.198679 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 21:23:38.200230 disk-uuid[559]: The operation has completed successfully.
Feb 13 21:23:38.249952 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 13 21:23:38.250071 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Feb 13 21:23:38.263750 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Feb 13 21:23:38.281678 sh[585]: Success
Feb 13 21:23:38.299701 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Feb 13 21:23:38.360803 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Feb 13 21:23:38.373780 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Feb 13 21:23:38.375772 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Feb 13 21:23:38.393652 kernel: BTRFS info (device dm-0): first mount of filesystem c7adc9b8-df7f-4a5f-93bf-204def2767a9
Feb 13 21:23:38.393723 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Feb 13 21:23:38.393751 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Feb 13 21:23:38.395911 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Feb 13 21:23:38.395944 kernel: BTRFS info (device dm-0): using free space tree
Feb 13 21:23:38.404296 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Feb 13 21:23:38.406814 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Feb 13 21:23:38.422976 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Feb 13 21:23:38.427973 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Feb 13 21:23:38.443316 kernel: BTRFS info (device vda6): first mount of filesystem 60a376b4-1193-4e0b-af89-a0e6d698bf0f
Feb 13 21:23:38.443360 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 21:23:38.443375 kernel: BTRFS info (device vda6): using free space tree
Feb 13 21:23:38.447627 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 21:23:38.460045 kernel: BTRFS info (device vda6): last unmount of filesystem 60a376b4-1193-4e0b-af89-a0e6d698bf0f
Feb 13 21:23:38.459708 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 13 21:23:38.466055 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Feb 13 21:23:38.470770 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Feb 13 21:23:38.573948 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 21:23:38.582821 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 21:23:38.599354 ignition[673]: Ignition 2.20.0
Feb 13 21:23:38.599369 ignition[673]: Stage: fetch-offline
Feb 13 21:23:38.600705 ignition[673]: no configs at "/usr/lib/ignition/base.d"
Feb 13 21:23:38.600722 ignition[673]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Feb 13 21:23:38.601056 ignition[673]: parsed url from cmdline: ""
Feb 13 21:23:38.601060 ignition[673]: no config URL provided
Feb 13 21:23:38.601066 ignition[673]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 21:23:38.604111 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 21:23:38.601076 ignition[673]: no config at "/usr/lib/ignition/user.ign"
Feb 13 21:23:38.601082 ignition[673]: failed to fetch config: resource requires networking
Feb 13 21:23:38.601296 ignition[673]: Ignition finished successfully
Feb 13 21:23:38.607122 systemd-networkd[774]: lo: Link UP
Feb 13 21:23:38.607126 systemd-networkd[774]: lo: Gained carrier
Feb 13 21:23:38.608518 systemd-networkd[774]: Enumeration completed
Feb 13 21:23:38.609106 systemd-networkd[774]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 21:23:38.609110 systemd-networkd[774]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 21:23:38.609892 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 21:23:38.610504 systemd[1]: Reached target network.target - Network.
Feb 13 21:23:38.610792 systemd-networkd[774]: eth0: Link UP
Feb 13 21:23:38.610797 systemd-networkd[774]: eth0: Gained carrier
Feb 13 21:23:38.610805 systemd-networkd[774]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 21:23:38.619821 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Feb 13 21:23:38.632588 ignition[777]: Ignition 2.20.0
Feb 13 21:23:38.633225 ignition[777]: Stage: fetch
Feb 13 21:23:38.633414 ignition[777]: no configs at "/usr/lib/ignition/base.d"
Feb 13 21:23:38.633435 ignition[777]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Feb 13 21:23:38.633542 ignition[777]: parsed url from cmdline: ""
Feb 13 21:23:38.633546 ignition[777]: no config URL provided
Feb 13 21:23:38.633551 ignition[777]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 21:23:38.633559 ignition[777]: no config at "/usr/lib/ignition/user.ign"
Feb 13 21:23:38.633660 ignition[777]: config drive ("/dev/disk/by-label/config-2") not found. Waiting...
Feb 13 21:23:38.633683 ignition[777]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1
Feb 13 21:23:38.633698 ignition[777]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting...
Feb 13 21:23:38.633903 ignition[777]: GET error: Get "http://169.254.169.254/openstack/latest/user_data": dial tcp 169.254.169.254:80: connect: network is unreachable
Feb 13 21:23:38.652687 systemd-networkd[774]: eth0: DHCPv4 address 10.244.102.222/30, gateway 10.244.102.221 acquired from 10.244.102.221
Feb 13 21:23:38.834684 ignition[777]: GET http://169.254.169.254/openstack/latest/user_data: attempt #2
Feb 13 21:23:38.851121 ignition[777]: GET result: OK
Feb 13 21:23:38.851697 ignition[777]: parsing config with SHA512: 7540b1e5dfc4313d5f7207ad40223bdd4335dac7bcf7d64749741128329459bc416380176f598641074d7a7084f17ed4062d99198a454400ea53c09f8f68989b
Feb 13 21:23:38.856170 unknown[777]: fetched base config from "system"
Feb 13 21:23:38.856181 unknown[777]: fetched base config from "system"
Feb 13 21:23:38.856624 ignition[777]: fetch: fetch complete
Feb 13 21:23:38.856187 unknown[777]: fetched user config from "openstack"
Feb 13 21:23:38.856630 ignition[777]: fetch: fetch passed
Feb 13 21:23:38.856676 ignition[777]: Ignition finished successfully
Feb 13 21:23:38.858830 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Feb 13 21:23:38.864891 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Feb 13 21:23:38.883197 ignition[784]: Ignition 2.20.0
Feb 13 21:23:38.883663 ignition[784]: Stage: kargs
Feb 13 21:23:38.884031 ignition[784]: no configs at "/usr/lib/ignition/base.d"
Feb 13 21:23:38.884053 ignition[784]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Feb 13 21:23:38.886015 ignition[784]: kargs: kargs passed
Feb 13 21:23:38.886115 ignition[784]: Ignition finished successfully
Feb 13 21:23:38.887722 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Feb 13 21:23:38.893734 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Feb 13 21:23:38.908638 ignition[790]: Ignition 2.20.0
Feb 13 21:23:38.909691 ignition[790]: Stage: disks
Feb 13 21:23:38.909908 ignition[790]: no configs at "/usr/lib/ignition/base.d"
Feb 13 21:23:38.909920 ignition[790]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Feb 13 21:23:38.911893 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 13 21:23:38.910809 ignition[790]: disks: disks passed
Feb 13 21:23:38.914903 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 13 21:23:38.910857 ignition[790]: Ignition finished successfully
Feb 13 21:23:38.915872 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 21:23:38.917679 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 21:23:38.919096 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 21:23:38.921486 systemd[1]: Reached target basic.target - Basic System.
Feb 13 21:23:38.929902 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Feb 13 21:23:38.951883 systemd-fsck[798]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Feb 13 21:23:38.955296 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Feb 13 21:23:38.962764 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 13 21:23:39.081620 kernel: EXT4-fs (vda9): mounted filesystem 7d46b70d-4c30-46e6-9935-e1f7fb523560 r/w with ordered data mode. Quota mode: none.
Feb 13 21:23:39.081747 systemd[1]: Mounted sysroot.mount - /sysroot.
Feb 13 21:23:39.082739 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Feb 13 21:23:39.093841 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 21:23:39.097484 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Feb 13 21:23:39.098895 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Feb 13 21:23:39.102826 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent...
Feb 13 21:23:39.103923 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 13 21:23:39.103952 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 21:23:39.110631 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (806)
Feb 13 21:23:39.113718 kernel: BTRFS info (device vda6): first mount of filesystem 60a376b4-1193-4e0b-af89-a0e6d698bf0f
Feb 13 21:23:39.113752 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 21:23:39.113767 kernel: BTRFS info (device vda6): using free space tree
Feb 13 21:23:39.113540 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 13 21:23:39.117629 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 21:23:39.120451 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Feb 13 21:23:39.123490 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 21:23:39.176025 initrd-setup-root[833]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 21:23:39.184102 initrd-setup-root[840]: cut: /sysroot/etc/group: No such file or directory
Feb 13 21:23:39.190295 initrd-setup-root[848]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 21:23:39.197355 initrd-setup-root[855]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 21:23:39.326844 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 13 21:23:39.336760 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 13 21:23:39.340765 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Feb 13 21:23:39.349700 kernel: BTRFS info (device vda6): last unmount of filesystem 60a376b4-1193-4e0b-af89-a0e6d698bf0f
Feb 13 21:23:39.376748 ignition[923]: INFO : Ignition 2.20.0
Feb 13 21:23:39.377582 ignition[923]: INFO : Stage: mount
Feb 13 21:23:39.376748 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 13 21:23:39.379171 ignition[923]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 21:23:39.379171 ignition[923]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Feb 13 21:23:39.380255 ignition[923]: INFO : mount: mount passed
Feb 13 21:23:39.380255 ignition[923]: INFO : Ignition finished successfully
Feb 13 21:23:39.381254 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Feb 13 21:23:39.393464 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Feb 13 21:23:40.653092 systemd-networkd[774]: eth0: Gained IPv6LL
Feb 13 21:23:42.165325 systemd-networkd[774]: eth0: Ignoring DHCPv6 address 2a02:1348:17d:19b7:24:19ff:fef4:66de/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:17d:19b7:24:19ff:fef4:66de/64 assigned by NDisc.
Feb 13 21:23:42.165359 systemd-networkd[774]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no.
Feb 13 21:23:46.265059 coreos-metadata[808]: Feb 13 21:23:46.264 WARN failed to locate config-drive, using the metadata service API instead
Feb 13 21:23:46.282342 coreos-metadata[808]: Feb 13 21:23:46.282 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Feb 13 21:23:46.297470 coreos-metadata[808]: Feb 13 21:23:46.297 INFO Fetch successful
Feb 13 21:23:46.304330 coreos-metadata[808]: Feb 13 21:23:46.300 INFO wrote hostname srv-9zhep.gb1.brightbox.com to /sysroot/etc/hostname
Feb 13 21:23:46.304782 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully.
Feb 13 21:23:46.305046 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent.
Feb 13 21:23:46.315712 systemd[1]: Starting ignition-files.service - Ignition (files)...
Feb 13 21:23:46.326465 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 21:23:46.338626 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (940)
Feb 13 21:23:46.342093 kernel: BTRFS info (device vda6): first mount of filesystem 60a376b4-1193-4e0b-af89-a0e6d698bf0f
Feb 13 21:23:46.342137 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 21:23:46.342152 kernel: BTRFS info (device vda6): using free space tree
Feb 13 21:23:46.345622 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 21:23:46.350549 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 21:23:46.378751 ignition[958]: INFO : Ignition 2.20.0
Feb 13 21:23:46.378751 ignition[958]: INFO : Stage: files
Feb 13 21:23:46.379805 ignition[958]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 21:23:46.379805 ignition[958]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Feb 13 21:23:46.380760 ignition[958]: DEBUG : files: compiled without relabeling support, skipping
Feb 13 21:23:46.381395 ignition[958]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 13 21:23:46.381395 ignition[958]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 13 21:23:46.384705 ignition[958]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 13 21:23:46.386297 ignition[958]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 13 21:23:46.386297 ignition[958]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 13 21:23:46.385146 unknown[958]: wrote ssh authorized keys file for user: core
Feb 13 21:23:46.397227 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Feb 13 21:23:46.397227 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Feb 13 21:23:46.604261 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Feb 13 21:23:47.140860 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Feb 13 21:23:47.142231 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 13 21:23:47.142231 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Feb 13 21:23:47.803402 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Feb 13 21:23:48.108866 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 13 21:23:48.108866 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Feb 13 21:23:48.111706 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Feb 13 21:23:48.111706 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 21:23:48.111706 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 21:23:48.111706 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 21:23:48.111706 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 21:23:48.111706 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 21:23:48.111706 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 21:23:48.111706 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 21:23:48.111706 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 21:23:48.111706 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Feb 13 21:23:48.111706 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Feb 13 21:23:48.111706 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Feb 13 21:23:48.111706 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1
Feb 13 21:23:48.621025 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Feb 13 21:23:50.190996 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Feb 13 21:23:50.190996 ignition[958]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Feb 13 21:23:50.199283 ignition[958]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 21:23:50.199283 ignition[958]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 21:23:50.199283 ignition[958]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Feb 13 21:23:50.199283 ignition[958]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Feb 13 21:23:50.199283 ignition[958]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Feb 13 21:23:50.199283 ignition[958]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 21:23:50.199283 ignition[958]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 21:23:50.199283 ignition[958]: INFO : files: files passed
Feb 13 21:23:50.199283 ignition[958]: INFO : Ignition finished successfully
Feb 13 21:23:50.198036 systemd[1]: Finished ignition-files.service - Ignition (files).
Feb 13 21:23:50.210727 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Feb 13 21:23:50.212551 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Feb 13 21:23:50.214141 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 13 21:23:50.214246 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Feb 13 21:23:50.239649 initrd-setup-root-after-ignition[986]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 21:23:50.239649 initrd-setup-root-after-ignition[986]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 21:23:50.242454 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 21:23:50.243583 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 21:23:50.245402 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Feb 13 21:23:50.253775 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Feb 13 21:23:50.294237 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 13 21:23:50.294351 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Feb 13 21:23:50.295446 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Feb 13 21:23:50.298232 systemd[1]: Reached target initrd.target - Initrd Default Target.
Feb 13 21:23:50.300097 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Feb 13 21:23:50.306897 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Feb 13 21:23:50.335226 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 21:23:50.341789 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Feb 13 21:23:50.354199 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Feb 13 21:23:50.355422 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 21:23:50.356737 systemd[1]: Stopped target timers.target - Timer Units.
Feb 13 21:23:50.357776 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 13 21:23:50.357930 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 21:23:50.359808 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Feb 13 21:23:50.360293 systemd[1]: Stopped target basic.target - Basic System.
Feb 13 21:23:50.362113 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Feb 13 21:23:50.363966 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 21:23:50.365813 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Feb 13 21:23:50.367445 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Feb 13 21:23:50.368994 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 21:23:50.370476 systemd[1]: Stopped target sysinit.target - System Initialization.
Feb 13 21:23:50.371855 systemd[1]: Stopped target local-fs.target - Local File Systems.
Feb 13 21:23:50.373201 systemd[1]: Stopped target swap.target - Swaps.
Feb 13 21:23:50.374323 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 13 21:23:50.374619 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 21:23:50.376107 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Feb 13 21:23:50.377176 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 21:23:50.378133 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Feb 13 21:23:50.378254 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 21:23:50.379144 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 13 21:23:50.379265 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Feb 13 21:23:50.380296 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 13 21:23:50.380406 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 21:23:50.384214 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 13 21:23:50.384311 systemd[1]: Stopped ignition-files.service - Ignition (files).
Feb 13 21:23:50.389774 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Feb 13 21:23:50.390660 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 13 21:23:50.391180 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 21:23:50.393804 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Feb 13 21:23:50.394678 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 13 21:23:50.394805 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 21:23:50.396779 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 13 21:23:50.396892 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 21:23:50.407318 ignition[1010]: INFO : Ignition 2.20.0
Feb 13 21:23:50.407318 ignition[1010]: INFO : Stage: umount
Feb 13 21:23:50.407318 ignition[1010]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 21:23:50.407318 ignition[1010]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Feb 13 21:23:50.407318 ignition[1010]: INFO : umount: umount passed
Feb 13 21:23:50.407318 ignition[1010]: INFO : Ignition finished successfully
Feb 13 21:23:50.408344 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 13 21:23:50.408450 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Feb 13 21:23:50.411186 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 13 21:23:50.411270 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Feb 13 21:23:50.414196 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 13 21:23:50.414254 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Feb 13 21:23:50.414715 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 13 21:23:50.414761 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Feb 13 21:23:50.415145 systemd[1]: ignition-fetch.service: Deactivated successfully.
Feb 13 21:23:50.415178 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Feb 13 21:23:50.415596 systemd[1]: Stopped target network.target - Network.
Feb 13 21:23:50.415936 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 13 21:23:50.415975 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 21:23:50.416372 systemd[1]: Stopped target paths.target - Path Units.
Feb 13 21:23:50.416747 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 13 21:23:50.420656 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 21:23:50.421865 systemd[1]: Stopped target slices.target - Slice Units.
Feb 13 21:23:50.423169 systemd[1]: Stopped target sockets.target - Socket Units.
Feb 13 21:23:50.424028 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 13 21:23:50.424066 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 21:23:50.424708 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 13 21:23:50.424742 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 21:23:50.425465 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 13 21:23:50.425506 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Feb 13 21:23:50.426326 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Feb 13 21:23:50.426363 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Feb 13 21:23:50.427156 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Feb 13 21:23:50.428151 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Feb 13 21:23:50.429846 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 13 21:23:50.430429 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 13 21:23:50.430528 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Feb 13 21:23:50.431504 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 13 21:23:50.431631 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Feb 13 21:23:50.432727 systemd-networkd[774]: eth0: DHCPv6 lease lost
Feb 13 21:23:50.434845 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 13 21:23:50.434951 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Feb 13 21:23:50.435677 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 13 21:23:50.435711 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 21:23:50.442906 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Feb 13 21:23:50.443700 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 13 21:23:50.443758 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 21:23:50.445350 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 21:23:50.446057 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 13 21:23:50.446167 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Feb 13 21:23:50.449302 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 13 21:23:50.449410 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Feb 13 21:23:50.450203 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 13 21:23:50.450245 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Feb 13 21:23:50.451652 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Feb 13 21:23:50.451692 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 21:23:50.458661 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 13 21:23:50.459312 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 21:23:50.460016 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 13 21:23:50.460099 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Feb 13 21:23:50.462679 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 13 21:23:50.462828 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Feb 13 21:23:50.464750 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 13 21:23:50.464836 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 21:23:50.466033 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 13 21:23:50.466134 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 21:23:50.467826 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 13 21:23:50.467894 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Feb 13 21:23:50.469321 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 21:23:50.469390 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 21:23:50.483975 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Feb 13 21:23:50.485153 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 13 21:23:50.485281 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 21:23:50.489010 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 21:23:50.489102 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 21:23:50.491240 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 13 21:23:50.491453 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Feb 13 21:23:50.493837 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Feb 13 21:23:50.500776 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Feb 13 21:23:50.510781 systemd[1]: Switching root.
Feb 13 21:23:50.549332 systemd-journald[201]: Journal stopped
Feb 13 21:23:51.553373 systemd-journald[201]: Received SIGTERM from PID 1 (systemd).
Feb 13 21:23:51.553463 kernel: SELinux: policy capability network_peer_controls=1
Feb 13 21:23:51.553485 kernel: SELinux: policy capability open_perms=1
Feb 13 21:23:51.553510 kernel: SELinux: policy capability extended_socket_class=1
Feb 13 21:23:51.553524 kernel: SELinux: policy capability always_check_network=0
Feb 13 21:23:51.553536 kernel: SELinux: policy capability cgroup_seclabel=1
Feb 13 21:23:51.553553 kernel: SELinux: policy capability nnp_nosuid_transition=1
Feb 13 21:23:51.553566 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Feb 13 21:23:51.553579 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Feb 13 21:23:51.553593 systemd[1]: Successfully loaded SELinux policy in 38.071ms.
Feb 13 21:23:51.560374 kernel: audit: type=1403 audit(1739481830.656:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 13 21:23:51.560408 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.909ms.
Feb 13 21:23:51.560425 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 21:23:51.560440 systemd[1]: Detected virtualization kvm.
Feb 13 21:23:51.560459 systemd[1]: Detected architecture x86-64.
Feb 13 21:23:51.560473 systemd[1]: Detected first boot.
Feb 13 21:23:51.560492 systemd[1]: Hostname set to .
Feb 13 21:23:51.560517 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 21:23:51.560533 zram_generator::config[1054]: No configuration found.
Feb 13 21:23:51.560547 systemd[1]: Populated /etc with preset unit settings.
Feb 13 21:23:51.560562 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Feb 13 21:23:51.560575 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Feb 13 21:23:51.560593 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Feb 13 21:23:51.560658 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Feb 13 21:23:51.560674 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Feb 13 21:23:51.560689 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Feb 13 21:23:51.560703 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Feb 13 21:23:51.560717 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Feb 13 21:23:51.560736 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Feb 13 21:23:51.560751 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Feb 13 21:23:51.560768 systemd[1]: Created slice user.slice - User and Session Slice.
Feb 13 21:23:51.560782 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 21:23:51.560796 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 21:23:51.560810 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Feb 13 21:23:51.560824 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Feb 13 21:23:51.560844 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Feb 13 21:23:51.560871 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 21:23:51.560884 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Feb 13 21:23:51.560898 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 21:23:51.560913 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Feb 13 21:23:51.560927 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Feb 13 21:23:51.560941 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Feb 13 21:23:51.560958 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Feb 13 21:23:51.560977 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 21:23:51.560992 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 21:23:51.561005 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 21:23:51.561019 systemd[1]: Reached target swap.target - Swaps.
Feb 13 21:23:51.561033 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Feb 13 21:23:51.561047 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Feb 13 21:23:51.561063 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 21:23:51.561077 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 21:23:51.561091 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 21:23:51.561110 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Feb 13 21:23:51.561124 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Feb 13 21:23:51.561138 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Feb 13 21:23:51.561152 systemd[1]: Mounting media.mount - External Media Directory...
Feb 13 21:23:51.561166 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 21:23:51.561184 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Feb 13 21:23:51.561208 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Feb 13 21:23:51.561220 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Feb 13 21:23:51.561242 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 13 21:23:51.561255 systemd[1]: Reached target machines.target - Containers.
Feb 13 21:23:51.561268 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Feb 13 21:23:51.561281 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 21:23:51.561293 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 21:23:51.561306 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Feb 13 21:23:51.561319 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 21:23:51.561332 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 21:23:51.561345 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 21:23:51.561361 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Feb 13 21:23:51.561374 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 21:23:51.561388 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb 13 21:23:51.561401 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Feb 13 21:23:51.561414 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Feb 13 21:23:51.561427 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Feb 13 21:23:51.561440 systemd[1]: Stopped systemd-fsck-usr.service.
Feb 13 21:23:51.561452 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 21:23:51.561468 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 21:23:51.561481 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Feb 13 21:23:51.561495 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Feb 13 21:23:51.561514 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 21:23:51.561528 systemd[1]: verity-setup.service: Deactivated successfully.
Feb 13 21:23:51.561540 systemd[1]: Stopped verity-setup.service.
Feb 13 21:23:51.561553 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 21:23:51.561567 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Feb 13 21:23:51.561579 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Feb 13 21:23:51.561598 systemd[1]: Mounted media.mount - External Media Directory.
Feb 13 21:23:51.565848 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Feb 13 21:23:51.565871 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Feb 13 21:23:51.565893 kernel: ACPI: bus type drm_connector registered
Feb 13 21:23:51.565912 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Feb 13 21:23:51.565926 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 21:23:51.565941 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 13 21:23:51.565955 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Feb 13 21:23:51.565969 kernel: loop: module loaded
Feb 13 21:23:51.565983 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 21:23:51.565997 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 21:23:51.566012 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 21:23:51.566027 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 21:23:51.566043 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 21:23:51.566059 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 21:23:51.566081 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 21:23:51.566095 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 21:23:51.566110 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 21:23:51.566131 kernel: fuse: init (API version 7.39)
Feb 13 21:23:51.566145 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Feb 13 21:23:51.566188 systemd-journald[1140]: Collecting audit messages is disabled.
Feb 13 21:23:51.566217 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 21:23:51.566236 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 21:23:51.566251 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 13 21:23:51.566268 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Feb 13 21:23:51.566283 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Feb 13 21:23:51.566305 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Feb 13 21:23:51.566320 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Feb 13 21:23:51.566335 systemd-journald[1140]: Journal started
Feb 13 21:23:51.566363 systemd-journald[1140]: Runtime Journal (/run/log/journal/a598b8e6b8e546f09678fe7f3022c50e) is 4.7M, max 37.9M, 33.2M free.
Feb 13 21:23:51.233149 systemd[1]: Queued start job for default target multi-user.target.
Feb 13 21:23:51.571748 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 21:23:51.262224 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Feb 13 21:23:51.263227 systemd[1]: systemd-journald.service: Deactivated successfully.
Feb 13 21:23:51.570242 systemd[1]: Reached target network-pre.target - Preparation for Network.
Feb 13 21:23:51.574742 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Feb 13 21:23:51.576091 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb 13 21:23:51.576119 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 21:23:51.578640 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Feb 13 21:23:51.581760 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Feb 13 21:23:51.584787 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Feb 13 21:23:51.585789 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 21:23:51.591150 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Feb 13 21:23:51.595724 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Feb 13 21:23:51.596182 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 21:23:51.604763 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Feb 13 21:23:51.612152 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Feb 13 21:23:51.617178 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Feb 13 21:23:51.617791 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Feb 13 21:23:51.647088 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Feb 13 21:23:51.648711 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Feb 13 21:23:51.660732 kernel: loop0: detected capacity change from 0 to 141000
Feb 13 21:23:51.663960 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Feb 13 21:23:51.675814 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Feb 13 21:23:51.686841 systemd-journald[1140]: Time spent on flushing to /var/log/journal/a598b8e6b8e546f09678fe7f3022c50e is 56.549ms for 1159 entries.
Feb 13 21:23:51.686841 systemd-journald[1140]: System Journal (/var/log/journal/a598b8e6b8e546f09678fe7f3022c50e) is 8.0M, max 584.8M, 576.8M free.
Feb 13 21:23:51.762713 systemd-journald[1140]: Received client request to flush runtime journal.
Feb 13 21:23:51.762756 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Feb 13 21:23:51.762832 kernel: loop1: detected capacity change from 0 to 138184
Feb 13 21:23:51.682790 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Feb 13 21:23:51.683557 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 21:23:51.739091 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Feb 13 21:23:51.741641 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Feb 13 21:23:51.750519 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 21:23:51.765543 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Feb 13 21:23:51.767102 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Feb 13 21:23:51.787646 kernel: loop2: detected capacity change from 0 to 205544
Feb 13 21:23:51.814956 udevadm[1202]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Feb 13 21:23:51.835016 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Feb 13 21:23:51.845358 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 21:23:51.864632 kernel: loop3: detected capacity change from 0 to 8
Feb 13 21:23:51.895696 kernel: loop4: detected capacity change from 0 to 141000
Feb 13 21:23:51.919367 kernel: loop5: detected capacity change from 0 to 138184
Feb 13 21:23:51.920344 systemd-tmpfiles[1208]: ACLs are not supported, ignoring.
Feb 13 21:23:51.920364 systemd-tmpfiles[1208]: ACLs are not supported, ignoring.
Feb 13 21:23:51.934316 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 21:23:51.944685 kernel: loop6: detected capacity change from 0 to 205544
Feb 13 21:23:51.964657 kernel: loop7: detected capacity change from 0 to 8
Feb 13 21:23:51.966882 (sd-merge)[1211]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'.
Feb 13 21:23:51.970128 (sd-merge)[1211]: Merged extensions into '/usr'.
Feb 13 21:23:51.977418 systemd[1]: Reloading requested from client PID 1171 ('systemd-sysext') (unit systemd-sysext.service)...
Feb 13 21:23:51.977434 systemd[1]: Reloading...
Feb 13 21:23:52.060628 zram_generator::config[1236]: No configuration found.
Feb 13 21:23:52.186585 ldconfig[1166]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Feb 13 21:23:52.283828 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 21:23:52.336265 systemd[1]: Reloading finished in 358 ms.
Feb 13 21:23:52.365324 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Feb 13 21:23:52.368708 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Feb 13 21:23:52.379881 systemd[1]: Starting ensure-sysext.service...
Feb 13 21:23:52.382059 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 21:23:52.397650 systemd[1]: Reloading requested from client PID 1294 ('systemctl') (unit ensure-sysext.service)...
Feb 13 21:23:52.397670 systemd[1]: Reloading...
Feb 13 21:23:52.423798 systemd-tmpfiles[1295]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Feb 13 21:23:52.424088 systemd-tmpfiles[1295]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Feb 13 21:23:52.425378 systemd-tmpfiles[1295]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Feb 13 21:23:52.426963 systemd-tmpfiles[1295]: ACLs are not supported, ignoring.
Feb 13 21:23:52.427106 systemd-tmpfiles[1295]: ACLs are not supported, ignoring.
Feb 13 21:23:52.436295 systemd-tmpfiles[1295]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 21:23:52.437715 systemd-tmpfiles[1295]: Skipping /boot
Feb 13 21:23:52.462797 systemd-tmpfiles[1295]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 21:23:52.462812 systemd-tmpfiles[1295]: Skipping /boot
Feb 13 21:23:52.493633 zram_generator::config[1323]: No configuration found.
Feb 13 21:23:52.644279 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 21:23:52.696353 systemd[1]: Reloading finished in 298 ms.
Feb 13 21:23:52.710033 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Feb 13 21:23:52.721960 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 21:23:52.739877 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Feb 13 21:23:52.744861 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Feb 13 21:23:52.747863 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Feb 13 21:23:52.750876 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 21:23:52.754141 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 21:23:52.755837 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Feb 13 21:23:52.759332 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 21:23:52.759540 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 21:23:52.767885 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 21:23:52.772897 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 21:23:52.776911 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 21:23:52.777770 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 21:23:52.777904 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 21:23:52.781761 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 21:23:52.781963 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 21:23:52.782121 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 21:23:52.782214 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 21:23:52.786691 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 21:23:52.786956 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 21:23:52.800329 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 21:23:52.801871 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 21:23:52.802059 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 21:23:52.804664 systemd[1]: Finished ensure-sysext.service.
Feb 13 21:23:52.810880 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Feb 13 21:23:52.826815 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Feb 13 21:23:52.827922 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 21:23:52.828700 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 21:23:52.829647 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Feb 13 21:23:52.834652 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Feb 13 21:23:52.835096 systemd-udevd[1388]: Using default interface naming scheme 'v255'.
Feb 13 21:23:52.837579 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 21:23:52.838894 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 21:23:52.851488 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Feb 13 21:23:52.854508 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 21:23:52.855684 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 21:23:52.856898 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 21:23:52.862555 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 21:23:52.862738 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 21:23:52.863750 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 21:23:52.866368 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 21:23:52.876802 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 21:23:52.886078 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Feb 13 21:23:52.893809 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Feb 13 21:23:52.902311 augenrules[1432]: No rules
Feb 13 21:23:52.904540 systemd[1]: audit-rules.service: Deactivated successfully.
Feb 13 21:23:52.905683 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Feb 13 21:23:52.940657 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Feb 13 21:23:52.954707 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Feb 13 21:23:53.002380 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Feb 13 21:23:53.049635 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1416)
Feb 13 21:23:53.088344 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Feb 13 21:23:53.090213 systemd-networkd[1419]: lo: Link UP
Feb 13 21:23:53.097844 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Feb 13 21:23:53.099202 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Feb 13 21:23:53.099924 systemd[1]: Reached target time-set.target - System Time Set.
Feb 13 21:23:53.103493 systemd-networkd[1419]: lo: Gained carrier
Feb 13 21:23:53.105019 systemd-timesyncd[1398]: No network connectivity, watching for changes.
Feb 13 21:23:53.105156 systemd-networkd[1419]: Enumeration completed
Feb 13 21:23:53.105220 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 21:23:53.105678 systemd-networkd[1419]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 21:23:53.105683 systemd-networkd[1419]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 21:23:53.106481 systemd-networkd[1419]: eth0: Link UP
Feb 13 21:23:53.106485 systemd-networkd[1419]: eth0: Gained carrier
Feb 13 21:23:53.106502 systemd-networkd[1419]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 21:23:53.114429 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Feb 13 21:23:53.133024 systemd-resolved[1384]: Positive Trust Anchors:
Feb 13 21:23:53.134786 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Feb 13 21:23:53.135257 systemd-resolved[1384]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 21:23:53.135301 systemd-resolved[1384]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 21:23:53.144692 systemd-resolved[1384]: Using system hostname 'srv-9zhep.gb1.brightbox.com'.
Feb 13 21:23:53.145690 systemd-networkd[1419]: eth0: DHCPv4 address 10.244.102.222/30, gateway 10.244.102.221 acquired from 10.244.102.221
Feb 13 21:23:53.147476 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 21:23:53.148015 systemd[1]: Reached target network.target - Network.
Feb 13 21:23:53.148307 systemd-timesyncd[1398]: Network configuration changed, trying to establish connection.
Feb 13 21:23:53.148365 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 21:23:53.164643 kernel: mousedev: PS/2 mouse device common for all mice
Feb 13 21:23:53.181808 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Feb 13 21:23:53.197654 kernel: ACPI: button: Power Button [PWRF]
Feb 13 21:23:53.224635 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Feb 13 21:23:53.230990 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Feb 13 21:23:53.236096 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Feb 13 21:23:53.236249 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Feb 13 21:23:53.268872 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 21:23:53.394296 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 21:23:53.415833 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Feb 13 21:23:53.422757 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Feb 13 21:23:53.444686 lvm[1472]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 21:23:53.474031 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Feb 13 21:23:53.475020 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 21:23:53.475595 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 21:23:53.476296 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Feb 13 21:23:53.476929 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Feb 13 21:23:53.477934 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Feb 13 21:23:53.478593 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Feb 13 21:23:53.479162 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Feb 13 21:23:53.479729 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Feb 13 21:23:53.479775 systemd[1]: Reached target paths.target - Path Units.
Feb 13 21:23:53.480222 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 21:23:53.482030 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Feb 13 21:23:53.483926 systemd[1]: Starting docker.socket - Docker Socket for the API...
Feb 13 21:23:53.489000 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Feb 13 21:23:53.493197 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Feb 13 21:23:53.495275 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Feb 13 21:23:53.496393 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 21:23:53.497298 systemd[1]: Reached target basic.target - Basic System.
Feb 13 21:23:53.498250 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Feb 13 21:23:53.498480 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Feb 13 21:23:53.517743 systemd[1]: Starting containerd.service - containerd container runtime...
Feb 13 21:23:53.522425 lvm[1476]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 21:23:53.525807 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Feb 13 21:23:53.529993 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Feb 13 21:23:53.531748 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Feb 13 21:23:53.534778 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Feb 13 21:23:53.535198 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Feb 13 21:23:53.538884 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Feb 13 21:23:53.547735 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Feb 13 21:23:53.550841 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Feb 13 21:23:53.553774 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Feb 13 21:23:53.559312 systemd[1]: Starting systemd-logind.service - User Login Management...
Feb 13 21:23:53.560308 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Feb 13 21:23:53.561936 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Feb 13 21:23:53.566803 systemd[1]: Starting update-engine.service - Update Engine...
Feb 13 21:23:53.573776 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Feb 13 21:23:53.576011 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Feb 13 21:23:53.594101 jq[1480]: false
Feb 13 21:23:53.604093 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Feb 13 21:23:53.604283 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Feb 13 21:23:53.611142 dbus-daemon[1479]: [system] SELinux support is enabled
Feb 13 21:23:53.618875 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Feb 13 21:23:53.621984 dbus-daemon[1479]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1419 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Feb 13 21:23:53.626796 (ntainerd)[1497]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Feb 13 21:23:53.639552 jq[1491]: true
Feb 13 21:23:53.655759 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Feb 13 21:23:53.657477 dbus-daemon[1479]: [system] Successfully activated service 'org.freedesktop.systemd1'
Feb 13 21:23:53.655804 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Feb 13 21:23:53.656321 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Feb 13 21:23:53.656338 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Feb 13 21:23:53.661497 update_engine[1488]: I20250213 21:23:53.661389 1488 main.cc:92] Flatcar Update Engine starting
Feb 13 21:23:53.666777 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Feb 13 21:23:53.668952 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Feb 13 21:23:53.669129 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Feb 13 21:23:53.676287 update_engine[1488]: I20250213 21:23:53.676111 1488 update_check_scheduler.cc:74] Next update check in 7m46s
Feb 13 21:23:53.682002 systemd[1]: motdgen.service: Deactivated successfully.
Feb 13 21:23:53.682192 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 13 21:23:53.683477 systemd[1]: Started update-engine.service - Update Engine. Feb 13 21:23:53.685786 systemd-logind[1487]: Watching system buttons on /dev/input/event2 (Power Button) Feb 13 21:23:53.685809 systemd-logind[1487]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 13 21:23:53.687476 extend-filesystems[1481]: Found loop4 Feb 13 21:23:53.687476 extend-filesystems[1481]: Found loop5 Feb 13 21:23:53.687476 extend-filesystems[1481]: Found loop6 Feb 13 21:23:53.687476 extend-filesystems[1481]: Found loop7 Feb 13 21:23:53.687476 extend-filesystems[1481]: Found vda Feb 13 21:23:53.687476 extend-filesystems[1481]: Found vda1 Feb 13 21:23:53.687476 extend-filesystems[1481]: Found vda2 Feb 13 21:23:53.687476 extend-filesystems[1481]: Found vda3 Feb 13 21:23:53.687476 extend-filesystems[1481]: Found usr Feb 13 21:23:53.687476 extend-filesystems[1481]: Found vda4 Feb 13 21:23:53.687476 extend-filesystems[1481]: Found vda6 Feb 13 21:23:53.687476 extend-filesystems[1481]: Found vda7 Feb 13 21:23:53.687476 extend-filesystems[1481]: Found vda9 Feb 13 21:23:53.687476 extend-filesystems[1481]: Checking size of /dev/vda9 Feb 13 21:23:53.689956 systemd-logind[1487]: New seat seat0. Feb 13 21:23:53.697213 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 21:23:53.715989 systemd[1]: Started systemd-logind.service - User Login Management. 
Feb 13 21:23:53.726775 tar[1505]: linux-amd64/helm Feb 13 21:23:53.732745 extend-filesystems[1481]: Resized partition /dev/vda9 Feb 13 21:23:53.740226 jq[1506]: true Feb 13 21:23:53.740451 extend-filesystems[1521]: resize2fs 1.47.1 (20-May-2024) Feb 13 21:23:53.753642 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks Feb 13 21:23:53.801679 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1440) Feb 13 21:23:53.807432 dbus-daemon[1479]: [system] Successfully activated service 'org.freedesktop.hostname1' Feb 13 21:23:53.808638 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Feb 13 21:23:53.815965 dbus-daemon[1479]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1509 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Feb 13 21:23:53.842942 systemd[1]: Starting polkit.service - Authorization Manager... Feb 13 21:23:53.862967 bash[1537]: Updated "/home/core/.ssh/authorized_keys" Feb 13 21:23:53.864641 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 21:23:53.876377 systemd[1]: Starting sshkeys.service... Feb 13 21:23:53.915133 polkitd[1539]: Started polkitd version 121 Feb 13 21:23:53.920741 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Feb 13 21:23:53.934022 extend-filesystems[1521]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 13 21:23:53.934022 extend-filesystems[1521]: old_desc_blocks = 1, new_desc_blocks = 8 Feb 13 21:23:53.934022 extend-filesystems[1521]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Feb 13 21:23:53.952115 extend-filesystems[1481]: Resized filesystem in /dev/vda9 Feb 13 21:23:53.936046 systemd[1]: extend-filesystems.service: Deactivated successfully. 
Feb 13 21:23:53.937630 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 21:23:53.965887 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Feb 13 21:23:53.976307 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Feb 13 21:23:53.980274 polkitd[1539]: Loading rules from directory /etc/polkit-1/rules.d Feb 13 21:23:53.980366 polkitd[1539]: Loading rules from directory /usr/share/polkit-1/rules.d Feb 13 21:23:53.984650 polkitd[1539]: Finished loading, compiling and executing 2 rules Feb 13 21:23:53.992524 dbus-daemon[1479]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Feb 13 21:23:53.992783 systemd[1]: Started polkit.service - Authorization Manager. Feb 13 21:23:53.996725 polkitd[1539]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Feb 13 21:23:54.047047 systemd-hostnamed[1509]: Hostname set to (static) Feb 13 21:23:54.104780 containerd[1497]: time="2025-02-13T21:23:54.104588854Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Feb 13 21:23:54.146437 locksmithd[1515]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 21:23:54.178468 sshd_keygen[1517]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 21:23:54.179162 containerd[1497]: time="2025-02-13T21:23:54.179122788Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 21:23:54.181870 containerd[1497]: time="2025-02-13T21:23:54.181807405Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 21:23:54.181870 containerd[1497]: time="2025-02-13T21:23:54.181867360Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 21:23:54.181981 containerd[1497]: time="2025-02-13T21:23:54.181889522Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 21:23:54.182051 containerd[1497]: time="2025-02-13T21:23:54.182037082Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 21:23:54.182079 containerd[1497]: time="2025-02-13T21:23:54.182056210Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 21:23:54.182128 containerd[1497]: time="2025-02-13T21:23:54.182113251Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 21:23:54.182155 containerd[1497]: time="2025-02-13T21:23:54.182128252Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 21:23:54.182312 containerd[1497]: time="2025-02-13T21:23:54.182292758Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 21:23:54.182360 containerd[1497]: time="2025-02-13T21:23:54.182312185Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Feb 13 21:23:54.182360 containerd[1497]: time="2025-02-13T21:23:54.182334954Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 21:23:54.182360 containerd[1497]: time="2025-02-13T21:23:54.182345092Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 21:23:54.182433 containerd[1497]: time="2025-02-13T21:23:54.182412104Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 21:23:54.182791 containerd[1497]: time="2025-02-13T21:23:54.182594882Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 21:23:54.182791 containerd[1497]: time="2025-02-13T21:23:54.182723972Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 21:23:54.182791 containerd[1497]: time="2025-02-13T21:23:54.182738325Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 21:23:54.182887 containerd[1497]: time="2025-02-13T21:23:54.182815589Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 21:23:54.182887 containerd[1497]: time="2025-02-13T21:23:54.182855827Z" level=info msg="metadata content store policy set" policy=shared Feb 13 21:23:54.187766 containerd[1497]: time="2025-02-13T21:23:54.187726883Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 21:23:54.187856 containerd[1497]: time="2025-02-13T21:23:54.187804694Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." 
type=io.containerd.differ.v1 Feb 13 21:23:54.187856 containerd[1497]: time="2025-02-13T21:23:54.187822486Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 21:23:54.187856 containerd[1497]: time="2025-02-13T21:23:54.187839654Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 21:23:54.187856 containerd[1497]: time="2025-02-13T21:23:54.187854667Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 21:23:54.188020 containerd[1497]: time="2025-02-13T21:23:54.188005277Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 21:23:54.192768 containerd[1497]: time="2025-02-13T21:23:54.192540714Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 21:23:54.193141 containerd[1497]: time="2025-02-13T21:23:54.193109567Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 21:23:54.193179 containerd[1497]: time="2025-02-13T21:23:54.193147484Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 21:23:54.193179 containerd[1497]: time="2025-02-13T21:23:54.193163820Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 21:23:54.193244 containerd[1497]: time="2025-02-13T21:23:54.193181884Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 21:23:54.193244 containerd[1497]: time="2025-02-13T21:23:54.193205850Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." 
type=io.containerd.service.v1 Feb 13 21:23:54.193244 containerd[1497]: time="2025-02-13T21:23:54.193220285Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 21:23:54.193244 containerd[1497]: time="2025-02-13T21:23:54.193234585Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 21:23:54.193377 containerd[1497]: time="2025-02-13T21:23:54.193250130Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 21:23:54.194628 containerd[1497]: time="2025-02-13T21:23:54.193636563Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 21:23:54.194628 containerd[1497]: time="2025-02-13T21:23:54.193657377Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 21:23:54.194628 containerd[1497]: time="2025-02-13T21:23:54.193671474Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 21:23:54.194628 containerd[1497]: time="2025-02-13T21:23:54.193693849Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 21:23:54.194628 containerd[1497]: time="2025-02-13T21:23:54.193719376Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 21:23:54.194628 containerd[1497]: time="2025-02-13T21:23:54.193732723Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 21:23:54.194628 containerd[1497]: time="2025-02-13T21:23:54.193746655Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1 Feb 13 21:23:54.194628 containerd[1497]: time="2025-02-13T21:23:54.193758897Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 21:23:54.194628 containerd[1497]: time="2025-02-13T21:23:54.193782314Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 21:23:54.194628 containerd[1497]: time="2025-02-13T21:23:54.193794592Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 21:23:54.194628 containerd[1497]: time="2025-02-13T21:23:54.193808550Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 21:23:54.194628 containerd[1497]: time="2025-02-13T21:23:54.193827414Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 21:23:54.194628 containerd[1497]: time="2025-02-13T21:23:54.193842648Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 21:23:54.194628 containerd[1497]: time="2025-02-13T21:23:54.193924854Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 21:23:54.194988 containerd[1497]: time="2025-02-13T21:23:54.193939558Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 21:23:54.194988 containerd[1497]: time="2025-02-13T21:23:54.193964443Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 21:23:54.194988 containerd[1497]: time="2025-02-13T21:23:54.193979106Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 21:23:54.194988 containerd[1497]: time="2025-02-13T21:23:54.194315081Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." 
type=io.containerd.grpc.v1 Feb 13 21:23:54.194988 containerd[1497]: time="2025-02-13T21:23:54.194341695Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 21:23:54.194988 containerd[1497]: time="2025-02-13T21:23:54.194366802Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 21:23:54.194988 containerd[1497]: time="2025-02-13T21:23:54.194457061Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 21:23:54.194988 containerd[1497]: time="2025-02-13T21:23:54.194477155Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 21:23:54.194988 containerd[1497]: time="2025-02-13T21:23:54.194489901Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 21:23:54.194988 containerd[1497]: time="2025-02-13T21:23:54.194785165Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 21:23:54.194988 containerd[1497]: time="2025-02-13T21:23:54.194798086Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 21:23:54.194988 containerd[1497]: time="2025-02-13T21:23:54.194811720Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 21:23:54.194988 containerd[1497]: time="2025-02-13T21:23:54.194826846Z" level=info msg="NRI interface is disabled by configuration." Feb 13 21:23:54.194988 containerd[1497]: time="2025-02-13T21:23:54.194837940Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 13 21:23:54.196633 containerd[1497]: time="2025-02-13T21:23:54.195706794Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 21:23:54.196633 containerd[1497]: time="2025-02-13T21:23:54.195764513Z" level=info msg="Connect containerd service" Feb 13 21:23:54.196633 containerd[1497]: time="2025-02-13T21:23:54.195990411Z" level=info msg="using legacy CRI server" Feb 13 21:23:54.196633 containerd[1497]: time="2025-02-13T21:23:54.196000849Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 21:23:54.196633 containerd[1497]: time="2025-02-13T21:23:54.196449859Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 21:23:54.198449 containerd[1497]: time="2025-02-13T21:23:54.198419268Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 21:23:54.199204 containerd[1497]: time="2025-02-13T21:23:54.198560915Z" level=info msg="Start subscribing containerd event" Feb 13 21:23:54.199204 containerd[1497]: time="2025-02-13T21:23:54.198651795Z" level=info msg="Start recovering state" Feb 13 21:23:54.199204 containerd[1497]: time="2025-02-13T21:23:54.198742586Z" level=info msg="Start event monitor" Feb 13 21:23:54.199204 containerd[1497]: time="2025-02-13T21:23:54.198768278Z" level=info msg="Start 
snapshots syncer" Feb 13 21:23:54.199204 containerd[1497]: time="2025-02-13T21:23:54.198779148Z" level=info msg="Start cni network conf syncer for default" Feb 13 21:23:54.199204 containerd[1497]: time="2025-02-13T21:23:54.198788087Z" level=info msg="Start streaming server" Feb 13 21:23:54.199204 containerd[1497]: time="2025-02-13T21:23:54.199148116Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 21:23:54.199204 containerd[1497]: time="2025-02-13T21:23:54.199204431Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 21:23:54.199362 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 21:23:54.202184 containerd[1497]: time="2025-02-13T21:23:54.202156651Z" level=info msg="containerd successfully booted in 0.100213s" Feb 13 21:23:54.211590 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 21:23:54.220043 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 21:23:54.229824 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 21:23:54.230215 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 21:23:54.238925 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 21:23:54.249751 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 21:23:54.259283 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 21:23:54.261711 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Feb 13 21:23:54.262754 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 21:23:54.284826 systemd-networkd[1419]: eth0: Gained IPv6LL Feb 13 21:23:54.288709 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 21:23:54.290541 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 21:23:54.300764 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Feb 13 21:23:54.305394 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 21:23:54.344655 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 21:23:54.579483 tar[1505]: linux-amd64/LICENSE Feb 13 21:23:54.579483 tar[1505]: linux-amd64/README.md Feb 13 21:23:54.592523 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Feb 13 21:23:55.759751 systemd-timesyncd[1398]: Contacted time server 217.144.93.217:123 (2.flatcar.pool.ntp.org). Feb 13 21:23:55.759841 systemd-timesyncd[1398]: Initial clock synchronization to Thu 2025-02-13 21:23:55.759535 UTC. Feb 13 21:23:55.760032 systemd-resolved[1384]: Clock change detected. Flushing caches. Feb 13 21:23:55.951323 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 21:23:55.954473 (kubelet)[1604]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 21:23:56.543798 kubelet[1604]: E0213 21:23:56.543260 1604 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 21:23:56.550359 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 21:23:56.550690 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 21:23:56.551804 systemd[1]: kubelet.service: Consumed 1.099s CPU time. Feb 13 21:23:56.664251 systemd-networkd[1419]: eth0: Ignoring DHCPv6 address 2a02:1348:17d:19b7:24:19ff:fef4:66de/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:17d:19b7:24:19ff:fef4:66de/64 assigned by NDisc. 
Feb 13 21:23:56.664260 systemd-networkd[1419]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Feb 13 21:24:00.173159 agetty[1582]: failed to open credentials directory Feb 13 21:24:00.178898 agetty[1583]: failed to open credentials directory Feb 13 21:24:00.194572 login[1583]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying Feb 13 21:24:00.195062 login[1582]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 13 21:24:00.206377 systemd-logind[1487]: New session 1 of user core. Feb 13 21:24:00.208027 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 21:24:00.222655 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 21:24:00.242504 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 21:24:00.249721 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 21:24:00.255786 (systemd)[1621]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 21:24:00.372704 systemd[1621]: Queued start job for default target default.target. Feb 13 21:24:00.382925 systemd[1621]: Created slice app.slice - User Application Slice. Feb 13 21:24:00.382968 systemd[1621]: Reached target paths.target - Paths. Feb 13 21:24:00.382988 systemd[1621]: Reached target timers.target - Timers. Feb 13 21:24:00.384898 systemd[1621]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 21:24:00.411194 systemd[1621]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 21:24:00.411341 systemd[1621]: Reached target sockets.target - Sockets. Feb 13 21:24:00.411363 systemd[1621]: Reached target basic.target - Basic System. Feb 13 21:24:00.411420 systemd[1621]: Reached target default.target - Main User Target. Feb 13 21:24:00.411474 systemd[1621]: Startup finished in 148ms. 
Feb 13 21:24:00.411611 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 21:24:00.425457 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 21:24:01.200006 login[1583]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 13 21:24:01.210007 systemd-logind[1487]: New session 2 of user core. Feb 13 21:24:01.219292 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 21:24:01.611177 coreos-metadata[1478]: Feb 13 21:24:01.610 WARN failed to locate config-drive, using the metadata service API instead Feb 13 21:24:01.610946 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 21:24:01.617455 systemd[1]: Started sshd@0-10.244.102.222:22-147.75.109.163:38844.service - OpenSSH per-connection server daemon (147.75.109.163:38844). Feb 13 21:24:01.630162 coreos-metadata[1478]: Feb 13 21:24:01.630 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Feb 13 21:24:01.638887 coreos-metadata[1478]: Feb 13 21:24:01.638 INFO Fetch failed with 404: resource not found Feb 13 21:24:01.639227 coreos-metadata[1478]: Feb 13 21:24:01.639 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Feb 13 21:24:01.640058 coreos-metadata[1478]: Feb 13 21:24:01.640 INFO Fetch successful Feb 13 21:24:01.640523 coreos-metadata[1478]: Feb 13 21:24:01.640 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Feb 13 21:24:01.654388 coreos-metadata[1478]: Feb 13 21:24:01.654 INFO Fetch successful Feb 13 21:24:01.654538 coreos-metadata[1478]: Feb 13 21:24:01.654 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Feb 13 21:24:01.668228 coreos-metadata[1478]: Feb 13 21:24:01.668 INFO Fetch successful Feb 13 21:24:01.668228 coreos-metadata[1478]: Feb 13 21:24:01.668 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Feb 13 21:24:01.683036 coreos-metadata[1478]: Feb 13 
21:24:01.682 INFO Fetch successful Feb 13 21:24:01.683036 coreos-metadata[1478]: Feb 13 21:24:01.682 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Feb 13 21:24:01.700230 coreos-metadata[1478]: Feb 13 21:24:01.699 INFO Fetch successful Feb 13 21:24:01.733168 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Feb 13 21:24:01.735056 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 21:24:01.996721 coreos-metadata[1552]: Feb 13 21:24:01.996 WARN failed to locate config-drive, using the metadata service API instead Feb 13 21:24:02.012080 coreos-metadata[1552]: Feb 13 21:24:02.011 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Feb 13 21:24:02.034308 coreos-metadata[1552]: Feb 13 21:24:02.034 INFO Fetch successful Feb 13 21:24:02.034308 coreos-metadata[1552]: Feb 13 21:24:02.034 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Feb 13 21:24:02.064806 coreos-metadata[1552]: Feb 13 21:24:02.064 INFO Fetch successful Feb 13 21:24:02.066714 unknown[1552]: wrote ssh authorized keys file for user: core Feb 13 21:24:02.089496 update-ssh-keys[1662]: Updated "/home/core/.ssh/authorized_keys" Feb 13 21:24:02.090646 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Feb 13 21:24:02.094405 systemd[1]: Finished sshkeys.service. Feb 13 21:24:02.099512 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 21:24:02.100046 systemd[1]: Startup finished in 1.135s (kernel) + 14.931s (initrd) + 10.609s (userspace) = 26.676s. 
Feb 13 21:24:02.526348 sshd[1652]: Accepted publickey for core from 147.75.109.163 port 38844 ssh2: RSA SHA256:ulgBgUPlADOweaxhAmkTx/EhcRWsA2XzxJSff9bgRRQ Feb 13 21:24:02.529048 sshd-session[1652]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 21:24:02.540361 systemd-logind[1487]: New session 3 of user core. Feb 13 21:24:02.547464 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 21:24:03.294496 systemd[1]: Started sshd@1-10.244.102.222:22-147.75.109.163:38846.service - OpenSSH per-connection server daemon (147.75.109.163:38846). Feb 13 21:24:04.200011 sshd[1668]: Accepted publickey for core from 147.75.109.163 port 38846 ssh2: RSA SHA256:ulgBgUPlADOweaxhAmkTx/EhcRWsA2XzxJSff9bgRRQ Feb 13 21:24:04.202954 sshd-session[1668]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 21:24:04.212198 systemd-logind[1487]: New session 4 of user core. Feb 13 21:24:04.223375 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 21:24:04.822971 sshd[1670]: Connection closed by 147.75.109.163 port 38846 Feb 13 21:24:04.824433 sshd-session[1668]: pam_unix(sshd:session): session closed for user core Feb 13 21:24:04.832547 systemd[1]: sshd@1-10.244.102.222:22-147.75.109.163:38846.service: Deactivated successfully. Feb 13 21:24:04.835472 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 21:24:04.836758 systemd-logind[1487]: Session 4 logged out. Waiting for processes to exit. Feb 13 21:24:04.838419 systemd-logind[1487]: Removed session 4. Feb 13 21:24:04.987524 systemd[1]: Started sshd@2-10.244.102.222:22-147.75.109.163:38852.service - OpenSSH per-connection server daemon (147.75.109.163:38852). 
Feb 13 21:24:05.915597 sshd[1675]: Accepted publickey for core from 147.75.109.163 port 38852 ssh2: RSA SHA256:ulgBgUPlADOweaxhAmkTx/EhcRWsA2XzxJSff9bgRRQ Feb 13 21:24:05.917133 sshd-session[1675]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 21:24:05.923168 systemd-logind[1487]: New session 5 of user core. Feb 13 21:24:05.928378 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 13 21:24:06.537209 sshd[1677]: Connection closed by 147.75.109.163 port 38852 Feb 13 21:24:06.538357 sshd-session[1675]: pam_unix(sshd:session): session closed for user core Feb 13 21:24:06.544299 systemd-logind[1487]: Session 5 logged out. Waiting for processes to exit. Feb 13 21:24:06.545957 systemd[1]: sshd@2-10.244.102.222:22-147.75.109.163:38852.service: Deactivated successfully. Feb 13 21:24:06.548610 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 21:24:06.550085 systemd-logind[1487]: Removed session 5. Feb 13 21:24:06.665627 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 21:24:06.672624 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 21:24:06.698397 systemd[1]: Started sshd@3-10.244.102.222:22-147.75.109.163:38862.service - OpenSSH per-connection server daemon (147.75.109.163:38862). Feb 13 21:24:06.817588 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Feb 13 21:24:06.822300 (kubelet)[1692]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 21:24:06.871126 kubelet[1692]: E0213 21:24:06.871024 1692 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 21:24:06.876782 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 21:24:06.876960 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 21:24:07.594829 sshd[1685]: Accepted publickey for core from 147.75.109.163 port 38862 ssh2: RSA SHA256:ulgBgUPlADOweaxhAmkTx/EhcRWsA2XzxJSff9bgRRQ
Feb 13 21:24:07.598376 sshd-session[1685]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 21:24:07.609269 systemd-logind[1487]: New session 6 of user core.
Feb 13 21:24:07.617391 systemd[1]: Started session-6.scope - Session 6 of User core.
Feb 13 21:24:08.210188 sshd[1699]: Connection closed by 147.75.109.163 port 38862
Feb 13 21:24:08.211304 sshd-session[1685]: pam_unix(sshd:session): session closed for user core
Feb 13 21:24:08.218948 systemd[1]: sshd@3-10.244.102.222:22-147.75.109.163:38862.service: Deactivated successfully.
Feb 13 21:24:08.222062 systemd[1]: session-6.scope: Deactivated successfully.
Feb 13 21:24:08.223441 systemd-logind[1487]: Session 6 logged out. Waiting for processes to exit.
Feb 13 21:24:08.225182 systemd-logind[1487]: Removed session 6.
Feb 13 21:24:08.373808 systemd[1]: Started sshd@4-10.244.102.222:22-147.75.109.163:38866.service - OpenSSH per-connection server daemon (147.75.109.163:38866).
Feb 13 21:24:09.326569 sshd[1704]: Accepted publickey for core from 147.75.109.163 port 38866 ssh2: RSA SHA256:ulgBgUPlADOweaxhAmkTx/EhcRWsA2XzxJSff9bgRRQ
Feb 13 21:24:09.328404 sshd-session[1704]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 21:24:09.334304 systemd-logind[1487]: New session 7 of user core.
Feb 13 21:24:09.344780 systemd[1]: Started session-7.scope - Session 7 of User core.
Feb 13 21:24:09.814406 sudo[1707]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Feb 13 21:24:09.814704 sudo[1707]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 21:24:09.829572 sudo[1707]: pam_unix(sudo:session): session closed for user root
Feb 13 21:24:09.973168 sshd[1706]: Connection closed by 147.75.109.163 port 38866
Feb 13 21:24:09.974030 sshd-session[1704]: pam_unix(sshd:session): session closed for user core
Feb 13 21:24:09.977140 systemd[1]: sshd@4-10.244.102.222:22-147.75.109.163:38866.service: Deactivated successfully.
Feb 13 21:24:09.979260 systemd[1]: session-7.scope: Deactivated successfully.
Feb 13 21:24:09.981027 systemd-logind[1487]: Session 7 logged out. Waiting for processes to exit.
Feb 13 21:24:09.982490 systemd-logind[1487]: Removed session 7.
Feb 13 21:24:10.137450 systemd[1]: Started sshd@5-10.244.102.222:22-147.75.109.163:37264.service - OpenSSH per-connection server daemon (147.75.109.163:37264).
Feb 13 21:24:11.047341 sshd[1712]: Accepted publickey for core from 147.75.109.163 port 37264 ssh2: RSA SHA256:ulgBgUPlADOweaxhAmkTx/EhcRWsA2XzxJSff9bgRRQ
Feb 13 21:24:11.050761 sshd-session[1712]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 21:24:11.061176 systemd-logind[1487]: New session 8 of user core.
Feb 13 21:24:11.072396 systemd[1]: Started session-8.scope - Session 8 of User core.
Feb 13 21:24:11.532019 sudo[1716]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Feb 13 21:24:11.532468 sudo[1716]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 21:24:11.539545 sudo[1716]: pam_unix(sudo:session): session closed for user root
Feb 13 21:24:11.548875 sudo[1715]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Feb 13 21:24:11.549201 sudo[1715]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 21:24:11.573951 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Feb 13 21:24:11.614421 augenrules[1738]: No rules
Feb 13 21:24:11.615837 systemd[1]: audit-rules.service: Deactivated successfully.
Feb 13 21:24:11.616031 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Feb 13 21:24:11.617602 sudo[1715]: pam_unix(sudo:session): session closed for user root
Feb 13 21:24:11.761984 sshd[1714]: Connection closed by 147.75.109.163 port 37264
Feb 13 21:24:11.763868 sshd-session[1712]: pam_unix(sshd:session): session closed for user core
Feb 13 21:24:11.770313 systemd[1]: sshd@5-10.244.102.222:22-147.75.109.163:37264.service: Deactivated successfully.
Feb 13 21:24:11.773535 systemd[1]: session-8.scope: Deactivated successfully.
Feb 13 21:24:11.776128 systemd-logind[1487]: Session 8 logged out. Waiting for processes to exit.
Feb 13 21:24:11.778559 systemd-logind[1487]: Removed session 8.
Feb 13 21:24:11.929679 systemd[1]: Started sshd@6-10.244.102.222:22-147.75.109.163:37280.service - OpenSSH per-connection server daemon (147.75.109.163:37280).
Feb 13 21:24:12.833299 sshd[1746]: Accepted publickey for core from 147.75.109.163 port 37280 ssh2: RSA SHA256:ulgBgUPlADOweaxhAmkTx/EhcRWsA2XzxJSff9bgRRQ
Feb 13 21:24:12.836219 sshd-session[1746]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 21:24:12.845529 systemd-logind[1487]: New session 9 of user core.
Feb 13 21:24:12.852310 systemd[1]: Started session-9.scope - Session 9 of User core.
Feb 13 21:24:13.319231 sudo[1749]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Feb 13 21:24:13.319662 sudo[1749]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 21:24:13.764806 systemd[1]: Starting docker.service - Docker Application Container Engine...
Feb 13 21:24:13.768786 (dockerd)[1769]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Feb 13 21:24:14.102613 dockerd[1769]: time="2025-02-13T21:24:14.102381072Z" level=info msg="Starting up"
Feb 13 21:24:14.201302 systemd[1]: var-lib-docker-metacopy\x2dcheck2400007886-merged.mount: Deactivated successfully.
Feb 13 21:24:14.223191 dockerd[1769]: time="2025-02-13T21:24:14.222943628Z" level=info msg="Loading containers: start."
Feb 13 21:24:14.427168 kernel: Initializing XFRM netlink socket
Feb 13 21:24:14.526043 systemd-networkd[1419]: docker0: Link UP
Feb 13 21:24:14.571959 dockerd[1769]: time="2025-02-13T21:24:14.571311940Z" level=info msg="Loading containers: done."
Feb 13 21:24:14.590111 dockerd[1769]: time="2025-02-13T21:24:14.590045368Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Feb 13 21:24:14.590297 dockerd[1769]: time="2025-02-13T21:24:14.590159876Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
Feb 13 21:24:14.590297 dockerd[1769]: time="2025-02-13T21:24:14.590280211Z" level=info msg="Daemon has completed initialization"
Feb 13 21:24:14.614702 dockerd[1769]: time="2025-02-13T21:24:14.614638119Z" level=info msg="API listen on /run/docker.sock"
Feb 13 21:24:14.614925 systemd[1]: Started docker.service - Docker Application Container Engine.
Feb 13 21:24:15.809157 containerd[1497]: time="2025-02-13T21:24:15.809060495Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.6\""
Feb 13 21:24:16.678026 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2677706373.mount: Deactivated successfully.
Feb 13 21:24:16.916598 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Feb 13 21:24:16.925488 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 21:24:17.065204 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 21:24:17.070327 (kubelet)[2018]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 21:24:17.134574 kubelet[2018]: E0213 21:24:17.134479 2018 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 21:24:17.136884 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 21:24:17.137284 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 21:24:18.397981 containerd[1497]: time="2025-02-13T21:24:18.397449350Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 21:24:18.398532 containerd[1497]: time="2025-02-13T21:24:18.398499437Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.6: active requests=0, bytes read=27976596"
Feb 13 21:24:18.399114 containerd[1497]: time="2025-02-13T21:24:18.398788402Z" level=info msg="ImageCreate event name:\"sha256:1372127edc9da70a68712c470a11f621ed256e8be0dfec4c4d58ca09109352a3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 21:24:18.401468 containerd[1497]: time="2025-02-13T21:24:18.401425190Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:be0a2d815793b0408d921a50b82759e654cf1bba718cac480498391926902905\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 21:24:18.403238 containerd[1497]: time="2025-02-13T21:24:18.402576996Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.6\" with image id \"sha256:1372127edc9da70a68712c470a11f621ed256e8be0dfec4c4d58ca09109352a3\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:be0a2d815793b0408d921a50b82759e654cf1bba718cac480498391926902905\", size \"27973388\" in 2.593414408s"
Feb 13 21:24:18.403238 containerd[1497]: time="2025-02-13T21:24:18.402622551Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.6\" returns image reference \"sha256:1372127edc9da70a68712c470a11f621ed256e8be0dfec4c4d58ca09109352a3\""
Feb 13 21:24:18.405403 containerd[1497]: time="2025-02-13T21:24:18.405366559Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.6\""
Feb 13 21:24:20.362960 containerd[1497]: time="2025-02-13T21:24:20.362775467Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 21:24:20.364531 containerd[1497]: time="2025-02-13T21:24:20.364499987Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.6: active requests=0, bytes read=24708201"
Feb 13 21:24:20.365331 containerd[1497]: time="2025-02-13T21:24:20.365295614Z" level=info msg="ImageCreate event name:\"sha256:5f23cb154eea1f587685082e456e95e5480c1d459849b1c634119d7de897e34e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 21:24:20.367686 containerd[1497]: time="2025-02-13T21:24:20.367639126Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:63166e537a82855ac9b54ffa8b510429fe799ed9b062bf6b788b74e1d5995d12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 21:24:20.368946 containerd[1497]: time="2025-02-13T21:24:20.368751357Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.6\" with image id \"sha256:5f23cb154eea1f587685082e456e95e5480c1d459849b1c634119d7de897e34e\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:63166e537a82855ac9b54ffa8b510429fe799ed9b062bf6b788b74e1d5995d12\", size \"26154739\" in 1.963356045s"
Feb 13 21:24:20.368946 containerd[1497]: time="2025-02-13T21:24:20.368782271Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.6\" returns image reference \"sha256:5f23cb154eea1f587685082e456e95e5480c1d459849b1c634119d7de897e34e\""
Feb 13 21:24:20.370125 containerd[1497]: time="2025-02-13T21:24:20.369904581Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.6\""
Feb 13 21:24:21.949560 containerd[1497]: time="2025-02-13T21:24:21.948764067Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 21:24:21.949560 containerd[1497]: time="2025-02-13T21:24:21.949441437Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.6: active requests=0, bytes read=18652433"
Feb 13 21:24:21.950276 containerd[1497]: time="2025-02-13T21:24:21.950004754Z" level=info msg="ImageCreate event name:\"sha256:9195ad415d31e3c2df6dddf4603bc56915b71486f514455bc3b5389b9b0ed9c1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 21:24:21.954742 containerd[1497]: time="2025-02-13T21:24:21.954691600Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:8a64af33c57346355dc3cc6f9225dbe771da30e2f427e802ce2340ec3b5dd9b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 21:24:21.957959 containerd[1497]: time="2025-02-13T21:24:21.957873214Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.6\" with image id \"sha256:9195ad415d31e3c2df6dddf4603bc56915b71486f514455bc3b5389b9b0ed9c1\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:8a64af33c57346355dc3cc6f9225dbe771da30e2f427e802ce2340ec3b5dd9b5\", size \"20098989\" in 1.587904201s"
Feb 13 21:24:21.959018 containerd[1497]: time="2025-02-13T21:24:21.958236567Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.6\" returns image reference \"sha256:9195ad415d31e3c2df6dddf4603bc56915b71486f514455bc3b5389b9b0ed9c1\""
Feb 13 21:24:21.959858 containerd[1497]: time="2025-02-13T21:24:21.959828865Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.6\""
Feb 13 21:24:23.200719 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1272811366.mount: Deactivated successfully.
Feb 13 21:24:23.716350 containerd[1497]: time="2025-02-13T21:24:23.715328424Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 21:24:23.718171 containerd[1497]: time="2025-02-13T21:24:23.718082365Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.6: active requests=0, bytes read=30229116"
Feb 13 21:24:23.721050 containerd[1497]: time="2025-02-13T21:24:23.719913676Z" level=info msg="ImageCreate event name:\"sha256:d2448f015605e48efb6b06ceaba0cb6d48bfd82e5d30ba357a9bd78c8566348a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 21:24:23.734671 containerd[1497]: time="2025-02-13T21:24:23.734617597Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e72a4bc769f10b56ffdfe2cdb21d84d49d9bc194b3658648207998a5bd924b72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 21:24:23.736005 containerd[1497]: time="2025-02-13T21:24:23.735972040Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.6\" with image id \"sha256:d2448f015605e48efb6b06ceaba0cb6d48bfd82e5d30ba357a9bd78c8566348a\", repo tag \"registry.k8s.io/kube-proxy:v1.31.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:e72a4bc769f10b56ffdfe2cdb21d84d49d9bc194b3658648207998a5bd924b72\", size \"30228127\" in 1.776111835s"
Feb 13 21:24:23.736171 containerd[1497]: time="2025-02-13T21:24:23.736153479Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.6\" returns image reference \"sha256:d2448f015605e48efb6b06ceaba0cb6d48bfd82e5d30ba357a9bd78c8566348a\""
Feb 13 21:24:23.737516 containerd[1497]: time="2025-02-13T21:24:23.737459411Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Feb 13 21:24:24.302193 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3547286456.mount: Deactivated successfully.
Feb 13 21:24:25.425632 containerd[1497]: time="2025-02-13T21:24:25.425138907Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 21:24:25.426476 containerd[1497]: time="2025-02-13T21:24:25.426170203Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185769"
Feb 13 21:24:25.427124 containerd[1497]: time="2025-02-13T21:24:25.426958575Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 21:24:25.430013 containerd[1497]: time="2025-02-13T21:24:25.429978514Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 21:24:25.431233 containerd[1497]: time="2025-02-13T21:24:25.431114332Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.693589595s"
Feb 13 21:24:25.431233 containerd[1497]: time="2025-02-13T21:24:25.431145642Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Feb 13 21:24:25.432103 containerd[1497]: time="2025-02-13T21:24:25.432071996Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Feb 13 21:24:25.988423 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3095595862.mount: Deactivated successfully.
Feb 13 21:24:25.992258 containerd[1497]: time="2025-02-13T21:24:25.991525498Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 21:24:25.992827 containerd[1497]: time="2025-02-13T21:24:25.992793662Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146"
Feb 13 21:24:25.993503 containerd[1497]: time="2025-02-13T21:24:25.993478927Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 21:24:25.995376 containerd[1497]: time="2025-02-13T21:24:25.995350066Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 21:24:25.996303 containerd[1497]: time="2025-02-13T21:24:25.996280219Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 564.179999ms"
Feb 13 21:24:25.996518 containerd[1497]: time="2025-02-13T21:24:25.996428847Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Feb 13 21:24:25.996973 containerd[1497]: time="2025-02-13T21:24:25.996958751Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Feb 13 21:24:26.567217 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4241040026.mount: Deactivated successfully.
Feb 13 21:24:26.716739 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Feb 13 21:24:27.166201 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Feb 13 21:24:27.180228 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 21:24:27.331988 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 21:24:27.343399 (kubelet)[2152]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 21:24:27.413764 kubelet[2152]: E0213 21:24:27.413670 2152 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 21:24:27.418413 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 21:24:27.418639 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 21:24:29.381136 containerd[1497]: time="2025-02-13T21:24:29.379982154Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 21:24:29.381136 containerd[1497]: time="2025-02-13T21:24:29.380348795Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56779981"
Feb 13 21:24:29.383319 containerd[1497]: time="2025-02-13T21:24:29.383181594Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 21:24:29.386965 containerd[1497]: time="2025-02-13T21:24:29.386902050Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 21:24:29.388729 containerd[1497]: time="2025-02-13T21:24:29.388464169Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 3.391414977s"
Feb 13 21:24:29.388729 containerd[1497]: time="2025-02-13T21:24:29.388506077Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\""
Feb 13 21:24:32.679300 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 21:24:32.691443 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 21:24:32.729502 systemd[1]: Reloading requested from client PID 2192 ('systemctl') (unit session-9.scope)...
Feb 13 21:24:32.729526 systemd[1]: Reloading...
Feb 13 21:24:32.864194 zram_generator::config[2231]: No configuration found.
Feb 13 21:24:33.012142 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 21:24:33.091905 systemd[1]: Reloading finished in 361 ms.
Feb 13 21:24:33.161849 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Feb 13 21:24:33.161929 systemd[1]: kubelet.service: Failed with result 'signal'.
Feb 13 21:24:33.162204 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 21:24:33.164602 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 21:24:33.315040 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 21:24:33.327478 (kubelet)[2299]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Feb 13 21:24:33.369571 kubelet[2299]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 21:24:33.369571 kubelet[2299]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Feb 13 21:24:33.369571 kubelet[2299]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 21:24:33.370907 kubelet[2299]: I0213 21:24:33.370860 2299 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 13 21:24:33.876315 kubelet[2299]: I0213 21:24:33.876208 2299 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
Feb 13 21:24:33.876315 kubelet[2299]: I0213 21:24:33.876303 2299 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 13 21:24:33.877509 kubelet[2299]: I0213 21:24:33.877455 2299 server.go:929] "Client rotation is on, will bootstrap in background"
Feb 13 21:24:33.908128 kubelet[2299]: I0213 21:24:33.907848 2299 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 13 21:24:33.909865 kubelet[2299]: E0213 21:24:33.909822 2299 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.244.102.222:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.244.102.222:6443: connect: connection refused" logger="UnhandledError"
Feb 13 21:24:33.923175 kubelet[2299]: E0213 21:24:33.922843 2299 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Feb 13 21:24:33.923175 kubelet[2299]: I0213 21:24:33.922919 2299 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Feb 13 21:24:33.933640 kubelet[2299]: I0213 21:24:33.933587 2299 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Feb 13 21:24:33.935154 kubelet[2299]: I0213 21:24:33.935088 2299 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Feb 13 21:24:33.935482 kubelet[2299]: I0213 21:24:33.935428 2299 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 13 21:24:33.935766 kubelet[2299]: I0213 21:24:33.935481 2299 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-9zhep.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Feb 13 21:24:33.935968 kubelet[2299]: I0213 21:24:33.935805 2299 topology_manager.go:138] "Creating topology manager with none policy"
Feb 13 21:24:33.935968 kubelet[2299]: I0213 21:24:33.935818 2299 container_manager_linux.go:300] "Creating device plugin manager"
Feb 13 21:24:33.936046 kubelet[2299]: I0213 21:24:33.935985 2299 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 21:24:33.938134 kubelet[2299]: I0213 21:24:33.938037 2299 kubelet.go:408] "Attempting to sync node with API server"
Feb 13 21:24:33.938134 kubelet[2299]: I0213 21:24:33.938073 2299 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 13 21:24:33.939243 kubelet[2299]: I0213 21:24:33.938201 2299 kubelet.go:314] "Adding apiserver pod source"
Feb 13 21:24:33.939243 kubelet[2299]: I0213 21:24:33.938236 2299 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 13 21:24:33.944410 kubelet[2299]: W0213 21:24:33.943123 2299 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.244.102.222:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-9zhep.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.244.102.222:6443: connect: connection refused
Feb 13 21:24:33.944410 kubelet[2299]: E0213 21:24:33.943233 2299 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.244.102.222:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-9zhep.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.244.102.222:6443: connect: connection refused" logger="UnhandledError"
Feb 13 21:24:33.944410 kubelet[2299]: W0213 21:24:33.944347 2299 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.244.102.222:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.244.102.222:6443: connect: connection refused
Feb 13 21:24:33.944633 kubelet[2299]: E0213 21:24:33.944616 2299 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.244.102.222:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.244.102.222:6443: connect: connection refused" logger="UnhandledError"
Feb 13 21:24:33.944813 kubelet[2299]: I0213 21:24:33.944799 2299 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Feb 13 21:24:33.948086 kubelet[2299]: I0213 21:24:33.948058 2299 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Feb 13 21:24:33.949555 kubelet[2299]: W0213 21:24:33.949534 2299 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Feb 13 21:24:33.952058 kubelet[2299]: I0213 21:24:33.952036 2299 server.go:1269] "Started kubelet"
Feb 13 21:24:33.954018 kubelet[2299]: I0213 21:24:33.953420 2299 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Feb 13 21:24:33.954959 kubelet[2299]: I0213 21:24:33.954557 2299 server.go:460] "Adding debug handlers to kubelet server"
Feb 13 21:24:33.957574 kubelet[2299]: I0213 21:24:33.957136 2299 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 13 21:24:33.957574 kubelet[2299]: I0213 21:24:33.957277 2299 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Feb 13 21:24:33.957574 kubelet[2299]: I0213 21:24:33.957491 2299 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 13 21:24:33.962136 kubelet[2299]: E0213 21:24:33.958780 2299 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.244.102.222:6443/api/v1/namespaces/default/events\": dial tcp 10.244.102.222:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-9zhep.gb1.brightbox.com.1823e18c5623e6de default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-9zhep.gb1.brightbox.com,UID:srv-9zhep.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-9zhep.gb1.brightbox.com,},FirstTimestamp:2025-02-13 21:24:33.952007902 +0000 UTC m=+0.620358297,LastTimestamp:2025-02-13 21:24:33.952007902 +0000 UTC m=+0.620358297,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-9zhep.gb1.brightbox.com,}"
Feb 13 21:24:33.967539 kubelet[2299]: I0213 21:24:33.966182 2299 volume_manager.go:289] "Starting Kubelet Volume Manager"
Feb 13 21:24:33.967539 kubelet[2299]: E0213 21:24:33.966485 2299 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"srv-9zhep.gb1.brightbox.com\" not found"
Feb 13 21:24:33.967539 kubelet[2299]: I0213 21:24:33.967456 2299 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Feb 13 21:24:33.969755 kubelet[2299]: I0213 21:24:33.969733 2299 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Feb 13 21:24:33.970372 kubelet[2299]: I0213 21:24:33.969947 2299 reconciler.go:26] "Reconciler: start to sync state"
Feb 13 21:24:33.970593 kubelet[2299]: W0213 21:24:33.970552 2299 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.244.102.222:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.244.102.222:6443: connect: connection refused
Feb 13 21:24:33.972346 kubelet[2299]: E0213 21:24:33.972086 2299 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.102.222:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-9zhep.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.102.222:6443: connect: connection refused" interval="200ms"
Feb 13 21:24:33.972346 kubelet[2299]: E0213 21:24:33.972092 2299 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.244.102.222:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.244.102.222:6443: connect: connection refused" logger="UnhandledError"
Feb 13 21:24:33.973059 kubelet[2299]: I0213 21:24:33.972984 2299 factory.go:221] Registration of the systemd container factory successfully
Feb 13 21:24:33.975071 kubelet[2299]: I0213 21:24:33.973689 2299 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Feb 13 21:24:33.977812 kubelet[2299]: E0213 21:24:33.977439 2299 kubelet.go:1478] "Image garbage collection failed once.
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 21:24:33.978036 kubelet[2299]: I0213 21:24:33.978017 2299 factory.go:221] Registration of the containerd container factory successfully Feb 13 21:24:34.004761 kubelet[2299]: I0213 21:24:34.004696 2299 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 21:24:34.004761 kubelet[2299]: I0213 21:24:34.004727 2299 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 21:24:34.004761 kubelet[2299]: I0213 21:24:34.004755 2299 state_mem.go:36] "Initialized new in-memory state store" Feb 13 21:24:34.009234 kubelet[2299]: I0213 21:24:34.009089 2299 policy_none.go:49] "None policy: Start" Feb 13 21:24:34.011767 kubelet[2299]: I0213 21:24:34.011380 2299 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 21:24:34.011767 kubelet[2299]: I0213 21:24:34.011411 2299 state_mem.go:35] "Initializing new in-memory state store" Feb 13 21:24:34.012941 kubelet[2299]: I0213 21:24:34.012912 2299 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 21:24:34.016292 kubelet[2299]: I0213 21:24:34.016273 2299 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 13 21:24:34.016396 kubelet[2299]: I0213 21:24:34.016388 2299 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 21:24:34.016475 kubelet[2299]: I0213 21:24:34.016468 2299 kubelet.go:2321] "Starting kubelet main sync loop" Feb 13 21:24:34.016578 kubelet[2299]: E0213 21:24:34.016550 2299 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 21:24:34.019567 kubelet[2299]: W0213 21:24:34.019406 2299 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.244.102.222:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.244.102.222:6443: connect: connection refused Feb 13 21:24:34.019567 kubelet[2299]: E0213 21:24:34.019458 2299 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.244.102.222:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.244.102.222:6443: connect: connection refused" logger="UnhandledError" Feb 13 21:24:34.024907 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Feb 13 21:24:34.032653 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Feb 13 21:24:34.036031 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Feb 13 21:24:34.047060 kubelet[2299]: I0213 21:24:34.047008 2299 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 13 21:24:34.047274 kubelet[2299]: I0213 21:24:34.047254 2299 eviction_manager.go:189] "Eviction manager: starting control loop"
Feb 13 21:24:34.047322 kubelet[2299]: I0213 21:24:34.047275 2299 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Feb 13 21:24:34.048000 kubelet[2299]: I0213 21:24:34.047838 2299 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 13 21:24:34.050496 kubelet[2299]: E0213 21:24:34.050306 2299 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"srv-9zhep.gb1.brightbox.com\" not found"
Feb 13 21:24:34.129994 systemd[1]: Created slice kubepods-burstable-pod109317cc9e188e3f831c2b3735f275e3.slice - libcontainer container kubepods-burstable-pod109317cc9e188e3f831c2b3735f275e3.slice.
Feb 13 21:24:34.141903 systemd[1]: Created slice kubepods-burstable-pod16594a3c5ba60f27cfe453e5980bde03.slice - libcontainer container kubepods-burstable-pod16594a3c5ba60f27cfe453e5980bde03.slice.
Feb 13 21:24:34.150299 kubelet[2299]: I0213 21:24:34.150060 2299 kubelet_node_status.go:72] "Attempting to register node" node="srv-9zhep.gb1.brightbox.com"
Feb 13 21:24:34.151157 systemd[1]: Created slice kubepods-burstable-podfcd818d62929c717e6b5c645ac8c9d22.slice - libcontainer container kubepods-burstable-podfcd818d62929c717e6b5c645ac8c9d22.slice.
Feb 13 21:24:34.152068 kubelet[2299]: E0213 21:24:34.152024 2299 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.244.102.222:6443/api/v1/nodes\": dial tcp 10.244.102.222:6443: connect: connection refused" node="srv-9zhep.gb1.brightbox.com" Feb 13 21:24:34.172932 kubelet[2299]: E0213 21:24:34.172803 2299 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.102.222:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-9zhep.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.102.222:6443: connect: connection refused" interval="400ms" Feb 13 21:24:34.272286 kubelet[2299]: I0213 21:24:34.271646 2299 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/16594a3c5ba60f27cfe453e5980bde03-ca-certs\") pod \"kube-controller-manager-srv-9zhep.gb1.brightbox.com\" (UID: \"16594a3c5ba60f27cfe453e5980bde03\") " pod="kube-system/kube-controller-manager-srv-9zhep.gb1.brightbox.com" Feb 13 21:24:34.272286 kubelet[2299]: I0213 21:24:34.271757 2299 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/16594a3c5ba60f27cfe453e5980bde03-flexvolume-dir\") pod \"kube-controller-manager-srv-9zhep.gb1.brightbox.com\" (UID: \"16594a3c5ba60f27cfe453e5980bde03\") " pod="kube-system/kube-controller-manager-srv-9zhep.gb1.brightbox.com" Feb 13 21:24:34.272286 kubelet[2299]: I0213 21:24:34.271804 2299 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/16594a3c5ba60f27cfe453e5980bde03-kubeconfig\") pod \"kube-controller-manager-srv-9zhep.gb1.brightbox.com\" (UID: \"16594a3c5ba60f27cfe453e5980bde03\") " pod="kube-system/kube-controller-manager-srv-9zhep.gb1.brightbox.com" Feb 13 21:24:34.272286 kubelet[2299]: I0213 
21:24:34.271848 2299 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/109317cc9e188e3f831c2b3735f275e3-ca-certs\") pod \"kube-apiserver-srv-9zhep.gb1.brightbox.com\" (UID: \"109317cc9e188e3f831c2b3735f275e3\") " pod="kube-system/kube-apiserver-srv-9zhep.gb1.brightbox.com" Feb 13 21:24:34.272286 kubelet[2299]: I0213 21:24:34.271889 2299 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/109317cc9e188e3f831c2b3735f275e3-k8s-certs\") pod \"kube-apiserver-srv-9zhep.gb1.brightbox.com\" (UID: \"109317cc9e188e3f831c2b3735f275e3\") " pod="kube-system/kube-apiserver-srv-9zhep.gb1.brightbox.com" Feb 13 21:24:34.272989 kubelet[2299]: I0213 21:24:34.271932 2299 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/109317cc9e188e3f831c2b3735f275e3-usr-share-ca-certificates\") pod \"kube-apiserver-srv-9zhep.gb1.brightbox.com\" (UID: \"109317cc9e188e3f831c2b3735f275e3\") " pod="kube-system/kube-apiserver-srv-9zhep.gb1.brightbox.com" Feb 13 21:24:34.272989 kubelet[2299]: I0213 21:24:34.271970 2299 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fcd818d62929c717e6b5c645ac8c9d22-kubeconfig\") pod \"kube-scheduler-srv-9zhep.gb1.brightbox.com\" (UID: \"fcd818d62929c717e6b5c645ac8c9d22\") " pod="kube-system/kube-scheduler-srv-9zhep.gb1.brightbox.com" Feb 13 21:24:34.272989 kubelet[2299]: I0213 21:24:34.272008 2299 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/16594a3c5ba60f27cfe453e5980bde03-k8s-certs\") pod \"kube-controller-manager-srv-9zhep.gb1.brightbox.com\" (UID: 
\"16594a3c5ba60f27cfe453e5980bde03\") " pod="kube-system/kube-controller-manager-srv-9zhep.gb1.brightbox.com" Feb 13 21:24:34.272989 kubelet[2299]: I0213 21:24:34.272051 2299 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/16594a3c5ba60f27cfe453e5980bde03-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-9zhep.gb1.brightbox.com\" (UID: \"16594a3c5ba60f27cfe453e5980bde03\") " pod="kube-system/kube-controller-manager-srv-9zhep.gb1.brightbox.com" Feb 13 21:24:34.316510 kubelet[2299]: E0213 21:24:34.316224 2299 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.244.102.222:6443/api/v1/namespaces/default/events\": dial tcp 10.244.102.222:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-9zhep.gb1.brightbox.com.1823e18c5623e6de default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-9zhep.gb1.brightbox.com,UID:srv-9zhep.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-9zhep.gb1.brightbox.com,},FirstTimestamp:2025-02-13 21:24:33.952007902 +0000 UTC m=+0.620358297,LastTimestamp:2025-02-13 21:24:33.952007902 +0000 UTC m=+0.620358297,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-9zhep.gb1.brightbox.com,}" Feb 13 21:24:34.356556 kubelet[2299]: I0213 21:24:34.356406 2299 kubelet_node_status.go:72] "Attempting to register node" node="srv-9zhep.gb1.brightbox.com" Feb 13 21:24:34.357278 kubelet[2299]: E0213 21:24:34.357052 2299 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.244.102.222:6443/api/v1/nodes\": dial tcp 10.244.102.222:6443: connect: connection refused" node="srv-9zhep.gb1.brightbox.com" Feb 
13 21:24:34.442905 containerd[1497]: time="2025-02-13T21:24:34.442750526Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-9zhep.gb1.brightbox.com,Uid:109317cc9e188e3f831c2b3735f275e3,Namespace:kube-system,Attempt:0,}" Feb 13 21:24:34.447579 containerd[1497]: time="2025-02-13T21:24:34.447040758Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-9zhep.gb1.brightbox.com,Uid:16594a3c5ba60f27cfe453e5980bde03,Namespace:kube-system,Attempt:0,}" Feb 13 21:24:34.454672 containerd[1497]: time="2025-02-13T21:24:34.454553671Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-9zhep.gb1.brightbox.com,Uid:fcd818d62929c717e6b5c645ac8c9d22,Namespace:kube-system,Attempt:0,}" Feb 13 21:24:34.574878 kubelet[2299]: E0213 21:24:34.574761 2299 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.102.222:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-9zhep.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.102.222:6443: connect: connection refused" interval="800ms" Feb 13 21:24:34.761047 kubelet[2299]: I0213 21:24:34.760865 2299 kubelet_node_status.go:72] "Attempting to register node" node="srv-9zhep.gb1.brightbox.com" Feb 13 21:24:34.761320 kubelet[2299]: E0213 21:24:34.761281 2299 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.244.102.222:6443/api/v1/nodes\": dial tcp 10.244.102.222:6443: connect: connection refused" node="srv-9zhep.gb1.brightbox.com" Feb 13 21:24:34.857290 kubelet[2299]: W0213 21:24:34.857091 2299 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.244.102.222:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.244.102.222:6443: connect: connection refused Feb 13 21:24:34.857290 kubelet[2299]: E0213 21:24:34.857227 2299 reflector.go:158] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.244.102.222:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.244.102.222:6443: connect: connection refused" logger="UnhandledError" Feb 13 21:24:34.978151 kubelet[2299]: W0213 21:24:34.977921 2299 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.244.102.222:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.244.102.222:6443: connect: connection refused Feb 13 21:24:34.978151 kubelet[2299]: E0213 21:24:34.978061 2299 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.244.102.222:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.244.102.222:6443: connect: connection refused" logger="UnhandledError" Feb 13 21:24:35.031934 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount468769463.mount: Deactivated successfully. 
Feb 13 21:24:35.035248 containerd[1497]: time="2025-02-13T21:24:35.035199936Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 21:24:35.036459 containerd[1497]: time="2025-02-13T21:24:35.036208294Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Feb 13 21:24:35.038122 containerd[1497]: time="2025-02-13T21:24:35.037556827Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 21:24:35.039154 containerd[1497]: time="2025-02-13T21:24:35.039069028Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 21:24:35.039758 containerd[1497]: time="2025-02-13T21:24:35.039602141Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 21:24:35.040270 containerd[1497]: time="2025-02-13T21:24:35.040248502Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 21:24:35.040909 containerd[1497]: time="2025-02-13T21:24:35.040854815Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 21:24:35.041337 containerd[1497]: time="2025-02-13T21:24:35.041291021Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 21:24:35.043119 
containerd[1497]: time="2025-02-13T21:24:35.041958480Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 594.901152ms" Feb 13 21:24:35.044252 containerd[1497]: time="2025-02-13T21:24:35.044222277Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 596.924966ms" Feb 13 21:24:35.046344 containerd[1497]: time="2025-02-13T21:24:35.046312644Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 591.664173ms" Feb 13 21:24:35.182159 containerd[1497]: time="2025-02-13T21:24:35.180623967Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 21:24:35.182159 containerd[1497]: time="2025-02-13T21:24:35.180685890Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 21:24:35.182159 containerd[1497]: time="2025-02-13T21:24:35.180702235Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 21:24:35.182159 containerd[1497]: time="2025-02-13T21:24:35.181877515Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 21:24:35.183828 containerd[1497]: time="2025-02-13T21:24:35.179535158Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 21:24:35.183935 containerd[1497]: time="2025-02-13T21:24:35.183811266Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 21:24:35.183935 containerd[1497]: time="2025-02-13T21:24:35.183827792Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 21:24:35.183935 containerd[1497]: time="2025-02-13T21:24:35.183907759Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 21:24:35.188799 containerd[1497]: time="2025-02-13T21:24:35.188535306Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 21:24:35.188799 containerd[1497]: time="2025-02-13T21:24:35.188603458Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 21:24:35.188799 containerd[1497]: time="2025-02-13T21:24:35.188641004Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 21:24:35.188799 containerd[1497]: time="2025-02-13T21:24:35.188715498Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 21:24:35.204279 kubelet[2299]: W0213 21:24:35.202643 2299 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.244.102.222:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.244.102.222:6443: connect: connection refused Feb 13 21:24:35.204279 kubelet[2299]: E0213 21:24:35.202692 2299 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.244.102.222:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.244.102.222:6443: connect: connection refused" logger="UnhandledError" Feb 13 21:24:35.217887 systemd[1]: Started cri-containerd-bcd31ba2405f25d86a80a9471be6f9f955a7d259bdc94c41ee48ed721d084a73.scope - libcontainer container bcd31ba2405f25d86a80a9471be6f9f955a7d259bdc94c41ee48ed721d084a73. Feb 13 21:24:35.232251 systemd[1]: Started cri-containerd-53324cb090ea00c1f7f10d22ddc69fb7998a764f4694fea9942efb448f7b5a5e.scope - libcontainer container 53324cb090ea00c1f7f10d22ddc69fb7998a764f4694fea9942efb448f7b5a5e. Feb 13 21:24:35.233462 systemd[1]: Started cri-containerd-d103ab519b0dae6e4cc3ab5412cfc6832ea0eb80bfa0489e9ef54278c81a6e5a.scope - libcontainer container d103ab519b0dae6e4cc3ab5412cfc6832ea0eb80bfa0489e9ef54278c81a6e5a. 
Feb 13 21:24:35.290857 containerd[1497]: time="2025-02-13T21:24:35.290688250Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-9zhep.gb1.brightbox.com,Uid:fcd818d62929c717e6b5c645ac8c9d22,Namespace:kube-system,Attempt:0,} returns sandbox id \"53324cb090ea00c1f7f10d22ddc69fb7998a764f4694fea9942efb448f7b5a5e\"" Feb 13 21:24:35.298141 containerd[1497]: time="2025-02-13T21:24:35.298080885Z" level=info msg="CreateContainer within sandbox \"53324cb090ea00c1f7f10d22ddc69fb7998a764f4694fea9942efb448f7b5a5e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 13 21:24:35.321717 containerd[1497]: time="2025-02-13T21:24:35.321660024Z" level=info msg="CreateContainer within sandbox \"53324cb090ea00c1f7f10d22ddc69fb7998a764f4694fea9942efb448f7b5a5e\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"51585da46249ec8f1495c1f01569b1464f32ae9a02cd1b7de156a78925336923\"" Feb 13 21:24:35.322340 containerd[1497]: time="2025-02-13T21:24:35.322313720Z" level=info msg="StartContainer for \"51585da46249ec8f1495c1f01569b1464f32ae9a02cd1b7de156a78925336923\"" Feb 13 21:24:35.334189 containerd[1497]: time="2025-02-13T21:24:35.334145269Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-9zhep.gb1.brightbox.com,Uid:109317cc9e188e3f831c2b3735f275e3,Namespace:kube-system,Attempt:0,} returns sandbox id \"bcd31ba2405f25d86a80a9471be6f9f955a7d259bdc94c41ee48ed721d084a73\"" Feb 13 21:24:35.338803 containerd[1497]: time="2025-02-13T21:24:35.338493194Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-9zhep.gb1.brightbox.com,Uid:16594a3c5ba60f27cfe453e5980bde03,Namespace:kube-system,Attempt:0,} returns sandbox id \"d103ab519b0dae6e4cc3ab5412cfc6832ea0eb80bfa0489e9ef54278c81a6e5a\"" Feb 13 21:24:35.339498 containerd[1497]: time="2025-02-13T21:24:35.339474019Z" level=info msg="CreateContainer within sandbox 
\"bcd31ba2405f25d86a80a9471be6f9f955a7d259bdc94c41ee48ed721d084a73\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 13 21:24:35.342943 containerd[1497]: time="2025-02-13T21:24:35.342917374Z" level=info msg="CreateContainer within sandbox \"d103ab519b0dae6e4cc3ab5412cfc6832ea0eb80bfa0489e9ef54278c81a6e5a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 13 21:24:35.352429 containerd[1497]: time="2025-02-13T21:24:35.352291778Z" level=info msg="CreateContainer within sandbox \"d103ab519b0dae6e4cc3ab5412cfc6832ea0eb80bfa0489e9ef54278c81a6e5a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"0357511978dde11c98610fa1b4b73ef16bac37fa101ef8b6056fbbac25ecbc24\"" Feb 13 21:24:35.355124 containerd[1497]: time="2025-02-13T21:24:35.353320489Z" level=info msg="StartContainer for \"0357511978dde11c98610fa1b4b73ef16bac37fa101ef8b6056fbbac25ecbc24\"" Feb 13 21:24:35.355124 containerd[1497]: time="2025-02-13T21:24:35.354184407Z" level=info msg="CreateContainer within sandbox \"bcd31ba2405f25d86a80a9471be6f9f955a7d259bdc94c41ee48ed721d084a73\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"faa8a36346c4cfd46796c4bfa7f3f92184b769afea6fdf57be07d16659a4c6b6\"" Feb 13 21:24:35.356243 containerd[1497]: time="2025-02-13T21:24:35.356216761Z" level=info msg="StartContainer for \"faa8a36346c4cfd46796c4bfa7f3f92184b769afea6fdf57be07d16659a4c6b6\"" Feb 13 21:24:35.371315 systemd[1]: Started cri-containerd-51585da46249ec8f1495c1f01569b1464f32ae9a02cd1b7de156a78925336923.scope - libcontainer container 51585da46249ec8f1495c1f01569b1464f32ae9a02cd1b7de156a78925336923. 
Feb 13 21:24:35.375799 kubelet[2299]: E0213 21:24:35.375742 2299 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.102.222:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-9zhep.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.102.222:6443: connect: connection refused" interval="1.6s" Feb 13 21:24:35.410306 systemd[1]: Started cri-containerd-0357511978dde11c98610fa1b4b73ef16bac37fa101ef8b6056fbbac25ecbc24.scope - libcontainer container 0357511978dde11c98610fa1b4b73ef16bac37fa101ef8b6056fbbac25ecbc24. Feb 13 21:24:35.417317 systemd[1]: Started cri-containerd-faa8a36346c4cfd46796c4bfa7f3f92184b769afea6fdf57be07d16659a4c6b6.scope - libcontainer container faa8a36346c4cfd46796c4bfa7f3f92184b769afea6fdf57be07d16659a4c6b6. Feb 13 21:24:35.436409 kubelet[2299]: W0213 21:24:35.436299 2299 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.244.102.222:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-9zhep.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.244.102.222:6443: connect: connection refused Feb 13 21:24:35.436409 kubelet[2299]: E0213 21:24:35.436376 2299 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.244.102.222:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-9zhep.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.244.102.222:6443: connect: connection refused" logger="UnhandledError" Feb 13 21:24:35.454522 containerd[1497]: time="2025-02-13T21:24:35.454478059Z" level=info msg="StartContainer for \"51585da46249ec8f1495c1f01569b1464f32ae9a02cd1b7de156a78925336923\" returns successfully" Feb 13 21:24:35.493388 containerd[1497]: time="2025-02-13T21:24:35.492063790Z" level=info msg="StartContainer for \"faa8a36346c4cfd46796c4bfa7f3f92184b769afea6fdf57be07d16659a4c6b6\" returns successfully" Feb 13 21:24:35.498444 containerd[1497]: 
time="2025-02-13T21:24:35.498399385Z" level=info msg="StartContainer for \"0357511978dde11c98610fa1b4b73ef16bac37fa101ef8b6056fbbac25ecbc24\" returns successfully" Feb 13 21:24:35.566911 kubelet[2299]: I0213 21:24:35.566807 2299 kubelet_node_status.go:72] "Attempting to register node" node="srv-9zhep.gb1.brightbox.com" Feb 13 21:24:35.568076 kubelet[2299]: E0213 21:24:35.568031 2299 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.244.102.222:6443/api/v1/nodes\": dial tcp 10.244.102.222:6443: connect: connection refused" node="srv-9zhep.gb1.brightbox.com" Feb 13 21:24:37.173786 kubelet[2299]: I0213 21:24:37.173687 2299 kubelet_node_status.go:72] "Attempting to register node" node="srv-9zhep.gb1.brightbox.com" Feb 13 21:24:37.364270 kubelet[2299]: E0213 21:24:37.364226 2299 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"srv-9zhep.gb1.brightbox.com\" not found" node="srv-9zhep.gb1.brightbox.com" Feb 13 21:24:37.429301 kubelet[2299]: I0213 21:24:37.428925 2299 kubelet_node_status.go:75] "Successfully registered node" node="srv-9zhep.gb1.brightbox.com" Feb 13 21:24:37.429301 kubelet[2299]: E0213 21:24:37.428984 2299 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"srv-9zhep.gb1.brightbox.com\": node \"srv-9zhep.gb1.brightbox.com\" not found" Feb 13 21:24:37.449359 kubelet[2299]: E0213 21:24:37.449315 2299 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"srv-9zhep.gb1.brightbox.com\" not found" Feb 13 21:24:37.550373 kubelet[2299]: E0213 21:24:37.550319 2299 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"srv-9zhep.gb1.brightbox.com\" not found" Feb 13 21:24:37.651570 kubelet[2299]: E0213 21:24:37.651487 2299 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"srv-9zhep.gb1.brightbox.com\" not found" Feb 13 21:24:37.752823 
kubelet[2299]: E0213 21:24:37.752561 2299 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"srv-9zhep.gb1.brightbox.com\" not found" Feb 13 21:24:37.854029 kubelet[2299]: E0213 21:24:37.853827 2299 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"srv-9zhep.gb1.brightbox.com\" not found" Feb 13 21:24:37.954870 kubelet[2299]: E0213 21:24:37.954810 2299 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"srv-9zhep.gb1.brightbox.com\" not found" Feb 13 21:24:38.055682 kubelet[2299]: E0213 21:24:38.055533 2299 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"srv-9zhep.gb1.brightbox.com\" not found" Feb 13 21:24:38.156429 kubelet[2299]: E0213 21:24:38.156331 2299 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"srv-9zhep.gb1.brightbox.com\" not found" Feb 13 21:24:38.257252 kubelet[2299]: E0213 21:24:38.257151 2299 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"srv-9zhep.gb1.brightbox.com\" not found" Feb 13 21:24:38.956209 kubelet[2299]: I0213 21:24:38.955505 2299 apiserver.go:52] "Watching apiserver" Feb 13 21:24:38.970992 kubelet[2299]: I0213 21:24:38.970909 2299 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 13 21:24:39.429024 update_engine[1488]: I20250213 21:24:39.427358 1488 update_attempter.cc:509] Updating boot flags... Feb 13 21:24:39.486131 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2583) Feb 13 21:24:39.561565 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2584) Feb 13 21:24:39.781513 systemd[1]: Reloading requested from client PID 2591 ('systemctl') (unit session-9.scope)... Feb 13 21:24:39.781541 systemd[1]: Reloading... 
Feb 13 21:24:39.875138 zram_generator::config[2636]: No configuration found. Feb 13 21:24:40.010083 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 21:24:40.118809 systemd[1]: Reloading finished in 336 ms. Feb 13 21:24:40.165347 kubelet[2299]: I0213 21:24:40.165044 2299 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 21:24:40.165122 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 21:24:40.179757 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 21:24:40.180278 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 21:24:40.180383 systemd[1]: kubelet.service: Consumed 1.103s CPU time, 113.0M memory peak, 0B memory swap peak. Feb 13 21:24:40.189470 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 21:24:40.357084 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 21:24:40.372407 (kubelet)[2694]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 21:24:40.454378 kubelet[2694]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 21:24:40.454378 kubelet[2694]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 21:24:40.454378 kubelet[2694]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 21:24:40.454837 kubelet[2694]: I0213 21:24:40.454470 2694 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 21:24:40.465124 kubelet[2694]: I0213 21:24:40.465040 2694 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Feb 13 21:24:40.465124 kubelet[2694]: I0213 21:24:40.465069 2694 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 21:24:40.465457 kubelet[2694]: I0213 21:24:40.465396 2694 server.go:929] "Client rotation is on, will bootstrap in background" Feb 13 21:24:40.467018 kubelet[2694]: I0213 21:24:40.466879 2694 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 13 21:24:40.472621 kubelet[2694]: I0213 21:24:40.472534 2694 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 21:24:40.477128 kubelet[2694]: E0213 21:24:40.477057 2694 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 13 21:24:40.477128 kubelet[2694]: I0213 21:24:40.477123 2694 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 13 21:24:40.482961 kubelet[2694]: I0213 21:24:40.482880 2694 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 21:24:40.483152 kubelet[2694]: I0213 21:24:40.483035 2694 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 13 21:24:40.483251 kubelet[2694]: I0213 21:24:40.483172 2694 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 21:24:40.483469 kubelet[2694]: I0213 21:24:40.483195 2694 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-9zhep.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","Topo
logyManagerPolicyOptions":null,"CgroupVersion":2} Feb 13 21:24:40.483469 kubelet[2694]: I0213 21:24:40.483467 2694 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 21:24:40.483931 kubelet[2694]: I0213 21:24:40.483479 2694 container_manager_linux.go:300] "Creating device plugin manager" Feb 13 21:24:40.483931 kubelet[2694]: I0213 21:24:40.483556 2694 state_mem.go:36] "Initialized new in-memory state store" Feb 13 21:24:40.483931 kubelet[2694]: I0213 21:24:40.483675 2694 kubelet.go:408] "Attempting to sync node with API server" Feb 13 21:24:40.483931 kubelet[2694]: I0213 21:24:40.483690 2694 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 21:24:40.483931 kubelet[2694]: I0213 21:24:40.483735 2694 kubelet.go:314] "Adding apiserver pod source" Feb 13 21:24:40.483931 kubelet[2694]: I0213 21:24:40.483748 2694 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 21:24:40.487456 kubelet[2694]: I0213 21:24:40.487209 2694 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 21:24:40.489435 kubelet[2694]: I0213 21:24:40.489382 2694 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 21:24:40.492084 kubelet[2694]: I0213 21:24:40.490861 2694 server.go:1269] "Started kubelet" Feb 13 21:24:40.492403 kubelet[2694]: I0213 21:24:40.492315 2694 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 21:24:40.494119 kubelet[2694]: I0213 21:24:40.493693 2694 server.go:460] "Adding debug handlers to kubelet server" Feb 13 21:24:40.496516 kubelet[2694]: I0213 21:24:40.496345 2694 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 21:24:40.496830 kubelet[2694]: I0213 21:24:40.496809 2694 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 
13 21:24:40.497792 kubelet[2694]: I0213 21:24:40.497772 2694 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 21:24:40.506189 kubelet[2694]: I0213 21:24:40.505956 2694 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 21:24:40.509702 kubelet[2694]: I0213 21:24:40.509060 2694 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 13 21:24:40.509702 kubelet[2694]: E0213 21:24:40.509331 2694 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"srv-9zhep.gb1.brightbox.com\" not found" Feb 13 21:24:40.522157 kubelet[2694]: I0213 21:24:40.521686 2694 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 13 21:24:40.522157 kubelet[2694]: I0213 21:24:40.521906 2694 reconciler.go:26] "Reconciler: start to sync state" Feb 13 21:24:40.526356 kubelet[2694]: I0213 21:24:40.526287 2694 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 21:24:40.529719 kubelet[2694]: I0213 21:24:40.529677 2694 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 13 21:24:40.529837 kubelet[2694]: I0213 21:24:40.529752 2694 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 21:24:40.529837 kubelet[2694]: I0213 21:24:40.529780 2694 kubelet.go:2321] "Starting kubelet main sync loop" Feb 13 21:24:40.529923 kubelet[2694]: E0213 21:24:40.529848 2694 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 21:24:40.535423 kubelet[2694]: I0213 21:24:40.534885 2694 factory.go:221] Registration of the systemd container factory successfully Feb 13 21:24:40.535806 kubelet[2694]: I0213 21:24:40.535637 2694 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 21:24:40.541766 kubelet[2694]: I0213 21:24:40.541743 2694 factory.go:221] Registration of the containerd container factory successfully Feb 13 21:24:40.577509 kubelet[2694]: E0213 21:24:40.576671 2694 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 21:24:40.630361 kubelet[2694]: E0213 21:24:40.630185 2694 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 21:24:40.638760 kubelet[2694]: I0213 21:24:40.638202 2694 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 21:24:40.638760 kubelet[2694]: I0213 21:24:40.638220 2694 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 21:24:40.638760 kubelet[2694]: I0213 21:24:40.638249 2694 state_mem.go:36] "Initialized new in-memory state store" Feb 13 21:24:40.638760 kubelet[2694]: I0213 21:24:40.638423 2694 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 13 21:24:40.638760 kubelet[2694]: I0213 21:24:40.638434 2694 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 13 21:24:40.638760 kubelet[2694]: I0213 21:24:40.638457 2694 policy_none.go:49] "None policy: Start" Feb 13 21:24:40.639346 kubelet[2694]: I0213 21:24:40.639319 2694 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 21:24:40.639346 kubelet[2694]: I0213 21:24:40.639346 2694 state_mem.go:35] "Initializing new in-memory state store" Feb 13 21:24:40.639517 kubelet[2694]: I0213 21:24:40.639500 2694 state_mem.go:75] "Updated machine memory state" Feb 13 21:24:40.649573 kubelet[2694]: I0213 21:24:40.649533 2694 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 21:24:40.649828 kubelet[2694]: I0213 21:24:40.649774 2694 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 13 21:24:40.649828 kubelet[2694]: I0213 21:24:40.649802 2694 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 21:24:40.650796 kubelet[2694]: I0213 21:24:40.650396 2694 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 21:24:40.771044 kubelet[2694]: 
I0213 21:24:40.770586 2694 kubelet_node_status.go:72] "Attempting to register node" node="srv-9zhep.gb1.brightbox.com" Feb 13 21:24:40.782302 kubelet[2694]: I0213 21:24:40.782181 2694 kubelet_node_status.go:111] "Node was previously registered" node="srv-9zhep.gb1.brightbox.com" Feb 13 21:24:40.782524 kubelet[2694]: I0213 21:24:40.782388 2694 kubelet_node_status.go:75] "Successfully registered node" node="srv-9zhep.gb1.brightbox.com" Feb 13 21:24:40.807907 sudo[2729]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Feb 13 21:24:40.808850 sudo[2729]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Feb 13 21:24:40.852695 kubelet[2694]: W0213 21:24:40.852659 2694 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 13 21:24:40.853270 kubelet[2694]: W0213 21:24:40.853251 2694 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 13 21:24:40.856110 kubelet[2694]: W0213 21:24:40.855203 2694 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 13 21:24:40.924867 kubelet[2694]: I0213 21:24:40.924826 2694 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/109317cc9e188e3f831c2b3735f275e3-ca-certs\") pod \"kube-apiserver-srv-9zhep.gb1.brightbox.com\" (UID: \"109317cc9e188e3f831c2b3735f275e3\") " pod="kube-system/kube-apiserver-srv-9zhep.gb1.brightbox.com" Feb 13 21:24:40.925008 kubelet[2694]: I0213 21:24:40.924953 2694 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/109317cc9e188e3f831c2b3735f275e3-k8s-certs\") pod \"kube-apiserver-srv-9zhep.gb1.brightbox.com\" (UID: \"109317cc9e188e3f831c2b3735f275e3\") " pod="kube-system/kube-apiserver-srv-9zhep.gb1.brightbox.com" Feb 13 21:24:40.925040 kubelet[2694]: I0213 21:24:40.925016 2694 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/16594a3c5ba60f27cfe453e5980bde03-ca-certs\") pod \"kube-controller-manager-srv-9zhep.gb1.brightbox.com\" (UID: \"16594a3c5ba60f27cfe453e5980bde03\") " pod="kube-system/kube-controller-manager-srv-9zhep.gb1.brightbox.com" Feb 13 21:24:40.925077 kubelet[2694]: I0213 21:24:40.925045 2694 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/16594a3c5ba60f27cfe453e5980bde03-flexvolume-dir\") pod \"kube-controller-manager-srv-9zhep.gb1.brightbox.com\" (UID: \"16594a3c5ba60f27cfe453e5980bde03\") " pod="kube-system/kube-controller-manager-srv-9zhep.gb1.brightbox.com" Feb 13 21:24:40.927107 kubelet[2694]: I0213 21:24:40.925138 2694 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/16594a3c5ba60f27cfe453e5980bde03-k8s-certs\") pod \"kube-controller-manager-srv-9zhep.gb1.brightbox.com\" (UID: \"16594a3c5ba60f27cfe453e5980bde03\") " pod="kube-system/kube-controller-manager-srv-9zhep.gb1.brightbox.com" Feb 13 21:24:40.927107 kubelet[2694]: I0213 21:24:40.925171 2694 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fcd818d62929c717e6b5c645ac8c9d22-kubeconfig\") pod \"kube-scheduler-srv-9zhep.gb1.brightbox.com\" (UID: \"fcd818d62929c717e6b5c645ac8c9d22\") " pod="kube-system/kube-scheduler-srv-9zhep.gb1.brightbox.com" Feb 13 21:24:40.927107 kubelet[2694]: 
I0213 21:24:40.925214 2694 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/109317cc9e188e3f831c2b3735f275e3-usr-share-ca-certificates\") pod \"kube-apiserver-srv-9zhep.gb1.brightbox.com\" (UID: \"109317cc9e188e3f831c2b3735f275e3\") " pod="kube-system/kube-apiserver-srv-9zhep.gb1.brightbox.com" Feb 13 21:24:40.927107 kubelet[2694]: I0213 21:24:40.925266 2694 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/16594a3c5ba60f27cfe453e5980bde03-kubeconfig\") pod \"kube-controller-manager-srv-9zhep.gb1.brightbox.com\" (UID: \"16594a3c5ba60f27cfe453e5980bde03\") " pod="kube-system/kube-controller-manager-srv-9zhep.gb1.brightbox.com" Feb 13 21:24:40.927107 kubelet[2694]: I0213 21:24:40.925305 2694 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/16594a3c5ba60f27cfe453e5980bde03-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-9zhep.gb1.brightbox.com\" (UID: \"16594a3c5ba60f27cfe453e5980bde03\") " pod="kube-system/kube-controller-manager-srv-9zhep.gb1.brightbox.com" Feb 13 21:24:41.485412 sudo[2729]: pam_unix(sudo:session): session closed for user root Feb 13 21:24:41.500352 kubelet[2694]: I0213 21:24:41.500167 2694 apiserver.go:52] "Watching apiserver" Feb 13 21:24:41.523737 kubelet[2694]: I0213 21:24:41.522051 2694 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 13 21:24:41.617751 kubelet[2694]: W0213 21:24:41.617337 2694 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 13 21:24:41.617751 kubelet[2694]: E0213 21:24:41.617423 2694 kubelet.go:1915] "Failed creating a mirror 
pod for" err="pods \"kube-apiserver-srv-9zhep.gb1.brightbox.com\" already exists" pod="kube-system/kube-apiserver-srv-9zhep.gb1.brightbox.com" Feb 13 21:24:41.652350 kubelet[2694]: I0213 21:24:41.652163 2694 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-srv-9zhep.gb1.brightbox.com" podStartSLOduration=1.652019753 podStartE2EDuration="1.652019753s" podCreationTimestamp="2025-02-13 21:24:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 21:24:41.651819738 +0000 UTC m=+1.267448457" watchObservedRunningTime="2025-02-13 21:24:41.652019753 +0000 UTC m=+1.267648503" Feb 13 21:24:41.685624 kubelet[2694]: I0213 21:24:41.685521 2694 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-srv-9zhep.gb1.brightbox.com" podStartSLOduration=1.6855011420000001 podStartE2EDuration="1.685501142s" podCreationTimestamp="2025-02-13 21:24:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 21:24:41.672558476 +0000 UTC m=+1.288187204" watchObservedRunningTime="2025-02-13 21:24:41.685501142 +0000 UTC m=+1.301129850" Feb 13 21:24:41.685624 kubelet[2694]: I0213 21:24:41.685625 2694 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-srv-9zhep.gb1.brightbox.com" podStartSLOduration=1.685621819 podStartE2EDuration="1.685621819s" podCreationTimestamp="2025-02-13 21:24:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 21:24:41.683451083 +0000 UTC m=+1.299079892" watchObservedRunningTime="2025-02-13 21:24:41.685621819 +0000 UTC m=+1.301250549" Feb 13 21:24:43.444771 sudo[1749]: pam_unix(sudo:session): session closed for user root Feb 13 
21:24:43.588755 sshd[1748]: Connection closed by 147.75.109.163 port 37280 Feb 13 21:24:43.593321 sshd-session[1746]: pam_unix(sshd:session): session closed for user core Feb 13 21:24:43.602054 systemd[1]: sshd@6-10.244.102.222:22-147.75.109.163:37280.service: Deactivated successfully. Feb 13 21:24:43.608427 systemd[1]: session-9.scope: Deactivated successfully. Feb 13 21:24:43.608767 systemd[1]: session-9.scope: Consumed 5.685s CPU time, 143.3M memory peak, 0B memory swap peak. Feb 13 21:24:43.611863 systemd-logind[1487]: Session 9 logged out. Waiting for processes to exit. Feb 13 21:24:43.614081 systemd-logind[1487]: Removed session 9. Feb 13 21:24:44.747766 kubelet[2694]: I0213 21:24:44.747703 2694 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 13 21:24:44.749592 containerd[1497]: time="2025-02-13T21:24:44.749054518Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 13 21:24:44.751326 kubelet[2694]: I0213 21:24:44.750420 2694 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 13 21:24:45.196416 systemd[1]: Created slice kubepods-besteffort-pod63e7224f_1ba7_40f0_ab4a_7e86091addc8.slice - libcontainer container kubepods-besteffort-pod63e7224f_1ba7_40f0_ab4a_7e86091addc8.slice. Feb 13 21:24:45.210188 systemd[1]: Created slice kubepods-burstable-pod774b0e5f_85a9_4a76_b58b_9a1fcb423763.slice - libcontainer container kubepods-burstable-pod774b0e5f_85a9_4a76_b58b_9a1fcb423763.slice. 
Feb 13 21:24:45.261161 kubelet[2694]: I0213 21:24:45.261051 2694 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/774b0e5f-85a9-4a76-b58b-9a1fcb423763-cilium-cgroup\") pod \"cilium-fxglw\" (UID: \"774b0e5f-85a9-4a76-b58b-9a1fcb423763\") " pod="kube-system/cilium-fxglw" Feb 13 21:24:45.261161 kubelet[2694]: I0213 21:24:45.261138 2694 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/63e7224f-1ba7-40f0-ab4a-7e86091addc8-lib-modules\") pod \"kube-proxy-hq4nh\" (UID: \"63e7224f-1ba7-40f0-ab4a-7e86091addc8\") " pod="kube-system/kube-proxy-hq4nh" Feb 13 21:24:45.261541 kubelet[2694]: I0213 21:24:45.261197 2694 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/774b0e5f-85a9-4a76-b58b-9a1fcb423763-cilium-run\") pod \"cilium-fxglw\" (UID: \"774b0e5f-85a9-4a76-b58b-9a1fcb423763\") " pod="kube-system/cilium-fxglw" Feb 13 21:24:45.261541 kubelet[2694]: I0213 21:24:45.261220 2694 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/774b0e5f-85a9-4a76-b58b-9a1fcb423763-bpf-maps\") pod \"cilium-fxglw\" (UID: \"774b0e5f-85a9-4a76-b58b-9a1fcb423763\") " pod="kube-system/cilium-fxglw" Feb 13 21:24:45.261541 kubelet[2694]: I0213 21:24:45.261258 2694 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/774b0e5f-85a9-4a76-b58b-9a1fcb423763-hostproc\") pod \"cilium-fxglw\" (UID: \"774b0e5f-85a9-4a76-b58b-9a1fcb423763\") " pod="kube-system/cilium-fxglw" Feb 13 21:24:45.261541 kubelet[2694]: I0213 21:24:45.261281 2694 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/774b0e5f-85a9-4a76-b58b-9a1fcb423763-host-proc-sys-net\") pod \"cilium-fxglw\" (UID: \"774b0e5f-85a9-4a76-b58b-9a1fcb423763\") " pod="kube-system/cilium-fxglw" Feb 13 21:24:45.261541 kubelet[2694]: I0213 21:24:45.261305 2694 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55p7w\" (UniqueName: \"kubernetes.io/projected/63e7224f-1ba7-40f0-ab4a-7e86091addc8-kube-api-access-55p7w\") pod \"kube-proxy-hq4nh\" (UID: \"63e7224f-1ba7-40f0-ab4a-7e86091addc8\") " pod="kube-system/kube-proxy-hq4nh" Feb 13 21:24:45.261541 kubelet[2694]: I0213 21:24:45.261436 2694 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/774b0e5f-85a9-4a76-b58b-9a1fcb423763-cilium-config-path\") pod \"cilium-fxglw\" (UID: \"774b0e5f-85a9-4a76-b58b-9a1fcb423763\") " pod="kube-system/cilium-fxglw" Feb 13 21:24:45.263706 kubelet[2694]: I0213 21:24:45.261457 2694 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/774b0e5f-85a9-4a76-b58b-9a1fcb423763-etc-cni-netd\") pod \"cilium-fxglw\" (UID: \"774b0e5f-85a9-4a76-b58b-9a1fcb423763\") " pod="kube-system/cilium-fxglw" Feb 13 21:24:45.263706 kubelet[2694]: I0213 21:24:45.261577 2694 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmvg5\" (UniqueName: \"kubernetes.io/projected/774b0e5f-85a9-4a76-b58b-9a1fcb423763-kube-api-access-pmvg5\") pod \"cilium-fxglw\" (UID: \"774b0e5f-85a9-4a76-b58b-9a1fcb423763\") " pod="kube-system/cilium-fxglw" Feb 13 21:24:45.263706 kubelet[2694]: I0213 21:24:45.261599 2694 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: 
\"kubernetes.io/configmap/63e7224f-1ba7-40f0-ab4a-7e86091addc8-kube-proxy\") pod \"kube-proxy-hq4nh\" (UID: \"63e7224f-1ba7-40f0-ab4a-7e86091addc8\") " pod="kube-system/kube-proxy-hq4nh" Feb 13 21:24:45.263706 kubelet[2694]: I0213 21:24:45.261675 2694 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/774b0e5f-85a9-4a76-b58b-9a1fcb423763-cni-path\") pod \"cilium-fxglw\" (UID: \"774b0e5f-85a9-4a76-b58b-9a1fcb423763\") " pod="kube-system/cilium-fxglw" Feb 13 21:24:45.263706 kubelet[2694]: I0213 21:24:45.261697 2694 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/774b0e5f-85a9-4a76-b58b-9a1fcb423763-xtables-lock\") pod \"cilium-fxglw\" (UID: \"774b0e5f-85a9-4a76-b58b-9a1fcb423763\") " pod="kube-system/cilium-fxglw" Feb 13 21:24:45.263706 kubelet[2694]: I0213 21:24:45.261774 2694 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/774b0e5f-85a9-4a76-b58b-9a1fcb423763-host-proc-sys-kernel\") pod \"cilium-fxglw\" (UID: \"774b0e5f-85a9-4a76-b58b-9a1fcb423763\") " pod="kube-system/cilium-fxglw" Feb 13 21:24:45.263972 kubelet[2694]: I0213 21:24:45.261803 2694 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/774b0e5f-85a9-4a76-b58b-9a1fcb423763-hubble-tls\") pod \"cilium-fxglw\" (UID: \"774b0e5f-85a9-4a76-b58b-9a1fcb423763\") " pod="kube-system/cilium-fxglw" Feb 13 21:24:45.263972 kubelet[2694]: I0213 21:24:45.261826 2694 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/63e7224f-1ba7-40f0-ab4a-7e86091addc8-xtables-lock\") pod \"kube-proxy-hq4nh\" (UID: 
\"63e7224f-1ba7-40f0-ab4a-7e86091addc8\") " pod="kube-system/kube-proxy-hq4nh" Feb 13 21:24:45.263972 kubelet[2694]: I0213 21:24:45.261883 2694 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/774b0e5f-85a9-4a76-b58b-9a1fcb423763-lib-modules\") pod \"cilium-fxglw\" (UID: \"774b0e5f-85a9-4a76-b58b-9a1fcb423763\") " pod="kube-system/cilium-fxglw" Feb 13 21:24:45.263972 kubelet[2694]: I0213 21:24:45.261906 2694 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/774b0e5f-85a9-4a76-b58b-9a1fcb423763-clustermesh-secrets\") pod \"cilium-fxglw\" (UID: \"774b0e5f-85a9-4a76-b58b-9a1fcb423763\") " pod="kube-system/cilium-fxglw" Feb 13 21:24:45.380696 kubelet[2694]: E0213 21:24:45.380646 2694 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Feb 13 21:24:45.380696 kubelet[2694]: E0213 21:24:45.380691 2694 projected.go:194] Error preparing data for projected volume kube-api-access-55p7w for pod kube-system/kube-proxy-hq4nh: configmap "kube-root-ca.crt" not found Feb 13 21:24:45.380852 kubelet[2694]: E0213 21:24:45.380773 2694 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/63e7224f-1ba7-40f0-ab4a-7e86091addc8-kube-api-access-55p7w podName:63e7224f-1ba7-40f0-ab4a-7e86091addc8 nodeName:}" failed. No retries permitted until 2025-02-13 21:24:45.880743394 +0000 UTC m=+5.496372114 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-55p7w" (UniqueName: "kubernetes.io/projected/63e7224f-1ba7-40f0-ab4a-7e86091addc8-kube-api-access-55p7w") pod "kube-proxy-hq4nh" (UID: "63e7224f-1ba7-40f0-ab4a-7e86091addc8") : configmap "kube-root-ca.crt" not found Feb 13 21:24:45.387433 kubelet[2694]: E0213 21:24:45.385855 2694 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Feb 13 21:24:45.387433 kubelet[2694]: E0213 21:24:45.385881 2694 projected.go:194] Error preparing data for projected volume kube-api-access-pmvg5 for pod kube-system/cilium-fxglw: configmap "kube-root-ca.crt" not found Feb 13 21:24:45.387775 kubelet[2694]: E0213 21:24:45.387753 2694 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/774b0e5f-85a9-4a76-b58b-9a1fcb423763-kube-api-access-pmvg5 podName:774b0e5f-85a9-4a76-b58b-9a1fcb423763 nodeName:}" failed. No retries permitted until 2025-02-13 21:24:45.887729741 +0000 UTC m=+5.503358460 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-pmvg5" (UniqueName: "kubernetes.io/projected/774b0e5f-85a9-4a76-b58b-9a1fcb423763-kube-api-access-pmvg5") pod "cilium-fxglw" (UID: "774b0e5f-85a9-4a76-b58b-9a1fcb423763") : configmap "kube-root-ca.crt" not found Feb 13 21:24:45.860937 systemd[1]: Created slice kubepods-besteffort-pod3893dc70_b3e5_448c_9be1_f7198f2a3935.slice - libcontainer container kubepods-besteffort-pod3893dc70_b3e5_448c_9be1_f7198f2a3935.slice. 
Feb 13 21:24:45.866996 kubelet[2694]: I0213 21:24:45.866880 2694 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92crg\" (UniqueName: \"kubernetes.io/projected/3893dc70-b3e5-448c-9be1-f7198f2a3935-kube-api-access-92crg\") pod \"cilium-operator-5d85765b45-ctwmm\" (UID: \"3893dc70-b3e5-448c-9be1-f7198f2a3935\") " pod="kube-system/cilium-operator-5d85765b45-ctwmm"
Feb 13 21:24:45.866996 kubelet[2694]: I0213 21:24:45.866978 2694 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3893dc70-b3e5-448c-9be1-f7198f2a3935-cilium-config-path\") pod \"cilium-operator-5d85765b45-ctwmm\" (UID: \"3893dc70-b3e5-448c-9be1-f7198f2a3935\") " pod="kube-system/cilium-operator-5d85765b45-ctwmm"
Feb 13 21:24:46.109338 containerd[1497]: time="2025-02-13T21:24:46.109214450Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hq4nh,Uid:63e7224f-1ba7-40f0-ab4a-7e86091addc8,Namespace:kube-system,Attempt:0,}"
Feb 13 21:24:46.117309 containerd[1497]: time="2025-02-13T21:24:46.116345571Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fxglw,Uid:774b0e5f-85a9-4a76-b58b-9a1fcb423763,Namespace:kube-system,Attempt:0,}"
Feb 13 21:24:46.161054 containerd[1497]: time="2025-02-13T21:24:46.160734397Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 21:24:46.161054 containerd[1497]: time="2025-02-13T21:24:46.160807272Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 21:24:46.161054 containerd[1497]: time="2025-02-13T21:24:46.160818708Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 21:24:46.161054 containerd[1497]: time="2025-02-13T21:24:46.160916648Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 21:24:46.162012 containerd[1497]: time="2025-02-13T21:24:46.161666626Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 21:24:46.162012 containerd[1497]: time="2025-02-13T21:24:46.161728810Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 21:24:46.162012 containerd[1497]: time="2025-02-13T21:24:46.161745298Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 21:24:46.162012 containerd[1497]: time="2025-02-13T21:24:46.161846107Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 21:24:46.165572 containerd[1497]: time="2025-02-13T21:24:46.165534045Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-ctwmm,Uid:3893dc70-b3e5-448c-9be1-f7198f2a3935,Namespace:kube-system,Attempt:0,}"
Feb 13 21:24:46.197320 systemd[1]: Started cri-containerd-6566a190a89c4cf52afc7bd52df4159140f774c9a21d3faf3780fc471505e5dd.scope - libcontainer container 6566a190a89c4cf52afc7bd52df4159140f774c9a21d3faf3780fc471505e5dd.
Feb 13 21:24:46.199362 systemd[1]: Started cri-containerd-fdae03eed7c29d4dac384e94e6795d4ddbb2298d93d4eb83e14324b1a1a7ba5c.scope - libcontainer container fdae03eed7c29d4dac384e94e6795d4ddbb2298d93d4eb83e14324b1a1a7ba5c.
Feb 13 21:24:46.218401 containerd[1497]: time="2025-02-13T21:24:46.218168885Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 21:24:46.218401 containerd[1497]: time="2025-02-13T21:24:46.218244274Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 21:24:46.218401 containerd[1497]: time="2025-02-13T21:24:46.218255853Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 21:24:46.219475 containerd[1497]: time="2025-02-13T21:24:46.218430277Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 21:24:46.244270 containerd[1497]: time="2025-02-13T21:24:46.244230912Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fxglw,Uid:774b0e5f-85a9-4a76-b58b-9a1fcb423763,Namespace:kube-system,Attempt:0,} returns sandbox id \"6566a190a89c4cf52afc7bd52df4159140f774c9a21d3faf3780fc471505e5dd\""
Feb 13 21:24:46.246032 containerd[1497]: time="2025-02-13T21:24:46.246001397Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hq4nh,Uid:63e7224f-1ba7-40f0-ab4a-7e86091addc8,Namespace:kube-system,Attempt:0,} returns sandbox id \"fdae03eed7c29d4dac384e94e6795d4ddbb2298d93d4eb83e14324b1a1a7ba5c\""
Feb 13 21:24:46.251621 containerd[1497]: time="2025-02-13T21:24:46.250058569Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Feb 13 21:24:46.253476 containerd[1497]: time="2025-02-13T21:24:46.253240964Z" level=info msg="CreateContainer within sandbox \"fdae03eed7c29d4dac384e94e6795d4ddbb2298d93d4eb83e14324b1a1a7ba5c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Feb 13 21:24:46.259275 systemd[1]: Started cri-containerd-074724b344807397c51a7404144d387cf7a46ae5aa428011105d29fb19971bcb.scope - libcontainer container 074724b344807397c51a7404144d387cf7a46ae5aa428011105d29fb19971bcb.
Feb 13 21:24:46.271364 containerd[1497]: time="2025-02-13T21:24:46.271317678Z" level=info msg="CreateContainer within sandbox \"fdae03eed7c29d4dac384e94e6795d4ddbb2298d93d4eb83e14324b1a1a7ba5c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"cab2174a81641c804a1f25ca0502beb33f9ca5e770fc995a6dc3e49fc9ae4b46\""
Feb 13 21:24:46.272728 containerd[1497]: time="2025-02-13T21:24:46.271872361Z" level=info msg="StartContainer for \"cab2174a81641c804a1f25ca0502beb33f9ca5e770fc995a6dc3e49fc9ae4b46\""
Feb 13 21:24:46.319315 systemd[1]: Started cri-containerd-cab2174a81641c804a1f25ca0502beb33f9ca5e770fc995a6dc3e49fc9ae4b46.scope - libcontainer container cab2174a81641c804a1f25ca0502beb33f9ca5e770fc995a6dc3e49fc9ae4b46.
Feb 13 21:24:46.320893 containerd[1497]: time="2025-02-13T21:24:46.320847687Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-ctwmm,Uid:3893dc70-b3e5-448c-9be1-f7198f2a3935,Namespace:kube-system,Attempt:0,} returns sandbox id \"074724b344807397c51a7404144d387cf7a46ae5aa428011105d29fb19971bcb\""
Feb 13 21:24:46.353026 containerd[1497]: time="2025-02-13T21:24:46.352932309Z" level=info msg="StartContainer for \"cab2174a81641c804a1f25ca0502beb33f9ca5e770fc995a6dc3e49fc9ae4b46\" returns successfully"
Feb 13 21:24:46.655621 kubelet[2694]: I0213 21:24:46.655491 2694 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-hq4nh" podStartSLOduration=1.655460709 podStartE2EDuration="1.655460709s" podCreationTimestamp="2025-02-13 21:24:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 21:24:46.654926792 +0000 UTC m=+6.270555515" watchObservedRunningTime="2025-02-13 21:24:46.655460709 +0000 UTC m=+6.271089472"
Feb 13 21:24:51.964140 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1088410156.mount: Deactivated successfully.
Feb 13 21:24:54.275214 containerd[1497]: time="2025-02-13T21:24:54.275114184Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 21:24:54.276985 containerd[1497]: time="2025-02-13T21:24:54.276926050Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503"
Feb 13 21:24:54.277463 containerd[1497]: time="2025-02-13T21:24:54.277416397Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 21:24:54.279339 containerd[1497]: time="2025-02-13T21:24:54.279287423Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 8.029137297s"
Feb 13 21:24:54.279339 containerd[1497]: time="2025-02-13T21:24:54.279327944Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Feb 13 21:24:54.281784 containerd[1497]: time="2025-02-13T21:24:54.281432301Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Feb 13 21:24:54.282867 containerd[1497]: time="2025-02-13T21:24:54.282753995Z" level=info msg="CreateContainer within sandbox \"6566a190a89c4cf52afc7bd52df4159140f774c9a21d3faf3780fc471505e5dd\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 13 21:24:54.352160 containerd[1497]: time="2025-02-13T21:24:54.349453134Z" level=info msg="CreateContainer within sandbox \"6566a190a89c4cf52afc7bd52df4159140f774c9a21d3faf3780fc471505e5dd\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"563084c4abedf8047de2648a31147cfcee98f34ae2dcc749fa4dfa1aa9c85d05\""
Feb 13 21:24:54.352160 containerd[1497]: time="2025-02-13T21:24:54.351647176Z" level=info msg="StartContainer for \"563084c4abedf8047de2648a31147cfcee98f34ae2dcc749fa4dfa1aa9c85d05\""
Feb 13 21:24:54.467450 systemd[1]: Started cri-containerd-563084c4abedf8047de2648a31147cfcee98f34ae2dcc749fa4dfa1aa9c85d05.scope - libcontainer container 563084c4abedf8047de2648a31147cfcee98f34ae2dcc749fa4dfa1aa9c85d05.
Feb 13 21:24:54.498745 containerd[1497]: time="2025-02-13T21:24:54.498380371Z" level=info msg="StartContainer for \"563084c4abedf8047de2648a31147cfcee98f34ae2dcc749fa4dfa1aa9c85d05\" returns successfully"
Feb 13 21:24:54.509247 systemd[1]: cri-containerd-563084c4abedf8047de2648a31147cfcee98f34ae2dcc749fa4dfa1aa9c85d05.scope: Deactivated successfully.
Feb 13 21:24:54.577538 containerd[1497]: time="2025-02-13T21:24:54.561329281Z" level=info msg="shim disconnected" id=563084c4abedf8047de2648a31147cfcee98f34ae2dcc749fa4dfa1aa9c85d05 namespace=k8s.io
Feb 13 21:24:54.577538 containerd[1497]: time="2025-02-13T21:24:54.577469044Z" level=warning msg="cleaning up after shim disconnected" id=563084c4abedf8047de2648a31147cfcee98f34ae2dcc749fa4dfa1aa9c85d05 namespace=k8s.io
Feb 13 21:24:54.577538 containerd[1497]: time="2025-02-13T21:24:54.577486839Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 21:24:54.667823 containerd[1497]: time="2025-02-13T21:24:54.667746615Z" level=info msg="CreateContainer within sandbox \"6566a190a89c4cf52afc7bd52df4159140f774c9a21d3faf3780fc471505e5dd\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb 13 21:24:54.681156 containerd[1497]: time="2025-02-13T21:24:54.681046294Z" level=info msg="CreateContainer within sandbox \"6566a190a89c4cf52afc7bd52df4159140f774c9a21d3faf3780fc471505e5dd\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"42580a693136f1712b130776df6ba11d25994b8e0c0c4bbf841e5739137915da\""
Feb 13 21:24:54.684220 containerd[1497]: time="2025-02-13T21:24:54.683239932Z" level=info msg="StartContainer for \"42580a693136f1712b130776df6ba11d25994b8e0c0c4bbf841e5739137915da\""
Feb 13 21:24:54.716296 systemd[1]: Started cri-containerd-42580a693136f1712b130776df6ba11d25994b8e0c0c4bbf841e5739137915da.scope - libcontainer container 42580a693136f1712b130776df6ba11d25994b8e0c0c4bbf841e5739137915da.
Feb 13 21:24:54.765506 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 13 21:24:54.765760 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Feb 13 21:24:54.765868 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Feb 13 21:24:54.774073 containerd[1497]: time="2025-02-13T21:24:54.774036326Z" level=info msg="StartContainer for \"42580a693136f1712b130776df6ba11d25994b8e0c0c4bbf841e5739137915da\" returns successfully"
Feb 13 21:24:54.774728 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 21:24:54.774952 systemd[1]: cri-containerd-42580a693136f1712b130776df6ba11d25994b8e0c0c4bbf841e5739137915da.scope: Deactivated successfully.
Feb 13 21:24:54.809893 containerd[1497]: time="2025-02-13T21:24:54.809347117Z" level=info msg="shim disconnected" id=42580a693136f1712b130776df6ba11d25994b8e0c0c4bbf841e5739137915da namespace=k8s.io
Feb 13 21:24:54.809893 containerd[1497]: time="2025-02-13T21:24:54.809406262Z" level=warning msg="cleaning up after shim disconnected" id=42580a693136f1712b130776df6ba11d25994b8e0c0c4bbf841e5739137915da namespace=k8s.io
Feb 13 21:24:54.809893 containerd[1497]: time="2025-02-13T21:24:54.809415611Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 21:24:54.813025 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 21:24:54.829689 containerd[1497]: time="2025-02-13T21:24:54.829556160Z" level=warning msg="cleanup warnings time=\"2025-02-13T21:24:54Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Feb 13 21:24:55.343828 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-563084c4abedf8047de2648a31147cfcee98f34ae2dcc749fa4dfa1aa9c85d05-rootfs.mount: Deactivated successfully.
Feb 13 21:24:55.675471 containerd[1497]: time="2025-02-13T21:24:55.675059622Z" level=info msg="CreateContainer within sandbox \"6566a190a89c4cf52afc7bd52df4159140f774c9a21d3faf3780fc471505e5dd\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 13 21:24:55.762354 containerd[1497]: time="2025-02-13T21:24:55.762288434Z" level=info msg="CreateContainer within sandbox \"6566a190a89c4cf52afc7bd52df4159140f774c9a21d3faf3780fc471505e5dd\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"08ad03de3a506d2b44cdab9154fdc5d2d7a44ee7195466c261128deccb6c5733\""
Feb 13 21:24:55.763850 containerd[1497]: time="2025-02-13T21:24:55.763647969Z" level=info msg="StartContainer for \"08ad03de3a506d2b44cdab9154fdc5d2d7a44ee7195466c261128deccb6c5733\""
Feb 13 21:24:55.809285 systemd[1]: Started cri-containerd-08ad03de3a506d2b44cdab9154fdc5d2d7a44ee7195466c261128deccb6c5733.scope - libcontainer container 08ad03de3a506d2b44cdab9154fdc5d2d7a44ee7195466c261128deccb6c5733.
Feb 13 21:24:55.851817 containerd[1497]: time="2025-02-13T21:24:55.851760625Z" level=info msg="StartContainer for \"08ad03de3a506d2b44cdab9154fdc5d2d7a44ee7195466c261128deccb6c5733\" returns successfully"
Feb 13 21:24:55.857519 systemd[1]: cri-containerd-08ad03de3a506d2b44cdab9154fdc5d2d7a44ee7195466c261128deccb6c5733.scope: Deactivated successfully.
Feb 13 21:24:55.884211 containerd[1497]: time="2025-02-13T21:24:55.884140877Z" level=info msg="shim disconnected" id=08ad03de3a506d2b44cdab9154fdc5d2d7a44ee7195466c261128deccb6c5733 namespace=k8s.io
Feb 13 21:24:55.884211 containerd[1497]: time="2025-02-13T21:24:55.884204559Z" level=warning msg="cleaning up after shim disconnected" id=08ad03de3a506d2b44cdab9154fdc5d2d7a44ee7195466c261128deccb6c5733 namespace=k8s.io
Feb 13 21:24:55.884211 containerd[1497]: time="2025-02-13T21:24:55.884214471Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 21:24:56.344305 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-08ad03de3a506d2b44cdab9154fdc5d2d7a44ee7195466c261128deccb6c5733-rootfs.mount: Deactivated successfully.
Feb 13 21:24:56.681044 containerd[1497]: time="2025-02-13T21:24:56.680989563Z" level=info msg="CreateContainer within sandbox \"6566a190a89c4cf52afc7bd52df4159140f774c9a21d3faf3780fc471505e5dd\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 13 21:24:56.709766 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount186520898.mount: Deactivated successfully.
Feb 13 21:24:56.719576 containerd[1497]: time="2025-02-13T21:24:56.719526613Z" level=info msg="CreateContainer within sandbox \"6566a190a89c4cf52afc7bd52df4159140f774c9a21d3faf3780fc471505e5dd\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"84a84d168ccdd26ff1f64700236b923d888691fd281d6663007a66af1a8a904e\""
Feb 13 21:24:56.720739 containerd[1497]: time="2025-02-13T21:24:56.720672859Z" level=info msg="StartContainer for \"84a84d168ccdd26ff1f64700236b923d888691fd281d6663007a66af1a8a904e\""
Feb 13 21:24:56.781829 systemd[1]: Started cri-containerd-84a84d168ccdd26ff1f64700236b923d888691fd281d6663007a66af1a8a904e.scope - libcontainer container 84a84d168ccdd26ff1f64700236b923d888691fd281d6663007a66af1a8a904e.
Feb 13 21:24:56.830944 systemd[1]: cri-containerd-84a84d168ccdd26ff1f64700236b923d888691fd281d6663007a66af1a8a904e.scope: Deactivated successfully.
Feb 13 21:24:56.836811 containerd[1497]: time="2025-02-13T21:24:56.836362401Z" level=info msg="StartContainer for \"84a84d168ccdd26ff1f64700236b923d888691fd281d6663007a66af1a8a904e\" returns successfully"
Feb 13 21:24:56.883322 containerd[1497]: time="2025-02-13T21:24:56.883217206Z" level=info msg="shim disconnected" id=84a84d168ccdd26ff1f64700236b923d888691fd281d6663007a66af1a8a904e namespace=k8s.io
Feb 13 21:24:56.883322 containerd[1497]: time="2025-02-13T21:24:56.883311453Z" level=warning msg="cleaning up after shim disconnected" id=84a84d168ccdd26ff1f64700236b923d888691fd281d6663007a66af1a8a904e namespace=k8s.io
Feb 13 21:24:56.883322 containerd[1497]: time="2025-02-13T21:24:56.883326574Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 21:24:57.043071 containerd[1497]: time="2025-02-13T21:24:57.042074233Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 21:24:57.044838 containerd[1497]: time="2025-02-13T21:24:57.044774167Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
Feb 13 21:24:57.045314 containerd[1497]: time="2025-02-13T21:24:57.045279613Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 21:24:57.048029 containerd[1497]: time="2025-02-13T21:24:57.048003812Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.766499871s"
Feb 13 21:24:57.048178 containerd[1497]: time="2025-02-13T21:24:57.048163187Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Feb 13 21:24:57.050605 containerd[1497]: time="2025-02-13T21:24:57.050577263Z" level=info msg="CreateContainer within sandbox \"074724b344807397c51a7404144d387cf7a46ae5aa428011105d29fb19971bcb\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Feb 13 21:24:57.061040 containerd[1497]: time="2025-02-13T21:24:57.060995407Z" level=info msg="CreateContainer within sandbox \"074724b344807397c51a7404144d387cf7a46ae5aa428011105d29fb19971bcb\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"2aff12ed38f397a6115491c8c01916204030321ecae25f3c5b54a0baeb730dcf\""
Feb 13 21:24:57.061687 containerd[1497]: time="2025-02-13T21:24:57.061656801Z" level=info msg="StartContainer for \"2aff12ed38f397a6115491c8c01916204030321ecae25f3c5b54a0baeb730dcf\""
Feb 13 21:24:57.099298 systemd[1]: Started cri-containerd-2aff12ed38f397a6115491c8c01916204030321ecae25f3c5b54a0baeb730dcf.scope - libcontainer container 2aff12ed38f397a6115491c8c01916204030321ecae25f3c5b54a0baeb730dcf.
Feb 13 21:24:57.133970 containerd[1497]: time="2025-02-13T21:24:57.133898324Z" level=info msg="StartContainer for \"2aff12ed38f397a6115491c8c01916204030321ecae25f3c5b54a0baeb730dcf\" returns successfully"
Feb 13 21:24:57.344458 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-84a84d168ccdd26ff1f64700236b923d888691fd281d6663007a66af1a8a904e-rootfs.mount: Deactivated successfully.
Feb 13 21:24:57.693051 containerd[1497]: time="2025-02-13T21:24:57.692901477Z" level=info msg="CreateContainer within sandbox \"6566a190a89c4cf52afc7bd52df4159140f774c9a21d3faf3780fc471505e5dd\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 13 21:24:57.713257 containerd[1497]: time="2025-02-13T21:24:57.711658649Z" level=info msg="CreateContainer within sandbox \"6566a190a89c4cf52afc7bd52df4159140f774c9a21d3faf3780fc471505e5dd\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"60269977aab193a13bb39dfa1ac77da6af8fa8c258dc44669831269fb1e1feac\""
Feb 13 21:24:57.713803 containerd[1497]: time="2025-02-13T21:24:57.713548217Z" level=info msg="StartContainer for \"60269977aab193a13bb39dfa1ac77da6af8fa8c258dc44669831269fb1e1feac\""
Feb 13 21:24:57.763621 systemd[1]: run-containerd-runc-k8s.io-60269977aab193a13bb39dfa1ac77da6af8fa8c258dc44669831269fb1e1feac-runc.EY3akY.mount: Deactivated successfully.
Feb 13 21:24:57.773278 systemd[1]: Started cri-containerd-60269977aab193a13bb39dfa1ac77da6af8fa8c258dc44669831269fb1e1feac.scope - libcontainer container 60269977aab193a13bb39dfa1ac77da6af8fa8c258dc44669831269fb1e1feac.
Feb 13 21:24:57.861158 containerd[1497]: time="2025-02-13T21:24:57.860784992Z" level=info msg="StartContainer for \"60269977aab193a13bb39dfa1ac77da6af8fa8c258dc44669831269fb1e1feac\" returns successfully"
Feb 13 21:24:57.938073 kubelet[2694]: I0213 21:24:57.937995 2694 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-ctwmm" podStartSLOduration=2.211449226 podStartE2EDuration="12.937943305s" podCreationTimestamp="2025-02-13 21:24:45 +0000 UTC" firstStartedPulling="2025-02-13 21:24:46.322345449 +0000 UTC m=+5.937974154" lastFinishedPulling="2025-02-13 21:24:57.048839514 +0000 UTC m=+16.664468233" observedRunningTime="2025-02-13 21:24:57.800298595 +0000 UTC m=+17.415927392" watchObservedRunningTime="2025-02-13 21:24:57.937943305 +0000 UTC m=+17.553572009"
Feb 13 21:24:58.212157 kubelet[2694]: I0213 21:24:58.211872 2694 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
Feb 13 21:24:58.291020 systemd[1]: Created slice kubepods-burstable-pod44da46d1_0429_4582_b83f_ab877ac44830.slice - libcontainer container kubepods-burstable-pod44da46d1_0429_4582_b83f_ab877ac44830.slice.
Feb 13 21:24:58.302857 systemd[1]: Created slice kubepods-burstable-poda4afcc03_1c2f_4982_a233_e08cfe11eb1c.slice - libcontainer container kubepods-burstable-poda4afcc03_1c2f_4982_a233_e08cfe11eb1c.slice.
Feb 13 21:24:58.365406 kubelet[2694]: I0213 21:24:58.365362 2694 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/44da46d1-0429-4582-b83f-ab877ac44830-config-volume\") pod \"coredns-6f6b679f8f-2h5lc\" (UID: \"44da46d1-0429-4582-b83f-ab877ac44830\") " pod="kube-system/coredns-6f6b679f8f-2h5lc"
Feb 13 21:24:58.365406 kubelet[2694]: I0213 21:24:58.365407 2694 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a4afcc03-1c2f-4982-a233-e08cfe11eb1c-config-volume\") pod \"coredns-6f6b679f8f-dgtt9\" (UID: \"a4afcc03-1c2f-4982-a233-e08cfe11eb1c\") " pod="kube-system/coredns-6f6b679f8f-dgtt9"
Feb 13 21:24:58.365607 kubelet[2694]: I0213 21:24:58.365430 2694 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwtdx\" (UniqueName: \"kubernetes.io/projected/44da46d1-0429-4582-b83f-ab877ac44830-kube-api-access-gwtdx\") pod \"coredns-6f6b679f8f-2h5lc\" (UID: \"44da46d1-0429-4582-b83f-ab877ac44830\") " pod="kube-system/coredns-6f6b679f8f-2h5lc"
Feb 13 21:24:58.365607 kubelet[2694]: I0213 21:24:58.365453 2694 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhs4l\" (UniqueName: \"kubernetes.io/projected/a4afcc03-1c2f-4982-a233-e08cfe11eb1c-kube-api-access-bhs4l\") pod \"coredns-6f6b679f8f-dgtt9\" (UID: \"a4afcc03-1c2f-4982-a233-e08cfe11eb1c\") " pod="kube-system/coredns-6f6b679f8f-dgtt9"
Feb 13 21:24:58.596447 containerd[1497]: time="2025-02-13T21:24:58.595822645Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-2h5lc,Uid:44da46d1-0429-4582-b83f-ab877ac44830,Namespace:kube-system,Attempt:0,}"
Feb 13 21:24:58.606881 containerd[1497]: time="2025-02-13T21:24:58.606579293Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-dgtt9,Uid:a4afcc03-1c2f-4982-a233-e08cfe11eb1c,Namespace:kube-system,Attempt:0,}"
Feb 13 21:25:00.446657 systemd-networkd[1419]: cilium_host: Link UP
Feb 13 21:25:00.447035 systemd-networkd[1419]: cilium_net: Link UP
Feb 13 21:25:00.447044 systemd-networkd[1419]: cilium_net: Gained carrier
Feb 13 21:25:00.447905 systemd-networkd[1419]: cilium_host: Gained carrier
Feb 13 21:25:00.591994 systemd-networkd[1419]: cilium_vxlan: Link UP
Feb 13 21:25:00.592002 systemd-networkd[1419]: cilium_vxlan: Gained carrier
Feb 13 21:25:00.660976 systemd-networkd[1419]: cilium_net: Gained IPv6LL
Feb 13 21:25:00.965250 kernel: NET: Registered PF_ALG protocol family
Feb 13 21:25:01.204268 systemd-networkd[1419]: cilium_host: Gained IPv6LL
Feb 13 21:25:01.803974 systemd-networkd[1419]: lxc_health: Link UP
Feb 13 21:25:01.813657 systemd-networkd[1419]: lxc_health: Gained carrier
Feb 13 21:25:01.909198 systemd-networkd[1419]: cilium_vxlan: Gained IPv6LL
Feb 13 21:25:02.154901 kubelet[2694]: I0213 21:25:02.154383 2694 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-fxglw" podStartSLOduration=9.121615583 podStartE2EDuration="17.154359495s" podCreationTimestamp="2025-02-13 21:24:45 +0000 UTC" firstStartedPulling="2025-02-13 21:24:46.248425541 +0000 UTC m=+5.864054245" lastFinishedPulling="2025-02-13 21:24:54.28116945 +0000 UTC m=+13.896798157" observedRunningTime="2025-02-13 21:24:58.749718901 +0000 UTC m=+18.365347605" watchObservedRunningTime="2025-02-13 21:25:02.154359495 +0000 UTC m=+21.769988225"
Feb 13 21:25:02.193086 systemd-networkd[1419]: lxc0629558c8472: Link UP
Feb 13 21:25:02.199167 kernel: eth0: renamed from tmp1abe3
Feb 13 21:25:02.205043 systemd-networkd[1419]: lxc0629558c8472: Gained carrier
Feb 13 21:25:02.233844 systemd-networkd[1419]: lxc79a528480c76: Link UP
Feb 13 21:25:02.237987 kernel: eth0: renamed from tmpec670
Feb 13 21:25:02.249953 systemd-networkd[1419]: lxc79a528480c76: Gained carrier
Feb 13 21:25:02.996436 systemd-networkd[1419]: lxc_health: Gained IPv6LL
Feb 13 21:25:03.508467 systemd-networkd[1419]: lxc79a528480c76: Gained IPv6LL
Feb 13 21:25:03.572583 systemd-networkd[1419]: lxc0629558c8472: Gained IPv6LL
Feb 13 21:25:06.512231 containerd[1497]: time="2025-02-13T21:25:06.511376563Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 21:25:06.512231 containerd[1497]: time="2025-02-13T21:25:06.511486997Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 21:25:06.512231 containerd[1497]: time="2025-02-13T21:25:06.511503405Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 21:25:06.512231 containerd[1497]: time="2025-02-13T21:25:06.511626224Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 21:25:06.553413 containerd[1497]: time="2025-02-13T21:25:06.551360557Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 21:25:06.553413 containerd[1497]: time="2025-02-13T21:25:06.551454426Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 21:25:06.553413 containerd[1497]: time="2025-02-13T21:25:06.551472168Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 21:25:06.553413 containerd[1497]: time="2025-02-13T21:25:06.551573360Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 21:25:06.563137 systemd[1]: Started cri-containerd-1abe33126bfa6cfaf6225671db84ed74c9dd83f33818de8d579fa1aeecd1e555.scope - libcontainer container 1abe33126bfa6cfaf6225671db84ed74c9dd83f33818de8d579fa1aeecd1e555.
Feb 13 21:25:06.596364 systemd[1]: Started cri-containerd-ec67085cc7c06565fad4ceabc6c6e7c50ea04c7d3248e8cb2eb560e92e594f40.scope - libcontainer container ec67085cc7c06565fad4ceabc6c6e7c50ea04c7d3248e8cb2eb560e92e594f40.
Feb 13 21:25:06.675121 containerd[1497]: time="2025-02-13T21:25:06.674989874Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-2h5lc,Uid:44da46d1-0429-4582-b83f-ab877ac44830,Namespace:kube-system,Attempt:0,} returns sandbox id \"1abe33126bfa6cfaf6225671db84ed74c9dd83f33818de8d579fa1aeecd1e555\""
Feb 13 21:25:06.680117 containerd[1497]: time="2025-02-13T21:25:06.679941475Z" level=info msg="CreateContainer within sandbox \"1abe33126bfa6cfaf6225671db84ed74c9dd83f33818de8d579fa1aeecd1e555\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Feb 13 21:25:06.696668 containerd[1497]: time="2025-02-13T21:25:06.696306748Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-dgtt9,Uid:a4afcc03-1c2f-4982-a233-e08cfe11eb1c,Namespace:kube-system,Attempt:0,} returns sandbox id \"ec67085cc7c06565fad4ceabc6c6e7c50ea04c7d3248e8cb2eb560e92e594f40\""
Feb 13 21:25:06.708619 containerd[1497]: time="2025-02-13T21:25:06.707875613Z" level=info msg="CreateContainer within sandbox \"ec67085cc7c06565fad4ceabc6c6e7c50ea04c7d3248e8cb2eb560e92e594f40\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Feb 13 21:25:06.710813 containerd[1497]: time="2025-02-13T21:25:06.710781069Z" level=info msg="CreateContainer within sandbox \"1abe33126bfa6cfaf6225671db84ed74c9dd83f33818de8d579fa1aeecd1e555\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7d343746d99def90a445cb5896bf081c477e1c9c8a9806657a0beb365fab4f07\""
Feb 13 21:25:06.725966 containerd[1497]: time="2025-02-13T21:25:06.724580960Z" level=info msg="StartContainer for \"7d343746d99def90a445cb5896bf081c477e1c9c8a9806657a0beb365fab4f07\""
Feb 13 21:25:06.729427 containerd[1497]: time="2025-02-13T21:25:06.728156544Z" level=info msg="CreateContainer within sandbox \"ec67085cc7c06565fad4ceabc6c6e7c50ea04c7d3248e8cb2eb560e92e594f40\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f8134a82f9ff047d57501cbbbac762565aeb36b8c60e02e068ba09c6b81df936\""
Feb 13 21:25:06.729427 containerd[1497]: time="2025-02-13T21:25:06.729419239Z" level=info msg="StartContainer for \"f8134a82f9ff047d57501cbbbac762565aeb36b8c60e02e068ba09c6b81df936\""
Feb 13 21:25:06.769458 systemd[1]: Started cri-containerd-7d343746d99def90a445cb5896bf081c477e1c9c8a9806657a0beb365fab4f07.scope - libcontainer container 7d343746d99def90a445cb5896bf081c477e1c9c8a9806657a0beb365fab4f07.
Feb 13 21:25:06.778297 systemd[1]: Started cri-containerd-f8134a82f9ff047d57501cbbbac762565aeb36b8c60e02e068ba09c6b81df936.scope - libcontainer container f8134a82f9ff047d57501cbbbac762565aeb36b8c60e02e068ba09c6b81df936.
Feb 13 21:25:06.811978 containerd[1497]: time="2025-02-13T21:25:06.811874753Z" level=info msg="StartContainer for \"7d343746d99def90a445cb5896bf081c477e1c9c8a9806657a0beb365fab4f07\" returns successfully"
Feb 13 21:25:06.817952 containerd[1497]: time="2025-02-13T21:25:06.817913841Z" level=info msg="StartContainer for \"f8134a82f9ff047d57501cbbbac762565aeb36b8c60e02e068ba09c6b81df936\" returns successfully"
Feb 13 21:25:07.776970 kubelet[2694]: I0213 21:25:07.776891 2694 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-dgtt9" podStartSLOduration=22.776844587 podStartE2EDuration="22.776844587s" podCreationTimestamp="2025-02-13 21:24:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 21:25:07.773942328 +0000 UTC m=+27.389571112" watchObservedRunningTime="2025-02-13 21:25:07.776844587 +0000 UTC m=+27.392473316"
Feb 13 21:25:07.793917 kubelet[2694]: I0213 21:25:07.793848 2694 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-2h5lc" podStartSLOduration=22.793824367 podStartE2EDuration="22.793824367s" podCreationTimestamp="2025-02-13 21:24:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 21:25:07.791624924 +0000 UTC m=+27.407253653" watchObservedRunningTime="2025-02-13 21:25:07.793824367 +0000 UTC m=+27.409453092"
Feb 13 21:25:52.389380 systemd[1]: Started sshd@7-10.244.102.222:22-147.75.109.163:54542.service - OpenSSH per-connection server daemon (147.75.109.163:54542).
Feb 13 21:25:53.336347 sshd[4082]: Accepted publickey for core from 147.75.109.163 port 54542 ssh2: RSA SHA256:ulgBgUPlADOweaxhAmkTx/EhcRWsA2XzxJSff9bgRRQ Feb 13 21:25:53.340902 sshd-session[4082]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 21:25:53.355297 systemd-logind[1487]: New session 10 of user core. Feb 13 21:25:53.363188 systemd[1]: Started session-10.scope - Session 10 of User core. Feb 13 21:25:54.442050 sshd[4084]: Connection closed by 147.75.109.163 port 54542 Feb 13 21:25:54.445037 sshd-session[4082]: pam_unix(sshd:session): session closed for user core Feb 13 21:25:54.454364 systemd-logind[1487]: Session 10 logged out. Waiting for processes to exit. Feb 13 21:25:54.457747 systemd[1]: sshd@7-10.244.102.222:22-147.75.109.163:54542.service: Deactivated successfully. Feb 13 21:25:54.460461 systemd[1]: session-10.scope: Deactivated successfully. Feb 13 21:25:54.461937 systemd-logind[1487]: Removed session 10. Feb 13 21:25:59.612620 systemd[1]: Started sshd@8-10.244.102.222:22-147.75.109.163:39682.service - OpenSSH per-connection server daemon (147.75.109.163:39682). Feb 13 21:26:00.507184 sshd[4097]: Accepted publickey for core from 147.75.109.163 port 39682 ssh2: RSA SHA256:ulgBgUPlADOweaxhAmkTx/EhcRWsA2XzxJSff9bgRRQ Feb 13 21:26:00.510053 sshd-session[4097]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 21:26:00.520097 systemd-logind[1487]: New session 11 of user core. Feb 13 21:26:00.526348 systemd[1]: Started session-11.scope - Session 11 of User core. Feb 13 21:26:01.230077 sshd[4099]: Connection closed by 147.75.109.163 port 39682 Feb 13 21:26:01.231409 sshd-session[4097]: pam_unix(sshd:session): session closed for user core Feb 13 21:26:01.241792 systemd-logind[1487]: Session 11 logged out. Waiting for processes to exit. Feb 13 21:26:01.242357 systemd[1]: sshd@8-10.244.102.222:22-147.75.109.163:39682.service: Deactivated successfully. 
Feb 13 21:26:01.245970 systemd[1]: session-11.scope: Deactivated successfully. Feb 13 21:26:01.247502 systemd-logind[1487]: Removed session 11. Feb 13 21:26:06.401508 systemd[1]: Started sshd@9-10.244.102.222:22-147.75.109.163:39688.service - OpenSSH per-connection server daemon (147.75.109.163:39688). Feb 13 21:26:07.326436 sshd[4113]: Accepted publickey for core from 147.75.109.163 port 39688 ssh2: RSA SHA256:ulgBgUPlADOweaxhAmkTx/EhcRWsA2XzxJSff9bgRRQ Feb 13 21:26:07.328290 sshd-session[4113]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 21:26:07.335764 systemd-logind[1487]: New session 12 of user core. Feb 13 21:26:07.349373 systemd[1]: Started session-12.scope - Session 12 of User core. Feb 13 21:26:08.038961 sshd[4115]: Connection closed by 147.75.109.163 port 39688 Feb 13 21:26:08.040509 sshd-session[4113]: pam_unix(sshd:session): session closed for user core Feb 13 21:26:08.054332 systemd[1]: sshd@9-10.244.102.222:22-147.75.109.163:39688.service: Deactivated successfully. Feb 13 21:26:08.058924 systemd[1]: session-12.scope: Deactivated successfully. Feb 13 21:26:08.060255 systemd-logind[1487]: Session 12 logged out. Waiting for processes to exit. Feb 13 21:26:08.061736 systemd-logind[1487]: Removed session 12. Feb 13 21:26:13.206770 systemd[1]: Started sshd@10-10.244.102.222:22-147.75.109.163:38298.service - OpenSSH per-connection server daemon (147.75.109.163:38298). Feb 13 21:26:14.117601 sshd[4127]: Accepted publickey for core from 147.75.109.163 port 38298 ssh2: RSA SHA256:ulgBgUPlADOweaxhAmkTx/EhcRWsA2XzxJSff9bgRRQ Feb 13 21:26:14.121185 sshd-session[4127]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 21:26:14.131413 systemd-logind[1487]: New session 13 of user core. Feb 13 21:26:14.139418 systemd[1]: Started session-13.scope - Session 13 of User core. 
Feb 13 21:26:14.837311 sshd[4129]: Connection closed by 147.75.109.163 port 38298 Feb 13 21:26:14.838915 sshd-session[4127]: pam_unix(sshd:session): session closed for user core Feb 13 21:26:14.850772 systemd-logind[1487]: Session 13 logged out. Waiting for processes to exit. Feb 13 21:26:14.851787 systemd[1]: sshd@10-10.244.102.222:22-147.75.109.163:38298.service: Deactivated successfully. Feb 13 21:26:14.855178 systemd[1]: session-13.scope: Deactivated successfully. Feb 13 21:26:14.856916 systemd-logind[1487]: Removed session 13. Feb 13 21:26:15.007622 systemd[1]: Started sshd@11-10.244.102.222:22-147.75.109.163:38312.service - OpenSSH per-connection server daemon (147.75.109.163:38312). Feb 13 21:26:15.929215 sshd[4141]: Accepted publickey for core from 147.75.109.163 port 38312 ssh2: RSA SHA256:ulgBgUPlADOweaxhAmkTx/EhcRWsA2XzxJSff9bgRRQ Feb 13 21:26:15.932930 sshd-session[4141]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 21:26:15.943314 systemd-logind[1487]: New session 14 of user core. Feb 13 21:26:15.947360 systemd[1]: Started session-14.scope - Session 14 of User core. Feb 13 21:26:16.704920 sshd[4143]: Connection closed by 147.75.109.163 port 38312 Feb 13 21:26:16.706581 sshd-session[4141]: pam_unix(sshd:session): session closed for user core Feb 13 21:26:16.721355 systemd[1]: sshd@11-10.244.102.222:22-147.75.109.163:38312.service: Deactivated successfully. Feb 13 21:26:16.723796 systemd[1]: session-14.scope: Deactivated successfully. Feb 13 21:26:16.724903 systemd-logind[1487]: Session 14 logged out. Waiting for processes to exit. Feb 13 21:26:16.726753 systemd-logind[1487]: Removed session 14. Feb 13 21:26:16.875649 systemd[1]: Started sshd@12-10.244.102.222:22-147.75.109.163:38320.service - OpenSSH per-connection server daemon (147.75.109.163:38320). 
Feb 13 21:26:17.783914 sshd[4154]: Accepted publickey for core from 147.75.109.163 port 38320 ssh2: RSA SHA256:ulgBgUPlADOweaxhAmkTx/EhcRWsA2XzxJSff9bgRRQ Feb 13 21:26:17.785946 sshd-session[4154]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 21:26:17.793233 systemd-logind[1487]: New session 15 of user core. Feb 13 21:26:17.800360 systemd[1]: Started session-15.scope - Session 15 of User core. Feb 13 21:26:18.490705 sshd[4156]: Connection closed by 147.75.109.163 port 38320 Feb 13 21:26:18.492193 sshd-session[4154]: pam_unix(sshd:session): session closed for user core Feb 13 21:26:18.500692 systemd-logind[1487]: Session 15 logged out. Waiting for processes to exit. Feb 13 21:26:18.501043 systemd[1]: sshd@12-10.244.102.222:22-147.75.109.163:38320.service: Deactivated successfully. Feb 13 21:26:18.503876 systemd[1]: session-15.scope: Deactivated successfully. Feb 13 21:26:18.507173 systemd-logind[1487]: Removed session 15. Feb 13 21:26:23.662443 systemd[1]: Started sshd@13-10.244.102.222:22-147.75.109.163:53402.service - OpenSSH per-connection server daemon (147.75.109.163:53402). Feb 13 21:26:24.575990 sshd[4168]: Accepted publickey for core from 147.75.109.163 port 53402 ssh2: RSA SHA256:ulgBgUPlADOweaxhAmkTx/EhcRWsA2XzxJSff9bgRRQ Feb 13 21:26:24.580538 sshd-session[4168]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 21:26:24.591396 systemd-logind[1487]: New session 16 of user core. Feb 13 21:26:24.599281 systemd[1]: Started session-16.scope - Session 16 of User core. Feb 13 21:26:25.306142 sshd[4170]: Connection closed by 147.75.109.163 port 53402 Feb 13 21:26:25.305383 sshd-session[4168]: pam_unix(sshd:session): session closed for user core Feb 13 21:26:25.314074 systemd[1]: sshd@13-10.244.102.222:22-147.75.109.163:53402.service: Deactivated successfully. Feb 13 21:26:25.317008 systemd[1]: session-16.scope: Deactivated successfully. 
Feb 13 21:26:25.317917 systemd-logind[1487]: Session 16 logged out. Waiting for processes to exit. Feb 13 21:26:25.319605 systemd-logind[1487]: Removed session 16. Feb 13 21:26:25.465507 systemd[1]: Started sshd@14-10.244.102.222:22-147.75.109.163:53406.service - OpenSSH per-connection server daemon (147.75.109.163:53406). Feb 13 21:26:26.397824 sshd[4181]: Accepted publickey for core from 147.75.109.163 port 53406 ssh2: RSA SHA256:ulgBgUPlADOweaxhAmkTx/EhcRWsA2XzxJSff9bgRRQ Feb 13 21:26:26.400077 sshd-session[4181]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 21:26:26.410394 systemd-logind[1487]: New session 17 of user core. Feb 13 21:26:26.416282 systemd[1]: Started session-17.scope - Session 17 of User core. Feb 13 21:26:27.275407 sshd[4183]: Connection closed by 147.75.109.163 port 53406 Feb 13 21:26:27.276448 sshd-session[4181]: pam_unix(sshd:session): session closed for user core Feb 13 21:26:27.284521 systemd[1]: sshd@14-10.244.102.222:22-147.75.109.163:53406.service: Deactivated successfully. Feb 13 21:26:27.287627 systemd[1]: session-17.scope: Deactivated successfully. Feb 13 21:26:27.289669 systemd-logind[1487]: Session 17 logged out. Waiting for processes to exit. Feb 13 21:26:27.291563 systemd-logind[1487]: Removed session 17. Feb 13 21:26:27.444417 systemd[1]: Started sshd@15-10.244.102.222:22-147.75.109.163:53416.service - OpenSSH per-connection server daemon (147.75.109.163:53416). Feb 13 21:26:28.362500 sshd[4192]: Accepted publickey for core from 147.75.109.163 port 53416 ssh2: RSA SHA256:ulgBgUPlADOweaxhAmkTx/EhcRWsA2XzxJSff9bgRRQ Feb 13 21:26:28.366209 sshd-session[4192]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 21:26:28.378244 systemd-logind[1487]: New session 18 of user core. Feb 13 21:26:28.386296 systemd[1]: Started session-18.scope - Session 18 of User core. 
Feb 13 21:26:30.872187 sshd[4194]: Connection closed by 147.75.109.163 port 53416 Feb 13 21:26:30.873481 sshd-session[4192]: pam_unix(sshd:session): session closed for user core Feb 13 21:26:30.888583 systemd[1]: sshd@15-10.244.102.222:22-147.75.109.163:53416.service: Deactivated successfully. Feb 13 21:26:30.890897 systemd[1]: session-18.scope: Deactivated successfully. Feb 13 21:26:30.891703 systemd-logind[1487]: Session 18 logged out. Waiting for processes to exit. Feb 13 21:26:30.893400 systemd-logind[1487]: Removed session 18. Feb 13 21:26:31.034417 systemd[1]: Started sshd@16-10.244.102.222:22-147.75.109.163:47226.service - OpenSSH per-connection server daemon (147.75.109.163:47226). Feb 13 21:26:31.941183 sshd[4210]: Accepted publickey for core from 147.75.109.163 port 47226 ssh2: RSA SHA256:ulgBgUPlADOweaxhAmkTx/EhcRWsA2XzxJSff9bgRRQ Feb 13 21:26:31.945047 sshd-session[4210]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 21:26:31.954792 systemd-logind[1487]: New session 19 of user core. Feb 13 21:26:31.960253 systemd[1]: Started session-19.scope - Session 19 of User core. Feb 13 21:26:32.836063 sshd[4212]: Connection closed by 147.75.109.163 port 47226 Feb 13 21:26:32.835653 sshd-session[4210]: pam_unix(sshd:session): session closed for user core Feb 13 21:26:32.841488 systemd[1]: sshd@16-10.244.102.222:22-147.75.109.163:47226.service: Deactivated successfully. Feb 13 21:26:32.846703 systemd[1]: session-19.scope: Deactivated successfully. Feb 13 21:26:32.851215 systemd-logind[1487]: Session 19 logged out. Waiting for processes to exit. Feb 13 21:26:32.853262 systemd-logind[1487]: Removed session 19. Feb 13 21:26:33.005567 systemd[1]: Started sshd@17-10.244.102.222:22-147.75.109.163:47230.service - OpenSSH per-connection server daemon (147.75.109.163:47230). 
Feb 13 21:26:33.968626 sshd[4221]: Accepted publickey for core from 147.75.109.163 port 47230 ssh2: RSA SHA256:ulgBgUPlADOweaxhAmkTx/EhcRWsA2XzxJSff9bgRRQ Feb 13 21:26:33.970732 sshd-session[4221]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 21:26:33.978526 systemd-logind[1487]: New session 20 of user core. Feb 13 21:26:33.991620 systemd[1]: Started session-20.scope - Session 20 of User core. Feb 13 21:26:34.671532 sshd[4223]: Connection closed by 147.75.109.163 port 47230 Feb 13 21:26:34.671396 sshd-session[4221]: pam_unix(sshd:session): session closed for user core Feb 13 21:26:34.676850 systemd[1]: sshd@17-10.244.102.222:22-147.75.109.163:47230.service: Deactivated successfully. Feb 13 21:26:34.679521 systemd[1]: session-20.scope: Deactivated successfully. Feb 13 21:26:34.681959 systemd-logind[1487]: Session 20 logged out. Waiting for processes to exit. Feb 13 21:26:34.685807 systemd-logind[1487]: Removed session 20. Feb 13 21:26:39.839516 systemd[1]: Started sshd@18-10.244.102.222:22-147.75.109.163:53404.service - OpenSSH per-connection server daemon (147.75.109.163:53404). Feb 13 21:26:40.744580 sshd[4236]: Accepted publickey for core from 147.75.109.163 port 53404 ssh2: RSA SHA256:ulgBgUPlADOweaxhAmkTx/EhcRWsA2XzxJSff9bgRRQ Feb 13 21:26:40.748130 sshd-session[4236]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 21:26:40.759381 systemd-logind[1487]: New session 21 of user core. Feb 13 21:26:40.777677 systemd[1]: Started session-21.scope - Session 21 of User core. Feb 13 21:26:41.447670 sshd[4240]: Connection closed by 147.75.109.163 port 53404 Feb 13 21:26:41.449433 sshd-session[4236]: pam_unix(sshd:session): session closed for user core Feb 13 21:26:41.458891 systemd[1]: sshd@18-10.244.102.222:22-147.75.109.163:53404.service: Deactivated successfully. Feb 13 21:26:41.463721 systemd[1]: session-21.scope: Deactivated successfully. 
Feb 13 21:26:41.465561 systemd-logind[1487]: Session 21 logged out. Waiting for processes to exit. Feb 13 21:26:41.466890 systemd-logind[1487]: Removed session 21. Feb 13 21:26:46.614978 systemd[1]: Started sshd@19-10.244.102.222:22-147.75.109.163:53408.service - OpenSSH per-connection server daemon (147.75.109.163:53408). Feb 13 21:26:47.513165 sshd[4251]: Accepted publickey for core from 147.75.109.163 port 53408 ssh2: RSA SHA256:ulgBgUPlADOweaxhAmkTx/EhcRWsA2XzxJSff9bgRRQ Feb 13 21:26:47.514878 sshd-session[4251]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 21:26:47.522744 systemd-logind[1487]: New session 22 of user core. Feb 13 21:26:47.529460 systemd[1]: Started session-22.scope - Session 22 of User core. Feb 13 21:26:48.214146 sshd[4255]: Connection closed by 147.75.109.163 port 53408 Feb 13 21:26:48.215673 sshd-session[4251]: pam_unix(sshd:session): session closed for user core Feb 13 21:26:48.223397 systemd[1]: sshd@19-10.244.102.222:22-147.75.109.163:53408.service: Deactivated successfully. Feb 13 21:26:48.226938 systemd[1]: session-22.scope: Deactivated successfully. Feb 13 21:26:48.229112 systemd-logind[1487]: Session 22 logged out. Waiting for processes to exit. Feb 13 21:26:48.231374 systemd-logind[1487]: Removed session 22. Feb 13 21:26:53.382457 systemd[1]: Started sshd@20-10.244.102.222:22-147.75.109.163:38846.service - OpenSSH per-connection server daemon (147.75.109.163:38846). Feb 13 21:26:54.279006 sshd[4266]: Accepted publickey for core from 147.75.109.163 port 38846 ssh2: RSA SHA256:ulgBgUPlADOweaxhAmkTx/EhcRWsA2XzxJSff9bgRRQ Feb 13 21:26:54.282602 sshd-session[4266]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 21:26:54.292569 systemd-logind[1487]: New session 23 of user core. Feb 13 21:26:54.301302 systemd[1]: Started session-23.scope - Session 23 of User core. 
Feb 13 21:26:54.980823 sshd[4268]: Connection closed by 147.75.109.163 port 38846 Feb 13 21:26:54.980574 sshd-session[4266]: pam_unix(sshd:session): session closed for user core Feb 13 21:26:54.989522 systemd[1]: sshd@20-10.244.102.222:22-147.75.109.163:38846.service: Deactivated successfully. Feb 13 21:26:54.993083 systemd[1]: session-23.scope: Deactivated successfully. Feb 13 21:26:54.994193 systemd-logind[1487]: Session 23 logged out. Waiting for processes to exit. Feb 13 21:26:54.995472 systemd-logind[1487]: Removed session 23. Feb 13 21:26:55.139392 systemd[1]: Started sshd@21-10.244.102.222:22-147.75.109.163:38854.service - OpenSSH per-connection server daemon (147.75.109.163:38854). Feb 13 21:26:56.030231 sshd[4279]: Accepted publickey for core from 147.75.109.163 port 38854 ssh2: RSA SHA256:ulgBgUPlADOweaxhAmkTx/EhcRWsA2XzxJSff9bgRRQ Feb 13 21:26:56.034547 sshd-session[4279]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 21:26:56.043359 systemd-logind[1487]: New session 24 of user core. Feb 13 21:26:56.057338 systemd[1]: Started session-24.scope - Session 24 of User core. Feb 13 21:26:58.030044 systemd[1]: run-containerd-runc-k8s.io-60269977aab193a13bb39dfa1ac77da6af8fa8c258dc44669831269fb1e1feac-runc.1oZZT6.mount: Deactivated successfully. 
Feb 13 21:26:58.052488 containerd[1497]: time="2025-02-13T21:26:58.051432334Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 21:26:58.096176 containerd[1497]: time="2025-02-13T21:26:58.096015682Z" level=info msg="StopContainer for \"2aff12ed38f397a6115491c8c01916204030321ecae25f3c5b54a0baeb730dcf\" with timeout 30 (s)" Feb 13 21:26:58.096471 containerd[1497]: time="2025-02-13T21:26:58.096232149Z" level=info msg="StopContainer for \"60269977aab193a13bb39dfa1ac77da6af8fa8c258dc44669831269fb1e1feac\" with timeout 2 (s)" Feb 13 21:26:58.098932 containerd[1497]: time="2025-02-13T21:26:58.098675637Z" level=info msg="Stop container \"60269977aab193a13bb39dfa1ac77da6af8fa8c258dc44669831269fb1e1feac\" with signal terminated" Feb 13 21:26:58.099630 containerd[1497]: time="2025-02-13T21:26:58.099379063Z" level=info msg="Stop container \"2aff12ed38f397a6115491c8c01916204030321ecae25f3c5b54a0baeb730dcf\" with signal terminated" Feb 13 21:26:58.124875 systemd[1]: cri-containerd-2aff12ed38f397a6115491c8c01916204030321ecae25f3c5b54a0baeb730dcf.scope: Deactivated successfully. Feb 13 21:26:58.130755 systemd-networkd[1419]: lxc_health: Link DOWN Feb 13 21:26:58.130762 systemd-networkd[1419]: lxc_health: Lost carrier Feb 13 21:26:58.150080 systemd[1]: cri-containerd-60269977aab193a13bb39dfa1ac77da6af8fa8c258dc44669831269fb1e1feac.scope: Deactivated successfully. Feb 13 21:26:58.150399 systemd[1]: cri-containerd-60269977aab193a13bb39dfa1ac77da6af8fa8c258dc44669831269fb1e1feac.scope: Consumed 8.010s CPU time. Feb 13 21:26:58.168705 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2aff12ed38f397a6115491c8c01916204030321ecae25f3c5b54a0baeb730dcf-rootfs.mount: Deactivated successfully. 
Feb 13 21:26:58.173466 containerd[1497]: time="2025-02-13T21:26:58.173358105Z" level=info msg="shim disconnected" id=2aff12ed38f397a6115491c8c01916204030321ecae25f3c5b54a0baeb730dcf namespace=k8s.io Feb 13 21:26:58.173634 containerd[1497]: time="2025-02-13T21:26:58.173474326Z" level=warning msg="cleaning up after shim disconnected" id=2aff12ed38f397a6115491c8c01916204030321ecae25f3c5b54a0baeb730dcf namespace=k8s.io Feb 13 21:26:58.173634 containerd[1497]: time="2025-02-13T21:26:58.173493444Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 21:26:58.185895 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-60269977aab193a13bb39dfa1ac77da6af8fa8c258dc44669831269fb1e1feac-rootfs.mount: Deactivated successfully. Feb 13 21:26:58.193786 containerd[1497]: time="2025-02-13T21:26:58.193721062Z" level=info msg="shim disconnected" id=60269977aab193a13bb39dfa1ac77da6af8fa8c258dc44669831269fb1e1feac namespace=k8s.io Feb 13 21:26:58.193786 containerd[1497]: time="2025-02-13T21:26:58.193784593Z" level=warning msg="cleaning up after shim disconnected" id=60269977aab193a13bb39dfa1ac77da6af8fa8c258dc44669831269fb1e1feac namespace=k8s.io Feb 13 21:26:58.194036 containerd[1497]: time="2025-02-13T21:26:58.193797032Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 21:26:58.206479 containerd[1497]: time="2025-02-13T21:26:58.206297207Z" level=info msg="StopContainer for \"2aff12ed38f397a6115491c8c01916204030321ecae25f3c5b54a0baeb730dcf\" returns successfully" Feb 13 21:26:58.207631 containerd[1497]: time="2025-02-13T21:26:58.207248285Z" level=info msg="StopPodSandbox for \"074724b344807397c51a7404144d387cf7a46ae5aa428011105d29fb19971bcb\"" Feb 13 21:26:58.217330 containerd[1497]: time="2025-02-13T21:26:58.216562025Z" level=info msg="StopContainer for \"60269977aab193a13bb39dfa1ac77da6af8fa8c258dc44669831269fb1e1feac\" returns successfully" Feb 13 21:26:58.217330 containerd[1497]: time="2025-02-13T21:26:58.216999168Z" level=info 
msg="StopPodSandbox for \"6566a190a89c4cf52afc7bd52df4159140f774c9a21d3faf3780fc471505e5dd\"" Feb 13 21:26:58.221705 containerd[1497]: time="2025-02-13T21:26:58.215168000Z" level=info msg="Container to stop \"2aff12ed38f397a6115491c8c01916204030321ecae25f3c5b54a0baeb730dcf\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 21:26:58.223608 containerd[1497]: time="2025-02-13T21:26:58.217030496Z" level=info msg="Container to stop \"563084c4abedf8047de2648a31147cfcee98f34ae2dcc749fa4dfa1aa9c85d05\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 21:26:58.223608 containerd[1497]: time="2025-02-13T21:26:58.223607095Z" level=info msg="Container to stop \"42580a693136f1712b130776df6ba11d25994b8e0c0c4bbf841e5739137915da\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 21:26:58.223737 containerd[1497]: time="2025-02-13T21:26:58.223619232Z" level=info msg="Container to stop \"08ad03de3a506d2b44cdab9154fdc5d2d7a44ee7195466c261128deccb6c5733\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 21:26:58.223737 containerd[1497]: time="2025-02-13T21:26:58.223628877Z" level=info msg="Container to stop \"84a84d168ccdd26ff1f64700236b923d888691fd281d6663007a66af1a8a904e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 21:26:58.223737 containerd[1497]: time="2025-02-13T21:26:58.223637846Z" level=info msg="Container to stop \"60269977aab193a13bb39dfa1ac77da6af8fa8c258dc44669831269fb1e1feac\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 21:26:58.225449 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-074724b344807397c51a7404144d387cf7a46ae5aa428011105d29fb19971bcb-shm.mount: Deactivated successfully. Feb 13 21:26:58.225564 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6566a190a89c4cf52afc7bd52df4159140f774c9a21d3faf3780fc471505e5dd-shm.mount: Deactivated successfully. 
Feb 13 21:26:58.235092 systemd[1]: cri-containerd-6566a190a89c4cf52afc7bd52df4159140f774c9a21d3faf3780fc471505e5dd.scope: Deactivated successfully. Feb 13 21:26:58.236855 systemd[1]: cri-containerd-074724b344807397c51a7404144d387cf7a46ae5aa428011105d29fb19971bcb.scope: Deactivated successfully. Feb 13 21:26:58.268763 containerd[1497]: time="2025-02-13T21:26:58.268579333Z" level=info msg="shim disconnected" id=6566a190a89c4cf52afc7bd52df4159140f774c9a21d3faf3780fc471505e5dd namespace=k8s.io Feb 13 21:26:58.268763 containerd[1497]: time="2025-02-13T21:26:58.268636920Z" level=warning msg="cleaning up after shim disconnected" id=6566a190a89c4cf52afc7bd52df4159140f774c9a21d3faf3780fc471505e5dd namespace=k8s.io Feb 13 21:26:58.268763 containerd[1497]: time="2025-02-13T21:26:58.268647784Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 21:26:58.272805 containerd[1497]: time="2025-02-13T21:26:58.272600013Z" level=info msg="shim disconnected" id=074724b344807397c51a7404144d387cf7a46ae5aa428011105d29fb19971bcb namespace=k8s.io Feb 13 21:26:58.272805 containerd[1497]: time="2025-02-13T21:26:58.272671522Z" level=warning msg="cleaning up after shim disconnected" id=074724b344807397c51a7404144d387cf7a46ae5aa428011105d29fb19971bcb namespace=k8s.io Feb 13 21:26:58.272805 containerd[1497]: time="2025-02-13T21:26:58.272679889Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 21:26:58.285799 containerd[1497]: time="2025-02-13T21:26:58.285043417Z" level=warning msg="cleanup warnings time=\"2025-02-13T21:26:58Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Feb 13 21:26:58.287684 containerd[1497]: time="2025-02-13T21:26:58.287631682Z" level=info msg="TearDown network for sandbox \"6566a190a89c4cf52afc7bd52df4159140f774c9a21d3faf3780fc471505e5dd\" successfully" Feb 13 21:26:58.287684 containerd[1497]: 
time="2025-02-13T21:26:58.287662240Z" level=info msg="StopPodSandbox for \"6566a190a89c4cf52afc7bd52df4159140f774c9a21d3faf3780fc471505e5dd\" returns successfully" Feb 13 21:26:58.298621 containerd[1497]: time="2025-02-13T21:26:58.298552330Z" level=warning msg="cleanup warnings time=\"2025-02-13T21:26:58Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Feb 13 21:26:58.299753 containerd[1497]: time="2025-02-13T21:26:58.299718097Z" level=info msg="TearDown network for sandbox \"074724b344807397c51a7404144d387cf7a46ae5aa428011105d29fb19971bcb\" successfully" Feb 13 21:26:58.299753 containerd[1497]: time="2025-02-13T21:26:58.299744273Z" level=info msg="StopPodSandbox for \"074724b344807397c51a7404144d387cf7a46ae5aa428011105d29fb19971bcb\" returns successfully" Feb 13 21:26:58.503132 kubelet[2694]: I0213 21:26:58.502666 2694 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/774b0e5f-85a9-4a76-b58b-9a1fcb423763-cilium-cgroup\") pod \"774b0e5f-85a9-4a76-b58b-9a1fcb423763\" (UID: \"774b0e5f-85a9-4a76-b58b-9a1fcb423763\") " Feb 13 21:26:58.503132 kubelet[2694]: I0213 21:26:58.502763 2694 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3893dc70-b3e5-448c-9be1-f7198f2a3935-cilium-config-path\") pod \"3893dc70-b3e5-448c-9be1-f7198f2a3935\" (UID: \"3893dc70-b3e5-448c-9be1-f7198f2a3935\") " Feb 13 21:26:58.503132 kubelet[2694]: I0213 21:26:58.502788 2694 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/774b0e5f-85a9-4a76-b58b-9a1fcb423763-hostproc\") pod \"774b0e5f-85a9-4a76-b58b-9a1fcb423763\" (UID: \"774b0e5f-85a9-4a76-b58b-9a1fcb423763\") " Feb 13 21:26:58.503132 kubelet[2694]: I0213 21:26:58.502812 
2694 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/774b0e5f-85a9-4a76-b58b-9a1fcb423763-lib-modules\") pod \"774b0e5f-85a9-4a76-b58b-9a1fcb423763\" (UID: \"774b0e5f-85a9-4a76-b58b-9a1fcb423763\") " Feb 13 21:26:58.503132 kubelet[2694]: I0213 21:26:58.502829 2694 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/774b0e5f-85a9-4a76-b58b-9a1fcb423763-bpf-maps\") pod \"774b0e5f-85a9-4a76-b58b-9a1fcb423763\" (UID: \"774b0e5f-85a9-4a76-b58b-9a1fcb423763\") " Feb 13 21:26:58.503132 kubelet[2694]: I0213 21:26:58.502845 2694 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/774b0e5f-85a9-4a76-b58b-9a1fcb423763-cni-path\") pod \"774b0e5f-85a9-4a76-b58b-9a1fcb423763\" (UID: \"774b0e5f-85a9-4a76-b58b-9a1fcb423763\") " Feb 13 21:26:58.503940 kubelet[2694]: I0213 21:26:58.502861 2694 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/774b0e5f-85a9-4a76-b58b-9a1fcb423763-host-proc-sys-net\") pod \"774b0e5f-85a9-4a76-b58b-9a1fcb423763\" (UID: \"774b0e5f-85a9-4a76-b58b-9a1fcb423763\") " Feb 13 21:26:58.503940 kubelet[2694]: I0213 21:26:58.502888 2694 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/774b0e5f-85a9-4a76-b58b-9a1fcb423763-cilium-config-path\") pod \"774b0e5f-85a9-4a76-b58b-9a1fcb423763\" (UID: \"774b0e5f-85a9-4a76-b58b-9a1fcb423763\") " Feb 13 21:26:58.503940 kubelet[2694]: I0213 21:26:58.502909 2694 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/774b0e5f-85a9-4a76-b58b-9a1fcb423763-etc-cni-netd\") pod \"774b0e5f-85a9-4a76-b58b-9a1fcb423763\" (UID: 
\"774b0e5f-85a9-4a76-b58b-9a1fcb423763\") " Feb 13 21:26:58.503940 kubelet[2694]: I0213 21:26:58.502925 2694 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/774b0e5f-85a9-4a76-b58b-9a1fcb423763-xtables-lock\") pod \"774b0e5f-85a9-4a76-b58b-9a1fcb423763\" (UID: \"774b0e5f-85a9-4a76-b58b-9a1fcb423763\") " Feb 13 21:26:58.503940 kubelet[2694]: I0213 21:26:58.502945 2694 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/774b0e5f-85a9-4a76-b58b-9a1fcb423763-clustermesh-secrets\") pod \"774b0e5f-85a9-4a76-b58b-9a1fcb423763\" (UID: \"774b0e5f-85a9-4a76-b58b-9a1fcb423763\") " Feb 13 21:26:58.503940 kubelet[2694]: I0213 21:26:58.502966 2694 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/774b0e5f-85a9-4a76-b58b-9a1fcb423763-cilium-run\") pod \"774b0e5f-85a9-4a76-b58b-9a1fcb423763\" (UID: \"774b0e5f-85a9-4a76-b58b-9a1fcb423763\") " Feb 13 21:26:58.504166 kubelet[2694]: I0213 21:26:58.502992 2694 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/774b0e5f-85a9-4a76-b58b-9a1fcb423763-hubble-tls\") pod \"774b0e5f-85a9-4a76-b58b-9a1fcb423763\" (UID: \"774b0e5f-85a9-4a76-b58b-9a1fcb423763\") " Feb 13 21:26:58.504166 kubelet[2694]: I0213 21:26:58.503009 2694 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-92crg\" (UniqueName: \"kubernetes.io/projected/3893dc70-b3e5-448c-9be1-f7198f2a3935-kube-api-access-92crg\") pod \"3893dc70-b3e5-448c-9be1-f7198f2a3935\" (UID: \"3893dc70-b3e5-448c-9be1-f7198f2a3935\") " Feb 13 21:26:58.504166 kubelet[2694]: I0213 21:26:58.503027 2694 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pmvg5\" (UniqueName: 
\"kubernetes.io/projected/774b0e5f-85a9-4a76-b58b-9a1fcb423763-kube-api-access-pmvg5\") pod \"774b0e5f-85a9-4a76-b58b-9a1fcb423763\" (UID: \"774b0e5f-85a9-4a76-b58b-9a1fcb423763\") " Feb 13 21:26:58.504166 kubelet[2694]: I0213 21:26:58.503046 2694 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/774b0e5f-85a9-4a76-b58b-9a1fcb423763-host-proc-sys-kernel\") pod \"774b0e5f-85a9-4a76-b58b-9a1fcb423763\" (UID: \"774b0e5f-85a9-4a76-b58b-9a1fcb423763\") " Feb 13 21:26:58.509957 kubelet[2694]: I0213 21:26:58.508561 2694 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/774b0e5f-85a9-4a76-b58b-9a1fcb423763-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "774b0e5f-85a9-4a76-b58b-9a1fcb423763" (UID: "774b0e5f-85a9-4a76-b58b-9a1fcb423763"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 21:26:58.509957 kubelet[2694]: I0213 21:26:58.509417 2694 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/774b0e5f-85a9-4a76-b58b-9a1fcb423763-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "774b0e5f-85a9-4a76-b58b-9a1fcb423763" (UID: "774b0e5f-85a9-4a76-b58b-9a1fcb423763"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 21:26:58.509957 kubelet[2694]: I0213 21:26:58.507845 2694 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3893dc70-b3e5-448c-9be1-f7198f2a3935-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "3893dc70-b3e5-448c-9be1-f7198f2a3935" (UID: "3893dc70-b3e5-448c-9be1-f7198f2a3935"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 13 21:26:58.509957 kubelet[2694]: I0213 21:26:58.509446 2694 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/774b0e5f-85a9-4a76-b58b-9a1fcb423763-hostproc" (OuterVolumeSpecName: "hostproc") pod "774b0e5f-85a9-4a76-b58b-9a1fcb423763" (UID: "774b0e5f-85a9-4a76-b58b-9a1fcb423763"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 21:26:58.509957 kubelet[2694]: I0213 21:26:58.509452 2694 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/774b0e5f-85a9-4a76-b58b-9a1fcb423763-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "774b0e5f-85a9-4a76-b58b-9a1fcb423763" (UID: "774b0e5f-85a9-4a76-b58b-9a1fcb423763"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 21:26:58.510259 kubelet[2694]: I0213 21:26:58.509472 2694 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/774b0e5f-85a9-4a76-b58b-9a1fcb423763-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "774b0e5f-85a9-4a76-b58b-9a1fcb423763" (UID: "774b0e5f-85a9-4a76-b58b-9a1fcb423763"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 21:26:58.510259 kubelet[2694]: I0213 21:26:58.509462 2694 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/774b0e5f-85a9-4a76-b58b-9a1fcb423763-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "774b0e5f-85a9-4a76-b58b-9a1fcb423763" (UID: "774b0e5f-85a9-4a76-b58b-9a1fcb423763"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 21:26:58.510259 kubelet[2694]: I0213 21:26:58.509495 2694 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/774b0e5f-85a9-4a76-b58b-9a1fcb423763-cni-path" (OuterVolumeSpecName: "cni-path") pod "774b0e5f-85a9-4a76-b58b-9a1fcb423763" (UID: "774b0e5f-85a9-4a76-b58b-9a1fcb423763"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 21:26:58.510259 kubelet[2694]: I0213 21:26:58.509510 2694 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/774b0e5f-85a9-4a76-b58b-9a1fcb423763-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "774b0e5f-85a9-4a76-b58b-9a1fcb423763" (UID: "774b0e5f-85a9-4a76-b58b-9a1fcb423763"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 21:26:58.512197 kubelet[2694]: I0213 21:26:58.511467 2694 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/774b0e5f-85a9-4a76-b58b-9a1fcb423763-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "774b0e5f-85a9-4a76-b58b-9a1fcb423763" (UID: "774b0e5f-85a9-4a76-b58b-9a1fcb423763"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 13 21:26:58.513930 kubelet[2694]: I0213 21:26:58.513901 2694 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/774b0e5f-85a9-4a76-b58b-9a1fcb423763-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "774b0e5f-85a9-4a76-b58b-9a1fcb423763" (UID: "774b0e5f-85a9-4a76-b58b-9a1fcb423763"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 13 21:26:58.514322 kubelet[2694]: I0213 21:26:58.514296 2694 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3893dc70-b3e5-448c-9be1-f7198f2a3935-kube-api-access-92crg" (OuterVolumeSpecName: "kube-api-access-92crg") pod "3893dc70-b3e5-448c-9be1-f7198f2a3935" (UID: "3893dc70-b3e5-448c-9be1-f7198f2a3935"). InnerVolumeSpecName "kube-api-access-92crg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 21:26:58.514394 kubelet[2694]: I0213 21:26:58.514344 2694 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/774b0e5f-85a9-4a76-b58b-9a1fcb423763-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "774b0e5f-85a9-4a76-b58b-9a1fcb423763" (UID: "774b0e5f-85a9-4a76-b58b-9a1fcb423763"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 21:26:58.516836 kubelet[2694]: I0213 21:26:58.516808 2694 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/774b0e5f-85a9-4a76-b58b-9a1fcb423763-kube-api-access-pmvg5" (OuterVolumeSpecName: "kube-api-access-pmvg5") pod "774b0e5f-85a9-4a76-b58b-9a1fcb423763" (UID: "774b0e5f-85a9-4a76-b58b-9a1fcb423763"). InnerVolumeSpecName "kube-api-access-pmvg5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 21:26:58.516958 kubelet[2694]: I0213 21:26:58.516837 2694 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/774b0e5f-85a9-4a76-b58b-9a1fcb423763-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "774b0e5f-85a9-4a76-b58b-9a1fcb423763" (UID: "774b0e5f-85a9-4a76-b58b-9a1fcb423763"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 21:26:58.517026 kubelet[2694]: I0213 21:26:58.516863 2694 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/774b0e5f-85a9-4a76-b58b-9a1fcb423763-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "774b0e5f-85a9-4a76-b58b-9a1fcb423763" (UID: "774b0e5f-85a9-4a76-b58b-9a1fcb423763"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 21:26:58.549140 systemd[1]: Removed slice kubepods-besteffort-pod3893dc70_b3e5_448c_9be1_f7198f2a3935.slice - libcontainer container kubepods-besteffort-pod3893dc70_b3e5_448c_9be1_f7198f2a3935.slice. Feb 13 21:26:58.551608 systemd[1]: Removed slice kubepods-burstable-pod774b0e5f_85a9_4a76_b58b_9a1fcb423763.slice - libcontainer container kubepods-burstable-pod774b0e5f_85a9_4a76_b58b_9a1fcb423763.slice. Feb 13 21:26:58.551862 systemd[1]: kubepods-burstable-pod774b0e5f_85a9_4a76_b58b_9a1fcb423763.slice: Consumed 8.105s CPU time. 
Feb 13 21:26:58.606299 kubelet[2694]: I0213 21:26:58.606227 2694 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/774b0e5f-85a9-4a76-b58b-9a1fcb423763-host-proc-sys-net\") on node \"srv-9zhep.gb1.brightbox.com\" DevicePath \"\"" Feb 13 21:26:58.606299 kubelet[2694]: I0213 21:26:58.606283 2694 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/774b0e5f-85a9-4a76-b58b-9a1fcb423763-cilium-config-path\") on node \"srv-9zhep.gb1.brightbox.com\" DevicePath \"\"" Feb 13 21:26:58.606299 kubelet[2694]: I0213 21:26:58.606304 2694 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/774b0e5f-85a9-4a76-b58b-9a1fcb423763-etc-cni-netd\") on node \"srv-9zhep.gb1.brightbox.com\" DevicePath \"\"" Feb 13 21:26:58.607003 kubelet[2694]: I0213 21:26:58.606320 2694 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/774b0e5f-85a9-4a76-b58b-9a1fcb423763-xtables-lock\") on node \"srv-9zhep.gb1.brightbox.com\" DevicePath \"\"" Feb 13 21:26:58.607003 kubelet[2694]: I0213 21:26:58.606347 2694 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/774b0e5f-85a9-4a76-b58b-9a1fcb423763-clustermesh-secrets\") on node \"srv-9zhep.gb1.brightbox.com\" DevicePath \"\"" Feb 13 21:26:58.607003 kubelet[2694]: I0213 21:26:58.606363 2694 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/774b0e5f-85a9-4a76-b58b-9a1fcb423763-cilium-run\") on node \"srv-9zhep.gb1.brightbox.com\" DevicePath \"\"" Feb 13 21:26:58.607003 kubelet[2694]: I0213 21:26:58.606376 2694 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/774b0e5f-85a9-4a76-b58b-9a1fcb423763-hubble-tls\") on node 
\"srv-9zhep.gb1.brightbox.com\" DevicePath \"\"" Feb 13 21:26:58.607003 kubelet[2694]: I0213 21:26:58.606391 2694 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-92crg\" (UniqueName: \"kubernetes.io/projected/3893dc70-b3e5-448c-9be1-f7198f2a3935-kube-api-access-92crg\") on node \"srv-9zhep.gb1.brightbox.com\" DevicePath \"\"" Feb 13 21:26:58.607003 kubelet[2694]: I0213 21:26:58.606405 2694 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-pmvg5\" (UniqueName: \"kubernetes.io/projected/774b0e5f-85a9-4a76-b58b-9a1fcb423763-kube-api-access-pmvg5\") on node \"srv-9zhep.gb1.brightbox.com\" DevicePath \"\"" Feb 13 21:26:58.607003 kubelet[2694]: I0213 21:26:58.606419 2694 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/774b0e5f-85a9-4a76-b58b-9a1fcb423763-host-proc-sys-kernel\") on node \"srv-9zhep.gb1.brightbox.com\" DevicePath \"\"" Feb 13 21:26:58.608404 kubelet[2694]: I0213 21:26:58.606433 2694 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/774b0e5f-85a9-4a76-b58b-9a1fcb423763-cilium-cgroup\") on node \"srv-9zhep.gb1.brightbox.com\" DevicePath \"\"" Feb 13 21:26:58.608404 kubelet[2694]: I0213 21:26:58.606446 2694 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3893dc70-b3e5-448c-9be1-f7198f2a3935-cilium-config-path\") on node \"srv-9zhep.gb1.brightbox.com\" DevicePath \"\"" Feb 13 21:26:58.608404 kubelet[2694]: I0213 21:26:58.606460 2694 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/774b0e5f-85a9-4a76-b58b-9a1fcb423763-hostproc\") on node \"srv-9zhep.gb1.brightbox.com\" DevicePath \"\"" Feb 13 21:26:58.608404 kubelet[2694]: I0213 21:26:58.606473 2694 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/774b0e5f-85a9-4a76-b58b-9a1fcb423763-lib-modules\") on node \"srv-9zhep.gb1.brightbox.com\" DevicePath \"\"" Feb 13 21:26:58.608404 kubelet[2694]: I0213 21:26:58.606501 2694 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/774b0e5f-85a9-4a76-b58b-9a1fcb423763-bpf-maps\") on node \"srv-9zhep.gb1.brightbox.com\" DevicePath \"\"" Feb 13 21:26:58.608404 kubelet[2694]: I0213 21:26:58.606515 2694 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/774b0e5f-85a9-4a76-b58b-9a1fcb423763-cni-path\") on node \"srv-9zhep.gb1.brightbox.com\" DevicePath \"\"" Feb 13 21:26:59.027203 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-074724b344807397c51a7404144d387cf7a46ae5aa428011105d29fb19971bcb-rootfs.mount: Deactivated successfully. Feb 13 21:26:59.027451 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6566a190a89c4cf52afc7bd52df4159140f774c9a21d3faf3780fc471505e5dd-rootfs.mount: Deactivated successfully. Feb 13 21:26:59.027614 systemd[1]: var-lib-kubelet-pods-3893dc70\x2db3e5\x2d448c\x2d9be1\x2df7198f2a3935-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d92crg.mount: Deactivated successfully. Feb 13 21:26:59.027778 systemd[1]: var-lib-kubelet-pods-774b0e5f\x2d85a9\x2d4a76\x2db58b\x2d9a1fcb423763-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dpmvg5.mount: Deactivated successfully. Feb 13 21:26:59.027931 systemd[1]: var-lib-kubelet-pods-774b0e5f\x2d85a9\x2d4a76\x2db58b\x2d9a1fcb423763-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 13 21:26:59.028124 systemd[1]: var-lib-kubelet-pods-774b0e5f\x2d85a9\x2d4a76\x2db58b\x2d9a1fcb423763-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Feb 13 21:26:59.136721 kubelet[2694]: I0213 21:26:59.136676 2694 scope.go:117] "RemoveContainer" containerID="2aff12ed38f397a6115491c8c01916204030321ecae25f3c5b54a0baeb730dcf" Feb 13 21:26:59.154896 containerd[1497]: time="2025-02-13T21:26:59.151723390Z" level=info msg="RemoveContainer for \"2aff12ed38f397a6115491c8c01916204030321ecae25f3c5b54a0baeb730dcf\"" Feb 13 21:26:59.165669 containerd[1497]: time="2025-02-13T21:26:59.165617296Z" level=info msg="RemoveContainer for \"2aff12ed38f397a6115491c8c01916204030321ecae25f3c5b54a0baeb730dcf\" returns successfully" Feb 13 21:26:59.166324 kubelet[2694]: I0213 21:26:59.166192 2694 scope.go:117] "RemoveContainer" containerID="2aff12ed38f397a6115491c8c01916204030321ecae25f3c5b54a0baeb730dcf" Feb 13 21:26:59.167074 containerd[1497]: time="2025-02-13T21:26:59.167029654Z" level=error msg="ContainerStatus for \"2aff12ed38f397a6115491c8c01916204030321ecae25f3c5b54a0baeb730dcf\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2aff12ed38f397a6115491c8c01916204030321ecae25f3c5b54a0baeb730dcf\": not found" Feb 13 21:26:59.170257 kubelet[2694]: E0213 21:26:59.170207 2694 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2aff12ed38f397a6115491c8c01916204030321ecae25f3c5b54a0baeb730dcf\": not found" containerID="2aff12ed38f397a6115491c8c01916204030321ecae25f3c5b54a0baeb730dcf" Feb 13 21:26:59.170369 kubelet[2694]: I0213 21:26:59.170272 2694 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2aff12ed38f397a6115491c8c01916204030321ecae25f3c5b54a0baeb730dcf"} err="failed to get container status \"2aff12ed38f397a6115491c8c01916204030321ecae25f3c5b54a0baeb730dcf\": rpc error: code = NotFound desc = an error occurred when try to find container \"2aff12ed38f397a6115491c8c01916204030321ecae25f3c5b54a0baeb730dcf\": not found" Feb 13 21:26:59.170411 
kubelet[2694]: I0213 21:26:59.170370 2694 scope.go:117] "RemoveContainer" containerID="60269977aab193a13bb39dfa1ac77da6af8fa8c258dc44669831269fb1e1feac" Feb 13 21:26:59.173016 containerd[1497]: time="2025-02-13T21:26:59.172660941Z" level=info msg="RemoveContainer for \"60269977aab193a13bb39dfa1ac77da6af8fa8c258dc44669831269fb1e1feac\"" Feb 13 21:26:59.176137 containerd[1497]: time="2025-02-13T21:26:59.175408524Z" level=info msg="RemoveContainer for \"60269977aab193a13bb39dfa1ac77da6af8fa8c258dc44669831269fb1e1feac\" returns successfully" Feb 13 21:26:59.176929 kubelet[2694]: I0213 21:26:59.176911 2694 scope.go:117] "RemoveContainer" containerID="84a84d168ccdd26ff1f64700236b923d888691fd281d6663007a66af1a8a904e" Feb 13 21:26:59.178529 containerd[1497]: time="2025-02-13T21:26:59.178232104Z" level=info msg="RemoveContainer for \"84a84d168ccdd26ff1f64700236b923d888691fd281d6663007a66af1a8a904e\"" Feb 13 21:26:59.180222 containerd[1497]: time="2025-02-13T21:26:59.180198877Z" level=info msg="RemoveContainer for \"84a84d168ccdd26ff1f64700236b923d888691fd281d6663007a66af1a8a904e\" returns successfully" Feb 13 21:26:59.180490 kubelet[2694]: I0213 21:26:59.180470 2694 scope.go:117] "RemoveContainer" containerID="08ad03de3a506d2b44cdab9154fdc5d2d7a44ee7195466c261128deccb6c5733" Feb 13 21:26:59.181830 containerd[1497]: time="2025-02-13T21:26:59.181428025Z" level=info msg="RemoveContainer for \"08ad03de3a506d2b44cdab9154fdc5d2d7a44ee7195466c261128deccb6c5733\"" Feb 13 21:26:59.183280 containerd[1497]: time="2025-02-13T21:26:59.183259021Z" level=info msg="RemoveContainer for \"08ad03de3a506d2b44cdab9154fdc5d2d7a44ee7195466c261128deccb6c5733\" returns successfully" Feb 13 21:26:59.183829 kubelet[2694]: I0213 21:26:59.183729 2694 scope.go:117] "RemoveContainer" containerID="42580a693136f1712b130776df6ba11d25994b8e0c0c4bbf841e5739137915da" Feb 13 21:26:59.186216 containerd[1497]: time="2025-02-13T21:26:59.186082552Z" level=info msg="RemoveContainer for 
\"42580a693136f1712b130776df6ba11d25994b8e0c0c4bbf841e5739137915da\"" Feb 13 21:26:59.188886 containerd[1497]: time="2025-02-13T21:26:59.188769229Z" level=info msg="RemoveContainer for \"42580a693136f1712b130776df6ba11d25994b8e0c0c4bbf841e5739137915da\" returns successfully" Feb 13 21:26:59.188995 kubelet[2694]: I0213 21:26:59.188974 2694 scope.go:117] "RemoveContainer" containerID="563084c4abedf8047de2648a31147cfcee98f34ae2dcc749fa4dfa1aa9c85d05" Feb 13 21:26:59.190178 containerd[1497]: time="2025-02-13T21:26:59.190159315Z" level=info msg="RemoveContainer for \"563084c4abedf8047de2648a31147cfcee98f34ae2dcc749fa4dfa1aa9c85d05\"" Feb 13 21:26:59.192126 containerd[1497]: time="2025-02-13T21:26:59.192024470Z" level=info msg="RemoveContainer for \"563084c4abedf8047de2648a31147cfcee98f34ae2dcc749fa4dfa1aa9c85d05\" returns successfully" Feb 13 21:26:59.192247 kubelet[2694]: I0213 21:26:59.192183 2694 scope.go:117] "RemoveContainer" containerID="60269977aab193a13bb39dfa1ac77da6af8fa8c258dc44669831269fb1e1feac" Feb 13 21:26:59.192555 containerd[1497]: time="2025-02-13T21:26:59.192513274Z" level=error msg="ContainerStatus for \"60269977aab193a13bb39dfa1ac77da6af8fa8c258dc44669831269fb1e1feac\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"60269977aab193a13bb39dfa1ac77da6af8fa8c258dc44669831269fb1e1feac\": not found" Feb 13 21:26:59.192685 kubelet[2694]: E0213 21:26:59.192665 2694 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"60269977aab193a13bb39dfa1ac77da6af8fa8c258dc44669831269fb1e1feac\": not found" containerID="60269977aab193a13bb39dfa1ac77da6af8fa8c258dc44669831269fb1e1feac" Feb 13 21:26:59.192738 kubelet[2694]: I0213 21:26:59.192713 2694 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"60269977aab193a13bb39dfa1ac77da6af8fa8c258dc44669831269fb1e1feac"} err="failed to get 
container status \"60269977aab193a13bb39dfa1ac77da6af8fa8c258dc44669831269fb1e1feac\": rpc error: code = NotFound desc = an error occurred when try to find container \"60269977aab193a13bb39dfa1ac77da6af8fa8c258dc44669831269fb1e1feac\": not found" Feb 13 21:26:59.192773 kubelet[2694]: I0213 21:26:59.192741 2694 scope.go:117] "RemoveContainer" containerID="84a84d168ccdd26ff1f64700236b923d888691fd281d6663007a66af1a8a904e" Feb 13 21:26:59.192999 containerd[1497]: time="2025-02-13T21:26:59.192896544Z" level=error msg="ContainerStatus for \"84a84d168ccdd26ff1f64700236b923d888691fd281d6663007a66af1a8a904e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"84a84d168ccdd26ff1f64700236b923d888691fd281d6663007a66af1a8a904e\": not found" Feb 13 21:26:59.193091 kubelet[2694]: E0213 21:26:59.193071 2694 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"84a84d168ccdd26ff1f64700236b923d888691fd281d6663007a66af1a8a904e\": not found" containerID="84a84d168ccdd26ff1f64700236b923d888691fd281d6663007a66af1a8a904e" Feb 13 21:26:59.193155 kubelet[2694]: I0213 21:26:59.193133 2694 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"84a84d168ccdd26ff1f64700236b923d888691fd281d6663007a66af1a8a904e"} err="failed to get container status \"84a84d168ccdd26ff1f64700236b923d888691fd281d6663007a66af1a8a904e\": rpc error: code = NotFound desc = an error occurred when try to find container \"84a84d168ccdd26ff1f64700236b923d888691fd281d6663007a66af1a8a904e\": not found" Feb 13 21:26:59.193239 kubelet[2694]: I0213 21:26:59.193154 2694 scope.go:117] "RemoveContainer" containerID="08ad03de3a506d2b44cdab9154fdc5d2d7a44ee7195466c261128deccb6c5733" Feb 13 21:26:59.193601 containerd[1497]: time="2025-02-13T21:26:59.193377982Z" level=error msg="ContainerStatus for 
\"08ad03de3a506d2b44cdab9154fdc5d2d7a44ee7195466c261128deccb6c5733\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"08ad03de3a506d2b44cdab9154fdc5d2d7a44ee7195466c261128deccb6c5733\": not found" Feb 13 21:26:59.194005 containerd[1497]: time="2025-02-13T21:26:59.193910894Z" level=error msg="ContainerStatus for \"42580a693136f1712b130776df6ba11d25994b8e0c0c4bbf841e5739137915da\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"42580a693136f1712b130776df6ba11d25994b8e0c0c4bbf841e5739137915da\": not found" Feb 13 21:26:59.194064 kubelet[2694]: E0213 21:26:59.193725 2694 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"08ad03de3a506d2b44cdab9154fdc5d2d7a44ee7195466c261128deccb6c5733\": not found" containerID="08ad03de3a506d2b44cdab9154fdc5d2d7a44ee7195466c261128deccb6c5733" Feb 13 21:26:59.194064 kubelet[2694]: I0213 21:26:59.193746 2694 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"08ad03de3a506d2b44cdab9154fdc5d2d7a44ee7195466c261128deccb6c5733"} err="failed to get container status \"08ad03de3a506d2b44cdab9154fdc5d2d7a44ee7195466c261128deccb6c5733\": rpc error: code = NotFound desc = an error occurred when try to find container \"08ad03de3a506d2b44cdab9154fdc5d2d7a44ee7195466c261128deccb6c5733\": not found" Feb 13 21:26:59.194064 kubelet[2694]: I0213 21:26:59.193764 2694 scope.go:117] "RemoveContainer" containerID="42580a693136f1712b130776df6ba11d25994b8e0c0c4bbf841e5739137915da" Feb 13 21:26:59.194064 kubelet[2694]: E0213 21:26:59.194023 2694 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"42580a693136f1712b130776df6ba11d25994b8e0c0c4bbf841e5739137915da\": not found" 
containerID="42580a693136f1712b130776df6ba11d25994b8e0c0c4bbf841e5739137915da" Feb 13 21:26:59.194064 kubelet[2694]: I0213 21:26:59.194046 2694 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"42580a693136f1712b130776df6ba11d25994b8e0c0c4bbf841e5739137915da"} err="failed to get container status \"42580a693136f1712b130776df6ba11d25994b8e0c0c4bbf841e5739137915da\": rpc error: code = NotFound desc = an error occurred when try to find container \"42580a693136f1712b130776df6ba11d25994b8e0c0c4bbf841e5739137915da\": not found" Feb 13 21:26:59.194064 kubelet[2694]: I0213 21:26:59.194064 2694 scope.go:117] "RemoveContainer" containerID="563084c4abedf8047de2648a31147cfcee98f34ae2dcc749fa4dfa1aa9c85d05" Feb 13 21:26:59.194309 containerd[1497]: time="2025-02-13T21:26:59.194231674Z" level=error msg="ContainerStatus for \"563084c4abedf8047de2648a31147cfcee98f34ae2dcc749fa4dfa1aa9c85d05\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"563084c4abedf8047de2648a31147cfcee98f34ae2dcc749fa4dfa1aa9c85d05\": not found" Feb 13 21:26:59.194452 kubelet[2694]: E0213 21:26:59.194431 2694 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"563084c4abedf8047de2648a31147cfcee98f34ae2dcc749fa4dfa1aa9c85d05\": not found" containerID="563084c4abedf8047de2648a31147cfcee98f34ae2dcc749fa4dfa1aa9c85d05" Feb 13 21:26:59.194539 kubelet[2694]: I0213 21:26:59.194506 2694 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"563084c4abedf8047de2648a31147cfcee98f34ae2dcc749fa4dfa1aa9c85d05"} err="failed to get container status \"563084c4abedf8047de2648a31147cfcee98f34ae2dcc749fa4dfa1aa9c85d05\": rpc error: code = NotFound desc = an error occurred when try to find container \"563084c4abedf8047de2648a31147cfcee98f34ae2dcc749fa4dfa1aa9c85d05\": not found" Feb 13 
21:27:00.042230 sshd[4281]: Connection closed by 147.75.109.163 port 38854 Feb 13 21:27:00.044125 sshd-session[4279]: pam_unix(sshd:session): session closed for user core Feb 13 21:27:00.054687 systemd[1]: sshd@21-10.244.102.222:22-147.75.109.163:38854.service: Deactivated successfully. Feb 13 21:27:00.058643 systemd[1]: session-24.scope: Deactivated successfully. Feb 13 21:27:00.060743 systemd-logind[1487]: Session 24 logged out. Waiting for processes to exit. Feb 13 21:27:00.063618 systemd-logind[1487]: Removed session 24. Feb 13 21:27:00.203488 systemd[1]: Started sshd@22-10.244.102.222:22-147.75.109.163:50206.service - OpenSSH per-connection server daemon (147.75.109.163:50206). Feb 13 21:27:00.536121 kubelet[2694]: I0213 21:27:00.535504 2694 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3893dc70-b3e5-448c-9be1-f7198f2a3935" path="/var/lib/kubelet/pods/3893dc70-b3e5-448c-9be1-f7198f2a3935/volumes" Feb 13 21:27:00.536720 kubelet[2694]: I0213 21:27:00.536693 2694 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="774b0e5f-85a9-4a76-b58b-9a1fcb423763" path="/var/lib/kubelet/pods/774b0e5f-85a9-4a76-b58b-9a1fcb423763/volumes" Feb 13 21:27:00.699412 kubelet[2694]: E0213 21:27:00.699303 2694 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 21:27:01.102772 sshd[4439]: Accepted publickey for core from 147.75.109.163 port 50206 ssh2: RSA SHA256:ulgBgUPlADOweaxhAmkTx/EhcRWsA2XzxJSff9bgRRQ Feb 13 21:27:01.108016 sshd-session[4439]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 21:27:01.114883 systemd-logind[1487]: New session 25 of user core. Feb 13 21:27:01.123291 systemd[1]: Started session-25.scope - Session 25 of User core. 
Feb 13 21:27:02.437931 kubelet[2694]: E0213 21:27:02.437476 2694 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="774b0e5f-85a9-4a76-b58b-9a1fcb423763" containerName="mount-cgroup" Feb 13 21:27:02.437931 kubelet[2694]: E0213 21:27:02.437516 2694 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="774b0e5f-85a9-4a76-b58b-9a1fcb423763" containerName="apply-sysctl-overwrites" Feb 13 21:27:02.437931 kubelet[2694]: E0213 21:27:02.437525 2694 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="774b0e5f-85a9-4a76-b58b-9a1fcb423763" containerName="clean-cilium-state" Feb 13 21:27:02.437931 kubelet[2694]: E0213 21:27:02.437533 2694 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3893dc70-b3e5-448c-9be1-f7198f2a3935" containerName="cilium-operator" Feb 13 21:27:02.437931 kubelet[2694]: E0213 21:27:02.437539 2694 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="774b0e5f-85a9-4a76-b58b-9a1fcb423763" containerName="cilium-agent" Feb 13 21:27:02.437931 kubelet[2694]: E0213 21:27:02.437547 2694 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="774b0e5f-85a9-4a76-b58b-9a1fcb423763" containerName="mount-bpf-fs" Feb 13 21:27:02.437931 kubelet[2694]: I0213 21:27:02.437607 2694 memory_manager.go:354] "RemoveStaleState removing state" podUID="774b0e5f-85a9-4a76-b58b-9a1fcb423763" containerName="cilium-agent" Feb 13 21:27:02.437931 kubelet[2694]: I0213 21:27:02.437621 2694 memory_manager.go:354] "RemoveStaleState removing state" podUID="3893dc70-b3e5-448c-9be1-f7198f2a3935" containerName="cilium-operator" Feb 13 21:27:02.504951 systemd[1]: Created slice kubepods-burstable-pod300093f6_ff18_4414_a451_15c0f57ac832.slice - libcontainer container kubepods-burstable-pod300093f6_ff18_4414_a451_15c0f57ac832.slice. 
Feb 13 21:27:02.539666 kubelet[2694]: I0213 21:27:02.539386 2694 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/300093f6-ff18-4414-a451-15c0f57ac832-cilium-config-path\") pod \"cilium-5nghx\" (UID: \"300093f6-ff18-4414-a451-15c0f57ac832\") " pod="kube-system/cilium-5nghx" Feb 13 21:27:02.539666 kubelet[2694]: I0213 21:27:02.539426 2694 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/300093f6-ff18-4414-a451-15c0f57ac832-cilium-ipsec-secrets\") pod \"cilium-5nghx\" (UID: \"300093f6-ff18-4414-a451-15c0f57ac832\") " pod="kube-system/cilium-5nghx" Feb 13 21:27:02.539666 kubelet[2694]: I0213 21:27:02.539447 2694 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/300093f6-ff18-4414-a451-15c0f57ac832-host-proc-sys-kernel\") pod \"cilium-5nghx\" (UID: \"300093f6-ff18-4414-a451-15c0f57ac832\") " pod="kube-system/cilium-5nghx" Feb 13 21:27:02.539666 kubelet[2694]: I0213 21:27:02.539465 2694 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/300093f6-ff18-4414-a451-15c0f57ac832-hubble-tls\") pod \"cilium-5nghx\" (UID: \"300093f6-ff18-4414-a451-15c0f57ac832\") " pod="kube-system/cilium-5nghx" Feb 13 21:27:02.539666 kubelet[2694]: I0213 21:27:02.539501 2694 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/300093f6-ff18-4414-a451-15c0f57ac832-lib-modules\") pod \"cilium-5nghx\" (UID: \"300093f6-ff18-4414-a451-15c0f57ac832\") " pod="kube-system/cilium-5nghx" Feb 13 21:27:02.539666 kubelet[2694]: I0213 21:27:02.539525 2694 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/300093f6-ff18-4414-a451-15c0f57ac832-cni-path\") pod \"cilium-5nghx\" (UID: \"300093f6-ff18-4414-a451-15c0f57ac832\") " pod="kube-system/cilium-5nghx" Feb 13 21:27:02.539982 kubelet[2694]: I0213 21:27:02.539539 2694 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/300093f6-ff18-4414-a451-15c0f57ac832-host-proc-sys-net\") pod \"cilium-5nghx\" (UID: \"300093f6-ff18-4414-a451-15c0f57ac832\") " pod="kube-system/cilium-5nghx" Feb 13 21:27:02.539982 kubelet[2694]: I0213 21:27:02.539574 2694 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/300093f6-ff18-4414-a451-15c0f57ac832-etc-cni-netd\") pod \"cilium-5nghx\" (UID: \"300093f6-ff18-4414-a451-15c0f57ac832\") " pod="kube-system/cilium-5nghx" Feb 13 21:27:02.539982 kubelet[2694]: I0213 21:27:02.539592 2694 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/300093f6-ff18-4414-a451-15c0f57ac832-clustermesh-secrets\") pod \"cilium-5nghx\" (UID: \"300093f6-ff18-4414-a451-15c0f57ac832\") " pod="kube-system/cilium-5nghx" Feb 13 21:27:02.539982 kubelet[2694]: I0213 21:27:02.539608 2694 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/300093f6-ff18-4414-a451-15c0f57ac832-cilium-run\") pod \"cilium-5nghx\" (UID: \"300093f6-ff18-4414-a451-15c0f57ac832\") " pod="kube-system/cilium-5nghx" Feb 13 21:27:02.539982 kubelet[2694]: I0213 21:27:02.539626 2694 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/300093f6-ff18-4414-a451-15c0f57ac832-cilium-cgroup\") pod \"cilium-5nghx\" (UID: \"300093f6-ff18-4414-a451-15c0f57ac832\") " pod="kube-system/cilium-5nghx" Feb 13 21:27:02.539982 kubelet[2694]: I0213 21:27:02.539643 2694 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/300093f6-ff18-4414-a451-15c0f57ac832-bpf-maps\") pod \"cilium-5nghx\" (UID: \"300093f6-ff18-4414-a451-15c0f57ac832\") " pod="kube-system/cilium-5nghx" Feb 13 21:27:02.540188 kubelet[2694]: I0213 21:27:02.539682 2694 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/300093f6-ff18-4414-a451-15c0f57ac832-xtables-lock\") pod \"cilium-5nghx\" (UID: \"300093f6-ff18-4414-a451-15c0f57ac832\") " pod="kube-system/cilium-5nghx" Feb 13 21:27:02.540188 kubelet[2694]: I0213 21:27:02.539733 2694 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r454g\" (UniqueName: \"kubernetes.io/projected/300093f6-ff18-4414-a451-15c0f57ac832-kube-api-access-r454g\") pod \"cilium-5nghx\" (UID: \"300093f6-ff18-4414-a451-15c0f57ac832\") " pod="kube-system/cilium-5nghx" Feb 13 21:27:02.540188 kubelet[2694]: I0213 21:27:02.539757 2694 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/300093f6-ff18-4414-a451-15c0f57ac832-hostproc\") pod \"cilium-5nghx\" (UID: \"300093f6-ff18-4414-a451-15c0f57ac832\") " pod="kube-system/cilium-5nghx" Feb 13 21:27:02.569834 sshd[4441]: Connection closed by 147.75.109.163 port 50206 Feb 13 21:27:02.570780 sshd-session[4439]: pam_unix(sshd:session): session closed for user core Feb 13 21:27:02.576500 systemd-logind[1487]: Session 25 logged out. Waiting for processes to exit. 
Feb 13 21:27:02.577162 systemd[1]: sshd@22-10.244.102.222:22-147.75.109.163:50206.service: Deactivated successfully.
Feb 13 21:27:02.581963 systemd[1]: session-25.scope: Deactivated successfully.
Feb 13 21:27:02.584963 systemd-logind[1487]: Removed session 25.
Feb 13 21:27:02.679902 kubelet[2694]: I0213 21:27:02.679585 2694 setters.go:600] "Node became not ready" node="srv-9zhep.gb1.brightbox.com" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-02-13T21:27:02Z","lastTransitionTime":"2025-02-13T21:27:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Feb 13 21:27:02.737254 systemd[1]: Started sshd@23-10.244.102.222:22-147.75.109.163:50220.service - OpenSSH per-connection server daemon (147.75.109.163:50220).
Feb 13 21:27:02.834270 containerd[1497]: time="2025-02-13T21:27:02.834185912Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5nghx,Uid:300093f6-ff18-4414-a451-15c0f57ac832,Namespace:kube-system,Attempt:0,}"
Feb 13 21:27:02.866922 containerd[1497]: time="2025-02-13T21:27:02.866634318Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 21:27:02.866922 containerd[1497]: time="2025-02-13T21:27:02.866709877Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 21:27:02.866922 containerd[1497]: time="2025-02-13T21:27:02.866721688Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 21:27:02.866922 containerd[1497]: time="2025-02-13T21:27:02.866826950Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 21:27:02.890420 systemd[1]: Started cri-containerd-40ad74e32168450218a5a4a8838c50422074ace41f77f8c00a34d6b2c45f8324.scope - libcontainer container 40ad74e32168450218a5a4a8838c50422074ace41f77f8c00a34d6b2c45f8324.
Feb 13 21:27:02.939180 containerd[1497]: time="2025-02-13T21:27:02.939139466Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5nghx,Uid:300093f6-ff18-4414-a451-15c0f57ac832,Namespace:kube-system,Attempt:0,} returns sandbox id \"40ad74e32168450218a5a4a8838c50422074ace41f77f8c00a34d6b2c45f8324\""
Feb 13 21:27:02.944447 containerd[1497]: time="2025-02-13T21:27:02.944235357Z" level=info msg="CreateContainer within sandbox \"40ad74e32168450218a5a4a8838c50422074ace41f77f8c00a34d6b2c45f8324\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 13 21:27:02.968250 containerd[1497]: time="2025-02-13T21:27:02.968163111Z" level=info msg="CreateContainer within sandbox \"40ad74e32168450218a5a4a8838c50422074ace41f77f8c00a34d6b2c45f8324\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"658cb9ca2754baf750ff6d59c51b84626dced8bc0befe3b91c19ef1366d9c2b6\""
Feb 13 21:27:02.970990 containerd[1497]: time="2025-02-13T21:27:02.969032044Z" level=info msg="StartContainer for \"658cb9ca2754baf750ff6d59c51b84626dced8bc0befe3b91c19ef1366d9c2b6\""
Feb 13 21:27:03.003298 systemd[1]: Started cri-containerd-658cb9ca2754baf750ff6d59c51b84626dced8bc0befe3b91c19ef1366d9c2b6.scope - libcontainer container 658cb9ca2754baf750ff6d59c51b84626dced8bc0befe3b91c19ef1366d9c2b6.
Feb 13 21:27:03.042618 containerd[1497]: time="2025-02-13T21:27:03.042571786Z" level=info msg="StartContainer for \"658cb9ca2754baf750ff6d59c51b84626dced8bc0befe3b91c19ef1366d9c2b6\" returns successfully"
Feb 13 21:27:03.062298 systemd[1]: cri-containerd-658cb9ca2754baf750ff6d59c51b84626dced8bc0befe3b91c19ef1366d9c2b6.scope: Deactivated successfully.
Feb 13 21:27:03.113320 containerd[1497]: time="2025-02-13T21:27:03.113156570Z" level=info msg="shim disconnected" id=658cb9ca2754baf750ff6d59c51b84626dced8bc0befe3b91c19ef1366d9c2b6 namespace=k8s.io
Feb 13 21:27:03.113320 containerd[1497]: time="2025-02-13T21:27:03.113307602Z" level=warning msg="cleaning up after shim disconnected" id=658cb9ca2754baf750ff6d59c51b84626dced8bc0befe3b91c19ef1366d9c2b6 namespace=k8s.io
Feb 13 21:27:03.113320 containerd[1497]: time="2025-02-13T21:27:03.113334954Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 21:27:03.170555 containerd[1497]: time="2025-02-13T21:27:03.169528297Z" level=info msg="CreateContainer within sandbox \"40ad74e32168450218a5a4a8838c50422074ace41f77f8c00a34d6b2c45f8324\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb 13 21:27:03.180731 containerd[1497]: time="2025-02-13T21:27:03.180671318Z" level=info msg="CreateContainer within sandbox \"40ad74e32168450218a5a4a8838c50422074ace41f77f8c00a34d6b2c45f8324\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"46e71915212b78ce822c54d35ff9d3f9569d25499265772d162766bd708fda00\""
Feb 13 21:27:03.181364 containerd[1497]: time="2025-02-13T21:27:03.181306476Z" level=info msg="StartContainer for \"46e71915212b78ce822c54d35ff9d3f9569d25499265772d162766bd708fda00\""
Feb 13 21:27:03.226274 systemd[1]: Started cri-containerd-46e71915212b78ce822c54d35ff9d3f9569d25499265772d162766bd708fda00.scope - libcontainer container 46e71915212b78ce822c54d35ff9d3f9569d25499265772d162766bd708fda00.
Feb 13 21:27:03.257805 containerd[1497]: time="2025-02-13T21:27:03.257552859Z" level=info msg="StartContainer for \"46e71915212b78ce822c54d35ff9d3f9569d25499265772d162766bd708fda00\" returns successfully"
Feb 13 21:27:03.270325 systemd[1]: cri-containerd-46e71915212b78ce822c54d35ff9d3f9569d25499265772d162766bd708fda00.scope: Deactivated successfully.
Feb 13 21:27:03.297236 containerd[1497]: time="2025-02-13T21:27:03.296997965Z" level=info msg="shim disconnected" id=46e71915212b78ce822c54d35ff9d3f9569d25499265772d162766bd708fda00 namespace=k8s.io
Feb 13 21:27:03.297236 containerd[1497]: time="2025-02-13T21:27:03.297057301Z" level=warning msg="cleaning up after shim disconnected" id=46e71915212b78ce822c54d35ff9d3f9569d25499265772d162766bd708fda00 namespace=k8s.io
Feb 13 21:27:03.297236 containerd[1497]: time="2025-02-13T21:27:03.297066186Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 21:27:03.736917 sshd[4456]: Accepted publickey for core from 147.75.109.163 port 50220 ssh2: RSA SHA256:ulgBgUPlADOweaxhAmkTx/EhcRWsA2XzxJSff9bgRRQ
Feb 13 21:27:03.739863 sshd-session[4456]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 21:27:03.749437 systemd-logind[1487]: New session 26 of user core.
Feb 13 21:27:03.758281 systemd[1]: Started session-26.scope - Session 26 of User core.
Feb 13 21:27:04.178615 containerd[1497]: time="2025-02-13T21:27:04.178551441Z" level=info msg="CreateContainer within sandbox \"40ad74e32168450218a5a4a8838c50422074ace41f77f8c00a34d6b2c45f8324\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 13 21:27:04.207808 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2273396129.mount: Deactivated successfully.
Feb 13 21:27:04.210633 containerd[1497]: time="2025-02-13T21:27:04.209391939Z" level=info msg="CreateContainer within sandbox \"40ad74e32168450218a5a4a8838c50422074ace41f77f8c00a34d6b2c45f8324\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"3c346ade7449d6957ec91c53f7158bf657deec81f1f1c8ace35cc04f27c5c5e7\""
Feb 13 21:27:04.210961 containerd[1497]: time="2025-02-13T21:27:04.210935973Z" level=info msg="StartContainer for \"3c346ade7449d6957ec91c53f7158bf657deec81f1f1c8ace35cc04f27c5c5e7\""
Feb 13 21:27:04.246320 systemd[1]: Started cri-containerd-3c346ade7449d6957ec91c53f7158bf657deec81f1f1c8ace35cc04f27c5c5e7.scope - libcontainer container 3c346ade7449d6957ec91c53f7158bf657deec81f1f1c8ace35cc04f27c5c5e7.
Feb 13 21:27:04.282530 containerd[1497]: time="2025-02-13T21:27:04.282319154Z" level=info msg="StartContainer for \"3c346ade7449d6957ec91c53f7158bf657deec81f1f1c8ace35cc04f27c5c5e7\" returns successfully"
Feb 13 21:27:04.287388 systemd[1]: cri-containerd-3c346ade7449d6957ec91c53f7158bf657deec81f1f1c8ace35cc04f27c5c5e7.scope: Deactivated successfully.
Feb 13 21:27:04.311548 containerd[1497]: time="2025-02-13T21:27:04.311476633Z" level=info msg="shim disconnected" id=3c346ade7449d6957ec91c53f7158bf657deec81f1f1c8ace35cc04f27c5c5e7 namespace=k8s.io
Feb 13 21:27:04.311548 containerd[1497]: time="2025-02-13T21:27:04.311538747Z" level=warning msg="cleaning up after shim disconnected" id=3c346ade7449d6957ec91c53f7158bf657deec81f1f1c8ace35cc04f27c5c5e7 namespace=k8s.io
Feb 13 21:27:04.311548 containerd[1497]: time="2025-02-13T21:27:04.311548578Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 21:27:04.358568 sshd[4618]: Connection closed by 147.75.109.163 port 50220
Feb 13 21:27:04.359228 sshd-session[4456]: pam_unix(sshd:session): session closed for user core
Feb 13 21:27:04.366271 systemd[1]: sshd@23-10.244.102.222:22-147.75.109.163:50220.service: Deactivated successfully.
Feb 13 21:27:04.371980 systemd[1]: session-26.scope: Deactivated successfully.
Feb 13 21:27:04.373535 systemd-logind[1487]: Session 26 logged out. Waiting for processes to exit.
Feb 13 21:27:04.375708 systemd-logind[1487]: Removed session 26.
Feb 13 21:27:04.528617 systemd[1]: Started sshd@24-10.244.102.222:22-147.75.109.163:50224.service - OpenSSH per-connection server daemon (147.75.109.163:50224).
Feb 13 21:27:04.655468 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3c346ade7449d6957ec91c53f7158bf657deec81f1f1c8ace35cc04f27c5c5e7-rootfs.mount: Deactivated successfully.
Feb 13 21:27:05.194778 containerd[1497]: time="2025-02-13T21:27:05.194653568Z" level=info msg="CreateContainer within sandbox \"40ad74e32168450218a5a4a8838c50422074ace41f77f8c00a34d6b2c45f8324\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 13 21:27:05.225928 containerd[1497]: time="2025-02-13T21:27:05.225872771Z" level=info msg="CreateContainer within sandbox \"40ad74e32168450218a5a4a8838c50422074ace41f77f8c00a34d6b2c45f8324\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"1efd353ff8a066baf9e0e33f6f17c20492eedcdaeff84282764a88df71514d66\""
Feb 13 21:27:05.226806 containerd[1497]: time="2025-02-13T21:27:05.226745044Z" level=info msg="StartContainer for \"1efd353ff8a066baf9e0e33f6f17c20492eedcdaeff84282764a88df71514d66\""
Feb 13 21:27:05.272316 systemd[1]: Started cri-containerd-1efd353ff8a066baf9e0e33f6f17c20492eedcdaeff84282764a88df71514d66.scope - libcontainer container 1efd353ff8a066baf9e0e33f6f17c20492eedcdaeff84282764a88df71514d66.
Feb 13 21:27:05.309167 systemd[1]: cri-containerd-1efd353ff8a066baf9e0e33f6f17c20492eedcdaeff84282764a88df71514d66.scope: Deactivated successfully.
Feb 13 21:27:05.311067 containerd[1497]: time="2025-02-13T21:27:05.310994702Z" level=info msg="StartContainer for \"1efd353ff8a066baf9e0e33f6f17c20492eedcdaeff84282764a88df71514d66\" returns successfully"
Feb 13 21:27:05.336907 containerd[1497]: time="2025-02-13T21:27:05.336693038Z" level=info msg="shim disconnected" id=1efd353ff8a066baf9e0e33f6f17c20492eedcdaeff84282764a88df71514d66 namespace=k8s.io
Feb 13 21:27:05.336907 containerd[1497]: time="2025-02-13T21:27:05.336748811Z" level=warning msg="cleaning up after shim disconnected" id=1efd353ff8a066baf9e0e33f6f17c20492eedcdaeff84282764a88df71514d66 namespace=k8s.io
Feb 13 21:27:05.336907 containerd[1497]: time="2025-02-13T21:27:05.336757669Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 21:27:05.428167 sshd[4681]: Accepted publickey for core from 147.75.109.163 port 50224 ssh2: RSA SHA256:ulgBgUPlADOweaxhAmkTx/EhcRWsA2XzxJSff9bgRRQ
Feb 13 21:27:05.432494 sshd-session[4681]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 21:27:05.440577 systemd-logind[1487]: New session 27 of user core.
Feb 13 21:27:05.448349 systemd[1]: Started session-27.scope - Session 27 of User core.
Feb 13 21:27:05.656208 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1efd353ff8a066baf9e0e33f6f17c20492eedcdaeff84282764a88df71514d66-rootfs.mount: Deactivated successfully.
Feb 13 21:27:05.701890 kubelet[2694]: E0213 21:27:05.701519 2694 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 21:27:06.204574 containerd[1497]: time="2025-02-13T21:27:06.204516663Z" level=info msg="CreateContainer within sandbox \"40ad74e32168450218a5a4a8838c50422074ace41f77f8c00a34d6b2c45f8324\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 13 21:27:06.220713 containerd[1497]: time="2025-02-13T21:27:06.220673641Z" level=info msg="CreateContainer within sandbox \"40ad74e32168450218a5a4a8838c50422074ace41f77f8c00a34d6b2c45f8324\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"70f76cb235f2e36fc77c1d29ea06447434c42e64b39d2878eadb2e636c7e0fd2\""
Feb 13 21:27:06.222262 containerd[1497]: time="2025-02-13T21:27:06.222233431Z" level=info msg="StartContainer for \"70f76cb235f2e36fc77c1d29ea06447434c42e64b39d2878eadb2e636c7e0fd2\""
Feb 13 21:27:06.270275 systemd[1]: Started cri-containerd-70f76cb235f2e36fc77c1d29ea06447434c42e64b39d2878eadb2e636c7e0fd2.scope - libcontainer container 70f76cb235f2e36fc77c1d29ea06447434c42e64b39d2878eadb2e636c7e0fd2.
Feb 13 21:27:06.308136 containerd[1497]: time="2025-02-13T21:27:06.308072614Z" level=info msg="StartContainer for \"70f76cb235f2e36fc77c1d29ea06447434c42e64b39d2878eadb2e636c7e0fd2\" returns successfully"
Feb 13 21:27:06.654689 systemd[1]: run-containerd-runc-k8s.io-70f76cb235f2e36fc77c1d29ea06447434c42e64b39d2878eadb2e636c7e0fd2-runc.rQkYif.mount: Deactivated successfully.
Feb 13 21:27:06.748131 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Feb 13 21:27:07.251058 kubelet[2694]: I0213 21:27:07.250894 2694 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-5nghx" podStartSLOduration=5.250771003 podStartE2EDuration="5.250771003s" podCreationTimestamp="2025-02-13 21:27:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 21:27:07.24908183 +0000 UTC m=+146.864710693" watchObservedRunningTime="2025-02-13 21:27:07.250771003 +0000 UTC m=+146.866399755"
Feb 13 21:27:10.030586 systemd-networkd[1419]: lxc_health: Link UP
Feb 13 21:27:10.042673 systemd-networkd[1419]: lxc_health: Gained carrier
Feb 13 21:27:10.597829 systemd[1]: run-containerd-runc-k8s.io-70f76cb235f2e36fc77c1d29ea06447434c42e64b39d2878eadb2e636c7e0fd2-runc.6LkKZF.mount: Deactivated successfully.
Feb 13 21:27:11.636510 systemd-networkd[1419]: lxc_health: Gained IPv6LL
Feb 13 21:27:17.460679 sshd[4741]: Connection closed by 147.75.109.163 port 50224
Feb 13 21:27:17.463361 sshd-session[4681]: pam_unix(sshd:session): session closed for user core
Feb 13 21:27:17.472710 systemd-logind[1487]: Session 27 logged out. Waiting for processes to exit.
Feb 13 21:27:17.473645 systemd[1]: sshd@24-10.244.102.222:22-147.75.109.163:50224.service: Deactivated successfully.
Feb 13 21:27:17.477275 systemd[1]: session-27.scope: Deactivated successfully.
Feb 13 21:27:17.480200 systemd-logind[1487]: Removed session 27.