Feb 13 23:18:54.015864 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Feb 13 17:44:05 -00 2025
Feb 13 23:18:54.015916 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=ed9b5d8ea73d2e47b8decea8124089e04dd398ef43013c1b1a5809314044b1c3
Feb 13 23:18:54.015931 kernel: BIOS-provided physical RAM map:
Feb 13 23:18:54.015947 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Feb 13 23:18:54.015956 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Feb 13 23:18:54.015966 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Feb 13 23:18:54.015978 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable
Feb 13 23:18:54.015988 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved
Feb 13 23:18:54.015997 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Feb 13 23:18:54.016007 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Feb 13 23:18:54.016017 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Feb 13 23:18:54.016027 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Feb 13 23:18:54.016048 kernel: NX (Execute Disable) protection: active
Feb 13 23:18:54.016059 kernel: APIC: Static calls initialized
Feb 13 23:18:54.016071 kernel: SMBIOS 2.8 present.
Feb 13 23:18:54.016087 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014
Feb 13 23:18:54.016099 kernel: Hypervisor detected: KVM
Feb 13 23:18:54.016115 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Feb 13 23:18:54.016126 kernel: kvm-clock: using sched offset of 4629553002 cycles
Feb 13 23:18:54.016138 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Feb 13 23:18:54.016149 kernel: tsc: Detected 2799.998 MHz processor
Feb 13 23:18:54.016160 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 13 23:18:54.016171 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 13 23:18:54.016182 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000
Feb 13 23:18:54.016193 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Feb 13 23:18:54.016204 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 13 23:18:54.016220 kernel: Using GB pages for direct mapping
Feb 13 23:18:54.016231 kernel: ACPI: Early table checksum verification disabled
Feb 13 23:18:54.016242 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Feb 13 23:18:54.016253 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 23:18:54.016264 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 23:18:54.016276 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 23:18:54.016286 kernel: ACPI: FACS 0x000000007FFDFD40 000040
Feb 13 23:18:54.016297 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 23:18:54.016308 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 23:18:54.016323 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 23:18:54.016352 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 23:18:54.016367 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480]
Feb 13 23:18:54.016378 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c]
Feb 13 23:18:54.016389 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f]
Feb 13 23:18:54.016408 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570]
Feb 13 23:18:54.016420 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740]
Feb 13 23:18:54.016436 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c]
Feb 13 23:18:54.016448 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4]
Feb 13 23:18:54.016459 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Feb 13 23:18:54.016476 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Feb 13 23:18:54.016488 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0
Feb 13 23:18:54.016500 kernel: SRAT: PXM 0 -> APIC 0x03 -> Node 0
Feb 13 23:18:54.016511 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0
Feb 13 23:18:54.016522 kernel: SRAT: PXM 0 -> APIC 0x05 -> Node 0
Feb 13 23:18:54.016539 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0
Feb 13 23:18:54.016551 kernel: SRAT: PXM 0 -> APIC 0x07 -> Node 0
Feb 13 23:18:54.016562 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0
Feb 13 23:18:54.016573 kernel: SRAT: PXM 0 -> APIC 0x09 -> Node 0
Feb 13 23:18:54.016584 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0
Feb 13 23:18:54.016596 kernel: SRAT: PXM 0 -> APIC 0x0b -> Node 0
Feb 13 23:18:54.016607 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0
Feb 13 23:18:54.016618 kernel: SRAT: PXM 0 -> APIC 0x0d -> Node 0
Feb 13 23:18:54.016634 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0
Feb 13 23:18:54.016651 kernel: SRAT: PXM 0 -> APIC 0x0f -> Node 0
Feb 13 23:18:54.016662 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Feb 13 23:18:54.016674 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Feb 13 23:18:54.016685 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug
Feb 13 23:18:54.016706 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00000000-0x7ffdbfff]
Feb 13 23:18:54.016720 kernel: NODE_DATA(0) allocated [mem 0x7ffd6000-0x7ffdbfff]
Feb 13 23:18:54.016732 kernel: Zone ranges:
Feb 13 23:18:54.016743 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 13 23:18:54.016755 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff]
Feb 13 23:18:54.016771 kernel: Normal empty
Feb 13 23:18:54.016783 kernel: Movable zone start for each node
Feb 13 23:18:54.016794 kernel: Early memory node ranges
Feb 13 23:18:54.016806 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Feb 13 23:18:54.016817 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff]
Feb 13 23:18:54.016828 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff]
Feb 13 23:18:54.016840 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 13 23:18:54.016851 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Feb 13 23:18:54.016867 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges
Feb 13 23:18:54.016880 kernel: ACPI: PM-Timer IO Port: 0x608
Feb 13 23:18:54.016896 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Feb 13 23:18:54.016908 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Feb 13 23:18:54.016919 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Feb 13 23:18:54.016931 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Feb 13 23:18:54.016942 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 13 23:18:54.016954 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Feb 13 23:18:54.016965 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Feb 13 23:18:54.016977 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 13 23:18:54.016988 kernel: TSC deadline timer available
Feb 13 23:18:54.017004 kernel: smpboot: Allowing 16 CPUs, 14 hotplug CPUs
Feb 13 23:18:54.017016 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Feb 13 23:18:54.017027 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Feb 13 23:18:54.017038 kernel: Booting paravirtualized kernel on KVM
Feb 13 23:18:54.017050 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 13 23:18:54.017062 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1
Feb 13 23:18:54.017073 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u262144
Feb 13 23:18:54.017085 kernel: pcpu-alloc: s197032 r8192 d32344 u262144 alloc=1*2097152
Feb 13 23:18:54.017096 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Feb 13 23:18:54.017112 kernel: kvm-guest: PV spinlocks enabled
Feb 13 23:18:54.017123 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Feb 13 23:18:54.017136 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=ed9b5d8ea73d2e47b8decea8124089e04dd398ef43013c1b1a5809314044b1c3
Feb 13 23:18:54.017148 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 23:18:54.017159 kernel: random: crng init done
Feb 13 23:18:54.017171 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 23:18:54.017182 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Feb 13 23:18:54.017193 kernel: Fallback order for Node 0: 0
Feb 13 23:18:54.017210 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515804
Feb 13 23:18:54.017226 kernel: Policy zone: DMA32
Feb 13 23:18:54.017239 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 23:18:54.017250 kernel: software IO TLB: area num 16.
Feb 13 23:18:54.017262 kernel: Memory: 1901528K/2096616K available (12288K kernel code, 2301K rwdata, 22736K rodata, 42976K init, 2216K bss, 194828K reserved, 0K cma-reserved)
Feb 13 23:18:54.017274 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Feb 13 23:18:54.017285 kernel: Kernel/User page tables isolation: enabled
Feb 13 23:18:54.017296 kernel: ftrace: allocating 37923 entries in 149 pages
Feb 13 23:18:54.017308 kernel: ftrace: allocated 149 pages with 4 groups
Feb 13 23:18:54.017324 kernel: Dynamic Preempt: voluntary
Feb 13 23:18:54.020415 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 23:18:54.020437 kernel: rcu: RCU event tracing is enabled.
Feb 13 23:18:54.020449 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Feb 13 23:18:54.020462 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 23:18:54.020490 kernel: Rude variant of Tasks RCU enabled.
Feb 13 23:18:54.020506 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 23:18:54.020519 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 23:18:54.020531 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Feb 13 23:18:54.020543 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16
Feb 13 23:18:54.020555 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 23:18:54.020567 kernel: Console: colour VGA+ 80x25
Feb 13 23:18:54.020584 kernel: printk: console [tty0] enabled
Feb 13 23:18:54.020596 kernel: printk: console [ttyS0] enabled
Feb 13 23:18:54.020608 kernel: ACPI: Core revision 20230628
Feb 13 23:18:54.020620 kernel: APIC: Switch to symmetric I/O mode setup
Feb 13 23:18:54.020632 kernel: x2apic enabled
Feb 13 23:18:54.020648 kernel: APIC: Switched APIC routing to: physical x2apic
Feb 13 23:18:54.020669 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x285c3ee517e, max_idle_ns: 440795257231 ns
Feb 13 23:18:54.020682 kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998)
Feb 13 23:18:54.020695 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Feb 13 23:18:54.020716 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Feb 13 23:18:54.020728 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Feb 13 23:18:54.020741 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 13 23:18:54.020752 kernel: Spectre V2 : Mitigation: Retpolines
Feb 13 23:18:54.020764 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 13 23:18:54.020782 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Feb 13 23:18:54.020795 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Feb 13 23:18:54.020807 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Feb 13 23:18:54.020819 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Feb 13 23:18:54.020831 kernel: MDS: Mitigation: Clear CPU buffers
Feb 13 23:18:54.020842 kernel: MMIO Stale Data: Unknown: No mitigations
Feb 13 23:18:54.020854 kernel: SRBDS: Unknown: Dependent on hypervisor status
Feb 13 23:18:54.020866 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 13 23:18:54.020878 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 13 23:18:54.020890 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 13 23:18:54.020902 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Feb 13 23:18:54.020919 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Feb 13 23:18:54.020932 kernel: Freeing SMP alternatives memory: 32K
Feb 13 23:18:54.020949 kernel: pid_max: default: 32768 minimum: 301
Feb 13 23:18:54.020962 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 23:18:54.020974 kernel: landlock: Up and running.
Feb 13 23:18:54.020986 kernel: SELinux: Initializing.
Feb 13 23:18:54.020998 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Feb 13 23:18:54.021010 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Feb 13 23:18:54.021022 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9)
Feb 13 23:18:54.021034 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Feb 13 23:18:54.021046 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Feb 13 23:18:54.021065 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Feb 13 23:18:54.021077 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only.
Feb 13 23:18:54.021089 kernel: signal: max sigframe size: 1776
Feb 13 23:18:54.021101 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 23:18:54.021113 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 23:18:54.021125 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Feb 13 23:18:54.021137 kernel: smp: Bringing up secondary CPUs ...
Feb 13 23:18:54.021149 kernel: smpboot: x86: Booting SMP configuration:
Feb 13 23:18:54.021161 kernel: .... node #0, CPUs: #1
Feb 13 23:18:54.021178 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1
Feb 13 23:18:54.021190 kernel: smp: Brought up 1 node, 2 CPUs
Feb 13 23:18:54.021202 kernel: smpboot: Max logical packages: 16
Feb 13 23:18:54.021214 kernel: smpboot: Total of 2 processors activated (11199.99 BogoMIPS)
Feb 13 23:18:54.021226 kernel: devtmpfs: initialized
Feb 13 23:18:54.021238 kernel: x86/mm: Memory block size: 128MB
Feb 13 23:18:54.021251 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 23:18:54.021263 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Feb 13 23:18:54.021275 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 23:18:54.021292 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 23:18:54.021304 kernel: audit: initializing netlink subsys (disabled)
Feb 13 23:18:54.021316 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 23:18:54.021328 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 13 23:18:54.021357 kernel: audit: type=2000 audit(1739488733.168:1): state=initialized audit_enabled=0 res=1
Feb 13 23:18:54.021370 kernel: cpuidle: using governor menu
Feb 13 23:18:54.021382 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 23:18:54.021394 kernel: dca service started, version 1.12.1
Feb 13 23:18:54.021406 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Feb 13 23:18:54.021425 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Feb 13 23:18:54.021437 kernel: PCI: Using configuration type 1 for base access
Feb 13 23:18:54.021449 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 13 23:18:54.021462 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 23:18:54.021474 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 23:18:54.021486 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 23:18:54.021498 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 23:18:54.021510 kernel: ACPI: Added _OSI(Module Device)
Feb 13 23:18:54.021522 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 23:18:54.021539 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 23:18:54.021552 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 23:18:54.021564 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 13 23:18:54.021576 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Feb 13 23:18:54.021587 kernel: ACPI: Interpreter enabled
Feb 13 23:18:54.021599 kernel: ACPI: PM: (supports S0 S5)
Feb 13 23:18:54.021611 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 13 23:18:54.021623 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 13 23:18:54.021636 kernel: PCI: Using E820 reservations for host bridge windows
Feb 13 23:18:54.021653 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Feb 13 23:18:54.021665 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 13 23:18:54.021943 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 23:18:54.022124 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Feb 13 23:18:54.022282 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Feb 13 23:18:54.022301 kernel: PCI host bridge to bus 0000:00
Feb 13 23:18:54.023519 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Feb 13 23:18:54.023680 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Feb 13 23:18:54.023838 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb 13 23:18:54.023980 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Feb 13 23:18:54.024121 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Feb 13 23:18:54.024265 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window]
Feb 13 23:18:54.025851 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 13 23:18:54.026040 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Feb 13 23:18:54.026238 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000
Feb 13 23:18:54.026488 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfa000000-0xfbffffff pref]
Feb 13 23:18:54.026652 kernel: pci 0000:00:01.0: reg 0x14: [mem 0xfea50000-0xfea50fff]
Feb 13 23:18:54.026826 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea40000-0xfea4ffff pref]
Feb 13 23:18:54.026987 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Feb 13 23:18:54.027163 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Feb 13 23:18:54.027348 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea51000-0xfea51fff]
Feb 13 23:18:54.027540 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Feb 13 23:18:54.027715 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea52000-0xfea52fff]
Feb 13 23:18:54.027890 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Feb 13 23:18:54.028052 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea53000-0xfea53fff]
Feb 13 23:18:54.028224 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Feb 13 23:18:54.029842 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea54000-0xfea54fff]
Feb 13 23:18:54.030031 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Feb 13 23:18:54.030200 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea55000-0xfea55fff]
Feb 13 23:18:54.030444 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Feb 13 23:18:54.030608 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea56000-0xfea56fff]
Feb 13 23:18:54.030795 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Feb 13 23:18:54.030964 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea57000-0xfea57fff]
Feb 13 23:18:54.031147 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Feb 13 23:18:54.031314 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea58000-0xfea58fff]
Feb 13 23:18:54.031528 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Feb 13 23:18:54.031688 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0c0-0xc0df]
Feb 13 23:18:54.031859 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfea59000-0xfea59fff]
Feb 13 23:18:54.032015 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref]
Feb 13 23:18:54.032184 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfea00000-0xfea3ffff pref]
Feb 13 23:18:54.032464 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Feb 13 23:18:54.032631 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Feb 13 23:18:54.032817 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfea5a000-0xfea5afff]
Feb 13 23:18:54.032975 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfd004000-0xfd007fff 64bit pref]
Feb 13 23:18:54.033142 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Feb 13 23:18:54.033300 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Feb 13 23:18:54.033505 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Feb 13 23:18:54.035395 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0e0-0xc0ff]
Feb 13 23:18:54.035610 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea5b000-0xfea5bfff]
Feb 13 23:18:54.035799 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Feb 13 23:18:54.035960 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Feb 13 23:18:54.036137 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400
Feb 13 23:18:54.036314 kernel: pci 0000:01:00.0: reg 0x10: [mem 0xfda00000-0xfda000ff 64bit]
Feb 13 23:18:54.036549 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Feb 13 23:18:54.036721 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Feb 13 23:18:54.036879 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Feb 13 23:18:54.037050 kernel: pci_bus 0000:02: extended config space not accessible
Feb 13 23:18:54.037241 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000
Feb 13 23:18:54.039494 kernel: pci 0000:02:01.0: reg 0x10: [mem 0xfd800000-0xfd80000f]
Feb 13 23:18:54.039673 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Feb 13 23:18:54.039854 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Feb 13 23:18:54.040035 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330
Feb 13 23:18:54.040202 kernel: pci 0000:03:00.0: reg 0x10: [mem 0xfe800000-0xfe803fff 64bit]
Feb 13 23:18:54.040381 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Feb 13 23:18:54.040540 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Feb 13 23:18:54.040718 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Feb 13 23:18:54.040896 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00
Feb 13 23:18:54.041082 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref]
Feb 13 23:18:54.041254 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Feb 13 23:18:54.042489 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Feb 13 23:18:54.042654 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Feb 13 23:18:54.042832 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Feb 13 23:18:54.042995 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Feb 13 23:18:54.043188 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Feb 13 23:18:54.043368 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Feb 13 23:18:54.043531 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Feb 13 23:18:54.043693 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Feb 13 23:18:54.043869 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Feb 13 23:18:54.044032 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Feb 13 23:18:54.044194 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Feb 13 23:18:54.046962 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Feb 13 23:18:54.047149 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Feb 13 23:18:54.047314 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Feb 13 23:18:54.047509 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Feb 13 23:18:54.047669 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Feb 13 23:18:54.047839 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Feb 13 23:18:54.047859 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Feb 13 23:18:54.047873 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Feb 13 23:18:54.047885 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Feb 13 23:18:54.047906 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Feb 13 23:18:54.047919 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Feb 13 23:18:54.047931 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Feb 13 23:18:54.047943 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Feb 13 23:18:54.047956 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Feb 13 23:18:54.047968 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Feb 13 23:18:54.047980 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Feb 13 23:18:54.047992 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Feb 13 23:18:54.048005 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Feb 13 23:18:54.048040 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Feb 13 23:18:54.048053 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Feb 13 23:18:54.048066 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Feb 13 23:18:54.048078 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Feb 13 23:18:54.048090 kernel: iommu: Default domain type: Translated
Feb 13 23:18:54.048102 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 13 23:18:54.048114 kernel: PCI: Using ACPI for IRQ routing
Feb 13 23:18:54.048127 kernel: PCI: pci_cache_line_size set to 64 bytes
Feb 13 23:18:54.048139 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Feb 13 23:18:54.048157 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff]
Feb 13 23:18:54.048316 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Feb 13 23:18:54.048519 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Feb 13 23:18:54.048676 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Feb 13 23:18:54.048706 kernel: vgaarb: loaded
Feb 13 23:18:54.048721 kernel: clocksource: Switched to clocksource kvm-clock
Feb 13 23:18:54.048734 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 23:18:54.048747 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 23:18:54.048766 kernel: pnp: PnP ACPI init
Feb 13 23:18:54.048933 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Feb 13 23:18:54.048954 kernel: pnp: PnP ACPI: found 5 devices
Feb 13 23:18:54.048967 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 13 23:18:54.048980 kernel: NET: Registered PF_INET protocol family
Feb 13 23:18:54.048993 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 23:18:54.049005 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Feb 13 23:18:54.049018 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 23:18:54.049030 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 13 23:18:54.049050 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Feb 13 23:18:54.049062 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Feb 13 23:18:54.049075 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Feb 13 23:18:54.049088 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Feb 13 23:18:54.049100 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 23:18:54.049112 kernel: NET: Registered PF_XDP protocol family
Feb 13 23:18:54.049266 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000
Feb 13 23:18:54.049454 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Feb 13 23:18:54.049620 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Feb 13 23:18:54.049791 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Feb 13 23:18:54.049949 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Feb 13 23:18:54.050109 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Feb 13 23:18:54.050268 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Feb 13 23:18:54.050456 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Feb 13 23:18:54.050624 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Feb 13 23:18:54.050796 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Feb 13 23:18:54.050954 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Feb 13 23:18:54.051113 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Feb 13 23:18:54.051270 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Feb 13 23:18:54.051459 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Feb 13 23:18:54.051622 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Feb 13 23:18:54.051803 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Feb 13 23:18:54.051996 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Feb 13 23:18:54.052173 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Feb 13 23:18:54.052333 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Feb 13 23:18:54.052527 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Feb 13 23:18:54.052683 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Feb 13 23:18:54.052852 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Feb 13 23:18:54.053007 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Feb 13 23:18:54.053165 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Feb 13 23:18:54.053328 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Feb 13 23:18:54.053511 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Feb 13 23:18:54.053668 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Feb 13 23:18:54.053843 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Feb 13 23:18:54.054004 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Feb 13 23:18:54.054173 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Feb 13 23:18:54.054376 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Feb 13 23:18:54.054538 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Feb 13 23:18:54.054694 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Feb 13 23:18:54.054863 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Feb 13 23:18:54.055020 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Feb 13 23:18:54.055177 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Feb 13 23:18:54.055332 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Feb 13 23:18:54.055518 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Feb 13 23:18:54.055675 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Feb 13 23:18:54.055851 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Feb 13 23:18:54.056010 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Feb 13 23:18:54.056167 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Feb 13 23:18:54.056324 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Feb 13 23:18:54.056523 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Feb 13 23:18:54.056689 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Feb 13 23:18:54.056858 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Feb 13 23:18:54.057015 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Feb 13 23:18:54.057171 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Feb 13 23:18:54.057328 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Feb 13 23:18:54.057513 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Feb 13 23:18:54.057667 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Feb 13 23:18:54.057826 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Feb 13 23:18:54.057980 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Feb 13 23:18:54.058125 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Feb 13 23:18:54.058269 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Feb 13 23:18:54.058461 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window]
Feb 13 23:18:54.058625 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Feb 13 23:18:54.058789 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff]
Feb 13 23:18:54.058937 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Feb 13 23:18:54.059106 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff]
Feb 13 23:18:54.059268 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff]
Feb 13 23:18:54.059446 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff]
Feb 13 23:18:54.059600 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Feb 13 23:18:54.059779 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff]
Feb 13 23:18:54.059932 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff]
Feb 13 23:18:54.060081 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Feb 13 23:18:54.060264 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff]
Feb 13 23:18:54.060463 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff]
Feb 13 23:18:54.060621 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Feb 13 23:18:54.060802 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff]
Feb 13 23:18:54.060951 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff]
Feb 13 23:18:54.061098 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Feb 13 23:18:54.061256 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff]
Feb 13 23:18:54.061439 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff]
Feb 13 23:18:54.061590 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref]
Feb 13 23:18:54.061769 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff]
Feb 13 23:18:54.061919 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff]
Feb 13 23:18:54.062069 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref]
Feb 13 23:18:54.062228 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff]
Feb 13 23:18:54.062407 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff]
Feb 13 23:18:54.062571 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Feb 13 23:18:54.062592 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Feb 13 23:18:54.062606 kernel: PCI: CLS 0 bytes, default 64
Feb 13 23:18:54.062619 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Feb 
13 23:18:54.062632 kernel: software IO TLB: mapped [mem 0x0000000079800000-0x000000007d800000] (64MB) Feb 13 23:18:54.062645 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Feb 13 23:18:54.062658 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x285c3ee517e, max_idle_ns: 440795257231 ns Feb 13 23:18:54.062671 kernel: Initialise system trusted keyrings Feb 13 23:18:54.062692 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Feb 13 23:18:54.062716 kernel: Key type asymmetric registered Feb 13 23:18:54.062729 kernel: Asymmetric key parser 'x509' registered Feb 13 23:18:54.062742 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Feb 13 23:18:54.062755 kernel: io scheduler mq-deadline registered Feb 13 23:18:54.062768 kernel: io scheduler kyber registered Feb 13 23:18:54.062780 kernel: io scheduler bfq registered Feb 13 23:18:54.062941 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Feb 13 23:18:54.063105 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Feb 13 23:18:54.063291 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 23:18:54.063528 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Feb 13 23:18:54.063686 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Feb 13 23:18:54.063855 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 23:18:54.064014 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Feb 13 23:18:54.064170 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Feb 13 23:18:54.064334 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 23:18:54.064557 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Feb 13 
23:18:54.064725 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Feb 13 23:18:54.064883 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 23:18:54.065043 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Feb 13 23:18:54.065203 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Feb 13 23:18:54.065420 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 23:18:54.065613 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Feb 13 23:18:54.065786 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Feb 13 23:18:54.065944 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 23:18:54.066102 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Feb 13 23:18:54.066259 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Feb 13 23:18:54.066453 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 23:18:54.066613 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Feb 13 23:18:54.066782 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Feb 13 23:18:54.066940 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 23:18:54.066961 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Feb 13 23:18:54.066975 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Feb 13 23:18:54.066996 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Feb 13 23:18:54.067010 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 13 23:18:54.067023 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Feb 13 23:18:54.067036 kernel: 
i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Feb 13 23:18:54.067049 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Feb 13 23:18:54.067062 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Feb 13 23:18:54.067225 kernel: rtc_cmos 00:03: RTC can wake from S4 Feb 13 23:18:54.067247 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Feb 13 23:18:54.067438 kernel: rtc_cmos 00:03: registered as rtc0 Feb 13 23:18:54.067587 kernel: rtc_cmos 00:03: setting system clock to 2025-02-13T23:18:53 UTC (1739488733) Feb 13 23:18:54.067745 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Feb 13 23:18:54.067765 kernel: intel_pstate: CPU model not supported Feb 13 23:18:54.067778 kernel: NET: Registered PF_INET6 protocol family Feb 13 23:18:54.067791 kernel: Segment Routing with IPv6 Feb 13 23:18:54.067804 kernel: In-situ OAM (IOAM) with IPv6 Feb 13 23:18:54.067817 kernel: NET: Registered PF_PACKET protocol family Feb 13 23:18:54.067829 kernel: Key type dns_resolver registered Feb 13 23:18:54.067850 kernel: IPI shorthand broadcast: enabled Feb 13 23:18:54.067863 kernel: sched_clock: Marking stable (1238003986, 221859868)->(1676946095, -217082241) Feb 13 23:18:54.067876 kernel: registered taskstats version 1 Feb 13 23:18:54.067889 kernel: Loading compiled-in X.509 certificates Feb 13 23:18:54.067902 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 0cc219a306b9e46e583adebba1820decbdc4307b' Feb 13 23:18:54.067914 kernel: Key type .fscrypt registered Feb 13 23:18:54.067927 kernel: Key type fscrypt-provisioning registered Feb 13 23:18:54.067940 kernel: ima: No TPM chip found, activating TPM-bypass! 
Feb 13 23:18:54.067953 kernel: ima: Allocated hash algorithm: sha1 Feb 13 23:18:54.067972 kernel: ima: No architecture policies found Feb 13 23:18:54.067984 kernel: clk: Disabling unused clocks Feb 13 23:18:54.067997 kernel: Freeing unused kernel image (initmem) memory: 42976K Feb 13 23:18:54.068010 kernel: Write protecting the kernel read-only data: 36864k Feb 13 23:18:54.068023 kernel: Freeing unused kernel image (rodata/data gap) memory: 1840K Feb 13 23:18:54.068035 kernel: Run /init as init process Feb 13 23:18:54.068048 kernel: with arguments: Feb 13 23:18:54.068061 kernel: /init Feb 13 23:18:54.068073 kernel: with environment: Feb 13 23:18:54.068091 kernel: HOME=/ Feb 13 23:18:54.068104 kernel: TERM=linux Feb 13 23:18:54.068116 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 13 23:18:54.068141 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 23:18:54.068160 systemd[1]: Detected virtualization kvm. Feb 13 23:18:54.068174 systemd[1]: Detected architecture x86-64. Feb 13 23:18:54.068188 systemd[1]: Running in initrd. Feb 13 23:18:54.068208 systemd[1]: No hostname configured, using default hostname. Feb 13 23:18:54.068221 systemd[1]: Hostname set to . Feb 13 23:18:54.068236 systemd[1]: Initializing machine ID from VM UUID. Feb 13 23:18:54.068249 systemd[1]: Queued start job for default target initrd.target. Feb 13 23:18:54.068263 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 23:18:54.068277 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Feb 13 23:18:54.068292 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Feb 13 23:18:54.068306 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 23:18:54.068325 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Feb 13 23:18:54.068364 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Feb 13 23:18:54.068382 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Feb 13 23:18:54.068396 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Feb 13 23:18:54.068410 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 23:18:54.068424 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 23:18:54.068438 systemd[1]: Reached target paths.target - Path Units. Feb 13 23:18:54.068459 systemd[1]: Reached target slices.target - Slice Units. Feb 13 23:18:54.068473 systemd[1]: Reached target swap.target - Swaps. Feb 13 23:18:54.068487 systemd[1]: Reached target timers.target - Timer Units. Feb 13 23:18:54.068501 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 23:18:54.068515 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 23:18:54.068529 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Feb 13 23:18:54.068543 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Feb 13 23:18:54.068557 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 23:18:54.068571 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 23:18:54.068590 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Feb 13 23:18:54.068604 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 23:18:54.068618 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Feb 13 23:18:54.068632 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 23:18:54.068645 systemd[1]: Finished network-cleanup.service - Network Cleanup. Feb 13 23:18:54.068659 systemd[1]: Starting systemd-fsck-usr.service... Feb 13 23:18:54.068673 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 23:18:54.068692 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 23:18:54.068723 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 23:18:54.068783 systemd-journald[202]: Collecting audit messages is disabled. Feb 13 23:18:54.068816 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Feb 13 23:18:54.068830 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 23:18:54.068850 systemd[1]: Finished systemd-fsck-usr.service. Feb 13 23:18:54.068865 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 23:18:54.068881 systemd-journald[202]: Journal started Feb 13 23:18:54.068912 systemd-journald[202]: Runtime Journal (/run/log/journal/b336589ef4004984acd020ce8bfe63b9) is 4.7M, max 38.0M, 33.2M free. Feb 13 23:18:54.038792 systemd-modules-load[203]: Inserted module 'overlay' Feb 13 23:18:54.126057 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 13 23:18:54.126093 kernel: Bridge firewalling registered Feb 13 23:18:54.126112 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 23:18:54.083467 systemd-modules-load[203]: Inserted module 'br_netfilter' Feb 13 23:18:54.128609 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
Feb 13 23:18:54.129579 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 23:18:54.136672 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 23:18:54.148554 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 23:18:54.151544 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 23:18:54.152704 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 23:18:54.163603 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 23:18:54.167273 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 23:18:54.179142 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 23:18:54.180291 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 23:18:54.188552 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 23:18:54.190505 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 23:18:54.197670 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Feb 13 23:18:54.227111 systemd-resolved[234]: Positive Trust Anchors: Feb 13 23:18:54.228194 dracut-cmdline[236]: dracut-dracut-053 Feb 13 23:18:54.230400 dracut-cmdline[236]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=ed9b5d8ea73d2e47b8decea8124089e04dd398ef43013c1b1a5809314044b1c3 Feb 13 23:18:54.229059 systemd-resolved[234]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 23:18:54.229104 systemd-resolved[234]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 23:18:54.236662 systemd-resolved[234]: Defaulting to hostname 'linux'. Feb 13 23:18:54.241469 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 23:18:54.242547 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 23:18:54.324486 kernel: SCSI subsystem initialized Feb 13 23:18:54.336376 kernel: Loading iSCSI transport class v2.0-870. Feb 13 23:18:54.348381 kernel: iscsi: registered transport (tcp) Feb 13 23:18:54.373476 kernel: iscsi: registered transport (qla4xxx) Feb 13 23:18:54.373541 kernel: QLogic iSCSI HBA Driver Feb 13 23:18:54.426022 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Feb 13 23:18:54.434543 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Feb 13 23:18:54.467122 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Feb 13 23:18:54.467231 kernel: device-mapper: uevent: version 1.0.3 Feb 13 23:18:54.467253 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Feb 13 23:18:54.515424 kernel: raid6: sse2x4 gen() 14336 MB/s Feb 13 23:18:54.533424 kernel: raid6: sse2x2 gen() 9632 MB/s Feb 13 23:18:54.551976 kernel: raid6: sse2x1 gen() 10379 MB/s Feb 13 23:18:54.552048 kernel: raid6: using algorithm sse2x4 gen() 14336 MB/s Feb 13 23:18:54.571138 kernel: raid6: .... xor() 7959 MB/s, rmw enabled Feb 13 23:18:54.571207 kernel: raid6: using ssse3x2 recovery algorithm Feb 13 23:18:54.596378 kernel: xor: automatically using best checksumming function avx Feb 13 23:18:54.808398 kernel: Btrfs loaded, zoned=no, fsverity=no Feb 13 23:18:54.822642 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Feb 13 23:18:54.829590 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 23:18:54.853885 systemd-udevd[419]: Using default interface naming scheme 'v255'. Feb 13 23:18:54.860689 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 23:18:54.870554 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Feb 13 23:18:54.890983 dracut-pre-trigger[425]: rd.md=0: removing MD RAID activation Feb 13 23:18:54.935041 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 23:18:54.941515 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 23:18:55.044569 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 23:18:55.054548 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Feb 13 23:18:55.086971 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Feb 13 23:18:55.095877 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Feb 13 23:18:55.098089 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 23:18:55.100012 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 23:18:55.108637 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Feb 13 23:18:55.145903 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Feb 13 23:18:55.190387 kernel: virtio_blk virtio1: 2/0/0 default/read/poll queues Feb 13 23:18:55.322513 kernel: cryptd: max_cpu_qlen set to 1000 Feb 13 23:18:55.322544 kernel: AVX version of gcm_enc/dec engaged. Feb 13 23:18:55.322564 kernel: AES CTR mode by8 optimization enabled Feb 13 23:18:55.322592 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Feb 13 23:18:55.322797 kernel: ACPI: bus type USB registered Feb 13 23:18:55.322818 kernel: usbcore: registered new interface driver usbfs Feb 13 23:18:55.322836 kernel: usbcore: registered new interface driver hub Feb 13 23:18:55.322854 kernel: usbcore: registered new device driver usb Feb 13 23:18:55.322872 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Feb 13 23:18:55.322889 kernel: GPT:17805311 != 125829119 Feb 13 23:18:55.322906 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 13 23:18:55.322923 kernel: GPT:17805311 != 125829119 Feb 13 23:18:55.322947 kernel: GPT: Use GNU Parted to correct GPT errors. 
Feb 13 23:18:55.322965 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 23:18:55.322983 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Feb 13 23:18:55.323185 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1 Feb 13 23:18:55.323429 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Feb 13 23:18:55.323637 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Feb 13 23:18:55.323842 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2 Feb 13 23:18:55.324030 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed Feb 13 23:18:55.324228 kernel: hub 1-0:1.0: USB hub found Feb 13 23:18:55.327542 kernel: hub 1-0:1.0: 4 ports detected Feb 13 23:18:55.327776 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Feb 13 23:18:55.328043 kernel: hub 2-0:1.0: USB hub found Feb 13 23:18:55.328280 kernel: hub 2-0:1.0: 4 ports detected Feb 13 23:18:55.329861 kernel: libata version 3.00 loaded. Feb 13 23:18:55.233097 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Feb 13 23:18:55.444998 kernel: BTRFS: device fsid e9c87d9f-3864-4b45-9be4-80a5397f1fc6 devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (466) Feb 13 23:18:55.445044 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (468) Feb 13 23:18:55.445071 kernel: ahci 0000:00:1f.2: version 3.0 Feb 13 23:18:55.445399 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Feb 13 23:18:55.445436 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Feb 13 23:18:55.445639 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Feb 13 23:18:55.445885 kernel: scsi host0: ahci Feb 13 23:18:55.446106 kernel: scsi host1: ahci Feb 13 23:18:55.446296 kernel: scsi host2: ahci Feb 13 23:18:55.446508 kernel: scsi host3: ahci Feb 13 23:18:55.446706 kernel: scsi host4: ahci Feb 13 23:18:55.446904 kernel: scsi host5: ahci Feb 13 23:18:55.447101 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 41 Feb 13 23:18:55.447123 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 41 Feb 13 23:18:55.447141 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 41 Feb 13 23:18:55.447159 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 41 Feb 13 23:18:55.447177 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 41 Feb 13 23:18:55.447203 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 41 Feb 13 23:18:55.233327 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 23:18:55.241688 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 23:18:55.245725 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 23:18:55.245923 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 23:18:55.247692 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... 
Feb 13 23:18:55.255618 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 23:18:55.390972 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Feb 13 23:18:55.444697 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 23:18:55.451493 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Feb 13 23:18:55.457680 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Feb 13 23:18:55.458510 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Feb 13 23:18:55.466705 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Feb 13 23:18:55.481568 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Feb 13 23:18:55.485511 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 23:18:55.490052 disk-uuid[563]: Primary Header is updated. Feb 13 23:18:55.490052 disk-uuid[563]: Secondary Entries is updated. Feb 13 23:18:55.490052 disk-uuid[563]: Secondary Header is updated. Feb 13 23:18:55.494134 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 23:18:55.515244 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Feb 13 23:18:55.544372 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Feb 13 23:18:55.685386 kernel: hid: raw HID events driver (C) Jiri Kosina Feb 13 23:18:55.704813 kernel: ata6: SATA link down (SStatus 0 SControl 300) Feb 13 23:18:55.704890 kernel: ata2: SATA link down (SStatus 0 SControl 300) Feb 13 23:18:55.707379 kernel: ata1: SATA link down (SStatus 0 SControl 300) Feb 13 23:18:55.709556 kernel: ata3: SATA link down (SStatus 0 SControl 300) Feb 13 23:18:55.709594 kernel: ata5: SATA link down (SStatus 0 SControl 300) Feb 13 23:18:55.712035 kernel: ata4: SATA link down (SStatus 0 SControl 300) Feb 13 23:18:55.723669 kernel: usbcore: registered new interface driver usbhid Feb 13 23:18:55.723716 kernel: usbhid: USB HID core driver Feb 13 23:18:55.730917 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input3 Feb 13 23:18:55.730966 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0 Feb 13 23:18:56.506402 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 23:18:56.507581 disk-uuid[565]: The operation has completed successfully. Feb 13 23:18:56.566430 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 13 23:18:56.566602 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Feb 13 23:18:56.579567 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Feb 13 23:18:56.587079 sh[587]: Success Feb 13 23:18:56.604373 kernel: device-mapper: verity: sha256 using implementation "sha256-avx" Feb 13 23:18:56.671535 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Feb 13 23:18:56.674500 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Feb 13 23:18:56.676139 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Feb 13 23:18:56.697368 kernel: BTRFS info (device dm-0): first mount of filesystem e9c87d9f-3864-4b45-9be4-80a5397f1fc6 Feb 13 23:18:56.701904 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Feb 13 23:18:56.701947 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Feb 13 23:18:56.701969 kernel: BTRFS info (device dm-0): disabling log replay at mount time Feb 13 23:18:56.704902 kernel: BTRFS info (device dm-0): using free space tree Feb 13 23:18:56.714312 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Feb 13 23:18:56.715811 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Feb 13 23:18:56.722540 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Feb 13 23:18:56.725409 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Feb 13 23:18:56.744240 kernel: BTRFS info (device vda6): first mount of filesystem 84d576e4-038f-4c76-aa8e-6cfd81e812ea Feb 13 23:18:56.744317 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 23:18:56.744363 kernel: BTRFS info (device vda6): using free space tree Feb 13 23:18:56.767406 kernel: BTRFS info (device vda6): auto enabling async discard Feb 13 23:18:56.782256 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 13 23:18:56.786390 kernel: BTRFS info (device vda6): last unmount of filesystem 84d576e4-038f-4c76-aa8e-6cfd81e812ea Feb 13 23:18:56.794256 systemd[1]: Finished ignition-setup.service - Ignition (setup). Feb 13 23:18:56.801626 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Feb 13 23:18:56.947863 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 23:18:56.959646 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Feb 13 23:18:56.995469 systemd-networkd[775]: lo: Link UP Feb 13 23:18:56.996433 systemd-networkd[775]: lo: Gained carrier Feb 13 23:18:56.999835 ignition[687]: Ignition 2.20.0 Feb 13 23:18:56.999855 ignition[687]: Stage: fetch-offline Feb 13 23:18:56.999919 ignition[687]: no configs at "/usr/lib/ignition/base.d" Feb 13 23:18:57.001391 systemd-networkd[775]: Enumeration completed Feb 13 23:18:56.999938 ignition[687]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 13 23:18:57.001565 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 23:18:57.000107 ignition[687]: parsed url from cmdline: "" Feb 13 23:18:57.002180 systemd-networkd[775]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 23:18:57.000115 ignition[687]: no config URL provided Feb 13 23:18:57.002186 systemd-networkd[775]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 23:18:57.000124 ignition[687]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 23:18:57.002888 systemd[1]: Reached target network.target - Network. Feb 13 23:18:57.000140 ignition[687]: no config at "/usr/lib/ignition/user.ign" Feb 13 23:18:57.004679 systemd-networkd[775]: eth0: Link UP Feb 13 23:18:57.000149 ignition[687]: failed to fetch config: resource requires networking Feb 13 23:18:57.004685 systemd-networkd[775]: eth0: Gained carrier Feb 13 23:18:57.003602 ignition[687]: Ignition finished successfully Feb 13 23:18:57.004695 systemd-networkd[775]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 23:18:57.006881 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 23:18:57.016589 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Feb 13 23:18:57.020438 systemd-networkd[775]: eth0: DHCPv4 address 10.230.54.94/30, gateway 10.230.54.93 acquired from 10.230.54.93
Feb 13 23:18:57.049781 ignition[779]: Ignition 2.20.0
Feb 13 23:18:57.049802 ignition[779]: Stage: fetch
Feb 13 23:18:57.050063 ignition[779]: no configs at "/usr/lib/ignition/base.d"
Feb 13 23:18:57.050084 ignition[779]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Feb 13 23:18:57.050233 ignition[779]: parsed url from cmdline: ""
Feb 13 23:18:57.050240 ignition[779]: no config URL provided
Feb 13 23:18:57.050250 ignition[779]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 23:18:57.050265 ignition[779]: no config at "/usr/lib/ignition/user.ign"
Feb 13 23:18:57.050434 ignition[779]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1
Feb 13 23:18:57.050575 ignition[779]: config drive ("/dev/disk/by-label/config-2") not found. Waiting...
Feb 13 23:18:57.050637 ignition[779]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting...
Feb 13 23:18:57.066525 ignition[779]: GET result: OK
Feb 13 23:18:57.067440 ignition[779]: parsing config with SHA512: ae95f0b1e9aeb8d89f61daa3610166c23eef908bb1159baf368215f09a9e7db527a369650974c33de24173ff56accb89fb98a8cf887ed4f8f5cdab32c0810f6d
Feb 13 23:18:57.071113 unknown[779]: fetched base config from "system"
Feb 13 23:18:57.071131 unknown[779]: fetched base config from "system"
Feb 13 23:18:57.071141 unknown[779]: fetched user config from "openstack"
Feb 13 23:18:57.071513 ignition[779]: fetch: fetch complete
Feb 13 23:18:57.071522 ignition[779]: fetch: fetch passed
Feb 13 23:18:57.071590 ignition[779]: Ignition finished successfully
Feb 13 23:18:57.074434 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Feb 13 23:18:57.081566 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Feb 13 23:18:57.102985 ignition[787]: Ignition 2.20.0
Feb 13 23:18:57.103006 ignition[787]: Stage: kargs
Feb 13 23:18:57.103231 ignition[787]: no configs at "/usr/lib/ignition/base.d"
Feb 13 23:18:57.103252 ignition[787]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Feb 13 23:18:57.104162 ignition[787]: kargs: kargs passed
Feb 13 23:18:57.104233 ignition[787]: Ignition finished successfully
Feb 13 23:18:57.107452 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Feb 13 23:18:57.119698 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Feb 13 23:18:57.135679 ignition[793]: Ignition 2.20.0
Feb 13 23:18:57.135700 ignition[793]: Stage: disks
Feb 13 23:18:57.135939 ignition[793]: no configs at "/usr/lib/ignition/base.d"
Feb 13 23:18:57.135959 ignition[793]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Feb 13 23:18:57.136917 ignition[793]: disks: disks passed
Feb 13 23:18:57.136998 ignition[793]: Ignition finished successfully
Feb 13 23:18:57.138143 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 13 23:18:57.139804 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 13 23:18:57.140808 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 23:18:57.142217 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 23:18:57.143514 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 23:18:57.145080 systemd[1]: Reached target basic.target - Basic System.
Feb 13 23:18:57.160611 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Feb 13 23:18:57.182704 systemd-fsck[802]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Feb 13 23:18:57.185801 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Feb 13 23:18:57.192478 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 13 23:18:57.315392 kernel: EXT4-fs (vda9): mounted filesystem c5993b0e-9201-4b44-aa01-79dc9d6c9fc9 r/w with ordered data mode. Quota mode: none.
Feb 13 23:18:57.316871 systemd[1]: Mounted sysroot.mount - /sysroot.
Feb 13 23:18:57.318934 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Feb 13 23:18:57.329517 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 23:18:57.332109 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Feb 13 23:18:57.333258 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Feb 13 23:18:57.337576 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent...
Feb 13 23:18:57.339323 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 13 23:18:57.339394 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 23:18:57.340463 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (810)
Feb 13 23:18:57.353545 kernel: BTRFS info (device vda6): first mount of filesystem 84d576e4-038f-4c76-aa8e-6cfd81e812ea
Feb 13 23:18:57.353579 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 23:18:57.353599 kernel: BTRFS info (device vda6): using free space tree
Feb 13 23:18:57.353631 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 23:18:57.358060 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 23:18:57.359052 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 13 23:18:57.366566 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Feb 13 23:18:57.429108 initrd-setup-root[837]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 23:18:57.437440 initrd-setup-root[845]: cut: /sysroot/etc/group: No such file or directory
Feb 13 23:18:57.445531 initrd-setup-root[852]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 23:18:57.452657 initrd-setup-root[859]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 23:18:57.552276 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 13 23:18:57.557498 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 13 23:18:57.559513 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Feb 13 23:18:57.574387 kernel: BTRFS info (device vda6): last unmount of filesystem 84d576e4-038f-4c76-aa8e-6cfd81e812ea
Feb 13 23:18:57.595456 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 13 23:18:57.611394 ignition[928]: INFO : Ignition 2.20.0
Feb 13 23:18:57.611394 ignition[928]: INFO : Stage: mount
Feb 13 23:18:57.613886 ignition[928]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 23:18:57.613886 ignition[928]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Feb 13 23:18:57.613886 ignition[928]: INFO : mount: mount passed
Feb 13 23:18:57.613886 ignition[928]: INFO : Ignition finished successfully
Feb 13 23:18:57.614740 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Feb 13 23:18:57.696164 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Feb 13 23:18:58.400651 systemd-networkd[775]: eth0: Gained IPv6LL
Feb 13 23:18:59.906122 systemd-networkd[775]: eth0: Ignoring DHCPv6 address 2a02:1348:179:8d97:24:19ff:fee6:365e/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:8d97:24:19ff:fee6:365e/64 assigned by NDisc.
Feb 13 23:18:59.906148 systemd-networkd[775]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no.
Feb 13 23:19:04.509057 coreos-metadata[812]: Feb 13 23:19:04.508 WARN failed to locate config-drive, using the metadata service API instead
Feb 13 23:19:04.530650 coreos-metadata[812]: Feb 13 23:19:04.530 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Feb 13 23:19:04.543887 coreos-metadata[812]: Feb 13 23:19:04.543 INFO Fetch successful
Feb 13 23:19:04.545758 coreos-metadata[812]: Feb 13 23:19:04.545 INFO wrote hostname srv-41mqz.gb1.brightbox.com to /sysroot/etc/hostname
Feb 13 23:19:04.551719 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully.
Feb 13 23:19:04.552077 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent.
Feb 13 23:19:04.565567 systemd[1]: Starting ignition-files.service - Ignition (files)...
Feb 13 23:19:04.595664 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 23:19:04.607386 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (943)
Feb 13 23:19:04.614220 kernel: BTRFS info (device vda6): first mount of filesystem 84d576e4-038f-4c76-aa8e-6cfd81e812ea
Feb 13 23:19:04.614327 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 23:19:04.614417 kernel: BTRFS info (device vda6): using free space tree
Feb 13 23:19:04.618394 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 23:19:04.621682 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 23:19:04.673979 ignition[961]: INFO : Ignition 2.20.0
Feb 13 23:19:04.673979 ignition[961]: INFO : Stage: files
Feb 13 23:19:04.676136 ignition[961]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 23:19:04.676136 ignition[961]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Feb 13 23:19:04.676136 ignition[961]: DEBUG : files: compiled without relabeling support, skipping
Feb 13 23:19:04.679468 ignition[961]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 13 23:19:04.679468 ignition[961]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 13 23:19:04.681864 ignition[961]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 13 23:19:04.681864 ignition[961]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 13 23:19:04.681912 unknown[961]: wrote ssh authorized keys file for user: core
Feb 13 23:19:04.683929 ignition[961]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 13 23:19:04.683929 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Feb 13 23:19:04.683929 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Feb 13 23:19:04.683929 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Feb 13 23:19:04.689830 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Feb 13 23:19:04.689830 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 23:19:04.689830 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 23:19:04.689830 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Feb 13 23:19:04.689830 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Feb 13 23:19:04.689830 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Feb 13 23:19:04.689830 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
Feb 13 23:19:05.305168 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK
Feb 13 23:19:08.014705 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Feb 13 23:19:08.014705 ignition[961]: INFO : files: op(8): [started] processing unit "containerd.service"
Feb 13 23:19:08.017892 ignition[961]: INFO : files: op(8): op(9): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Feb 13 23:19:08.017892 ignition[961]: INFO : files: op(8): op(9): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Feb 13 23:19:08.017892 ignition[961]: INFO : files: op(8): [finished] processing unit "containerd.service"
Feb 13 23:19:08.019911 systemd[1]: Finished ignition-files.service - Ignition (files).
Feb 13 23:19:08.024410 ignition[961]: INFO : files: createResultFile: createFiles: op(a): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 23:19:08.024410 ignition[961]: INFO : files: createResultFile: createFiles: op(a): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 23:19:08.024410 ignition[961]: INFO : files: files passed
Feb 13 23:19:08.024410 ignition[961]: INFO : Ignition finished successfully
Feb 13 23:19:08.033661 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Feb 13 23:19:08.038558 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Feb 13 23:19:08.040935 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 13 23:19:08.041085 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Feb 13 23:19:08.056245 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 23:19:08.056245 initrd-setup-root-after-ignition[989]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 23:19:08.060177 initrd-setup-root-after-ignition[993]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 23:19:08.062816 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 23:19:08.064224 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Feb 13 23:19:08.071557 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Feb 13 23:19:08.101764 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 13 23:19:08.101955 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Feb 13 23:19:08.103882 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Feb 13 23:19:08.105107 systemd[1]: Reached target initrd.target - Initrd Default Target.
Feb 13 23:19:08.106640 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Feb 13 23:19:08.112551 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Feb 13 23:19:08.134682 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 23:19:08.142567 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Feb 13 23:19:08.156964 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Feb 13 23:19:08.157944 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 23:19:08.159676 systemd[1]: Stopped target timers.target - Timer Units.
Feb 13 23:19:08.161130 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 13 23:19:08.161325 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 23:19:08.163099 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Feb 13 23:19:08.164087 systemd[1]: Stopped target basic.target - Basic System.
Feb 13 23:19:08.165596 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Feb 13 23:19:08.166962 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 23:19:08.168373 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Feb 13 23:19:08.169807 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Feb 13 23:19:08.171328 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 23:19:08.172869 systemd[1]: Stopped target sysinit.target - System Initialization.
Feb 13 23:19:08.174457 systemd[1]: Stopped target local-fs.target - Local File Systems.
Feb 13 23:19:08.175892 systemd[1]: Stopped target swap.target - Swaps.
Feb 13 23:19:08.177191 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 13 23:19:08.177484 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 23:19:08.179097 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Feb 13 23:19:08.180106 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 23:19:08.182360 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Feb 13 23:19:08.182564 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 23:19:08.183965 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 13 23:19:08.184170 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Feb 13 23:19:08.186115 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 13 23:19:08.186318 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 23:19:08.188206 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 13 23:19:08.188433 systemd[1]: Stopped ignition-files.service - Ignition (files).
Feb 13 23:19:08.195708 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Feb 13 23:19:08.199493 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Feb 13 23:19:08.202092 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 13 23:19:08.204487 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 23:19:08.207715 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 13 23:19:08.207928 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 23:19:08.216257 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 13 23:19:08.217225 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Feb 13 23:19:08.229976 ignition[1013]: INFO : Ignition 2.20.0
Feb 13 23:19:08.231881 ignition[1013]: INFO : Stage: umount
Feb 13 23:19:08.231881 ignition[1013]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 23:19:08.231881 ignition[1013]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Feb 13 23:19:08.237853 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 13 23:19:08.238416 ignition[1013]: INFO : umount: umount passed
Feb 13 23:19:08.238416 ignition[1013]: INFO : Ignition finished successfully
Feb 13 23:19:08.239796 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 13 23:19:08.239945 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Feb 13 23:19:08.242825 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 13 23:19:08.242957 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Feb 13 23:19:08.244939 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 13 23:19:08.245005 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Feb 13 23:19:08.245770 systemd[1]: ignition-fetch.service: Deactivated successfully.
Feb 13 23:19:08.245832 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Feb 13 23:19:08.247331 systemd[1]: Stopped target network.target - Network.
Feb 13 23:19:08.247987 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 13 23:19:08.248094 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 23:19:08.248874 systemd[1]: Stopped target paths.target - Path Units.
Feb 13 23:19:08.249511 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 13 23:19:08.259325 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 23:19:08.260230 systemd[1]: Stopped target slices.target - Slice Units.
Feb 13 23:19:08.260934 systemd[1]: Stopped target sockets.target - Socket Units.
Feb 13 23:19:08.263380 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 13 23:19:08.263463 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 23:19:08.264181 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 13 23:19:08.264238 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 23:19:08.265023 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 13 23:19:08.265114 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Feb 13 23:19:08.274373 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Feb 13 23:19:08.274459 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Feb 13 23:19:08.275483 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Feb 13 23:19:08.276280 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Feb 13 23:19:08.280418 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 13 23:19:08.280594 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Feb 13 23:19:08.287636 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 13 23:19:08.287820 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Feb 13 23:19:08.297699 systemd-networkd[775]: eth0: DHCPv6 lease lost
Feb 13 23:19:08.297820 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 13 23:19:08.297997 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Feb 13 23:19:08.301045 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Feb 13 23:19:08.301120 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 23:19:08.302391 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 13 23:19:08.302657 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Feb 13 23:19:08.306427 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 13 23:19:08.306504 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 23:19:08.315535 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Feb 13 23:19:08.316906 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 13 23:19:08.316994 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 23:19:08.321198 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 13 23:19:08.321283 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Feb 13 23:19:08.323787 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 13 23:19:08.323894 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Feb 13 23:19:08.324810 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 23:19:08.340757 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 13 23:19:08.341015 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 23:19:08.344075 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 13 23:19:08.345198 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Feb 13 23:19:08.347670 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 13 23:19:08.347804 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Feb 13 23:19:08.348789 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 13 23:19:08.348863 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 23:19:08.349600 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 13 23:19:08.349699 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 23:19:08.350643 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 13 23:19:08.350722 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Feb 13 23:19:08.351546 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 23:19:08.351633 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 23:19:08.375725 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Feb 13 23:19:08.376574 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 13 23:19:08.376679 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 23:19:08.377505 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Feb 13 23:19:08.377570 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 23:19:08.379248 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 13 23:19:08.379392 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 23:19:08.386274 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 23:19:08.386387 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 23:19:08.391767 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 13 23:19:08.391931 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Feb 13 23:19:08.398722 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Feb 13 23:19:08.409231 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Feb 13 23:19:08.422873 systemd[1]: Switching root.
Feb 13 23:19:08.463154 systemd-journald[202]: Journal stopped
Feb 13 23:19:10.059144 systemd-journald[202]: Received SIGTERM from PID 1 (systemd).
Feb 13 23:19:10.059236 kernel: SELinux: policy capability network_peer_controls=1
Feb 13 23:19:10.059268 kernel: SELinux: policy capability open_perms=1
Feb 13 23:19:10.059287 kernel: SELinux: policy capability extended_socket_class=1
Feb 13 23:19:10.059332 kernel: SELinux: policy capability always_check_network=0
Feb 13 23:19:10.059373 kernel: SELinux: policy capability cgroup_seclabel=1
Feb 13 23:19:10.059403 kernel: SELinux: policy capability nnp_nosuid_transition=1
Feb 13 23:19:10.059422 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Feb 13 23:19:10.059440 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Feb 13 23:19:10.059469 kernel: audit: type=1403 audit(1739488748.855:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 13 23:19:10.059496 systemd[1]: Successfully loaded SELinux policy in 54.159ms.
Feb 13 23:19:10.059538 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 21.229ms.
Feb 13 23:19:10.059562 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 23:19:10.059612 systemd[1]: Detected virtualization kvm.
Feb 13 23:19:10.059634 systemd[1]: Detected architecture x86-64.
Feb 13 23:19:10.059654 systemd[1]: Detected first boot.
Feb 13 23:19:10.059673 systemd[1]: Hostname set to .
Feb 13 23:19:10.059729 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 23:19:10.059756 zram_generator::config[1073]: No configuration found.
Feb 13 23:19:10.059788 systemd[1]: Populated /etc with preset unit settings.
Feb 13 23:19:10.059809 systemd[1]: Queued start job for default target multi-user.target.
Feb 13 23:19:10.059849 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Feb 13 23:19:10.059880 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Feb 13 23:19:10.059901 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Feb 13 23:19:10.059921 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Feb 13 23:19:10.059940 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Feb 13 23:19:10.059959 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Feb 13 23:19:10.059979 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Feb 13 23:19:10.059998 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Feb 13 23:19:10.060032 systemd[1]: Created slice user.slice - User and Session Slice.
Feb 13 23:19:10.060054 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 23:19:10.060073 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 23:19:10.060093 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Feb 13 23:19:10.060112 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Feb 13 23:19:10.060133 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Feb 13 23:19:10.060154 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 23:19:10.060172 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Feb 13 23:19:10.060192 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 23:19:10.060226 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Feb 13 23:19:10.060256 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 23:19:10.060277 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 23:19:10.060305 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 23:19:10.060333 systemd[1]: Reached target swap.target - Swaps.
Feb 13 23:19:10.061642 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Feb 13 23:19:10.061680 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Feb 13 23:19:10.061718 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 23:19:10.061740 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Feb 13 23:19:10.061760 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 23:19:10.061779 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 23:19:10.061800 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 23:19:10.061819 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Feb 13 23:19:10.061848 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Feb 13 23:19:10.061869 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Feb 13 23:19:10.061889 systemd[1]: Mounting media.mount - External Media Directory...
Feb 13 23:19:10.061923 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 23:19:10.061945 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Feb 13 23:19:10.061965 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Feb 13 23:19:10.061983 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Feb 13 23:19:10.062003 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Feb 13 23:19:10.062023 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 23:19:10.062043 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 23:19:10.062104 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Feb 13 23:19:10.062171 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 23:19:10.062208 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 23:19:10.062229 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 23:19:10.062248 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Feb 13 23:19:10.062267 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 23:19:10.062287 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb 13 23:19:10.062331 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Feb 13 23:19:10.062370 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Feb 13 23:19:10.062391 kernel: fuse: init (API version 7.39)
Feb 13 23:19:10.062410 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 23:19:10.062430 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 23:19:10.062450 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Feb 13 23:19:10.062469 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Feb 13 23:19:10.062487 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 23:19:10.062508 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 23:19:10.062542 kernel: loop: module loaded
Feb 13 23:19:10.062563 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Feb 13 23:19:10.062583 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Feb 13 23:19:10.062602 systemd[1]: Mounted media.mount - External Media Directory.
Feb 13 23:19:10.062621 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Feb 13 23:19:10.062641 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Feb 13 23:19:10.062660 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Feb 13 23:19:10.062679 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Feb 13 23:19:10.062712 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 23:19:10.062761 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 13 23:19:10.062795 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Feb 13 23:19:10.062815 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 23:19:10.062835 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 23:19:10.062868 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 23:19:10.062889 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 23:19:10.062909 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 13 23:19:10.062928 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Feb 13 23:19:10.062946 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 23:19:10.062965 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 23:19:10.062985 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 23:19:10.063039 systemd-journald[1184]: Collecting audit messages is disabled.
Feb 13 23:19:10.063087 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Feb 13 23:19:10.063109 kernel: ACPI: bus type drm_connector registered
Feb 13 23:19:10.063127 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Feb 13 23:19:10.063148 systemd-journald[1184]: Journal started
Feb 13 23:19:10.063193 systemd-journald[1184]: Runtime Journal (/run/log/journal/b336589ef4004984acd020ce8bfe63b9) is 4.7M, max 38.0M, 33.2M free.
Feb 13 23:19:10.069401 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 23:19:10.071759 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 23:19:10.075629 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 23:19:10.091361 systemd[1]: Reached target network-pre.target - Preparation for Network.
Feb 13 23:19:10.099482 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Feb 13 23:19:10.111529 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Feb 13 23:19:10.112445 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb 13 23:19:10.122522 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Feb 13 23:19:10.143681 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Feb 13 23:19:10.144726 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 23:19:10.153531 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Feb 13 23:19:10.156473 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 23:19:10.159523 systemd-journald[1184]: Time spent on flushing to /var/log/journal/b336589ef4004984acd020ce8bfe63b9 is 165.773ms for 1107 entries.
Feb 13 23:19:10.159523 systemd-journald[1184]: System Journal (/var/log/journal/b336589ef4004984acd020ce8bfe63b9) is 8.0M, max 584.8M, 576.8M free.
Feb 13 23:19:10.370629 systemd-journald[1184]: Received client request to flush runtime journal.
Feb 13 23:19:10.167589 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 23:19:10.178250 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 23:19:10.195024 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Feb 13 23:19:10.210496 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Feb 13 23:19:10.212932 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Feb 13 23:19:10.218690 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Feb 13 23:19:10.363950 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 23:19:10.376636 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Feb 13 23:19:10.386466 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Feb 13 23:19:10.405940 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 23:19:10.413855 udevadm[1238]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Feb 13 23:19:10.423492 systemd-tmpfiles[1229]: ACLs are not supported, ignoring.
Feb 13 23:19:10.423957 systemd-tmpfiles[1229]: ACLs are not supported, ignoring.
Feb 13 23:19:10.433710 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 23:19:10.444555 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Feb 13 23:19:10.478456 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Feb 13 23:19:10.486734 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 23:19:10.509284 systemd-tmpfiles[1252]: ACLs are not supported, ignoring.
Feb 13 23:19:10.509855 systemd-tmpfiles[1252]: ACLs are not supported, ignoring.
Feb 13 23:19:10.517036 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 23:19:11.138162 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Feb 13 23:19:11.146602 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 23:19:11.182514 systemd-udevd[1258]: Using default interface naming scheme 'v255'.
Feb 13 23:19:11.210783 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 23:19:11.222562 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 23:19:11.247756 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Feb 13 23:19:11.273410 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0.
Feb 13 23:19:11.406625 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Feb 13 23:19:11.459383 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1273)
Feb 13 23:19:11.499978 systemd-networkd[1266]: lo: Link UP
Feb 13 23:19:11.499992 systemd-networkd[1266]: lo: Gained carrier
Feb 13 23:19:11.502719 systemd-networkd[1266]: Enumeration completed
Feb 13 23:19:11.503246 systemd-networkd[1266]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 23:19:11.503252 systemd-networkd[1266]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 23:19:11.505822 systemd-networkd[1266]: eth0: Link UP
Feb 13 23:19:11.505835 systemd-networkd[1266]: eth0: Gained carrier
Feb 13 23:19:11.505853 systemd-networkd[1266]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 23:19:11.508620 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 23:19:11.522444 systemd-networkd[1266]: eth0: DHCPv4 address 10.230.54.94/30, gateway 10.230.54.93 acquired from 10.230.54.93
Feb 13 23:19:11.544701 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Feb 13 23:19:11.579367 kernel: mousedev: PS/2 mouse device common for all mice
Feb 13 23:19:11.583364 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4
Feb 13 23:19:11.592434 kernel: ACPI: button: Power Button [PWRF]
Feb 13 23:19:11.613957 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Feb 13 23:19:11.637601 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Feb 13 23:19:11.644530 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Feb 13 23:19:11.644820 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Feb 13 23:19:11.649380 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input5
Feb 13 23:19:11.726785 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 23:19:11.878898 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Feb 13 23:19:11.891573 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Feb 13 23:19:11.959137 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 23:19:11.974143 lvm[1296]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 23:19:12.008890 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Feb 13 23:19:12.010809 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 23:19:12.017548 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Feb 13 23:19:12.025331 lvm[1301]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 23:19:12.057799 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Feb 13 23:19:12.059522 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 23:19:12.060454 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 13 23:19:12.060640 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 23:19:12.061468 systemd[1]: Reached target machines.target - Containers.
Feb 13 23:19:12.064394 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Feb 13 23:19:12.071556 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Feb 13 23:19:12.075386 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Feb 13 23:19:12.077581 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 23:19:12.080590 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Feb 13 23:19:12.086578 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Feb 13 23:19:12.093544 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Feb 13 23:19:12.096920 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Feb 13 23:19:12.126436 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Feb 13 23:19:12.129806 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Feb 13 23:19:12.137312 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Feb 13 23:19:12.147565 kernel: loop0: detected capacity change from 0 to 210664
Feb 13 23:19:12.174030 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Feb 13 23:19:12.214388 kernel: loop1: detected capacity change from 0 to 138184
Feb 13 23:19:12.255594 kernel: loop2: detected capacity change from 0 to 140992
Feb 13 23:19:12.303584 kernel: loop3: detected capacity change from 0 to 8
Feb 13 23:19:12.398406 kernel: loop4: detected capacity change from 0 to 210664
Feb 13 23:19:12.422385 kernel: loop5: detected capacity change from 0 to 138184
Feb 13 23:19:12.440363 kernel: loop6: detected capacity change from 0 to 140992
Feb 13 23:19:12.461366 kernel: loop7: detected capacity change from 0 to 8
Feb 13 23:19:12.462745 (sd-merge)[1323]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'.
Feb 13 23:19:12.463716 (sd-merge)[1323]: Merged extensions into '/usr'.
Feb 13 23:19:12.471973 systemd[1]: Reloading requested from client PID 1309 ('systemd-sysext') (unit systemd-sysext.service)...
Feb 13 23:19:12.472894 systemd[1]: Reloading...
Feb 13 23:19:12.608422 zram_generator::config[1349]: No configuration found.
Feb 13 23:19:12.832631 ldconfig[1305]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Feb 13 23:19:12.839802 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 23:19:12.865068 systemd-networkd[1266]: eth0: Gained IPv6LL
Feb 13 23:19:12.924879 systemd[1]: Reloading finished in 450 ms.
Feb 13 23:19:12.950308 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Feb 13 23:19:12.951996 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Feb 13 23:19:12.953201 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Feb 13 23:19:12.964680 systemd[1]: Starting ensure-sysext.service...
Feb 13 23:19:12.967533 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 23:19:12.977152 systemd[1]: Reloading requested from client PID 1416 ('systemctl') (unit ensure-sysext.service)...
Feb 13 23:19:12.977177 systemd[1]: Reloading...
Feb 13 23:19:13.018075 systemd-tmpfiles[1417]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Feb 13 23:19:13.019585 systemd-tmpfiles[1417]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Feb 13 23:19:13.021567 systemd-tmpfiles[1417]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Feb 13 23:19:13.022189 systemd-tmpfiles[1417]: ACLs are not supported, ignoring.
Feb 13 23:19:13.022319 systemd-tmpfiles[1417]: ACLs are not supported, ignoring.
Feb 13 23:19:13.029748 systemd-tmpfiles[1417]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 23:19:13.029956 systemd-tmpfiles[1417]: Skipping /boot
Feb 13 23:19:13.060936 systemd-tmpfiles[1417]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 23:19:13.061375 systemd-tmpfiles[1417]: Skipping /boot
Feb 13 23:19:13.134412 zram_generator::config[1443]: No configuration found.
Feb 13 23:19:13.307919 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 23:19:13.390172 systemd[1]: Reloading finished in 412 ms.
Feb 13 23:19:13.421204 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 23:19:13.428852 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Feb 13 23:19:13.432580 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Feb 13 23:19:13.440522 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Feb 13 23:19:13.451510 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 23:19:13.465957 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Feb 13 23:19:13.479363 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 23:19:13.479649 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 23:19:13.487754 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 23:19:13.494782 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 23:19:13.503446 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 23:19:13.504552 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 23:19:13.505131 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 23:19:13.514096 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 23:19:13.515721 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 23:19:13.521552 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 23:19:13.521720 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 23:19:13.526625 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 23:19:13.527995 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 23:19:13.534247 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 23:19:13.541657 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 23:19:13.544259 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Feb 13 23:19:13.555110 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 23:19:13.559035 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 23:19:13.583594 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 23:19:13.583957 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 23:19:13.600671 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 23:19:13.615226 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 23:19:13.619679 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 23:19:13.631673 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 23:19:13.632699 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 23:19:13.633365 augenrules[1548]: No rules
Feb 13 23:19:13.655548 systemd-resolved[1517]: Positive Trust Anchors:
Feb 13 23:19:13.656386 systemd-resolved[1517]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 23:19:13.656431 systemd-resolved[1517]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 23:19:13.658156 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Feb 13 23:19:13.658972 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 23:19:13.662794 systemd[1]: audit-rules.service: Deactivated successfully.
Feb 13 23:19:13.663137 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Feb 13 23:19:13.672400 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Feb 13 23:19:13.674682 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Feb 13 23:19:13.675074 systemd-resolved[1517]: Using system hostname 'srv-41mqz.gb1.brightbox.com'.
Feb 13 23:19:13.676761 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 23:19:13.677000 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 23:19:13.679033 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 23:19:13.679572 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 23:19:13.680969 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 23:19:13.681562 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 23:19:13.682779 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 23:19:13.684559 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 23:19:13.684989 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 23:19:13.689532 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Feb 13 23:19:13.694229 systemd[1]: Finished ensure-sysext.service.
Feb 13 23:19:13.701792 systemd[1]: Reached target network.target - Network.
Feb 13 23:19:13.702665 systemd[1]: Reached target network-online.target - Network is Online.
Feb 13 23:19:13.703540 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 23:19:13.704572 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 23:19:13.704767 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 23:19:13.716543 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Feb 13 23:19:13.717496 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Feb 13 23:19:13.784865 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Feb 13 23:19:13.787266 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 23:19:13.788122 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Feb 13 23:19:13.788936 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Feb 13 23:19:13.789738 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Feb 13 23:19:13.790544 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Feb 13 23:19:13.790595 systemd[1]: Reached target paths.target - Path Units.
Feb 13 23:19:13.791267 systemd[1]: Reached target time-set.target - System Time Set.
Feb 13 23:19:13.792246 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Feb 13 23:19:13.793108 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Feb 13 23:19:13.793885 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 23:19:13.796322 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Feb 13 23:19:13.799660 systemd[1]: Starting docker.socket - Docker Socket for the API...
Feb 13 23:19:13.803526 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Feb 13 23:19:13.806760 systemd-networkd[1266]: eth0: Ignoring DHCPv6 address 2a02:1348:179:8d97:24:19ff:fee6:365e/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:8d97:24:19ff:fee6:365e/64 assigned by NDisc.
Feb 13 23:19:13.806774 systemd-networkd[1266]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no.
Feb 13 23:19:13.807251 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Feb 13 23:19:13.807992 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 23:19:13.808707 systemd[1]: Reached target basic.target - Basic System.
Feb 13 23:19:13.809736 systemd[1]: System is tainted: cgroupsv1
Feb 13 23:19:13.809801 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Feb 13 23:19:13.809848 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Feb 13 23:19:13.816487 systemd[1]: Starting containerd.service - containerd container runtime...
Feb 13 23:19:13.821539 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Feb 13 23:19:13.827540 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Feb 13 23:19:13.832066 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Feb 13 23:19:13.847394 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Feb 13 23:19:13.848976 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Feb 13 23:19:13.856392 jq[1582]: false
Feb 13 23:19:13.865490 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 23:19:13.878189 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Feb 13 23:19:13.879715 extend-filesystems[1585]: Found loop4
Feb 13 23:19:13.886514 extend-filesystems[1585]: Found loop5
Feb 13 23:19:13.886514 extend-filesystems[1585]: Found loop6
Feb 13 23:19:13.886514 extend-filesystems[1585]: Found loop7
Feb 13 23:19:13.886514 extend-filesystems[1585]: Found vda
Feb 13 23:19:13.886514 extend-filesystems[1585]: Found vda1
Feb 13 23:19:13.886514 extend-filesystems[1585]: Found vda2
Feb 13 23:19:13.886514 extend-filesystems[1585]: Found vda3
Feb 13 23:19:13.886514 extend-filesystems[1585]: Found usr
Feb 13 23:19:13.886514 extend-filesystems[1585]: Found vda4
Feb 13 23:19:13.886514 extend-filesystems[1585]: Found vda6
Feb 13 23:19:13.886514 extend-filesystems[1585]: Found vda7
Feb 13 23:19:13.886514 extend-filesystems[1585]: Found vda9
Feb 13 23:19:13.886514 extend-filesystems[1585]: Checking size of /dev/vda9
Feb 13 23:19:13.900762 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Feb 13 23:19:13.909692 extend-filesystems[1585]: Resized partition /dev/vda9
Feb 13 23:19:13.917737 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks
Feb 13 23:19:13.899079 dbus-daemon[1581]: [system] SELinux support is enabled
Feb 13 23:19:13.918832 extend-filesystems[1596]: resize2fs 1.47.1 (20-May-2024)
Feb 13 23:19:13.909599 dbus-daemon[1581]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1266 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Feb 13 23:19:13.918534 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Feb 13 23:19:13.932040 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Feb 13 23:19:13.955551 systemd[1]: Starting systemd-logind.service - User Login Management...
Feb 13 23:19:13.961029 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Feb 13 23:19:13.972521 systemd[1]: Starting update-engine.service - Update Engine...
Feb 13 23:19:13.990376 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1270)
Feb 13 23:19:13.996525 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Feb 13 23:19:13.999746 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Feb 13 23:19:14.007931 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Feb 13 23:19:14.009616 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Feb 13 23:19:14.012824 systemd[1]: motdgen.service: Deactivated successfully.
Feb 13 23:19:14.013179 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Feb 13 23:19:14.031430 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Feb 13 23:19:14.035484 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Feb 13 23:19:14.035814 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Feb 13 23:19:14.048679 jq[1616]: true
Feb 13 23:19:14.073284 update_engine[1612]: I20250213 23:19:14.071630 1612 main.cc:92] Flatcar Update Engine starting
Feb 13 23:19:14.074140 (ntainerd)[1622]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Feb 13 23:19:14.088387 update_engine[1612]: I20250213 23:19:14.083595 1612 update_check_scheduler.cc:74] Next update check in 11m7s
Feb 13 23:19:14.087327 dbus-daemon[1581]: [system] Successfully activated service 'org.freedesktop.systemd1'
Feb 13 23:19:14.097511 systemd[1]: Started update-engine.service - Update Engine.
Feb 13 23:19:14.101504 jq[1627]: true
Feb 13 23:19:14.114459 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Feb 13 23:19:14.114508 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Feb 13 23:19:14.123558 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Feb 13 23:19:14.125962 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Feb 13 23:19:14.126004 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Feb 13 23:19:14.130616 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Feb 13 23:19:15.305552 systemd-resolved[1517]: Clock change detected. Flushing caches.
Feb 13 23:19:15.306415 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Feb 13 23:19:15.311994 systemd-timesyncd[1574]: Contacted time server 131.111.8.60:123 (0.flatcar.pool.ntp.org).
Feb 13 23:19:15.312077 systemd-timesyncd[1574]: Initial clock synchronization to Thu 2025-02-13 23:19:15.303416 UTC.
Feb 13 23:19:15.453691 kernel: EXT4-fs (vda9): resized filesystem to 15121403
Feb 13 23:19:15.501012 extend-filesystems[1596]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Feb 13 23:19:15.501012 extend-filesystems[1596]: old_desc_blocks = 1, new_desc_blocks = 8
Feb 13 23:19:15.501012 extend-filesystems[1596]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long.
Feb 13 23:19:15.527214 extend-filesystems[1585]: Resized filesystem in /dev/vda9
Feb 13 23:19:15.526280 systemd[1]: extend-filesystems.service: Deactivated successfully.
Feb 13 23:19:15.528916 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Feb 13 23:19:15.552766 bash[1652]: Updated "/home/core/.ssh/authorized_keys"
Feb 13 23:19:15.557358 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Feb 13 23:19:15.583900 systemd[1]: Starting sshkeys.service...
Feb 13 23:19:15.621088 dbus-daemon[1581]: [system] Successfully activated service 'org.freedesktop.hostname1'
Feb 13 23:19:15.623315 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Feb 13 23:19:15.630889 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Feb 13 23:19:15.635725 dbus-daemon[1581]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1637 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Feb 13 23:19:15.640052 systemd-logind[1609]: Watching system buttons on /dev/input/event2 (Power Button)
Feb 13 23:19:15.645719 systemd-logind[1609]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Feb 13 23:19:15.655799 systemd-logind[1609]: New seat seat0.
Feb 13 23:19:15.660195 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Feb 13 23:19:15.663505 systemd[1]: Started systemd-logind.service - User Login Management.
Feb 13 23:19:15.684826 systemd[1]: Starting polkit.service - Authorization Manager...
Feb 13 23:19:15.753223 polkitd[1660]: Started polkitd version 121
Feb 13 23:19:15.775796 polkitd[1660]: Loading rules from directory /etc/polkit-1/rules.d
Feb 13 23:19:15.775931 polkitd[1660]: Loading rules from directory /usr/share/polkit-1/rules.d
Feb 13 23:19:15.780928 polkitd[1660]: Finished loading, compiling and executing 2 rules
Feb 13 23:19:15.783466 dbus-daemon[1581]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Feb 13 23:19:15.783805 systemd[1]: Started polkit.service - Authorization Manager.
Feb 13 23:19:15.786081 polkitd[1660]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Feb 13 23:19:15.809636 locksmithd[1638]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Feb 13 23:19:15.863143 systemd-hostnamed[1637]: Hostname set to (static)
Feb 13 23:19:16.064117 containerd[1622]: time="2025-02-13T23:19:16.062198148Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Feb 13 23:19:16.126294 containerd[1622]: time="2025-02-13T23:19:16.126194359Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Feb 13 23:19:16.131702 containerd[1622]: time="2025-02-13T23:19:16.129884922Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Feb 13 23:19:16.131702 containerd[1622]: time="2025-02-13T23:19:16.129926910Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Feb 13 23:19:16.131702 containerd[1622]: time="2025-02-13T23:19:16.129953171Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Feb 13 23:19:16.131702 containerd[1622]: time="2025-02-13T23:19:16.130331529Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Feb 13 23:19:16.131702 containerd[1622]: time="2025-02-13T23:19:16.130364706Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Feb 13 23:19:16.131702 containerd[1622]: time="2025-02-13T23:19:16.130516431Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 23:19:16.131702 containerd[1622]: time="2025-02-13T23:19:16.130541032Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Feb 13 23:19:16.131702 containerd[1622]: time="2025-02-13T23:19:16.130879634Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 23:19:16.131702 containerd[1622]: time="2025-02-13T23:19:16.130904343Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Feb 13 23:19:16.131702 containerd[1622]: time="2025-02-13T23:19:16.130924872Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 23:19:16.131702 containerd[1622]: time="2025-02-13T23:19:16.130941052Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Feb 13 23:19:16.132229 containerd[1622]: time="2025-02-13T23:19:16.131118670Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Feb 13 23:19:16.132229 containerd[1622]: time="2025-02-13T23:19:16.131606963Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Feb 13 23:19:16.137234 containerd[1622]: time="2025-02-13T23:19:16.133734848Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 23:19:16.137234 containerd[1622]: time="2025-02-13T23:19:16.133769171Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Feb 13 23:19:16.137234 containerd[1622]: time="2025-02-13T23:19:16.133915127Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Feb 13 23:19:16.137234 containerd[1622]: time="2025-02-13T23:19:16.133999304Z" level=info msg="metadata content store policy set" policy=shared
Feb 13 23:19:16.151432 containerd[1622]: time="2025-02-13T23:19:16.150702548Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Feb 13 23:19:16.151432 containerd[1622]: time="2025-02-13T23:19:16.150795248Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Feb 13 23:19:16.151432 containerd[1622]: time="2025-02-13T23:19:16.150839091Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Feb 13 23:19:16.151432 containerd[1622]: time="2025-02-13T23:19:16.150864259Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Feb 13 23:19:16.151432 containerd[1622]: time="2025-02-13T23:19:16.150885926Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Feb 13 23:19:16.151432 containerd[1622]: time="2025-02-13T23:19:16.151132280Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Feb 13 23:19:16.151806 containerd[1622]: time="2025-02-13T23:19:16.151572542Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Feb 13 23:19:16.154867 containerd[1622]: time="2025-02-13T23:19:16.154755849Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Feb 13 23:19:16.154867 containerd[1622]: time="2025-02-13T23:19:16.154791063Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Feb 13 23:19:16.154867 containerd[1622]: time="2025-02-13T23:19:16.154813344Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Feb 13 23:19:16.154867 containerd[1622]: time="2025-02-13T23:19:16.154834787Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Feb 13 23:19:16.154867 containerd[1622]: time="2025-02-13T23:19:16.154854221Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Feb 13 23:19:16.155177 containerd[1622]: time="2025-02-13T23:19:16.154873285Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Feb 13 23:19:16.155177 containerd[1622]: time="2025-02-13T23:19:16.154893069Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Feb 13 23:19:16.155177 containerd[1622]: time="2025-02-13T23:19:16.154913323Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Feb 13 23:19:16.155177 containerd[1622]: time="2025-02-13T23:19:16.154940449Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Feb 13 23:19:16.155177 containerd[1622]: time="2025-02-13T23:19:16.154973877Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Feb 13 23:19:16.155177 containerd[1622]: time="2025-02-13T23:19:16.154994177Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Feb 13 23:19:16.155177 containerd[1622]: time="2025-02-13T23:19:16.155033341Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Feb 13 23:19:16.155177 containerd[1622]: time="2025-02-13T23:19:16.155055130Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Feb 13 23:19:16.155177 containerd[1622]: time="2025-02-13T23:19:16.155081890Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Feb 13 23:19:16.155177 containerd[1622]: time="2025-02-13T23:19:16.155109095Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Feb 13 23:19:16.155177 containerd[1622]: time="2025-02-13T23:19:16.155138269Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Feb 13 23:19:16.155177 containerd[1622]: time="2025-02-13T23:19:16.155159901Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Feb 13 23:19:16.155630 containerd[1622]: time="2025-02-13T23:19:16.155180730Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Feb 13 23:19:16.155630 containerd[1622]: time="2025-02-13T23:19:16.155202378Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Feb 13 23:19:16.155630 containerd[1622]: time="2025-02-13T23:19:16.155222099Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Feb 13 23:19:16.155630 containerd[1622]: time="2025-02-13T23:19:16.155243490Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Feb 13 23:19:16.155630 containerd[1622]: time="2025-02-13T23:19:16.155262335Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Feb 13 23:19:16.155630 containerd[1622]: time="2025-02-13T23:19:16.155280844Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Feb 13 23:19:16.155630 containerd[1622]: time="2025-02-13T23:19:16.155299534Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Feb 13 23:19:16.155630 containerd[1622]: time="2025-02-13T23:19:16.155320695Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Feb 13 23:19:16.155630 containerd[1622]: time="2025-02-13T23:19:16.155349962Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Feb 13 23:19:16.155630 containerd[1622]: time="2025-02-13T23:19:16.155375748Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Feb 13 23:19:16.155630 containerd[1622]: time="2025-02-13T23:19:16.155393261Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Feb 13 23:19:16.155630 containerd[1622]: time="2025-02-13T23:19:16.155454947Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Feb 13 23:19:16.155630 containerd[1622]: time="2025-02-13T23:19:16.155497818Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Feb 13 23:19:16.155630 containerd[1622]: time="2025-02-13T23:19:16.155521380Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Feb 13 23:19:16.156073 containerd[1622]: time="2025-02-13T23:19:16.155540848Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Feb 13 23:19:16.156073 containerd[1622]: time="2025-02-13T23:19:16.155556805Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Feb 13 23:19:16.156073 containerd[1622]: time="2025-02-13T23:19:16.155575900Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Feb 13 23:19:16.156073 containerd[1622]: time="2025-02-13T23:19:16.155597456Z" level=info msg="NRI interface is disabled by configuration."
Feb 13 23:19:16.156073 containerd[1622]: time="2025-02-13T23:19:16.155615577Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Feb 13 23:19:16.158704 containerd[1622]: time="2025-02-13T23:19:16.158146632Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Feb 13 23:19:16.158704 containerd[1622]: time="2025-02-13T23:19:16.158225146Z" level=info msg="Connect containerd service"
Feb 13 23:19:16.158704 containerd[1622]: time="2025-02-13T23:19:16.158265666Z" level=info msg="using legacy CRI server"
Feb 13 23:19:16.158704 containerd[1622]: time="2025-02-13T23:19:16.158280714Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Feb 13 23:19:16.158704 containerd[1622]: time="2025-02-13T23:19:16.158439962Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Feb 13 23:19:16.162102 containerd[1622]: time="2025-02-13T23:19:16.161944807Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 13 23:19:16.163008 containerd[1622]: time="2025-02-13T23:19:16.162218489Z" level=info msg="Start subscribing containerd event"
Feb 13 23:19:16.163008 containerd[1622]: time="2025-02-13T23:19:16.162303676Z" level=info msg="Start recovering state"
Feb 13 23:19:16.163008 containerd[1622]: time="2025-02-13T23:19:16.162414196Z" level=info msg="Start event monitor"
Feb 13 23:19:16.163008 containerd[1622]: time="2025-02-13T23:19:16.162444699Z" level=info msg="Start snapshots syncer"
Feb 13 23:19:16.163008 containerd[1622]: time="2025-02-13T23:19:16.162472734Z" level=info msg="Start cni network conf syncer for default"
Feb 13 23:19:16.163008 containerd[1622]: time="2025-02-13T23:19:16.162497141Z" level=info msg="Start streaming server"
Feb 13 23:19:16.174261 containerd[1622]: time="2025-02-13T23:19:16.164945862Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Feb 13 23:19:16.174261 containerd[1622]: time="2025-02-13T23:19:16.165039535Z" level=info msg=serving... address=/run/containerd/containerd.sock
Feb 13 23:19:16.174261 containerd[1622]: time="2025-02-13T23:19:16.165130799Z" level=info msg="containerd successfully booted in 0.108185s"
Feb 13 23:19:16.167102 systemd[1]: Started containerd.service - containerd container runtime.
Feb 13 23:19:16.516587 sshd_keygen[1620]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Feb 13 23:19:16.550043 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Feb 13 23:19:16.566411 systemd[1]: Starting issuegen.service - Generate /run/issue...
Feb 13 23:19:16.605756 systemd[1]: issuegen.service: Deactivated successfully.
Feb 13 23:19:16.606210 systemd[1]: Finished issuegen.service - Generate /run/issue.
Feb 13 23:19:16.620822 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Feb 13 23:19:16.648395 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Feb 13 23:19:16.661264 systemd[1]: Started getty@tty1.service - Getty on tty1.
Feb 13 23:19:16.673223 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Feb 13 23:19:16.674423 systemd[1]: Reached target getty.target - Login Prompts.
Feb 13 23:19:17.071872 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 23:19:17.076823 (kubelet)[1722]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 23:19:17.795012 kubelet[1722]: E0213 23:19:17.794426 1722 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 23:19:17.798018 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 23:19:17.798577 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 23:19:21.769991 login[1712]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying
Feb 13 23:19:21.770518 login[1711]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Feb 13 23:19:21.794500 systemd-logind[1609]: New session 1 of user core.
Feb 13 23:19:21.797709 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Feb 13 23:19:21.813249 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Feb 13 23:19:21.833903 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Feb 13 23:19:21.843254 systemd[1]: Starting user@500.service - User Manager for UID 500...
Feb 13 23:19:21.860735 (systemd)[1741]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Feb 13 23:19:21.993837 systemd[1741]: Queued start job for default target default.target.
Feb 13 23:19:21.994421 systemd[1741]: Created slice app.slice - User Application Slice.
Feb 13 23:19:21.994453 systemd[1741]: Reached target paths.target - Paths.
Feb 13 23:19:21.994475 systemd[1741]: Reached target timers.target - Timers.
Feb 13 23:19:22.003829 systemd[1741]: Starting dbus.socket - D-Bus User Message Bus Socket...
Feb 13 23:19:22.012774 systemd[1741]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Feb 13 23:19:22.012857 systemd[1741]: Reached target sockets.target - Sockets.
Feb 13 23:19:22.012881 systemd[1741]: Reached target basic.target - Basic System.
Feb 13 23:19:22.012972 systemd[1741]: Reached target default.target - Main User Target.
Feb 13 23:19:22.013045 systemd[1741]: Startup finished in 143ms.
Feb 13 23:19:22.013911 systemd[1]: Started user@500.service - User Manager for UID 500.
Feb 13 23:19:22.025395 systemd[1]: Started session-1.scope - Session 1 of User core.
Feb 13 23:19:22.142309 coreos-metadata[1579]: Feb 13 23:19:22.142 WARN failed to locate config-drive, using the metadata service API instead
Feb 13 23:19:22.170136 coreos-metadata[1579]: Feb 13 23:19:22.170 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1
Feb 13 23:19:22.176789 coreos-metadata[1579]: Feb 13 23:19:22.176 INFO Fetch failed with 404: resource not found
Feb 13 23:19:22.176898 coreos-metadata[1579]: Feb 13 23:19:22.176 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Feb 13 23:19:22.177594 coreos-metadata[1579]: Feb 13 23:19:22.177 INFO Fetch successful
Feb 13 23:19:22.177787 coreos-metadata[1579]: Feb 13 23:19:22.177 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1
Feb 13 23:19:22.189696 coreos-metadata[1579]: Feb 13 23:19:22.189 INFO Fetch successful
Feb 13 23:19:22.189827 coreos-metadata[1579]: Feb 13 23:19:22.189 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1
Feb 13 23:19:22.203086 coreos-metadata[1579]: Feb 13 23:19:22.203 INFO Fetch successful
Feb 13 23:19:22.203178 coreos-metadata[1579]: Feb 13 23:19:22.203 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1
Feb 13 23:19:22.216358 coreos-metadata[1579]: Feb 13 23:19:22.216 INFO Fetch successful
Feb 13 23:19:22.216449 coreos-metadata[1579]: Feb 13 23:19:22.216 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1
Feb 13 23:19:22.231789 coreos-metadata[1579]: Feb 13 23:19:22.231 INFO Fetch successful
Feb 13 23:19:22.267077 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Feb 13 23:19:22.269628 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Feb 13 23:19:22.772554 login[1712]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Feb 13 23:19:22.781745 systemd-logind[1609]: New session 2 of user core.
Feb 13 23:19:22.788115 systemd[1]: Started session-2.scope - Session 2 of User core.
Feb 13 23:19:23.012450 coreos-metadata[1659]: Feb 13 23:19:23.012 WARN failed to locate config-drive, using the metadata service API instead
Feb 13 23:19:23.034451 coreos-metadata[1659]: Feb 13 23:19:23.034 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1
Feb 13 23:19:23.056607 coreos-metadata[1659]: Feb 13 23:19:23.056 INFO Fetch successful
Feb 13 23:19:23.056882 coreos-metadata[1659]: Feb 13 23:19:23.056 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1
Feb 13 23:19:23.085774 coreos-metadata[1659]: Feb 13 23:19:23.085 INFO Fetch successful
Feb 13 23:19:23.093143 unknown[1659]: wrote ssh authorized keys file for user: core
Feb 13 23:19:23.116674 update-ssh-keys[1784]: Updated "/home/core/.ssh/authorized_keys"
Feb 13 23:19:23.120012 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Feb 13 23:19:23.125597 systemd[1]: Finished sshkeys.service.
Feb 13 23:19:23.130727 systemd[1]: Reached target multi-user.target - Multi-User System.
Feb 13 23:19:23.131241 systemd[1]: Startup finished in 16.507s (kernel) + 13.165s (userspace) = 29.673s.
Feb 13 23:19:24.685357 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Feb 13 23:19:24.699001 systemd[1]: Started sshd@0-10.230.54.94:22-147.75.109.163:59604.service - OpenSSH per-connection server daemon (147.75.109.163:59604).
Feb 13 23:19:25.601777 sshd[1792]: Accepted publickey for core from 147.75.109.163 port 59604 ssh2: RSA SHA256:Yx7fWtREze/vjbfbVXgsOsi8+bAvCeghI7ZLGsIJS+I
Feb 13 23:19:25.603792 sshd-session[1792]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 23:19:25.610082 systemd-logind[1609]: New session 3 of user core.
Feb 13 23:19:25.618055 systemd[1]: Started session-3.scope - Session 3 of User core.
Feb 13 23:19:26.371100 systemd[1]: Started sshd@1-10.230.54.94:22-147.75.109.163:59612.service - OpenSSH per-connection server daemon (147.75.109.163:59612).
Feb 13 23:19:27.263920 sshd[1797]: Accepted publickey for core from 147.75.109.163 port 59612 ssh2: RSA SHA256:Yx7fWtREze/vjbfbVXgsOsi8+bAvCeghI7ZLGsIJS+I
Feb 13 23:19:27.265772 sshd-session[1797]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 23:19:27.271939 systemd-logind[1609]: New session 4 of user core.
Feb 13 23:19:27.281091 systemd[1]: Started session-4.scope - Session 4 of User core.
Feb 13 23:19:27.885769 sshd[1800]: Connection closed by 147.75.109.163 port 59612
Feb 13 23:19:27.886725 sshd-session[1797]: pam_unix(sshd:session): session closed for user core
Feb 13 23:19:27.891320 systemd[1]: sshd@1-10.230.54.94:22-147.75.109.163:59612.service: Deactivated successfully.
Feb 13 23:19:27.895207 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Feb 13 23:19:27.896494 systemd[1]: session-4.scope: Deactivated successfully.
Feb 13 23:19:27.897678 systemd-logind[1609]: Session 4 logged out. Waiting for processes to exit.
Feb 13 23:19:27.903890 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 23:19:27.905994 systemd-logind[1609]: Removed session 4.
Feb 13 23:19:28.042466 systemd[1]: Started sshd@2-10.230.54.94:22-147.75.109.163:59628.service - OpenSSH per-connection server daemon (147.75.109.163:59628).
Feb 13 23:19:28.222861 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 23:19:28.234279 (kubelet)[1819]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 23:19:28.356901 kubelet[1819]: E0213 23:19:28.356581 1819 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 23:19:28.361901 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 23:19:28.362415 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 23:19:28.932933 sshd[1809]: Accepted publickey for core from 147.75.109.163 port 59628 ssh2: RSA SHA256:Yx7fWtREze/vjbfbVXgsOsi8+bAvCeghI7ZLGsIJS+I
Feb 13 23:19:28.935051 sshd-session[1809]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 23:19:28.942161 systemd-logind[1609]: New session 5 of user core.
Feb 13 23:19:28.949128 systemd[1]: Started session-5.scope - Session 5 of User core.
Feb 13 23:19:29.548344 sshd[1829]: Connection closed by 147.75.109.163 port 59628
Feb 13 23:19:29.549443 sshd-session[1809]: pam_unix(sshd:session): session closed for user core
Feb 13 23:19:29.553434 systemd-logind[1609]: Session 5 logged out. Waiting for processes to exit.
Feb 13 23:19:29.554244 systemd[1]: sshd@2-10.230.54.94:22-147.75.109.163:59628.service: Deactivated successfully.
Feb 13 23:19:29.558229 systemd[1]: session-5.scope: Deactivated successfully.
Feb 13 23:19:29.559301 systemd-logind[1609]: Removed session 5.
Feb 13 23:19:29.698419 systemd[1]: Started sshd@3-10.230.54.94:22-147.75.109.163:42342.service - OpenSSH per-connection server daemon (147.75.109.163:42342).
Feb 13 23:19:30.596898 sshd[1834]: Accepted publickey for core from 147.75.109.163 port 42342 ssh2: RSA SHA256:Yx7fWtREze/vjbfbVXgsOsi8+bAvCeghI7ZLGsIJS+I
Feb 13 23:19:30.599586 sshd-session[1834]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 23:19:30.606498 systemd-logind[1609]: New session 6 of user core.
Feb 13 23:19:30.617381 systemd[1]: Started session-6.scope - Session 6 of User core.
Feb 13 23:19:31.217259 sshd[1837]: Connection closed by 147.75.109.163 port 42342
Feb 13 23:19:31.216490 sshd-session[1834]: pam_unix(sshd:session): session closed for user core
Feb 13 23:19:31.220257 systemd[1]: sshd@3-10.230.54.94:22-147.75.109.163:42342.service: Deactivated successfully.
Feb 13 23:19:31.224392 systemd-logind[1609]: Session 6 logged out. Waiting for processes to exit.
Feb 13 23:19:31.225312 systemd[1]: session-6.scope: Deactivated successfully.
Feb 13 23:19:31.226842 systemd-logind[1609]: Removed session 6.
Feb 13 23:19:31.371071 systemd[1]: Started sshd@4-10.230.54.94:22-147.75.109.163:42344.service - OpenSSH per-connection server daemon (147.75.109.163:42344).
Feb 13 23:19:32.255559 sshd[1842]: Accepted publickey for core from 147.75.109.163 port 42344 ssh2: RSA SHA256:Yx7fWtREze/vjbfbVXgsOsi8+bAvCeghI7ZLGsIJS+I
Feb 13 23:19:32.257552 sshd-session[1842]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 23:19:32.263641 systemd-logind[1609]: New session 7 of user core.
Feb 13 23:19:32.274344 systemd[1]: Started session-7.scope - Session 7 of User core.
Feb 13 23:19:32.751119 sudo[1846]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Feb 13 23:19:32.751579 sudo[1846]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 23:19:32.769237 sudo[1846]: pam_unix(sudo:session): session closed for user root
Feb 13 23:19:32.912732 sshd[1845]: Connection closed by 147.75.109.163 port 42344
Feb 13 23:19:32.913889 sshd-session[1842]: pam_unix(sshd:session): session closed for user core
Feb 13 23:19:32.919121 systemd[1]: sshd@4-10.230.54.94:22-147.75.109.163:42344.service: Deactivated successfully.
Feb 13 23:19:32.922737 systemd-logind[1609]: Session 7 logged out. Waiting for processes to exit.
Feb 13 23:19:32.923705 systemd[1]: session-7.scope: Deactivated successfully.
Feb 13 23:19:32.925926 systemd-logind[1609]: Removed session 7.
Feb 13 23:19:33.063993 systemd[1]: Started sshd@5-10.230.54.94:22-147.75.109.163:42346.service - OpenSSH per-connection server daemon (147.75.109.163:42346).
Feb 13 23:19:33.965356 sshd[1851]: Accepted publickey for core from 147.75.109.163 port 42346 ssh2: RSA SHA256:Yx7fWtREze/vjbfbVXgsOsi8+bAvCeghI7ZLGsIJS+I
Feb 13 23:19:33.967639 sshd-session[1851]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 23:19:33.974480 systemd-logind[1609]: New session 8 of user core.
Feb 13 23:19:33.986219 systemd[1]: Started session-8.scope - Session 8 of User core.
Feb 13 23:19:34.445347 sudo[1856]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 13 23:19:34.445777 sudo[1856]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 23:19:34.452518 sudo[1856]: pam_unix(sudo:session): session closed for user root Feb 13 23:19:34.460760 sudo[1855]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Feb 13 23:19:34.461241 sudo[1855]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 23:19:34.490151 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 23:19:34.532210 augenrules[1878]: No rules Feb 13 23:19:34.533782 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 23:19:34.534217 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 23:19:34.536455 sudo[1855]: pam_unix(sudo:session): session closed for user root Feb 13 23:19:34.681741 sshd[1854]: Connection closed by 147.75.109.163 port 42346 Feb 13 23:19:34.682847 sshd-session[1851]: pam_unix(sshd:session): session closed for user core Feb 13 23:19:34.687641 systemd[1]: sshd@5-10.230.54.94:22-147.75.109.163:42346.service: Deactivated successfully. Feb 13 23:19:34.691859 systemd[1]: session-8.scope: Deactivated successfully. Feb 13 23:19:34.693198 systemd-logind[1609]: Session 8 logged out. Waiting for processes to exit. Feb 13 23:19:34.695246 systemd-logind[1609]: Removed session 8. Feb 13 23:19:34.835422 systemd[1]: Started sshd@6-10.230.54.94:22-147.75.109.163:42362.service - OpenSSH per-connection server daemon (147.75.109.163:42362). 
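The sudo audit lines above ("core : PWD=/home/core ; USER=root ; COMMAND=...") also have a fixed field layout. A sketch of splitting one into its parts; the field names (`caller`, `target`, `cmd`) are my own labels, not sudo terminology.

```python
import re

# Assumed layout of the sudo audit lines in this log:
# "<caller> : PWD=<dir> ; USER=<target> ; COMMAND=<cmd>"
SUDO_RE = re.compile(
    r"(?P<caller>\S+) : PWD=(?P<pwd>\S+) ; USER=(?P<target>\S+) ; COMMAND=(?P<cmd>.+)"
)

def parse_sudo(line: str):
    m = SUDO_RE.search(line)
    return m.groupdict() if m else None

entry = parse_sudo(
    "core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules"
)
```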
Feb 13 23:19:35.726006 sshd[1887]: Accepted publickey for core from 147.75.109.163 port 42362 ssh2: RSA SHA256:Yx7fWtREze/vjbfbVXgsOsi8+bAvCeghI7ZLGsIJS+I Feb 13 23:19:35.728016 sshd-session[1887]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 23:19:35.736374 systemd-logind[1609]: New session 9 of user core. Feb 13 23:19:35.739061 systemd[1]: Started session-9.scope - Session 9 of User core. Feb 13 23:19:36.205318 sudo[1891]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 23:19:36.205832 sudo[1891]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 23:19:37.145808 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 23:19:37.160057 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 23:19:37.185560 systemd[1]: Reloading requested from client PID 1930 ('systemctl') (unit session-9.scope)... Feb 13 23:19:37.185589 systemd[1]: Reloading... Feb 13 23:19:37.319887 zram_generator::config[1969]: No configuration found. Feb 13 23:19:37.516205 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 23:19:37.614561 systemd[1]: Reloading finished in 428 ms. Feb 13 23:19:37.690976 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Feb 13 23:19:37.691124 systemd[1]: kubelet.service: Failed with result 'signal'. Feb 13 23:19:37.691686 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 23:19:37.706058 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 23:19:37.832918 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
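The daemon reload above reports "Reloading finished in 428 ms." A tiny sketch of extracting that duration, e.g. for tracking reload times across boots; the message shape is taken from this log and assumed stable.

```python
import re

def reload_millis(line: str) -> int:
    """Extract N from a systemd 'Reloading finished in N ms.' message."""
    m = re.search(r"Reloading finished in (\d+) ms", line)
    if m is None:
        raise ValueError("not a reload-finished line")
    return int(m.group(1))

ms = reload_millis("systemd[1]: Reloading finished in 428 ms.")
```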
Feb 13 23:19:37.850222 (kubelet)[2048]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 23:19:37.914783 kubelet[2048]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 23:19:37.915519 kubelet[2048]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 23:19:37.915519 kubelet[2048]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 23:19:37.917665 kubelet[2048]: I0213 23:19:37.916517 2048 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 23:19:38.446166 kubelet[2048]: I0213 23:19:38.446095 2048 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Feb 13 23:19:38.446166 kubelet[2048]: I0213 23:19:38.446145 2048 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 23:19:38.446448 kubelet[2048]: I0213 23:19:38.446407 2048 server.go:927] "Client rotation is on, will bootstrap in background" Feb 13 23:19:38.468064 kubelet[2048]: I0213 23:19:38.467796 2048 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 23:19:38.490172 kubelet[2048]: I0213 23:19:38.490099 2048 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 23:19:38.493197 kubelet[2048]: I0213 23:19:38.493107 2048 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 23:19:38.493448 kubelet[2048]: I0213 23:19:38.493174 2048 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.230.54.94","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 23:19:38.494203 kubelet[2048]: I0213 23:19:38.494140 2048 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 
23:19:38.494203 kubelet[2048]: I0213 23:19:38.494176 2048 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 23:19:38.494510 kubelet[2048]: I0213 23:19:38.494470 2048 state_mem.go:36] "Initialized new in-memory state store" Feb 13 23:19:38.495620 kubelet[2048]: I0213 23:19:38.495590 2048 kubelet.go:400] "Attempting to sync node with API server" Feb 13 23:19:38.495620 kubelet[2048]: I0213 23:19:38.495618 2048 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 23:19:38.496594 kubelet[2048]: I0213 23:19:38.495710 2048 kubelet.go:312] "Adding apiserver pod source" Feb 13 23:19:38.496594 kubelet[2048]: I0213 23:19:38.495779 2048 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 23:19:38.496594 kubelet[2048]: E0213 23:19:38.496196 2048 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 23:19:38.496594 kubelet[2048]: E0213 23:19:38.496304 2048 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 23:19:38.500499 kubelet[2048]: I0213 23:19:38.500475 2048 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 23:19:38.504512 kubelet[2048]: I0213 23:19:38.504481 2048 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 23:19:38.504669 kubelet[2048]: W0213 23:19:38.504630 2048 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
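The kubelet's nodeConfig dump above embeds its hard eviction thresholds as JSON (e.g. memory.available < 100Mi, nodefs.available < 10%). A hedged sketch of summarizing them, using a trimmed two-signal copy of that JSON; the summary format is my own, not kubelet output.

```python
import json

# Trimmed copy (2 of the 5 signals) of the HardEvictionThresholds shown above.
node_config = json.loads("""
{"HardEvictionThresholds": [
  {"Signal": "memory.available", "Operator": "LessThan",
   "Value": {"Quantity": "100Mi", "Percentage": 0}},
  {"Signal": "nodefs.available", "Operator": "LessThan",
   "Value": {"Quantity": null, "Percentage": 0.1}}
]}
""")

# Prefer the absolute quantity when set, otherwise render the percentage.
summary = {
    t["Signal"]: t["Value"]["Quantity"] or f'{t["Value"]["Percentage"]:.0%}'
    for t in node_config["HardEvictionThresholds"]
}
```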
Feb 13 23:19:38.505998 kubelet[2048]: I0213 23:19:38.505972 2048 server.go:1264] "Started kubelet" Feb 13 23:19:38.506713 kubelet[2048]: I0213 23:19:38.506674 2048 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 23:19:38.508388 kubelet[2048]: I0213 23:19:38.508365 2048 server.go:455] "Adding debug handlers to kubelet server" Feb 13 23:19:38.512084 kubelet[2048]: I0213 23:19:38.512006 2048 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 23:19:38.512440 kubelet[2048]: I0213 23:19:38.512413 2048 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 23:19:38.513721 kubelet[2048]: W0213 23:19:38.513220 2048 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 13 23:19:38.513721 kubelet[2048]: E0213 23:19:38.513305 2048 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 13 23:19:38.513721 kubelet[2048]: I0213 23:19:38.513337 2048 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 23:19:38.513721 kubelet[2048]: W0213 23:19:38.513455 2048 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "10.230.54.94" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 13 23:19:38.513721 kubelet[2048]: E0213 23:19:38.513482 2048 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.230.54.94" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 13 23:19:38.524628 
kubelet[2048]: I0213 23:19:38.524450 2048 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 23:19:38.525111 kubelet[2048]: I0213 23:19:38.525070 2048 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 23:19:38.526162 kubelet[2048]: I0213 23:19:38.526127 2048 reconciler.go:26] "Reconciler: start to sync state" Feb 13 23:19:38.536677 kubelet[2048]: I0213 23:19:38.535434 2048 factory.go:221] Registration of the systemd container factory successfully Feb 13 23:19:38.536677 kubelet[2048]: I0213 23:19:38.535575 2048 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 23:19:38.538868 kubelet[2048]: E0213 23:19:38.538826 2048 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.230.54.94\" not found" node="10.230.54.94" Feb 13 23:19:38.540551 kubelet[2048]: I0213 23:19:38.540523 2048 factory.go:221] Registration of the containerd container factory successfully Feb 13 23:19:38.542002 kubelet[2048]: E0213 23:19:38.541819 2048 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 23:19:38.580577 kubelet[2048]: I0213 23:19:38.580247 2048 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 23:19:38.580577 kubelet[2048]: I0213 23:19:38.580287 2048 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 23:19:38.580577 kubelet[2048]: I0213 23:19:38.580365 2048 state_mem.go:36] "Initialized new in-memory state store" Feb 13 23:19:38.585092 kubelet[2048]: I0213 23:19:38.585065 2048 policy_none.go:49] "None policy: Start" Feb 13 23:19:38.586598 kubelet[2048]: I0213 23:19:38.586119 2048 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 23:19:38.586598 kubelet[2048]: I0213 23:19:38.586154 2048 state_mem.go:35] "Initializing new in-memory state store" Feb 13 23:19:38.615255 kubelet[2048]: I0213 23:19:38.615111 2048 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 23:19:38.615498 kubelet[2048]: I0213 23:19:38.615413 2048 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 23:19:38.615751 kubelet[2048]: I0213 23:19:38.615727 2048 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 23:19:38.619886 kubelet[2048]: E0213 23:19:38.619841 2048 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.230.54.94\" not found" Feb 13 23:19:38.626661 kubelet[2048]: I0213 23:19:38.626346 2048 kubelet_node_status.go:73] "Attempting to register node" node="10.230.54.94" Feb 13 23:19:38.633992 kubelet[2048]: I0213 23:19:38.633821 2048 kubelet_node_status.go:76] "Successfully registered node" node="10.230.54.94" Feb 13 23:19:38.638187 kubelet[2048]: I0213 23:19:38.638114 2048 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Feb 13 23:19:38.639803 kubelet[2048]: I0213 23:19:38.639735 2048 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 23:19:38.639970 kubelet[2048]: I0213 23:19:38.639930 2048 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 23:19:38.640091 kubelet[2048]: I0213 23:19:38.640073 2048 kubelet.go:2337] "Starting kubelet main sync loop" Feb 13 23:19:38.640751 kubelet[2048]: E0213 23:19:38.640258 2048 kubelet.go:2361] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Feb 13 23:19:38.653448 kubelet[2048]: E0213 23:19:38.653403 2048 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.230.54.94\" not found" Feb 13 23:19:38.754493 kubelet[2048]: E0213 23:19:38.754388 2048 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.230.54.94\" not found" Feb 13 23:19:38.855581 kubelet[2048]: E0213 23:19:38.855504 2048 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.230.54.94\" not found" Feb 13 23:19:38.859326 sudo[1891]: pam_unix(sudo:session): session closed for user root Feb 13 23:19:38.956401 kubelet[2048]: E0213 23:19:38.956318 2048 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.230.54.94\" not found" Feb 13 23:19:39.004680 sshd[1890]: Connection closed by 147.75.109.163 port 42362 Feb 13 23:19:39.006189 sshd-session[1887]: pam_unix(sshd:session): session closed for user core Feb 13 23:19:39.012637 systemd[1]: sshd@6-10.230.54.94:22-147.75.109.163:42362.service: Deactivated successfully. Feb 13 23:19:39.018481 systemd-logind[1609]: Session 9 logged out. Waiting for processes to exit. Feb 13 23:19:39.019151 systemd[1]: session-9.scope: Deactivated successfully. Feb 13 23:19:39.022026 systemd-logind[1609]: Removed session 9. 
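The repeating "node \"10.230.54.94\" not found" errors above recur while the kubelet waits for its node registration to become visible. Measuring the gap between a few of the logged timestamps (taken verbatim from this section) shows the cadence is roughly 100 ms; this is just timestamp arithmetic, not a claim about the kubelet's internal retry policy.

```python
from datetime import datetime

# Timestamps copied from the consecutive "node not found" errors above.
stamps = ["23:19:38.653448", "23:19:38.754493", "23:19:38.855581"]
times = [datetime.strptime(s, "%H:%M:%S.%f") for s in stamps]
deltas = [(b - a).total_seconds() for a, b in zip(times, times[1:])]
```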
Feb 13 23:19:39.056606 kubelet[2048]: E0213 23:19:39.056546 2048 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.230.54.94\" not found" Feb 13 23:19:39.157140 kubelet[2048]: E0213 23:19:39.157072 2048 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.230.54.94\" not found" Feb 13 23:19:39.257923 kubelet[2048]: E0213 23:19:39.257722 2048 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.230.54.94\" not found" Feb 13 23:19:39.358463 kubelet[2048]: E0213 23:19:39.358401 2048 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.230.54.94\" not found" Feb 13 23:19:39.449514 kubelet[2048]: I0213 23:19:39.449434 2048 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 13 23:19:39.449840 kubelet[2048]: W0213 23:19:39.449769 2048 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Feb 13 23:19:39.449840 kubelet[2048]: W0213 23:19:39.449795 2048 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Feb 13 23:19:39.458720 kubelet[2048]: E0213 23:19:39.458679 2048 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.230.54.94\" not found" Feb 13 23:19:39.497204 kubelet[2048]: E0213 23:19:39.497129 2048 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 23:19:39.559708 kubelet[2048]: E0213 23:19:39.559526 2048 kubelet_node_status.go:462] "Error getting the current node from lister" err="node 
\"10.230.54.94\" not found" Feb 13 23:19:39.660177 kubelet[2048]: E0213 23:19:39.660108 2048 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.230.54.94\" not found" Feb 13 23:19:39.760907 kubelet[2048]: E0213 23:19:39.760805 2048 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.230.54.94\" not found" Feb 13 23:19:39.861702 kubelet[2048]: E0213 23:19:39.861489 2048 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.230.54.94\" not found" Feb 13 23:19:39.962555 kubelet[2048]: E0213 23:19:39.962426 2048 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.230.54.94\" not found" Feb 13 23:19:40.063851 kubelet[2048]: I0213 23:19:40.063788 2048 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Feb 13 23:19:40.064661 containerd[1622]: time="2025-02-13T23:19:40.064501466Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
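The runtime config update above pushes pod CIDR 192.168.1.0/24 to the container runtime. Plain subnet math on that range, as a sketch: excluding the network and broadcast addresses leaves 254 host addresses (how many a CNI plugin actually hands out to pods varies; that part is an assumption).

```python
import ipaddress

# The pod CIDR pushed through CRI in the log above.
cidr = ipaddress.ip_network("192.168.1.0/24")
total = cidr.num_addresses           # all addresses in the /24
usable = total - 2                   # minus network and broadcast
first, last = cidr[1], cidr[-2]      # first/last host addresses
```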
Feb 13 23:19:40.065916 kubelet[2048]: I0213 23:19:40.064953 2048 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Feb 13 23:19:40.497983 kubelet[2048]: I0213 23:19:40.497883 2048 apiserver.go:52] "Watching apiserver" Feb 13 23:19:40.498335 kubelet[2048]: E0213 23:19:40.498310 2048 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 23:19:40.503725 kubelet[2048]: I0213 23:19:40.503315 2048 topology_manager.go:215] "Topology Admit Handler" podUID="9d3e9a01-ab3c-4024-9604-8a1e8ac263f0" podNamespace="kube-system" podName="cilium-7scfr" Feb 13 23:19:40.503725 kubelet[2048]: I0213 23:19:40.503684 2048 topology_manager.go:215] "Topology Admit Handler" podUID="4dc3e788-7e8d-4d7d-9ee4-26a4d7f1b1cb" podNamespace="kube-system" podName="kube-proxy-9xhmf" Feb 13 23:19:40.527315 kubelet[2048]: I0213 23:19:40.527248 2048 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 23:19:40.539962 kubelet[2048]: I0213 23:19:40.539510 2048 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9d3e9a01-ab3c-4024-9604-8a1e8ac263f0-cni-path\") pod \"cilium-7scfr\" (UID: \"9d3e9a01-ab3c-4024-9604-8a1e8ac263f0\") " pod="kube-system/cilium-7scfr" Feb 13 23:19:40.539962 kubelet[2048]: I0213 23:19:40.539611 2048 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9d3e9a01-ab3c-4024-9604-8a1e8ac263f0-clustermesh-secrets\") pod \"cilium-7scfr\" (UID: \"9d3e9a01-ab3c-4024-9604-8a1e8ac263f0\") " pod="kube-system/cilium-7scfr" Feb 13 23:19:40.539962 kubelet[2048]: I0213 23:19:40.539684 2048 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/9d3e9a01-ab3c-4024-9604-8a1e8ac263f0-cilium-config-path\") pod \"cilium-7scfr\" (UID: \"9d3e9a01-ab3c-4024-9604-8a1e8ac263f0\") " pod="kube-system/cilium-7scfr" Feb 13 23:19:40.539962 kubelet[2048]: I0213 23:19:40.539715 2048 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4dc3e788-7e8d-4d7d-9ee4-26a4d7f1b1cb-kube-proxy\") pod \"kube-proxy-9xhmf\" (UID: \"4dc3e788-7e8d-4d7d-9ee4-26a4d7f1b1cb\") " pod="kube-system/kube-proxy-9xhmf" Feb 13 23:19:40.539962 kubelet[2048]: I0213 23:19:40.539741 2048 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9d3e9a01-ab3c-4024-9604-8a1e8ac263f0-xtables-lock\") pod \"cilium-7scfr\" (UID: \"9d3e9a01-ab3c-4024-9604-8a1e8ac263f0\") " pod="kube-system/cilium-7scfr" Feb 13 23:19:40.539962 kubelet[2048]: I0213 23:19:40.539768 2048 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9d3e9a01-ab3c-4024-9604-8a1e8ac263f0-host-proc-sys-kernel\") pod \"cilium-7scfr\" (UID: \"9d3e9a01-ab3c-4024-9604-8a1e8ac263f0\") " pod="kube-system/cilium-7scfr" Feb 13 23:19:40.540486 kubelet[2048]: I0213 23:19:40.539795 2048 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9d3e9a01-ab3c-4024-9604-8a1e8ac263f0-hubble-tls\") pod \"cilium-7scfr\" (UID: \"9d3e9a01-ab3c-4024-9604-8a1e8ac263f0\") " pod="kube-system/cilium-7scfr" Feb 13 23:19:40.540486 kubelet[2048]: I0213 23:19:40.539818 2048 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4dc3e788-7e8d-4d7d-9ee4-26a4d7f1b1cb-lib-modules\") pod \"kube-proxy-9xhmf\" (UID: 
\"4dc3e788-7e8d-4d7d-9ee4-26a4d7f1b1cb\") " pod="kube-system/kube-proxy-9xhmf" Feb 13 23:19:40.540486 kubelet[2048]: I0213 23:19:40.539844 2048 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9d3e9a01-ab3c-4024-9604-8a1e8ac263f0-cilium-run\") pod \"cilium-7scfr\" (UID: \"9d3e9a01-ab3c-4024-9604-8a1e8ac263f0\") " pod="kube-system/cilium-7scfr" Feb 13 23:19:40.540486 kubelet[2048]: I0213 23:19:40.539870 2048 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9d3e9a01-ab3c-4024-9604-8a1e8ac263f0-hostproc\") pod \"cilium-7scfr\" (UID: \"9d3e9a01-ab3c-4024-9604-8a1e8ac263f0\") " pod="kube-system/cilium-7scfr" Feb 13 23:19:40.540486 kubelet[2048]: I0213 23:19:40.539911 2048 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9d3e9a01-ab3c-4024-9604-8a1e8ac263f0-cilium-cgroup\") pod \"cilium-7scfr\" (UID: \"9d3e9a01-ab3c-4024-9604-8a1e8ac263f0\") " pod="kube-system/cilium-7scfr" Feb 13 23:19:40.540486 kubelet[2048]: I0213 23:19:40.539939 2048 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9d3e9a01-ab3c-4024-9604-8a1e8ac263f0-lib-modules\") pod \"cilium-7scfr\" (UID: \"9d3e9a01-ab3c-4024-9604-8a1e8ac263f0\") " pod="kube-system/cilium-7scfr" Feb 13 23:19:40.540784 kubelet[2048]: I0213 23:19:40.539963 2048 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfk5v\" (UniqueName: \"kubernetes.io/projected/9d3e9a01-ab3c-4024-9604-8a1e8ac263f0-kube-api-access-bfk5v\") pod \"cilium-7scfr\" (UID: \"9d3e9a01-ab3c-4024-9604-8a1e8ac263f0\") " pod="kube-system/cilium-7scfr" Feb 13 23:19:40.540784 kubelet[2048]: I0213 23:19:40.539990 
2048 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nphkh\" (UniqueName: \"kubernetes.io/projected/4dc3e788-7e8d-4d7d-9ee4-26a4d7f1b1cb-kube-api-access-nphkh\") pod \"kube-proxy-9xhmf\" (UID: \"4dc3e788-7e8d-4d7d-9ee4-26a4d7f1b1cb\") " pod="kube-system/kube-proxy-9xhmf" Feb 13 23:19:40.540784 kubelet[2048]: I0213 23:19:40.540021 2048 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9d3e9a01-ab3c-4024-9604-8a1e8ac263f0-bpf-maps\") pod \"cilium-7scfr\" (UID: \"9d3e9a01-ab3c-4024-9604-8a1e8ac263f0\") " pod="kube-system/cilium-7scfr" Feb 13 23:19:40.540784 kubelet[2048]: I0213 23:19:40.540046 2048 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9d3e9a01-ab3c-4024-9604-8a1e8ac263f0-etc-cni-netd\") pod \"cilium-7scfr\" (UID: \"9d3e9a01-ab3c-4024-9604-8a1e8ac263f0\") " pod="kube-system/cilium-7scfr" Feb 13 23:19:40.540784 kubelet[2048]: I0213 23:19:40.540076 2048 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9d3e9a01-ab3c-4024-9604-8a1e8ac263f0-host-proc-sys-net\") pod \"cilium-7scfr\" (UID: \"9d3e9a01-ab3c-4024-9604-8a1e8ac263f0\") " pod="kube-system/cilium-7scfr" Feb 13 23:19:40.541015 kubelet[2048]: I0213 23:19:40.540111 2048 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4dc3e788-7e8d-4d7d-9ee4-26a4d7f1b1cb-xtables-lock\") pod \"kube-proxy-9xhmf\" (UID: \"4dc3e788-7e8d-4d7d-9ee4-26a4d7f1b1cb\") " pod="kube-system/kube-proxy-9xhmf" Feb 13 23:19:40.812752 containerd[1622]: time="2025-02-13T23:19:40.811777246Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:cilium-7scfr,Uid:9d3e9a01-ab3c-4024-9604-8a1e8ac263f0,Namespace:kube-system,Attempt:0,}" Feb 13 23:19:40.812752 containerd[1622]: time="2025-02-13T23:19:40.812720053Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9xhmf,Uid:4dc3e788-7e8d-4d7d-9ee4-26a4d7f1b1cb,Namespace:kube-system,Attempt:0,}" Feb 13 23:19:41.498692 kubelet[2048]: E0213 23:19:41.498622 2048 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 23:19:41.599762 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2849182319.mount: Deactivated successfully. Feb 13 23:19:41.606917 containerd[1622]: time="2025-02-13T23:19:41.606492949Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 23:19:41.609186 containerd[1622]: time="2025-02-13T23:19:41.608085494Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 23:19:41.609186 containerd[1622]: time="2025-02-13T23:19:41.609138292Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 23:19:41.611669 containerd[1622]: time="2025-02-13T23:19:41.610105201Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 23:19:41.611669 containerd[1622]: time="2025-02-13T23:19:41.610284243Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Feb 13 23:19:41.613053 containerd[1622]: time="2025-02-13T23:19:41.613015287Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 23:19:41.616314 containerd[1622]: time="2025-02-13T23:19:41.616274026Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 804.044271ms" Feb 13 23:19:41.618423 containerd[1622]: time="2025-02-13T23:19:41.618388489Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 805.527468ms" Feb 13 23:19:41.812706 containerd[1622]: time="2025-02-13T23:19:41.805391468Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 23:19:41.812706 containerd[1622]: time="2025-02-13T23:19:41.811749293Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 23:19:41.812706 containerd[1622]: time="2025-02-13T23:19:41.811770740Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 23:19:41.812706 containerd[1622]: time="2025-02-13T23:19:41.812017289Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 23:19:41.821034 containerd[1622]: time="2025-02-13T23:19:41.818975938Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 23:19:41.821034 containerd[1622]: time="2025-02-13T23:19:41.819040400Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 23:19:41.821034 containerd[1622]: time="2025-02-13T23:19:41.819057919Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 23:19:41.821034 containerd[1622]: time="2025-02-13T23:19:41.819160843Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 23:19:41.994707 containerd[1622]: time="2025-02-13T23:19:41.992919699Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9xhmf,Uid:4dc3e788-7e8d-4d7d-9ee4-26a4d7f1b1cb,Namespace:kube-system,Attempt:0,} returns sandbox id \"985a3a39550826f9645ad696426f2300d676e618b987d4446b97546d06d29711\""
Feb 13 23:19:41.999597 containerd[1622]: time="2025-02-13T23:19:41.999504965Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7scfr,Uid:9d3e9a01-ab3c-4024-9604-8a1e8ac263f0,Namespace:kube-system,Attempt:0,} returns sandbox id \"a25d81949766dcae2f3fda6313c39e39e146ffc2795cd81577e90eed7b8d31c0\""
Feb 13 23:19:42.001416 containerd[1622]: time="2025-02-13T23:19:42.001384506Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\""
Feb 13 23:19:42.499688 kubelet[2048]: E0213 23:19:42.499596 2048 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 23:19:43.500676 kubelet[2048]: E0213 23:19:43.500548 2048 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 23:19:43.617639 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2833435841.mount: Deactivated successfully.
Feb 13 23:19:44.373241 containerd[1622]: time="2025-02-13T23:19:44.373052051Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 23:19:44.374399 containerd[1622]: time="2025-02-13T23:19:44.374180895Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.10: active requests=0, bytes read=29057866"
Feb 13 23:19:44.375187 containerd[1622]: time="2025-02-13T23:19:44.375124823Z" level=info msg="ImageCreate event name:\"sha256:a21d1b47e857207628486a387f670f224051a16b74b06a1b76d07a96e738ab54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 23:19:44.379060 containerd[1622]: time="2025-02-13T23:19:44.378994444Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 23:19:44.380711 containerd[1622]: time="2025-02-13T23:19:44.380468865Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.10\" with image id \"sha256:a21d1b47e857207628486a387f670f224051a16b74b06a1b76d07a96e738ab54\", repo tag \"registry.k8s.io/kube-proxy:v1.30.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\", size \"29056877\" in 2.378772068s"
Feb 13 23:19:44.380711 containerd[1622]: time="2025-02-13T23:19:44.380518565Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\" returns image reference \"sha256:a21d1b47e857207628486a387f670f224051a16b74b06a1b76d07a96e738ab54\""
Feb 13 23:19:44.383223 containerd[1622]: time="2025-02-13T23:19:44.382957786Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Feb 13 23:19:44.385508 containerd[1622]: time="2025-02-13T23:19:44.385475423Z" level=info msg="CreateContainer within sandbox \"985a3a39550826f9645ad696426f2300d676e618b987d4446b97546d06d29711\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Feb 13 23:19:44.403244 containerd[1622]: time="2025-02-13T23:19:44.403052015Z" level=info msg="CreateContainer within sandbox \"985a3a39550826f9645ad696426f2300d676e618b987d4446b97546d06d29711\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"b0ae708533177560f8a63e8d77960e2f6c8230c3ff278eefc35446015882b894\""
Feb 13 23:19:44.403997 containerd[1622]: time="2025-02-13T23:19:44.403964246Z" level=info msg="StartContainer for \"b0ae708533177560f8a63e8d77960e2f6c8230c3ff278eefc35446015882b894\""
Feb 13 23:19:44.446051 systemd[1]: run-containerd-runc-k8s.io-b0ae708533177560f8a63e8d77960e2f6c8230c3ff278eefc35446015882b894-runc.MrrAXD.mount: Deactivated successfully.
Feb 13 23:19:44.501338 kubelet[2048]: E0213 23:19:44.501289 2048 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 23:19:44.526496 containerd[1622]: time="2025-02-13T23:19:44.526226844Z" level=info msg="StartContainer for \"b0ae708533177560f8a63e8d77960e2f6c8230c3ff278eefc35446015882b894\" returns successfully"
Feb 13 23:19:44.687247 kubelet[2048]: I0213 23:19:44.686561 2048 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-9xhmf" podStartSLOduration=4.303007509 podStartE2EDuration="6.686529536s" podCreationTimestamp="2025-02-13 23:19:38 +0000 UTC" firstStartedPulling="2025-02-13 23:19:41.99832519 +0000 UTC m=+4.141479154" lastFinishedPulling="2025-02-13 23:19:44.381847217 +0000 UTC m=+6.525001181" observedRunningTime="2025-02-13 23:19:44.686202968 +0000 UTC m=+6.829356952" watchObservedRunningTime="2025-02-13 23:19:44.686529536 +0000 UTC m=+6.829683507"
Feb 13 23:19:45.501795 kubelet[2048]: E0213 23:19:45.501698 2048 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 23:19:45.874618 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Feb 13 23:19:46.502695 kubelet[2048]: E0213 23:19:46.502593 2048 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 23:19:47.503605 kubelet[2048]: E0213 23:19:47.503527 2048 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 23:19:48.504555 kubelet[2048]: E0213 23:19:48.504478 2048 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 23:19:49.505570 kubelet[2048]: E0213 23:19:49.505493 2048 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 23:19:50.506321 kubelet[2048]: E0213 23:19:50.506220 2048 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 23:19:51.506792 kubelet[2048]: E0213 23:19:51.506593 2048 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 23:19:52.509669 kubelet[2048]: E0213 23:19:52.509509 2048 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 23:19:53.509707 kubelet[2048]: E0213 23:19:53.509607 2048 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 23:19:53.641482 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3084650042.mount: Deactivated successfully.
Feb 13 23:19:54.510573 kubelet[2048]: E0213 23:19:54.510518 2048 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 23:19:55.512264 kubelet[2048]: E0213 23:19:55.512111 2048 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 23:19:56.513463 kubelet[2048]: E0213 23:19:56.513318 2048 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 23:19:57.118614 containerd[1622]: time="2025-02-13T23:19:57.118551249Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 23:19:57.119941 containerd[1622]: time="2025-02-13T23:19:57.119888346Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503"
Feb 13 23:19:57.120827 containerd[1622]: time="2025-02-13T23:19:57.120686502Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 23:19:57.123381 containerd[1622]: time="2025-02-13T23:19:57.123122480Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 12.740121717s"
Feb 13 23:19:57.123381 containerd[1622]: time="2025-02-13T23:19:57.123173415Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Feb 13 23:19:57.127117 containerd[1622]: time="2025-02-13T23:19:57.126574334Z" level=info msg="CreateContainer within sandbox \"a25d81949766dcae2f3fda6313c39e39e146ffc2795cd81577e90eed7b8d31c0\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 13 23:19:57.140782 containerd[1622]: time="2025-02-13T23:19:57.140692860Z" level=info msg="CreateContainer within sandbox \"a25d81949766dcae2f3fda6313c39e39e146ffc2795cd81577e90eed7b8d31c0\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"580ef3cf37b0f9ad80111903ae9b6760466c7ea3fd6832a0d699bb43e5f6b317\""
Feb 13 23:19:57.141668 containerd[1622]: time="2025-02-13T23:19:57.141406976Z" level=info msg="StartContainer for \"580ef3cf37b0f9ad80111903ae9b6760466c7ea3fd6832a0d699bb43e5f6b317\""
Feb 13 23:19:57.205668 systemd[1]: run-containerd-runc-k8s.io-580ef3cf37b0f9ad80111903ae9b6760466c7ea3fd6832a0d699bb43e5f6b317-runc.agvDdy.mount: Deactivated successfully.
Feb 13 23:19:57.261327 containerd[1622]: time="2025-02-13T23:19:57.261276697Z" level=info msg="StartContainer for \"580ef3cf37b0f9ad80111903ae9b6760466c7ea3fd6832a0d699bb43e5f6b317\" returns successfully"
Feb 13 23:19:57.398245 containerd[1622]: time="2025-02-13T23:19:57.397980289Z" level=info msg="shim disconnected" id=580ef3cf37b0f9ad80111903ae9b6760466c7ea3fd6832a0d699bb43e5f6b317 namespace=k8s.io
Feb 13 23:19:57.398245 containerd[1622]: time="2025-02-13T23:19:57.398151351Z" level=warning msg="cleaning up after shim disconnected" id=580ef3cf37b0f9ad80111903ae9b6760466c7ea3fd6832a0d699bb43e5f6b317 namespace=k8s.io
Feb 13 23:19:57.398245 containerd[1622]: time="2025-02-13T23:19:57.398175020Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 23:19:57.513947 kubelet[2048]: E0213 23:19:57.513867 2048 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 23:19:57.736231 containerd[1622]: time="2025-02-13T23:19:57.736183213Z" level=info msg="CreateContainer within sandbox \"a25d81949766dcae2f3fda6313c39e39e146ffc2795cd81577e90eed7b8d31c0\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb 13 23:19:57.748913 containerd[1622]: time="2025-02-13T23:19:57.748801710Z" level=info msg="CreateContainer within sandbox \"a25d81949766dcae2f3fda6313c39e39e146ffc2795cd81577e90eed7b8d31c0\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"7f8e67097033154a7df809c7fb47a3814e291ed1fc2810aa1641fddc2a8af57f\""
Feb 13 23:19:57.749683 containerd[1622]: time="2025-02-13T23:19:57.749432691Z" level=info msg="StartContainer for \"7f8e67097033154a7df809c7fb47a3814e291ed1fc2810aa1641fddc2a8af57f\""
Feb 13 23:19:57.819311 containerd[1622]: time="2025-02-13T23:19:57.819101647Z" level=info msg="StartContainer for \"7f8e67097033154a7df809c7fb47a3814e291ed1fc2810aa1641fddc2a8af57f\" returns successfully"
Feb 13 23:19:57.835578 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 13 23:19:57.836132 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Feb 13 23:19:57.836237 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Feb 13 23:19:57.844129 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 23:19:57.872866 containerd[1622]: time="2025-02-13T23:19:57.872592682Z" level=info msg="shim disconnected" id=7f8e67097033154a7df809c7fb47a3814e291ed1fc2810aa1641fddc2a8af57f namespace=k8s.io
Feb 13 23:19:57.872866 containerd[1622]: time="2025-02-13T23:19:57.872748519Z" level=warning msg="cleaning up after shim disconnected" id=7f8e67097033154a7df809c7fb47a3814e291ed1fc2810aa1641fddc2a8af57f namespace=k8s.io
Feb 13 23:19:57.872866 containerd[1622]: time="2025-02-13T23:19:57.872767090Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 23:19:57.880727 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 23:19:58.137816 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-580ef3cf37b0f9ad80111903ae9b6760466c7ea3fd6832a0d699bb43e5f6b317-rootfs.mount: Deactivated successfully.
Feb 13 23:19:58.496515 kubelet[2048]: E0213 23:19:58.496385 2048 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 23:19:58.514685 kubelet[2048]: E0213 23:19:58.514598 2048 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 23:19:58.741369 containerd[1622]: time="2025-02-13T23:19:58.741231491Z" level=info msg="CreateContainer within sandbox \"a25d81949766dcae2f3fda6313c39e39e146ffc2795cd81577e90eed7b8d31c0\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 13 23:19:58.779895 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3410822415.mount: Deactivated successfully.
Feb 13 23:19:58.799108 containerd[1622]: time="2025-02-13T23:19:58.798981199Z" level=info msg="CreateContainer within sandbox \"a25d81949766dcae2f3fda6313c39e39e146ffc2795cd81577e90eed7b8d31c0\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"35cfcdbd3eb2a48c9f6bd70b79c42bd0528dccc762432e2a3a670806b20e6899\""
Feb 13 23:19:58.800105 containerd[1622]: time="2025-02-13T23:19:58.800061267Z" level=info msg="StartContainer for \"35cfcdbd3eb2a48c9f6bd70b79c42bd0528dccc762432e2a3a670806b20e6899\""
Feb 13 23:19:58.889757 containerd[1622]: time="2025-02-13T23:19:58.889577871Z" level=info msg="StartContainer for \"35cfcdbd3eb2a48c9f6bd70b79c42bd0528dccc762432e2a3a670806b20e6899\" returns successfully"
Feb 13 23:19:58.918344 containerd[1622]: time="2025-02-13T23:19:58.918078455Z" level=info msg="shim disconnected" id=35cfcdbd3eb2a48c9f6bd70b79c42bd0528dccc762432e2a3a670806b20e6899 namespace=k8s.io
Feb 13 23:19:58.918344 containerd[1622]: time="2025-02-13T23:19:58.918152092Z" level=warning msg="cleaning up after shim disconnected" id=35cfcdbd3eb2a48c9f6bd70b79c42bd0528dccc762432e2a3a670806b20e6899 namespace=k8s.io
Feb 13 23:19:58.918344 containerd[1622]: time="2025-02-13T23:19:58.918167385Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 23:19:59.137696 systemd[1]: run-containerd-runc-k8s.io-35cfcdbd3eb2a48c9f6bd70b79c42bd0528dccc762432e2a3a670806b20e6899-runc.cCYEci.mount: Deactivated successfully.
Feb 13 23:19:59.137995 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-35cfcdbd3eb2a48c9f6bd70b79c42bd0528dccc762432e2a3a670806b20e6899-rootfs.mount: Deactivated successfully.
Feb 13 23:19:59.515795 kubelet[2048]: E0213 23:19:59.515728 2048 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 23:19:59.747065 containerd[1622]: time="2025-02-13T23:19:59.746993848Z" level=info msg="CreateContainer within sandbox \"a25d81949766dcae2f3fda6313c39e39e146ffc2795cd81577e90eed7b8d31c0\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 13 23:19:59.771208 containerd[1622]: time="2025-02-13T23:19:59.771022810Z" level=info msg="CreateContainer within sandbox \"a25d81949766dcae2f3fda6313c39e39e146ffc2795cd81577e90eed7b8d31c0\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"0ada00c40da93830504be09ca881032b40db7948bff484765d86b9be18230e76\""
Feb 13 23:19:59.772717 containerd[1622]: time="2025-02-13T23:19:59.771989686Z" level=info msg="StartContainer for \"0ada00c40da93830504be09ca881032b40db7948bff484765d86b9be18230e76\""
Feb 13 23:19:59.851399 containerd[1622]: time="2025-02-13T23:19:59.851345207Z" level=info msg="StartContainer for \"0ada00c40da93830504be09ca881032b40db7948bff484765d86b9be18230e76\" returns successfully"
Feb 13 23:19:59.878296 containerd[1622]: time="2025-02-13T23:19:59.877952692Z" level=info msg="shim disconnected" id=0ada00c40da93830504be09ca881032b40db7948bff484765d86b9be18230e76 namespace=k8s.io
Feb 13 23:19:59.878296 containerd[1622]: time="2025-02-13T23:19:59.878046436Z" level=warning msg="cleaning up after shim disconnected" id=0ada00c40da93830504be09ca881032b40db7948bff484765d86b9be18230e76 namespace=k8s.io
Feb 13 23:19:59.878296 containerd[1622]: time="2025-02-13T23:19:59.878062492Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 23:20:00.138028 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0ada00c40da93830504be09ca881032b40db7948bff484765d86b9be18230e76-rootfs.mount: Deactivated successfully.
Feb 13 23:20:00.473716 update_engine[1612]: I20250213 23:20:00.472597 1612 update_attempter.cc:509] Updating boot flags...
Feb 13 23:20:00.532180 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2631)
Feb 13 23:20:00.532420 kubelet[2048]: E0213 23:20:00.531752 2048 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 23:20:00.627252 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2635)
Feb 13 23:20:00.762459 containerd[1622]: time="2025-02-13T23:20:00.761264738Z" level=info msg="CreateContainer within sandbox \"a25d81949766dcae2f3fda6313c39e39e146ffc2795cd81577e90eed7b8d31c0\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 13 23:20:00.784714 containerd[1622]: time="2025-02-13T23:20:00.784013249Z" level=info msg="CreateContainer within sandbox \"a25d81949766dcae2f3fda6313c39e39e146ffc2795cd81577e90eed7b8d31c0\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"3c6a3cda560b5a18ca4f920dc5ab2edd99bd6157155e519b55ec6669e8d58036\""
Feb 13 23:20:00.785950 containerd[1622]: time="2025-02-13T23:20:00.785839781Z" level=info msg="StartContainer for \"3c6a3cda560b5a18ca4f920dc5ab2edd99bd6157155e519b55ec6669e8d58036\""
Feb 13 23:20:00.851299 systemd[1]: run-containerd-runc-k8s.io-3c6a3cda560b5a18ca4f920dc5ab2edd99bd6157155e519b55ec6669e8d58036-runc.E3b1Kn.mount: Deactivated successfully.
Feb 13 23:20:00.921586 containerd[1622]: time="2025-02-13T23:20:00.921511438Z" level=info msg="StartContainer for \"3c6a3cda560b5a18ca4f920dc5ab2edd99bd6157155e519b55ec6669e8d58036\" returns successfully"
Feb 13 23:20:01.145076 kubelet[2048]: I0213 23:20:01.143323 2048 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Feb 13 23:20:01.532735 kubelet[2048]: E0213 23:20:01.532282 2048 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 23:20:01.578359 kernel: Initializing XFRM netlink socket
Feb 13 23:20:01.788536 kubelet[2048]: I0213 23:20:01.788331 2048 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-7scfr" podStartSLOduration=8.665136368 podStartE2EDuration="23.78828457s" podCreationTimestamp="2025-02-13 23:19:38 +0000 UTC" firstStartedPulling="2025-02-13 23:19:42.00159952 +0000 UTC m=+4.144753477" lastFinishedPulling="2025-02-13 23:19:57.124747723 +0000 UTC m=+19.267901679" observedRunningTime="2025-02-13 23:20:01.787867795 +0000 UTC m=+23.931021792" watchObservedRunningTime="2025-02-13 23:20:01.78828457 +0000 UTC m=+23.931438548"
Feb 13 23:20:02.533033 kubelet[2048]: E0213 23:20:02.532967 2048 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 23:20:03.315127 systemd-networkd[1266]: cilium_host: Link UP
Feb 13 23:20:03.316349 systemd-networkd[1266]: cilium_net: Link UP
Feb 13 23:20:03.318035 systemd-networkd[1266]: cilium_net: Gained carrier
Feb 13 23:20:03.319125 systemd-networkd[1266]: cilium_host: Gained carrier
Feb 13 23:20:03.319610 systemd-networkd[1266]: cilium_net: Gained IPv6LL
Feb 13 23:20:03.322271 systemd-networkd[1266]: cilium_host: Gained IPv6LL
Feb 13 23:20:03.442691 kubelet[2048]: I0213 23:20:03.442603 2048 topology_manager.go:215] "Topology Admit Handler" podUID="9dafc346-a90d-4b7c-9ed6-f5c954e9165e" podNamespace="default" podName="nginx-deployment-85f456d6dd-w7xsg"
Feb 13 23:20:03.467702 kubelet[2048]: I0213 23:20:03.466434 2048 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvspn\" (UniqueName: \"kubernetes.io/projected/9dafc346-a90d-4b7c-9ed6-f5c954e9165e-kube-api-access-jvspn\") pod \"nginx-deployment-85f456d6dd-w7xsg\" (UID: \"9dafc346-a90d-4b7c-9ed6-f5c954e9165e\") " pod="default/nginx-deployment-85f456d6dd-w7xsg"
Feb 13 23:20:03.481843 systemd-networkd[1266]: cilium_vxlan: Link UP
Feb 13 23:20:03.481856 systemd-networkd[1266]: cilium_vxlan: Gained carrier
Feb 13 23:20:03.534154 kubelet[2048]: E0213 23:20:03.534058 2048 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 23:20:03.774380 containerd[1622]: time="2025-02-13T23:20:03.773062317Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-w7xsg,Uid:9dafc346-a90d-4b7c-9ed6-f5c954e9165e,Namespace:default,Attempt:0,}"
Feb 13 23:20:03.911229 kernel: NET: Registered PF_ALG protocol family
Feb 13 23:20:04.536964 kubelet[2048]: E0213 23:20:04.536854 2048 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 23:20:05.004540 systemd-networkd[1266]: lxc_health: Link UP
Feb 13 23:20:05.014031 systemd-networkd[1266]: lxc_health: Gained carrier
Feb 13 23:20:05.356337 systemd-networkd[1266]: lxc0403bcfb27dc: Link UP
Feb 13 23:20:05.380548 kernel: eth0: renamed from tmpe8481
Feb 13 23:20:05.388419 systemd-networkd[1266]: lxc0403bcfb27dc: Gained carrier
Feb 13 23:20:05.537364 kubelet[2048]: E0213 23:20:05.537260 2048 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 23:20:05.548762 systemd-networkd[1266]: cilium_vxlan: Gained IPv6LL
Feb 13 23:20:06.251923 systemd-networkd[1266]: lxc_health: Gained IPv6LL
Feb 13 23:20:06.537843 kubelet[2048]: E0213 23:20:06.537623 2048 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 23:20:06.891995 systemd-networkd[1266]: lxc0403bcfb27dc: Gained IPv6LL
Feb 13 23:20:07.538983 kubelet[2048]: E0213 23:20:07.538885 2048 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 23:20:08.539941 kubelet[2048]: E0213 23:20:08.539846 2048 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 23:20:09.540140 kubelet[2048]: E0213 23:20:09.540062 2048 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 23:20:10.522778 containerd[1622]: time="2025-02-13T23:20:10.522571481Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 23:20:10.522778 containerd[1622]: time="2025-02-13T23:20:10.522724389Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 23:20:10.523846 containerd[1622]: time="2025-02-13T23:20:10.522759723Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 23:20:10.523846 containerd[1622]: time="2025-02-13T23:20:10.522962812Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 23:20:10.540505 kubelet[2048]: E0213 23:20:10.540422 2048 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 23:20:10.575019 systemd[1]: run-containerd-runc-k8s.io-e848143c1f1f8e919e96a606f4190e8ce1b8486224c2bb636f957ad51ba363aa-runc.896e92.mount: Deactivated successfully.
Feb 13 23:20:10.633899 containerd[1622]: time="2025-02-13T23:20:10.633721417Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-w7xsg,Uid:9dafc346-a90d-4b7c-9ed6-f5c954e9165e,Namespace:default,Attempt:0,} returns sandbox id \"e848143c1f1f8e919e96a606f4190e8ce1b8486224c2bb636f957ad51ba363aa\""
Feb 13 23:20:10.636625 containerd[1622]: time="2025-02-13T23:20:10.636593087Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Feb 13 23:20:11.540730 kubelet[2048]: E0213 23:20:11.540602 2048 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 23:20:12.541413 kubelet[2048]: E0213 23:20:12.541346 2048 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 23:20:13.543889 kubelet[2048]: E0213 23:20:13.543033 2048 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 23:20:14.543701 kubelet[2048]: E0213 23:20:14.543623 2048 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 23:20:14.578129 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2711781009.mount: Deactivated successfully.
Feb 13 23:20:15.544448 kubelet[2048]: E0213 23:20:15.544377 2048 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 23:20:16.544691 kubelet[2048]: E0213 23:20:16.544592 2048 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 23:20:16.548391 containerd[1622]: time="2025-02-13T23:20:16.548332527Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 23:20:16.549758 containerd[1622]: time="2025-02-13T23:20:16.549698715Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=73054493" Feb 13 23:20:16.551855 containerd[1622]: time="2025-02-13T23:20:16.551147100Z" level=info msg="ImageCreate event name:\"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 23:20:16.554683 containerd[1622]: time="2025-02-13T23:20:16.554596257Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 23:20:16.556183 containerd[1622]: time="2025-02-13T23:20:16.556128511Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\", size \"73054371\" in 5.91948517s" Feb 13 23:20:16.556183 containerd[1622]: time="2025-02-13T23:20:16.556173568Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\"" Feb 13 23:20:16.559896 containerd[1622]: 
time="2025-02-13T23:20:16.559860040Z" level=info msg="CreateContainer within sandbox \"e848143c1f1f8e919e96a606f4190e8ce1b8486224c2bb636f957ad51ba363aa\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Feb 13 23:20:16.590009 containerd[1622]: time="2025-02-13T23:20:16.589784181Z" level=info msg="CreateContainer within sandbox \"e848143c1f1f8e919e96a606f4190e8ce1b8486224c2bb636f957ad51ba363aa\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"80709157a4e14e2f2b7774fb907fca8114fc27387495db459274883dea16ea5c\"" Feb 13 23:20:16.591912 containerd[1622]: time="2025-02-13T23:20:16.591883132Z" level=info msg="StartContainer for \"80709157a4e14e2f2b7774fb907fca8114fc27387495db459274883dea16ea5c\"" Feb 13 23:20:16.704166 containerd[1622]: time="2025-02-13T23:20:16.704085225Z" level=info msg="StartContainer for \"80709157a4e14e2f2b7774fb907fca8114fc27387495db459274883dea16ea5c\" returns successfully" Feb 13 23:20:16.824501 kubelet[2048]: I0213 23:20:16.823860 2048 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-85f456d6dd-w7xsg" podStartSLOduration=7.901712498 podStartE2EDuration="13.823840945s" podCreationTimestamp="2025-02-13 23:20:03 +0000 UTC" firstStartedPulling="2025-02-13 23:20:10.63594754 +0000 UTC m=+32.779101497" lastFinishedPulling="2025-02-13 23:20:16.558075971 +0000 UTC m=+38.701229944" observedRunningTime="2025-02-13 23:20:16.823620284 +0000 UTC m=+38.966774259" watchObservedRunningTime="2025-02-13 23:20:16.823840945 +0000 UTC m=+38.966994916" Feb 13 23:20:17.545197 kubelet[2048]: E0213 23:20:17.545097 2048 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 23:20:18.496344 kubelet[2048]: E0213 23:20:18.496261 2048 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 23:20:18.545849 kubelet[2048]: E0213 23:20:18.545747 2048 file_linux.go:61] "Unable 
to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 23:20:19.546693 kubelet[2048]: E0213 23:20:19.546587 2048 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 23:20:20.547876 kubelet[2048]: E0213 23:20:20.547772 2048 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 23:20:21.548925 kubelet[2048]: E0213 23:20:21.548830 2048 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 23:20:22.549780 kubelet[2048]: E0213 23:20:22.549622 2048 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 23:20:23.550980 kubelet[2048]: E0213 23:20:23.550886 2048 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 23:20:24.551223 kubelet[2048]: E0213 23:20:24.551121 2048 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 23:20:24.925693 kubelet[2048]: I0213 23:20:24.925321 2048 topology_manager.go:215] "Topology Admit Handler" podUID="1225278a-4ac9-4f64-9d09-da2dcc035fd0" podNamespace="default" podName="nfs-server-provisioner-0" Feb 13 23:20:25.109146 kubelet[2048]: I0213 23:20:25.108964 2048 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/1225278a-4ac9-4f64-9d09-da2dcc035fd0-data\") pod \"nfs-server-provisioner-0\" (UID: \"1225278a-4ac9-4f64-9d09-da2dcc035fd0\") " pod="default/nfs-server-provisioner-0" Feb 13 23:20:25.109146 kubelet[2048]: I0213 23:20:25.109043 2048 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7c22h\" (UniqueName: 
\"kubernetes.io/projected/1225278a-4ac9-4f64-9d09-da2dcc035fd0-kube-api-access-7c22h\") pod \"nfs-server-provisioner-0\" (UID: \"1225278a-4ac9-4f64-9d09-da2dcc035fd0\") " pod="default/nfs-server-provisioner-0" Feb 13 23:20:25.231093 containerd[1622]: time="2025-02-13T23:20:25.230991125Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:1225278a-4ac9-4f64-9d09-da2dcc035fd0,Namespace:default,Attempt:0,}" Feb 13 23:20:25.290993 systemd-networkd[1266]: lxc406ccf38e611: Link UP Feb 13 23:20:25.308757 kernel: eth0: renamed from tmp6f786 Feb 13 23:20:25.320751 systemd-networkd[1266]: lxc406ccf38e611: Gained carrier Feb 13 23:20:25.551787 kubelet[2048]: E0213 23:20:25.551565 2048 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 23:20:25.606245 containerd[1622]: time="2025-02-13T23:20:25.605422107Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 23:20:25.606890 containerd[1622]: time="2025-02-13T23:20:25.606593690Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 23:20:25.606890 containerd[1622]: time="2025-02-13T23:20:25.606621714Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 23:20:25.606890 containerd[1622]: time="2025-02-13T23:20:25.606770426Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 23:20:25.696095 containerd[1622]: time="2025-02-13T23:20:25.695969009Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:1225278a-4ac9-4f64-9d09-da2dcc035fd0,Namespace:default,Attempt:0,} returns sandbox id \"6f78674e61b4534c8de6c3d87e9ffc518018e5e1326f176572949b1015a30b3e\"" Feb 13 23:20:25.698339 containerd[1622]: time="2025-02-13T23:20:25.698296870Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Feb 13 23:20:26.553672 kubelet[2048]: E0213 23:20:26.552797 2048 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 23:20:26.988960 systemd-networkd[1266]: lxc406ccf38e611: Gained IPv6LL Feb 13 23:20:27.553626 kubelet[2048]: E0213 23:20:27.553493 2048 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 23:20:28.553822 kubelet[2048]: E0213 23:20:28.553734 2048 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 23:20:29.338696 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3773161849.mount: Deactivated successfully. 
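The kubelet error repeated throughout this window ("Unable to read config path ... /etc/kubernetes/manifests") is benign on a node that runs no static pods: the file config source retries its watch and logs each attempt while the directory is absent. A minimal sketch of the usual remedy is simply creating the directory so the watcher has something to attach to; the `/tmp` prefix below is an assumption so the example runs without root (on a real node the path would be `/etc/kubernetes/manifests`, and depending on the kubelet version a restart may be needed before the messages stop):

```shell
# Sketch: quiet the kubelet's "Unable to read config path" retries by creating
# the static-pod manifest directory it watches. ROOT is a scratch prefix
# (assumption) so this runs without touching the real host filesystem.
ROOT=/tmp/kubelet-demo
mkdir -p "$ROOT/etc/kubernetes/manifests"
# An empty directory is a valid manifest path (it just means "no static pods").
ls -ld "$ROOT/etc/kubernetes/manifests"
```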
Feb 13 23:20:29.553998 kubelet[2048]: E0213 23:20:29.553930 2048 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 23:20:30.554729 kubelet[2048]: E0213 23:20:30.554613 2048 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 23:20:31.556184 kubelet[2048]: E0213 23:20:31.556114 2048 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 23:20:32.468283 containerd[1622]: time="2025-02-13T23:20:32.467689035Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 23:20:32.469828 containerd[1622]: time="2025-02-13T23:20:32.469764456Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039414" Feb 13 23:20:32.470891 containerd[1622]: time="2025-02-13T23:20:32.470824061Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 23:20:32.474573 containerd[1622]: time="2025-02-13T23:20:32.474485774Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 23:20:32.478206 containerd[1622]: time="2025-02-13T23:20:32.478153438Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size 
\"91036984\" in 6.779809946s" Feb 13 23:20:32.478285 containerd[1622]: time="2025-02-13T23:20:32.478212129Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Feb 13 23:20:32.483045 containerd[1622]: time="2025-02-13T23:20:32.483011679Z" level=info msg="CreateContainer within sandbox \"6f78674e61b4534c8de6c3d87e9ffc518018e5e1326f176572949b1015a30b3e\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Feb 13 23:20:32.496562 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2530622495.mount: Deactivated successfully. Feb 13 23:20:32.505401 containerd[1622]: time="2025-02-13T23:20:32.505338757Z" level=info msg="CreateContainer within sandbox \"6f78674e61b4534c8de6c3d87e9ffc518018e5e1326f176572949b1015a30b3e\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"6b3ba9b3a51bfebfea8341596624bccaa104a469368ade17ddd93bcf02530c05\"" Feb 13 23:20:32.506226 containerd[1622]: time="2025-02-13T23:20:32.506195156Z" level=info msg="StartContainer for \"6b3ba9b3a51bfebfea8341596624bccaa104a469368ade17ddd93bcf02530c05\"" Feb 13 23:20:32.558803 kubelet[2048]: E0213 23:20:32.556696 2048 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 23:20:32.589000 containerd[1622]: time="2025-02-13T23:20:32.588509842Z" level=info msg="StartContainer for \"6b3ba9b3a51bfebfea8341596624bccaa104a469368ade17ddd93bcf02530c05\" returns successfully" Feb 13 23:20:32.866901 kubelet[2048]: I0213 23:20:32.866241 2048 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=2.083964762 podStartE2EDuration="8.86616069s" podCreationTimestamp="2025-02-13 23:20:24 +0000 UTC" firstStartedPulling="2025-02-13 23:20:25.697906745 +0000 UTC m=+47.841060708" 
lastFinishedPulling="2025-02-13 23:20:32.480102673 +0000 UTC m=+54.623256636" observedRunningTime="2025-02-13 23:20:32.86564083 +0000 UTC m=+55.008794810" watchObservedRunningTime="2025-02-13 23:20:32.86616069 +0000 UTC m=+55.009314652" Feb 13 23:20:33.559954 kubelet[2048]: E0213 23:20:33.559871 2048 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 23:20:34.560960 kubelet[2048]: E0213 23:20:34.560889 2048 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 23:20:35.561901 kubelet[2048]: E0213 23:20:35.561809 2048 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 23:20:36.562599 kubelet[2048]: E0213 23:20:36.562513 2048 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 23:20:37.563289 kubelet[2048]: E0213 23:20:37.563186 2048 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 23:20:38.496314 kubelet[2048]: E0213 23:20:38.496193 2048 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 23:20:38.564134 kubelet[2048]: E0213 23:20:38.564050 2048 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 23:20:39.564699 kubelet[2048]: E0213 23:20:39.564625 2048 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 23:20:40.565826 kubelet[2048]: E0213 23:20:40.565743 2048 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 23:20:41.566713 kubelet[2048]: E0213 23:20:41.566476 2048 file_linux.go:61] "Unable to read config path" err="path does not exist, 
ignoring" path="/etc/kubernetes/manifests" Feb 13 23:20:42.354346 kubelet[2048]: I0213 23:20:42.354257 2048 topology_manager.go:215] "Topology Admit Handler" podUID="e223872a-c8dc-4310-a7f6-d44cb1b199d0" podNamespace="default" podName="test-pod-1" Feb 13 23:20:42.526498 kubelet[2048]: I0213 23:20:42.526053 2048 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-457cd802-8130-490d-bad4-a7460b635999\" (UniqueName: \"kubernetes.io/nfs/e223872a-c8dc-4310-a7f6-d44cb1b199d0-pvc-457cd802-8130-490d-bad4-a7460b635999\") pod \"test-pod-1\" (UID: \"e223872a-c8dc-4310-a7f6-d44cb1b199d0\") " pod="default/test-pod-1" Feb 13 23:20:42.526498 kubelet[2048]: I0213 23:20:42.526159 2048 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nw62f\" (UniqueName: \"kubernetes.io/projected/e223872a-c8dc-4310-a7f6-d44cb1b199d0-kube-api-access-nw62f\") pod \"test-pod-1\" (UID: \"e223872a-c8dc-4310-a7f6-d44cb1b199d0\") " pod="default/test-pod-1" Feb 13 23:20:42.567760 kubelet[2048]: E0213 23:20:42.567687 2048 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 23:20:42.677753 kernel: FS-Cache: Loaded Feb 13 23:20:42.772097 kernel: RPC: Registered named UNIX socket transport module. Feb 13 23:20:42.772305 kernel: RPC: Registered udp transport module. Feb 13 23:20:42.772363 kernel: RPC: Registered tcp transport module. Feb 13 23:20:42.773063 kernel: RPC: Registered tcp-with-tls transport module. Feb 13 23:20:42.774092 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. 
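The kernel messages just above show the NFS client stack (RPC transports, id_resolver key types) loading as the node prepares to mount the provisioner-backed PV. The nfsidmap warnings that follow report that `root@nfs-server-provisioner.default.svc.cluster.local` "does not map into domain 'gb1.brightbox.com'": NFSv4 id mapping only accepts principals whose domain suffix matches the local `Domain` setting in `/etc/idmapd.conf`, and mismatched names fall back to the nobody user. A sketch of the relevant config knob, written to a temp file rather than the real `/etc/idmapd.conf`; the `Domain` value here is illustrative only (the right value, or whether the warning even matters, depends on the deployment):

```shell
# Sketch: NFSv4 id mapping compares the domain part of an incoming name
# (everything after the '@') against the "Domain" setting in idmapd.conf.
# A mismatch, as in the nfsidmap warnings above, maps the owner to nobody.
# Writing to a temp file here (assumption) instead of /etc/idmapd.conf.
conf=$(mktemp)
cat > "$conf" <<'EOF'
[General]
# Illustrative value: the domain suffix of the incoming principal
# root@nfs-server-provisioner.default.svc.cluster.local
Domain = nfs-server-provisioner.default.svc.cluster.local
EOF
grep '^Domain' "$conf"
```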
Feb 13 23:20:43.134143 kernel: NFS: Registering the id_resolver key type Feb 13 23:20:43.134546 kernel: Key type id_resolver registered Feb 13 23:20:43.136786 kernel: Key type id_legacy registered Feb 13 23:20:43.187570 nfsidmap[3453]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'gb1.brightbox.com' Feb 13 23:20:43.194872 nfsidmap[3456]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'gb1.brightbox.com' Feb 13 23:20:43.262612 containerd[1622]: time="2025-02-13T23:20:43.262387264Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:e223872a-c8dc-4310-a7f6-d44cb1b199d0,Namespace:default,Attempt:0,}" Feb 13 23:20:43.319942 systemd-networkd[1266]: lxc99c8fcc5ee7c: Link UP Feb 13 23:20:43.331053 kernel: eth0: renamed from tmpdc545 Feb 13 23:20:43.341348 systemd-networkd[1266]: lxc99c8fcc5ee7c: Gained carrier Feb 13 23:20:43.568756 kubelet[2048]: E0213 23:20:43.568589 2048 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 23:20:43.621737 containerd[1622]: time="2025-02-13T23:20:43.619328306Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 23:20:43.621737 containerd[1622]: time="2025-02-13T23:20:43.619663855Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 23:20:43.621737 containerd[1622]: time="2025-02-13T23:20:43.619711419Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 23:20:43.621737 containerd[1622]: time="2025-02-13T23:20:43.619915958Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 23:20:43.714621 containerd[1622]: time="2025-02-13T23:20:43.714542395Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:e223872a-c8dc-4310-a7f6-d44cb1b199d0,Namespace:default,Attempt:0,} returns sandbox id \"dc54590a92b5c8fd325849cc0a166d96ff90983e0946bbb55894324551509cde\"" Feb 13 23:20:43.717689 containerd[1622]: time="2025-02-13T23:20:43.717284236Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 13 23:20:44.129353 containerd[1622]: time="2025-02-13T23:20:44.128506500Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 23:20:44.129353 containerd[1622]: time="2025-02-13T23:20:44.129138029Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" Feb 13 23:20:44.134361 containerd[1622]: time="2025-02-13T23:20:44.134321819Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\", size \"73054371\" in 416.966228ms" Feb 13 23:20:44.134438 containerd[1622]: time="2025-02-13T23:20:44.134369195Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\"" Feb 13 23:20:44.137697 containerd[1622]: time="2025-02-13T23:20:44.137623700Z" level=info msg="CreateContainer within sandbox \"dc54590a92b5c8fd325849cc0a166d96ff90983e0946bbb55894324551509cde\" for container &ContainerMetadata{Name:test,Attempt:0,}" Feb 13 23:20:44.221421 containerd[1622]: time="2025-02-13T23:20:44.221346756Z" level=info msg="CreateContainer within sandbox 
\"dc54590a92b5c8fd325849cc0a166d96ff90983e0946bbb55894324551509cde\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"b5c14dd1522a9a9b1c8fe9e4742006be4685ca757881ff1572103991a26e5e23\"" Feb 13 23:20:44.223235 containerd[1622]: time="2025-02-13T23:20:44.223170619Z" level=info msg="StartContainer for \"b5c14dd1522a9a9b1c8fe9e4742006be4685ca757881ff1572103991a26e5e23\"" Feb 13 23:20:44.332461 containerd[1622]: time="2025-02-13T23:20:44.331931380Z" level=info msg="StartContainer for \"b5c14dd1522a9a9b1c8fe9e4742006be4685ca757881ff1572103991a26e5e23\" returns successfully" Feb 13 23:20:44.569702 kubelet[2048]: E0213 23:20:44.569609 2048 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 23:20:44.588041 systemd-networkd[1266]: lxc99c8fcc5ee7c: Gained IPv6LL Feb 13 23:20:44.917099 kubelet[2048]: I0213 23:20:44.916614 2048 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=18.497970711 podStartE2EDuration="18.91650334s" podCreationTimestamp="2025-02-13 23:20:26 +0000 UTC" firstStartedPulling="2025-02-13 23:20:43.716814307 +0000 UTC m=+65.859968263" lastFinishedPulling="2025-02-13 23:20:44.135346924 +0000 UTC m=+66.278500892" observedRunningTime="2025-02-13 23:20:44.916235344 +0000 UTC m=+67.059389319" watchObservedRunningTime="2025-02-13 23:20:44.91650334 +0000 UTC m=+67.059657304" Feb 13 23:20:45.570669 kubelet[2048]: E0213 23:20:45.570590 2048 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 23:20:46.571320 kubelet[2048]: E0213 23:20:46.571239 2048 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 23:20:47.571843 kubelet[2048]: E0213 23:20:47.571754 2048 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 
23:20:48.572336 kubelet[2048]: E0213 23:20:48.572247 2048 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 23:20:49.573336 kubelet[2048]: E0213 23:20:49.573247 2048 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 23:20:50.573494 kubelet[2048]: E0213 23:20:50.573426 2048 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 23:20:51.574197 kubelet[2048]: E0213 23:20:51.574118 2048 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 23:20:52.392970 containerd[1622]: time="2025-02-13T23:20:52.392868147Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 23:20:52.400496 containerd[1622]: time="2025-02-13T23:20:52.400430472Z" level=info msg="StopContainer for \"3c6a3cda560b5a18ca4f920dc5ab2edd99bd6157155e519b55ec6669e8d58036\" with timeout 2 (s)" Feb 13 23:20:52.411484 containerd[1622]: time="2025-02-13T23:20:52.411453809Z" level=info msg="Stop container \"3c6a3cda560b5a18ca4f920dc5ab2edd99bd6157155e519b55ec6669e8d58036\" with signal terminated" Feb 13 23:20:52.422101 systemd-networkd[1266]: lxc_health: Link DOWN Feb 13 23:20:52.422112 systemd-networkd[1266]: lxc_health: Lost carrier Feb 13 23:20:52.472003 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3c6a3cda560b5a18ca4f920dc5ab2edd99bd6157155e519b55ec6669e8d58036-rootfs.mount: Deactivated successfully. 
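The teardown sequence above starts when `/etc/cni/net.d/05-cilium.conf` is removed: containerd reloads CNI configuration on filesystem events under that directory, and with the last config gone it logs "no network config found ... cni plugin not initialized", after which the cilium container is stopped and `lxc_health` loses carrier. A runnable simulation of that empty-config-dir state, using a scratch directory (assumption) in place of the real `/etc/cni/net.d`:

```shell
# Sketch: containerd watches the CNI config dir and reports "no network config
# found" once it is empty, as seen above after 05-cilium.conf is removed.
# Simulated in a scratch dir (assumption) so nothing on the host is touched.
netd=$(mktemp -d)
touch "$netd/05-cilium.conf"
rm "$netd/05-cilium.conf"
if [ -z "$(ls -A "$netd")" ]; then
  echo "no network config found in $netd"
fi
```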
Feb 13 23:20:52.574710 kubelet[2048]: E0213 23:20:52.574611 2048 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 23:20:52.630020 containerd[1622]: time="2025-02-13T23:20:52.603723655Z" level=info msg="shim disconnected" id=3c6a3cda560b5a18ca4f920dc5ab2edd99bd6157155e519b55ec6669e8d58036 namespace=k8s.io Feb 13 23:20:52.630271 containerd[1622]: time="2025-02-13T23:20:52.630027600Z" level=warning msg="cleaning up after shim disconnected" id=3c6a3cda560b5a18ca4f920dc5ab2edd99bd6157155e519b55ec6669e8d58036 namespace=k8s.io Feb 13 23:20:52.630271 containerd[1622]: time="2025-02-13T23:20:52.630060384Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 23:20:52.652764 containerd[1622]: time="2025-02-13T23:20:52.652499821Z" level=info msg="StopContainer for \"3c6a3cda560b5a18ca4f920dc5ab2edd99bd6157155e519b55ec6669e8d58036\" returns successfully" Feb 13 23:20:52.656672 containerd[1622]: time="2025-02-13T23:20:52.656639568Z" level=info msg="StopPodSandbox for \"a25d81949766dcae2f3fda6313c39e39e146ffc2795cd81577e90eed7b8d31c0\"" Feb 13 23:20:52.661755 containerd[1622]: time="2025-02-13T23:20:52.656715563Z" level=info msg="Container to stop \"0ada00c40da93830504be09ca881032b40db7948bff484765d86b9be18230e76\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 23:20:52.661755 containerd[1622]: time="2025-02-13T23:20:52.661720672Z" level=info msg="Container to stop \"580ef3cf37b0f9ad80111903ae9b6760466c7ea3fd6832a0d699bb43e5f6b317\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 23:20:52.661755 containerd[1622]: time="2025-02-13T23:20:52.661740036Z" level=info msg="Container to stop \"7f8e67097033154a7df809c7fb47a3814e291ed1fc2810aa1641fddc2a8af57f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 23:20:52.661755 containerd[1622]: time="2025-02-13T23:20:52.661755726Z" level=info msg="Container to 
stop \"35cfcdbd3eb2a48c9f6bd70b79c42bd0528dccc762432e2a3a670806b20e6899\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 23:20:52.662029 containerd[1622]: time="2025-02-13T23:20:52.661770474Z" level=info msg="Container to stop \"3c6a3cda560b5a18ca4f920dc5ab2edd99bd6157155e519b55ec6669e8d58036\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 23:20:52.664913 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a25d81949766dcae2f3fda6313c39e39e146ffc2795cd81577e90eed7b8d31c0-shm.mount: Deactivated successfully. Feb 13 23:20:52.695344 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a25d81949766dcae2f3fda6313c39e39e146ffc2795cd81577e90eed7b8d31c0-rootfs.mount: Deactivated successfully. Feb 13 23:20:52.701189 containerd[1622]: time="2025-02-13T23:20:52.700754406Z" level=info msg="shim disconnected" id=a25d81949766dcae2f3fda6313c39e39e146ffc2795cd81577e90eed7b8d31c0 namespace=k8s.io Feb 13 23:20:52.701189 containerd[1622]: time="2025-02-13T23:20:52.700813681Z" level=warning msg="cleaning up after shim disconnected" id=a25d81949766dcae2f3fda6313c39e39e146ffc2795cd81577e90eed7b8d31c0 namespace=k8s.io Feb 13 23:20:52.701189 containerd[1622]: time="2025-02-13T23:20:52.700842172Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 23:20:52.745029 containerd[1622]: time="2025-02-13T23:20:52.744927440Z" level=info msg="TearDown network for sandbox \"a25d81949766dcae2f3fda6313c39e39e146ffc2795cd81577e90eed7b8d31c0\" successfully" Feb 13 23:20:52.745029 containerd[1622]: time="2025-02-13T23:20:52.745010167Z" level=info msg="StopPodSandbox for \"a25d81949766dcae2f3fda6313c39e39e146ffc2795cd81577e90eed7b8d31c0\" returns successfully" Feb 13 23:20:52.900673 kubelet[2048]: I0213 23:20:52.898634 2048 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/9d3e9a01-ab3c-4024-9604-8a1e8ac263f0-cilium-cgroup\") pod \"9d3e9a01-ab3c-4024-9604-8a1e8ac263f0\" (UID: \"9d3e9a01-ab3c-4024-9604-8a1e8ac263f0\") " Feb 13 23:20:52.900673 kubelet[2048]: I0213 23:20:52.898721 2048 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9d3e9a01-ab3c-4024-9604-8a1e8ac263f0-cni-path\") pod \"9d3e9a01-ab3c-4024-9604-8a1e8ac263f0\" (UID: \"9d3e9a01-ab3c-4024-9604-8a1e8ac263f0\") " Feb 13 23:20:52.900673 kubelet[2048]: I0213 23:20:52.898770 2048 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9d3e9a01-ab3c-4024-9604-8a1e8ac263f0-cilium-config-path\") pod \"9d3e9a01-ab3c-4024-9604-8a1e8ac263f0\" (UID: \"9d3e9a01-ab3c-4024-9604-8a1e8ac263f0\") " Feb 13 23:20:52.900673 kubelet[2048]: I0213 23:20:52.898814 2048 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9d3e9a01-ab3c-4024-9604-8a1e8ac263f0-hubble-tls\") pod \"9d3e9a01-ab3c-4024-9604-8a1e8ac263f0\" (UID: \"9d3e9a01-ab3c-4024-9604-8a1e8ac263f0\") " Feb 13 23:20:52.900673 kubelet[2048]: I0213 23:20:52.898808 2048 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9d3e9a01-ab3c-4024-9604-8a1e8ac263f0-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "9d3e9a01-ab3c-4024-9604-8a1e8ac263f0" (UID: "9d3e9a01-ab3c-4024-9604-8a1e8ac263f0"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 23:20:52.900673 kubelet[2048]: I0213 23:20:52.898848 2048 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9d3e9a01-ab3c-4024-9604-8a1e8ac263f0-lib-modules\") pod \"9d3e9a01-ab3c-4024-9604-8a1e8ac263f0\" (UID: \"9d3e9a01-ab3c-4024-9604-8a1e8ac263f0\") " Feb 13 23:20:52.901130 kubelet[2048]: I0213 23:20:52.898884 2048 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bfk5v\" (UniqueName: \"kubernetes.io/projected/9d3e9a01-ab3c-4024-9604-8a1e8ac263f0-kube-api-access-bfk5v\") pod \"9d3e9a01-ab3c-4024-9604-8a1e8ac263f0\" (UID: \"9d3e9a01-ab3c-4024-9604-8a1e8ac263f0\") " Feb 13 23:20:52.901130 kubelet[2048]: I0213 23:20:52.898924 2048 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9d3e9a01-ab3c-4024-9604-8a1e8ac263f0-host-proc-sys-net\") pod \"9d3e9a01-ab3c-4024-9604-8a1e8ac263f0\" (UID: \"9d3e9a01-ab3c-4024-9604-8a1e8ac263f0\") " Feb 13 23:20:52.901130 kubelet[2048]: I0213 23:20:52.898952 2048 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9d3e9a01-ab3c-4024-9604-8a1e8ac263f0-cilium-run\") pod \"9d3e9a01-ab3c-4024-9604-8a1e8ac263f0\" (UID: \"9d3e9a01-ab3c-4024-9604-8a1e8ac263f0\") " Feb 13 23:20:52.901130 kubelet[2048]: I0213 23:20:52.898975 2048 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9d3e9a01-ab3c-4024-9604-8a1e8ac263f0-hostproc\") pod \"9d3e9a01-ab3c-4024-9604-8a1e8ac263f0\" (UID: \"9d3e9a01-ab3c-4024-9604-8a1e8ac263f0\") " Feb 13 23:20:52.901130 kubelet[2048]: I0213 23:20:52.899009 2048 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/9d3e9a01-ab3c-4024-9604-8a1e8ac263f0-etc-cni-netd\") pod \"9d3e9a01-ab3c-4024-9604-8a1e8ac263f0\" (UID: \"9d3e9a01-ab3c-4024-9604-8a1e8ac263f0\") " Feb 13 23:20:52.901130 kubelet[2048]: I0213 23:20:52.899054 2048 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9d3e9a01-ab3c-4024-9604-8a1e8ac263f0-clustermesh-secrets\") pod \"9d3e9a01-ab3c-4024-9604-8a1e8ac263f0\" (UID: \"9d3e9a01-ab3c-4024-9604-8a1e8ac263f0\") " Feb 13 23:20:52.901510 kubelet[2048]: I0213 23:20:52.899101 2048 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9d3e9a01-ab3c-4024-9604-8a1e8ac263f0-xtables-lock\") pod \"9d3e9a01-ab3c-4024-9604-8a1e8ac263f0\" (UID: \"9d3e9a01-ab3c-4024-9604-8a1e8ac263f0\") " Feb 13 23:20:52.901510 kubelet[2048]: I0213 23:20:52.899130 2048 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9d3e9a01-ab3c-4024-9604-8a1e8ac263f0-host-proc-sys-kernel\") pod \"9d3e9a01-ab3c-4024-9604-8a1e8ac263f0\" (UID: \"9d3e9a01-ab3c-4024-9604-8a1e8ac263f0\") " Feb 13 23:20:52.901510 kubelet[2048]: I0213 23:20:52.899153 2048 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9d3e9a01-ab3c-4024-9604-8a1e8ac263f0-bpf-maps\") pod \"9d3e9a01-ab3c-4024-9604-8a1e8ac263f0\" (UID: \"9d3e9a01-ab3c-4024-9604-8a1e8ac263f0\") " Feb 13 23:20:52.901510 kubelet[2048]: I0213 23:20:52.899205 2048 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9d3e9a01-ab3c-4024-9604-8a1e8ac263f0-cilium-cgroup\") on node \"10.230.54.94\" DevicePath \"\"" Feb 13 23:20:52.901510 kubelet[2048]: I0213 23:20:52.899244 2048 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/9d3e9a01-ab3c-4024-9604-8a1e8ac263f0-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "9d3e9a01-ab3c-4024-9604-8a1e8ac263f0" (UID: "9d3e9a01-ab3c-4024-9604-8a1e8ac263f0"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 23:20:52.902708 kubelet[2048]: I0213 23:20:52.902678 2048 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9d3e9a01-ab3c-4024-9604-8a1e8ac263f0-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "9d3e9a01-ab3c-4024-9604-8a1e8ac263f0" (UID: "9d3e9a01-ab3c-4024-9604-8a1e8ac263f0"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 23:20:52.903968 kubelet[2048]: I0213 23:20:52.903876 2048 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9d3e9a01-ab3c-4024-9604-8a1e8ac263f0-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "9d3e9a01-ab3c-4024-9604-8a1e8ac263f0" (UID: "9d3e9a01-ab3c-4024-9604-8a1e8ac263f0"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 23:20:52.903968 kubelet[2048]: I0213 23:20:52.903923 2048 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9d3e9a01-ab3c-4024-9604-8a1e8ac263f0-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "9d3e9a01-ab3c-4024-9604-8a1e8ac263f0" (UID: "9d3e9a01-ab3c-4024-9604-8a1e8ac263f0"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 23:20:52.905104 kubelet[2048]: I0213 23:20:52.905066 2048 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9d3e9a01-ab3c-4024-9604-8a1e8ac263f0-hostproc" (OuterVolumeSpecName: "hostproc") pod "9d3e9a01-ab3c-4024-9604-8a1e8ac263f0" (UID: "9d3e9a01-ab3c-4024-9604-8a1e8ac263f0"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 23:20:52.905213 kubelet[2048]: I0213 23:20:52.905119 2048 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9d3e9a01-ab3c-4024-9604-8a1e8ac263f0-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "9d3e9a01-ab3c-4024-9604-8a1e8ac263f0" (UID: "9d3e9a01-ab3c-4024-9604-8a1e8ac263f0"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 23:20:52.905341 kubelet[2048]: I0213 23:20:52.905312 2048 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9d3e9a01-ab3c-4024-9604-8a1e8ac263f0-cni-path" (OuterVolumeSpecName: "cni-path") pod "9d3e9a01-ab3c-4024-9604-8a1e8ac263f0" (UID: "9d3e9a01-ab3c-4024-9604-8a1e8ac263f0"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 23:20:52.905605 kubelet[2048]: I0213 23:20:52.905578 2048 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d3e9a01-ab3c-4024-9604-8a1e8ac263f0-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "9d3e9a01-ab3c-4024-9604-8a1e8ac263f0" (UID: "9d3e9a01-ab3c-4024-9604-8a1e8ac263f0"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 23:20:52.905785 kubelet[2048]: I0213 23:20:52.905760 2048 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9d3e9a01-ab3c-4024-9604-8a1e8ac263f0-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "9d3e9a01-ab3c-4024-9604-8a1e8ac263f0" (UID: "9d3e9a01-ab3c-4024-9604-8a1e8ac263f0"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 23:20:52.905924 kubelet[2048]: I0213 23:20:52.905900 2048 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9d3e9a01-ab3c-4024-9604-8a1e8ac263f0-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "9d3e9a01-ab3c-4024-9604-8a1e8ac263f0" (UID: "9d3e9a01-ab3c-4024-9604-8a1e8ac263f0"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 23:20:52.906806 kubelet[2048]: I0213 23:20:52.906779 2048 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d3e9a01-ab3c-4024-9604-8a1e8ac263f0-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "9d3e9a01-ab3c-4024-9604-8a1e8ac263f0" (UID: "9d3e9a01-ab3c-4024-9604-8a1e8ac263f0"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 13 23:20:52.908898 kubelet[2048]: I0213 23:20:52.908864 2048 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d3e9a01-ab3c-4024-9604-8a1e8ac263f0-kube-api-access-bfk5v" (OuterVolumeSpecName: "kube-api-access-bfk5v") pod "9d3e9a01-ab3c-4024-9604-8a1e8ac263f0" (UID: "9d3e9a01-ab3c-4024-9604-8a1e8ac263f0"). InnerVolumeSpecName "kube-api-access-bfk5v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 23:20:52.909621 kubelet[2048]: I0213 23:20:52.909591 2048 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d3e9a01-ab3c-4024-9604-8a1e8ac263f0-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "9d3e9a01-ab3c-4024-9604-8a1e8ac263f0" (UID: "9d3e9a01-ab3c-4024-9604-8a1e8ac263f0"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 13 23:20:52.927233 kubelet[2048]: I0213 23:20:52.927205 2048 scope.go:117] "RemoveContainer" containerID="3c6a3cda560b5a18ca4f920dc5ab2edd99bd6157155e519b55ec6669e8d58036" Feb 13 23:20:52.928958 containerd[1622]: time="2025-02-13T23:20:52.928908534Z" level=info msg="RemoveContainer for \"3c6a3cda560b5a18ca4f920dc5ab2edd99bd6157155e519b55ec6669e8d58036\"" Feb 13 23:20:52.933213 containerd[1622]: time="2025-02-13T23:20:52.933182515Z" level=info msg="RemoveContainer for \"3c6a3cda560b5a18ca4f920dc5ab2edd99bd6157155e519b55ec6669e8d58036\" returns successfully" Feb 13 23:20:52.933837 kubelet[2048]: I0213 23:20:52.933514 2048 scope.go:117] "RemoveContainer" containerID="0ada00c40da93830504be09ca881032b40db7948bff484765d86b9be18230e76" Feb 13 23:20:52.935033 containerd[1622]: time="2025-02-13T23:20:52.935002571Z" level=info msg="RemoveContainer for \"0ada00c40da93830504be09ca881032b40db7948bff484765d86b9be18230e76\"" Feb 13 23:20:52.938022 containerd[1622]: time="2025-02-13T23:20:52.937968198Z" level=info msg="RemoveContainer for \"0ada00c40da93830504be09ca881032b40db7948bff484765d86b9be18230e76\" returns successfully" Feb 13 23:20:52.939742 kubelet[2048]: I0213 23:20:52.939464 2048 scope.go:117] "RemoveContainer" containerID="35cfcdbd3eb2a48c9f6bd70b79c42bd0528dccc762432e2a3a670806b20e6899" Feb 13 23:20:52.941198 containerd[1622]: time="2025-02-13T23:20:52.941167425Z" level=info msg="RemoveContainer for \"35cfcdbd3eb2a48c9f6bd70b79c42bd0528dccc762432e2a3a670806b20e6899\"" Feb 13 23:20:52.951917 containerd[1622]: time="2025-02-13T23:20:52.951884988Z" level=info msg="RemoveContainer for \"35cfcdbd3eb2a48c9f6bd70b79c42bd0528dccc762432e2a3a670806b20e6899\" returns successfully" Feb 13 23:20:52.952326 kubelet[2048]: I0213 23:20:52.952306 2048 scope.go:117] "RemoveContainer" containerID="7f8e67097033154a7df809c7fb47a3814e291ed1fc2810aa1641fddc2a8af57f" Feb 13 23:20:52.953990 containerd[1622]: 
time="2025-02-13T23:20:52.953636049Z" level=info msg="RemoveContainer for \"7f8e67097033154a7df809c7fb47a3814e291ed1fc2810aa1641fddc2a8af57f\"" Feb 13 23:20:52.956685 containerd[1622]: time="2025-02-13T23:20:52.956569549Z" level=info msg="RemoveContainer for \"7f8e67097033154a7df809c7fb47a3814e291ed1fc2810aa1641fddc2a8af57f\" returns successfully" Feb 13 23:20:52.956892 kubelet[2048]: I0213 23:20:52.956772 2048 scope.go:117] "RemoveContainer" containerID="580ef3cf37b0f9ad80111903ae9b6760466c7ea3fd6832a0d699bb43e5f6b317" Feb 13 23:20:52.958293 containerd[1622]: time="2025-02-13T23:20:52.958183838Z" level=info msg="RemoveContainer for \"580ef3cf37b0f9ad80111903ae9b6760466c7ea3fd6832a0d699bb43e5f6b317\"" Feb 13 23:20:52.960587 containerd[1622]: time="2025-02-13T23:20:52.960541179Z" level=info msg="RemoveContainer for \"580ef3cf37b0f9ad80111903ae9b6760466c7ea3fd6832a0d699bb43e5f6b317\" returns successfully" Feb 13 23:20:52.960790 kubelet[2048]: I0213 23:20:52.960742 2048 scope.go:117] "RemoveContainer" containerID="3c6a3cda560b5a18ca4f920dc5ab2edd99bd6157155e519b55ec6669e8d58036" Feb 13 23:20:52.961014 containerd[1622]: time="2025-02-13T23:20:52.960957727Z" level=error msg="ContainerStatus for \"3c6a3cda560b5a18ca4f920dc5ab2edd99bd6157155e519b55ec6669e8d58036\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3c6a3cda560b5a18ca4f920dc5ab2edd99bd6157155e519b55ec6669e8d58036\": not found" Feb 13 23:20:52.969024 kubelet[2048]: E0213 23:20:52.968751 2048 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3c6a3cda560b5a18ca4f920dc5ab2edd99bd6157155e519b55ec6669e8d58036\": not found" containerID="3c6a3cda560b5a18ca4f920dc5ab2edd99bd6157155e519b55ec6669e8d58036" Feb 13 23:20:52.969024 kubelet[2048]: I0213 23:20:52.968812 2048 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"3c6a3cda560b5a18ca4f920dc5ab2edd99bd6157155e519b55ec6669e8d58036"} err="failed to get container status \"3c6a3cda560b5a18ca4f920dc5ab2edd99bd6157155e519b55ec6669e8d58036\": rpc error: code = NotFound desc = an error occurred when try to find container \"3c6a3cda560b5a18ca4f920dc5ab2edd99bd6157155e519b55ec6669e8d58036\": not found" Feb 13 23:20:52.969024 kubelet[2048]: I0213 23:20:52.968933 2048 scope.go:117] "RemoveContainer" containerID="0ada00c40da93830504be09ca881032b40db7948bff484765d86b9be18230e76" Feb 13 23:20:52.969237 containerd[1622]: time="2025-02-13T23:20:52.969158235Z" level=error msg="ContainerStatus for \"0ada00c40da93830504be09ca881032b40db7948bff484765d86b9be18230e76\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0ada00c40da93830504be09ca881032b40db7948bff484765d86b9be18230e76\": not found" Feb 13 23:20:52.969673 kubelet[2048]: E0213 23:20:52.969482 2048 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0ada00c40da93830504be09ca881032b40db7948bff484765d86b9be18230e76\": not found" containerID="0ada00c40da93830504be09ca881032b40db7948bff484765d86b9be18230e76" Feb 13 23:20:52.969673 kubelet[2048]: I0213 23:20:52.969548 2048 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0ada00c40da93830504be09ca881032b40db7948bff484765d86b9be18230e76"} err="failed to get container status \"0ada00c40da93830504be09ca881032b40db7948bff484765d86b9be18230e76\": rpc error: code = NotFound desc = an error occurred when try to find container \"0ada00c40da93830504be09ca881032b40db7948bff484765d86b9be18230e76\": not found" Feb 13 23:20:52.969673 kubelet[2048]: I0213 23:20:52.969573 2048 scope.go:117] "RemoveContainer" containerID="35cfcdbd3eb2a48c9f6bd70b79c42bd0528dccc762432e2a3a670806b20e6899" Feb 13 23:20:52.970114 
containerd[1622]: time="2025-02-13T23:20:52.969817685Z" level=error msg="ContainerStatus for \"35cfcdbd3eb2a48c9f6bd70b79c42bd0528dccc762432e2a3a670806b20e6899\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"35cfcdbd3eb2a48c9f6bd70b79c42bd0528dccc762432e2a3a670806b20e6899\": not found" Feb 13 23:20:52.970548 kubelet[2048]: E0213 23:20:52.969989 2048 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"35cfcdbd3eb2a48c9f6bd70b79c42bd0528dccc762432e2a3a670806b20e6899\": not found" containerID="35cfcdbd3eb2a48c9f6bd70b79c42bd0528dccc762432e2a3a670806b20e6899" Feb 13 23:20:52.970548 kubelet[2048]: I0213 23:20:52.970196 2048 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"35cfcdbd3eb2a48c9f6bd70b79c42bd0528dccc762432e2a3a670806b20e6899"} err="failed to get container status \"35cfcdbd3eb2a48c9f6bd70b79c42bd0528dccc762432e2a3a670806b20e6899\": rpc error: code = NotFound desc = an error occurred when try to find container \"35cfcdbd3eb2a48c9f6bd70b79c42bd0528dccc762432e2a3a670806b20e6899\": not found" Feb 13 23:20:52.970548 kubelet[2048]: I0213 23:20:52.970220 2048 scope.go:117] "RemoveContainer" containerID="7f8e67097033154a7df809c7fb47a3814e291ed1fc2810aa1641fddc2a8af57f" Feb 13 23:20:52.970738 containerd[1622]: time="2025-02-13T23:20:52.970447608Z" level=error msg="ContainerStatus for \"7f8e67097033154a7df809c7fb47a3814e291ed1fc2810aa1641fddc2a8af57f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7f8e67097033154a7df809c7fb47a3814e291ed1fc2810aa1641fddc2a8af57f\": not found" Feb 13 23:20:52.971140 kubelet[2048]: E0213 23:20:52.970906 2048 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container 
\"7f8e67097033154a7df809c7fb47a3814e291ed1fc2810aa1641fddc2a8af57f\": not found" containerID="7f8e67097033154a7df809c7fb47a3814e291ed1fc2810aa1641fddc2a8af57f" Feb 13 23:20:52.971140 kubelet[2048]: I0213 23:20:52.970939 2048 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7f8e67097033154a7df809c7fb47a3814e291ed1fc2810aa1641fddc2a8af57f"} err="failed to get container status \"7f8e67097033154a7df809c7fb47a3814e291ed1fc2810aa1641fddc2a8af57f\": rpc error: code = NotFound desc = an error occurred when try to find container \"7f8e67097033154a7df809c7fb47a3814e291ed1fc2810aa1641fddc2a8af57f\": not found" Feb 13 23:20:52.971140 kubelet[2048]: I0213 23:20:52.970963 2048 scope.go:117] "RemoveContainer" containerID="580ef3cf37b0f9ad80111903ae9b6760466c7ea3fd6832a0d699bb43e5f6b317" Feb 13 23:20:52.971340 containerd[1622]: time="2025-02-13T23:20:52.971232578Z" level=error msg="ContainerStatus for \"580ef3cf37b0f9ad80111903ae9b6760466c7ea3fd6832a0d699bb43e5f6b317\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"580ef3cf37b0f9ad80111903ae9b6760466c7ea3fd6832a0d699bb43e5f6b317\": not found" Feb 13 23:20:52.971600 kubelet[2048]: E0213 23:20:52.971472 2048 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"580ef3cf37b0f9ad80111903ae9b6760466c7ea3fd6832a0d699bb43e5f6b317\": not found" containerID="580ef3cf37b0f9ad80111903ae9b6760466c7ea3fd6832a0d699bb43e5f6b317" Feb 13 23:20:52.971600 kubelet[2048]: I0213 23:20:52.971549 2048 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"580ef3cf37b0f9ad80111903ae9b6760466c7ea3fd6832a0d699bb43e5f6b317"} err="failed to get container status \"580ef3cf37b0f9ad80111903ae9b6760466c7ea3fd6832a0d699bb43e5f6b317\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"580ef3cf37b0f9ad80111903ae9b6760466c7ea3fd6832a0d699bb43e5f6b317\": not found" Feb 13 23:20:53.000073 kubelet[2048]: I0213 23:20:52.999835 2048 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-bfk5v\" (UniqueName: \"kubernetes.io/projected/9d3e9a01-ab3c-4024-9604-8a1e8ac263f0-kube-api-access-bfk5v\") on node \"10.230.54.94\" DevicePath \"\"" Feb 13 23:20:53.000073 kubelet[2048]: I0213 23:20:52.999870 2048 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9d3e9a01-ab3c-4024-9604-8a1e8ac263f0-host-proc-sys-net\") on node \"10.230.54.94\" DevicePath \"\"" Feb 13 23:20:53.000073 kubelet[2048]: I0213 23:20:52.999887 2048 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9d3e9a01-ab3c-4024-9604-8a1e8ac263f0-cni-path\") on node \"10.230.54.94\" DevicePath \"\"" Feb 13 23:20:53.000073 kubelet[2048]: I0213 23:20:52.999903 2048 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9d3e9a01-ab3c-4024-9604-8a1e8ac263f0-cilium-config-path\") on node \"10.230.54.94\" DevicePath \"\"" Feb 13 23:20:53.000073 kubelet[2048]: I0213 23:20:52.999917 2048 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9d3e9a01-ab3c-4024-9604-8a1e8ac263f0-hubble-tls\") on node \"10.230.54.94\" DevicePath \"\"" Feb 13 23:20:53.000073 kubelet[2048]: I0213 23:20:52.999930 2048 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9d3e9a01-ab3c-4024-9604-8a1e8ac263f0-lib-modules\") on node \"10.230.54.94\" DevicePath \"\"" Feb 13 23:20:53.000073 kubelet[2048]: I0213 23:20:52.999943 2048 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9d3e9a01-ab3c-4024-9604-8a1e8ac263f0-etc-cni-netd\") on node \"10.230.54.94\" DevicePath \"\"" 
Feb 13 23:20:53.000073 kubelet[2048]: I0213 23:20:52.999955 2048 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9d3e9a01-ab3c-4024-9604-8a1e8ac263f0-cilium-run\") on node \"10.230.54.94\" DevicePath \"\"" Feb 13 23:20:53.000517 kubelet[2048]: I0213 23:20:52.999969 2048 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9d3e9a01-ab3c-4024-9604-8a1e8ac263f0-hostproc\") on node \"10.230.54.94\" DevicePath \"\"" Feb 13 23:20:53.000517 kubelet[2048]: I0213 23:20:53.000001 2048 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9d3e9a01-ab3c-4024-9604-8a1e8ac263f0-clustermesh-secrets\") on node \"10.230.54.94\" DevicePath \"\"" Feb 13 23:20:53.000517 kubelet[2048]: I0213 23:20:53.000019 2048 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9d3e9a01-ab3c-4024-9604-8a1e8ac263f0-xtables-lock\") on node \"10.230.54.94\" DevicePath \"\"" Feb 13 23:20:53.000517 kubelet[2048]: I0213 23:20:53.000034 2048 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9d3e9a01-ab3c-4024-9604-8a1e8ac263f0-host-proc-sys-kernel\") on node \"10.230.54.94\" DevicePath \"\"" Feb 13 23:20:53.000517 kubelet[2048]: I0213 23:20:53.000047 2048 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9d3e9a01-ab3c-4024-9604-8a1e8ac263f0-bpf-maps\") on node \"10.230.54.94\" DevicePath \"\"" Feb 13 23:20:53.332283 systemd[1]: var-lib-kubelet-pods-9d3e9a01\x2dab3c\x2d4024\x2d9604\x2d8a1e8ac263f0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbfk5v.mount: Deactivated successfully. 
Feb 13 23:20:53.332642 systemd[1]: var-lib-kubelet-pods-9d3e9a01\x2dab3c\x2d4024\x2d9604\x2d8a1e8ac263f0-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 13 23:20:53.332877 systemd[1]: var-lib-kubelet-pods-9d3e9a01\x2dab3c\x2d4024\x2d9604\x2d8a1e8ac263f0-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 13 23:20:53.575199 kubelet[2048]: E0213 23:20:53.575140 2048 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 23:20:53.657151 kubelet[2048]: E0213 23:20:53.656950 2048 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 23:20:54.575822 kubelet[2048]: E0213 23:20:54.575736 2048 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 23:20:54.644620 kubelet[2048]: I0213 23:20:54.644543 2048 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d3e9a01-ab3c-4024-9604-8a1e8ac263f0" path="/var/lib/kubelet/pods/9d3e9a01-ab3c-4024-9604-8a1e8ac263f0/volumes" Feb 13 23:20:55.576582 kubelet[2048]: E0213 23:20:55.576507 2048 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 23:20:56.577175 kubelet[2048]: E0213 23:20:56.577054 2048 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 23:20:57.039802 kubelet[2048]: I0213 23:20:57.039720 2048 topology_manager.go:215] "Topology Admit Handler" podUID="1da70a8b-a440-4321-b387-9db02c0ecda3" podNamespace="kube-system" podName="cilium-d7j6l" Feb 13 23:20:57.040038 kubelet[2048]: E0213 23:20:57.039895 2048 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9d3e9a01-ab3c-4024-9604-8a1e8ac263f0" 
containerName="apply-sysctl-overwrites" Feb 13 23:20:57.040038 kubelet[2048]: E0213 23:20:57.039923 2048 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9d3e9a01-ab3c-4024-9604-8a1e8ac263f0" containerName="mount-bpf-fs" Feb 13 23:20:57.040038 kubelet[2048]: E0213 23:20:57.039936 2048 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9d3e9a01-ab3c-4024-9604-8a1e8ac263f0" containerName="clean-cilium-state" Feb 13 23:20:57.040038 kubelet[2048]: E0213 23:20:57.039946 2048 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9d3e9a01-ab3c-4024-9604-8a1e8ac263f0" containerName="cilium-agent" Feb 13 23:20:57.040038 kubelet[2048]: E0213 23:20:57.039957 2048 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9d3e9a01-ab3c-4024-9604-8a1e8ac263f0" containerName="mount-cgroup" Feb 13 23:20:57.040038 kubelet[2048]: I0213 23:20:57.040027 2048 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d3e9a01-ab3c-4024-9604-8a1e8ac263f0" containerName="cilium-agent" Feb 13 23:20:57.043686 kubelet[2048]: I0213 23:20:57.042409 2048 topology_manager.go:215] "Topology Admit Handler" podUID="7315f879-290c-4c2b-88f6-47352cf28a55" podNamespace="kube-system" podName="cilium-operator-599987898-9tcmj" Feb 13 23:20:57.128698 kubelet[2048]: I0213 23:20:57.128599 2048 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1da70a8b-a440-4321-b387-9db02c0ecda3-cilium-config-path\") pod \"cilium-d7j6l\" (UID: \"1da70a8b-a440-4321-b387-9db02c0ecda3\") " pod="kube-system/cilium-d7j6l" Feb 13 23:20:57.128972 kubelet[2048]: I0213 23:20:57.128923 2048 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1da70a8b-a440-4321-b387-9db02c0ecda3-host-proc-sys-net\") pod \"cilium-d7j6l\" (UID: \"1da70a8b-a440-4321-b387-9db02c0ecda3\") 
" pod="kube-system/cilium-d7j6l" Feb 13 23:20:57.129179 kubelet[2048]: I0213 23:20:57.129136 2048 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1da70a8b-a440-4321-b387-9db02c0ecda3-host-proc-sys-kernel\") pod \"cilium-d7j6l\" (UID: \"1da70a8b-a440-4321-b387-9db02c0ecda3\") " pod="kube-system/cilium-d7j6l" Feb 13 23:20:57.129404 kubelet[2048]: I0213 23:20:57.129371 2048 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1da70a8b-a440-4321-b387-9db02c0ecda3-hostproc\") pod \"cilium-d7j6l\" (UID: \"1da70a8b-a440-4321-b387-9db02c0ecda3\") " pod="kube-system/cilium-d7j6l" Feb 13 23:20:57.129542 kubelet[2048]: I0213 23:20:57.129519 2048 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1da70a8b-a440-4321-b387-9db02c0ecda3-etc-cni-netd\") pod \"cilium-d7j6l\" (UID: \"1da70a8b-a440-4321-b387-9db02c0ecda3\") " pod="kube-system/cilium-d7j6l" Feb 13 23:20:57.129715 kubelet[2048]: I0213 23:20:57.129684 2048 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1da70a8b-a440-4321-b387-9db02c0ecda3-clustermesh-secrets\") pod \"cilium-d7j6l\" (UID: \"1da70a8b-a440-4321-b387-9db02c0ecda3\") " pod="kube-system/cilium-d7j6l" Feb 13 23:20:57.129878 kubelet[2048]: I0213 23:20:57.129846 2048 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/1da70a8b-a440-4321-b387-9db02c0ecda3-cilium-ipsec-secrets\") pod \"cilium-d7j6l\" (UID: \"1da70a8b-a440-4321-b387-9db02c0ecda3\") " pod="kube-system/cilium-d7j6l" Feb 13 23:20:57.130048 kubelet[2048]: I0213 23:20:57.130006 2048 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ql5l7\" (UniqueName: \"kubernetes.io/projected/1da70a8b-a440-4321-b387-9db02c0ecda3-kube-api-access-ql5l7\") pod \"cilium-d7j6l\" (UID: \"1da70a8b-a440-4321-b387-9db02c0ecda3\") " pod="kube-system/cilium-d7j6l" Feb 13 23:20:57.130243 kubelet[2048]: I0213 23:20:57.130197 2048 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1da70a8b-a440-4321-b387-9db02c0ecda3-cni-path\") pod \"cilium-d7j6l\" (UID: \"1da70a8b-a440-4321-b387-9db02c0ecda3\") " pod="kube-system/cilium-d7j6l" Feb 13 23:20:57.130398 kubelet[2048]: I0213 23:20:57.130375 2048 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22bth\" (UniqueName: \"kubernetes.io/projected/7315f879-290c-4c2b-88f6-47352cf28a55-kube-api-access-22bth\") pod \"cilium-operator-599987898-9tcmj\" (UID: \"7315f879-290c-4c2b-88f6-47352cf28a55\") " pod="kube-system/cilium-operator-599987898-9tcmj" Feb 13 23:20:57.130543 kubelet[2048]: I0213 23:20:57.130521 2048 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1da70a8b-a440-4321-b387-9db02c0ecda3-lib-modules\") pod \"cilium-d7j6l\" (UID: \"1da70a8b-a440-4321-b387-9db02c0ecda3\") " pod="kube-system/cilium-d7j6l" Feb 13 23:20:57.130702 kubelet[2048]: I0213 23:20:57.130678 2048 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1da70a8b-a440-4321-b387-9db02c0ecda3-xtables-lock\") pod \"cilium-d7j6l\" (UID: \"1da70a8b-a440-4321-b387-9db02c0ecda3\") " pod="kube-system/cilium-d7j6l" Feb 13 23:20:57.130835 kubelet[2048]: I0213 23:20:57.130814 2048 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1da70a8b-a440-4321-b387-9db02c0ecda3-cilium-run\") pod \"cilium-d7j6l\" (UID: \"1da70a8b-a440-4321-b387-9db02c0ecda3\") " pod="kube-system/cilium-d7j6l" Feb 13 23:20:57.131107 kubelet[2048]: I0213 23:20:57.130973 2048 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1da70a8b-a440-4321-b387-9db02c0ecda3-bpf-maps\") pod \"cilium-d7j6l\" (UID: \"1da70a8b-a440-4321-b387-9db02c0ecda3\") " pod="kube-system/cilium-d7j6l" Feb 13 23:20:57.131107 kubelet[2048]: I0213 23:20:57.131036 2048 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1da70a8b-a440-4321-b387-9db02c0ecda3-cilium-cgroup\") pod \"cilium-d7j6l\" (UID: \"1da70a8b-a440-4321-b387-9db02c0ecda3\") " pod="kube-system/cilium-d7j6l" Feb 13 23:20:57.131368 kubelet[2048]: I0213 23:20:57.131260 2048 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1da70a8b-a440-4321-b387-9db02c0ecda3-hubble-tls\") pod \"cilium-d7j6l\" (UID: \"1da70a8b-a440-4321-b387-9db02c0ecda3\") " pod="kube-system/cilium-d7j6l" Feb 13 23:20:57.131368 kubelet[2048]: I0213 23:20:57.131325 2048 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7315f879-290c-4c2b-88f6-47352cf28a55-cilium-config-path\") pod \"cilium-operator-599987898-9tcmj\" (UID: \"7315f879-290c-4c2b-88f6-47352cf28a55\") " pod="kube-system/cilium-operator-599987898-9tcmj" Feb 13 23:20:57.349470 containerd[1622]: time="2025-02-13T23:20:57.349278230Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-d7j6l,Uid:1da70a8b-a440-4321-b387-9db02c0ecda3,Namespace:kube-system,Attempt:0,}" Feb 13 23:20:57.350870 
containerd[1622]: time="2025-02-13T23:20:57.350481139Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-9tcmj,Uid:7315f879-290c-4c2b-88f6-47352cf28a55,Namespace:kube-system,Attempt:0,}" Feb 13 23:20:57.384229 containerd[1622]: time="2025-02-13T23:20:57.384081748Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 23:20:57.384558 containerd[1622]: time="2025-02-13T23:20:57.384168329Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 23:20:57.384558 containerd[1622]: time="2025-02-13T23:20:57.384206013Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 23:20:57.385183 containerd[1622]: time="2025-02-13T23:20:57.384902692Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 23:20:57.393202 containerd[1622]: time="2025-02-13T23:20:57.392778473Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 23:20:57.393202 containerd[1622]: time="2025-02-13T23:20:57.392892844Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 23:20:57.393202 containerd[1622]: time="2025-02-13T23:20:57.392919978Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 23:20:57.393202 containerd[1622]: time="2025-02-13T23:20:57.393093525Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 23:20:57.453863 containerd[1622]: time="2025-02-13T23:20:57.453810546Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-d7j6l,Uid:1da70a8b-a440-4321-b387-9db02c0ecda3,Namespace:kube-system,Attempt:0,} returns sandbox id \"381da5e5d4244b90b79dfa9f61ae61865394890c1ce4d78455c143b622f8eb9c\"" Feb 13 23:20:57.458330 containerd[1622]: time="2025-02-13T23:20:57.458215848Z" level=info msg="CreateContainer within sandbox \"381da5e5d4244b90b79dfa9f61ae61865394890c1ce4d78455c143b622f8eb9c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 13 23:20:57.492463 containerd[1622]: time="2025-02-13T23:20:57.492107238Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-9tcmj,Uid:7315f879-290c-4c2b-88f6-47352cf28a55,Namespace:kube-system,Attempt:0,} returns sandbox id \"64b8c3731ce2909c032a091efba0acc3a6dafce959601e3f6d79b6949c885ca2\"" Feb 13 23:20:57.494776 containerd[1622]: time="2025-02-13T23:20:57.494405382Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 13 23:20:57.518845 containerd[1622]: time="2025-02-13T23:20:57.518633097Z" level=info msg="CreateContainer within sandbox \"381da5e5d4244b90b79dfa9f61ae61865394890c1ce4d78455c143b622f8eb9c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"93719261410963fc224aca11dd51295f2a04485797fdf9f31dd1b5186386c813\"" Feb 13 23:20:57.519338 containerd[1622]: time="2025-02-13T23:20:57.519304427Z" level=info msg="StartContainer for \"93719261410963fc224aca11dd51295f2a04485797fdf9f31dd1b5186386c813\"" Feb 13 23:20:57.578168 kubelet[2048]: E0213 23:20:57.578076 2048 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 23:20:57.593169 containerd[1622]: time="2025-02-13T23:20:57.592874395Z" level=info 
msg="StartContainer for \"93719261410963fc224aca11dd51295f2a04485797fdf9f31dd1b5186386c813\" returns successfully" Feb 13 23:20:57.650925 containerd[1622]: time="2025-02-13T23:20:57.650040003Z" level=info msg="shim disconnected" id=93719261410963fc224aca11dd51295f2a04485797fdf9f31dd1b5186386c813 namespace=k8s.io Feb 13 23:20:57.650925 containerd[1622]: time="2025-02-13T23:20:57.650114093Z" level=warning msg="cleaning up after shim disconnected" id=93719261410963fc224aca11dd51295f2a04485797fdf9f31dd1b5186386c813 namespace=k8s.io Feb 13 23:20:57.650925 containerd[1622]: time="2025-02-13T23:20:57.650129027Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 23:20:57.950878 containerd[1622]: time="2025-02-13T23:20:57.950475584Z" level=info msg="CreateContainer within sandbox \"381da5e5d4244b90b79dfa9f61ae61865394890c1ce4d78455c143b622f8eb9c\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 13 23:20:57.959943 containerd[1622]: time="2025-02-13T23:20:57.959829728Z" level=info msg="CreateContainer within sandbox \"381da5e5d4244b90b79dfa9f61ae61865394890c1ce4d78455c143b622f8eb9c\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"da1425d58ddc6000db49246fcbd5379ffa9d8929bafd04f3f2e0d0f023267bee\"" Feb 13 23:20:57.960469 containerd[1622]: time="2025-02-13T23:20:57.960380186Z" level=info msg="StartContainer for \"da1425d58ddc6000db49246fcbd5379ffa9d8929bafd04f3f2e0d0f023267bee\"" Feb 13 23:20:58.039379 containerd[1622]: time="2025-02-13T23:20:58.039213854Z" level=info msg="StartContainer for \"da1425d58ddc6000db49246fcbd5379ffa9d8929bafd04f3f2e0d0f023267bee\" returns successfully" Feb 13 23:20:58.079297 containerd[1622]: time="2025-02-13T23:20:58.079173440Z" level=info msg="shim disconnected" id=da1425d58ddc6000db49246fcbd5379ffa9d8929bafd04f3f2e0d0f023267bee namespace=k8s.io Feb 13 23:20:58.079297 containerd[1622]: time="2025-02-13T23:20:58.079301406Z" level=warning msg="cleaning up after shim 
disconnected" id=da1425d58ddc6000db49246fcbd5379ffa9d8929bafd04f3f2e0d0f023267bee namespace=k8s.io Feb 13 23:20:58.079638 containerd[1622]: time="2025-02-13T23:20:58.079319268Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 23:20:58.496297 kubelet[2048]: E0213 23:20:58.496234 2048 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 23:20:58.578443 kubelet[2048]: E0213 23:20:58.578384 2048 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 23:20:58.658233 kubelet[2048]: E0213 23:20:58.658153 2048 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 23:20:58.959041 containerd[1622]: time="2025-02-13T23:20:58.958926718Z" level=info msg="CreateContainer within sandbox \"381da5e5d4244b90b79dfa9f61ae61865394890c1ce4d78455c143b622f8eb9c\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 13 23:20:58.987967 containerd[1622]: time="2025-02-13T23:20:58.987918591Z" level=info msg="CreateContainer within sandbox \"381da5e5d4244b90b79dfa9f61ae61865394890c1ce4d78455c143b622f8eb9c\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b85409a6482afd2aa8c546fa97185d658b234ca800615e24020b68ac6c29855c\"" Feb 13 23:20:58.989350 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3990933308.mount: Deactivated successfully. 
Feb 13 23:20:58.990431 containerd[1622]: time="2025-02-13T23:20:58.990401088Z" level=info msg="StartContainer for \"b85409a6482afd2aa8c546fa97185d658b234ca800615e24020b68ac6c29855c\"" Feb 13 23:20:59.185027 containerd[1622]: time="2025-02-13T23:20:59.184960291Z" level=info msg="StartContainer for \"b85409a6482afd2aa8c546fa97185d658b234ca800615e24020b68ac6c29855c\" returns successfully" Feb 13 23:20:59.213947 containerd[1622]: time="2025-02-13T23:20:59.213484684Z" level=info msg="shim disconnected" id=b85409a6482afd2aa8c546fa97185d658b234ca800615e24020b68ac6c29855c namespace=k8s.io Feb 13 23:20:59.213947 containerd[1622]: time="2025-02-13T23:20:59.213581941Z" level=warning msg="cleaning up after shim disconnected" id=b85409a6482afd2aa8c546fa97185d658b234ca800615e24020b68ac6c29855c namespace=k8s.io Feb 13 23:20:59.213947 containerd[1622]: time="2025-02-13T23:20:59.213681843Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 23:20:59.245256 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b85409a6482afd2aa8c546fa97185d658b234ca800615e24020b68ac6c29855c-rootfs.mount: Deactivated successfully. 
Feb 13 23:20:59.579682 kubelet[2048]: E0213 23:20:59.579464 2048 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 23:20:59.894274 containerd[1622]: time="2025-02-13T23:20:59.894002794Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 23:20:59.895101 containerd[1622]: time="2025-02-13T23:20:59.895066980Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
Feb 13 23:20:59.896071 containerd[1622]: time="2025-02-13T23:20:59.896039219Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 23:20:59.898169 containerd[1622]: time="2025-02-13T23:20:59.898054722Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.403604339s"
Feb 13 23:20:59.898169 containerd[1622]: time="2025-02-13T23:20:59.898117938Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Feb 13 23:20:59.902111 containerd[1622]: time="2025-02-13T23:20:59.901912464Z" level=info msg="CreateContainer within sandbox \"64b8c3731ce2909c032a091efba0acc3a6dafce959601e3f6d79b6949c885ca2\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Feb 13 23:20:59.913546 containerd[1622]: time="2025-02-13T23:20:59.913499717Z" level=info msg="CreateContainer within sandbox \"64b8c3731ce2909c032a091efba0acc3a6dafce959601e3f6d79b6949c885ca2\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"b5dbb9a4a11ea4447c8dcf0015f1d79909be08a22586df9a34c41c68a3bc113a\""
Feb 13 23:20:59.915471 containerd[1622]: time="2025-02-13T23:20:59.914345290Z" level=info msg="StartContainer for \"b5dbb9a4a11ea4447c8dcf0015f1d79909be08a22586df9a34c41c68a3bc113a\""
Feb 13 23:20:59.977170 containerd[1622]: time="2025-02-13T23:20:59.975032023Z" level=info msg="CreateContainer within sandbox \"381da5e5d4244b90b79dfa9f61ae61865394890c1ce4d78455c143b622f8eb9c\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 13 23:20:59.990033 containerd[1622]: time="2025-02-13T23:20:59.989985541Z" level=info msg="CreateContainer within sandbox \"381da5e5d4244b90b79dfa9f61ae61865394890c1ce4d78455c143b622f8eb9c\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"640282165f54c517a229dc823975ef072931fcd5a606ea43ba776bd9f895cad4\""
Feb 13 23:20:59.991173 containerd[1622]: time="2025-02-13T23:20:59.991139785Z" level=info msg="StartContainer for \"640282165f54c517a229dc823975ef072931fcd5a606ea43ba776bd9f895cad4\""
Feb 13 23:21:00.008486 containerd[1622]: time="2025-02-13T23:21:00.008428385Z" level=info msg="StartContainer for \"b5dbb9a4a11ea4447c8dcf0015f1d79909be08a22586df9a34c41c68a3bc113a\" returns successfully"
Feb 13 23:21:00.094569 containerd[1622]: time="2025-02-13T23:21:00.094225140Z" level=info msg="StartContainer for \"640282165f54c517a229dc823975ef072931fcd5a606ea43ba776bd9f895cad4\" returns successfully"
Feb 13 23:21:00.260811 containerd[1622]: time="2025-02-13T23:21:00.260732420Z" level=info msg="shim disconnected" id=640282165f54c517a229dc823975ef072931fcd5a606ea43ba776bd9f895cad4 namespace=k8s.io
Feb 13 23:21:00.260811 containerd[1622]: time="2025-02-13T23:21:00.260808600Z" level=warning msg="cleaning up after shim disconnected" id=640282165f54c517a229dc823975ef072931fcd5a606ea43ba776bd9f895cad4 namespace=k8s.io
Feb 13 23:21:00.260811 containerd[1622]: time="2025-02-13T23:21:00.260824301Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 23:21:00.279913 containerd[1622]: time="2025-02-13T23:21:00.278465565Z" level=warning msg="cleanup warnings time=\"2025-02-13T23:21:00Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Feb 13 23:21:00.428432 kubelet[2048]: I0213 23:21:00.426981 2048 setters.go:580] "Node became not ready" node="10.230.54.94" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-02-13T23:21:00Z","lastTransitionTime":"2025-02-13T23:21:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Feb 13 23:21:00.580712 kubelet[2048]: E0213 23:21:00.580463 2048 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 23:21:00.981247 containerd[1622]: time="2025-02-13T23:21:00.981184929Z" level=info msg="CreateContainer within sandbox \"381da5e5d4244b90b79dfa9f61ae61865394890c1ce4d78455c143b622f8eb9c\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 13 23:21:00.987370 kubelet[2048]: I0213 23:21:00.987276 2048 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-9tcmj" podStartSLOduration=1.58133224 podStartE2EDuration="3.987254386s" podCreationTimestamp="2025-02-13 23:20:57 +0000 UTC" firstStartedPulling="2025-02-13 23:20:57.493794303 +0000 UTC m=+79.636948259" lastFinishedPulling="2025-02-13 23:20:59.899716439 +0000 UTC m=+82.042870405" observedRunningTime="2025-02-13 23:21:00.986625704 +0000 UTC m=+83.129779706" watchObservedRunningTime="2025-02-13 23:21:00.987254386 +0000 UTC m=+83.130408351"
Feb 13 23:21:01.003918 containerd[1622]: time="2025-02-13T23:21:01.003788399Z" level=info msg="CreateContainer within sandbox \"381da5e5d4244b90b79dfa9f61ae61865394890c1ce4d78455c143b622f8eb9c\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"9367c1a7dfb88f32ab826a349415bb361ce170781f05a112e7fc0b9ac109bab8\""
Feb 13 23:21:01.004754 containerd[1622]: time="2025-02-13T23:21:01.004722536Z" level=info msg="StartContainer for \"9367c1a7dfb88f32ab826a349415bb361ce170781f05a112e7fc0b9ac109bab8\""
Feb 13 23:21:01.095242 containerd[1622]: time="2025-02-13T23:21:01.095184755Z" level=info msg="StartContainer for \"9367c1a7dfb88f32ab826a349415bb361ce170781f05a112e7fc0b9ac109bab8\" returns successfully"
Feb 13 23:21:01.581407 kubelet[2048]: E0213 23:21:01.581312 2048 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 23:21:01.775693 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Feb 13 23:21:02.581882 kubelet[2048]: E0213 23:21:02.581755 2048 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 23:21:03.582584 kubelet[2048]: E0213 23:21:03.582471 2048 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 23:21:04.582822 kubelet[2048]: E0213 23:21:04.582738 2048 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 23:21:04.909013 systemd[1]: run-containerd-runc-k8s.io-9367c1a7dfb88f32ab826a349415bb361ce170781f05a112e7fc0b9ac109bab8-runc.eU78Qh.mount: Deactivated successfully.
Feb 13 23:21:05.237259 systemd-networkd[1266]: lxc_health: Link UP
Feb 13 23:21:05.248740 systemd-networkd[1266]: lxc_health: Gained carrier
Feb 13 23:21:05.386685 kubelet[2048]: I0213 23:21:05.384260 2048 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-d7j6l" podStartSLOduration=8.384215812 podStartE2EDuration="8.384215812s" podCreationTimestamp="2025-02-13 23:20:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 23:21:02.022527885 +0000 UTC m=+84.165681879" watchObservedRunningTime="2025-02-13 23:21:05.384215812 +0000 UTC m=+87.527369782"
Feb 13 23:21:05.583998 kubelet[2048]: E0213 23:21:05.583839 2048 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 23:21:06.586125 kubelet[2048]: E0213 23:21:06.586027 2048 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 23:21:07.051901 systemd-networkd[1266]: lxc_health: Gained IPv6LL
Feb 13 23:21:07.428055 systemd[1]: run-containerd-runc-k8s.io-9367c1a7dfb88f32ab826a349415bb361ce170781f05a112e7fc0b9ac109bab8-runc.UNlyCY.mount: Deactivated successfully.
Feb 13 23:21:07.586874 kubelet[2048]: E0213 23:21:07.586783 2048 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 23:21:08.587499 kubelet[2048]: E0213 23:21:08.587400 2048 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 23:21:09.587754 kubelet[2048]: E0213 23:21:09.587626 2048 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 23:21:10.588304 kubelet[2048]: E0213 23:21:10.588213 2048 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 23:21:11.588912 kubelet[2048]: E0213 23:21:11.588837 2048 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 23:21:12.041235 systemd[1]: run-containerd-runc-k8s.io-9367c1a7dfb88f32ab826a349415bb361ce170781f05a112e7fc0b9ac109bab8-runc.YoORHc.mount: Deactivated successfully.
Feb 13 23:21:12.589412 kubelet[2048]: E0213 23:21:12.589341 2048 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 23:21:13.589766 kubelet[2048]: E0213 23:21:13.589688 2048 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 23:21:14.590815 kubelet[2048]: E0213 23:21:14.590740 2048 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 23:21:15.592000 kubelet[2048]: E0213 23:21:15.591900 2048 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 23:21:16.592457 kubelet[2048]: E0213 23:21:16.592378 2048 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"