Feb 13 23:53:18.917095 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Feb 13 18:03:41 -00 2025 Feb 13 23:53:18.917126 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=a8740cbac5121ade856b040634ad9badacd879298c24f899668a59d96c178b13 Feb 13 23:53:18.917137 kernel: BIOS-provided physical RAM map: Feb 13 23:53:18.917148 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Feb 13 23:53:18.917155 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Feb 13 23:53:18.917162 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Feb 13 23:53:18.917171 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable Feb 13 23:53:18.917180 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved Feb 13 23:53:18.917187 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Feb 13 23:53:18.917195 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Feb 13 23:53:18.917203 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Feb 13 23:53:18.917210 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Feb 13 23:53:18.917220 kernel: NX (Execute Disable) protection: active Feb 13 23:53:18.917228 kernel: APIC: Static calls initialized Feb 13 23:53:18.917238 kernel: SMBIOS 2.8 present. Feb 13 23:53:18.917247 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014 Feb 13 23:53:18.917256 kernel: Hypervisor detected: KVM Feb 13 23:53:18.917267 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Feb 13 23:53:18.917275 kernel: kvm-clock: using sched offset of 3774493402 cycles Feb 13 23:53:18.917285 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Feb 13 23:53:18.917294 kernel: tsc: Detected 2294.576 MHz processor Feb 13 23:53:18.917303 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Feb 13 23:53:18.917312 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Feb 13 23:53:18.917321 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000 Feb 13 23:53:18.917329 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Feb 13 23:53:18.917338 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Feb 13 23:53:18.917349 kernel: Using GB pages for direct mapping Feb 13 23:53:18.917358 kernel: ACPI: Early table checksum verification disabled Feb 13 23:53:18.917367 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS ) Feb 13 23:53:18.917376 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 23:53:18.917384 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 23:53:18.917393 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 23:53:18.917402 kernel: ACPI: FACS 0x000000007FFDFD40 000040 Feb 13 23:53:18.917411 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 23:53:18.917419 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 
00000001 BXPC 00000001) Feb 13 23:53:18.917431 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 23:53:18.917439 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 23:53:18.917448 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480] Feb 13 23:53:18.917456 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c] Feb 13 23:53:18.917465 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f] Feb 13 23:53:18.917478 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570] Feb 13 23:53:18.917487 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740] Feb 13 23:53:18.917499 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c] Feb 13 23:53:18.917508 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4] Feb 13 23:53:18.917517 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Feb 13 23:53:18.917526 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Feb 13 23:53:18.917536 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0 Feb 13 23:53:18.917545 kernel: SRAT: PXM 0 -> APIC 0x03 -> Node 0 Feb 13 23:53:18.917554 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0 Feb 13 23:53:18.917563 kernel: SRAT: PXM 0 -> APIC 0x05 -> Node 0 Feb 13 23:53:18.917574 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0 Feb 13 23:53:18.917583 kernel: SRAT: PXM 0 -> APIC 0x07 -> Node 0 Feb 13 23:53:18.917592 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0 Feb 13 23:53:18.917602 kernel: SRAT: PXM 0 -> APIC 0x09 -> Node 0 Feb 13 23:53:18.917611 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0 Feb 13 23:53:18.917620 kernel: SRAT: PXM 0 -> APIC 0x0b -> Node 0 Feb 13 23:53:18.917629 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0 Feb 13 23:53:18.917638 kernel: SRAT: PXM 0 -> APIC 0x0d -> Node 0 Feb 13 23:53:18.917647 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0 Feb 13 23:53:18.917658 kernel: SRAT: PXM 0 -> APIC 0x0f -> Node 0 Feb 13 23:53:18.917667 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Feb 13 23:53:18.917684 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Feb 13 23:53:18.917693 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug Feb 13 23:53:18.917703 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00000000-0x7ffdbfff] Feb 13 23:53:18.917712 kernel: NODE_DATA(0) allocated [mem 0x7ffd6000-0x7ffdbfff] Feb 13 23:53:18.917721 kernel: Zone ranges: Feb 13 23:53:18.917731 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Feb 13 23:53:18.917740 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff] Feb 13 23:53:18.917751 kernel: Normal empty Feb 13 23:53:18.917760 kernel: Movable zone start for each node Feb 13 23:53:18.917770 kernel: Early memory node ranges Feb 13 23:53:18.917779 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Feb 13 23:53:18.917788 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff] Feb 13 23:53:18.917797 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff] Feb 13 23:53:18.917806 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Feb 13 23:53:18.917815 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Feb 13 23:53:18.917825 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges Feb 13 23:53:18.917834 kernel: ACPI: PM-Timer IO Port: 0x608 Feb 13 23:53:18.917846 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Feb 13 23:53:18.917855 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, 
GSI 0-23 Feb 13 23:53:18.917865 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Feb 13 23:53:18.917874 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Feb 13 23:53:18.917884 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Feb 13 23:53:18.917893 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Feb 13 23:53:18.917902 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Feb 13 23:53:18.917911 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Feb 13 23:53:18.917920 kernel: TSC deadline timer available Feb 13 23:53:18.917932 kernel: smpboot: Allowing 16 CPUs, 14 hotplug CPUs Feb 13 23:53:18.917941 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Feb 13 23:53:18.917950 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Feb 13 23:53:18.917960 kernel: Booting paravirtualized kernel on KVM Feb 13 23:53:18.917969 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Feb 13 23:53:18.917978 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1 Feb 13 23:53:18.918026 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u262144 Feb 13 23:53:18.918035 kernel: pcpu-alloc: s197032 r8192 d32344 u262144 alloc=1*2097152 Feb 13 23:53:18.918044 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15 Feb 13 23:53:18.918056 kernel: kvm-guest: PV spinlocks enabled Feb 13 23:53:18.918070 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Feb 13 23:53:18.918081 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=a8740cbac5121ade856b040634ad9badacd879298c24f899668a59d96c178b13 Feb 13 23:53:18.918091 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Feb 13 23:53:18.918100 kernel: random: crng init done Feb 13 23:53:18.918109 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Feb 13 23:53:18.918130 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Feb 13 23:53:18.918139 kernel: Fallback order for Node 0: 0 Feb 13 23:53:18.918151 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515804 Feb 13 23:53:18.918160 kernel: Policy zone: DMA32 Feb 13 23:53:18.918170 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Feb 13 23:53:18.918179 kernel: software IO TLB: area num 16. Feb 13 23:53:18.918189 kernel: Memory: 1901536K/2096616K available (12288K kernel code, 2301K rwdata, 22728K rodata, 42840K init, 2352K bss, 194820K reserved, 0K cma-reserved) Feb 13 23:53:18.918198 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1 Feb 13 23:53:18.918208 kernel: ftrace: allocating 37921 entries in 149 pages Feb 13 23:53:18.918217 kernel: ftrace: allocated 149 pages with 4 groups Feb 13 23:53:18.918226 kernel: Dynamic Preempt: voluntary Feb 13 23:53:18.918238 kernel: rcu: Preemptible hierarchical RCU implementation. Feb 13 23:53:18.918248 kernel: rcu: RCU event tracing is enabled. Feb 13 23:53:18.918258 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16. 
Feb 13 23:53:18.918267 kernel: Trampoline variant of Tasks RCU enabled. Feb 13 23:53:18.918277 kernel: Rude variant of Tasks RCU enabled. Feb 13 23:53:18.918297 kernel: Tracing variant of Tasks RCU enabled. Feb 13 23:53:18.918307 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Feb 13 23:53:18.918316 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16 Feb 13 23:53:18.918326 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16 Feb 13 23:53:18.918336 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Feb 13 23:53:18.918345 kernel: Console: colour VGA+ 80x25 Feb 13 23:53:18.918355 kernel: printk: console [tty0] enabled Feb 13 23:53:18.918367 kernel: printk: console [ttyS0] enabled Feb 13 23:53:18.918377 kernel: ACPI: Core revision 20230628 Feb 13 23:53:18.918387 kernel: APIC: Switch to symmetric I/O mode setup Feb 13 23:53:18.918397 kernel: x2apic enabled Feb 13 23:53:18.918407 kernel: APIC: Switched APIC routing to: physical x2apic Feb 13 23:53:18.918420 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2113312ac93, max_idle_ns: 440795244843 ns Feb 13 23:53:18.918430 kernel: Calibrating delay loop (skipped) preset value.. 4589.15 BogoMIPS (lpj=2294576) Feb 13 23:53:18.918440 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Feb 13 23:53:18.918451 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Feb 13 23:53:18.918460 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Feb 13 23:53:18.918470 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Feb 13 23:53:18.918480 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Feb 13 23:53:18.918489 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Feb 13 23:53:18.918499 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS Feb 13 23:53:18.918509 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Feb 13 23:53:18.918522 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT Feb 13 23:53:18.918532 kernel: RETBleed: Mitigation: Enhanced IBRS Feb 13 23:53:18.918541 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Feb 13 23:53:18.918551 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Feb 13 23:53:18.918561 kernel: TAA: Mitigation: Clear CPU buffers Feb 13 23:53:18.918570 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Feb 13 23:53:18.918580 kernel: GDS: Unknown: Dependent on hypervisor status Feb 13 23:53:18.918590 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Feb 13 23:53:18.918600 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Feb 13 23:53:18.918609 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Feb 13 23:53:18.918619 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Feb 13 23:53:18.918631 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Feb 13 23:53:18.918641 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Feb 13 23:53:18.918651 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers' Feb 13 23:53:18.918661 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Feb 13 23:53:18.918671 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Feb 13 23:53:18.918688 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Feb 
13 23:53:18.918697 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Feb 13 23:53:18.918707 kernel: x86/fpu: xstate_offset[9]: 2432, xstate_sizes[9]: 8 Feb 13 23:53:18.918717 kernel: x86/fpu: Enabled xstate features 0x2e7, context size is 2440 bytes, using 'compacted' format. Feb 13 23:53:18.918727 kernel: Freeing SMP alternatives memory: 32K Feb 13 23:53:18.918736 kernel: pid_max: default: 32768 minimum: 301 Feb 13 23:53:18.918749 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Feb 13 23:53:18.918758 kernel: landlock: Up and running. Feb 13 23:53:18.918768 kernel: SELinux: Initializing. Feb 13 23:53:18.918778 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Feb 13 23:53:18.918788 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Feb 13 23:53:18.918798 kernel: smpboot: CPU0: Intel Xeon Processor (Cascadelake) (family: 0x6, model: 0x55, stepping: 0x6) Feb 13 23:53:18.918808 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Feb 13 23:53:18.918818 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Feb 13 23:53:18.918828 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Feb 13 23:53:18.918838 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Feb 13 23:53:18.918850 kernel: signal: max sigframe size: 3632 Feb 13 23:53:18.918860 kernel: rcu: Hierarchical SRCU implementation. Feb 13 23:53:18.918870 kernel: rcu: Max phase no-delay instances is 400. Feb 13 23:53:18.918880 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Feb 13 23:53:18.918890 kernel: smp: Bringing up secondary CPUs ... Feb 13 23:53:18.918900 kernel: smpboot: x86: Booting SMP configuration: Feb 13 23:53:18.918910 kernel: .... 
node #0, CPUs: #1 Feb 13 23:53:18.918919 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1 Feb 13 23:53:18.918929 kernel: smp: Brought up 1 node, 2 CPUs Feb 13 23:53:18.918942 kernel: smpboot: Max logical packages: 16 Feb 13 23:53:18.918952 kernel: smpboot: Total of 2 processors activated (9178.30 BogoMIPS) Feb 13 23:53:18.918962 kernel: devtmpfs: initialized Feb 13 23:53:18.918972 kernel: x86/mm: Memory block size: 128MB Feb 13 23:53:18.918981 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Feb 13 23:53:18.919004 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear) Feb 13 23:53:18.919014 kernel: pinctrl core: initialized pinctrl subsystem Feb 13 23:53:18.919024 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Feb 13 23:53:18.919034 kernel: audit: initializing netlink subsys (disabled) Feb 13 23:53:18.919047 kernel: audit: type=2000 audit(1739490798.058:1): state=initialized audit_enabled=0 res=1 Feb 13 23:53:18.919056 kernel: thermal_sys: Registered thermal governor 'step_wise' Feb 13 23:53:18.919066 kernel: thermal_sys: Registered thermal governor 'user_space' Feb 13 23:53:18.919076 kernel: cpuidle: using governor menu Feb 13 23:53:18.919086 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Feb 13 23:53:18.919096 kernel: dca service started, version 1.12.1 Feb 13 23:53:18.919106 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Feb 13 23:53:18.919116 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Feb 13 23:53:18.919126 kernel: PCI: Using configuration type 1 for base access Feb 13 23:53:18.919139 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Feb 13 23:53:18.919149 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Feb 13 23:53:18.919159 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Feb 13 23:53:18.919169 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Feb 13 23:53:18.919179 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Feb 13 23:53:18.919189 kernel: ACPI: Added _OSI(Module Device) Feb 13 23:53:18.919198 kernel: ACPI: Added _OSI(Processor Device) Feb 13 23:53:18.919208 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Feb 13 23:53:18.919218 kernel: ACPI: Added _OSI(Processor Aggregator Device) Feb 13 23:53:18.919230 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Feb 13 23:53:18.919240 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Feb 13 23:53:18.919250 kernel: ACPI: Interpreter enabled Feb 13 23:53:18.919260 kernel: ACPI: PM: (supports S0 S5) Feb 13 23:53:18.919270 kernel: ACPI: Using IOAPIC for interrupt routing Feb 13 23:53:18.919280 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Feb 13 23:53:18.919290 kernel: PCI: Using E820 reservations for host bridge windows Feb 13 23:53:18.919300 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Feb 13 23:53:18.919310 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Feb 13 23:53:18.919465 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Feb 13 23:53:18.919566 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Feb 13 23:53:18.919656 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Feb 13 23:53:18.919669 kernel: PCI host bridge to bus 0000:00 Feb 13 23:53:18.919773 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Feb 13 23:53:18.919855 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Feb 13 23:53:18.919940 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Feb 13 23:53:18.920033 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window] Feb 13 23:53:18.920115 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Feb 13 23:53:18.920195 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window] Feb 13 23:53:18.920276 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Feb 13 23:53:18.920380 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Feb 13 23:53:18.920484 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000 Feb 13 23:53:18.920581 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfa000000-0xfbffffff pref] Feb 13 23:53:18.920672 kernel: pci 0000:00:01.0: reg 0x14: [mem 0xfea50000-0xfea50fff] Feb 13 23:53:18.920770 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea40000-0xfea4ffff pref] Feb 13 23:53:18.920861 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Feb 13 23:53:18.920960 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 Feb 13 23:53:18.921076 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea51000-0xfea51fff] Feb 13 23:53:18.921179 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 Feb 13 23:53:18.921275 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea52000-0xfea52fff] Feb 13 23:53:18.921371 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 Feb 13 23:53:18.921462 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea53000-0xfea53fff] Feb 13 23:53:18.921560 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 Feb 13 
23:53:18.921652 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea54000-0xfea54fff] Feb 13 23:53:18.921758 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 Feb 13 23:53:18.921855 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea55000-0xfea55fff] Feb 13 23:53:18.921952 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 Feb 13 23:53:18.922054 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea56000-0xfea56fff] Feb 13 23:53:18.922150 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 Feb 13 23:53:18.922242 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea57000-0xfea57fff] Feb 13 23:53:18.922342 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 Feb 13 23:53:18.922437 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea58000-0xfea58fff] Feb 13 23:53:18.922533 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 Feb 13 23:53:18.922623 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0c0-0xc0df] Feb 13 23:53:18.922721 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfea59000-0xfea59fff] Feb 13 23:53:18.922812 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref] Feb 13 23:53:18.922901 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfea00000-0xfea3ffff pref] Feb 13 23:53:18.923014 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 Feb 13 23:53:18.923111 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f] Feb 13 23:53:18.923200 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfea5a000-0xfea5afff] Feb 13 23:53:18.923291 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfd004000-0xfd007fff 64bit pref] Feb 13 23:53:18.923387 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Feb 13 23:53:18.923480 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Feb 13 23:53:18.923576 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Feb 13 23:53:18.923670 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0e0-0xc0ff] Feb 13 23:53:18.923816 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea5b000-0xfea5bfff] Feb 13 23:53:18.923912 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Feb 13 23:53:18.924053 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Feb 13 23:53:18.924163 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400 Feb 13 23:53:18.924257 kernel: pci 0000:01:00.0: reg 0x10: [mem 0xfda00000-0xfda000ff 64bit] Feb 13 23:53:18.924352 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Feb 13 23:53:18.924442 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff] Feb 13 23:53:18.924531 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Feb 13 23:53:18.924628 kernel: pci_bus 0000:02: extended config space not accessible Feb 13 23:53:18.924746 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000 Feb 13 23:53:18.924843 kernel: pci 0000:02:01.0: reg 0x10: [mem 0xfd800000-0xfd80000f] Feb 13 23:53:18.924935 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Feb 13 23:53:18.925088 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff] Feb 13 23:53:18.925187 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330 Feb 13 23:53:18.925278 kernel: pci 0000:03:00.0: reg 0x10: [mem 0xfe800000-0xfe803fff 64bit] Feb 13 23:53:18.925369 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Feb 13 23:53:18.925459 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff] Feb 13 23:53:18.925548 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Feb 13 23:53:18.925663 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00 Feb 13 
23:53:18.925770 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref] Feb 13 23:53:18.925861 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Feb 13 23:53:18.925952 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff] Feb 13 23:53:18.926066 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Feb 13 23:53:18.926173 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Feb 13 23:53:18.926264 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff] Feb 13 23:53:18.926355 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Feb 13 23:53:18.926446 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Feb 13 23:53:18.926539 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff] Feb 13 23:53:18.926628 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Feb 13 23:53:18.926727 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Feb 13 23:53:18.926817 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff] Feb 13 23:53:18.926906 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Feb 13 23:53:18.927035 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Feb 13 23:53:18.927126 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff] Feb 13 23:53:18.927215 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Feb 13 23:53:18.927310 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Feb 13 23:53:18.927401 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff] Feb 13 23:53:18.927492 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Feb 13 23:53:18.927505 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Feb 13 23:53:18.927516 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Feb 13 23:53:18.927526 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Feb 13 23:53:18.927536 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Feb 13 23:53:18.927546 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Feb 13 23:53:18.927556 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Feb 13 23:53:18.927570 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Feb 13 23:53:18.927580 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Feb 13 23:53:18.927591 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Feb 13 23:53:18.927601 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Feb 13 23:53:18.927611 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Feb 13 23:53:18.927622 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Feb 13 23:53:18.927632 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Feb 13 23:53:18.927642 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Feb 13 23:53:18.927652 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Feb 13 23:53:18.927665 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Feb 13 23:53:18.927675 kernel: iommu: Default domain type: Translated Feb 13 23:53:18.927692 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Feb 13 23:53:18.927702 kernel: PCI: Using ACPI for IRQ routing Feb 13 23:53:18.927712 kernel: PCI: pci_cache_line_size set to 64 bytes Feb 13 23:53:18.927722 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Feb 13 23:53:18.927732 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff] Feb 13 23:53:18.927826 kernel: pci 0000:00:01.0: vgaarb: setting as boot 
VGA device Feb 13 23:53:18.927919 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Feb 13 23:53:18.928016 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Feb 13 23:53:18.928030 kernel: vgaarb: loaded Feb 13 23:53:18.928040 kernel: clocksource: Switched to clocksource kvm-clock Feb 13 23:53:18.928051 kernel: VFS: Disk quotas dquot_6.6.0 Feb 13 23:53:18.928061 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Feb 13 23:53:18.928071 kernel: pnp: PnP ACPI init Feb 13 23:53:18.928167 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved Feb 13 23:53:18.928185 kernel: pnp: PnP ACPI: found 5 devices Feb 13 23:53:18.928195 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Feb 13 23:53:18.928205 kernel: NET: Registered PF_INET protocol family Feb 13 23:53:18.928216 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Feb 13 23:53:18.928226 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Feb 13 23:53:18.928236 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Feb 13 23:53:18.928246 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Feb 13 23:53:18.928256 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Feb 13 23:53:18.928266 kernel: TCP: Hash tables configured (established 16384 bind 16384) Feb 13 23:53:18.928279 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Feb 13 23:53:18.928289 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Feb 13 23:53:18.928299 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Feb 13 23:53:18.928309 kernel: NET: Registered PF_XDP protocol family Feb 13 23:53:18.928401 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000 Feb 13 23:53:18.928495 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Feb 13 23:53:18.928586 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Feb 13 23:53:18.928689 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000 Feb 13 23:53:18.928783 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Feb 13 23:53:18.928889 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Feb 13 23:53:18.928996 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Feb 13 23:53:18.929133 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Feb 13 23:53:18.929226 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff] Feb 13 23:53:18.929322 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff] Feb 13 23:53:18.929412 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff] Feb 13 23:53:18.929505 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff] Feb 13 23:53:18.929598 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff] Feb 13 23:53:18.929697 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff] Feb 13 23:53:18.929789 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff] Feb 13 23:53:18.929880 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff] Feb 13 23:53:18.929980 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Feb 13 23:53:18.930135 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff] Feb 13 
23:53:18.930232 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Feb 13 23:53:18.930324 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff] Feb 13 23:53:18.930413 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff] Feb 13 23:53:18.930507 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Feb 13 23:53:18.930598 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Feb 13 23:53:18.930702 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff] Feb 13 23:53:18.930794 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff] Feb 13 23:53:18.930885 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Feb 13 23:53:18.930977 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Feb 13 23:53:18.931162 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff] Feb 13 23:53:18.931253 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff] Feb 13 23:53:18.931343 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Feb 13 23:53:18.931433 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Feb 13 23:53:18.931528 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff] Feb 13 23:53:18.931623 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff] Feb 13 23:53:18.931721 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Feb 13 23:53:18.931813 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Feb 13 23:53:18.931903 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff] Feb 13 23:53:18.932000 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff] Feb 13 23:53:18.932101 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Feb 13 23:53:18.932192 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Feb 13 23:53:18.932284 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff] Feb 13 23:53:18.932373 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff] Feb 13 23:53:18.932469 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Feb 13 23:53:18.932560 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Feb 13 23:53:18.932651 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff] Feb 13 23:53:18.932751 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff] Feb 13 23:53:18.932842 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Feb 13 23:53:18.932945 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Feb 13 23:53:18.933071 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff] Feb 13 23:53:18.933172 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff] Feb 13 23:53:18.933265 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Feb 13 23:53:18.933355 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Feb 13 23:53:18.933438 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Feb 13 23:53:18.933520 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Feb 13 23:53:18.933602 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window] Feb 13 23:53:18.933698 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Feb 13 23:53:18.933779 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window] Feb 13 23:53:18.933875 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff] Feb 13 23:53:18.933962 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff] Feb 13 23:53:18.934079 kernel: pci_bus 0000:01: resource 2 [mem 
0xfce00000-0xfcffffff 64bit pref] Feb 13 23:53:18.934174 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff] Feb 13 23:53:18.934270 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff] Feb 13 23:53:18.934363 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff] Feb 13 23:53:18.934447 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref] Feb 13 23:53:18.934538 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff] Feb 13 23:53:18.934624 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff] Feb 13 23:53:18.934717 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref] Feb 13 23:53:18.934814 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff] Feb 13 23:53:18.934900 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff] Feb 13 23:53:18.935000 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref] Feb 13 23:53:18.935101 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff] Feb 13 23:53:18.935188 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff] Feb 13 23:53:18.935272 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref] Feb 13 23:53:18.935362 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff] Feb 13 23:53:18.935449 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff] Feb 13 23:53:18.935534 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref] Feb 13 23:53:18.935629 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff] Feb 13 23:53:18.935735 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff] Feb 13 23:53:18.935821 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref] Feb 13 23:53:18.935911 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff] Feb 13 23:53:18.936029 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff] Feb 13 23:53:18.936121 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref] Feb 13 23:53:18.936141 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Feb 13 23:53:18.936152 kernel: PCI: CLS 0 bytes, default 64 Feb 13 23:53:18.936163 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Feb 13 23:53:18.936175 kernel: software IO TLB: mapped [mem 0x0000000079800000-0x000000007d800000] (64MB) Feb 13 23:53:18.936185 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Feb 13 23:53:18.936197 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2113312ac93, max_idle_ns: 440795244843 ns Feb 13 23:53:18.936208 kernel: Initialise system trusted keyrings Feb 13 23:53:18.936219 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Feb 13 23:53:18.936230 kernel: Key type asymmetric registered Feb 13 23:53:18.936243 kernel: Asymmetric key parser 'x509' registered Feb 13 23:53:18.936254 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Feb 13 23:53:18.936264 kernel: io scheduler mq-deadline registered Feb 13 23:53:18.936275 kernel: io scheduler kyber registered Feb 13 23:53:18.936286 kernel: io scheduler bfq registered Feb 13 23:53:18.936383 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Feb 13 23:53:18.936476 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Feb 13 23:53:18.936568 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 23:53:18.936666 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Feb 13 23:53:18.936766 kernel: pcieport 
0000:00:02.1: AER: enabled with IRQ 25 Feb 13 23:53:18.936857 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 23:53:18.936950 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Feb 13 23:53:18.937777 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Feb 13 23:53:18.937887 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 23:53:18.938026 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Feb 13 23:53:18.938125 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Feb 13 23:53:18.938216 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 23:53:18.940108 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Feb 13 23:53:18.940217 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Feb 13 23:53:18.940313 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 23:53:18.940413 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Feb 13 23:53:18.940506 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Feb 13 23:53:18.940596 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 23:53:18.940701 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Feb 13 23:53:18.940794 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Feb 13 23:53:18.940892 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 23:53:18.940999 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Feb 13 23:53:18.941093 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Feb 13 23:53:18.941185 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 23:53:18.941201 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Feb 13 23:53:18.941214 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Feb 13 23:53:18.941225 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Feb 13 23:53:18.941236 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 13 23:53:18.941251 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Feb 13 23:53:18.941262 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Feb 13 23:53:18.941273 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Feb 13 23:53:18.941284 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Feb 13 23:53:18.941386 kernel: rtc_cmos 00:03: RTC can wake from S4 Feb 13 23:53:18.941401 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Feb 13 23:53:18.941484 kernel: rtc_cmos 00:03: registered as rtc0 Feb 13 23:53:18.941570 kernel: rtc_cmos 00:03: setting system clock to 2025-02-13T23:53:18 UTC (1739490798) Feb 13 23:53:18.941659 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Feb 13 23:53:18.941672 kernel: intel_pstate: CPU model not supported Feb 13 23:53:18.941694 kernel: NET: Registered PF_INET6 protocol family Feb 13 23:53:18.941705 kernel: Segment Routing with IPv6 Feb 13 23:53:18.941716 kernel: In-situ OAM (IOAM) with IPv6 Feb 13 
23:53:18.941727 kernel: NET: Registered PF_PACKET protocol family Feb 13 23:53:18.941738 kernel: Key type dns_resolver registered Feb 13 23:53:18.941749 kernel: IPI shorthand broadcast: enabled Feb 13 23:53:18.941760 kernel: sched_clock: Marking stable (893001638, 124519558)->(1187143021, -169621825) Feb 13 23:53:18.941774 kernel: registered taskstats version 1 Feb 13 23:53:18.941784 kernel: Loading compiled-in X.509 certificates Feb 13 23:53:18.941795 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 6e17590ca2768b672aa48f3e0cedc4061febfe93' Feb 13 23:53:18.941806 kernel: Key type .fscrypt registered Feb 13 23:53:18.941817 kernel: Key type fscrypt-provisioning registered Feb 13 23:53:18.941828 kernel: ima: No TPM chip found, activating TPM-bypass! Feb 13 23:53:18.941839 kernel: ima: Allocated hash algorithm: sha1 Feb 13 23:53:18.941849 kernel: ima: No architecture policies found Feb 13 23:53:18.941860 kernel: clk: Disabling unused clocks Feb 13 23:53:18.941873 kernel: Freeing unused kernel image (initmem) memory: 42840K Feb 13 23:53:18.941884 kernel: Write protecting the kernel read-only data: 36864k Feb 13 23:53:18.941895 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K Feb 13 23:53:18.941906 kernel: Run /init as init process Feb 13 23:53:18.941917 kernel: with arguments: Feb 13 23:53:18.941927 kernel: /init Feb 13 23:53:18.943034 kernel: with environment: Feb 13 23:53:18.943047 kernel: HOME=/ Feb 13 23:53:18.943058 kernel: TERM=linux Feb 13 23:53:18.943073 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 13 23:53:18.943088 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 23:53:18.943101 systemd[1]: Detected virtualization kvm. Feb 13 23:53:18.943113 systemd[1]: Detected architecture x86-64. Feb 13 23:53:18.943124 systemd[1]: Running in initrd. Feb 13 23:53:18.943135 systemd[1]: No hostname configured, using default hostname. Feb 13 23:53:18.943146 systemd[1]: Hostname set to . Feb 13 23:53:18.943160 systemd[1]: Initializing machine ID from VM UUID. Feb 13 23:53:18.943171 systemd[1]: Queued start job for default target initrd.target. Feb 13 23:53:18.943182 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 23:53:18.943194 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 23:53:18.943206 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Feb 13 23:53:18.943217 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 23:53:18.943228 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Feb 13 23:53:18.943239 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Feb 13 23:53:18.943255 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Feb 13 23:53:18.943267 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Feb 13 23:53:18.943279 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Feb 13 23:53:18.943290 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 23:53:18.943301 systemd[1]: Reached target paths.target - Path Units. Feb 13 23:53:18.943312 systemd[1]: Reached target slices.target - Slice Units. Feb 13 23:53:18.943323 systemd[1]: Reached target swap.target - Swaps. Feb 13 23:53:18.943337 systemd[1]: Reached target timers.target - Timer Units. Feb 13 23:53:18.943348 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 23:53:18.943359 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 23:53:18.943371 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Feb 13 23:53:18.943382 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Feb 13 23:53:18.943393 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 23:53:18.943405 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 23:53:18.943416 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 23:53:18.943428 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 23:53:18.943441 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Feb 13 23:53:18.943453 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 23:53:18.943464 systemd[1]: Finished network-cleanup.service - Network Cleanup. Feb 13 23:53:18.943475 systemd[1]: Starting systemd-fsck-usr.service... Feb 13 23:53:18.943486 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 23:53:18.943497 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 23:53:18.943509 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 23:53:18.943556 systemd-journald[200]: Collecting audit messages is disabled. Feb 13 23:53:18.943585 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Feb 13 23:53:18.943596 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 23:53:18.943608 systemd[1]: Finished systemd-fsck-usr.service. Feb 13 23:53:18.943622 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 23:53:18.943635 systemd-journald[200]: Journal started Feb 13 23:53:18.943659 systemd-journald[200]: Runtime Journal (/run/log/journal/3e189cc951c6470a9fc27981f0e1f28e) is 4.7M, max 38.0M, 33.2M free. Feb 13 23:53:18.944033 systemd-modules-load[201]: Inserted module 'overlay' Feb 13 23:53:18.949008 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 23:53:18.974034 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 13 23:53:18.976130 systemd-modules-load[201]: Inserted module 'br_netfilter' Feb 13 23:53:18.998786 kernel: Bridge firewalling registered Feb 13 23:53:18.998738 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 23:53:18.999370 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 23:53:19.000425 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 23:53:19.008169 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 23:53:19.010875 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Feb 13 23:53:19.014140 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 23:53:19.017175 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 23:53:19.034122 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 23:53:19.034740 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 23:53:19.041180 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 23:53:19.042380 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 23:53:19.043573 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 23:53:19.047300 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Feb 13 23:53:19.064567 dracut-cmdline[236]: dracut-dracut-053 Feb 13 23:53:19.067134 dracut-cmdline[236]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=a8740cbac5121ade856b040634ad9badacd879298c24f899668a59d96c178b13 Feb 13 23:53:19.075186 systemd-resolved[233]: Positive Trust Anchors: Feb 13 23:53:19.075812 systemd-resolved[233]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 23:53:19.076436 systemd-resolved[233]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 23:53:19.080220 systemd-resolved[233]: Defaulting to hostname 'linux'. Feb 13 23:53:19.081914 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 23:53:19.082862 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 23:53:19.184074 kernel: SCSI subsystem initialized Feb 13 23:53:19.194186 kernel: Loading iSCSI transport class v2.0-870. Feb 13 23:53:19.206531 kernel: iscsi: registered transport (tcp) Feb 13 23:53:19.229041 kernel: iscsi: registered transport (qla4xxx) Feb 13 23:53:19.229155 kernel: QLogic iSCSI HBA Driver Feb 13 23:53:19.300969 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Feb 13 23:53:19.307099 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Feb 13 23:53:19.338255 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Feb 13 23:53:19.338310 kernel: device-mapper: uevent: version 1.0.3 Feb 13 23:53:19.339599 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Feb 13 23:53:19.385049 kernel: raid6: avx512x4 gen() 17874 MB/s Feb 13 23:53:19.402042 kernel: raid6: avx512x2 gen() 17852 MB/s Feb 13 23:53:19.419033 kernel: raid6: avx512x1 gen() 17753 MB/s Feb 13 23:53:19.436052 kernel: raid6: avx2x4 gen() 17743 MB/s Feb 13 23:53:19.453123 kernel: raid6: avx2x2 gen() 17771 MB/s Feb 13 23:53:19.470065 kernel: raid6: avx2x1 gen() 13873 MB/s Feb 13 23:53:19.470179 kernel: raid6: using algorithm avx512x4 gen() 17874 MB/s Feb 13 23:53:19.488166 kernel: raid6: .... xor() 7646 MB/s, rmw enabled Feb 13 23:53:19.488263 kernel: raid6: using avx512x2 recovery algorithm Feb 13 23:53:19.511079 kernel: xor: automatically using best checksumming function avx Feb 13 23:53:19.679027 kernel: Btrfs loaded, zoned=no, fsverity=no Feb 13 23:53:19.693037 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Feb 13 23:53:19.699180 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 23:53:19.732083 systemd-udevd[418]: Using default interface naming scheme 'v255'. Feb 13 23:53:19.737435 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 23:53:19.746250 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Feb 13 23:53:19.775220 dracut-pre-trigger[425]: rd.md=0: removing MD RAID activation Feb 13 23:53:19.811088 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 23:53:19.817108 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 23:53:19.889661 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 23:53:19.898166 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Feb 13 23:53:19.923477 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Feb 13 23:53:19.925633 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 23:53:19.927600 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 23:53:19.928569 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 23:53:19.933142 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Feb 13 23:53:19.954218 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Feb 13 23:53:19.983011 kernel: virtio_blk virtio1: 2/0/0 default/read/poll queues Feb 13 23:53:20.051397 kernel: cryptd: max_cpu_qlen set to 1000 Feb 13 23:53:20.051418 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Feb 13 23:53:20.051548 kernel: AVX2 version of gcm_enc/dec engaged. Feb 13 23:53:20.051562 kernel: AES CTR mode by8 optimization enabled Feb 13 23:53:20.051576 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Feb 13 23:53:20.051589 kernel: GPT:17805311 != 125829119 Feb 13 23:53:20.051612 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 13 23:53:20.051625 kernel: GPT:17805311 != 125829119 Feb 13 23:53:20.051637 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 13 23:53:20.051650 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 23:53:20.051662 kernel: libata version 3.00 loaded. Feb 13 23:53:20.009855 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Feb 13 23:53:20.102080 kernel: ACPI: bus type USB registered Feb 13 23:53:20.102106 kernel: usbcore: registered new interface driver usbfs Feb 13 23:53:20.102120 kernel: usbcore: registered new interface driver hub Feb 13 23:53:20.102134 kernel: usbcore: registered new device driver usb Feb 13 23:53:20.102146 kernel: BTRFS: device fsid 892c7470-7713-4b0f-880a-4c5f7bf5b72d devid 1 transid 37 /dev/vda3 scanned by (udev-worker) (473) Feb 13 23:53:20.102160 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (466) Feb 13 23:53:20.009965 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 23:53:20.104713 kernel: ahci 0000:00:1f.2: version 3.0 Feb 13 23:53:20.157905 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Feb 13 23:53:20.157926 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Feb 13 23:53:20.158208 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Feb 13 23:53:20.158335 kernel: scsi host0: ahci Feb 13 23:53:20.158460 kernel: scsi host1: ahci Feb 13 23:53:20.158573 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Feb 13 23:53:20.158710 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1 Feb 13 23:53:20.158825 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Feb 13 23:53:20.158935 kernel: scsi host2: ahci Feb 13 23:53:20.159069 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Feb 13 23:53:20.159186 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2 Feb 13 23:53:20.159296 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed Feb 13 23:53:20.159407 kernel: scsi host3: ahci Feb 13 23:53:20.159512 kernel: hub 1-0:1.0: USB hub found Feb 13 23:53:20.159656 kernel: scsi host4: ahci Feb 13 23:53:20.159764 kernel: hub 1-0:1.0: 4 ports detected Feb 13 23:53:20.159885 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Feb 13 23:53:20.160561 kernel: hub 2-0:1.0: USB hub found Feb 13 23:53:20.160743 kernel: scsi host5: ahci Feb 13 23:53:20.160865 kernel: hub 2-0:1.0: 4 ports detected Feb 13 23:53:20.161013 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 38 Feb 13 23:53:20.161029 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 38 Feb 13 23:53:20.161043 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 38 Feb 13 23:53:20.161057 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 38 Feb 13 23:53:20.161075 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 38 Feb 13 23:53:20.161088 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 38 Feb 13 23:53:20.010519 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 23:53:20.011227 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 23:53:20.011352 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 23:53:20.012079 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 23:53:20.020213 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 23:53:20.109036 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Feb 13 23:53:20.111064 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Feb 13 23:53:20.118320 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Feb 13 23:53:20.125959 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Feb 13 23:53:20.131142 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Feb 13 23:53:20.137447 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Feb 13 23:53:20.145204 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Feb 13 23:53:20.172188 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 23:53:20.177604 disk-uuid[562]: Primary Header is updated. Feb 13 23:53:20.177604 disk-uuid[562]: Secondary Entries is updated. Feb 13 23:53:20.177604 disk-uuid[562]: Secondary Header is updated. Feb 13 23:53:20.182134 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 23:53:20.187744 kernel: GPT:disk_guids don't match. Feb 13 23:53:20.187787 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 13 23:53:20.187801 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 23:53:20.192010 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 23:53:20.199134 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 23:53:20.388090 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Feb 13 23:53:20.465419 kernel: ata2: SATA link down (SStatus 0 SControl 300) Feb 13 23:53:20.465555 kernel: ata6: SATA link down (SStatus 0 SControl 300) Feb 13 23:53:20.471622 kernel: ata4: SATA link down (SStatus 0 SControl 300) Feb 13 23:53:20.471726 kernel: ata5: SATA link down (SStatus 0 SControl 300) Feb 13 23:53:20.474327 kernel: ata3: SATA link down (SStatus 0 SControl 300) Feb 13 23:53:20.475338 kernel: ata1: SATA link down (SStatus 0 SControl 300) Feb 13 23:53:20.530035 kernel: hid: raw HID events driver (C) Jiri Kosina Feb 13 23:53:20.535192 kernel: usbcore: registered new interface driver usbhid Feb 13 23:53:20.535263 kernel: usbhid: USB HID core driver Feb 13 23:53:20.541045 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input3 Feb 13 23:53:20.541141 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0 Feb 13 23:53:21.199136 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 23:53:21.200328 disk-uuid[563]: The operation has completed successfully. Feb 13 23:53:21.248096 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 13 23:53:21.248225 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Feb 13 23:53:21.253143 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Feb 13 23:53:21.258384 sh[586]: Success Feb 13 23:53:21.272015 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Feb 13 23:53:21.323730 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Feb 13 23:53:21.337222 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Feb 13 23:53:21.337859 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Feb 13 23:53:21.366287 kernel: BTRFS info (device dm-0): first mount of filesystem 892c7470-7713-4b0f-880a-4c5f7bf5b72d Feb 13 23:53:21.366356 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Feb 13 23:53:21.368770 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Feb 13 23:53:21.368879 kernel: BTRFS info (device dm-0): disabling log replay at mount time Feb 13 23:53:21.370494 kernel: BTRFS info (device dm-0): using free space tree Feb 13 23:53:21.376170 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Feb 13 23:53:21.377249 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Feb 13 23:53:21.389114 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Feb 13 23:53:21.391137 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Feb 13 23:53:21.406015 kernel: BTRFS info (device vda6): first mount of filesystem b405b664-b121-4411-9ed3-1128bc9da790 Feb 13 23:53:21.408477 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 23:53:21.408512 kernel: BTRFS info (device vda6): using free space tree Feb 13 23:53:21.412013 kernel: BTRFS info (device vda6): auto enabling async discard Feb 13 23:53:21.421277 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 13 23:53:21.423358 kernel: BTRFS info (device vda6): last unmount of filesystem b405b664-b121-4411-9ed3-1128bc9da790 Feb 13 23:53:21.428238 systemd[1]: Finished ignition-setup.service - Ignition (setup). Feb 13 23:53:21.435110 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Feb 13 23:53:21.525412 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 23:53:21.540415 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 23:53:21.550370 ignition[685]: Ignition 2.19.0 Feb 13 23:53:21.550384 ignition[685]: Stage: fetch-offline Feb 13 23:53:21.550433 ignition[685]: no configs at "/usr/lib/ignition/base.d" Feb 13 23:53:21.550447 ignition[685]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 13 23:53:21.550564 ignition[685]: parsed url from cmdline: "" Feb 13 23:53:21.550568 ignition[685]: no config URL provided Feb 13 23:53:21.550573 ignition[685]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 23:53:21.553671 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 23:53:21.550581 ignition[685]: no config at "/usr/lib/ignition/user.ign" Feb 13 23:53:21.550586 ignition[685]: failed to fetch config: resource requires networking Feb 13 23:53:21.550785 ignition[685]: Ignition finished successfully Feb 13 23:53:21.572160 systemd-networkd[773]: lo: Link UP Feb 13 23:53:21.572173 systemd-networkd[773]: lo: Gained carrier Feb 13 23:53:21.573447 systemd-networkd[773]: Enumeration completed Feb 13 23:53:21.573803 systemd-networkd[773]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 23:53:21.573807 systemd-networkd[773]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Feb 13 23:53:21.574782 systemd-networkd[773]: eth0: Link UP Feb 13 23:53:21.574785 systemd-networkd[773]: eth0: Gained carrier Feb 13 23:53:21.574793 systemd-networkd[773]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 23:53:21.575051 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 23:53:21.577318 systemd[1]: Reached target network.target - Network. Feb 13 23:53:21.585241 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Feb 13 23:53:21.599177 systemd-networkd[773]: eth0: DHCPv4 address 10.244.103.218/30, gateway 10.244.103.217 acquired from 10.244.103.217 Feb 13 23:53:21.605228 ignition[776]: Ignition 2.19.0 Feb 13 23:53:21.605242 ignition[776]: Stage: fetch Feb 13 23:53:21.605478 ignition[776]: no configs at "/usr/lib/ignition/base.d" Feb 13 23:53:21.605506 ignition[776]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 13 23:53:21.605618 ignition[776]: parsed url from cmdline: "" Feb 13 23:53:21.605622 ignition[776]: no config URL provided Feb 13 23:53:21.605628 ignition[776]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 23:53:21.605638 ignition[776]: no config at "/usr/lib/ignition/user.ign" Feb 13 23:53:21.605794 ignition[776]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Feb 13 23:53:21.606253 ignition[776]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Feb 13 23:53:21.606268 ignition[776]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... Feb 13 23:53:21.624755 ignition[776]: GET result: OK Feb 13 23:53:21.625105 ignition[776]: parsing config with SHA512: 396105a688b372ae68eda0c94f71dacc2d44d8b7c9ed911b54eb3e13cbadbaa60b64398909a5f6bc39699dc40e47c0e5549aa72f11791f38f98984ee796802dd Feb 13 23:53:21.632592 unknown[776]: fetched base config from "system" Feb 13 23:53:21.632612 unknown[776]: fetched base config from "system" Feb 13 23:53:21.633081 ignition[776]: fetch: fetch complete Feb 13 23:53:21.632622 unknown[776]: fetched user config from "openstack" Feb 13 23:53:21.633089 ignition[776]: fetch: fetch passed Feb 13 23:53:21.635701 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Feb 13 23:53:21.633157 ignition[776]: Ignition finished successfully Feb 13 23:53:21.649217 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Feb 13 23:53:21.669406 ignition[784]: Ignition 2.19.0 Feb 13 23:53:21.669419 ignition[784]: Stage: kargs Feb 13 23:53:21.669634 ignition[784]: no configs at "/usr/lib/ignition/base.d" Feb 13 23:53:21.669646 ignition[784]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 13 23:53:21.670410 ignition[784]: kargs: kargs passed Feb 13 23:53:21.672115 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Feb 13 23:53:21.670459 ignition[784]: Ignition finished successfully Feb 13 23:53:21.678150 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Feb 13 23:53:21.696971 ignition[790]: Ignition 2.19.0 Feb 13 23:53:21.697022 ignition[790]: Stage: disks Feb 13 23:53:21.697352 ignition[790]: no configs at "/usr/lib/ignition/base.d" Feb 13 23:53:21.697371 ignition[790]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 13 23:53:21.698744 ignition[790]: disks: disks passed Feb 13 23:53:21.698820 ignition[790]: Ignition finished successfully Feb 13 23:53:21.700410 systemd[1]: Finished ignition-disks.service - Ignition (disks). 
Feb 13 23:53:21.702085 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Feb 13 23:53:21.703069 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Feb 13 23:53:21.703576 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 23:53:21.704476 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 23:53:21.705307 systemd[1]: Reached target basic.target - Basic System. Feb 13 23:53:21.711340 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Feb 13 23:53:21.729399 systemd-fsck[798]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Feb 13 23:53:21.732132 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Feb 13 23:53:21.739070 systemd[1]: Mounting sysroot.mount - /sysroot... Feb 13 23:53:21.836010 kernel: EXT4-fs (vda9): mounted filesystem 85215ce4-0be3-4782-863e-8dde129924f0 r/w with ordered data mode. Quota mode: none. Feb 13 23:53:21.836516 systemd[1]: Mounted sysroot.mount - /sysroot. Feb 13 23:53:21.837434 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Feb 13 23:53:21.843066 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 23:53:21.846098 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Feb 13 23:53:21.847256 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Feb 13 23:53:21.849189 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent... Feb 13 23:53:21.850237 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 13 23:53:21.850264 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 23:53:21.854015 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (806) Feb 13 23:53:21.855767 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Feb 13 23:53:21.859100 kernel: BTRFS info (device vda6): first mount of filesystem b405b664-b121-4411-9ed3-1128bc9da790 Feb 13 23:53:21.859124 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 23:53:21.859138 kernel: BTRFS info (device vda6): using free space tree Feb 13 23:53:21.867037 kernel: BTRFS info (device vda6): auto enabling async discard Feb 13 23:53:21.867671 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Feb 13 23:53:21.871627 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Feb 13 23:53:21.931038 initrd-setup-root[834]: cut: /sysroot/etc/passwd: No such file or directory Feb 13 23:53:21.935722 initrd-setup-root[841]: cut: /sysroot/etc/group: No such file or directory Feb 13 23:53:21.942069 initrd-setup-root[849]: cut: /sysroot/etc/shadow: No such file or directory Feb 13 23:53:21.948278 initrd-setup-root[856]: cut: /sysroot/etc/gshadow: No such file or directory Feb 13 23:53:22.047485 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Feb 13 23:53:22.054073 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Feb 13 23:53:22.057142 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Feb 13 23:53:22.065005 kernel: BTRFS info (device vda6): last unmount of filesystem b405b664-b121-4411-9ed3-1128bc9da790 Feb 13 23:53:22.084699 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Feb 13 23:53:22.095524 ignition[924]: INFO : Ignition 2.19.0 Feb 13 23:53:22.095524 ignition[924]: INFO : Stage: mount Feb 13 23:53:22.097803 ignition[924]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 23:53:22.097803 ignition[924]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 13 23:53:22.097803 ignition[924]: INFO : mount: mount passed Feb 13 23:53:22.097803 ignition[924]: INFO : Ignition finished successfully Feb 13 23:53:22.098192 systemd[1]: Finished ignition-mount.service - Ignition (mount). Feb 13 23:53:22.367733 systemd[1]: sysroot-oem.mount: Deactivated successfully. Feb 13 23:53:23.607558 systemd-networkd[773]: eth0: Gained IPv6LL Feb 13 23:53:25.116477 systemd-networkd[773]: eth0: Ignoring DHCPv6 address 2a02:1348:17d:19f6:24:19ff:fef4:67da/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:17d:19f6:24:19ff:fef4:67da/64 assigned by NDisc. Feb 13 23:53:25.116495 systemd-networkd[773]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Feb 13 23:53:29.004813 coreos-metadata[808]: Feb 13 23:53:29.004 WARN failed to locate config-drive, using the metadata service API instead Feb 13 23:53:29.024742 coreos-metadata[808]: Feb 13 23:53:29.024 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Feb 13 23:53:29.038209 coreos-metadata[808]: Feb 13 23:53:29.038 INFO Fetch successful Feb 13 23:53:29.039794 coreos-metadata[808]: Feb 13 23:53:29.038 INFO wrote hostname srv-7sq2h.gb1.brightbox.com to /sysroot/etc/hostname Feb 13 23:53:29.040878 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Feb 13 23:53:29.041032 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent. Feb 13 23:53:29.049197 systemd[1]: Starting ignition-files.service - Ignition (files)... Feb 13 23:53:29.065175 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 23:53:29.078863 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (940) Feb 13 23:53:29.078932 kernel: BTRFS info (device vda6): first mount of filesystem b405b664-b121-4411-9ed3-1128bc9da790 Feb 13 23:53:29.080571 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 23:53:29.083996 kernel: BTRFS info (device vda6): using free space tree Feb 13 23:53:29.088006 kernel: BTRFS info (device vda6): auto enabling async discard Feb 13 23:53:29.089661 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Feb 13 23:53:29.125026 ignition[957]: INFO : Ignition 2.19.0 Feb 13 23:53:29.125026 ignition[957]: INFO : Stage: files Feb 13 23:53:29.125026 ignition[957]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 23:53:29.125026 ignition[957]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 13 23:53:29.127203 ignition[957]: DEBUG : files: compiled without relabeling support, skipping Feb 13 23:53:29.127203 ignition[957]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 13 23:53:29.127203 ignition[957]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 13 23:53:29.129252 ignition[957]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 13 23:53:29.129864 ignition[957]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 13 23:53:29.129864 ignition[957]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 13 23:53:29.129648 unknown[957]: wrote ssh authorized keys file for user: core Feb 13 23:53:29.135405 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" Feb 13 23:53:29.135405 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" Feb 13 23:53:29.135405 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 23:53:29.135405 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 23:53:29.135405 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Feb 13 23:53:29.135405 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Feb 13 23:53:29.135405 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Feb 13 23:53:29.135405 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1 Feb 13 23:53:29.677727 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Feb 13 23:53:30.915374 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Feb 13 23:53:30.919284 ignition[957]: INFO : files: createResultFile: createFiles: op(7): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 13 23:53:30.920011 ignition[957]: INFO : files: createResultFile: createFiles: op(7): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 13 23:53:30.920011 ignition[957]: INFO : files: files passed Feb 13 23:53:30.920011 ignition[957]: INFO : Ignition finished successfully Feb 13 23:53:30.921703 systemd[1]: Finished ignition-files.service - Ignition (files). Feb 13 23:53:30.930202 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... 
Feb 13 23:53:30.932174 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Feb 13 23:53:30.936577 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 13 23:53:30.936705 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Feb 13 23:53:30.951706 initrd-setup-root-after-ignition[986]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 23:53:30.951706 initrd-setup-root-after-ignition[986]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Feb 13 23:53:30.953893 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 23:53:30.955622 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 23:53:30.956634 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Feb 13 23:53:30.970403 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Feb 13 23:53:30.998086 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 13 23:53:30.998259 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Feb 13 23:53:30.999583 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Feb 13 23:53:31.000312 systemd[1]: Reached target initrd.target - Initrd Default Target. Feb 13 23:53:31.001228 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Feb 13 23:53:31.006132 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Feb 13 23:53:31.025064 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 23:53:31.030128 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Feb 13 23:53:31.041061 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Feb 13 23:53:31.041606 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 23:53:31.042193 systemd[1]: Stopped target timers.target - Timer Units. Feb 13 23:53:31.043931 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 13 23:53:31.044064 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 23:53:31.046411 systemd[1]: Stopped target initrd.target - Initrd Default Target. Feb 13 23:53:31.047176 systemd[1]: Stopped target basic.target - Basic System. Feb 13 23:53:31.048642 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Feb 13 23:53:31.050197 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 23:53:31.051037 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Feb 13 23:53:31.051853 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Feb 13 23:53:31.052762 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 23:53:31.053745 systemd[1]: Stopped target sysinit.target - System Initialization. Feb 13 23:53:31.054711 systemd[1]: Stopped target local-fs.target - Local File Systems. Feb 13 23:53:31.056552 systemd[1]: Stopped target swap.target - Swaps. Feb 13 23:53:31.058164 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 13 23:53:31.058557 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Feb 13 23:53:31.060586 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. 
Feb 13 23:53:31.061902 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 23:53:31.063036 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Feb 13 23:53:31.063256 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 23:53:31.064380 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 13 23:53:31.064637 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Feb 13 23:53:31.066066 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 13 23:53:31.066348 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 23:53:31.067556 systemd[1]: ignition-files.service: Deactivated successfully. Feb 13 23:53:31.067798 systemd[1]: Stopped ignition-files.service - Ignition (files). Feb 13 23:53:31.074197 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Feb 13 23:53:31.075171 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 13 23:53:31.075708 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 23:53:31.079185 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Feb 13 23:53:31.079587 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 13 23:53:31.079699 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 23:53:31.081212 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 13 23:53:31.081324 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 23:53:31.088249 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 13 23:53:31.088760 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Feb 13 23:53:31.093136 ignition[1010]: INFO : Ignition 2.19.0 Feb 13 23:53:31.093136 ignition[1010]: INFO : Stage: umount Feb 13 23:53:31.102137 ignition[1010]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 23:53:31.102137 ignition[1010]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 13 23:53:31.102137 ignition[1010]: INFO : umount: umount passed Feb 13 23:53:31.102137 ignition[1010]: INFO : Ignition finished successfully Feb 13 23:53:31.098423 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 13 23:53:31.098557 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Feb 13 23:53:31.099740 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 13 23:53:31.099875 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Feb 13 23:53:31.100892 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 13 23:53:31.100955 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Feb 13 23:53:31.101802 systemd[1]: ignition-fetch.service: Deactivated successfully. Feb 13 23:53:31.101869 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Feb 13 23:53:31.102732 systemd[1]: Stopped target network.target - Network. Feb 13 23:53:31.104321 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 13 23:53:31.104392 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 23:53:31.105117 systemd[1]: Stopped target paths.target - Path Units. Feb 13 23:53:31.106140 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 13 23:53:31.106312 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Feb 13 23:53:31.112049 systemd[1]: Stopped target slices.target - Slice Units. Feb 13 23:53:31.113053 systemd[1]: Stopped target sockets.target - Socket Units. Feb 13 23:53:31.113440 systemd[1]: iscsid.socket: Deactivated successfully. Feb 13 23:53:31.113481 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 23:53:31.114358 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 23:53:31.114395 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 23:53:31.114833 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 13 23:53:31.114879 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Feb 13 23:53:31.115767 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Feb 13 23:53:31.115809 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Feb 13 23:53:31.117417 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Feb 13 23:53:31.119097 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Feb 13 23:53:31.121127 systemd-networkd[773]: eth0: DHCPv6 lease lost Feb 13 23:53:31.123209 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 13 23:53:31.123752 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 13 23:53:31.123860 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Feb 13 23:53:31.128100 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 13 23:53:31.128171 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Feb 13 23:53:31.135218 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Feb 13 23:53:31.135958 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 13 23:53:31.136018 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 23:53:31.139338 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 23:53:31.140909 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 13 23:53:31.141051 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Feb 13 23:53:31.150944 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 23:53:31.151056 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 23:53:31.151774 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 13 23:53:31.151825 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Feb 13 23:53:31.152580 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Feb 13 23:53:31.152622 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 23:53:31.154523 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 13 23:53:31.154672 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 23:53:31.155935 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 13 23:53:31.156029 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Feb 13 23:53:31.156635 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 13 23:53:31.156710 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Feb 13 23:53:31.158442 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 13 23:53:31.158526 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Feb 13 23:53:31.159571 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. 
Feb 13 23:53:31.159604 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 23:53:31.160281 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 13 23:53:31.160321 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Feb 13 23:53:31.161292 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 13 23:53:31.161329 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Feb 13 23:53:31.162033 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 23:53:31.162073 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 23:53:31.163075 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 23:53:31.163158 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Feb 13 23:53:31.170257 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Feb 13 23:53:31.172032 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 13 23:53:31.172086 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 23:53:31.173588 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 23:53:31.173629 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 23:53:31.178964 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 13 23:53:31.179087 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Feb 13 23:53:31.180316 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Feb 13 23:53:31.187138 systemd[1]: Starting initrd-switch-root.service - Switch Root... Feb 13 23:53:31.199812 systemd[1]: Switching root. Feb 13 23:53:31.231289 systemd-journald[200]: Journal stopped Feb 13 23:53:32.257203 systemd-journald[200]: Received SIGTERM from PID 1 (systemd). Feb 13 23:53:32.257311 kernel: SELinux: policy capability network_peer_controls=1 Feb 13 23:53:32.257332 kernel: SELinux: policy capability open_perms=1 Feb 13 23:53:32.257349 kernel: SELinux: policy capability extended_socket_class=1 Feb 13 23:53:32.257366 kernel: SELinux: policy capability always_check_network=0 Feb 13 23:53:32.257378 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 13 23:53:32.257391 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 13 23:53:32.257404 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 13 23:53:32.257420 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 13 23:53:32.257433 kernel: audit: type=1403 audit(1739490811.355:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 13 23:53:32.257448 systemd[1]: Successfully loaded SELinux policy in 41.929ms. Feb 13 23:53:32.257471 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.257ms. Feb 13 23:53:32.257485 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 23:53:32.257500 systemd[1]: Detected virtualization kvm. Feb 13 23:53:32.257514 systemd[1]: Detected architecture x86-64. Feb 13 23:53:32.257527 systemd[1]: Detected first boot. Feb 13 23:53:32.257545 systemd[1]: Hostname set to . Feb 13 23:53:32.257561 systemd[1]: Initializing machine ID from VM UUID. 
Feb 13 23:53:32.257576 zram_generator::config[1052]: No configuration found. Feb 13 23:53:32.257591 systemd[1]: Populated /etc with preset unit settings. Feb 13 23:53:32.257604 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 13 23:53:32.257621 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Feb 13 23:53:32.257635 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 13 23:53:32.257649 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Feb 13 23:53:32.257663 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Feb 13 23:53:32.257681 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Feb 13 23:53:32.257694 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Feb 13 23:53:32.257708 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Feb 13 23:53:32.257723 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Feb 13 23:53:32.257745 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Feb 13 23:53:32.257763 systemd[1]: Created slice user.slice - User and Session Slice. Feb 13 23:53:32.257776 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 23:53:32.257790 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 23:53:32.257804 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Feb 13 23:53:32.257817 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Feb 13 23:53:32.257831 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Feb 13 23:53:32.257844 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 23:53:32.257858 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Feb 13 23:53:32.257877 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 23:53:32.257891 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Feb 13 23:53:32.257908 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Feb 13 23:53:32.257922 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Feb 13 23:53:32.257937 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Feb 13 23:53:32.257950 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 23:53:32.257963 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 23:53:32.257980 systemd[1]: Reached target slices.target - Slice Units. Feb 13 23:53:32.259038 systemd[1]: Reached target swap.target - Swaps. Feb 13 23:53:32.259056 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Feb 13 23:53:32.259071 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Feb 13 23:53:32.259084 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 23:53:32.259098 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 23:53:32.259113 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 23:53:32.259126 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. 
Feb 13 23:53:32.259141 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Feb 13 23:53:32.259167 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Feb 13 23:53:32.259184 systemd[1]: Mounting media.mount - External Media Directory... Feb 13 23:53:32.259198 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 23:53:32.259211 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Feb 13 23:53:32.259225 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Feb 13 23:53:32.259241 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Feb 13 23:53:32.259255 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 13 23:53:32.259273 systemd[1]: Reached target machines.target - Containers. Feb 13 23:53:32.259287 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Feb 13 23:53:32.259301 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 23:53:32.259315 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 23:53:32.259329 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Feb 13 23:53:32.259342 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 23:53:32.259360 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 23:53:32.259377 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 23:53:32.259390 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Feb 13 23:53:32.259404 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 23:53:32.259418 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 13 23:53:32.259432 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 13 23:53:32.259446 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Feb 13 23:53:32.259459 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 13 23:53:32.259473 systemd[1]: Stopped systemd-fsck-usr.service. Feb 13 23:53:32.259493 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 23:53:32.259507 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 23:53:32.259520 kernel: fuse: init (API version 7.39) Feb 13 23:53:32.259533 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Feb 13 23:53:32.259547 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Feb 13 23:53:32.259560 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 23:53:32.259574 systemd[1]: verity-setup.service: Deactivated successfully. Feb 13 23:53:32.259590 systemd[1]: Stopped verity-setup.service. Feb 13 23:53:32.259604 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 23:53:32.259624 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. 
Feb 13 23:53:32.259637 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Feb 13 23:53:32.259651 kernel: ACPI: bus type drm_connector registered Feb 13 23:53:32.259665 systemd[1]: Mounted media.mount - External Media Directory. Feb 13 23:53:32.259678 kernel: loop: module loaded Feb 13 23:53:32.259694 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Feb 13 23:53:32.259708 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Feb 13 23:53:32.259722 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Feb 13 23:53:32.259744 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 23:53:32.259758 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 13 23:53:32.259772 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Feb 13 23:53:32.259786 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 23:53:32.259799 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 23:53:32.259813 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 23:53:32.259830 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 23:53:32.259846 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 23:53:32.259860 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 23:53:32.259874 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 13 23:53:32.259887 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Feb 13 23:53:32.259904 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 23:53:32.259918 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 23:53:32.259952 systemd-journald[1135]: Collecting audit messages is disabled. Feb 13 23:53:32.259979 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 23:53:32.260855 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Feb 13 23:53:32.260874 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Feb 13 23:53:32.260893 systemd-journald[1135]: Journal started Feb 13 23:53:32.260922 systemd-journald[1135]: Runtime Journal (/run/log/journal/3e189cc951c6470a9fc27981f0e1f28e) is 4.7M, max 38.0M, 33.2M free. Feb 13 23:53:31.955052 systemd[1]: Queued start job for default target multi-user.target. Feb 13 23:53:31.981123 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Feb 13 23:53:31.981649 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 13 23:53:32.263022 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 23:53:32.279533 systemd[1]: Reached target network-pre.target - Preparation for Network. Feb 13 23:53:32.292905 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Feb 13 23:53:32.300121 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Feb 13 23:53:32.300583 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 13 23:53:32.300618 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 23:53:32.301975 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Feb 13 23:53:32.312123 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... 
Feb 13 23:53:32.314162 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Feb 13 23:53:32.315798 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 23:53:32.320101 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Feb 13 23:53:32.325180 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Feb 13 23:53:32.325652 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 23:53:32.328795 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Feb 13 23:53:32.329261 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 23:53:32.332841 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 23:53:32.347182 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Feb 13 23:53:32.351024 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Feb 13 23:53:32.351830 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Feb 13 23:53:32.352323 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Feb 13 23:53:32.353370 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Feb 13 23:53:32.367351 systemd-journald[1135]: Time spent on flushing to /var/log/journal/3e189cc951c6470a9fc27981f0e1f28e is 104.483ms for 1136 entries. Feb 13 23:53:32.367351 systemd-journald[1135]: System Journal (/var/log/journal/3e189cc951c6470a9fc27981f0e1f28e) is 8.0M, max 584.8M, 576.8M free. Feb 13 23:53:32.498867 systemd-journald[1135]: Received client request to flush runtime journal. Feb 13 23:53:32.498920 kernel: loop0: detected capacity change from 0 to 142488 Feb 13 23:53:32.498946 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Feb 13 23:53:32.498966 kernel: loop1: detected capacity change from 0 to 140768 Feb 13 23:53:32.374194 systemd[1]: Starting systemd-sysusers.service - Create System Users... Feb 13 23:53:32.393275 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Feb 13 23:53:32.394271 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Feb 13 23:53:32.406135 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Feb 13 23:53:32.406864 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 23:53:32.417210 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Feb 13 23:53:32.445639 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 23:53:32.472135 udevadm[1194]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Feb 13 23:53:32.475847 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 13 23:53:32.477505 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Feb 13 23:53:32.502051 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Feb 13 23:53:32.511210 systemd[1]: Finished systemd-sysusers.service - Create System Users. 
Feb 13 23:53:32.520027 kernel: loop2: detected capacity change from 0 to 205544 Feb 13 23:53:32.521123 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 23:53:32.558017 kernel: loop3: detected capacity change from 0 to 8 Feb 13 23:53:32.569559 systemd-tmpfiles[1205]: ACLs are not supported, ignoring. Feb 13 23:53:32.570059 systemd-tmpfiles[1205]: ACLs are not supported, ignoring. Feb 13 23:53:32.581322 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 23:53:32.584018 kernel: loop4: detected capacity change from 0 to 142488 Feb 13 23:53:32.609768 kernel: loop5: detected capacity change from 0 to 140768 Feb 13 23:53:32.628027 kernel: loop6: detected capacity change from 0 to 205544 Feb 13 23:53:32.644014 kernel: loop7: detected capacity change from 0 to 8 Feb 13 23:53:32.645237 (sd-merge)[1209]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'. Feb 13 23:53:32.645731 (sd-merge)[1209]: Merged extensions into '/usr'. Feb 13 23:53:32.659930 systemd[1]: Reloading requested from client PID 1184 ('systemd-sysext') (unit systemd-sysext.service)... Feb 13 23:53:32.659947 systemd[1]: Reloading... Feb 13 23:53:32.807014 zram_generator::config[1236]: No configuration found. Feb 13 23:53:32.858336 ldconfig[1179]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 13 23:53:32.968520 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 23:53:33.019528 systemd[1]: Reloading finished in 359 ms. Feb 13 23:53:33.044849 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Feb 13 23:53:33.045707 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Feb 13 23:53:33.055320 systemd[1]: Starting ensure-sysext.service... Feb 13 23:53:33.057224 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 23:53:33.072251 systemd[1]: Reloading requested from client PID 1292 ('systemctl') (unit ensure-sysext.service)... Feb 13 23:53:33.072391 systemd[1]: Reloading... Feb 13 23:53:33.108008 systemd-tmpfiles[1293]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 13 23:53:33.108367 systemd-tmpfiles[1293]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Feb 13 23:53:33.109257 systemd-tmpfiles[1293]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 13 23:53:33.109533 systemd-tmpfiles[1293]: ACLs are not supported, ignoring. Feb 13 23:53:33.109598 systemd-tmpfiles[1293]: ACLs are not supported, ignoring. Feb 13 23:53:33.116089 systemd-tmpfiles[1293]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 23:53:33.116099 systemd-tmpfiles[1293]: Skipping /boot Feb 13 23:53:33.138592 systemd-tmpfiles[1293]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 23:53:33.138605 systemd-tmpfiles[1293]: Skipping /boot Feb 13 23:53:33.163016 zram_generator::config[1319]: No configuration found. 
Feb 13 23:53:33.330826 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 23:53:33.379929 systemd[1]: Reloading finished in 307 ms. Feb 13 23:53:33.396588 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Feb 13 23:53:33.397569 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 23:53:33.413250 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Feb 13 23:53:33.426473 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Feb 13 23:53:33.432450 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Feb 13 23:53:33.442155 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 23:53:33.446258 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 23:53:33.452198 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Feb 13 23:53:33.468086 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Feb 13 23:53:33.470435 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 23:53:33.470631 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 23:53:33.473487 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 23:53:33.477245 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 23:53:33.481285 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 23:53:33.481926 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 23:53:33.482056 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 23:53:33.486073 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 23:53:33.486266 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 23:53:33.486412 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 23:53:33.486498 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 23:53:33.489371 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 23:53:33.489588 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 23:53:33.497240 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 23:53:33.498180 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 23:53:33.498332 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Feb 13 23:53:33.504336 systemd[1]: Finished ensure-sysext.service. Feb 13 23:53:33.517024 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Feb 13 23:53:33.517817 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Feb 13 23:53:33.518537 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 23:53:33.518702 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 23:53:33.542343 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 23:53:33.543754 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 23:53:33.544972 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 23:53:33.550315 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Feb 13 23:53:33.553444 systemd[1]: Starting systemd-update-done.service - Update is Completed... Feb 13 23:53:33.555583 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 23:53:33.557061 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 23:53:33.559361 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Feb 13 23:53:33.560245 augenrules[1409]: No rules Feb 13 23:53:33.562116 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 23:53:33.562278 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 23:53:33.563630 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 23:53:33.563710 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 23:53:33.565407 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Feb 13 23:53:33.572204 systemd-udevd[1386]: Using default interface naming scheme 'v255'. Feb 13 23:53:33.590090 systemd[1]: Finished systemd-update-done.service - Update is Completed. Feb 13 23:53:33.591230 systemd[1]: Started systemd-userdbd.service - User Database Manager. Feb 13 23:53:33.610329 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 23:53:33.620146 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 23:53:33.658837 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Feb 13 23:53:33.659432 systemd[1]: Reached target time-set.target - System Time Set. Feb 13 23:53:33.720574 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Feb 13 23:53:33.740132 systemd-resolved[1382]: Positive Trust Anchors: Feb 13 23:53:33.740460 systemd-resolved[1382]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 23:53:33.740556 systemd-resolved[1382]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 23:53:33.745649 systemd-networkd[1426]: lo: Link UP Feb 13 23:53:33.745657 systemd-networkd[1426]: lo: Gained carrier Feb 13 23:53:33.746885 systemd-networkd[1426]: Enumeration completed Feb 13 23:53:33.746992 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 23:53:33.752159 systemd-networkd[1426]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 23:53:33.752275 systemd-networkd[1426]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 23:53:33.754155 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Feb 13 23:53:33.756317 systemd-networkd[1426]: eth0: Link UP Feb 13 23:53:33.756804 systemd-resolved[1382]: Using system hostname 'srv-7sq2h.gb1.brightbox.com'. Feb 13 23:53:33.756889 systemd-networkd[1426]: eth0: Gained carrier Feb 13 23:53:33.756907 systemd-networkd[1426]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 23:53:33.766013 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1432) Feb 13 23:53:33.767098 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 23:53:33.773417 systemd[1]: Reached target network.target - Network. Feb 13 23:53:33.773866 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 23:53:33.777079 systemd-networkd[1426]: eth0: DHCPv4 address 10.244.103.218/30, gateway 10.244.103.217 acquired from 10.244.103.217 Feb 13 23:53:33.777904 systemd-timesyncd[1400]: Network configuration changed, trying to establish connection. Feb 13 23:53:33.820242 systemd-networkd[1426]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 23:53:33.850394 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Feb 13 23:53:33.864700 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Feb 13 23:53:33.866843 kernel: ACPI: button: Power Button [PWRF] Feb 13 23:53:33.873007 kernel: mousedev: PS/2 mouse device common for all mice Feb 13 23:53:33.875010 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input5 Feb 13 23:53:33.876195 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... 
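One detail worth noticing in the DHCP entry above: the lease is a /30, which leaves exactly two usable addresses, the gateway and this host. A quick check with the Python standard library, using only the values copied from the systemd-networkd entries:

    import ipaddress

    # Lease and gateway as reported by systemd-networkd above.
    lease = ipaddress.ip_interface("10.244.103.218/30")
    gateway = ipaddress.ip_address("10.244.103.217")

    print(lease.network)                # 10.244.103.216/30
    print(list(lease.network.hosts()))  # [10.244.103.217, 10.244.103.218]
    print(gateway in lease.network)     # True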
Feb 13 23:53:33.898947 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Feb 13 23:53:33.900398 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Feb 13 23:53:33.900566 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Feb 13 23:53:33.902492 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Feb 13 23:53:33.944257 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 23:53:34.095160 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Feb 13 23:53:34.120120 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 23:53:34.130538 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Feb 13 23:53:34.164318 lvm[1464]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 23:53:34.193187 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Feb 13 23:53:34.196095 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 23:53:34.196953 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 23:53:34.198139 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Feb 13 23:53:34.199043 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 13 23:53:34.200227 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 13 23:53:34.201166 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 13 23:53:34.201982 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Feb 13 23:53:34.202768 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 23:53:34.202820 systemd[1]: Reached target paths.target - Path Units. Feb 13 23:53:34.203476 systemd[1]: Reached target timers.target - Timer Units. Feb 13 23:53:34.205665 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 13 23:53:34.207785 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 13 23:53:34.217855 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 23:53:34.228286 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Feb 13 23:53:34.232213 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 23:53:34.233199 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 23:53:34.233935 systemd[1]: Reached target basic.target - Basic System. Feb 13 23:53:34.234756 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 13 23:53:34.234806 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 23:53:34.237547 lvm[1468]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 23:53:34.241134 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 23:53:34.246159 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Feb 13 23:53:34.248176 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 23:53:34.256104 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... 
Feb 13 23:53:34.258176 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 13 23:53:34.259398 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 23:53:34.262152 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 13 23:53:34.265150 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 23:53:34.268308 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 13 23:53:34.278292 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 13 23:53:34.279328 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 13 23:53:34.279886 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 23:53:34.281942 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 23:53:34.284137 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 23:53:34.286319 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 13 23:53:34.294978 jq[1472]: false Feb 13 23:53:34.300511 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 23:53:34.301049 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 23:53:34.313816 (ntainerd)[1485]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 23:53:34.328300 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 23:53:34.328511 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Feb 13 23:53:34.336914 dbus-daemon[1471]: [system] SELinux support is enabled Feb 13 23:53:34.337757 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 13 23:53:34.343274 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 23:53:34.343312 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 23:53:34.345044 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 23:53:34.345071 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Feb 13 23:53:34.354018 jq[1481]: true Feb 13 23:53:34.356225 dbus-daemon[1471]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1426 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Feb 13 23:53:34.356893 dbus-daemon[1471]: [system] Successfully activated service 'org.freedesktop.systemd1' Feb 13 23:53:34.370175 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... 
Feb 13 23:53:34.371794 update_engine[1480]: I20250213 23:53:34.370460 1480 main.cc:92] Flatcar Update Engine starting Feb 13 23:53:34.373598 extend-filesystems[1473]: Found loop4 Feb 13 23:53:34.373598 extend-filesystems[1473]: Found loop5 Feb 13 23:53:34.373598 extend-filesystems[1473]: Found loop6 Feb 13 23:53:34.373598 extend-filesystems[1473]: Found loop7 Feb 13 23:53:34.373598 extend-filesystems[1473]: Found vda Feb 13 23:53:34.373598 extend-filesystems[1473]: Found vda1 Feb 13 23:53:34.373598 extend-filesystems[1473]: Found vda2 Feb 13 23:53:34.373598 extend-filesystems[1473]: Found vda3 Feb 13 23:53:34.373598 extend-filesystems[1473]: Found usr Feb 13 23:53:34.373598 extend-filesystems[1473]: Found vda4 Feb 13 23:53:34.373598 extend-filesystems[1473]: Found vda6 Feb 13 23:53:34.373598 extend-filesystems[1473]: Found vda7 Feb 13 23:53:34.373598 extend-filesystems[1473]: Found vda9 Feb 13 23:53:34.373598 extend-filesystems[1473]: Checking size of /dev/vda9 Feb 13 23:53:34.385570 systemd[1]: Started update-engine.service - Update Engine. Feb 13 23:53:34.386007 update_engine[1480]: I20250213 23:53:34.385686 1480 update_check_scheduler.cc:74] Next update check in 7m30s Feb 13 23:53:34.394190 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 23:53:34.410621 extend-filesystems[1473]: Resized partition /dev/vda9 Feb 13 23:53:34.423504 extend-filesystems[1507]: resize2fs 1.47.1 (20-May-2024) Feb 13 23:53:34.427173 jq[1498]: true Feb 13 23:53:34.434158 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 23:53:34.436000 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 13 23:53:34.444012 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks Feb 13 23:53:34.455021 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1432) Feb 13 23:53:34.556674 locksmithd[1504]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 23:53:34.597020 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Feb 13 23:53:34.611153 extend-filesystems[1507]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 13 23:53:34.611153 extend-filesystems[1507]: old_desc_blocks = 1, new_desc_blocks = 8 Feb 13 23:53:34.611153 extend-filesystems[1507]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Feb 13 23:53:34.617262 extend-filesystems[1473]: Resized filesystem in /dev/vda9 Feb 13 23:53:34.613931 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 23:53:34.614178 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 23:53:34.617050 systemd-logind[1479]: Watching system buttons on /dev/input/event2 (Power Button) Feb 13 23:53:34.617080 systemd-logind[1479]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 13 23:53:34.619336 systemd-logind[1479]: New seat seat0. Feb 13 23:53:34.623941 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 23:53:34.654048 bash[1535]: Updated "/home/core/.ssh/authorized_keys" Feb 13 23:53:34.657059 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 23:53:34.667276 systemd[1]: Starting sshkeys.service... Feb 13 23:53:34.673853 dbus-daemon[1471]: [system] Successfully activated service 'org.freedesktop.hostname1' Feb 13 23:53:34.674363 systemd[1]: Started systemd-hostnamed.service - Hostname Service. 
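The extend-filesystems and EXT4 entries above record an online grow of /dev/vda9 from 1617920 to 15121403 blocks of 4 KiB. A back-of-the-envelope conversion of those logged figures (nothing here beyond the numbers already in the log):

    # Block counts and the 4 KiB block size are taken from the resize messages above.
    BLOCK_SIZE = 4096
    old_blocks, new_blocks = 1_617_920, 15_121_403

    def to_gib(blocks: int) -> float:
        return blocks * BLOCK_SIZE / 2**30

    print(f"before: {to_gib(old_blocks):.2f} GiB")  # ~6.17 GiB
    print(f"after:  {to_gib(new_blocks):.2f} GiB")  # ~57.68 GiB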
Feb 13 23:53:34.675100 dbus-daemon[1471]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.8' (uid=0 pid=1495 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Feb 13 23:53:34.687621 systemd[1]: Starting polkit.service - Authorization Manager... Feb 13 23:53:34.703585 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Feb 13 23:53:34.717458 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Feb 13 23:53:34.723331 polkitd[1538]: Started polkitd version 121 Feb 13 23:53:34.732192 polkitd[1538]: Loading rules from directory /etc/polkit-1/rules.d Feb 13 23:53:34.732254 polkitd[1538]: Loading rules from directory /usr/share/polkit-1/rules.d Feb 13 23:53:34.742018 polkitd[1538]: Finished loading, compiling and executing 2 rules Feb 13 23:53:34.743758 dbus-daemon[1471]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Feb 13 23:53:34.743938 systemd[1]: Started polkit.service - Authorization Manager. Feb 13 23:53:34.745841 polkitd[1538]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Feb 13 23:53:34.768725 systemd-hostnamed[1495]: Hostname set to (static) Feb 13 23:53:34.780979 containerd[1485]: time="2025-02-13T23:53:34.780296059Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Feb 13 23:53:34.818895 containerd[1485]: time="2025-02-13T23:53:34.818782820Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 23:53:34.822293 containerd[1485]: time="2025-02-13T23:53:34.822238825Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 23:53:34.822293 containerd[1485]: time="2025-02-13T23:53:34.822278923Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 23:53:34.822293 containerd[1485]: time="2025-02-13T23:53:34.822297533Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 23:53:34.822504 containerd[1485]: time="2025-02-13T23:53:34.822480061Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 23:53:34.822614 containerd[1485]: time="2025-02-13T23:53:34.822511472Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 23:53:34.822678 containerd[1485]: time="2025-02-13T23:53:34.822616131Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 23:53:34.822678 containerd[1485]: time="2025-02-13T23:53:34.822632748Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 23:53:34.822841 containerd[1485]: time="2025-02-13T23:53:34.822812409Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 23:53:34.822841 containerd[1485]: time="2025-02-13T23:53:34.822836911Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 23:53:34.822962 containerd[1485]: time="2025-02-13T23:53:34.822851618Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 23:53:34.822962 containerd[1485]: time="2025-02-13T23:53:34.822892454Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 23:53:34.823106 containerd[1485]: time="2025-02-13T23:53:34.823002788Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 23:53:34.823268 containerd[1485]: time="2025-02-13T23:53:34.823233599Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 23:53:34.823396 containerd[1485]: time="2025-02-13T23:53:34.823370773Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 23:53:34.823396 containerd[1485]: time="2025-02-13T23:53:34.823393798Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 23:53:34.823513 containerd[1485]: time="2025-02-13T23:53:34.823475987Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 23:53:34.823587 containerd[1485]: time="2025-02-13T23:53:34.823520121Z" level=info msg="metadata content store policy set" policy=shared Feb 13 23:53:34.829842 containerd[1485]: time="2025-02-13T23:53:34.829793494Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 23:53:34.829842 containerd[1485]: time="2025-02-13T23:53:34.829855676Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 23:53:34.830097 containerd[1485]: time="2025-02-13T23:53:34.829876434Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 23:53:34.830097 containerd[1485]: time="2025-02-13T23:53:34.829893081Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 23:53:34.830097 containerd[1485]: time="2025-02-13T23:53:34.829913978Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 23:53:34.830097 containerd[1485]: time="2025-02-13T23:53:34.830092204Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 23:53:34.830413 containerd[1485]: time="2025-02-13T23:53:34.830384185Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 23:53:34.830545 containerd[1485]: time="2025-02-13T23:53:34.830521671Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." 
type=io.containerd.runtime.v2 Feb 13 23:53:34.830545 containerd[1485]: time="2025-02-13T23:53:34.830545995Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 23:53:34.830545 containerd[1485]: time="2025-02-13T23:53:34.830569448Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 23:53:34.830545 containerd[1485]: time="2025-02-13T23:53:34.830585704Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 23:53:34.830872 containerd[1485]: time="2025-02-13T23:53:34.830608698Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 23:53:34.830872 containerd[1485]: time="2025-02-13T23:53:34.830624929Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 23:53:34.830872 containerd[1485]: time="2025-02-13T23:53:34.830640139Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 23:53:34.830872 containerd[1485]: time="2025-02-13T23:53:34.830655477Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 23:53:34.830872 containerd[1485]: time="2025-02-13T23:53:34.830669498Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 23:53:34.830872 containerd[1485]: time="2025-02-13T23:53:34.830683146Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 23:53:34.830872 containerd[1485]: time="2025-02-13T23:53:34.830696695Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 23:53:34.830872 containerd[1485]: time="2025-02-13T23:53:34.830718340Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 23:53:34.830872 containerd[1485]: time="2025-02-13T23:53:34.830733239Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 23:53:34.830872 containerd[1485]: time="2025-02-13T23:53:34.830746484Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 23:53:34.830872 containerd[1485]: time="2025-02-13T23:53:34.830760575Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 23:53:34.830872 containerd[1485]: time="2025-02-13T23:53:34.830778589Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 23:53:34.830872 containerd[1485]: time="2025-02-13T23:53:34.830795902Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 23:53:34.830872 containerd[1485]: time="2025-02-13T23:53:34.830809572Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 23:53:34.831923 containerd[1485]: time="2025-02-13T23:53:34.830823312Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 23:53:34.831923 containerd[1485]: time="2025-02-13T23:53:34.830837452Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." 
type=io.containerd.grpc.v1 Feb 13 23:53:34.831923 containerd[1485]: time="2025-02-13T23:53:34.830852158Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 23:53:34.831923 containerd[1485]: time="2025-02-13T23:53:34.830865318Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 23:53:34.831923 containerd[1485]: time="2025-02-13T23:53:34.830879613Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 23:53:34.831923 containerd[1485]: time="2025-02-13T23:53:34.830895013Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 23:53:34.831923 containerd[1485]: time="2025-02-13T23:53:34.830912017Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 23:53:34.831923 containerd[1485]: time="2025-02-13T23:53:34.830933657Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 23:53:34.831923 containerd[1485]: time="2025-02-13T23:53:34.830946646Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 23:53:34.831923 containerd[1485]: time="2025-02-13T23:53:34.830968918Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 23:53:34.831923 containerd[1485]: time="2025-02-13T23:53:34.831061609Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 23:53:34.831923 containerd[1485]: time="2025-02-13T23:53:34.831082300Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 23:53:34.831923 containerd[1485]: time="2025-02-13T23:53:34.831095019Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 23:53:34.832530 containerd[1485]: time="2025-02-13T23:53:34.831109365Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 23:53:34.832530 containerd[1485]: time="2025-02-13T23:53:34.831121076Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 23:53:34.832530 containerd[1485]: time="2025-02-13T23:53:34.831141793Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 23:53:34.832530 containerd[1485]: time="2025-02-13T23:53:34.831160044Z" level=info msg="NRI interface is disabled by configuration." Feb 13 23:53:34.832530 containerd[1485]: time="2025-02-13T23:53:34.831172486Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 13 23:53:34.832750 containerd[1485]: time="2025-02-13T23:53:34.831505570Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 23:53:34.832750 containerd[1485]: time="2025-02-13T23:53:34.831571903Z" level=info msg="Connect containerd service" Feb 13 23:53:34.832750 containerd[1485]: time="2025-02-13T23:53:34.831645981Z" level=info msg="using legacy CRI server" Feb 13 23:53:34.832750 containerd[1485]: time="2025-02-13T23:53:34.831663062Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 23:53:34.832750 containerd[1485]: time="2025-02-13T23:53:34.831788601Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 23:53:34.837302 containerd[1485]: time="2025-02-13T23:53:34.837230998Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 23:53:34.840026 
containerd[1485]: time="2025-02-13T23:53:34.837607025Z" level=info msg="Start subscribing containerd event" Feb 13 23:53:34.840026 containerd[1485]: time="2025-02-13T23:53:34.837770641Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 23:53:34.840026 containerd[1485]: time="2025-02-13T23:53:34.837780690Z" level=info msg="Start recovering state" Feb 13 23:53:34.840026 containerd[1485]: time="2025-02-13T23:53:34.838035227Z" level=info msg="Start event monitor" Feb 13 23:53:34.840026 containerd[1485]: time="2025-02-13T23:53:34.838065481Z" level=info msg="Start snapshots syncer" Feb 13 23:53:34.840026 containerd[1485]: time="2025-02-13T23:53:34.838097405Z" level=info msg="Start cni network conf syncer for default" Feb 13 23:53:34.840026 containerd[1485]: time="2025-02-13T23:53:34.838115268Z" level=info msg="Start streaming server" Feb 13 23:53:34.840026 containerd[1485]: time="2025-02-13T23:53:34.837825648Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 23:53:34.840026 containerd[1485]: time="2025-02-13T23:53:34.838489366Z" level=info msg="containerd successfully booted in 0.059172s" Feb 13 23:53:34.838597 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 23:53:34.888902 sshd_keygen[1497]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 23:53:34.927146 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 23:53:34.932189 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 23:53:34.938427 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 23:53:34.941440 systemd[1]: Started sshd@0-10.244.103.218:22-147.75.109.163:48836.service - OpenSSH per-connection server daemon (147.75.109.163:48836). Feb 13 23:53:34.959296 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 23:53:34.959524 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 23:53:34.968325 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 23:53:34.981235 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 23:53:34.988371 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 23:53:34.992582 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Feb 13 23:53:34.993926 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 23:53:34.999425 systemd-networkd[1426]: eth0: Gained IPv6LL Feb 13 23:53:35.000713 systemd-timesyncd[1400]: Network configuration changed, trying to establish connection. Feb 13 23:53:35.002208 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 23:53:35.003978 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 23:53:35.010233 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 23:53:35.012491 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 23:53:35.037095 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 23:53:35.800561 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Feb 13 23:53:35.811461 (kubelet)[1591]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 23:53:35.855295 sshd[1563]: Accepted publickey for core from 147.75.109.163 port 48836 ssh2: RSA SHA256:slnQpsdd5IjGSkOiaC+U57sWYutUdIqrcNAPolCJlHM Feb 13 23:53:35.861115 sshd[1563]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 23:53:35.876079 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 23:53:35.883363 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 23:53:35.890269 systemd-logind[1479]: New session 1 of user core. Feb 13 23:53:35.900246 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 23:53:35.914332 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 23:53:35.928074 (systemd)[1594]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 23:53:36.030598 systemd[1594]: Queued start job for default target default.target. Feb 13 23:53:36.035484 systemd[1594]: Created slice app.slice - User Application Slice. Feb 13 23:53:36.035516 systemd[1594]: Reached target paths.target - Paths. Feb 13 23:53:36.035530 systemd[1594]: Reached target timers.target - Timers. Feb 13 23:53:36.038119 systemd[1594]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 23:53:36.051852 systemd[1594]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 23:53:36.051979 systemd[1594]: Reached target sockets.target - Sockets. Feb 13 23:53:36.052016 systemd[1594]: Reached target basic.target - Basic System. Feb 13 23:53:36.052208 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 23:53:36.053938 systemd[1594]: Reached target default.target - Main User Target. Feb 13 23:53:36.054011 systemd[1594]: Startup finished in 116ms. Feb 13 23:53:36.060223 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 23:53:36.463080 kubelet[1591]: E0213 23:53:36.462883 1591 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 23:53:36.466277 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 23:53:36.466451 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 23:53:36.466817 systemd[1]: kubelet.service: Consumed 1.161s CPU time. Feb 13 23:53:36.511839 systemd-timesyncd[1400]: Network configuration changed, trying to establish connection. Feb 13 23:53:36.513411 systemd-networkd[1426]: eth0: Ignoring DHCPv6 address 2a02:1348:17d:19f6:24:19ff:fef4:67da/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:17d:19f6:24:19ff:fef4:67da/64 assigned by NDisc. Feb 13 23:53:36.513443 systemd-networkd[1426]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Feb 13 23:53:36.695405 systemd[1]: Started sshd@1-10.244.103.218:22-147.75.109.163:48852.service - OpenSSH per-connection server daemon (147.75.109.163:48852). 
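The sshd entries above identify the accepted key only by its SHA256 fingerprint. OpenSSH derives that string as the unpadded base64 of a SHA-256 digest over the raw key blob; a small sketch of reproducing it, where the authorized_keys path comes from the update-ssh-keys entries in this log but the assumption that its first entry is the key sshd matched is mine:

    import base64, hashlib

    # Path as written by update-ssh-keys elsewhere in this log; which entry in the
    # file corresponds to the fingerprint sshd printed is an assumption.
    with open("/home/core/.ssh/authorized_keys") as f:
        key_blob = f.readline().split()[1]  # base64 body of the first key

    digest = hashlib.sha256(base64.b64decode(key_blob)).digest()
    print("SHA256:" + base64.b64encode(digest).decode().rstrip("="))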
Feb 13 23:53:37.587597 sshd[1613]: Accepted publickey for core from 147.75.109.163 port 48852 ssh2: RSA SHA256:slnQpsdd5IjGSkOiaC+U57sWYutUdIqrcNAPolCJlHM Feb 13 23:53:37.591067 sshd[1613]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 23:53:37.601753 systemd-logind[1479]: New session 2 of user core. Feb 13 23:53:37.610231 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 23:53:37.752585 systemd-timesyncd[1400]: Network configuration changed, trying to establish connection. Feb 13 23:53:38.207193 sshd[1613]: pam_unix(sshd:session): session closed for user core Feb 13 23:53:38.214968 systemd-logind[1479]: Session 2 logged out. Waiting for processes to exit. Feb 13 23:53:38.216329 systemd[1]: sshd@1-10.244.103.218:22-147.75.109.163:48852.service: Deactivated successfully. Feb 13 23:53:38.219531 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 23:53:38.222856 systemd-logind[1479]: Removed session 2. Feb 13 23:53:38.373685 systemd[1]: Started sshd@2-10.244.103.218:22-147.75.109.163:48858.service - OpenSSH per-connection server daemon (147.75.109.163:48858). Feb 13 23:53:39.267946 sshd[1622]: Accepted publickey for core from 147.75.109.163 port 48858 ssh2: RSA SHA256:slnQpsdd5IjGSkOiaC+U57sWYutUdIqrcNAPolCJlHM Feb 13 23:53:39.270641 sshd[1622]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 23:53:39.278763 systemd-logind[1479]: New session 3 of user core. Feb 13 23:53:39.289211 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 23:53:39.888602 sshd[1622]: pam_unix(sshd:session): session closed for user core Feb 13 23:53:39.892922 systemd[1]: sshd@2-10.244.103.218:22-147.75.109.163:48858.service: Deactivated successfully. Feb 13 23:53:39.895235 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 23:53:39.897009 systemd-logind[1479]: Session 3 logged out. Waiting for processes to exit. Feb 13 23:53:39.897928 systemd-logind[1479]: Removed session 3. Feb 13 23:53:40.041460 login[1571]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 13 23:53:40.048432 login[1572]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 13 23:53:40.052225 systemd-logind[1479]: New session 4 of user core. Feb 13 23:53:40.062478 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 23:53:40.065314 systemd-logind[1479]: New session 5 of user core. Feb 13 23:53:40.070279 systemd[1]: Started session-5.scope - Session 5 of User core. 
Feb 13 23:53:41.364189 coreos-metadata[1470]: Feb 13 23:53:41.364 WARN failed to locate config-drive, using the metadata service API instead Feb 13 23:53:41.382907 coreos-metadata[1470]: Feb 13 23:53:41.382 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Feb 13 23:53:41.388686 coreos-metadata[1470]: Feb 13 23:53:41.388 INFO Fetch failed with 404: resource not found Feb 13 23:53:41.388861 coreos-metadata[1470]: Feb 13 23:53:41.388 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Feb 13 23:53:41.389497 coreos-metadata[1470]: Feb 13 23:53:41.389 INFO Fetch successful Feb 13 23:53:41.389641 coreos-metadata[1470]: Feb 13 23:53:41.389 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Feb 13 23:53:41.401587 coreos-metadata[1470]: Feb 13 23:53:41.401 INFO Fetch successful Feb 13 23:53:41.401836 coreos-metadata[1470]: Feb 13 23:53:41.401 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Feb 13 23:53:41.416538 coreos-metadata[1470]: Feb 13 23:53:41.416 INFO Fetch successful Feb 13 23:53:41.416969 coreos-metadata[1470]: Feb 13 23:53:41.416 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Feb 13 23:53:41.431922 coreos-metadata[1470]: Feb 13 23:53:41.431 INFO Fetch successful Feb 13 23:53:41.432256 coreos-metadata[1470]: Feb 13 23:53:41.432 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Feb 13 23:53:41.447963 coreos-metadata[1470]: Feb 13 23:53:41.447 INFO Fetch successful Feb 13 23:53:41.498432 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Feb 13 23:53:41.499331 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 23:53:41.813634 coreos-metadata[1540]: Feb 13 23:53:41.813 WARN failed to locate config-drive, using the metadata service API instead Feb 13 23:53:41.833276 coreos-metadata[1540]: Feb 13 23:53:41.833 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Feb 13 23:53:41.863081 coreos-metadata[1540]: Feb 13 23:53:41.862 INFO Fetch successful Feb 13 23:53:41.863081 coreos-metadata[1540]: Feb 13 23:53:41.863 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Feb 13 23:53:41.903274 coreos-metadata[1540]: Feb 13 23:53:41.903 INFO Fetch successful Feb 13 23:53:41.905086 unknown[1540]: wrote ssh authorized keys file for user: core Feb 13 23:53:41.933617 update-ssh-keys[1663]: Updated "/home/core/.ssh/authorized_keys" Feb 13 23:53:41.934439 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Feb 13 23:53:41.938030 systemd[1]: Finished sshkeys.service. Feb 13 23:53:41.938982 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 23:53:41.942067 systemd[1]: Startup finished in 1.045s (kernel) + 12.641s (initrd) + 10.626s (userspace) = 24.314s. Feb 13 23:53:46.645673 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 23:53:46.655385 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 23:53:46.792436 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
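The coreos-metadata agent above falls back from config-drive to the link-local metadata service and fetches hostname, instance-id, instance-type, local-ipv4 and public-ipv4. A rough standard-library equivalent; it only works from inside the instance, and the endpoint list is just the subset seen in the log:

    import urllib.request

    # 169.254.169.254 is the link-local metadata service the agent fell back to.
    BASE = "http://169.254.169.254/latest/meta-data/"
    for item in ("hostname", "instance-id", "instance-type", "local-ipv4", "public-ipv4"):
        with urllib.request.urlopen(BASE + item, timeout=2) as resp:
            print(item, "=", resp.read().decode().strip())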
Feb 13 23:53:46.807865 (kubelet)[1675]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 23:53:46.867733 kubelet[1675]: E0213 23:53:46.867436 1675 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 23:53:46.871915 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 23:53:46.872137 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 23:53:50.048695 systemd[1]: Started sshd@3-10.244.103.218:22-147.75.109.163:46538.service - OpenSSH per-connection server daemon (147.75.109.163:46538). Feb 13 23:53:51.319127 sshd[1683]: Accepted publickey for core from 147.75.109.163 port 46538 ssh2: RSA SHA256:slnQpsdd5IjGSkOiaC+U57sWYutUdIqrcNAPolCJlHM Feb 13 23:53:51.322657 sshd[1683]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 23:53:51.335196 systemd-logind[1479]: New session 6 of user core. Feb 13 23:53:51.350291 systemd[1]: Started session-6.scope - Session 6 of User core. Feb 13 23:53:51.941433 sshd[1683]: pam_unix(sshd:session): session closed for user core Feb 13 23:53:51.947716 systemd[1]: sshd@3-10.244.103.218:22-147.75.109.163:46538.service: Deactivated successfully. Feb 13 23:53:51.950104 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 23:53:51.950943 systemd-logind[1479]: Session 6 logged out. Waiting for processes to exit. Feb 13 23:53:51.952497 systemd-logind[1479]: Removed session 6. Feb 13 23:53:52.109810 systemd[1]: Started sshd@4-10.244.103.218:22-147.75.109.163:46548.service - OpenSSH per-connection server daemon (147.75.109.163:46548). Feb 13 23:53:53.006081 sshd[1690]: Accepted publickey for core from 147.75.109.163 port 46548 ssh2: RSA SHA256:slnQpsdd5IjGSkOiaC+U57sWYutUdIqrcNAPolCJlHM Feb 13 23:53:53.009648 sshd[1690]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 23:53:53.019569 systemd-logind[1479]: New session 7 of user core. Feb 13 23:53:53.026550 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 23:53:53.624651 sshd[1690]: pam_unix(sshd:session): session closed for user core Feb 13 23:53:53.631144 systemd[1]: sshd@4-10.244.103.218:22-147.75.109.163:46548.service: Deactivated successfully. Feb 13 23:53:53.634832 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 23:53:53.638556 systemd-logind[1479]: Session 7 logged out. Waiting for processes to exit. Feb 13 23:53:53.640836 systemd-logind[1479]: Removed session 7. Feb 13 23:53:53.792609 systemd[1]: Started sshd@5-10.244.103.218:22-147.75.109.163:46562.service - OpenSSH per-connection server daemon (147.75.109.163:46562). Feb 13 23:53:54.691347 sshd[1697]: Accepted publickey for core from 147.75.109.163 port 46562 ssh2: RSA SHA256:slnQpsdd5IjGSkOiaC+U57sWYutUdIqrcNAPolCJlHM Feb 13 23:53:54.694399 sshd[1697]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 23:53:54.701831 systemd-logind[1479]: New session 8 of user core. Feb 13 23:53:54.713184 systemd[1]: Started session-8.scope - Session 8 of User core. 
Feb 13 23:53:55.315086 sshd[1697]: pam_unix(sshd:session): session closed for user core Feb 13 23:53:55.322114 systemd[1]: sshd@5-10.244.103.218:22-147.75.109.163:46562.service: Deactivated successfully. Feb 13 23:53:55.325020 systemd[1]: session-8.scope: Deactivated successfully. Feb 13 23:53:55.326813 systemd-logind[1479]: Session 8 logged out. Waiting for processes to exit. Feb 13 23:53:55.328457 systemd-logind[1479]: Removed session 8. Feb 13 23:53:55.477357 systemd[1]: Started sshd@6-10.244.103.218:22-147.75.109.163:46572.service - OpenSSH per-connection server daemon (147.75.109.163:46572). Feb 13 23:53:56.383541 sshd[1704]: Accepted publickey for core from 147.75.109.163 port 46572 ssh2: RSA SHA256:slnQpsdd5IjGSkOiaC+U57sWYutUdIqrcNAPolCJlHM Feb 13 23:53:56.386184 sshd[1704]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 23:53:56.395098 systemd-logind[1479]: New session 9 of user core. Feb 13 23:53:56.413373 systemd[1]: Started session-9.scope - Session 9 of User core. Feb 13 23:53:56.874417 sudo[1707]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 13 23:53:56.874732 sudo[1707]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 23:53:56.875696 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 13 23:53:56.886275 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 23:53:56.899920 sudo[1707]: pam_unix(sudo:session): session closed for user root Feb 13 23:53:57.017169 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 23:53:57.021063 (kubelet)[1717]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 23:53:57.045524 sshd[1704]: pam_unix(sshd:session): session closed for user core Feb 13 23:53:57.050919 systemd[1]: sshd@6-10.244.103.218:22-147.75.109.163:46572.service: Deactivated successfully. Feb 13 23:53:57.052709 systemd-logind[1479]: Session 9 logged out. Waiting for processes to exit. Feb 13 23:53:57.054930 systemd[1]: session-9.scope: Deactivated successfully. Feb 13 23:53:57.057052 systemd-logind[1479]: Removed session 9. Feb 13 23:53:57.076769 kubelet[1717]: E0213 23:53:57.076624 1717 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 23:53:57.081391 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 23:53:57.081877 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 23:53:57.210435 systemd[1]: Started sshd@7-10.244.103.218:22-147.75.109.163:46588.service - OpenSSH per-connection server daemon (147.75.109.163:46588). Feb 13 23:53:58.110283 sshd[1727]: Accepted publickey for core from 147.75.109.163 port 46588 ssh2: RSA SHA256:slnQpsdd5IjGSkOiaC+U57sWYutUdIqrcNAPolCJlHM Feb 13 23:53:58.113171 sshd[1727]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 23:53:58.118728 systemd-logind[1479]: New session 10 of user core. Feb 13 23:53:58.129476 systemd[1]: Started session-10.scope - Session 10 of User core. 
Feb 13 23:53:58.589821 sudo[1731]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 13 23:53:58.590477 sudo[1731]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 23:53:58.596132 sudo[1731]: pam_unix(sudo:session): session closed for user root Feb 13 23:53:58.605021 sudo[1730]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Feb 13 23:53:58.605704 sudo[1730]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 23:53:58.622255 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Feb 13 23:53:58.623699 auditctl[1734]: No rules Feb 13 23:53:58.624719 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 23:53:58.624933 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Feb 13 23:53:58.627377 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Feb 13 23:53:58.667256 augenrules[1752]: No rules Feb 13 23:53:58.668798 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Feb 13 23:53:58.670248 sudo[1730]: pam_unix(sudo:session): session closed for user root Feb 13 23:53:58.814476 sshd[1727]: pam_unix(sshd:session): session closed for user core Feb 13 23:53:58.822198 systemd-logind[1479]: Session 10 logged out. Waiting for processes to exit. Feb 13 23:53:58.822488 systemd[1]: sshd@7-10.244.103.218:22-147.75.109.163:46588.service: Deactivated successfully. Feb 13 23:53:58.826093 systemd[1]: session-10.scope: Deactivated successfully. Feb 13 23:53:58.828475 systemd-logind[1479]: Removed session 10. Feb 13 23:53:58.975568 systemd[1]: Started sshd@8-10.244.103.218:22-147.75.109.163:40362.service - OpenSSH per-connection server daemon (147.75.109.163:40362). Feb 13 23:53:59.859763 sshd[1760]: Accepted publickey for core from 147.75.109.163 port 40362 ssh2: RSA SHA256:slnQpsdd5IjGSkOiaC+U57sWYutUdIqrcNAPolCJlHM Feb 13 23:53:59.863236 sshd[1760]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 23:53:59.875360 systemd-logind[1479]: New session 11 of user core. Feb 13 23:53:59.887230 systemd[1]: Started session-11.scope - Session 11 of User core. Feb 13 23:54:00.335131 sudo[1763]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 23:54:00.335470 sudo[1763]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 23:54:00.970090 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 23:54:00.980291 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 23:54:01.010129 systemd[1]: Reloading requested from client PID 1795 ('systemctl') (unit session-11.scope)... Feb 13 23:54:01.010311 systemd[1]: Reloading... Feb 13 23:54:01.110022 zram_generator::config[1832]: No configuration found. Feb 13 23:54:01.268155 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 23:54:01.344930 systemd[1]: Reloading finished in 334 ms. Feb 13 23:54:01.401685 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Feb 13 23:54:01.401914 systemd[1]: kubelet.service: Failed with result 'signal'. Feb 13 23:54:01.403083 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Feb 13 23:54:01.411704 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 23:54:01.559346 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 23:54:01.569303 (kubelet)[1901]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 23:54:01.612418 kubelet[1901]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 23:54:01.612418 kubelet[1901]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 23:54:01.612418 kubelet[1901]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 23:54:01.612418 kubelet[1901]: I0213 23:54:01.612419 1901 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 23:54:02.020944 kubelet[1901]: I0213 23:54:02.020896 1901 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Feb 13 23:54:02.020944 kubelet[1901]: I0213 23:54:02.020940 1901 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 23:54:02.021439 kubelet[1901]: I0213 23:54:02.021415 1901 server.go:929] "Client rotation is on, will bootstrap in background" Feb 13 23:54:02.048145 kubelet[1901]: I0213 23:54:02.048099 1901 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 23:54:02.074181 kubelet[1901]: E0213 23:54:02.074037 1901 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 13 23:54:02.074181 kubelet[1901]: I0213 23:54:02.074092 1901 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 13 23:54:02.080947 kubelet[1901]: I0213 23:54:02.080909 1901 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 23:54:02.082103 kubelet[1901]: I0213 23:54:02.081275 1901 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 13 23:54:02.082103 kubelet[1901]: I0213 23:54:02.081444 1901 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 23:54:02.082103 kubelet[1901]: I0213 23:54:02.081477 1901 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.244.103.218","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 13 23:54:02.082103 kubelet[1901]: I0213 23:54:02.081719 1901 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 23:54:02.082456 kubelet[1901]: I0213 23:54:02.081730 1901 container_manager_linux.go:300] "Creating device plugin manager" Feb 13 23:54:02.082456 kubelet[1901]: I0213 23:54:02.081858 1901 state_mem.go:36] "Initialized new in-memory state store" Feb 13 23:54:02.084070 kubelet[1901]: I0213 23:54:02.084044 1901 kubelet.go:408] "Attempting to sync node with API server" Feb 13 23:54:02.084070 kubelet[1901]: I0213 23:54:02.084072 1901 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 23:54:02.084164 kubelet[1901]: I0213 23:54:02.084119 1901 kubelet.go:314] "Adding apiserver pod source" Feb 13 23:54:02.084164 kubelet[1901]: I0213 23:54:02.084145 1901 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 23:54:02.084829 kubelet[1901]: E0213 23:54:02.084804 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 23:54:02.084861 kubelet[1901]: E0213 23:54:02.084839 1901 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 23:54:02.090534 kubelet[1901]: I0213 23:54:02.090381 1901 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Feb 13 23:54:02.092412 kubelet[1901]: I0213 23:54:02.092279 1901 
kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 23:54:02.094021 kubelet[1901]: W0213 23:54:02.093002 1901 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 13 23:54:02.094021 kubelet[1901]: I0213 23:54:02.093640 1901 server.go:1269] "Started kubelet" Feb 13 23:54:02.099799 kubelet[1901]: I0213 23:54:02.099720 1901 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 23:54:02.101018 kubelet[1901]: I0213 23:54:02.100738 1901 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 23:54:02.101339 kubelet[1901]: I0213 23:54:02.101324 1901 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 23:54:02.101438 kubelet[1901]: I0213 23:54:02.101417 1901 server.go:460] "Adding debug handlers to kubelet server" Feb 13 23:54:02.103224 kubelet[1901]: I0213 23:54:02.103207 1901 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 23:54:02.103600 kubelet[1901]: I0213 23:54:02.103573 1901 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 23:54:02.109890 kubelet[1901]: I0213 23:54:02.109314 1901 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 13 23:54:02.109890 kubelet[1901]: I0213 23:54:02.109487 1901 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 13 23:54:02.109890 kubelet[1901]: I0213 23:54:02.109589 1901 reconciler.go:26] "Reconciler: start to sync state" Feb 13 23:54:02.110981 kubelet[1901]: I0213 23:54:02.110385 1901 factory.go:221] Registration of the systemd container factory successfully Feb 13 23:54:02.110981 kubelet[1901]: I0213 23:54:02.110480 1901 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 23:54:02.112133 kubelet[1901]: E0213 23:54:02.112094 1901 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.244.103.218\" not found" Feb 13 23:54:02.113490 kubelet[1901]: I0213 23:54:02.113464 1901 factory.go:221] Registration of the containerd container factory successfully Feb 13 23:54:02.118953 kubelet[1901]: E0213 23:54:02.118923 1901 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 23:54:02.125011 kubelet[1901]: E0213 23:54:02.122699 1901 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.244.103.218\" not found" node="10.244.103.218" Feb 13 23:54:02.139913 kubelet[1901]: I0213 23:54:02.139645 1901 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 23:54:02.139913 kubelet[1901]: I0213 23:54:02.139664 1901 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 23:54:02.139913 kubelet[1901]: I0213 23:54:02.139688 1901 state_mem.go:36] "Initialized new in-memory state store" Feb 13 23:54:02.141856 kubelet[1901]: I0213 23:54:02.141361 1901 policy_none.go:49] "None policy: Start" Feb 13 23:54:02.142377 kubelet[1901]: I0213 23:54:02.142365 1901 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 23:54:02.142474 kubelet[1901]: I0213 23:54:02.142467 1901 state_mem.go:35] "Initializing new in-memory state store" Feb 13 23:54:02.149116 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Feb 13 23:54:02.160735 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Feb 13 23:54:02.166441 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Feb 13 23:54:02.175355 kubelet[1901]: I0213 23:54:02.175323 1901 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 23:54:02.175537 kubelet[1901]: I0213 23:54:02.175514 1901 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 13 23:54:02.175599 kubelet[1901]: I0213 23:54:02.175525 1901 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 23:54:02.176179 kubelet[1901]: I0213 23:54:02.176146 1901 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 23:54:02.181128 kubelet[1901]: E0213 23:54:02.181031 1901 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.244.103.218\" not found" Feb 13 23:54:02.194151 kubelet[1901]: I0213 23:54:02.194036 1901 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 23:54:02.196573 kubelet[1901]: I0213 23:54:02.196530 1901 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 23:54:02.196681 kubelet[1901]: I0213 23:54:02.196664 1901 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 23:54:02.197056 kubelet[1901]: I0213 23:54:02.196702 1901 kubelet.go:2321] "Starting kubelet main sync loop" Feb 13 23:54:02.197056 kubelet[1901]: E0213 23:54:02.196908 1901 kubelet.go:2345] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Feb 13 23:54:02.277825 kubelet[1901]: I0213 23:54:02.276762 1901 kubelet_node_status.go:72] "Attempting to register node" node="10.244.103.218" Feb 13 23:54:02.284612 kubelet[1901]: I0213 23:54:02.284487 1901 kubelet_node_status.go:75] "Successfully registered node" node="10.244.103.218" Feb 13 23:54:02.298032 kubelet[1901]: I0213 23:54:02.297948 1901 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Feb 13 23:54:02.299284 containerd[1485]: time="2025-02-13T23:54:02.299166215Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
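
The eviction manager that just started its control loop evaluates the HardEvictionThresholds recorded in the container-manager NodeConfig logged earlier (memory.available below 100Mi, nodefs.available below 10%, nodefs.inodesFree below 5%, imagefs.available below 15%, imagefs.inodesFree below 5%) once node stats become available. A minimal Python sketch of how such thresholds resolve to absolute limits; the node capacities used here are invented for illustration, not values from this host:

    # Sketch: resolve kubelet-style hard eviction thresholds against node capacity.
    # The capacity numbers below are hypothetical; the thresholds are the ones logged above.
    thresholds = {
        "memory.available":   {"quantity": 100 * 1024 * 1024},  # 100Mi
        "nodefs.available":   {"percentage": 0.10},
        "nodefs.inodesFree":  {"percentage": 0.05},
        "imagefs.available":  {"percentage": 0.15},
        "imagefs.inodesFree": {"percentage": 0.05},
    }
    capacity = {
        "memory.available":   8 * 1024**3,   # e.g. 8 GiB of RAM
        "nodefs.available":   40 * 1024**3,  # e.g. 40 GiB root filesystem
        "nodefs.inodesFree":  2_621_440,
        "imagefs.available":  40 * 1024**3,
        "imagefs.inodesFree": 2_621_440,
    }
    for signal, spec in thresholds.items():
        limit = spec.get("quantity") or int(capacity[signal] * spec["percentage"])
        print(f"{signal}: evict when the observed value drops below {limit}")
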
Feb 13 23:54:02.299772 kubelet[1901]: I0213 23:54:02.299659 1901 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Feb 13 23:54:02.600976 sudo[1763]: pam_unix(sudo:session): session closed for user root Feb 13 23:54:02.744780 sshd[1760]: pam_unix(sshd:session): session closed for user core Feb 13 23:54:02.753861 systemd-logind[1479]: Session 11 logged out. Waiting for processes to exit. Feb 13 23:54:02.755615 systemd[1]: sshd@8-10.244.103.218:22-147.75.109.163:40362.service: Deactivated successfully. Feb 13 23:54:02.758617 systemd[1]: session-11.scope: Deactivated successfully. Feb 13 23:54:02.759744 systemd-logind[1479]: Removed session 11. Feb 13 23:54:03.024777 kubelet[1901]: I0213 23:54:03.024679 1901 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 13 23:54:03.026348 kubelet[1901]: W0213 23:54:03.025031 1901 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Feb 13 23:54:03.026348 kubelet[1901]: W0213 23:54:03.025107 1901 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Feb 13 23:54:03.026348 kubelet[1901]: W0213 23:54:03.025043 1901 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Feb 13 23:54:03.085462 kubelet[1901]: I0213 23:54:03.085376 1901 apiserver.go:52] "Watching apiserver" Feb 13 23:54:03.085694 kubelet[1901]: E0213 23:54:03.085395 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 23:54:03.103233 systemd[1]: Created slice kubepods-besteffort-podafe4d1fa_a2da_40eb_a9fe_e3da427e0c2c.slice - libcontainer container kubepods-besteffort-podafe4d1fa_a2da_40eb_a9fe_e3da427e0c2c.slice. 
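
The "Updating Pod CIDR" message above records the node being handed its per-node pod range, 192.168.1.0/24; pod addresses on this node are carved out of that block (the actual allocation is done by the CNI plugin, not by the sketch below). A stdlib-only illustration of what the range contains:

    import ipaddress

    # Per-node pod range reported by the kubelet above.
    cidr = ipaddress.ip_network("192.168.1.0/24")
    hosts = list(cidr.hosts())
    print(f"{cidr}: {cidr.num_addresses} addresses, {len(hosts)} usable for pods")
    print("first few:", [str(ip) for ip in hosts[:4]])
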
Feb 13 23:54:03.110033 kubelet[1901]: I0213 23:54:03.109931 1901 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 13 23:54:03.114373 kubelet[1901]: I0213 23:54:03.114345 1901 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d0008fba-dde3-40ba-8ed1-b3d76a1dce97-cilium-run\") pod \"cilium-5gw77\" (UID: \"d0008fba-dde3-40ba-8ed1-b3d76a1dce97\") " pod="kube-system/cilium-5gw77" Feb 13 23:54:03.114884 kubelet[1901]: I0213 23:54:03.114494 1901 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d0008fba-dde3-40ba-8ed1-b3d76a1dce97-hostproc\") pod \"cilium-5gw77\" (UID: \"d0008fba-dde3-40ba-8ed1-b3d76a1dce97\") " pod="kube-system/cilium-5gw77" Feb 13 23:54:03.114884 kubelet[1901]: I0213 23:54:03.114517 1901 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d0008fba-dde3-40ba-8ed1-b3d76a1dce97-etc-cni-netd\") pod \"cilium-5gw77\" (UID: \"d0008fba-dde3-40ba-8ed1-b3d76a1dce97\") " pod="kube-system/cilium-5gw77" Feb 13 23:54:03.114884 kubelet[1901]: I0213 23:54:03.114534 1901 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d0008fba-dde3-40ba-8ed1-b3d76a1dce97-lib-modules\") pod \"cilium-5gw77\" (UID: \"d0008fba-dde3-40ba-8ed1-b3d76a1dce97\") " pod="kube-system/cilium-5gw77" Feb 13 23:54:03.114884 kubelet[1901]: I0213 23:54:03.114573 1901 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d0008fba-dde3-40ba-8ed1-b3d76a1dce97-xtables-lock\") pod \"cilium-5gw77\" (UID: \"d0008fba-dde3-40ba-8ed1-b3d76a1dce97\") " pod="kube-system/cilium-5gw77" Feb 13 23:54:03.114884 kubelet[1901]: I0213 23:54:03.114590 1901 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d0008fba-dde3-40ba-8ed1-b3d76a1dce97-host-proc-sys-kernel\") pod \"cilium-5gw77\" (UID: \"d0008fba-dde3-40ba-8ed1-b3d76a1dce97\") " pod="kube-system/cilium-5gw77" Feb 13 23:54:03.114884 kubelet[1901]: I0213 23:54:03.114606 1901 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d0008fba-dde3-40ba-8ed1-b3d76a1dce97-hubble-tls\") pod \"cilium-5gw77\" (UID: \"d0008fba-dde3-40ba-8ed1-b3d76a1dce97\") " pod="kube-system/cilium-5gw77" Feb 13 23:54:03.115114 kubelet[1901]: I0213 23:54:03.114636 1901 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/afe4d1fa-a2da-40eb-a9fe-e3da427e0c2c-kube-proxy\") pod \"kube-proxy-xsj6b\" (UID: \"afe4d1fa-a2da-40eb-a9fe-e3da427e0c2c\") " pod="kube-system/kube-proxy-xsj6b" Feb 13 23:54:03.115114 kubelet[1901]: I0213 23:54:03.114662 1901 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gcj9d\" (UniqueName: \"kubernetes.io/projected/d0008fba-dde3-40ba-8ed1-b3d76a1dce97-kube-api-access-gcj9d\") pod \"cilium-5gw77\" (UID: \"d0008fba-dde3-40ba-8ed1-b3d76a1dce97\") " pod="kube-system/cilium-5gw77" Feb 13 23:54:03.115114 
kubelet[1901]: I0213 23:54:03.114689 1901 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d0008fba-dde3-40ba-8ed1-b3d76a1dce97-cilium-cgroup\") pod \"cilium-5gw77\" (UID: \"d0008fba-dde3-40ba-8ed1-b3d76a1dce97\") " pod="kube-system/cilium-5gw77" Feb 13 23:54:03.115114 kubelet[1901]: I0213 23:54:03.114705 1901 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d0008fba-dde3-40ba-8ed1-b3d76a1dce97-cni-path\") pod \"cilium-5gw77\" (UID: \"d0008fba-dde3-40ba-8ed1-b3d76a1dce97\") " pod="kube-system/cilium-5gw77" Feb 13 23:54:03.115114 kubelet[1901]: I0213 23:54:03.114723 1901 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7pjn\" (UniqueName: \"kubernetes.io/projected/afe4d1fa-a2da-40eb-a9fe-e3da427e0c2c-kube-api-access-x7pjn\") pod \"kube-proxy-xsj6b\" (UID: \"afe4d1fa-a2da-40eb-a9fe-e3da427e0c2c\") " pod="kube-system/kube-proxy-xsj6b" Feb 13 23:54:03.115250 kubelet[1901]: I0213 23:54:03.114738 1901 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/afe4d1fa-a2da-40eb-a9fe-e3da427e0c2c-lib-modules\") pod \"kube-proxy-xsj6b\" (UID: \"afe4d1fa-a2da-40eb-a9fe-e3da427e0c2c\") " pod="kube-system/kube-proxy-xsj6b" Feb 13 23:54:03.115250 kubelet[1901]: I0213 23:54:03.114754 1901 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d0008fba-dde3-40ba-8ed1-b3d76a1dce97-clustermesh-secrets\") pod \"cilium-5gw77\" (UID: \"d0008fba-dde3-40ba-8ed1-b3d76a1dce97\") " pod="kube-system/cilium-5gw77" Feb 13 23:54:03.115250 kubelet[1901]: I0213 23:54:03.114770 1901 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d0008fba-dde3-40ba-8ed1-b3d76a1dce97-cilium-config-path\") pod \"cilium-5gw77\" (UID: \"d0008fba-dde3-40ba-8ed1-b3d76a1dce97\") " pod="kube-system/cilium-5gw77" Feb 13 23:54:03.115250 kubelet[1901]: I0213 23:54:03.114802 1901 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d0008fba-dde3-40ba-8ed1-b3d76a1dce97-host-proc-sys-net\") pod \"cilium-5gw77\" (UID: \"d0008fba-dde3-40ba-8ed1-b3d76a1dce97\") " pod="kube-system/cilium-5gw77" Feb 13 23:54:03.115250 kubelet[1901]: I0213 23:54:03.114822 1901 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/afe4d1fa-a2da-40eb-a9fe-e3da427e0c2c-xtables-lock\") pod \"kube-proxy-xsj6b\" (UID: \"afe4d1fa-a2da-40eb-a9fe-e3da427e0c2c\") " pod="kube-system/kube-proxy-xsj6b" Feb 13 23:54:03.115250 kubelet[1901]: I0213 23:54:03.114838 1901 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d0008fba-dde3-40ba-8ed1-b3d76a1dce97-bpf-maps\") pod \"cilium-5gw77\" (UID: \"d0008fba-dde3-40ba-8ed1-b3d76a1dce97\") " pod="kube-system/cilium-5gw77" Feb 13 23:54:03.120470 systemd[1]: Created slice kubepods-burstable-podd0008fba_dde3_40ba_8ed1_b3d76a1dce97.slice - libcontainer container 
kubepods-burstable-podd0008fba_dde3_40ba_8ed1_b3d76a1dce97.slice. Feb 13 23:54:03.421457 containerd[1485]: time="2025-02-13T23:54:03.421370434Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xsj6b,Uid:afe4d1fa-a2da-40eb-a9fe-e3da427e0c2c,Namespace:kube-system,Attempt:0,}" Feb 13 23:54:03.433890 containerd[1485]: time="2025-02-13T23:54:03.433187342Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5gw77,Uid:d0008fba-dde3-40ba-8ed1-b3d76a1dce97,Namespace:kube-system,Attempt:0,}" Feb 13 23:54:04.085945 kubelet[1901]: E0213 23:54:04.085838 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 23:54:04.168674 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2850440907.mount: Deactivated successfully. Feb 13 23:54:04.173277 containerd[1485]: time="2025-02-13T23:54:04.173109009Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 23:54:04.174419 containerd[1485]: time="2025-02-13T23:54:04.174103405Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 23:54:04.174419 containerd[1485]: time="2025-02-13T23:54:04.174391423Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 23:54:04.175136 containerd[1485]: time="2025-02-13T23:54:04.175108097Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 23:54:04.175673 containerd[1485]: time="2025-02-13T23:54:04.175621839Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Feb 13 23:54:04.177513 containerd[1485]: time="2025-02-13T23:54:04.177459342Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 23:54:04.183688 containerd[1485]: time="2025-02-13T23:54:04.183296579Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 749.956924ms" Feb 13 23:54:04.185659 containerd[1485]: time="2025-02-13T23:54:04.185603929Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 763.697898ms" Feb 13 23:54:04.331745 containerd[1485]: time="2025-02-13T23:54:04.331629424Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 23:54:04.331745 containerd[1485]: time="2025-02-13T23:54:04.331611222Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 23:54:04.331745 containerd[1485]: time="2025-02-13T23:54:04.331686277Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 23:54:04.331971 containerd[1485]: time="2025-02-13T23:54:04.331774408Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 23:54:04.331971 containerd[1485]: time="2025-02-13T23:54:04.331928976Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 23:54:04.333062 containerd[1485]: time="2025-02-13T23:54:04.332751211Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 23:54:04.333062 containerd[1485]: time="2025-02-13T23:54:04.332776049Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 23:54:04.333062 containerd[1485]: time="2025-02-13T23:54:04.332856380Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 23:54:04.415664 systemd[1]: Started cri-containerd-0d061e299aff0bf64d02bebdd63f7e6c1186176a41c9f973516b9e2edab045e5.scope - libcontainer container 0d061e299aff0bf64d02bebdd63f7e6c1186176a41c9f973516b9e2edab045e5. Feb 13 23:54:04.435184 systemd[1]: Started cri-containerd-8c8894e94679612821291280e1dfbbceded2aadb10f140d06ed104178d939f94.scope - libcontainer container 8c8894e94679612821291280e1dfbbceded2aadb10f140d06ed104178d939f94. Feb 13 23:54:04.464909 containerd[1485]: time="2025-02-13T23:54:04.464868751Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xsj6b,Uid:afe4d1fa-a2da-40eb-a9fe-e3da427e0c2c,Namespace:kube-system,Attempt:0,} returns sandbox id \"0d061e299aff0bf64d02bebdd63f7e6c1186176a41c9f973516b9e2edab045e5\"" Feb 13 23:54:04.468963 containerd[1485]: time="2025-02-13T23:54:04.468927388Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.6\"" Feb 13 23:54:04.475712 containerd[1485]: time="2025-02-13T23:54:04.475646725Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5gw77,Uid:d0008fba-dde3-40ba-8ed1-b3d76a1dce97,Namespace:kube-system,Attempt:0,} returns sandbox id \"8c8894e94679612821291280e1dfbbceded2aadb10f140d06ed104178d939f94\"" Feb 13 23:54:05.086623 kubelet[1901]: E0213 23:54:05.086535 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 23:54:05.872308 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount275936898.mount: Deactivated successfully. 
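
The two "returns sandbox id" messages above tie each pod to its pause sandbox and, through the cri-containerd-<id>.scope units systemd just started, to a cgroup. A small illustrative parser for recovering that mapping from a saved copy of this journal; the "node.log" filename and the assumption of one journal entry per line are mine, not from the log:

    import re

    name_re = re.compile(r"PodSandboxMetadata\{Name:([^,]+),")
    sid_re  = re.compile(r"returns sandbox id \W*([0-9a-f]{64})")

    sandboxes = {}
    with open("node.log", encoding="utf-8", errors="replace") as fh:
        for line in fh:
            if "returns sandbox id" not in line:
                continue
            name, sid = name_re.search(line), sid_re.search(line)
            if name and sid:
                sandboxes[name.group(1)] = sid.group(1)

    for pod, sid in sandboxes.items():
        print(f"{pod} -> cri-containerd-{sid}.scope")
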
Feb 13 23:54:06.086928 kubelet[1901]: E0213 23:54:06.086882 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 23:54:06.332863 containerd[1485]: time="2025-02-13T23:54:06.332820096Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 23:54:06.333764 containerd[1485]: time="2025-02-13T23:54:06.333462446Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.6: active requests=0, bytes read=30229116" Feb 13 23:54:06.334193 containerd[1485]: time="2025-02-13T23:54:06.334171902Z" level=info msg="ImageCreate event name:\"sha256:d2448f015605e48efb6b06ceaba0cb6d48bfd82e5d30ba357a9bd78c8566348a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 23:54:06.336381 containerd[1485]: time="2025-02-13T23:54:06.336352668Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e72a4bc769f10b56ffdfe2cdb21d84d49d9bc194b3658648207998a5bd924b72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 23:54:06.337151 containerd[1485]: time="2025-02-13T23:54:06.337124028Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.6\" with image id \"sha256:d2448f015605e48efb6b06ceaba0cb6d48bfd82e5d30ba357a9bd78c8566348a\", repo tag \"registry.k8s.io/kube-proxy:v1.31.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:e72a4bc769f10b56ffdfe2cdb21d84d49d9bc194b3658648207998a5bd924b72\", size \"30228127\" in 1.86815836s" Feb 13 23:54:06.337207 containerd[1485]: time="2025-02-13T23:54:06.337153873Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.6\" returns image reference \"sha256:d2448f015605e48efb6b06ceaba0cb6d48bfd82e5d30ba357a9bd78c8566348a\"" Feb 13 23:54:06.338702 containerd[1485]: time="2025-02-13T23:54:06.338676700Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 13 23:54:06.339594 containerd[1485]: time="2025-02-13T23:54:06.339568406Z" level=info msg="CreateContainer within sandbox \"0d061e299aff0bf64d02bebdd63f7e6c1186176a41c9f973516b9e2edab045e5\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 23:54:06.359367 containerd[1485]: time="2025-02-13T23:54:06.359322916Z" level=info msg="CreateContainer within sandbox \"0d061e299aff0bf64d02bebdd63f7e6c1186176a41c9f973516b9e2edab045e5\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"4581d0948e018bd4181c96c85dc31d0a05403431191adf95f5bc5da719a2a70a\"" Feb 13 23:54:06.361186 containerd[1485]: time="2025-02-13T23:54:06.360155255Z" level=info msg="StartContainer for \"4581d0948e018bd4181c96c85dc31d0a05403431191adf95f5bc5da719a2a70a\"" Feb 13 23:54:06.397253 systemd[1]: Started cri-containerd-4581d0948e018bd4181c96c85dc31d0a05403431191adf95f5bc5da719a2a70a.scope - libcontainer container 4581d0948e018bd4181c96c85dc31d0a05403431191adf95f5bc5da719a2a70a. Feb 13 23:54:06.426363 containerd[1485]: time="2025-02-13T23:54:06.426217461Z" level=info msg="StartContainer for \"4581d0948e018bd4181c96c85dc31d0a05403431191adf95f5bc5da719a2a70a\" returns successfully" Feb 13 23:54:06.562162 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
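
The pull that just completed reports image size 30228127 bytes for kube-proxy:v1.31.6 fetched in 1.86815836s, while the earlier pause:3.8 pull was 311286 bytes in roughly 750ms (dominated by per-request overhead rather than transfer). Back-of-the-envelope throughput from those logged figures:

    # Effective pull throughput computed from the sizes and durations in the log above.
    pulls = {
        "registry.k8s.io/kube-proxy:v1.31.6": (30_228_127, 1.86815836),
        "registry.k8s.io/pause:3.8":          (311_286, 0.749956924),
    }
    for image, (size_bytes, seconds) in pulls.items():
        print(f"{image}: {size_bytes / seconds / 2**20:.2f} MiB/s")
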
Feb 13 23:54:07.088127 kubelet[1901]: E0213 23:54:07.088069 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 23:54:07.231167 kubelet[1901]: I0213 23:54:07.230952 1901 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-xsj6b" podStartSLOduration=3.360632228 podStartE2EDuration="5.230915673s" podCreationTimestamp="2025-02-13 23:54:02 +0000 UTC" firstStartedPulling="2025-02-13 23:54:04.467891717 +0000 UTC m=+2.894751888" lastFinishedPulling="2025-02-13 23:54:06.338175158 +0000 UTC m=+4.765035333" observedRunningTime="2025-02-13 23:54:07.230450568 +0000 UTC m=+5.657310823" watchObservedRunningTime="2025-02-13 23:54:07.230915673 +0000 UTC m=+5.657775868" Feb 13 23:54:08.088653 kubelet[1901]: E0213 23:54:08.088389 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 23:54:09.089190 kubelet[1901]: E0213 23:54:09.089132 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 23:54:10.160959 systemd-timesyncd[1400]: Contacted time server [2a02:e00:ffe9:11c::1]:123 (2.flatcar.pool.ntp.org). Feb 13 23:54:10.161226 systemd-timesyncd[1400]: Initial clock synchronization to Thu 2025-02-13 23:54:10.157529 UTC. Feb 13 23:54:10.161393 systemd-resolved[1382]: Clock change detected. Flushing caches. Feb 13 23:54:10.737278 kubelet[1901]: E0213 23:54:10.737190 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 23:54:11.738414 kubelet[1901]: E0213 23:54:11.738304 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 23:54:12.478629 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3335602661.mount: Deactivated successfully. 
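
For the pod_startup_latency_tracker line above, the reported numbers are internally consistent: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that figure with the image-pull window (lastFinishedPulling minus firstStartedPulling) subtracted. The arithmetic below reproduces the logged values to within a few nanoseconds of rounding; all timestamps fall inside minute 23:54, so plain seconds are enough here:

    # Fractional-second parts of the timestamps from the kube-proxy-xsj6b line above.
    created    = 2.0            # podCreationTimestamp 23:54:02
    first_pull = 4.467891717    # firstStartedPulling
    last_pull  = 6.338175158    # lastFinishedPulling
    observed   = 7.230915673    # watchObservedRunningTime

    e2e = observed - created
    slo = e2e - (last_pull - first_pull)
    print(f"podStartE2EDuration ~ {e2e:.9f}s (logged: 5.230915673s)")
    print(f"podStartSLOduration ~ {slo:.9f}s (logged: 3.360632228s)")
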
Feb 13 23:54:12.739395 kubelet[1901]: E0213 23:54:12.739036 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 23:54:13.740174 kubelet[1901]: E0213 23:54:13.740068 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 23:54:14.362155 containerd[1485]: time="2025-02-13T23:54:14.361127402Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 23:54:14.363265 containerd[1485]: time="2025-02-13T23:54:14.363170697Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Feb 13 23:54:14.363948 containerd[1485]: time="2025-02-13T23:54:14.363687744Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 23:54:14.366263 containerd[1485]: time="2025-02-13T23:54:14.365518201Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 7.379814391s" Feb 13 23:54:14.366263 containerd[1485]: time="2025-02-13T23:54:14.365560729Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Feb 13 23:54:14.369483 containerd[1485]: time="2025-02-13T23:54:14.369360137Z" level=info msg="CreateContainer within sandbox \"8c8894e94679612821291280e1dfbbceded2aadb10f140d06ed104178d939f94\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 13 23:54:14.387084 containerd[1485]: time="2025-02-13T23:54:14.387047209Z" level=info msg="CreateContainer within sandbox \"8c8894e94679612821291280e1dfbbceded2aadb10f140d06ed104178d939f94\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"2b6b30167f0a8a94bc49e4325f8e50129ec7d0f0214ec4951916e8c4dd9961aa\"" Feb 13 23:54:14.387777 containerd[1485]: time="2025-02-13T23:54:14.387556331Z" level=info msg="StartContainer for \"2b6b30167f0a8a94bc49e4325f8e50129ec7d0f0214ec4951916e8c4dd9961aa\"" Feb 13 23:54:14.418343 systemd[1]: run-containerd-runc-k8s.io-2b6b30167f0a8a94bc49e4325f8e50129ec7d0f0214ec4951916e8c4dd9961aa-runc.ccpDow.mount: Deactivated successfully. Feb 13 23:54:14.426283 systemd[1]: Started cri-containerd-2b6b30167f0a8a94bc49e4325f8e50129ec7d0f0214ec4951916e8c4dd9961aa.scope - libcontainer container 2b6b30167f0a8a94bc49e4325f8e50129ec7d0f0214ec4951916e8c4dd9961aa. Feb 13 23:54:14.454520 containerd[1485]: time="2025-02-13T23:54:14.454476899Z" level=info msg="StartContainer for \"2b6b30167f0a8a94bc49e4325f8e50129ec7d0f0214ec4951916e8c4dd9961aa\" returns successfully" Feb 13 23:54:14.467557 systemd[1]: cri-containerd-2b6b30167f0a8a94bc49e4325f8e50129ec7d0f0214ec4951916e8c4dd9961aa.scope: Deactivated successfully. 
Feb 13 23:54:14.537270 containerd[1485]: time="2025-02-13T23:54:14.537147269Z" level=info msg="shim disconnected" id=2b6b30167f0a8a94bc49e4325f8e50129ec7d0f0214ec4951916e8c4dd9961aa namespace=k8s.io Feb 13 23:54:14.537549 containerd[1485]: time="2025-02-13T23:54:14.537275336Z" level=warning msg="cleaning up after shim disconnected" id=2b6b30167f0a8a94bc49e4325f8e50129ec7d0f0214ec4951916e8c4dd9961aa namespace=k8s.io Feb 13 23:54:14.537549 containerd[1485]: time="2025-02-13T23:54:14.537297252Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 23:54:14.740939 kubelet[1901]: E0213 23:54:14.740706 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 23:54:14.908596 containerd[1485]: time="2025-02-13T23:54:14.908547056Z" level=info msg="CreateContainer within sandbox \"8c8894e94679612821291280e1dfbbceded2aadb10f140d06ed104178d939f94\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 13 23:54:14.927291 containerd[1485]: time="2025-02-13T23:54:14.927147438Z" level=info msg="CreateContainer within sandbox \"8c8894e94679612821291280e1dfbbceded2aadb10f140d06ed104178d939f94\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"00383e6e2ad0bb05804dec53d2b9e784a57cae6e2aad42040bef5072fc42519a\"" Feb 13 23:54:14.927809 containerd[1485]: time="2025-02-13T23:54:14.927771941Z" level=info msg="StartContainer for \"00383e6e2ad0bb05804dec53d2b9e784a57cae6e2aad42040bef5072fc42519a\"" Feb 13 23:54:14.956160 systemd[1]: Started cri-containerd-00383e6e2ad0bb05804dec53d2b9e784a57cae6e2aad42040bef5072fc42519a.scope - libcontainer container 00383e6e2ad0bb05804dec53d2b9e784a57cae6e2aad42040bef5072fc42519a. Feb 13 23:54:14.996092 containerd[1485]: time="2025-02-13T23:54:14.995958710Z" level=info msg="StartContainer for \"00383e6e2ad0bb05804dec53d2b9e784a57cae6e2aad42040bef5072fc42519a\" returns successfully" Feb 13 23:54:15.008928 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 23:54:15.009204 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 23:54:15.009287 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Feb 13 23:54:15.015348 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 23:54:15.017850 systemd[1]: cri-containerd-00383e6e2ad0bb05804dec53d2b9e784a57cae6e2aad42040bef5072fc42519a.scope: Deactivated successfully. Feb 13 23:54:15.040882 containerd[1485]: time="2025-02-13T23:54:15.040821953Z" level=info msg="shim disconnected" id=00383e6e2ad0bb05804dec53d2b9e784a57cae6e2aad42040bef5072fc42519a namespace=k8s.io Feb 13 23:54:15.040882 containerd[1485]: time="2025-02-13T23:54:15.040870882Z" level=warning msg="cleaning up after shim disconnected" id=00383e6e2ad0bb05804dec53d2b9e784a57cae6e2aad42040bef5072fc42519a namespace=k8s.io Feb 13 23:54:15.040882 containerd[1485]: time="2025-02-13T23:54:15.040879988Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 23:54:15.042454 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 23:54:15.378834 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2b6b30167f0a8a94bc49e4325f8e50129ec7d0f0214ec4951916e8c4dd9961aa-rootfs.mount: Deactivated successfully. 
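
The mount units that keep appearing in these messages, such as var-lib-containerd-tmpmounts-containerd\x2dmount275936898.mount or the run-containerd-...-rootfs.mount just deactivated, are filesystem paths run through systemd's unit-name escaping: "/" becomes "-", and a literal "-" becomes "\x2d". systemd-escape --unescape --path is the proper tool; the Python below is only a rough stdlib approximation for reading these lines:

    import re

    def unescape_unit(unit: str) -> str:
        """Approximate inverse of systemd unit-name escaping (see systemd-escape(1))."""
        name = unit.rsplit(".", 1)[0]      # drop the ".mount" suffix
        name = name.replace("-", "/")      # "-" separates path components
        name = re.sub(r"\\x([0-9a-f]{2})", lambda m: chr(int(m.group(1), 16)), name)
        return "/" + name

    print(unescape_unit(r"var-lib-containerd-tmpmounts-containerd\x2dmount275936898.mount"))
    # /var/lib/containerd/tmpmounts/containerd-mount275936898
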
Feb 13 23:54:15.741838 kubelet[1901]: E0213 23:54:15.741570 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 23:54:15.917605 containerd[1485]: time="2025-02-13T23:54:15.917369715Z" level=info msg="CreateContainer within sandbox \"8c8894e94679612821291280e1dfbbceded2aadb10f140d06ed104178d939f94\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 13 23:54:15.930824 containerd[1485]: time="2025-02-13T23:54:15.930782599Z" level=info msg="CreateContainer within sandbox \"8c8894e94679612821291280e1dfbbceded2aadb10f140d06ed104178d939f94\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"f6e0e55209f7284eeae8ba0f9e1730f30399367e3d72b55cccd19f4e11f3b4b0\"" Feb 13 23:54:15.932411 containerd[1485]: time="2025-02-13T23:54:15.932321113Z" level=info msg="StartContainer for \"f6e0e55209f7284eeae8ba0f9e1730f30399367e3d72b55cccd19f4e11f3b4b0\"" Feb 13 23:54:15.968237 systemd[1]: Started cri-containerd-f6e0e55209f7284eeae8ba0f9e1730f30399367e3d72b55cccd19f4e11f3b4b0.scope - libcontainer container f6e0e55209f7284eeae8ba0f9e1730f30399367e3d72b55cccd19f4e11f3b4b0. Feb 13 23:54:16.000912 containerd[1485]: time="2025-02-13T23:54:15.999932283Z" level=info msg="StartContainer for \"f6e0e55209f7284eeae8ba0f9e1730f30399367e3d72b55cccd19f4e11f3b4b0\" returns successfully" Feb 13 23:54:16.005716 systemd[1]: cri-containerd-f6e0e55209f7284eeae8ba0f9e1730f30399367e3d72b55cccd19f4e11f3b4b0.scope: Deactivated successfully. Feb 13 23:54:16.034575 containerd[1485]: time="2025-02-13T23:54:16.034413337Z" level=info msg="shim disconnected" id=f6e0e55209f7284eeae8ba0f9e1730f30399367e3d72b55cccd19f4e11f3b4b0 namespace=k8s.io Feb 13 23:54:16.034575 containerd[1485]: time="2025-02-13T23:54:16.034465955Z" level=warning msg="cleaning up after shim disconnected" id=f6e0e55209f7284eeae8ba0f9e1730f30399367e3d72b55cccd19f4e11f3b4b0 namespace=k8s.io Feb 13 23:54:16.034575 containerd[1485]: time="2025-02-13T23:54:16.034476671Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 23:54:16.377344 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f6e0e55209f7284eeae8ba0f9e1730f30399367e3d72b55cccd19f4e11f3b4b0-rootfs.mount: Deactivated successfully. Feb 13 23:54:16.742790 kubelet[1901]: E0213 23:54:16.742549 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 23:54:16.924837 containerd[1485]: time="2025-02-13T23:54:16.924758505Z" level=info msg="CreateContainer within sandbox \"8c8894e94679612821291280e1dfbbceded2aadb10f140d06ed104178d939f94\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 13 23:54:16.937071 containerd[1485]: time="2025-02-13T23:54:16.937027214Z" level=info msg="CreateContainer within sandbox \"8c8894e94679612821291280e1dfbbceded2aadb10f140d06ed104178d939f94\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"d90044d7e12111302d705ebd5d31a7ff3a051161df426741cf019ea75c19634e\"" Feb 13 23:54:16.938174 containerd[1485]: time="2025-02-13T23:54:16.937588348Z" level=info msg="StartContainer for \"d90044d7e12111302d705ebd5d31a7ff3a051161df426741cf019ea75c19634e\"" Feb 13 23:54:16.974231 systemd[1]: Started cri-containerd-d90044d7e12111302d705ebd5d31a7ff3a051161df426741cf019ea75c19634e.scope - libcontainer container d90044d7e12111302d705ebd5d31a7ff3a051161df426741cf019ea75c19634e. 
Feb 13 23:54:16.996433 systemd[1]: cri-containerd-d90044d7e12111302d705ebd5d31a7ff3a051161df426741cf019ea75c19634e.scope: Deactivated successfully. Feb 13 23:54:16.999809 containerd[1485]: time="2025-02-13T23:54:16.999707758Z" level=info msg="StartContainer for \"d90044d7e12111302d705ebd5d31a7ff3a051161df426741cf019ea75c19634e\" returns successfully" Feb 13 23:54:17.024449 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d90044d7e12111302d705ebd5d31a7ff3a051161df426741cf019ea75c19634e-rootfs.mount: Deactivated successfully. Feb 13 23:54:17.024977 containerd[1485]: time="2025-02-13T23:54:17.024882139Z" level=info msg="shim disconnected" id=d90044d7e12111302d705ebd5d31a7ff3a051161df426741cf019ea75c19634e namespace=k8s.io Feb 13 23:54:17.024977 containerd[1485]: time="2025-02-13T23:54:17.024949127Z" level=warning msg="cleaning up after shim disconnected" id=d90044d7e12111302d705ebd5d31a7ff3a051161df426741cf019ea75c19634e namespace=k8s.io Feb 13 23:54:17.024977 containerd[1485]: time="2025-02-13T23:54:17.024958942Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 23:54:17.743757 kubelet[1901]: E0213 23:54:17.743710 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 23:54:17.934043 containerd[1485]: time="2025-02-13T23:54:17.933795385Z" level=info msg="CreateContainer within sandbox \"8c8894e94679612821291280e1dfbbceded2aadb10f140d06ed104178d939f94\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 13 23:54:17.954658 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1385531822.mount: Deactivated successfully. Feb 13 23:54:17.954882 containerd[1485]: time="2025-02-13T23:54:17.954787516Z" level=info msg="CreateContainer within sandbox \"8c8894e94679612821291280e1dfbbceded2aadb10f140d06ed104178d939f94\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"c1f137bc5f689f09d4c5e488823047d48d94d89891995414b4c915ddb2fa965d\"" Feb 13 23:54:17.957160 containerd[1485]: time="2025-02-13T23:54:17.956614638Z" level=info msg="StartContainer for \"c1f137bc5f689f09d4c5e488823047d48d94d89891995414b4c915ddb2fa965d\"" Feb 13 23:54:17.990129 systemd[1]: Started cri-containerd-c1f137bc5f689f09d4c5e488823047d48d94d89891995414b4c915ddb2fa965d.scope - libcontainer container c1f137bc5f689f09d4c5e488823047d48d94d89891995414b4c915ddb2fa965d. 
Feb 13 23:54:18.017325 containerd[1485]: time="2025-02-13T23:54:18.017206040Z" level=info msg="StartContainer for \"c1f137bc5f689f09d4c5e488823047d48d94d89891995414b4c915ddb2fa965d\" returns successfully" Feb 13 23:54:18.138430 kubelet[1901]: I0213 23:54:18.137723 1901 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Feb 13 23:54:18.421819 kernel: Initializing XFRM netlink socket Feb 13 23:54:18.744910 kubelet[1901]: E0213 23:54:18.744549 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 23:54:18.968196 kubelet[1901]: I0213 23:54:18.968061 1901 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-5gw77" podStartSLOduration=7.724316337 podStartE2EDuration="16.96797894s" podCreationTimestamp="2025-02-13 23:54:02 +0000 UTC" firstStartedPulling="2025-02-13 23:54:04.476909481 +0000 UTC m=+2.903769652" lastFinishedPulling="2025-02-13 23:54:14.367567619 +0000 UTC m=+12.147432255" observedRunningTime="2025-02-13 23:54:18.966568019 +0000 UTC m=+16.746432687" watchObservedRunningTime="2025-02-13 23:54:18.96797894 +0000 UTC m=+16.747843614" Feb 13 23:54:19.745304 kubelet[1901]: E0213 23:54:19.745149 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 23:54:20.121163 systemd-networkd[1426]: cilium_host: Link UP Feb 13 23:54:20.121399 systemd-networkd[1426]: cilium_net: Link UP Feb 13 23:54:20.121764 systemd-networkd[1426]: cilium_net: Gained carrier Feb 13 23:54:20.121975 systemd-networkd[1426]: cilium_host: Gained carrier Feb 13 23:54:20.254154 systemd-networkd[1426]: cilium_host: Gained IPv6LL Feb 13 23:54:20.273869 systemd-networkd[1426]: cilium_vxlan: Link UP Feb 13 23:54:20.273876 systemd-networkd[1426]: cilium_vxlan: Gained carrier Feb 13 23:54:20.278188 update_engine[1480]: I20250213 23:54:20.277118 1480 update_attempter.cc:509] Updating boot flags... Feb 13 23:54:20.324141 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2611) Feb 13 23:54:20.391012 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2611) Feb 13 23:54:20.455040 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2611) Feb 13 23:54:20.662142 kernel: NET: Registered PF_ALG protocol family Feb 13 23:54:20.745702 kubelet[1901]: E0213 23:54:20.745453 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 23:54:20.852848 systemd[1]: Created slice kubepods-besteffort-pod29cf1566_ff33_4bfd_afbc_19d6a3a9fbd2.slice - libcontainer container kubepods-besteffort-pod29cf1566_ff33_4bfd_afbc_19d6a3a9fbd2.slice. 
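
The systemd-networkd lines above show the host-side interfaces Cilium brings up as the agent starts: the cilium_host/cilium_net pair and the cilium_vxlan overlay device, each then gaining an IPv6 link-local address. To confirm those links outside the journal, iproute2 can emit JSON; a small illustrative wrapper, assuming an ip binary new enough to support -json and the usual ifname/operstate field names:

    import json
    import subprocess

    # List Cilium- and pod-related links ("ip -json link show" needs a reasonably recent iproute2).
    links = json.loads(subprocess.check_output(["ip", "-json", "link", "show"]))
    for link in links:
        if link["ifname"].startswith(("cilium_", "lxc")):
            print(link["ifname"], link.get("operstate"), link.get("link_type"))
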
Feb 13 23:54:20.897869 kubelet[1901]: I0213 23:54:20.897814 1901 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqrcf\" (UniqueName: \"kubernetes.io/projected/29cf1566-ff33-4bfd-afbc-19d6a3a9fbd2-kube-api-access-nqrcf\") pod \"nginx-deployment-8587fbcb89-m7c2g\" (UID: \"29cf1566-ff33-4bfd-afbc-19d6a3a9fbd2\") " pod="default/nginx-deployment-8587fbcb89-m7c2g" Feb 13 23:54:20.958192 systemd-networkd[1426]: cilium_net: Gained IPv6LL Feb 13 23:54:21.157763 containerd[1485]: time="2025-02-13T23:54:21.157699885Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-m7c2g,Uid:29cf1566-ff33-4bfd-afbc-19d6a3a9fbd2,Namespace:default,Attempt:0,}" Feb 13 23:54:21.449370 systemd-networkd[1426]: lxc_health: Link UP Feb 13 23:54:21.464789 systemd-networkd[1426]: lxc_health: Gained carrier Feb 13 23:54:21.598328 systemd-networkd[1426]: cilium_vxlan: Gained IPv6LL Feb 13 23:54:21.707728 systemd-networkd[1426]: lxc5c2a0a283284: Link UP Feb 13 23:54:21.718074 kernel: eth0: renamed from tmp1cc4e Feb 13 23:54:21.729435 systemd-networkd[1426]: lxc5c2a0a283284: Gained carrier Feb 13 23:54:21.746621 kubelet[1901]: E0213 23:54:21.746491 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 23:54:22.731477 kubelet[1901]: E0213 23:54:22.731395 1901 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 23:54:22.747129 kubelet[1901]: E0213 23:54:22.747058 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 23:54:22.942467 systemd-networkd[1426]: lxc5c2a0a283284: Gained IPv6LL Feb 13 23:54:23.071305 systemd-networkd[1426]: lxc_health: Gained IPv6LL Feb 13 23:54:23.747682 kubelet[1901]: E0213 23:54:23.747588 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 23:54:24.748509 kubelet[1901]: E0213 23:54:24.748360 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 23:54:25.740294 containerd[1485]: time="2025-02-13T23:54:25.740179626Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 23:54:25.740294 containerd[1485]: time="2025-02-13T23:54:25.740244346Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 23:54:25.740294 containerd[1485]: time="2025-02-13T23:54:25.740266475Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 23:54:25.741309 containerd[1485]: time="2025-02-13T23:54:25.741239316Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 23:54:25.749251 kubelet[1901]: E0213 23:54:25.749206 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 23:54:25.768221 systemd[1]: Started cri-containerd-1cc4e6151926698cc1bff807653980c60ca87e11255b2583c0af4bf1a3f50712.scope - libcontainer container 1cc4e6151926698cc1bff807653980c60ca87e11255b2583c0af4bf1a3f50712. 
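
The kernel message "eth0: renamed from tmp1cc4e" together with the new lxc5c2a0a283284 link is the pod's veth pair being wired up: the host side keeps the lxc... name, while the container side is created under a temporary name and renamed to eth0 inside the pod's network namespace. In these logs the temporary name matches the first characters of the sandbox ID, which makes the two easy to correlate; a trivial check using the values logged just above:

    # Prefix match between the kernel rename message and the sandbox started above.
    tmp_name   = "tmp1cc4e"   # from "eth0: renamed from tmp1cc4e"
    sandbox_id = "1cc4e6151926698cc1bff807653980c60ca87e11255b2583c0af4bf1a3f50712"
    assert sandbox_id.startswith(tmp_name[len("tmp"):])
    print(f"{tmp_name} belongs to sandbox {sandbox_id[:12]}... (nginx-deployment-8587fbcb89-m7c2g)")
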
Feb 13 23:54:25.818140 containerd[1485]: time="2025-02-13T23:54:25.818093215Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-m7c2g,Uid:29cf1566-ff33-4bfd-afbc-19d6a3a9fbd2,Namespace:default,Attempt:0,} returns sandbox id \"1cc4e6151926698cc1bff807653980c60ca87e11255b2583c0af4bf1a3f50712\"" Feb 13 23:54:25.821427 containerd[1485]: time="2025-02-13T23:54:25.821397048Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 13 23:54:26.749560 kubelet[1901]: E0213 23:54:26.749508 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 23:54:27.750681 kubelet[1901]: E0213 23:54:27.750614 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 23:54:28.751179 kubelet[1901]: E0213 23:54:28.751132 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 23:54:28.963873 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2009452205.mount: Deactivated successfully. Feb 13 23:54:29.752485 kubelet[1901]: E0213 23:54:29.752300 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 23:54:30.048334 containerd[1485]: time="2025-02-13T23:54:30.048277804Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 23:54:30.049639 containerd[1485]: time="2025-02-13T23:54:30.049325397Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=73054493" Feb 13 23:54:30.050666 containerd[1485]: time="2025-02-13T23:54:30.050145804Z" level=info msg="ImageCreate event name:\"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 23:54:30.053909 containerd[1485]: time="2025-02-13T23:54:30.053864513Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 23:54:30.055445 containerd[1485]: time="2025-02-13T23:54:30.055308429Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\", size \"73054371\" in 4.233872637s" Feb 13 23:54:30.055445 containerd[1485]: time="2025-02-13T23:54:30.055344010Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\"" Feb 13 23:54:30.057480 containerd[1485]: time="2025-02-13T23:54:30.057455516Z" level=info msg="CreateContainer within sandbox \"1cc4e6151926698cc1bff807653980c60ca87e11255b2583c0af4bf1a3f50712\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Feb 13 23:54:30.067197 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount226114642.mount: Deactivated successfully. 
Feb 13 23:54:30.075163 containerd[1485]: time="2025-02-13T23:54:30.075118008Z" level=info msg="CreateContainer within sandbox \"1cc4e6151926698cc1bff807653980c60ca87e11255b2583c0af4bf1a3f50712\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"e50d8df50ff2b9a4555cd1f52bac48e3af78c885b6c813ecf822bf6a80010aed\"" Feb 13 23:54:30.075830 containerd[1485]: time="2025-02-13T23:54:30.075780653Z" level=info msg="StartContainer for \"e50d8df50ff2b9a4555cd1f52bac48e3af78c885b6c813ecf822bf6a80010aed\"" Feb 13 23:54:30.166164 systemd[1]: Started cri-containerd-e50d8df50ff2b9a4555cd1f52bac48e3af78c885b6c813ecf822bf6a80010aed.scope - libcontainer container e50d8df50ff2b9a4555cd1f52bac48e3af78c885b6c813ecf822bf6a80010aed. Feb 13 23:54:30.202320 containerd[1485]: time="2025-02-13T23:54:30.202274519Z" level=info msg="StartContainer for \"e50d8df50ff2b9a4555cd1f52bac48e3af78c885b6c813ecf822bf6a80010aed\" returns successfully" Feb 13 23:54:30.753075 kubelet[1901]: E0213 23:54:30.752964 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 23:54:30.985901 kubelet[1901]: I0213 23:54:30.985680 1901 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-8587fbcb89-m7c2g" podStartSLOduration=6.749378953 podStartE2EDuration="10.985647516s" podCreationTimestamp="2025-02-13 23:54:20 +0000 UTC" firstStartedPulling="2025-02-13 23:54:25.819888806 +0000 UTC m=+23.599753435" lastFinishedPulling="2025-02-13 23:54:30.056157367 +0000 UTC m=+27.836021998" observedRunningTime="2025-02-13 23:54:30.985335852 +0000 UTC m=+28.765200581" watchObservedRunningTime="2025-02-13 23:54:30.985647516 +0000 UTC m=+28.765512213" Feb 13 23:54:31.754228 kubelet[1901]: E0213 23:54:31.754119 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 23:54:32.754945 kubelet[1901]: E0213 23:54:32.754846 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 23:54:33.755639 kubelet[1901]: E0213 23:54:33.755544 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 23:54:34.756266 kubelet[1901]: E0213 23:54:34.756175 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 23:54:35.757684 kubelet[1901]: E0213 23:54:35.757459 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 23:54:36.757740 kubelet[1901]: E0213 23:54:36.757620 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 23:54:37.758577 kubelet[1901]: E0213 23:54:37.758462 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 23:54:38.759299 kubelet[1901]: E0213 23:54:38.759207 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 23:54:39.759805 kubelet[1901]: E0213 23:54:39.759693 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 23:54:40.761110 kubelet[1901]: E0213 23:54:40.761015 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 
13 23:54:41.761968 kubelet[1901]: E0213 23:54:41.761829 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 23:54:42.287919 systemd[1]: Created slice kubepods-besteffort-pod3aa37a33_e030_4599_a436_f41783be707d.slice - libcontainer container kubepods-besteffort-pod3aa37a33_e030_4599_a436_f41783be707d.slice. Feb 13 23:54:42.349272 kubelet[1901]: I0213 23:54:42.349112 1901 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/3aa37a33-e030-4599-a436-f41783be707d-data\") pod \"nfs-server-provisioner-0\" (UID: \"3aa37a33-e030-4599-a436-f41783be707d\") " pod="default/nfs-server-provisioner-0" Feb 13 23:54:42.349272 kubelet[1901]: I0213 23:54:42.349217 1901 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqqpc\" (UniqueName: \"kubernetes.io/projected/3aa37a33-e030-4599-a436-f41783be707d-kube-api-access-xqqpc\") pod \"nfs-server-provisioner-0\" (UID: \"3aa37a33-e030-4599-a436-f41783be707d\") " pod="default/nfs-server-provisioner-0" Feb 13 23:54:42.592866 containerd[1485]: time="2025-02-13T23:54:42.592769414Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:3aa37a33-e030-4599-a436-f41783be707d,Namespace:default,Attempt:0,}" Feb 13 23:54:42.633808 systemd-networkd[1426]: lxcf1e0154cad43: Link UP Feb 13 23:54:42.639573 kernel: eth0: renamed from tmp9d5b8 Feb 13 23:54:42.650483 systemd-networkd[1426]: lxcf1e0154cad43: Gained carrier Feb 13 23:54:42.731618 kubelet[1901]: E0213 23:54:42.731528 1901 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 23:54:42.762717 kubelet[1901]: E0213 23:54:42.762626 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 23:54:42.885444 containerd[1485]: time="2025-02-13T23:54:42.885232927Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 23:54:42.885661 containerd[1485]: time="2025-02-13T23:54:42.885317689Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 23:54:42.885661 containerd[1485]: time="2025-02-13T23:54:42.885334288Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 23:54:42.885661 containerd[1485]: time="2025-02-13T23:54:42.885418111Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 23:54:42.920421 systemd[1]: Started cri-containerd-9d5b843e3d5e5dc4528f45d9a6fa2480d8d96f2bc0f82067887f67809748d825.scope - libcontainer container 9d5b843e3d5e5dc4528f45d9a6fa2480d8d96f2bc0f82067887f67809748d825. 
Feb 13 23:54:42.968498 containerd[1485]: time="2025-02-13T23:54:42.968199555Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:3aa37a33-e030-4599-a436-f41783be707d,Namespace:default,Attempt:0,} returns sandbox id \"9d5b843e3d5e5dc4528f45d9a6fa2480d8d96f2bc0f82067887f67809748d825\"" Feb 13 23:54:42.970581 containerd[1485]: time="2025-02-13T23:54:42.970554386Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Feb 13 23:54:43.481547 systemd[1]: run-containerd-runc-k8s.io-9d5b843e3d5e5dc4528f45d9a6fa2480d8d96f2bc0f82067887f67809748d825-runc.ESipq7.mount: Deactivated successfully. Feb 13 23:54:43.763318 kubelet[1901]: E0213 23:54:43.762739 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 23:54:44.126702 systemd-networkd[1426]: lxcf1e0154cad43: Gained IPv6LL Feb 13 23:54:44.763955 kubelet[1901]: E0213 23:54:44.763888 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 23:54:45.465327 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount373892001.mount: Deactivated successfully. Feb 13 23:54:45.765565 kubelet[1901]: E0213 23:54:45.764865 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 23:54:46.766092 kubelet[1901]: E0213 23:54:46.766054 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 23:54:47.266615 containerd[1485]: time="2025-02-13T23:54:47.266553034Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 23:54:47.267603 containerd[1485]: time="2025-02-13T23:54:47.267558616Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039414" Feb 13 23:54:47.268237 containerd[1485]: time="2025-02-13T23:54:47.268004961Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 23:54:47.270592 containerd[1485]: time="2025-02-13T23:54:47.270537951Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 23:54:47.271692 containerd[1485]: time="2025-02-13T23:54:47.271657557Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 4.301071364s" Feb 13 23:54:47.271903 containerd[1485]: time="2025-02-13T23:54:47.271784944Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Feb 13 23:54:47.274256 containerd[1485]: time="2025-02-13T23:54:47.274136906Z" level=info msg="CreateContainer within sandbox \"9d5b843e3d5e5dc4528f45d9a6fa2480d8d96f2bc0f82067887f67809748d825\" for container 
&ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Feb 13 23:54:47.281899 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount726345738.mount: Deactivated successfully. Feb 13 23:54:47.287161 containerd[1485]: time="2025-02-13T23:54:47.287098514Z" level=info msg="CreateContainer within sandbox \"9d5b843e3d5e5dc4528f45d9a6fa2480d8d96f2bc0f82067887f67809748d825\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"30d5398bad018122e9be294c44c2e1bbb71f7647e9a36f9f7f4de2b5c23a20ec\"" Feb 13 23:54:47.287968 containerd[1485]: time="2025-02-13T23:54:47.287895372Z" level=info msg="StartContainer for \"30d5398bad018122e9be294c44c2e1bbb71f7647e9a36f9f7f4de2b5c23a20ec\"" Feb 13 23:54:47.335222 systemd[1]: Started cri-containerd-30d5398bad018122e9be294c44c2e1bbb71f7647e9a36f9f7f4de2b5c23a20ec.scope - libcontainer container 30d5398bad018122e9be294c44c2e1bbb71f7647e9a36f9f7f4de2b5c23a20ec. Feb 13 23:54:47.362891 containerd[1485]: time="2025-02-13T23:54:47.362735322Z" level=info msg="StartContainer for \"30d5398bad018122e9be294c44c2e1bbb71f7647e9a36f9f7f4de2b5c23a20ec\" returns successfully" Feb 13 23:54:47.766643 kubelet[1901]: E0213 23:54:47.766535 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 23:54:48.045160 kubelet[1901]: I0213 23:54:48.044939 1901 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.742470795 podStartE2EDuration="6.04490362s" podCreationTimestamp="2025-02-13 23:54:42 +0000 UTC" firstStartedPulling="2025-02-13 23:54:42.970128797 +0000 UTC m=+40.749993424" lastFinishedPulling="2025-02-13 23:54:47.272561618 +0000 UTC m=+45.052426249" observedRunningTime="2025-02-13 23:54:48.043521465 +0000 UTC m=+45.823386136" watchObservedRunningTime="2025-02-13 23:54:48.04490362 +0000 UTC m=+45.824768343" Feb 13 23:54:48.767135 kubelet[1901]: E0213 23:54:48.767072 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 23:54:49.767938 kubelet[1901]: E0213 23:54:49.767834 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 23:54:50.768423 kubelet[1901]: E0213 23:54:50.768318 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 23:54:51.769479 kubelet[1901]: E0213 23:54:51.769379 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 23:54:52.770173 kubelet[1901]: E0213 23:54:52.770091 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 23:54:53.771570 kubelet[1901]: E0213 23:54:53.771301 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 23:54:54.772382 kubelet[1901]: E0213 23:54:54.772248 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 23:54:55.773137 kubelet[1901]: E0213 23:54:55.773028 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 23:54:56.773865 kubelet[1901]: E0213 23:54:56.773767 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Feb 13 23:54:57.212396 systemd[1]: Created slice kubepods-besteffort-pod0d230cb2_a383_4c0a_8cc0_bb2299f5fb30.slice - libcontainer container kubepods-besteffort-pod0d230cb2_a383_4c0a_8cc0_bb2299f5fb30.slice. Feb 13 23:54:57.256164 kubelet[1901]: I0213 23:54:57.256056 1901 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-5daf9b13-a802-4107-9a84-d41e0e042180\" (UniqueName: \"kubernetes.io/nfs/0d230cb2-a383-4c0a-8cc0-bb2299f5fb30-pvc-5daf9b13-a802-4107-9a84-d41e0e042180\") pod \"test-pod-1\" (UID: \"0d230cb2-a383-4c0a-8cc0-bb2299f5fb30\") " pod="default/test-pod-1" Feb 13 23:54:57.256164 kubelet[1901]: I0213 23:54:57.256155 1901 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xj9f6\" (UniqueName: \"kubernetes.io/projected/0d230cb2-a383-4c0a-8cc0-bb2299f5fb30-kube-api-access-xj9f6\") pod \"test-pod-1\" (UID: \"0d230cb2-a383-4c0a-8cc0-bb2299f5fb30\") " pod="default/test-pod-1" Feb 13 23:54:57.400549 kernel: FS-Cache: Loaded Feb 13 23:54:57.483351 kernel: RPC: Registered named UNIX socket transport module. Feb 13 23:54:57.483527 kernel: RPC: Registered udp transport module. Feb 13 23:54:57.483602 kernel: RPC: Registered tcp transport module. Feb 13 23:54:57.484057 kernel: RPC: Registered tcp-with-tls transport module. Feb 13 23:54:57.485421 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Feb 13 23:54:57.774848 kubelet[1901]: E0213 23:54:57.774534 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 23:54:57.826444 kernel: NFS: Registering the id_resolver key type Feb 13 23:54:57.827100 kernel: Key type id_resolver registered Feb 13 23:54:57.827222 kernel: Key type id_legacy registered Feb 13 23:54:57.892107 nfsidmap[3306]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'gb1.brightbox.com' Feb 13 23:54:57.907943 nfsidmap[3309]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'gb1.brightbox.com' Feb 13 23:54:58.117261 containerd[1485]: time="2025-02-13T23:54:58.116869565Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:0d230cb2-a383-4c0a-8cc0-bb2299f5fb30,Namespace:default,Attempt:0,}" Feb 13 23:54:58.158789 systemd-networkd[1426]: lxc2b57c67c31ae: Link UP Feb 13 23:54:58.171271 kernel: eth0: renamed from tmp894f5 Feb 13 23:54:58.177164 systemd-networkd[1426]: lxc2b57c67c31ae: Gained carrier Feb 13 23:54:58.408702 containerd[1485]: time="2025-02-13T23:54:58.408348290Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 23:54:58.409108 containerd[1485]: time="2025-02-13T23:54:58.408443871Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 23:54:58.409108 containerd[1485]: time="2025-02-13T23:54:58.408479950Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 23:54:58.409108 containerd[1485]: time="2025-02-13T23:54:58.408633618Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 23:54:58.430710 systemd[1]: run-containerd-runc-k8s.io-894f55502c0b7f654ca4a622346c8a0b288d94a8db3dbbae65d3a70e9f2c4226-runc.2x21op.mount: Deactivated successfully. Feb 13 23:54:58.442482 systemd[1]: Started cri-containerd-894f55502c0b7f654ca4a622346c8a0b288d94a8db3dbbae65d3a70e9f2c4226.scope - libcontainer container 894f55502c0b7f654ca4a622346c8a0b288d94a8db3dbbae65d3a70e9f2c4226. Feb 13 23:54:58.496148 containerd[1485]: time="2025-02-13T23:54:58.496087920Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:0d230cb2-a383-4c0a-8cc0-bb2299f5fb30,Namespace:default,Attempt:0,} returns sandbox id \"894f55502c0b7f654ca4a622346c8a0b288d94a8db3dbbae65d3a70e9f2c4226\"" Feb 13 23:54:58.498784 containerd[1485]: time="2025-02-13T23:54:58.498756093Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 13 23:54:58.775467 kubelet[1901]: E0213 23:54:58.775205 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 23:54:58.898389 containerd[1485]: time="2025-02-13T23:54:58.898222118Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 23:54:58.898389 containerd[1485]: time="2025-02-13T23:54:58.898321956Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" Feb 13 23:54:58.931072 containerd[1485]: time="2025-02-13T23:54:58.931021717Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\", size \"73054371\" in 432.221803ms" Feb 13 23:54:58.931072 containerd[1485]: time="2025-02-13T23:54:58.931069419Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\"" Feb 13 23:54:58.938142 containerd[1485]: time="2025-02-13T23:54:58.938017505Z" level=info msg="CreateContainer within sandbox \"894f55502c0b7f654ca4a622346c8a0b288d94a8db3dbbae65d3a70e9f2c4226\" for container &ContainerMetadata{Name:test,Attempt:0,}" Feb 13 23:54:58.960677 containerd[1485]: time="2025-02-13T23:54:58.960522456Z" level=info msg="CreateContainer within sandbox \"894f55502c0b7f654ca4a622346c8a0b288d94a8db3dbbae65d3a70e9f2c4226\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"2174ee4c5f6d6daded1f4096b3b89b645e64d12bbd48d3a7cc80852312a482c8\"" Feb 13 23:54:58.961444 containerd[1485]: time="2025-02-13T23:54:58.961393492Z" level=info msg="StartContainer for \"2174ee4c5f6d6daded1f4096b3b89b645e64d12bbd48d3a7cc80852312a482c8\"" Feb 13 23:54:58.993152 systemd[1]: Started cri-containerd-2174ee4c5f6d6daded1f4096b3b89b645e64d12bbd48d3a7cc80852312a482c8.scope - libcontainer container 2174ee4c5f6d6daded1f4096b3b89b645e64d12bbd48d3a7cc80852312a482c8. 
Feb 13 23:54:59.016248 containerd[1485]: time="2025-02-13T23:54:59.016139670Z" level=info msg="StartContainer for \"2174ee4c5f6d6daded1f4096b3b89b645e64d12bbd48d3a7cc80852312a482c8\" returns successfully" Feb 13 23:54:59.072500 kubelet[1901]: I0213 23:54:59.072340 1901 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=14.638758969 podStartE2EDuration="15.072270838s" podCreationTimestamp="2025-02-13 23:54:44 +0000 UTC" firstStartedPulling="2025-02-13 23:54:58.498182381 +0000 UTC m=+56.278047008" lastFinishedPulling="2025-02-13 23:54:58.931694246 +0000 UTC m=+56.711558877" observedRunningTime="2025-02-13 23:54:59.071958638 +0000 UTC m=+56.851823388" watchObservedRunningTime="2025-02-13 23:54:59.072270838 +0000 UTC m=+56.852135558" Feb 13 23:54:59.486370 systemd-networkd[1426]: lxc2b57c67c31ae: Gained IPv6LL Feb 13 23:54:59.776514 kubelet[1901]: E0213 23:54:59.776228 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 23:55:00.776861 kubelet[1901]: E0213 23:55:00.776762 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 23:55:01.777857 kubelet[1901]: E0213 23:55:01.777742 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 23:55:02.732132 kubelet[1901]: E0213 23:55:02.732026 1901 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 23:55:02.778936 kubelet[1901]: E0213 23:55:02.778862 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 23:55:02.803618 systemd[1]: run-containerd-runc-k8s.io-c1f137bc5f689f09d4c5e488823047d48d94d89891995414b4c915ddb2fa965d-runc.6Jxt3l.mount: Deactivated successfully. Feb 13 23:55:02.843100 containerd[1485]: time="2025-02-13T23:55:02.843052877Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 23:55:02.857915 kubelet[1901]: E0213 23:55:02.856226 1901 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 23:55:02.941895 containerd[1485]: time="2025-02-13T23:55:02.941784833Z" level=info msg="StopContainer for \"c1f137bc5f689f09d4c5e488823047d48d94d89891995414b4c915ddb2fa965d\" with timeout 2 (s)" Feb 13 23:55:02.960971 containerd[1485]: time="2025-02-13T23:55:02.960795768Z" level=info msg="Stop container \"c1f137bc5f689f09d4c5e488823047d48d94d89891995414b4c915ddb2fa965d\" with signal terminated" Feb 13 23:55:02.974663 systemd-networkd[1426]: lxc_health: Link DOWN Feb 13 23:55:02.975688 systemd-networkd[1426]: lxc_health: Lost carrier Feb 13 23:55:02.999858 systemd[1]: cri-containerd-c1f137bc5f689f09d4c5e488823047d48d94d89891995414b4c915ddb2fa965d.scope: Deactivated successfully. Feb 13 23:55:03.000851 systemd[1]: cri-containerd-c1f137bc5f689f09d4c5e488823047d48d94d89891995414b4c915ddb2fa965d.scope: Consumed 7.424s CPU time. 
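The podStartSLOduration and podStartE2EDuration figures in the two pod_startup_latency_tracker lines are consistent with the SLO number being the end-to-end startup time minus the image-pull window (lastFinishedPulling − firstStartedPulling); the same relation holds for the nfs-server-provisioner-0 line earlier, within rounding of the monotonic offsets. A quick check with the test-pod-1 values from the log:

package main

import "fmt"

func main() {
	// Values copied from the pod_startup_latency_tracker line for test-pod-1.
	e2e := 15.072270838                 // podStartE2EDuration, seconds
	pull := 58.931694246 - 58.498182381 // lastFinishedPulling - firstStartedPulling, seconds
	fmt.Printf("e2e - pull = %.9f s\n", e2e-pull) // ~14.638758973, vs reported SLO 14.638758969
}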
Feb 13 23:55:03.030803 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c1f137bc5f689f09d4c5e488823047d48d94d89891995414b4c915ddb2fa965d-rootfs.mount: Deactivated successfully. Feb 13 23:55:03.070188 containerd[1485]: time="2025-02-13T23:55:03.038574184Z" level=info msg="shim disconnected" id=c1f137bc5f689f09d4c5e488823047d48d94d89891995414b4c915ddb2fa965d namespace=k8s.io Feb 13 23:55:03.070188 containerd[1485]: time="2025-02-13T23:55:03.070181158Z" level=warning msg="cleaning up after shim disconnected" id=c1f137bc5f689f09d4c5e488823047d48d94d89891995414b4c915ddb2fa965d namespace=k8s.io Feb 13 23:55:03.070188 containerd[1485]: time="2025-02-13T23:55:03.070196994Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 23:55:03.093411 containerd[1485]: time="2025-02-13T23:55:03.092893086Z" level=info msg="StopContainer for \"c1f137bc5f689f09d4c5e488823047d48d94d89891995414b4c915ddb2fa965d\" returns successfully" Feb 13 23:55:03.107779 containerd[1485]: time="2025-02-13T23:55:03.107683207Z" level=info msg="StopPodSandbox for \"8c8894e94679612821291280e1dfbbceded2aadb10f140d06ed104178d939f94\"" Feb 13 23:55:03.107779 containerd[1485]: time="2025-02-13T23:55:03.107751893Z" level=info msg="Container to stop \"2b6b30167f0a8a94bc49e4325f8e50129ec7d0f0214ec4951916e8c4dd9961aa\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 23:55:03.107779 containerd[1485]: time="2025-02-13T23:55:03.107766824Z" level=info msg="Container to stop \"f6e0e55209f7284eeae8ba0f9e1730f30399367e3d72b55cccd19f4e11f3b4b0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 23:55:03.107779 containerd[1485]: time="2025-02-13T23:55:03.107779863Z" level=info msg="Container to stop \"00383e6e2ad0bb05804dec53d2b9e784a57cae6e2aad42040bef5072fc42519a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 23:55:03.107779 containerd[1485]: time="2025-02-13T23:55:03.107789968Z" level=info msg="Container to stop \"d90044d7e12111302d705ebd5d31a7ff3a051161df426741cf019ea75c19634e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 23:55:03.108538 containerd[1485]: time="2025-02-13T23:55:03.107799409Z" level=info msg="Container to stop \"c1f137bc5f689f09d4c5e488823047d48d94d89891995414b4c915ddb2fa965d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 23:55:03.109763 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8c8894e94679612821291280e1dfbbceded2aadb10f140d06ed104178d939f94-shm.mount: Deactivated successfully. Feb 13 23:55:03.122029 systemd[1]: cri-containerd-8c8894e94679612821291280e1dfbbceded2aadb10f140d06ed104178d939f94.scope: Deactivated successfully. Feb 13 23:55:03.145305 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8c8894e94679612821291280e1dfbbceded2aadb10f140d06ed104178d939f94-rootfs.mount: Deactivated successfully. 
Feb 13 23:55:03.146847 containerd[1485]: time="2025-02-13T23:55:03.146585435Z" level=info msg="shim disconnected" id=8c8894e94679612821291280e1dfbbceded2aadb10f140d06ed104178d939f94 namespace=k8s.io Feb 13 23:55:03.146847 containerd[1485]: time="2025-02-13T23:55:03.146642135Z" level=warning msg="cleaning up after shim disconnected" id=8c8894e94679612821291280e1dfbbceded2aadb10f140d06ed104178d939f94 namespace=k8s.io Feb 13 23:55:03.146847 containerd[1485]: time="2025-02-13T23:55:03.146651356Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 23:55:03.169171 containerd[1485]: time="2025-02-13T23:55:03.169098108Z" level=info msg="TearDown network for sandbox \"8c8894e94679612821291280e1dfbbceded2aadb10f140d06ed104178d939f94\" successfully" Feb 13 23:55:03.169171 containerd[1485]: time="2025-02-13T23:55:03.169151027Z" level=info msg="StopPodSandbox for \"8c8894e94679612821291280e1dfbbceded2aadb10f140d06ed104178d939f94\" returns successfully" Feb 13 23:55:03.199012 kubelet[1901]: I0213 23:55:03.196831 1901 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d0008fba-dde3-40ba-8ed1-b3d76a1dce97-lib-modules\") pod \"d0008fba-dde3-40ba-8ed1-b3d76a1dce97\" (UID: \"d0008fba-dde3-40ba-8ed1-b3d76a1dce97\") " Feb 13 23:55:03.199012 kubelet[1901]: I0213 23:55:03.196892 1901 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d0008fba-dde3-40ba-8ed1-b3d76a1dce97-hubble-tls\") pod \"d0008fba-dde3-40ba-8ed1-b3d76a1dce97\" (UID: \"d0008fba-dde3-40ba-8ed1-b3d76a1dce97\") " Feb 13 23:55:03.199012 kubelet[1901]: I0213 23:55:03.196925 1901 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gcj9d\" (UniqueName: \"kubernetes.io/projected/d0008fba-dde3-40ba-8ed1-b3d76a1dce97-kube-api-access-gcj9d\") pod \"d0008fba-dde3-40ba-8ed1-b3d76a1dce97\" (UID: \"d0008fba-dde3-40ba-8ed1-b3d76a1dce97\") " Feb 13 23:55:03.199012 kubelet[1901]: I0213 23:55:03.196950 1901 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d0008fba-dde3-40ba-8ed1-b3d76a1dce97-hostproc\") pod \"d0008fba-dde3-40ba-8ed1-b3d76a1dce97\" (UID: \"d0008fba-dde3-40ba-8ed1-b3d76a1dce97\") " Feb 13 23:55:03.199012 kubelet[1901]: I0213 23:55:03.196972 1901 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d0008fba-dde3-40ba-8ed1-b3d76a1dce97-host-proc-sys-kernel\") pod \"d0008fba-dde3-40ba-8ed1-b3d76a1dce97\" (UID: \"d0008fba-dde3-40ba-8ed1-b3d76a1dce97\") " Feb 13 23:55:03.199012 kubelet[1901]: I0213 23:55:03.197013 1901 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d0008fba-dde3-40ba-8ed1-b3d76a1dce97-host-proc-sys-net\") pod \"d0008fba-dde3-40ba-8ed1-b3d76a1dce97\" (UID: \"d0008fba-dde3-40ba-8ed1-b3d76a1dce97\") " Feb 13 23:55:03.199557 kubelet[1901]: I0213 23:55:03.197036 1901 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d0008fba-dde3-40ba-8ed1-b3d76a1dce97-cilium-run\") pod \"d0008fba-dde3-40ba-8ed1-b3d76a1dce97\" (UID: \"d0008fba-dde3-40ba-8ed1-b3d76a1dce97\") " Feb 13 23:55:03.199557 kubelet[1901]: I0213 23:55:03.197055 1901 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d0008fba-dde3-40ba-8ed1-b3d76a1dce97-cni-path\") pod \"d0008fba-dde3-40ba-8ed1-b3d76a1dce97\" (UID: \"d0008fba-dde3-40ba-8ed1-b3d76a1dce97\") " Feb 13 23:55:03.199557 kubelet[1901]: I0213 23:55:03.197046 1901 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d0008fba-dde3-40ba-8ed1-b3d76a1dce97-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "d0008fba-dde3-40ba-8ed1-b3d76a1dce97" (UID: "d0008fba-dde3-40ba-8ed1-b3d76a1dce97"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 23:55:03.199557 kubelet[1901]: I0213 23:55:03.197079 1901 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d0008fba-dde3-40ba-8ed1-b3d76a1dce97-clustermesh-secrets\") pod \"d0008fba-dde3-40ba-8ed1-b3d76a1dce97\" (UID: \"d0008fba-dde3-40ba-8ed1-b3d76a1dce97\") " Feb 13 23:55:03.199557 kubelet[1901]: I0213 23:55:03.197167 1901 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d0008fba-dde3-40ba-8ed1-b3d76a1dce97-xtables-lock\") pod \"d0008fba-dde3-40ba-8ed1-b3d76a1dce97\" (UID: \"d0008fba-dde3-40ba-8ed1-b3d76a1dce97\") " Feb 13 23:55:03.199557 kubelet[1901]: I0213 23:55:03.197197 1901 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d0008fba-dde3-40ba-8ed1-b3d76a1dce97-cilium-cgroup\") pod \"d0008fba-dde3-40ba-8ed1-b3d76a1dce97\" (UID: \"d0008fba-dde3-40ba-8ed1-b3d76a1dce97\") " Feb 13 23:55:03.199931 kubelet[1901]: I0213 23:55:03.197261 1901 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d0008fba-dde3-40ba-8ed1-b3d76a1dce97-cilium-config-path\") pod \"d0008fba-dde3-40ba-8ed1-b3d76a1dce97\" (UID: \"d0008fba-dde3-40ba-8ed1-b3d76a1dce97\") " Feb 13 23:55:03.199931 kubelet[1901]: I0213 23:55:03.197334 1901 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d0008fba-dde3-40ba-8ed1-b3d76a1dce97-bpf-maps\") pod \"d0008fba-dde3-40ba-8ed1-b3d76a1dce97\" (UID: \"d0008fba-dde3-40ba-8ed1-b3d76a1dce97\") " Feb 13 23:55:03.199931 kubelet[1901]: I0213 23:55:03.197362 1901 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d0008fba-dde3-40ba-8ed1-b3d76a1dce97-etc-cni-netd\") pod \"d0008fba-dde3-40ba-8ed1-b3d76a1dce97\" (UID: \"d0008fba-dde3-40ba-8ed1-b3d76a1dce97\") " Feb 13 23:55:03.199931 kubelet[1901]: I0213 23:55:03.197417 1901 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d0008fba-dde3-40ba-8ed1-b3d76a1dce97-lib-modules\") on node \"10.244.103.218\" DevicePath \"\"" Feb 13 23:55:03.199931 kubelet[1901]: I0213 23:55:03.197456 1901 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d0008fba-dde3-40ba-8ed1-b3d76a1dce97-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "d0008fba-dde3-40ba-8ed1-b3d76a1dce97" (UID: "d0008fba-dde3-40ba-8ed1-b3d76a1dce97"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 23:55:03.199931 kubelet[1901]: I0213 23:55:03.197486 1901 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d0008fba-dde3-40ba-8ed1-b3d76a1dce97-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "d0008fba-dde3-40ba-8ed1-b3d76a1dce97" (UID: "d0008fba-dde3-40ba-8ed1-b3d76a1dce97"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 23:55:03.200313 kubelet[1901]: I0213 23:55:03.197511 1901 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d0008fba-dde3-40ba-8ed1-b3d76a1dce97-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "d0008fba-dde3-40ba-8ed1-b3d76a1dce97" (UID: "d0008fba-dde3-40ba-8ed1-b3d76a1dce97"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 23:55:03.203228 kubelet[1901]: I0213 23:55:03.203178 1901 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d0008fba-dde3-40ba-8ed1-b3d76a1dce97-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "d0008fba-dde3-40ba-8ed1-b3d76a1dce97" (UID: "d0008fba-dde3-40ba-8ed1-b3d76a1dce97"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 13 23:55:03.203392 kubelet[1901]: I0213 23:55:03.203376 1901 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d0008fba-dde3-40ba-8ed1-b3d76a1dce97-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "d0008fba-dde3-40ba-8ed1-b3d76a1dce97" (UID: "d0008fba-dde3-40ba-8ed1-b3d76a1dce97"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 23:55:03.203557 kubelet[1901]: I0213 23:55:03.203542 1901 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0008fba-dde3-40ba-8ed1-b3d76a1dce97-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "d0008fba-dde3-40ba-8ed1-b3d76a1dce97" (UID: "d0008fba-dde3-40ba-8ed1-b3d76a1dce97"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 13 23:55:03.205152 kubelet[1901]: I0213 23:55:03.205121 1901 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d0008fba-dde3-40ba-8ed1-b3d76a1dce97-hostproc" (OuterVolumeSpecName: "hostproc") pod "d0008fba-dde3-40ba-8ed1-b3d76a1dce97" (UID: "d0008fba-dde3-40ba-8ed1-b3d76a1dce97"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 23:55:03.205270 kubelet[1901]: I0213 23:55:03.205166 1901 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d0008fba-dde3-40ba-8ed1-b3d76a1dce97-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "d0008fba-dde3-40ba-8ed1-b3d76a1dce97" (UID: "d0008fba-dde3-40ba-8ed1-b3d76a1dce97"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 23:55:03.205270 kubelet[1901]: I0213 23:55:03.205186 1901 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d0008fba-dde3-40ba-8ed1-b3d76a1dce97-cni-path" (OuterVolumeSpecName: "cni-path") pod "d0008fba-dde3-40ba-8ed1-b3d76a1dce97" (UID: "d0008fba-dde3-40ba-8ed1-b3d76a1dce97"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 23:55:03.205270 kubelet[1901]: I0213 23:55:03.205206 1901 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d0008fba-dde3-40ba-8ed1-b3d76a1dce97-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "d0008fba-dde3-40ba-8ed1-b3d76a1dce97" (UID: "d0008fba-dde3-40ba-8ed1-b3d76a1dce97"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 23:55:03.205442 kubelet[1901]: I0213 23:55:03.205420 1901 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d0008fba-dde3-40ba-8ed1-b3d76a1dce97-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "d0008fba-dde3-40ba-8ed1-b3d76a1dce97" (UID: "d0008fba-dde3-40ba-8ed1-b3d76a1dce97"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 23:55:03.206143 kubelet[1901]: I0213 23:55:03.206117 1901 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0008fba-dde3-40ba-8ed1-b3d76a1dce97-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "d0008fba-dde3-40ba-8ed1-b3d76a1dce97" (UID: "d0008fba-dde3-40ba-8ed1-b3d76a1dce97"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 23:55:03.206900 kubelet[1901]: I0213 23:55:03.206875 1901 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0008fba-dde3-40ba-8ed1-b3d76a1dce97-kube-api-access-gcj9d" (OuterVolumeSpecName: "kube-api-access-gcj9d") pod "d0008fba-dde3-40ba-8ed1-b3d76a1dce97" (UID: "d0008fba-dde3-40ba-8ed1-b3d76a1dce97"). InnerVolumeSpecName "kube-api-access-gcj9d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 23:55:03.298610 kubelet[1901]: I0213 23:55:03.298433 1901 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d0008fba-dde3-40ba-8ed1-b3d76a1dce97-hostproc\") on node \"10.244.103.218\" DevicePath \"\"" Feb 13 23:55:03.298610 kubelet[1901]: I0213 23:55:03.298505 1901 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d0008fba-dde3-40ba-8ed1-b3d76a1dce97-hubble-tls\") on node \"10.244.103.218\" DevicePath \"\"" Feb 13 23:55:03.298610 kubelet[1901]: I0213 23:55:03.298532 1901 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-gcj9d\" (UniqueName: \"kubernetes.io/projected/d0008fba-dde3-40ba-8ed1-b3d76a1dce97-kube-api-access-gcj9d\") on node \"10.244.103.218\" DevicePath \"\"" Feb 13 23:55:03.298610 kubelet[1901]: I0213 23:55:03.298560 1901 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d0008fba-dde3-40ba-8ed1-b3d76a1dce97-cilium-run\") on node \"10.244.103.218\" DevicePath \"\"" Feb 13 23:55:03.298610 kubelet[1901]: I0213 23:55:03.298582 1901 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d0008fba-dde3-40ba-8ed1-b3d76a1dce97-host-proc-sys-kernel\") on node \"10.244.103.218\" DevicePath \"\"" Feb 13 23:55:03.298610 kubelet[1901]: I0213 23:55:03.298605 1901 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d0008fba-dde3-40ba-8ed1-b3d76a1dce97-host-proc-sys-net\") on node \"10.244.103.218\" DevicePath \"\"" Feb 13 23:55:03.298610 kubelet[1901]: I0213 23:55:03.298626 1901 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d0008fba-dde3-40ba-8ed1-b3d76a1dce97-xtables-lock\") on node \"10.244.103.218\" DevicePath \"\"" Feb 13 23:55:03.298610 kubelet[1901]: I0213 23:55:03.298645 1901 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d0008fba-dde3-40ba-8ed1-b3d76a1dce97-cni-path\") on node \"10.244.103.218\" DevicePath \"\"" Feb 13 23:55:03.299533 kubelet[1901]: I0213 23:55:03.298666 1901 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d0008fba-dde3-40ba-8ed1-b3d76a1dce97-clustermesh-secrets\") on node \"10.244.103.218\" DevicePath \"\"" Feb 13 23:55:03.299533 kubelet[1901]: I0213 23:55:03.298687 1901 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d0008fba-dde3-40ba-8ed1-b3d76a1dce97-etc-cni-netd\") on node \"10.244.103.218\" DevicePath \"\"" Feb 13 23:55:03.299533 kubelet[1901]: I0213 23:55:03.298707 1901 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d0008fba-dde3-40ba-8ed1-b3d76a1dce97-cilium-cgroup\") on node \"10.244.103.218\" DevicePath \"\"" Feb 13 23:55:03.299533 kubelet[1901]: I0213 23:55:03.298726 1901 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d0008fba-dde3-40ba-8ed1-b3d76a1dce97-cilium-config-path\") on node \"10.244.103.218\" DevicePath \"\"" Feb 13 23:55:03.299533 kubelet[1901]: I0213 23:55:03.298745 1901 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/d0008fba-dde3-40ba-8ed1-b3d76a1dce97-bpf-maps\") on node \"10.244.103.218\" DevicePath \"\"" Feb 13 23:55:03.779952 kubelet[1901]: E0213 23:55:03.779706 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 23:55:03.802515 systemd[1]: var-lib-kubelet-pods-d0008fba\x2ddde3\x2d40ba\x2d8ed1\x2db3d76a1dce97-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dgcj9d.mount: Deactivated successfully. Feb 13 23:55:03.803055 systemd[1]: var-lib-kubelet-pods-d0008fba\x2ddde3\x2d40ba\x2d8ed1\x2db3d76a1dce97-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 13 23:55:03.803396 systemd[1]: var-lib-kubelet-pods-d0008fba\x2ddde3\x2d40ba\x2d8ed1\x2db3d76a1dce97-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 13 23:55:04.077758 kubelet[1901]: I0213 23:55:04.077236 1901 scope.go:117] "RemoveContainer" containerID="c1f137bc5f689f09d4c5e488823047d48d94d89891995414b4c915ddb2fa965d" Feb 13 23:55:04.080039 containerd[1485]: time="2025-02-13T23:55:04.079857791Z" level=info msg="RemoveContainer for \"c1f137bc5f689f09d4c5e488823047d48d94d89891995414b4c915ddb2fa965d\"" Feb 13 23:55:04.084012 containerd[1485]: time="2025-02-13T23:55:04.083718752Z" level=info msg="RemoveContainer for \"c1f137bc5f689f09d4c5e488823047d48d94d89891995414b4c915ddb2fa965d\" returns successfully" Feb 13 23:55:04.086548 kubelet[1901]: I0213 23:55:04.086529 1901 scope.go:117] "RemoveContainer" containerID="d90044d7e12111302d705ebd5d31a7ff3a051161df426741cf019ea75c19634e" Feb 13 23:55:04.086713 systemd[1]: Removed slice kubepods-burstable-podd0008fba_dde3_40ba_8ed1_b3d76a1dce97.slice - libcontainer container kubepods-burstable-podd0008fba_dde3_40ba_8ed1_b3d76a1dce97.slice. Feb 13 23:55:04.086842 systemd[1]: kubepods-burstable-podd0008fba_dde3_40ba_8ed1_b3d76a1dce97.slice: Consumed 7.522s CPU time. 
Feb 13 23:55:04.088757 containerd[1485]: time="2025-02-13T23:55:04.088725889Z" level=info msg="RemoveContainer for \"d90044d7e12111302d705ebd5d31a7ff3a051161df426741cf019ea75c19634e\"" Feb 13 23:55:04.091038 containerd[1485]: time="2025-02-13T23:55:04.090968694Z" level=info msg="RemoveContainer for \"d90044d7e12111302d705ebd5d31a7ff3a051161df426741cf019ea75c19634e\" returns successfully" Feb 13 23:55:04.091199 kubelet[1901]: I0213 23:55:04.091171 1901 scope.go:117] "RemoveContainer" containerID="f6e0e55209f7284eeae8ba0f9e1730f30399367e3d72b55cccd19f4e11f3b4b0" Feb 13 23:55:04.092165 containerd[1485]: time="2025-02-13T23:55:04.092144981Z" level=info msg="RemoveContainer for \"f6e0e55209f7284eeae8ba0f9e1730f30399367e3d72b55cccd19f4e11f3b4b0\"" Feb 13 23:55:04.094714 containerd[1485]: time="2025-02-13T23:55:04.094632467Z" level=info msg="RemoveContainer for \"f6e0e55209f7284eeae8ba0f9e1730f30399367e3d72b55cccd19f4e11f3b4b0\" returns successfully" Feb 13 23:55:04.094831 kubelet[1901]: I0213 23:55:04.094798 1901 scope.go:117] "RemoveContainer" containerID="00383e6e2ad0bb05804dec53d2b9e784a57cae6e2aad42040bef5072fc42519a" Feb 13 23:55:04.099701 containerd[1485]: time="2025-02-13T23:55:04.099536534Z" level=info msg="RemoveContainer for \"00383e6e2ad0bb05804dec53d2b9e784a57cae6e2aad42040bef5072fc42519a\"" Feb 13 23:55:04.101818 containerd[1485]: time="2025-02-13T23:55:04.101747662Z" level=info msg="RemoveContainer for \"00383e6e2ad0bb05804dec53d2b9e784a57cae6e2aad42040bef5072fc42519a\" returns successfully" Feb 13 23:55:04.102787 kubelet[1901]: I0213 23:55:04.102001 1901 scope.go:117] "RemoveContainer" containerID="2b6b30167f0a8a94bc49e4325f8e50129ec7d0f0214ec4951916e8c4dd9961aa" Feb 13 23:55:04.103435 containerd[1485]: time="2025-02-13T23:55:04.103212609Z" level=info msg="RemoveContainer for \"2b6b30167f0a8a94bc49e4325f8e50129ec7d0f0214ec4951916e8c4dd9961aa\"" Feb 13 23:55:04.104745 containerd[1485]: time="2025-02-13T23:55:04.104719760Z" level=info msg="RemoveContainer for \"2b6b30167f0a8a94bc49e4325f8e50129ec7d0f0214ec4951916e8c4dd9961aa\" returns successfully" Feb 13 23:55:04.104966 kubelet[1901]: I0213 23:55:04.104943 1901 scope.go:117] "RemoveContainer" containerID="c1f137bc5f689f09d4c5e488823047d48d94d89891995414b4c915ddb2fa965d" Feb 13 23:55:04.109141 containerd[1485]: time="2025-02-13T23:55:04.109087489Z" level=error msg="ContainerStatus for \"c1f137bc5f689f09d4c5e488823047d48d94d89891995414b4c915ddb2fa965d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c1f137bc5f689f09d4c5e488823047d48d94d89891995414b4c915ddb2fa965d\": not found" Feb 13 23:55:04.121456 kubelet[1901]: E0213 23:55:04.121245 1901 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c1f137bc5f689f09d4c5e488823047d48d94d89891995414b4c915ddb2fa965d\": not found" containerID="c1f137bc5f689f09d4c5e488823047d48d94d89891995414b4c915ddb2fa965d" Feb 13 23:55:04.121456 kubelet[1901]: I0213 23:55:04.121285 1901 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c1f137bc5f689f09d4c5e488823047d48d94d89891995414b4c915ddb2fa965d"} err="failed to get container status \"c1f137bc5f689f09d4c5e488823047d48d94d89891995414b4c915ddb2fa965d\": rpc error: code = NotFound desc = an error occurred when try to find container \"c1f137bc5f689f09d4c5e488823047d48d94d89891995414b4c915ddb2fa965d\": not found" Feb 13 23:55:04.121456 kubelet[1901]: I0213 23:55:04.121385 
1901 scope.go:117] "RemoveContainer" containerID="d90044d7e12111302d705ebd5d31a7ff3a051161df426741cf019ea75c19634e" Feb 13 23:55:04.121615 containerd[1485]: time="2025-02-13T23:55:04.121582101Z" level=error msg="ContainerStatus for \"d90044d7e12111302d705ebd5d31a7ff3a051161df426741cf019ea75c19634e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d90044d7e12111302d705ebd5d31a7ff3a051161df426741cf019ea75c19634e\": not found" Feb 13 23:55:04.121763 kubelet[1901]: E0213 23:55:04.121726 1901 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d90044d7e12111302d705ebd5d31a7ff3a051161df426741cf019ea75c19634e\": not found" containerID="d90044d7e12111302d705ebd5d31a7ff3a051161df426741cf019ea75c19634e" Feb 13 23:55:04.121917 kubelet[1901]: I0213 23:55:04.121842 1901 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d90044d7e12111302d705ebd5d31a7ff3a051161df426741cf019ea75c19634e"} err="failed to get container status \"d90044d7e12111302d705ebd5d31a7ff3a051161df426741cf019ea75c19634e\": rpc error: code = NotFound desc = an error occurred when try to find container \"d90044d7e12111302d705ebd5d31a7ff3a051161df426741cf019ea75c19634e\": not found" Feb 13 23:55:04.121917 kubelet[1901]: I0213 23:55:04.121864 1901 scope.go:117] "RemoveContainer" containerID="f6e0e55209f7284eeae8ba0f9e1730f30399367e3d72b55cccd19f4e11f3b4b0" Feb 13 23:55:04.122214 containerd[1485]: time="2025-02-13T23:55:04.122019119Z" level=error msg="ContainerStatus for \"f6e0e55209f7284eeae8ba0f9e1730f30399367e3d72b55cccd19f4e11f3b4b0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f6e0e55209f7284eeae8ba0f9e1730f30399367e3d72b55cccd19f4e11f3b4b0\": not found" Feb 13 23:55:04.122281 kubelet[1901]: E0213 23:55:04.122122 1901 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f6e0e55209f7284eeae8ba0f9e1730f30399367e3d72b55cccd19f4e11f3b4b0\": not found" containerID="f6e0e55209f7284eeae8ba0f9e1730f30399367e3d72b55cccd19f4e11f3b4b0" Feb 13 23:55:04.122281 kubelet[1901]: I0213 23:55:04.122138 1901 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f6e0e55209f7284eeae8ba0f9e1730f30399367e3d72b55cccd19f4e11f3b4b0"} err="failed to get container status \"f6e0e55209f7284eeae8ba0f9e1730f30399367e3d72b55cccd19f4e11f3b4b0\": rpc error: code = NotFound desc = an error occurred when try to find container \"f6e0e55209f7284eeae8ba0f9e1730f30399367e3d72b55cccd19f4e11f3b4b0\": not found" Feb 13 23:55:04.122281 kubelet[1901]: I0213 23:55:04.122152 1901 scope.go:117] "RemoveContainer" containerID="00383e6e2ad0bb05804dec53d2b9e784a57cae6e2aad42040bef5072fc42519a" Feb 13 23:55:04.122421 containerd[1485]: time="2025-02-13T23:55:04.122296822Z" level=error msg="ContainerStatus for \"00383e6e2ad0bb05804dec53d2b9e784a57cae6e2aad42040bef5072fc42519a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"00383e6e2ad0bb05804dec53d2b9e784a57cae6e2aad42040bef5072fc42519a\": not found" Feb 13 23:55:04.122646 kubelet[1901]: E0213 23:55:04.122526 1901 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"00383e6e2ad0bb05804dec53d2b9e784a57cae6e2aad42040bef5072fc42519a\": 
not found" containerID="00383e6e2ad0bb05804dec53d2b9e784a57cae6e2aad42040bef5072fc42519a" Feb 13 23:55:04.122646 kubelet[1901]: I0213 23:55:04.122567 1901 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"00383e6e2ad0bb05804dec53d2b9e784a57cae6e2aad42040bef5072fc42519a"} err="failed to get container status \"00383e6e2ad0bb05804dec53d2b9e784a57cae6e2aad42040bef5072fc42519a\": rpc error: code = NotFound desc = an error occurred when try to find container \"00383e6e2ad0bb05804dec53d2b9e784a57cae6e2aad42040bef5072fc42519a\": not found" Feb 13 23:55:04.122646 kubelet[1901]: I0213 23:55:04.122588 1901 scope.go:117] "RemoveContainer" containerID="2b6b30167f0a8a94bc49e4325f8e50129ec7d0f0214ec4951916e8c4dd9961aa" Feb 13 23:55:04.122856 kubelet[1901]: E0213 23:55:04.122780 1901 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2b6b30167f0a8a94bc49e4325f8e50129ec7d0f0214ec4951916e8c4dd9961aa\": not found" containerID="2b6b30167f0a8a94bc49e4325f8e50129ec7d0f0214ec4951916e8c4dd9961aa" Feb 13 23:55:04.122856 kubelet[1901]: I0213 23:55:04.122796 1901 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2b6b30167f0a8a94bc49e4325f8e50129ec7d0f0214ec4951916e8c4dd9961aa"} err="failed to get container status \"2b6b30167f0a8a94bc49e4325f8e50129ec7d0f0214ec4951916e8c4dd9961aa\": rpc error: code = NotFound desc = an error occurred when try to find container \"2b6b30167f0a8a94bc49e4325f8e50129ec7d0f0214ec4951916e8c4dd9961aa\": not found" Feb 13 23:55:04.122938 containerd[1485]: time="2025-02-13T23:55:04.122710762Z" level=error msg="ContainerStatus for \"2b6b30167f0a8a94bc49e4325f8e50129ec7d0f0214ec4951916e8c4dd9961aa\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2b6b30167f0a8a94bc49e4325f8e50129ec7d0f0214ec4951916e8c4dd9961aa\": not found" Feb 13 23:55:04.330624 kubelet[1901]: I0213 23:55:04.328741 1901 setters.go:600] "Node became not ready" node="10.244.103.218" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-02-13T23:55:04Z","lastTransitionTime":"2025-02-13T23:55:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Feb 13 23:55:04.781021 kubelet[1901]: E0213 23:55:04.780738 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 23:55:04.847677 kubelet[1901]: I0213 23:55:04.847616 1901 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0008fba-dde3-40ba-8ed1-b3d76a1dce97" path="/var/lib/kubelet/pods/d0008fba-dde3-40ba-8ed1-b3d76a1dce97/volumes" Feb 13 23:55:05.781061 kubelet[1901]: E0213 23:55:05.780957 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 23:55:06.781775 kubelet[1901]: E0213 23:55:06.781666 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 23:55:07.571035 kubelet[1901]: E0213 23:55:07.570948 1901 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d0008fba-dde3-40ba-8ed1-b3d76a1dce97" containerName="apply-sysctl-overwrites" Feb 13 23:55:07.571035 kubelet[1901]: E0213 23:55:07.571050 1901 cpu_manager.go:395] "RemoveStaleState: removing 
container" podUID="d0008fba-dde3-40ba-8ed1-b3d76a1dce97" containerName="mount-bpf-fs" Feb 13 23:55:07.571309 kubelet[1901]: E0213 23:55:07.571064 1901 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d0008fba-dde3-40ba-8ed1-b3d76a1dce97" containerName="cilium-agent" Feb 13 23:55:07.571309 kubelet[1901]: E0213 23:55:07.571086 1901 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d0008fba-dde3-40ba-8ed1-b3d76a1dce97" containerName="mount-cgroup" Feb 13 23:55:07.571309 kubelet[1901]: E0213 23:55:07.571097 1901 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d0008fba-dde3-40ba-8ed1-b3d76a1dce97" containerName="clean-cilium-state" Feb 13 23:55:07.571309 kubelet[1901]: I0213 23:55:07.571147 1901 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0008fba-dde3-40ba-8ed1-b3d76a1dce97" containerName="cilium-agent" Feb 13 23:55:07.580371 systemd[1]: Created slice kubepods-besteffort-pod603dcc2b_b77b_49ab_aa3f_b66680bba816.slice - libcontainer container kubepods-besteffort-pod603dcc2b_b77b_49ab_aa3f_b66680bba816.slice. Feb 13 23:55:07.594572 systemd[1]: Created slice kubepods-burstable-pod02a42d96_efa1_46aa_8e2f_48272a9bb033.slice - libcontainer container kubepods-burstable-pod02a42d96_efa1_46aa_8e2f_48272a9bb033.slice. Feb 13 23:55:07.629035 kubelet[1901]: I0213 23:55:07.628929 1901 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/603dcc2b-b77b-49ab-aa3f-b66680bba816-cilium-config-path\") pod \"cilium-operator-5d85765b45-xksfc\" (UID: \"603dcc2b-b77b-49ab-aa3f-b66680bba816\") " pod="kube-system/cilium-operator-5d85765b45-xksfc" Feb 13 23:55:07.629035 kubelet[1901]: I0213 23:55:07.629094 1901 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/02a42d96-efa1-46aa-8e2f-48272a9bb033-cilium-run\") pod \"cilium-rgdrz\" (UID: \"02a42d96-efa1-46aa-8e2f-48272a9bb033\") " pod="kube-system/cilium-rgdrz" Feb 13 23:55:07.629848 kubelet[1901]: I0213 23:55:07.629160 1901 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/02a42d96-efa1-46aa-8e2f-48272a9bb033-lib-modules\") pod \"cilium-rgdrz\" (UID: \"02a42d96-efa1-46aa-8e2f-48272a9bb033\") " pod="kube-system/cilium-rgdrz" Feb 13 23:55:07.629848 kubelet[1901]: I0213 23:55:07.629212 1901 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/02a42d96-efa1-46aa-8e2f-48272a9bb033-cilium-config-path\") pod \"cilium-rgdrz\" (UID: \"02a42d96-efa1-46aa-8e2f-48272a9bb033\") " pod="kube-system/cilium-rgdrz" Feb 13 23:55:07.629848 kubelet[1901]: I0213 23:55:07.629249 1901 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/02a42d96-efa1-46aa-8e2f-48272a9bb033-hostproc\") pod \"cilium-rgdrz\" (UID: \"02a42d96-efa1-46aa-8e2f-48272a9bb033\") " pod="kube-system/cilium-rgdrz" Feb 13 23:55:07.629848 kubelet[1901]: I0213 23:55:07.629286 1901 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/02a42d96-efa1-46aa-8e2f-48272a9bb033-etc-cni-netd\") pod \"cilium-rgdrz\" (UID: \"02a42d96-efa1-46aa-8e2f-48272a9bb033\") " 
pod="kube-system/cilium-rgdrz" Feb 13 23:55:07.629848 kubelet[1901]: I0213 23:55:07.629364 1901 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/02a42d96-efa1-46aa-8e2f-48272a9bb033-xtables-lock\") pod \"cilium-rgdrz\" (UID: \"02a42d96-efa1-46aa-8e2f-48272a9bb033\") " pod="kube-system/cilium-rgdrz" Feb 13 23:55:07.629848 kubelet[1901]: I0213 23:55:07.629423 1901 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/02a42d96-efa1-46aa-8e2f-48272a9bb033-cilium-ipsec-secrets\") pod \"cilium-rgdrz\" (UID: \"02a42d96-efa1-46aa-8e2f-48272a9bb033\") " pod="kube-system/cilium-rgdrz" Feb 13 23:55:07.630134 kubelet[1901]: I0213 23:55:07.629450 1901 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/02a42d96-efa1-46aa-8e2f-48272a9bb033-host-proc-sys-net\") pod \"cilium-rgdrz\" (UID: \"02a42d96-efa1-46aa-8e2f-48272a9bb033\") " pod="kube-system/cilium-rgdrz" Feb 13 23:55:07.630134 kubelet[1901]: I0213 23:55:07.629473 1901 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/02a42d96-efa1-46aa-8e2f-48272a9bb033-host-proc-sys-kernel\") pod \"cilium-rgdrz\" (UID: \"02a42d96-efa1-46aa-8e2f-48272a9bb033\") " pod="kube-system/cilium-rgdrz" Feb 13 23:55:07.630134 kubelet[1901]: I0213 23:55:07.629499 1901 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/02a42d96-efa1-46aa-8e2f-48272a9bb033-bpf-maps\") pod \"cilium-rgdrz\" (UID: \"02a42d96-efa1-46aa-8e2f-48272a9bb033\") " pod="kube-system/cilium-rgdrz" Feb 13 23:55:07.630134 kubelet[1901]: I0213 23:55:07.629523 1901 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/02a42d96-efa1-46aa-8e2f-48272a9bb033-cilium-cgroup\") pod \"cilium-rgdrz\" (UID: \"02a42d96-efa1-46aa-8e2f-48272a9bb033\") " pod="kube-system/cilium-rgdrz" Feb 13 23:55:07.630134 kubelet[1901]: I0213 23:55:07.629548 1901 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/02a42d96-efa1-46aa-8e2f-48272a9bb033-hubble-tls\") pod \"cilium-rgdrz\" (UID: \"02a42d96-efa1-46aa-8e2f-48272a9bb033\") " pod="kube-system/cilium-rgdrz" Feb 13 23:55:07.630319 kubelet[1901]: I0213 23:55:07.629576 1901 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4w6l\" (UniqueName: \"kubernetes.io/projected/603dcc2b-b77b-49ab-aa3f-b66680bba816-kube-api-access-q4w6l\") pod \"cilium-operator-5d85765b45-xksfc\" (UID: \"603dcc2b-b77b-49ab-aa3f-b66680bba816\") " pod="kube-system/cilium-operator-5d85765b45-xksfc" Feb 13 23:55:07.630319 kubelet[1901]: I0213 23:55:07.629603 1901 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/02a42d96-efa1-46aa-8e2f-48272a9bb033-cni-path\") pod \"cilium-rgdrz\" (UID: \"02a42d96-efa1-46aa-8e2f-48272a9bb033\") " pod="kube-system/cilium-rgdrz" Feb 13 23:55:07.630319 kubelet[1901]: I0213 23:55:07.629671 1901 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/02a42d96-efa1-46aa-8e2f-48272a9bb033-clustermesh-secrets\") pod \"cilium-rgdrz\" (UID: \"02a42d96-efa1-46aa-8e2f-48272a9bb033\") " pod="kube-system/cilium-rgdrz" Feb 13 23:55:07.630319 kubelet[1901]: I0213 23:55:07.629721 1901 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zllwj\" (UniqueName: \"kubernetes.io/projected/02a42d96-efa1-46aa-8e2f-48272a9bb033-kube-api-access-zllwj\") pod \"cilium-rgdrz\" (UID: \"02a42d96-efa1-46aa-8e2f-48272a9bb033\") " pod="kube-system/cilium-rgdrz" Feb 13 23:55:07.782642 kubelet[1901]: E0213 23:55:07.782551 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 23:55:07.858758 kubelet[1901]: E0213 23:55:07.858393 1901 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 23:55:07.889044 containerd[1485]: time="2025-02-13T23:55:07.888731971Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-xksfc,Uid:603dcc2b-b77b-49ab-aa3f-b66680bba816,Namespace:kube-system,Attempt:0,}" Feb 13 23:55:07.904812 containerd[1485]: time="2025-02-13T23:55:07.904334872Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rgdrz,Uid:02a42d96-efa1-46aa-8e2f-48272a9bb033,Namespace:kube-system,Attempt:0,}" Feb 13 23:55:07.945366 containerd[1485]: time="2025-02-13T23:55:07.944980863Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 23:55:07.945366 containerd[1485]: time="2025-02-13T23:55:07.945084903Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 23:55:07.945366 containerd[1485]: time="2025-02-13T23:55:07.945104241Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 23:55:07.945366 containerd[1485]: time="2025-02-13T23:55:07.945197881Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 23:55:07.954841 containerd[1485]: time="2025-02-13T23:55:07.954706266Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 23:55:07.954841 containerd[1485]: time="2025-02-13T23:55:07.954779413Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 23:55:07.954841 containerd[1485]: time="2025-02-13T23:55:07.954794609Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 23:55:07.955186 containerd[1485]: time="2025-02-13T23:55:07.954882911Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 23:55:07.973371 systemd[1]: Started cri-containerd-0a9b825ca36a26d477b9ecab103d46311f9546f503bef200f32cf76da60ef821.scope - libcontainer container 0a9b825ca36a26d477b9ecab103d46311f9546f503bef200f32cf76da60ef821. 
Feb 13 23:55:07.980275 systemd[1]: Started cri-containerd-9eb1e73a86ca44dd3907a9a9bb8207e9fb59af3ba6117eeb8041ea0277a999b8.scope - libcontainer container 9eb1e73a86ca44dd3907a9a9bb8207e9fb59af3ba6117eeb8041ea0277a999b8. Feb 13 23:55:08.024347 containerd[1485]: time="2025-02-13T23:55:08.024281194Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rgdrz,Uid:02a42d96-efa1-46aa-8e2f-48272a9bb033,Namespace:kube-system,Attempt:0,} returns sandbox id \"9eb1e73a86ca44dd3907a9a9bb8207e9fb59af3ba6117eeb8041ea0277a999b8\"" Feb 13 23:55:08.030214 containerd[1485]: time="2025-02-13T23:55:08.030167072Z" level=info msg="CreateContainer within sandbox \"9eb1e73a86ca44dd3907a9a9bb8207e9fb59af3ba6117eeb8041ea0277a999b8\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 13 23:55:08.067005 containerd[1485]: time="2025-02-13T23:55:08.066858462Z" level=info msg="CreateContainer within sandbox \"9eb1e73a86ca44dd3907a9a9bb8207e9fb59af3ba6117eeb8041ea0277a999b8\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"06f0757214be6c377241953ceb52d09ee0ec1dbac8c6947b62a33ced899911fe\"" Feb 13 23:55:08.077078 containerd[1485]: time="2025-02-13T23:55:08.077030436Z" level=info msg="StartContainer for \"06f0757214be6c377241953ceb52d09ee0ec1dbac8c6947b62a33ced899911fe\"" Feb 13 23:55:08.081201 containerd[1485]: time="2025-02-13T23:55:08.081101435Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-xksfc,Uid:603dcc2b-b77b-49ab-aa3f-b66680bba816,Namespace:kube-system,Attempt:0,} returns sandbox id \"0a9b825ca36a26d477b9ecab103d46311f9546f503bef200f32cf76da60ef821\"" Feb 13 23:55:08.083251 containerd[1485]: time="2025-02-13T23:55:08.082973442Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 13 23:55:08.111280 systemd[1]: Started cri-containerd-06f0757214be6c377241953ceb52d09ee0ec1dbac8c6947b62a33ced899911fe.scope - libcontainer container 06f0757214be6c377241953ceb52d09ee0ec1dbac8c6947b62a33ced899911fe. Feb 13 23:55:08.136256 containerd[1485]: time="2025-02-13T23:55:08.136209002Z" level=info msg="StartContainer for \"06f0757214be6c377241953ceb52d09ee0ec1dbac8c6947b62a33ced899911fe\" returns successfully" Feb 13 23:55:08.150571 systemd[1]: cri-containerd-06f0757214be6c377241953ceb52d09ee0ec1dbac8c6947b62a33ced899911fe.scope: Deactivated successfully. 
Feb 13 23:55:08.181603 containerd[1485]: time="2025-02-13T23:55:08.181424128Z" level=info msg="shim disconnected" id=06f0757214be6c377241953ceb52d09ee0ec1dbac8c6947b62a33ced899911fe namespace=k8s.io Feb 13 23:55:08.181603 containerd[1485]: time="2025-02-13T23:55:08.181485090Z" level=warning msg="cleaning up after shim disconnected" id=06f0757214be6c377241953ceb52d09ee0ec1dbac8c6947b62a33ced899911fe namespace=k8s.io Feb 13 23:55:08.181603 containerd[1485]: time="2025-02-13T23:55:08.181495765Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 23:55:08.782806 kubelet[1901]: E0213 23:55:08.782747 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 23:55:09.103132 containerd[1485]: time="2025-02-13T23:55:09.103080587Z" level=info msg="CreateContainer within sandbox \"9eb1e73a86ca44dd3907a9a9bb8207e9fb59af3ba6117eeb8041ea0277a999b8\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 13 23:55:09.112769 containerd[1485]: time="2025-02-13T23:55:09.112616214Z" level=info msg="CreateContainer within sandbox \"9eb1e73a86ca44dd3907a9a9bb8207e9fb59af3ba6117eeb8041ea0277a999b8\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"95728f858e3bde979f23fc6d557178279cf2aeeb439cace6d99554ab8645273d\"" Feb 13 23:55:09.113317 containerd[1485]: time="2025-02-13T23:55:09.113255141Z" level=info msg="StartContainer for \"95728f858e3bde979f23fc6d557178279cf2aeeb439cace6d99554ab8645273d\"" Feb 13 23:55:09.150379 systemd[1]: Started cri-containerd-95728f858e3bde979f23fc6d557178279cf2aeeb439cace6d99554ab8645273d.scope - libcontainer container 95728f858e3bde979f23fc6d557178279cf2aeeb439cace6d99554ab8645273d. Feb 13 23:55:09.177176 containerd[1485]: time="2025-02-13T23:55:09.177133142Z" level=info msg="StartContainer for \"95728f858e3bde979f23fc6d557178279cf2aeeb439cace6d99554ab8645273d\" returns successfully" Feb 13 23:55:09.186716 systemd[1]: cri-containerd-95728f858e3bde979f23fc6d557178279cf2aeeb439cace6d99554ab8645273d.scope: Deactivated successfully. Feb 13 23:55:09.211476 containerd[1485]: time="2025-02-13T23:55:09.211413996Z" level=info msg="shim disconnected" id=95728f858e3bde979f23fc6d557178279cf2aeeb439cace6d99554ab8645273d namespace=k8s.io Feb 13 23:55:09.211476 containerd[1485]: time="2025-02-13T23:55:09.211473335Z" level=warning msg="cleaning up after shim disconnected" id=95728f858e3bde979f23fc6d557178279cf2aeeb439cace6d99554ab8645273d namespace=k8s.io Feb 13 23:55:09.211476 containerd[1485]: time="2025-02-13T23:55:09.211488985Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 23:55:09.743426 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-95728f858e3bde979f23fc6d557178279cf2aeeb439cace6d99554ab8645273d-rootfs.mount: Deactivated successfully. 
Feb 13 23:55:09.783676 kubelet[1901]: E0213 23:55:09.783577 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 23:55:10.108399 containerd[1485]: time="2025-02-13T23:55:10.108339392Z" level=info msg="CreateContainer within sandbox \"9eb1e73a86ca44dd3907a9a9bb8207e9fb59af3ba6117eeb8041ea0277a999b8\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 13 23:55:10.128093 containerd[1485]: time="2025-02-13T23:55:10.127970858Z" level=info msg="CreateContainer within sandbox \"9eb1e73a86ca44dd3907a9a9bb8207e9fb59af3ba6117eeb8041ea0277a999b8\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"553e35f502ad6745b0abb2e423646562095ffa9843b98011b8e9a6b769d60832\"" Feb 13 23:55:10.128841 containerd[1485]: time="2025-02-13T23:55:10.128768004Z" level=info msg="StartContainer for \"553e35f502ad6745b0abb2e423646562095ffa9843b98011b8e9a6b769d60832\"" Feb 13 23:55:10.168623 systemd[1]: Started cri-containerd-553e35f502ad6745b0abb2e423646562095ffa9843b98011b8e9a6b769d60832.scope - libcontainer container 553e35f502ad6745b0abb2e423646562095ffa9843b98011b8e9a6b769d60832. Feb 13 23:55:10.206577 containerd[1485]: time="2025-02-13T23:55:10.206538731Z" level=info msg="StartContainer for \"553e35f502ad6745b0abb2e423646562095ffa9843b98011b8e9a6b769d60832\" returns successfully" Feb 13 23:55:10.212590 systemd[1]: cri-containerd-553e35f502ad6745b0abb2e423646562095ffa9843b98011b8e9a6b769d60832.scope: Deactivated successfully. Feb 13 23:55:10.252241 containerd[1485]: time="2025-02-13T23:55:10.252176982Z" level=info msg="shim disconnected" id=553e35f502ad6745b0abb2e423646562095ffa9843b98011b8e9a6b769d60832 namespace=k8s.io Feb 13 23:55:10.252241 containerd[1485]: time="2025-02-13T23:55:10.252236470Z" level=warning msg="cleaning up after shim disconnected" id=553e35f502ad6745b0abb2e423646562095ffa9843b98011b8e9a6b769d60832 namespace=k8s.io Feb 13 23:55:10.252241 containerd[1485]: time="2025-02-13T23:55:10.252246725Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 23:55:10.267951 containerd[1485]: time="2025-02-13T23:55:10.267860965Z" level=warning msg="cleanup warnings time=\"2025-02-13T23:55:10Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Feb 13 23:55:10.741570 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-553e35f502ad6745b0abb2e423646562095ffa9843b98011b8e9a6b769d60832-rootfs.mount: Deactivated successfully. 
Feb 13 23:55:10.784728 kubelet[1901]: E0213 23:55:10.784650 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 23:55:10.960143 containerd[1485]: time="2025-02-13T23:55:10.959995111Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 23:55:10.961043 containerd[1485]: time="2025-02-13T23:55:10.960685507Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Feb 13 23:55:10.962023 containerd[1485]: time="2025-02-13T23:55:10.961967417Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 23:55:10.967306 containerd[1485]: time="2025-02-13T23:55:10.967273047Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.884135257s" Feb 13 23:55:10.967489 containerd[1485]: time="2025-02-13T23:55:10.967397339Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Feb 13 23:55:10.969896 containerd[1485]: time="2025-02-13T23:55:10.969781408Z" level=info msg="CreateContainer within sandbox \"0a9b825ca36a26d477b9ecab103d46311f9546f503bef200f32cf76da60ef821\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 13 23:55:10.992594 containerd[1485]: time="2025-02-13T23:55:10.992442973Z" level=info msg="CreateContainer within sandbox \"0a9b825ca36a26d477b9ecab103d46311f9546f503bef200f32cf76da60ef821\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"88b1c1788a53daec6e14b696f022dbdcb282defa25c74b2f054b3e8e6540669c\"" Feb 13 23:55:10.993795 containerd[1485]: time="2025-02-13T23:55:10.993484366Z" level=info msg="StartContainer for \"88b1c1788a53daec6e14b696f022dbdcb282defa25c74b2f054b3e8e6540669c\"" Feb 13 23:55:11.036262 systemd[1]: Started cri-containerd-88b1c1788a53daec6e14b696f022dbdcb282defa25c74b2f054b3e8e6540669c.scope - libcontainer container 88b1c1788a53daec6e14b696f022dbdcb282defa25c74b2f054b3e8e6540669c. 
Feb 13 23:55:11.064666 containerd[1485]: time="2025-02-13T23:55:11.064622751Z" level=info msg="StartContainer for \"88b1c1788a53daec6e14b696f022dbdcb282defa25c74b2f054b3e8e6540669c\" returns successfully" Feb 13 23:55:11.117226 containerd[1485]: time="2025-02-13T23:55:11.117175812Z" level=info msg="CreateContainer within sandbox \"9eb1e73a86ca44dd3907a9a9bb8207e9fb59af3ba6117eeb8041ea0277a999b8\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 13 23:55:11.124112 kubelet[1901]: I0213 23:55:11.123890 1901 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-xksfc" podStartSLOduration=1.238092936 podStartE2EDuration="4.123870223s" podCreationTimestamp="2025-02-13 23:55:07 +0000 UTC" firstStartedPulling="2025-02-13 23:55:08.08253652 +0000 UTC m=+65.862401148" lastFinishedPulling="2025-02-13 23:55:10.968313808 +0000 UTC m=+68.748178435" observedRunningTime="2025-02-13 23:55:11.122848818 +0000 UTC m=+68.902713469" watchObservedRunningTime="2025-02-13 23:55:11.123870223 +0000 UTC m=+68.903734873" Feb 13 23:55:11.126192 containerd[1485]: time="2025-02-13T23:55:11.126024603Z" level=info msg="CreateContainer within sandbox \"9eb1e73a86ca44dd3907a9a9bb8207e9fb59af3ba6117eeb8041ea0277a999b8\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"8883e1d71a127c035c6223a75749075ed5a54570afbc8de9f5eb9c1b86ea8918\"" Feb 13 23:55:11.127924 containerd[1485]: time="2025-02-13T23:55:11.126839471Z" level=info msg="StartContainer for \"8883e1d71a127c035c6223a75749075ed5a54570afbc8de9f5eb9c1b86ea8918\"" Feb 13 23:55:11.174190 systemd[1]: Started cri-containerd-8883e1d71a127c035c6223a75749075ed5a54570afbc8de9f5eb9c1b86ea8918.scope - libcontainer container 8883e1d71a127c035c6223a75749075ed5a54570afbc8de9f5eb9c1b86ea8918. Feb 13 23:55:11.218576 containerd[1485]: time="2025-02-13T23:55:11.218527856Z" level=info msg="StartContainer for \"8883e1d71a127c035c6223a75749075ed5a54570afbc8de9f5eb9c1b86ea8918\" returns successfully" Feb 13 23:55:11.219833 systemd[1]: cri-containerd-8883e1d71a127c035c6223a75749075ed5a54570afbc8de9f5eb9c1b86ea8918.scope: Deactivated successfully. 
Feb 13 23:55:11.284506 containerd[1485]: time="2025-02-13T23:55:11.284179529Z" level=info msg="shim disconnected" id=8883e1d71a127c035c6223a75749075ed5a54570afbc8de9f5eb9c1b86ea8918 namespace=k8s.io Feb 13 23:55:11.284506 containerd[1485]: time="2025-02-13T23:55:11.284248631Z" level=warning msg="cleaning up after shim disconnected" id=8883e1d71a127c035c6223a75749075ed5a54570afbc8de9f5eb9c1b86ea8918 namespace=k8s.io Feb 13 23:55:11.284506 containerd[1485]: time="2025-02-13T23:55:11.284260079Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 23:55:11.785739 kubelet[1901]: E0213 23:55:11.785654 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 23:55:12.125764 containerd[1485]: time="2025-02-13T23:55:12.125420677Z" level=info msg="CreateContainer within sandbox \"9eb1e73a86ca44dd3907a9a9bb8207e9fb59af3ba6117eeb8041ea0277a999b8\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 13 23:55:12.138623 containerd[1485]: time="2025-02-13T23:55:12.138404542Z" level=info msg="CreateContainer within sandbox \"9eb1e73a86ca44dd3907a9a9bb8207e9fb59af3ba6117eeb8041ea0277a999b8\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"60edf8ef9628445f29cee14bc9b8455d5319f19677ab2e033b41c0f88e57eb55\"" Feb 13 23:55:12.139210 containerd[1485]: time="2025-02-13T23:55:12.139112066Z" level=info msg="StartContainer for \"60edf8ef9628445f29cee14bc9b8455d5319f19677ab2e033b41c0f88e57eb55\"" Feb 13 23:55:12.171171 systemd[1]: Started cri-containerd-60edf8ef9628445f29cee14bc9b8455d5319f19677ab2e033b41c0f88e57eb55.scope - libcontainer container 60edf8ef9628445f29cee14bc9b8455d5319f19677ab2e033b41c0f88e57eb55. Feb 13 23:55:12.205506 containerd[1485]: time="2025-02-13T23:55:12.205461042Z" level=info msg="StartContainer for \"60edf8ef9628445f29cee14bc9b8455d5319f19677ab2e033b41c0f88e57eb55\" returns successfully" Feb 13 23:55:12.643136 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Feb 13 23:55:12.786622 kubelet[1901]: E0213 23:55:12.786554 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 23:55:13.787740 kubelet[1901]: E0213 23:55:13.787635 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 23:55:14.788111 kubelet[1901]: E0213 23:55:14.788058 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 23:55:15.789116 kubelet[1901]: E0213 23:55:15.788888 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 23:55:15.983308 systemd-networkd[1426]: lxc_health: Link UP Feb 13 23:55:15.992121 systemd-networkd[1426]: lxc_health: Gained carrier Feb 13 23:55:16.789352 kubelet[1901]: E0213 23:55:16.789309 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 23:55:17.670901 systemd[1]: run-containerd-runc-k8s.io-60edf8ef9628445f29cee14bc9b8455d5319f19677ab2e033b41c0f88e57eb55-runc.u68hPb.mount: Deactivated successfully. 
Feb 13 23:55:17.791895 kubelet[1901]: E0213 23:55:17.791809 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 23:55:17.918199 systemd-networkd[1426]: lxc_health: Gained IPv6LL Feb 13 23:55:17.936244 kubelet[1901]: I0213 23:55:17.936061 1901 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-rgdrz" podStartSLOduration=10.936039416 podStartE2EDuration="10.936039416s" podCreationTimestamp="2025-02-13 23:55:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 23:55:13.159523968 +0000 UTC m=+70.939388663" watchObservedRunningTime="2025-02-13 23:55:17.936039416 +0000 UTC m=+75.715904064" Feb 13 23:55:18.792526 kubelet[1901]: E0213 23:55:18.792462 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 23:55:19.795008 kubelet[1901]: E0213 23:55:19.792866 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 23:55:19.984645 systemd[1]: run-containerd-runc-k8s.io-60edf8ef9628445f29cee14bc9b8455d5319f19677ab2e033b41c0f88e57eb55-runc.CKCv6A.mount: Deactivated successfully. Feb 13 23:55:20.794185 kubelet[1901]: E0213 23:55:20.794085 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 23:55:21.794542 kubelet[1901]: E0213 23:55:21.794435 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 23:55:22.731758 kubelet[1901]: E0213 23:55:22.731654 1901 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 23:55:22.795203 kubelet[1901]: E0213 23:55:22.795106 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 23:55:23.795900 kubelet[1901]: E0213 23:55:23.795802 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 23:55:24.796274 kubelet[1901]: E0213 23:55:24.796163 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 23:55:25.797559 kubelet[1901]: E0213 23:55:25.797453 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 23:55:26.797775 kubelet[1901]: E0213 23:55:26.797653 1901 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"