Jan 30 17:41:02.024562 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 10:09:32 -00 2025
Jan 30 17:41:02.024598 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681
Jan 30 17:41:02.024619 kernel: BIOS-provided physical RAM map:
Jan 30 17:41:02.024648 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 30 17:41:02.024658 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 30 17:41:02.024668 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 30 17:41:02.024680 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable
Jan 30 17:41:02.024690 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved
Jan 30 17:41:02.024700 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jan 30 17:41:02.024711 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jan 30 17:41:02.024721 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 30 17:41:02.024731 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 30 17:41:02.024747 kernel: NX (Execute Disable) protection: active
Jan 30 17:41:02.024757 kernel: APIC: Static calls initialized
Jan 30 17:41:02.024770 kernel: SMBIOS 2.8 present.
Jan 30 17:41:02.024781 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014
Jan 30 17:41:02.024793 kernel: Hypervisor detected: KVM
Jan 30 17:41:02.024808 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 30 17:41:02.024820 kernel: kvm-clock: using sched offset of 4380338306 cycles
Jan 30 17:41:02.024832 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 30 17:41:02.024844 kernel: tsc: Detected 2499.998 MHz processor
Jan 30 17:41:02.024868 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 30 17:41:02.024882 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 30 17:41:02.024894 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000
Jan 30 17:41:02.024905 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 30 17:41:02.024917 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 30 17:41:02.024939 kernel: Using GB pages for direct mapping
Jan 30 17:41:02.024951 kernel: ACPI: Early table checksum verification disabled
Jan 30 17:41:02.024962 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Jan 30 17:41:02.024974 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 17:41:02.024985 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 17:41:02.024996 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 17:41:02.025008 kernel: ACPI: FACS 0x000000007FFDFD40 000040
Jan 30 17:41:02.025019 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 17:41:02.025030 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 17:41:02.025047 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 17:41:02.025058 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 17:41:02.025070 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480]
Jan 30 17:41:02.025081 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c]
Jan 30 17:41:02.025093 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f]
Jan 30 17:41:02.025114 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570]
Jan 30 17:41:02.028007 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740]
Jan 30 17:41:02.028036 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c]
Jan 30 17:41:02.028050 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4]
Jan 30 17:41:02.028062 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jan 30 17:41:02.028074 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jan 30 17:41:02.028086 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0
Jan 30 17:41:02.028098 kernel: SRAT: PXM 0 -> APIC 0x03 -> Node 0
Jan 30 17:41:02.028110 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0
Jan 30 17:41:02.028127 kernel: SRAT: PXM 0 -> APIC 0x05 -> Node 0
Jan 30 17:41:02.028139 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0
Jan 30 17:41:02.028151 kernel: SRAT: PXM 0 -> APIC 0x07 -> Node 0
Jan 30 17:41:02.028163 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0
Jan 30 17:41:02.028193 kernel: SRAT: PXM 0 -> APIC 0x09 -> Node 0
Jan 30 17:41:02.028208 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0
Jan 30 17:41:02.028220 kernel: SRAT: PXM 0 -> APIC 0x0b -> Node 0
Jan 30 17:41:02.028232 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0
Jan 30 17:41:02.028244 kernel: SRAT: PXM 0 -> APIC 0x0d -> Node 0
Jan 30 17:41:02.028256 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0
Jan 30 17:41:02.028274 kernel: SRAT: PXM 0 -> APIC 0x0f -> Node 0
Jan 30 17:41:02.028286 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Jan 30 17:41:02.028299 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Jan 30 17:41:02.028311 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug
Jan 30 17:41:02.028323 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00000000-0x7ffdbfff]
Jan 30 17:41:02.028335 kernel: NODE_DATA(0) allocated [mem 0x7ffd6000-0x7ffdbfff]
Jan 30 17:41:02.028347 kernel: Zone ranges:
Jan 30 17:41:02.028360 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 30 17:41:02.028371 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff]
Jan 30 17:41:02.028388 kernel: Normal empty
Jan 30 17:41:02.028400 kernel: Movable zone start for each node
Jan 30 17:41:02.028412 kernel: Early memory node ranges
Jan 30 17:41:02.028424 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 30 17:41:02.028436 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff]
Jan 30 17:41:02.028448 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff]
Jan 30 17:41:02.028460 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 30 17:41:02.028472 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 30 17:41:02.028488 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges
Jan 30 17:41:02.028500 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 30 17:41:02.028517 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 30 17:41:02.028529 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 30 17:41:02.028541 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 30 17:41:02.028553 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 30 17:41:02.028565 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 30 17:41:02.028576 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 30 17:41:02.028588 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 30 17:41:02.028600 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 30 17:41:02.028612 kernel: TSC deadline timer available
Jan 30 17:41:02.028642 kernel: smpboot: Allowing 16 CPUs, 14 hotplug CPUs
Jan 30 17:41:02.028654 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 30 17:41:02.028666 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jan 30 17:41:02.028678 kernel: Booting paravirtualized kernel on KVM
Jan 30 17:41:02.028690 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 30 17:41:02.028702 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1
Jan 30 17:41:02.028714 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u262144
Jan 30 17:41:02.028726 kernel: pcpu-alloc: s197032 r8192 d32344 u262144 alloc=1*2097152
Jan 30 17:41:02.028738 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Jan 30 17:41:02.028755 kernel: kvm-guest: PV spinlocks enabled
Jan 30 17:41:02.028768 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 30 17:41:02.028781 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681
Jan 30 17:41:02.028794 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 30 17:41:02.028805 kernel: random: crng init done
Jan 30 17:41:02.028817 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 30 17:41:02.028829 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 30 17:41:02.028841 kernel: Fallback order for Node 0: 0
Jan 30 17:41:02.028858 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515804
Jan 30 17:41:02.028870 kernel: Policy zone: DMA32
Jan 30 17:41:02.028882 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 30 17:41:02.028894 kernel: software IO TLB: area num 16.
Jan 30 17:41:02.028906 kernel: Memory: 1901540K/2096616K available (12288K kernel code, 2301K rwdata, 22728K rodata, 42844K init, 2348K bss, 194820K reserved, 0K cma-reserved)
Jan 30 17:41:02.028918 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Jan 30 17:41:02.028930 kernel: Kernel/User page tables isolation: enabled
Jan 30 17:41:02.028942 kernel: ftrace: allocating 37921 entries in 149 pages
Jan 30 17:41:02.028954 kernel: ftrace: allocated 149 pages with 4 groups
Jan 30 17:41:02.028971 kernel: Dynamic Preempt: voluntary
Jan 30 17:41:02.028983 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 30 17:41:02.028996 kernel: rcu: RCU event tracing is enabled.
Jan 30 17:41:02.029008 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Jan 30 17:41:02.029020 kernel: Trampoline variant of Tasks RCU enabled.
Jan 30 17:41:02.029045 kernel: Rude variant of Tasks RCU enabled.
Jan 30 17:41:02.029062 kernel: Tracing variant of Tasks RCU enabled.
Jan 30 17:41:02.029075 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 30 17:41:02.029087 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Jan 30 17:41:02.029100 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16
Jan 30 17:41:02.029112 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 30 17:41:02.029129 kernel: Console: colour VGA+ 80x25
Jan 30 17:41:02.029142 kernel: printk: console [tty0] enabled
Jan 30 17:41:02.029155 kernel: printk: console [ttyS0] enabled
Jan 30 17:41:02.029167 kernel: ACPI: Core revision 20230628
Jan 30 17:41:02.029192 kernel: APIC: Switch to symmetric I/O mode setup
Jan 30 17:41:02.029205 kernel: x2apic enabled
Jan 30 17:41:02.029224 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 30 17:41:02.029237 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Jan 30 17:41:02.029250 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998)
Jan 30 17:41:02.029263 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 30 17:41:02.029280 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Jan 30 17:41:02.029293 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Jan 30 17:41:02.029306 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 30 17:41:02.029318 kernel: Spectre V2 : Mitigation: Retpolines
Jan 30 17:41:02.029330 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 30 17:41:02.029357 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 30 17:41:02.029382 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Jan 30 17:41:02.029394 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 30 17:41:02.029406 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 30 17:41:02.029418 kernel: MDS: Mitigation: Clear CPU buffers
Jan 30 17:41:02.029430 kernel: MMIO Stale Data: Unknown: No mitigations
Jan 30 17:41:02.029442 kernel: SRBDS: Unknown: Dependent on hypervisor status
Jan 30 17:41:02.029465 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 30 17:41:02.029478 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 30 17:41:02.029490 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 30 17:41:02.029503 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 30 17:41:02.029520 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Jan 30 17:41:02.029533 kernel: Freeing SMP alternatives memory: 32K
Jan 30 17:41:02.029545 kernel: pid_max: default: 32768 minimum: 301
Jan 30 17:41:02.029558 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 30 17:41:02.029570 kernel: landlock: Up and running.
Jan 30 17:41:02.029583 kernel: SELinux: Initializing.
Jan 30 17:41:02.029595 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 30 17:41:02.029607 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 30 17:41:02.029633 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9)
Jan 30 17:41:02.029648 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Jan 30 17:41:02.029661 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Jan 30 17:41:02.029680 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Jan 30 17:41:02.029693 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only.
Jan 30 17:41:02.029705 kernel: signal: max sigframe size: 1776
Jan 30 17:41:02.029718 kernel: rcu: Hierarchical SRCU implementation.
Jan 30 17:41:02.029731 kernel: rcu: Max phase no-delay instances is 400.
Jan 30 17:41:02.029743 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 30 17:41:02.029756 kernel: smp: Bringing up secondary CPUs ...
Jan 30 17:41:02.029768 kernel: smpboot: x86: Booting SMP configuration:
Jan 30 17:41:02.029780 kernel: .... node #0, CPUs: #1
Jan 30 17:41:02.029798 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1
Jan 30 17:41:02.029811 kernel: smp: Brought up 1 node, 2 CPUs
Jan 30 17:41:02.029824 kernel: smpboot: Max logical packages: 16
Jan 30 17:41:02.029836 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS)
Jan 30 17:41:02.029848 kernel: devtmpfs: initialized
Jan 30 17:41:02.029861 kernel: x86/mm: Memory block size: 128MB
Jan 30 17:41:02.029874 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 30 17:41:02.029886 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Jan 30 17:41:02.029898 kernel: pinctrl core: initialized pinctrl subsystem
Jan 30 17:41:02.029916 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 30 17:41:02.029929 kernel: audit: initializing netlink subsys (disabled)
Jan 30 17:41:02.029941 kernel: audit: type=2000 audit(1738258860.104:1): state=initialized audit_enabled=0 res=1
Jan 30 17:41:02.029954 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 30 17:41:02.029966 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 30 17:41:02.029979 kernel: cpuidle: using governor menu
Jan 30 17:41:02.029991 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 30 17:41:02.030004 kernel: dca service started, version 1.12.1
Jan 30 17:41:02.030017 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Jan 30 17:41:02.030034 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jan 30 17:41:02.030047 kernel: PCI: Using configuration type 1 for base access
Jan 30 17:41:02.030060 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 30 17:41:02.030073 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 30 17:41:02.030086 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 30 17:41:02.030098 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 30 17:41:02.030111 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 30 17:41:02.030123 kernel: ACPI: Added _OSI(Module Device)
Jan 30 17:41:02.030136 kernel: ACPI: Added _OSI(Processor Device)
Jan 30 17:41:02.030153 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 30 17:41:02.030166 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 30 17:41:02.031017 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 30 17:41:02.031033 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 30 17:41:02.031046 kernel: ACPI: Interpreter enabled
Jan 30 17:41:02.031059 kernel: ACPI: PM: (supports S0 S5)
Jan 30 17:41:02.031071 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 30 17:41:02.031084 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 30 17:41:02.031097 kernel: PCI: Using E820 reservations for host bridge windows
Jan 30 17:41:02.031118 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 30 17:41:02.031130 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 30 17:41:02.031840 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 30 17:41:02.032044 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 30 17:41:02.032232 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 30 17:41:02.032252 kernel: PCI host bridge to bus 0000:00
Jan 30 17:41:02.032433 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 30 17:41:02.032597 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 30 17:41:02.032766 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 30 17:41:02.032919 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Jan 30 17:41:02.033072 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 30 17:41:02.034861 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window]
Jan 30 17:41:02.035025 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 30 17:41:02.035236 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Jan 30 17:41:02.035425 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000
Jan 30 17:41:02.035595 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfa000000-0xfbffffff pref]
Jan 30 17:41:02.035780 kernel: pci 0000:00:01.0: reg 0x14: [mem 0xfea50000-0xfea50fff]
Jan 30 17:41:02.035947 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea40000-0xfea4ffff pref]
Jan 30 17:41:02.036112 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 30 17:41:02.036328 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Jan 30 17:41:02.036507 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea51000-0xfea51fff]
Jan 30 17:41:02.036761 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Jan 30 17:41:02.036942 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea52000-0xfea52fff]
Jan 30 17:41:02.037128 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Jan 30 17:41:02.038865 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea53000-0xfea53fff]
Jan 30 17:41:02.039053 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Jan 30 17:41:02.039266 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea54000-0xfea54fff]
Jan 30 17:41:02.039461 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Jan 30 17:41:02.039648 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea55000-0xfea55fff]
Jan 30 17:41:02.039830 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Jan 30 17:41:02.039998 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea56000-0xfea56fff]
Jan 30 17:41:02.040212 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Jan 30 17:41:02.040398 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea57000-0xfea57fff]
Jan 30 17:41:02.040576 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Jan 30 17:41:02.040768 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea58000-0xfea58fff]
Jan 30 17:41:02.043296 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Jan 30 17:41:02.043479 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0c0-0xc0df]
Jan 30 17:41:02.043668 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfea59000-0xfea59fff]
Jan 30 17:41:02.043836 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref]
Jan 30 17:41:02.044014 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfea00000-0xfea3ffff pref]
Jan 30 17:41:02.044221 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Jan 30 17:41:02.044393 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Jan 30 17:41:02.044557 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfea5a000-0xfea5afff]
Jan 30 17:41:02.044736 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfd004000-0xfd007fff 64bit pref]
Jan 30 17:41:02.044919 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Jan 30 17:41:02.045083 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 30 17:41:02.045316 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Jan 30 17:41:02.045484 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0e0-0xc0ff]
Jan 30 17:41:02.045659 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea5b000-0xfea5bfff]
Jan 30 17:41:02.045832 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Jan 30 17:41:02.046005 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Jan 30 17:41:02.046205 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400
Jan 30 17:41:02.046388 kernel: pci 0000:01:00.0: reg 0x10: [mem 0xfda00000-0xfda000ff 64bit]
Jan 30 17:41:02.046558 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Jan 30 17:41:02.046737 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Jan 30 17:41:02.046903 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Jan 30 17:41:02.047080 kernel: pci_bus 0000:02: extended config space not accessible
Jan 30 17:41:02.049346 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000
Jan 30 17:41:02.049546 kernel: pci 0000:02:01.0: reg 0x10: [mem 0xfd800000-0xfd80000f]
Jan 30 17:41:02.049735 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Jan 30 17:41:02.049906 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Jan 30 17:41:02.050090 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330
Jan 30 17:41:02.050284 kernel: pci 0000:03:00.0: reg 0x10: [mem 0xfe800000-0xfe803fff 64bit]
Jan 30 17:41:02.050471 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Jan 30 17:41:02.050657 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Jan 30 17:41:02.050834 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Jan 30 17:41:02.051034 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00
Jan 30 17:41:02.054543 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref]
Jan 30 17:41:02.054736 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Jan 30 17:41:02.054903 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Jan 30 17:41:02.055065 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Jan 30 17:41:02.055247 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Jan 30 17:41:02.055410 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Jan 30 17:41:02.055588 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Jan 30 17:41:02.055768 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Jan 30 17:41:02.055932 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Jan 30 17:41:02.056093 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Jan 30 17:41:02.056336 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Jan 30 17:41:02.056502 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Jan 30 17:41:02.056677 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Jan 30 17:41:02.056842 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Jan 30 17:41:02.057015 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Jan 30 17:41:02.057212 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Jan 30 17:41:02.057387 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Jan 30 17:41:02.057581 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Jan 30 17:41:02.057765 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Jan 30 17:41:02.057785 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 30 17:41:02.057799 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 30 17:41:02.057812 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 30 17:41:02.057825 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 30 17:41:02.057845 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 30 17:41:02.057858 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 30 17:41:02.057871 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 30 17:41:02.057884 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 30 17:41:02.057909 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 30 17:41:02.057921 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 30 17:41:02.057934 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 30 17:41:02.057946 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 30 17:41:02.057958 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 30 17:41:02.057975 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 30 17:41:02.057988 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 30 17:41:02.058000 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 30 17:41:02.058013 kernel: iommu: Default domain type: Translated
Jan 30 17:41:02.058025 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 30 17:41:02.058038 kernel: PCI: Using ACPI for IRQ routing
Jan 30 17:41:02.058072 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 30 17:41:02.058084 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 30 17:41:02.058097 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff]
Jan 30 17:41:02.060304 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 30 17:41:02.060498 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 30 17:41:02.060687 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 30 17:41:02.060708 kernel: vgaarb: loaded
Jan 30 17:41:02.060722 kernel: clocksource: Switched to clocksource kvm-clock
Jan 30 17:41:02.060735 kernel: VFS: Disk quotas dquot_6.6.0
Jan 30 17:41:02.060748 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 30 17:41:02.060761 kernel: pnp: PnP ACPI init
Jan 30 17:41:02.060944 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Jan 30 17:41:02.060966 kernel: pnp: PnP ACPI: found 5 devices
Jan 30 17:41:02.060980 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 30 17:41:02.060993 kernel: NET: Registered PF_INET protocol family
Jan 30 17:41:02.061006 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 30 17:41:02.061019 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Jan 30 17:41:02.061032 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 30 17:41:02.061045 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 30 17:41:02.061065 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jan 30 17:41:02.061078 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Jan 30 17:41:02.061091 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 30 17:41:02.061104 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 30 17:41:02.061116 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 30 17:41:02.061129 kernel: NET: Registered PF_XDP protocol family
Jan 30 17:41:02.061320 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000
Jan 30 17:41:02.061500 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Jan 30 17:41:02.061694 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Jan 30 17:41:02.061864 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Jan 30 17:41:02.062033 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Jan 30 17:41:02.062444 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Jan 30 17:41:02.062649 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Jan 30 17:41:02.062818 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Jan 30 17:41:02.063026 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Jan 30 17:41:02.063221 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Jan 30 17:41:02.063396 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Jan 30 17:41:02.064501 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Jan 30 17:41:02.064692 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Jan 30 17:41:02.064860 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Jan 30 17:41:02.065027 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Jan 30 17:41:02.065270 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Jan 30 17:41:02.065477 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Jan 30 17:41:02.065683 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Jan 30 17:41:02.065849 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Jan 30 17:41:02.066012 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Jan 30 17:41:02.066175 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Jan 30 17:41:02.066373 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Jan 30 17:41:02.066547 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Jan 30 17:41:02.066726 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Jan 30 17:41:02.066897 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Jan 30 17:41:02.067067 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Jan 30 17:41:02.067311 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Jan 30 17:41:02.067480 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Jan 30 17:41:02.067665 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Jan 30 17:41:02.067839 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Jan 30 17:41:02.068019 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Jan 30 17:41:02.068196 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Jan 30 17:41:02.068417 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Jan 30 17:41:02.068580 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Jan 30 17:41:02.068756 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Jan 30 17:41:02.068929 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Jan 30 17:41:02.069116 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Jan 30 17:41:02.069355 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Jan 30 17:41:02.069642 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Jan 30 17:41:02.069861 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Jan 30 17:41:02.070062 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Jan 30 17:41:02.070301 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Jan 30 17:41:02.070488 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Jan 30 17:41:02.070667 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Jan 30 17:41:02.070839 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Jan 30 17:41:02.071001 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Jan 30 17:41:02.071164 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Jan 30 17:41:02.071363 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Jan 30 17:41:02.071528 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Jan 30 17:41:02.071708 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Jan 30 17:41:02.071867 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 30 17:41:02.072018 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 30 17:41:02.072168 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 30 17:41:02.072341 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Jan 30 17:41:02.072514 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jan 30 17:41:02.072678 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window]
Jan 30 17:41:02.072849 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Jan 30 17:41:02.073008 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff]
Jan 30 17:41:02.073184 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Jan 30 17:41:02.075419 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff]
Jan 30 17:41:02.075611 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff]
Jan 30 17:41:02.075787 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff]
Jan 30 17:41:02.075943 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Jan 30 17:41:02.076109 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff]
Jan 30 17:41:02.076296 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff]
Jan 30 17:41:02.076454 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Jan 30 17:41:02.076641 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff]
Jan 30 17:41:02.076798 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff]
Jan 30 17:41:02.076958 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Jan 30 17:41:02.077137 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff]
Jan 30 17:41:02.079355 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff]
Jan 30 17:41:02.079519 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Jan 30 17:41:02.079703 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff]
Jan 30 17:41:02.079872 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff]
Jan 30 17:41:02.080040 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref]
Jan 30 17:41:02.080254 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff]
Jan 30 17:41:02.080414 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff]
Jan 30 17:41:02.080568 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref]
Jan 30 17:41:02.080746 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff]
Jan 30 17:41:02.080903 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff]
Jan 30 17:41:02.081066 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Jan 30 17:41:02.081088 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 30 17:41:02.081103 kernel: PCI: CLS 0 bytes, default 64
Jan 30 17:41:02.081116 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jan
30 17:41:02.081130 kernel: software IO TLB: mapped [mem 0x0000000079800000-0x000000007d800000] (64MB) Jan 30 17:41:02.081150 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jan 30 17:41:02.081163 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns Jan 30 17:41:02.081243 kernel: Initialise system trusted keyrings Jan 30 17:41:02.081281 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jan 30 17:41:02.081300 kernel: Key type asymmetric registered Jan 30 17:41:02.081314 kernel: Asymmetric key parser 'x509' registered Jan 30 17:41:02.081327 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 30 17:41:02.081341 kernel: io scheduler mq-deadline registered Jan 30 17:41:02.081354 kernel: io scheduler kyber registered Jan 30 17:41:02.081367 kernel: io scheduler bfq registered Jan 30 17:41:02.081539 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Jan 30 17:41:02.081727 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Jan 30 17:41:02.081901 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 30 17:41:02.082083 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Jan 30 17:41:02.082313 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Jan 30 17:41:02.082478 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 30 17:41:02.082655 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Jan 30 17:41:02.082821 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Jan 30 17:41:02.082993 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 30 17:41:02.083168 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Jan 30 
17:41:02.083380 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Jan 30 17:41:02.083554 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 30 17:41:02.083741 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Jan 30 17:41:02.083904 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Jan 30 17:41:02.084078 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 30 17:41:02.084292 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Jan 30 17:41:02.084468 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Jan 30 17:41:02.084644 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 30 17:41:02.084809 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Jan 30 17:41:02.084971 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Jan 30 17:41:02.085142 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 30 17:41:02.085333 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Jan 30 17:41:02.085497 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Jan 30 17:41:02.085675 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 30 17:41:02.085697 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 30 17:41:02.085712 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jan 30 17:41:02.085733 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jan 30 17:41:02.085747 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 30 17:41:02.085761 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 30 17:41:02.085775 kernel: 
i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 30 17:41:02.085788 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 30 17:41:02.085802 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 30 17:41:02.085980 kernel: rtc_cmos 00:03: RTC can wake from S4 Jan 30 17:41:02.086002 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 30 17:41:02.086163 kernel: rtc_cmos 00:03: registered as rtc0 Jan 30 17:41:02.086357 kernel: rtc_cmos 00:03: setting system clock to 2025-01-30T17:41:01 UTC (1738258861) Jan 30 17:41:02.086533 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Jan 30 17:41:02.086553 kernel: intel_pstate: CPU model not supported Jan 30 17:41:02.086567 kernel: NET: Registered PF_INET6 protocol family Jan 30 17:41:02.086580 kernel: Segment Routing with IPv6 Jan 30 17:41:02.086594 kernel: In-situ OAM (IOAM) with IPv6 Jan 30 17:41:02.086607 kernel: NET: Registered PF_PACKET protocol family Jan 30 17:41:02.086631 kernel: Key type dns_resolver registered Jan 30 17:41:02.086653 kernel: IPI shorthand broadcast: enabled Jan 30 17:41:02.086667 kernel: sched_clock: Marking stable (1276003508, 241536563)->(1651225976, -133685905) Jan 30 17:41:02.086680 kernel: registered taskstats version 1 Jan 30 17:41:02.086694 kernel: Loading compiled-in X.509 certificates Jan 30 17:41:02.086707 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 1efdcbe72fc44d29e4e6411cf9a3e64046be4375' Jan 30 17:41:02.086720 kernel: Key type .fscrypt registered Jan 30 17:41:02.086733 kernel: Key type fscrypt-provisioning registered Jan 30 17:41:02.086747 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jan 30 17:41:02.086760 kernel: ima: Allocated hash algorithm: sha1
Jan 30 17:41:02.086779 kernel: ima: No architecture policies found
Jan 30 17:41:02.086797 kernel: clk: Disabling unused clocks
Jan 30 17:41:02.086810 kernel: Freeing unused kernel image (initmem) memory: 42844K
Jan 30 17:41:02.086824 kernel: Write protecting the kernel read-only data: 36864k
Jan 30 17:41:02.086837 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K
Jan 30 17:41:02.086851 kernel: Run /init as init process
Jan 30 17:41:02.086864 kernel: with arguments:
Jan 30 17:41:02.086878 kernel: /init
Jan 30 17:41:02.086891 kernel: with environment:
Jan 30 17:41:02.086908 kernel: HOME=/
Jan 30 17:41:02.086921 kernel: TERM=linux
Jan 30 17:41:02.086935 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 30 17:41:02.086960 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 30 17:41:02.086976 systemd[1]: Detected virtualization kvm.
Jan 30 17:41:02.086991 systemd[1]: Detected architecture x86-64.
Jan 30 17:41:02.087004 systemd[1]: Running in initrd.
Jan 30 17:41:02.087022 systemd[1]: No hostname configured, using default hostname.
Jan 30 17:41:02.087041 systemd[1]: Hostname set to .
Jan 30 17:41:02.087056 systemd[1]: Initializing machine ID from VM UUID.
Jan 30 17:41:02.087070 systemd[1]: Queued start job for default target initrd.target.
Jan 30 17:41:02.087092 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 17:41:02.087107 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 17:41:02.087121 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 30 17:41:02.087136 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 30 17:41:02.087157 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 30 17:41:02.087199 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 30 17:41:02.087222 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 30 17:41:02.087237 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 30 17:41:02.087251 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 17:41:02.087272 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 30 17:41:02.087286 systemd[1]: Reached target paths.target - Path Units.
Jan 30 17:41:02.087300 systemd[1]: Reached target slices.target - Slice Units.
Jan 30 17:41:02.087321 systemd[1]: Reached target swap.target - Swaps.
Jan 30 17:41:02.087335 systemd[1]: Reached target timers.target - Timer Units.
Jan 30 17:41:02.087350 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 30 17:41:02.087364 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 30 17:41:02.087378 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 30 17:41:02.087393 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 30 17:41:02.087407 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 30 17:41:02.087421 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 30 17:41:02.087441 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 30 17:41:02.087455 systemd[1]: Reached target sockets.target - Socket Units.
Jan 30 17:41:02.087470 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 30 17:41:02.087484 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 30 17:41:02.087498 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 30 17:41:02.087512 systemd[1]: Starting systemd-fsck-usr.service...
Jan 30 17:41:02.087526 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 30 17:41:02.087541 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 30 17:41:02.087555 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 17:41:02.087575 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 30 17:41:02.087635 systemd-journald[201]: Collecting audit messages is disabled.
Jan 30 17:41:02.087670 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 30 17:41:02.087685 systemd[1]: Finished systemd-fsck-usr.service.
Jan 30 17:41:02.087707 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 30 17:41:02.087721 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 30 17:41:02.087735 kernel: Bridge firewalling registered
Jan 30 17:41:02.087749 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 30 17:41:02.087769 systemd-journald[201]: Journal started
Jan 30 17:41:02.087796 systemd-journald[201]: Runtime Journal (/run/log/journal/62e8472b50c64bcc9f85b352bae8ea5c) is 4.7M, max 38.0M, 33.2M free.
Jan 30 17:41:02.021694 systemd-modules-load[202]: Inserted module 'overlay'
Jan 30 17:41:02.147591 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 30 17:41:02.077312 systemd-modules-load[202]: Inserted module 'br_netfilter'
Jan 30 17:41:02.148635 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 17:41:02.150061 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 30 17:41:02.163395 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 17:41:02.166087 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 30 17:41:02.168495 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 30 17:41:02.174366 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 30 17:41:02.194452 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 30 17:41:02.197422 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 30 17:41:02.208403 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 30 17:41:02.210708 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 17:41:02.211789 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 30 17:41:02.216379 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 30 17:41:02.238712 dracut-cmdline[236]: dracut-dracut-053
Jan 30 17:41:02.242105 dracut-cmdline[236]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681
Jan 30 17:41:02.249870 systemd-resolved[232]: Positive Trust Anchors:
Jan 30 17:41:02.250822 systemd-resolved[232]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 30 17:41:02.250866 systemd-resolved[232]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 30 17:41:02.259146 systemd-resolved[232]: Defaulting to hostname 'linux'.
Jan 30 17:41:02.261153 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 30 17:41:02.262043 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 30 17:41:02.343231 kernel: SCSI subsystem initialized
Jan 30 17:41:02.355208 kernel: Loading iSCSI transport class v2.0-870.
Jan 30 17:41:02.369225 kernel: iscsi: registered transport (tcp)
Jan 30 17:41:02.396351 kernel: iscsi: registered transport (qla4xxx)
Jan 30 17:41:02.396417 kernel: QLogic iSCSI HBA Driver
Jan 30 17:41:02.451519 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 30 17:41:02.464497 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 30 17:41:02.495888 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 30 17:41:02.495962 kernel: device-mapper: uevent: version 1.0.3
Jan 30 17:41:02.496742 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 30 17:41:02.545236 kernel: raid6: sse2x4 gen() 13388 MB/s
Jan 30 17:41:02.563227 kernel: raid6: sse2x2 gen() 8958 MB/s
Jan 30 17:41:02.582067 kernel: raid6: sse2x1 gen() 9520 MB/s
Jan 30 17:41:02.582109 kernel: raid6: using algorithm sse2x4 gen() 13388 MB/s
Jan 30 17:41:02.600942 kernel: raid6: .... xor() 7671 MB/s, rmw enabled
Jan 30 17:41:02.601003 kernel: raid6: using ssse3x2 recovery algorithm
Jan 30 17:41:02.627255 kernel: xor: automatically using best checksumming function avx
Jan 30 17:41:02.829218 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 30 17:41:02.844003 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 30 17:41:02.850453 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 30 17:41:02.880550 systemd-udevd[418]: Using default interface naming scheme 'v255'.
Jan 30 17:41:02.887717 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 30 17:41:02.897589 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 30 17:41:02.919345 dracut-pre-trigger[429]: rd.md=0: removing MD RAID activation
Jan 30 17:41:02.960043 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 30 17:41:02.965420 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 30 17:41:03.082154 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 30 17:41:03.092809 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 30 17:41:03.118942 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 30 17:41:03.120910 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 30 17:41:03.122030 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 30 17:41:03.124865 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 30 17:41:03.133730 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 30 17:41:03.160005 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 30 17:41:03.215258 kernel: virtio_blk virtio1: 2/0/0 default/read/poll queues
Jan 30 17:41:03.294062 kernel: cryptd: max_cpu_qlen set to 1000
Jan 30 17:41:03.294092 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Jan 30 17:41:03.294919 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 30 17:41:03.294942 kernel: GPT:17805311 != 125829119
Jan 30 17:41:03.294960 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 30 17:41:03.294977 kernel: GPT:17805311 != 125829119
Jan 30 17:41:03.294994 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 30 17:41:03.295011 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 30 17:41:03.295037 kernel: AVX version of gcm_enc/dec engaged.
Jan 30 17:41:03.295056 kernel: AES CTR mode by8 optimization enabled
Jan 30 17:41:03.295116 kernel: ACPI: bus type USB registered
Jan 30 17:41:03.295135 kernel: usbcore: registered new interface driver usbfs
Jan 30 17:41:03.295152 kernel: usbcore: registered new interface driver hub
Jan 30 17:41:03.295170 kernel: usbcore: registered new device driver usb
Jan 30 17:41:03.271552 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 30 17:41:03.271767 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 17:41:03.272844 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 17:41:03.273631 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 30 17:41:03.273800 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 17:41:03.274662 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 17:41:03.283622 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 17:41:03.341650 kernel: libata version 3.00 loaded.
Jan 30 17:41:03.383246 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller
Jan 30 17:41:03.412738 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1
Jan 30 17:41:03.413000 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
Jan 30 17:41:03.413652 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller
Jan 30 17:41:03.413872 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2
Jan 30 17:41:03.414079 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed
Jan 30 17:41:03.414343 kernel: BTRFS: device fsid 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (469)
Jan 30 17:41:03.414367 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (475)
Jan 30 17:41:03.414385 kernel: hub 1-0:1.0: USB hub found
Jan 30 17:41:03.414634 kernel: hub 1-0:1.0: 4 ports detected
Jan 30 17:41:03.414837 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Jan 30 17:41:03.415057 kernel: ahci 0000:00:1f.2: version 3.0
Jan 30 17:41:03.425424 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jan 30 17:41:03.425465 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Jan 30 17:41:03.425722 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jan 30 17:41:03.425919 kernel: hub 2-0:1.0: USB hub found
Jan 30 17:41:03.426155 kernel: hub 2-0:1.0: 4 ports detected
Jan 30 17:41:03.426402 kernel: scsi host0: ahci
Jan 30 17:41:03.426612 kernel: scsi host1: ahci
Jan 30 17:41:03.426818 kernel: scsi host2: ahci
Jan 30 17:41:03.427021 kernel: scsi host3: ahci
Jan 30 17:41:03.428387 kernel: scsi host4: ahci
Jan 30 17:41:03.428590 kernel: scsi host5: ahci
Jan 30 17:41:03.428806 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 41
Jan 30 17:41:03.428828 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 41
Jan 30 17:41:03.428846 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 41
Jan 30 17:41:03.428872 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 41
Jan 30 17:41:03.428891 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 41
Jan 30 17:41:03.428908 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 41
Jan 30 17:41:03.399226 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 30 17:41:03.496997 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 17:41:03.505083 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 30 17:41:03.517008 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 30 17:41:03.517858 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 30 17:41:03.525837 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 30 17:41:03.532401 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 30 17:41:03.537357 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 17:41:03.541295 disk-uuid[566]: Primary Header is updated.
Jan 30 17:41:03.541295 disk-uuid[566]: Secondary Entries is updated.
Jan 30 17:41:03.541295 disk-uuid[566]: Secondary Header is updated.
Jan 30 17:41:03.549256 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 30 17:41:03.554229 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 30 17:41:03.562208 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 30 17:41:03.582620 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 17:41:03.645223 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd
Jan 30 17:41:03.747135 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jan 30 17:41:03.747234 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jan 30 17:41:03.747268 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jan 30 17:41:03.747285 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Jan 30 17:41:03.747301 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jan 30 17:41:03.747316 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jan 30 17:41:03.792233 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 30 17:41:03.798564 kernel: usbcore: registered new interface driver usbhid
Jan 30 17:41:03.798619 kernel: usbhid: USB HID core driver
Jan 30 17:41:03.807023 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2
Jan 30 17:41:03.807062 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0
Jan 30 17:41:04.562232 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 30 17:41:04.563238 disk-uuid[567]: The operation has completed successfully.
Jan 30 17:41:04.617090 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 30 17:41:04.617267 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 30 17:41:04.634421 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 30 17:41:04.648338 sh[588]: Success
Jan 30 17:41:04.666263 kernel: device-mapper: verity: sha256 using implementation "sha256-avx"
Jan 30 17:41:04.729710 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 30 17:41:04.739311 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 30 17:41:04.741244 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 30 17:41:04.770427 kernel: BTRFS info (device dm-0): first mount of filesystem 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a
Jan 30 17:41:04.770486 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 30 17:41:04.772818 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 30 17:41:04.776280 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 30 17:41:04.776312 kernel: BTRFS info (device dm-0): using free space tree
Jan 30 17:41:04.787635 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 30 17:41:04.789112 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 30 17:41:04.795382 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 30 17:41:04.797254 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 30 17:41:04.818680 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 30 17:41:04.818740 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 30 17:41:04.818761 kernel: BTRFS info (device vda6): using free space tree
Jan 30 17:41:04.824202 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 30 17:41:04.839055 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 30 17:41:04.841606 kernel: BTRFS info (device vda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 30 17:41:04.848944 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 30 17:41:04.856449 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 30 17:41:04.951686 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 30 17:41:04.965450 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 30 17:41:05.004092 systemd-networkd[770]: lo: Link UP
Jan 30 17:41:05.004106 systemd-networkd[770]: lo: Gained carrier
Jan 30 17:41:05.010549 systemd-networkd[770]: Enumeration completed
Jan 30 17:41:05.011218 ignition[692]: Ignition 2.19.0
Jan 30 17:41:05.010946 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 30 17:41:05.011234 ignition[692]: Stage: fetch-offline
Jan 30 17:41:05.012673 systemd-networkd[770]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 17:41:05.011328 ignition[692]: no configs at "/usr/lib/ignition/base.d"
Jan 30 17:41:05.012679 systemd-networkd[770]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 30 17:41:05.011360 ignition[692]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 30 17:41:05.014235 systemd[1]: Reached target network.target - Network.
Jan 30 17:41:05.011543 ignition[692]: parsed url from cmdline: ""
Jan 30 17:41:05.016006 systemd-networkd[770]: eth0: Link UP
Jan 30 17:41:05.011550 ignition[692]: no config URL provided
Jan 30 17:41:05.016012 systemd-networkd[770]: eth0: Gained carrier
Jan 30 17:41:05.011579 ignition[692]: reading system config file "/usr/lib/ignition/user.ign"
Jan 30 17:41:05.016023 systemd-networkd[770]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 17:41:05.011597 ignition[692]: no config at "/usr/lib/ignition/user.ign"
Jan 30 17:41:05.016733 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 30 17:41:05.011606 ignition[692]: failed to fetch config: resource requires networking
Jan 30 17:41:05.024428 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 30 17:41:05.011908 ignition[692]: Ignition finished successfully
Jan 30 17:41:05.038308 systemd-networkd[770]: eth0: DHCPv4 address 10.244.11.222/30, gateway 10.244.11.221 acquired from 10.244.11.221
Jan 30 17:41:05.057794 ignition[778]: Ignition 2.19.0
Jan 30 17:41:05.057812 ignition[778]: Stage: fetch
Jan 30 17:41:05.058074 ignition[778]: no configs at "/usr/lib/ignition/base.d"
Jan 30 17:41:05.058094 ignition[778]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 30 17:41:05.059269 ignition[778]: parsed url from cmdline: ""
Jan 30 17:41:05.059277 ignition[778]: no config URL provided
Jan 30 17:41:05.059287 ignition[778]: reading system config file "/usr/lib/ignition/user.ign"
Jan 30 17:41:05.059304 ignition[778]: no config at "/usr/lib/ignition/user.ign"
Jan 30 17:41:05.059445 ignition[778]: config drive ("/dev/disk/by-label/config-2") not found. Waiting...
Jan 30 17:41:05.059522 ignition[778]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1
Jan 30 17:41:05.059542 ignition[778]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting...
Jan 30 17:41:05.076071 ignition[778]: GET result: OK
Jan 30 17:41:05.076214 ignition[778]: parsing config with SHA512: ce7cdb9848e8ae95362b5a9629969c52b6331d10e6a6a3769e04c1cfff946e8392e3d734c410d25bf3a17e1e7e4665f647c6b1e3dbb5b3176e42553960ed9f34
Jan 30 17:41:05.080145 unknown[778]: fetched base config from "system"
Jan 30 17:41:05.080162 unknown[778]: fetched base config from "system"
Jan 30 17:41:05.080502 ignition[778]: fetch: fetch complete
Jan 30 17:41:05.080172 unknown[778]: fetched user config from "openstack"
Jan 30 17:41:05.080510 ignition[778]: fetch: fetch passed
Jan 30 17:41:05.082516 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 30 17:41:05.080597 ignition[778]: Ignition finished successfully
Jan 30 17:41:05.093491 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 30 17:41:05.113368 ignition[786]: Ignition 2.19.0
Jan 30 17:41:05.113395 ignition[786]: Stage: kargs
Jan 30 17:41:05.113655 ignition[786]: no configs at "/usr/lib/ignition/base.d"
Jan 30 17:41:05.113676 ignition[786]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 30 17:41:05.116019 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 30 17:41:05.114723 ignition[786]: kargs: kargs passed
Jan 30 17:41:05.114799 ignition[786]: Ignition finished successfully
Jan 30 17:41:05.124399 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 30 17:41:05.142389 ignition[792]: Ignition 2.19.0
Jan 30 17:41:05.142411 ignition[792]: Stage: disks
Jan 30 17:41:05.142672 ignition[792]: no configs at "/usr/lib/ignition/base.d"
Jan 30 17:41:05.145291 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 30 17:41:05.142692 ignition[792]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 30 17:41:05.143609 ignition[792]: disks: disks passed
Jan 30 17:41:05.148364 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 30 17:41:05.143683 ignition[792]: Ignition finished successfully
Jan 30 17:41:05.149511 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 30 17:41:05.150960 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 30 17:41:05.152517 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 30 17:41:05.153956 systemd[1]: Reached target basic.target - Basic System.
Jan 30 17:41:05.172421 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 30 17:41:05.190645 systemd-fsck[800]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Jan 30 17:41:05.194339 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 30 17:41:05.210369 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 30 17:41:05.332242 kernel: EXT4-fs (vda9): mounted filesystem 9f41abed-fd12-4e57-bcd4-5c0ef7f8a1bf r/w with ordered data mode. Quota mode: none.
Jan 30 17:41:05.333406 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 30 17:41:05.334858 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 30 17:41:05.349459 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 30 17:41:05.352250 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 30 17:41:05.354164 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 30 17:41:05.360379 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent...
Jan 30 17:41:05.374610 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (808)
Jan 30 17:41:05.374665 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 30 17:41:05.374695 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 30 17:41:05.374714 kernel: BTRFS info (device vda6): using free space tree
Jan 30 17:41:05.374731 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 30 17:41:05.372716 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 30 17:41:05.372766 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 30 17:41:05.378089 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 30 17:41:05.379048 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 30 17:41:05.390384 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 30 17:41:05.468237 initrd-setup-root[836]: cut: /sysroot/etc/passwd: No such file or directory
Jan 30 17:41:05.476956 initrd-setup-root[843]: cut: /sysroot/etc/group: No such file or directory
Jan 30 17:41:05.488882 initrd-setup-root[850]: cut: /sysroot/etc/shadow: No such file or directory
Jan 30 17:41:05.496426 initrd-setup-root[857]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 30 17:41:05.600024 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 30 17:41:05.606303 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 30 17:41:05.608386 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 30 17:41:05.623202 kernel: BTRFS info (device vda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 30 17:41:05.651844 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 30 17:41:05.655639 ignition[924]: INFO : Ignition 2.19.0
Jan 30 17:41:05.655639 ignition[924]: INFO : Stage: mount
Jan 30 17:41:05.655639 ignition[924]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 30 17:41:05.655639 ignition[924]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 30 17:41:05.655639 ignition[924]: INFO : mount: mount passed
Jan 30 17:41:05.655639 ignition[924]: INFO : Ignition finished successfully
Jan 30 17:41:05.655852 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 30 17:41:05.768894 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 30 17:41:06.169526 systemd-networkd[770]: eth0: Gained IPv6LL
Jan 30 17:41:07.679003 systemd-networkd[770]: eth0: Ignoring DHCPv6 address 2a02:1348:17d:2f7:24:19ff:fef4:bde/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:17d:2f7:24:19ff:fef4:bde/64 assigned by NDisc.
Jan 30 17:41:07.679027 systemd-networkd[770]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no.
Jan 30 17:41:12.539814 coreos-metadata[810]: Jan 30 17:41:12.539 WARN failed to locate config-drive, using the metadata service API instead
Jan 30 17:41:12.562138 coreos-metadata[810]: Jan 30 17:41:12.562 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Jan 30 17:41:12.577741 coreos-metadata[810]: Jan 30 17:41:12.577 INFO Fetch successful
Jan 30 17:41:12.578610 coreos-metadata[810]: Jan 30 17:41:12.577 INFO wrote hostname srv-8ltbt.gb1.brightbox.com to /sysroot/etc/hostname
Jan 30 17:41:12.582540 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully.
Jan 30 17:41:12.582745 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent.
Jan 30 17:41:12.595365 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 30 17:41:12.604649 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 30 17:41:12.625200 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (942)
Jan 30 17:41:12.625277 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 30 17:41:12.627594 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 30 17:41:12.627638 kernel: BTRFS info (device vda6): using free space tree
Jan 30 17:41:12.633236 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 30 17:41:12.636355 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 30 17:41:12.664016 ignition[960]: INFO : Ignition 2.19.0
Jan 30 17:41:12.664016 ignition[960]: INFO : Stage: files
Jan 30 17:41:12.665860 ignition[960]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 30 17:41:12.665860 ignition[960]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 30 17:41:12.665860 ignition[960]: DEBUG : files: compiled without relabeling support, skipping
Jan 30 17:41:12.668812 ignition[960]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 30 17:41:12.668812 ignition[960]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 30 17:41:12.671119 ignition[960]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 30 17:41:12.672311 ignition[960]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 30 17:41:12.673464 unknown[960]: wrote ssh authorized keys file for user: core
Jan 30 17:41:12.674505 ignition[960]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 30 17:41:12.675599 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh"
Jan 30 17:41:12.676864 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh"
Jan 30 17:41:12.676864 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 30 17:41:12.676864 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 30 17:41:12.676864 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Jan 30 17:41:12.676864 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Jan 30 17:41:12.676864 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Jan 30 17:41:12.676864 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw: attempt #1
Jan 30 17:41:13.319317 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Jan 30 17:41:14.516256 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Jan 30 17:41:14.518734 ignition[960]: INFO : files: createResultFile: createFiles: op(7): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 30 17:41:14.518734 ignition[960]: INFO : files: createResultFile: createFiles: op(7): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 30 17:41:14.518734 ignition[960]: INFO : files: files passed
Jan 30 17:41:14.518734 ignition[960]: INFO : Ignition finished successfully
Jan 30 17:41:14.518675 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 30 17:41:14.526432 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 30 17:41:14.535475 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 30 17:41:14.541287 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 30 17:41:14.541460 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 30 17:41:14.551914 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 30 17:41:14.553535 initrd-setup-root-after-ignition[992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 30 17:41:14.554674 initrd-setup-root-after-ignition[988]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 30 17:41:14.555919 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 30 17:41:14.557102 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 30 17:41:14.564387 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 30 17:41:14.609120 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 30 17:41:14.609305 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 30 17:41:14.611329 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 30 17:41:14.612755 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 30 17:41:14.614376 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 30 17:41:14.620415 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 30 17:41:14.639190 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 30 17:41:14.644409 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 30 17:41:14.675172 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 30 17:41:14.676242 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 30 17:41:14.678017 systemd[1]: Stopped target timers.target - Timer Units.
Jan 30 17:41:14.679561 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 30 17:41:14.679747 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 30 17:41:14.681546 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 30 17:41:14.682486 systemd[1]: Stopped target basic.target - Basic System.
Jan 30 17:41:14.683973 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 30 17:41:14.685459 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 30 17:41:14.686987 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 30 17:41:14.688674 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 30 17:41:14.690297 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 30 17:41:14.691931 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 30 17:41:14.693503 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 30 17:41:14.695158 systemd[1]: Stopped target swap.target - Swaps.
Jan 30 17:41:14.696583 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 30 17:41:14.696778 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 30 17:41:14.700653 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 30 17:41:14.701566 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 17:41:14.703108 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 30 17:41:14.703621 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 17:41:14.704848 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 30 17:41:14.705021 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 30 17:41:14.707051 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 30 17:41:14.707319 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 30 17:41:14.709108 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 30 17:41:14.709303 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 30 17:41:14.717490 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 30 17:41:14.724432 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 30 17:41:14.726274 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 30 17:41:14.726485 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 30 17:41:14.730440 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 30 17:41:14.730623 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 30 17:41:14.741487 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 30 17:41:14.743262 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 30 17:41:14.750060 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 30 17:41:14.754988 ignition[1012]: INFO : Ignition 2.19.0
Jan 30 17:41:14.754988 ignition[1012]: INFO : Stage: umount
Jan 30 17:41:14.756793 ignition[1012]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 30 17:41:14.756793 ignition[1012]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 30 17:41:14.756793 ignition[1012]: INFO : umount: umount passed
Jan 30 17:41:14.756793 ignition[1012]: INFO : Ignition finished successfully
Jan 30 17:41:14.757508 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 30 17:41:14.757681 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 30 17:41:14.760129 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 30 17:41:14.760302 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 30 17:41:14.761804 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 30 17:41:14.761875 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 30 17:41:14.763387 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 30 17:41:14.763474 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 30 17:41:14.764779 systemd[1]: Stopped target network.target - Network.
Jan 30 17:41:14.766114 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 30 17:41:14.766220 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 30 17:41:14.767659 systemd[1]: Stopped target paths.target - Path Units.
Jan 30 17:41:14.768956 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 30 17:41:14.772258 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 17:41:14.773871 systemd[1]: Stopped target slices.target - Slice Units.
Jan 30 17:41:14.775519 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 30 17:41:14.776954 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 30 17:41:14.777019 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 30 17:41:14.784481 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 30 17:41:14.784548 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 30 17:41:14.786190 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 30 17:41:14.786286 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 30 17:41:14.787648 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 30 17:41:14.787729 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 30 17:41:14.789296 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 30 17:41:14.790990 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 30 17:41:14.795025 systemd-networkd[770]: eth0: DHCPv6 lease lost
Jan 30 17:41:14.797303 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 30 17:41:14.797516 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 30 17:41:14.799560 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 30 17:41:14.799619 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 30 17:41:14.807350 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 30 17:41:14.808577 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 30 17:41:14.808652 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 30 17:41:14.811305 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 30 17:41:14.815792 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 30 17:41:14.815966 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 30 17:41:14.823673 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 30 17:41:14.823902 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 30 17:41:14.827918 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 30 17:41:14.828012 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 30 17:41:14.829729 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 30 17:41:14.829788 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 30 17:41:14.831337 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 30 17:41:14.831416 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 30 17:41:14.833454 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 30 17:41:14.833521 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 30 17:41:14.835033 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 30 17:41:14.835112 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 17:41:14.845441 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 30 17:41:14.846883 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 30 17:41:14.846957 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 30 17:41:14.848439 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 30 17:41:14.848507 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 30 17:41:14.849971 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 30 17:41:14.850037 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 30 17:41:14.851661 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jan 30 17:41:14.851727 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 30 17:41:14.856020 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 30 17:41:14.856090 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 30 17:41:14.857532 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 30 17:41:14.857598 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 30 17:41:14.860896 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 30 17:41:14.860982 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 17:41:14.863296 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 30 17:41:14.863458 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 30 17:41:14.865642 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 30 17:41:14.865802 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 30 17:41:14.867283 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 30 17:41:14.867421 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 30 17:41:14.870634 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 30 17:41:14.871474 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 30 17:41:14.871557 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 30 17:41:14.879439 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 30 17:41:14.890668 systemd[1]: Switching root.
Jan 30 17:41:14.925115 systemd-journald[201]: Journal stopped
Jan 30 17:41:16.369674 systemd-journald[201]: Received SIGTERM from PID 1 (systemd).
Jan 30 17:41:16.369851 kernel: SELinux: policy capability network_peer_controls=1
Jan 30 17:41:16.369877 kernel: SELinux: policy capability open_perms=1
Jan 30 17:41:16.369903 kernel: SELinux: policy capability extended_socket_class=1
Jan 30 17:41:16.369931 kernel: SELinux: policy capability always_check_network=0
Jan 30 17:41:16.369951 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 30 17:41:16.369984 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 30 17:41:16.370003 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 30 17:41:16.370027 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 30 17:41:16.370059 kernel: audit: type=1403 audit(1738258875.173:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 30 17:41:16.370099 systemd[1]: Successfully loaded SELinux policy in 49.364ms.
Jan 30 17:41:16.370146 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 21.885ms.
Jan 30 17:41:16.370199 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 30 17:41:16.370226 systemd[1]: Detected virtualization kvm.
Jan 30 17:41:16.370269 systemd[1]: Detected architecture x86-64.
Jan 30 17:41:16.370297 systemd[1]: Detected first boot.
Jan 30 17:41:16.370318 systemd[1]: Hostname set to .
Jan 30 17:41:16.370338 systemd[1]: Initializing machine ID from VM UUID.
Jan 30 17:41:16.370358 zram_generator::config[1059]: No configuration found.
Jan 30 17:41:16.370402 systemd[1]: Populated /etc with preset unit settings.
Jan 30 17:41:16.370424 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 30 17:41:16.370450 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 30 17:41:16.370484 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 30 17:41:16.370515 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 30 17:41:16.370543 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 30 17:41:16.370571 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 30 17:41:16.370592 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 30 17:41:16.370628 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 30 17:41:16.370650 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 30 17:41:16.370670 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 30 17:41:16.370706 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 30 17:41:16.370728 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 17:41:16.370749 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 17:41:16.370770 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 30 17:41:16.370790 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 30 17:41:16.370810 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 30 17:41:16.370842 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 30 17:41:16.370870 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 30 17:41:16.370891 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 17:41:16.370910 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 30 17:41:16.370955 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 30 17:41:16.370977 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 30 17:41:16.371009 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 30 17:41:16.371029 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 30 17:41:16.371055 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 30 17:41:16.371082 systemd[1]: Reached target slices.target - Slice Units.
Jan 30 17:41:16.371118 systemd[1]: Reached target swap.target - Swaps.
Jan 30 17:41:16.371140 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 30 17:41:16.371166 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 30 17:41:16.373323 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 30 17:41:16.373349 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 30 17:41:16.373382 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 30 17:41:16.373411 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 30 17:41:16.373432 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 30 17:41:16.373459 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 30 17:41:16.373493 systemd[1]: Mounting media.mount - External Media Directory...
Jan 30 17:41:16.373516 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 17:41:16.373547 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 30 17:41:16.373569 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 30 17:41:16.373589 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 30 17:41:16.373625 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 30 17:41:16.373647 systemd[1]: Reached target machines.target - Containers.
Jan 30 17:41:16.373680 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 30 17:41:16.373712 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 30 17:41:16.373734 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 30 17:41:16.373766 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 30 17:41:16.373813 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 30 17:41:16.373847 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 30 17:41:16.373875 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 17:41:16.373910 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 30 17:41:16.373932 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 17:41:16.373959 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 30 17:41:16.373999 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 30 17:41:16.374019 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 30 17:41:16.374039 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 30 17:41:16.374057 systemd[1]: Stopped systemd-fsck-usr.service. Jan 30 17:41:16.374076 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 30 17:41:16.374108 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 30 17:41:16.374129 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 30 17:41:16.374162 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 30 17:41:16.374181 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 30 17:41:16.374216 systemd[1]: verity-setup.service: Deactivated successfully. Jan 30 17:41:16.374246 systemd[1]: Stopped verity-setup.service. Jan 30 17:41:16.374268 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 17:41:16.374289 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 30 17:41:16.374309 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. 
Jan 30 17:41:16.374342 systemd[1]: Mounted media.mount - External Media Directory. Jan 30 17:41:16.374372 kernel: fuse: init (API version 7.39) Jan 30 17:41:16.374395 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 30 17:41:16.374416 kernel: loop: module loaded Jan 30 17:41:16.374435 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 30 17:41:16.374469 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 30 17:41:16.374491 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 30 17:41:16.374518 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 17:41:16.374546 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 30 17:41:16.374573 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 30 17:41:16.374594 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 17:41:16.374628 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 17:41:16.374690 systemd-journald[1155]: Collecting audit messages is disabled. Jan 30 17:41:16.374742 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 17:41:16.374770 systemd-journald[1155]: Journal started Jan 30 17:41:16.374810 systemd-journald[1155]: Runtime Journal (/run/log/journal/62e8472b50c64bcc9f85b352bae8ea5c) is 4.7M, max 38.0M, 33.2M free. Jan 30 17:41:15.966290 systemd[1]: Queued start job for default target multi-user.target. Jan 30 17:41:16.377304 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 17:41:15.987576 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 30 17:41:15.988290 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 30 17:41:16.381242 systemd[1]: Started systemd-journald.service - Journal Service. Jan 30 17:41:16.384564 systemd[1]: modprobe@fuse.service: Deactivated successfully. 
Jan 30 17:41:16.384817 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 30 17:41:16.386042 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 17:41:16.386295 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 17:41:16.393701 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 30 17:41:16.394972 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 30 17:41:16.396120 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 30 17:41:16.420816 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 30 17:41:16.447671 kernel: ACPI: bus type drm_connector registered Jan 30 17:41:16.451300 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 30 17:41:16.461338 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 30 17:41:16.463284 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 30 17:41:16.463382 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 30 17:41:16.467190 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 30 17:41:16.473434 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 30 17:41:16.480390 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 30 17:41:16.481322 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 17:41:16.485374 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 30 17:41:16.493458 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... 
Jan 30 17:41:16.494334 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 17:41:16.497418 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 30 17:41:16.498301 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 17:41:16.500808 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 17:41:16.510434 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 30 17:41:16.520433 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 30 17:41:16.525809 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 30 17:41:16.526100 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 30 17:41:16.527321 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 30 17:41:16.528373 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 30 17:41:16.529535 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 30 17:41:16.575517 systemd-journald[1155]: Time spent on flushing to /var/log/journal/62e8472b50c64bcc9f85b352bae8ea5c is 123.427ms for 1125 entries. Jan 30 17:41:16.575517 systemd-journald[1155]: System Journal (/var/log/journal/62e8472b50c64bcc9f85b352bae8ea5c) is 8.0M, max 584.8M, 576.8M free. Jan 30 17:41:16.713457 systemd-journald[1155]: Received client request to flush runtime journal. 
Jan 30 17:41:16.713516 kernel: loop0: detected capacity change from 0 to 218376 Jan 30 17:41:16.713619 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 30 17:41:16.713656 kernel: loop1: detected capacity change from 0 to 140768 Jan 30 17:41:16.580124 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 30 17:41:16.583036 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 30 17:41:16.597170 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 30 17:41:16.655868 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 17:41:16.680369 systemd-tmpfiles[1188]: ACLs are not supported, ignoring. Jan 30 17:41:16.680394 systemd-tmpfiles[1188]: ACLs are not supported, ignoring. Jan 30 17:41:16.684158 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 17:41:16.696155 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 30 17:41:16.700845 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 30 17:41:16.705438 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 30 17:41:16.709728 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 30 17:41:16.725455 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 30 17:41:16.726761 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 30 17:41:16.754238 kernel: loop2: detected capacity change from 0 to 8 Jan 30 17:41:16.774136 udevadm[1203]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. 
Jan 30 17:41:16.787253 kernel: loop3: detected capacity change from 0 to 142488 Jan 30 17:41:16.824996 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 30 17:41:16.845485 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 30 17:41:16.881222 kernel: loop4: detected capacity change from 0 to 218376 Jan 30 17:41:16.887155 systemd-tmpfiles[1214]: ACLs are not supported, ignoring. Jan 30 17:41:16.887212 systemd-tmpfiles[1214]: ACLs are not supported, ignoring. Jan 30 17:41:16.897612 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 17:41:16.906210 kernel: loop5: detected capacity change from 0 to 140768 Jan 30 17:41:16.924393 kernel: loop6: detected capacity change from 0 to 8 Jan 30 17:41:16.929252 kernel: loop7: detected capacity change from 0 to 142488 Jan 30 17:41:16.943837 (sd-merge)[1217]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'. Jan 30 17:41:16.944674 (sd-merge)[1217]: Merged extensions into '/usr'. Jan 30 17:41:16.956807 systemd[1]: Reloading requested from client PID 1187 ('systemd-sysext') (unit systemd-sysext.service)... Jan 30 17:41:16.957022 systemd[1]: Reloading... Jan 30 17:41:17.049242 zram_generator::config[1241]: No configuration found. Jan 30 17:41:17.350873 ldconfig[1182]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 30 17:41:17.387332 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 17:41:17.456293 systemd[1]: Reloading finished in 496 ms. Jan 30 17:41:17.501270 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 30 17:41:17.502506 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. 
Jan 30 17:41:17.512409 systemd[1]: Starting ensure-sysext.service... Jan 30 17:41:17.519409 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 30 17:41:17.540385 systemd[1]: Reloading requested from client PID 1300 ('systemctl') (unit ensure-sysext.service)... Jan 30 17:41:17.540411 systemd[1]: Reloading... Jan 30 17:41:17.564551 systemd-tmpfiles[1301]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 30 17:41:17.565139 systemd-tmpfiles[1301]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 30 17:41:17.570666 systemd-tmpfiles[1301]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 30 17:41:17.571109 systemd-tmpfiles[1301]: ACLs are not supported, ignoring. Jan 30 17:41:17.573164 systemd-tmpfiles[1301]: ACLs are not supported, ignoring. Jan 30 17:41:17.580308 systemd-tmpfiles[1301]: Detected autofs mount point /boot during canonicalization of boot. Jan 30 17:41:17.580442 systemd-tmpfiles[1301]: Skipping /boot Jan 30 17:41:17.598961 systemd-tmpfiles[1301]: Detected autofs mount point /boot during canonicalization of boot. Jan 30 17:41:17.599110 systemd-tmpfiles[1301]: Skipping /boot Jan 30 17:41:17.634215 zram_generator::config[1328]: No configuration found. Jan 30 17:41:17.806030 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 17:41:17.873789 systemd[1]: Reloading finished in 332 ms. Jan 30 17:41:17.894161 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 30 17:41:17.904927 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 17:41:17.919559 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... 
Jan 30 17:41:17.925314 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 30 17:41:17.928013 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 30 17:41:17.934525 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 30 17:41:17.940825 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 17:41:17.949517 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 30 17:41:17.954175 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 17:41:17.954496 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 17:41:17.962533 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 17:41:17.967530 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 17:41:17.971460 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 17:41:17.972364 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 17:41:17.972534 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 17:41:17.978232 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 17:41:17.978530 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 17:41:17.978769 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Jan 30 17:41:17.988572 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 30 17:41:17.989393 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 17:41:17.993460 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 17:41:17.993769 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 17:41:18.004567 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 30 17:41:18.006346 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 17:41:18.006545 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 17:41:18.008723 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 17:41:18.009120 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 17:41:18.022387 systemd[1]: Finished ensure-sysext.service. Jan 30 17:41:18.032152 systemd-udevd[1392]: Using default interface naming scheme 'v255'. Jan 30 17:41:18.042549 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 30 17:41:18.045774 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 30 17:41:18.067371 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 30 17:41:18.067631 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 30 17:41:18.088577 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 30 17:41:18.090562 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Jan 30 17:41:18.091639 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 17:41:18.095278 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 17:41:18.095353 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 30 17:41:18.099598 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 30 17:41:18.102348 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 17:41:18.102588 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 17:41:18.104149 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 17:41:18.121481 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 30 17:41:18.122374 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 17:41:18.131431 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 30 17:41:18.134693 augenrules[1429]: No rules Jan 30 17:41:18.136564 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 30 17:41:18.167956 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 30 17:41:18.184107 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 30 17:41:18.304095 systemd-resolved[1390]: Positive Trust Anchors: Jan 30 17:41:18.306697 systemd-resolved[1390]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 30 17:41:18.306754 systemd-resolved[1390]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 30 17:41:18.319668 systemd-resolved[1390]: Using system hostname 'srv-8ltbt.gb1.brightbox.com'. Jan 30 17:41:18.325853 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 30 17:41:18.326794 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 30 17:41:18.344394 systemd-networkd[1424]: lo: Link UP Jan 30 17:41:18.345018 systemd-networkd[1424]: lo: Gained carrier Jan 30 17:41:18.345766 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 30 17:41:18.346947 systemd[1]: Reached target time-set.target - System Time Set. Jan 30 17:41:18.355391 systemd-networkd[1424]: Enumeration completed Jan 30 17:41:18.355412 systemd-timesyncd[1407]: No network connectivity, watching for changes. Jan 30 17:41:18.355513 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 30 17:41:18.356422 systemd[1]: Reached target network.target - Network. Jan 30 17:41:18.357591 systemd-networkd[1424]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 17:41:18.357598 systemd-networkd[1424]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jan 30 17:41:18.363128 systemd-networkd[1424]: eth0: Link UP Jan 30 17:41:18.363135 systemd-networkd[1424]: eth0: Gained carrier Jan 30 17:41:18.363154 systemd-networkd[1424]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 17:41:18.365440 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 30 17:41:18.379906 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1442) Jan 30 17:41:18.380281 systemd-networkd[1424]: eth0: DHCPv4 address 10.244.11.222/30, gateway 10.244.11.221 acquired from 10.244.11.221 Jan 30 17:41:18.382928 systemd-timesyncd[1407]: Network configuration changed, trying to establish connection. Jan 30 17:41:18.398443 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 30 17:41:18.400275 systemd-networkd[1424]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 17:41:18.487237 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 30 17:41:18.502282 kernel: mousedev: PS/2 mouse device common for all mice Jan 30 17:41:18.509270 kernel: ACPI: button: Power Button [PWRF] Jan 30 17:41:18.518868 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 30 17:41:18.527465 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 30 17:41:18.557770 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. 
Jan 30 17:41:18.577513 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Jan 30 17:41:18.577581 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 30 17:41:18.591255 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Jan 30 17:41:18.591549 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 30 17:41:18.595768 systemd-timesyncd[1407]: Contacted time server 162.159.200.1:123 (1.flatcar.pool.ntp.org). Jan 30 17:41:18.596069 systemd-timesyncd[1407]: Initial clock synchronization to Thu 2025-01-30 17:41:18.731817 UTC. Jan 30 17:41:18.648466 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 17:41:18.826526 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 17:41:18.848303 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 30 17:41:18.855487 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 30 17:41:18.884262 lvm[1474]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 30 17:41:18.917482 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 30 17:41:18.925650 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 30 17:41:18.926767 systemd[1]: Reached target sysinit.target - System Initialization. Jan 30 17:41:18.927739 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 30 17:41:18.928646 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 30 17:41:18.929995 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 30 17:41:18.930990 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. 
Jan 30 17:41:18.931862 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 30 17:41:18.932697 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 30 17:41:18.932768 systemd[1]: Reached target paths.target - Path Units. Jan 30 17:41:18.933546 systemd[1]: Reached target timers.target - Timer Units. Jan 30 17:41:18.935537 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 30 17:41:18.939062 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 30 17:41:18.945563 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 30 17:41:18.948517 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 30 17:41:18.950126 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 30 17:41:18.951069 systemd[1]: Reached target sockets.target - Socket Units. Jan 30 17:41:18.951796 systemd[1]: Reached target basic.target - Basic System. Jan 30 17:41:18.952561 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 30 17:41:18.952616 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 30 17:41:18.954354 systemd[1]: Starting containerd.service - containerd container runtime... Jan 30 17:41:18.960530 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 30 17:41:18.971258 lvm[1478]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 30 17:41:18.971506 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 30 17:41:18.974977 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 30 17:41:18.984425 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
Jan 30 17:41:18.985281 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 30 17:41:18.993483 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 30 17:41:18.999262 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 30 17:41:19.005425 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 30 17:41:19.013408 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 30 17:41:19.015168 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 30 17:41:19.018012 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 30 17:41:19.023804 jq[1482]: false Jan 30 17:41:19.029105 systemd[1]: Starting update-engine.service - Update Engine... Jan 30 17:41:19.034370 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 30 17:41:19.039305 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 30 17:41:19.041265 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 30 17:41:19.053740 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 30 17:41:19.054012 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 30 17:41:19.063287 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 30 17:41:19.072469 systemd[1]: motdgen.service: Deactivated successfully. Jan 30 17:41:19.073290 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Jan 30 17:41:19.113715 jq[1491]: true Jan 30 17:41:19.115239 extend-filesystems[1483]: Found loop4 Jan 30 17:41:19.119173 extend-filesystems[1483]: Found loop5 Jan 30 17:41:19.119173 extend-filesystems[1483]: Found loop6 Jan 30 17:41:19.119173 extend-filesystems[1483]: Found loop7 Jan 30 17:41:19.119173 extend-filesystems[1483]: Found vda Jan 30 17:41:19.119173 extend-filesystems[1483]: Found vda1 Jan 30 17:41:19.119173 extend-filesystems[1483]: Found vda2 Jan 30 17:41:19.119173 extend-filesystems[1483]: Found vda3 Jan 30 17:41:19.119173 extend-filesystems[1483]: Found usr Jan 30 17:41:19.119173 extend-filesystems[1483]: Found vda4 Jan 30 17:41:19.119173 extend-filesystems[1483]: Found vda6 Jan 30 17:41:19.119173 extend-filesystems[1483]: Found vda7 Jan 30 17:41:19.119173 extend-filesystems[1483]: Found vda9 Jan 30 17:41:19.119173 extend-filesystems[1483]: Checking size of /dev/vda9 Jan 30 17:41:19.156339 update_engine[1490]: I20250130 17:41:19.117545 1490 main.cc:92] Flatcar Update Engine starting Jan 30 17:41:19.156339 update_engine[1490]: I20250130 17:41:19.130302 1490 update_check_scheduler.cc:74] Next update check in 6m54s Jan 30 17:41:19.124466 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 30 17:41:19.120611 dbus-daemon[1481]: [system] SELinux support is enabled Jan 30 17:41:19.132634 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 30 17:41:19.126764 dbus-daemon[1481]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1424 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 30 17:41:19.160627 jq[1510]: true Jan 30 17:41:19.132675 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
Jan 30 17:41:19.133598 dbus-daemon[1481]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 30 17:41:19.136397 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 30 17:41:19.136425 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 30 17:41:19.137349 systemd[1]: Started update-engine.service - Update Engine. Jan 30 17:41:19.139050 (ntainerd)[1508]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 30 17:41:19.154422 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jan 30 17:41:19.162641 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 30 17:41:19.180397 extend-filesystems[1483]: Resized partition /dev/vda9 Jan 30 17:41:19.190690 extend-filesystems[1520]: resize2fs 1.47.1 (20-May-2024) Jan 30 17:41:19.202734 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks Jan 30 17:41:19.260719 systemd-logind[1489]: Watching system buttons on /dev/input/event2 (Power Button) Jan 30 17:41:19.260762 systemd-logind[1489]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 30 17:41:19.263450 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1439) Jan 30 17:41:19.284332 systemd-logind[1489]: New seat seat0. Jan 30 17:41:19.292141 systemd[1]: Started systemd-logind.service - User Login Management. Jan 30 17:41:19.413306 bash[1534]: Updated "/home/core/.ssh/authorized_keys" Jan 30 17:41:19.421037 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 30 17:41:19.439422 systemd-networkd[1424]: eth0: Gained IPv6LL Jan 30 17:41:19.457529 systemd[1]: Starting sshkeys.service... 
Jan 30 17:41:19.466233 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 30 17:41:19.468552 systemd[1]: Reached target network-online.target - Network is Online. Jan 30 17:41:19.481492 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 17:41:19.500623 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 30 17:41:19.559834 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 30 17:41:19.574180 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 30 17:41:19.576734 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Jan 30 17:41:19.602946 extend-filesystems[1520]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 30 17:41:19.602946 extend-filesystems[1520]: old_desc_blocks = 1, new_desc_blocks = 8 Jan 30 17:41:19.602946 extend-filesystems[1520]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Jan 30 17:41:19.614635 extend-filesystems[1483]: Resized filesystem in /dev/vda9 Jan 30 17:41:19.606386 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 30 17:41:19.616941 dbus-daemon[1481]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 30 17:41:19.606690 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 30 17:41:19.617160 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jan 30 17:41:19.618475 dbus-daemon[1481]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=1514 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 30 17:41:19.632425 systemd[1]: Starting polkit.service - Authorization Manager... 
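The extend-filesystems pass above grows /dev/vda9 online from 1617920 to 15121403 4k blocks. As a sanity check on those figures (not part of the boot flow itself), the block counts translate to sizes like this:

```python
# Convert the block counts resize2fs logged for /dev/vda9 into sizes.
# BLOCK = 4096 matches the "(4k) blocks" wording in the log above.
BLOCK = 4096

def blocks_to_gib(blocks: int) -> float:
    """Return the size in GiB for a count of 4 KiB filesystem blocks."""
    return blocks * BLOCK / 2**30

old_gib = blocks_to_gib(1_617_920)   # size before the online resize
new_gib = blocks_to_gib(15_121_403)  # size after the online resize
print(f"{old_gib:.2f} GiB -> {new_gib:.2f} GiB")
```

So the root filesystem grows from roughly 6.2 GiB to about 57.7 GiB, filling the enlarged vda9 partition while mounted at /.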
Jan 30 17:41:19.644227 containerd[1508]: time="2025-01-30T17:41:19.643579295Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 30 17:41:19.649505 locksmithd[1515]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 30 17:41:19.671020 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 30 17:41:19.678398 polkitd[1560]: Started polkitd version 121 Jan 30 17:41:19.695697 polkitd[1560]: Loading rules from directory /etc/polkit-1/rules.d Jan 30 17:41:19.695801 polkitd[1560]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 30 17:41:19.700779 polkitd[1560]: Finished loading, compiling and executing 2 rules Jan 30 17:41:19.701463 dbus-daemon[1481]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 30 17:41:19.703462 systemd[1]: Started polkit.service - Authorization Manager. Jan 30 17:41:19.705044 polkitd[1560]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 30 17:41:19.719003 containerd[1508]: time="2025-01-30T17:41:19.718879768Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 30 17:41:19.725147 containerd[1508]: time="2025-01-30T17:41:19.723226611Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 30 17:41:19.725147 containerd[1508]: time="2025-01-30T17:41:19.723280414Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 30 17:41:19.725147 containerd[1508]: time="2025-01-30T17:41:19.723314668Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." 
type=io.containerd.internal.v1 Jan 30 17:41:19.725147 containerd[1508]: time="2025-01-30T17:41:19.723593781Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 30 17:41:19.725147 containerd[1508]: time="2025-01-30T17:41:19.723644364Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 30 17:41:19.725147 containerd[1508]: time="2025-01-30T17:41:19.723770346Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 17:41:19.725147 containerd[1508]: time="2025-01-30T17:41:19.723794756Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 30 17:41:19.725147 containerd[1508]: time="2025-01-30T17:41:19.724546700Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 17:41:19.725147 containerd[1508]: time="2025-01-30T17:41:19.724574535Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 30 17:41:19.725147 containerd[1508]: time="2025-01-30T17:41:19.724596260Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 17:41:19.725147 containerd[1508]: time="2025-01-30T17:41:19.724613004Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 30 17:41:19.725772 containerd[1508]: time="2025-01-30T17:41:19.724760296Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." 
type=io.containerd.snapshotter.v1 Jan 30 17:41:19.725772 containerd[1508]: time="2025-01-30T17:41:19.725128828Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 30 17:41:19.725772 containerd[1508]: time="2025-01-30T17:41:19.725412270Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 17:41:19.725772 containerd[1508]: time="2025-01-30T17:41:19.725437949Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 30 17:41:19.725772 containerd[1508]: time="2025-01-30T17:41:19.725591790Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 30 17:41:19.725772 containerd[1508]: time="2025-01-30T17:41:19.725676972Z" level=info msg="metadata content store policy set" policy=shared Jan 30 17:41:19.733220 systemd-hostnamed[1514]: Hostname set to (static) Jan 30 17:41:19.740079 sshd_keygen[1513]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 30 17:41:19.742847 containerd[1508]: time="2025-01-30T17:41:19.742047570Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 30 17:41:19.742847 containerd[1508]: time="2025-01-30T17:41:19.742178245Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 30 17:41:19.742847 containerd[1508]: time="2025-01-30T17:41:19.742283891Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 30 17:41:19.742847 containerd[1508]: time="2025-01-30T17:41:19.742349083Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." 
type=io.containerd.streaming.v1 Jan 30 17:41:19.742847 containerd[1508]: time="2025-01-30T17:41:19.742401504Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 30 17:41:19.742847 containerd[1508]: time="2025-01-30T17:41:19.742620562Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 30 17:41:19.744897 containerd[1508]: time="2025-01-30T17:41:19.743741244Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 30 17:41:19.744897 containerd[1508]: time="2025-01-30T17:41:19.743957869Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 30 17:41:19.744897 containerd[1508]: time="2025-01-30T17:41:19.743985991Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 30 17:41:19.744897 containerd[1508]: time="2025-01-30T17:41:19.744007051Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 30 17:41:19.744897 containerd[1508]: time="2025-01-30T17:41:19.744031685Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 30 17:41:19.744897 containerd[1508]: time="2025-01-30T17:41:19.744059948Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 30 17:41:19.744897 containerd[1508]: time="2025-01-30T17:41:19.744090932Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 30 17:41:19.744897 containerd[1508]: time="2025-01-30T17:41:19.744117143Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." 
type=io.containerd.service.v1 Jan 30 17:41:19.744897 containerd[1508]: time="2025-01-30T17:41:19.744139574Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 30 17:41:19.744897 containerd[1508]: time="2025-01-30T17:41:19.744161990Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 30 17:41:19.744897 containerd[1508]: time="2025-01-30T17:41:19.744209622Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 30 17:41:19.744897 containerd[1508]: time="2025-01-30T17:41:19.744234278Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 30 17:41:19.744897 containerd[1508]: time="2025-01-30T17:41:19.744275977Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 30 17:41:19.744897 containerd[1508]: time="2025-01-30T17:41:19.744302245Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 30 17:41:19.746267 containerd[1508]: time="2025-01-30T17:41:19.744322969Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 30 17:41:19.746267 containerd[1508]: time="2025-01-30T17:41:19.744344464Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 30 17:41:19.746267 containerd[1508]: time="2025-01-30T17:41:19.744370494Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 30 17:41:19.746267 containerd[1508]: time="2025-01-30T17:41:19.744392925Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 30 17:41:19.746267 containerd[1508]: time="2025-01-30T17:41:19.744413230Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." 
type=io.containerd.grpc.v1 Jan 30 17:41:19.746267 containerd[1508]: time="2025-01-30T17:41:19.744433863Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 30 17:41:19.746267 containerd[1508]: time="2025-01-30T17:41:19.744519447Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 30 17:41:19.746267 containerd[1508]: time="2025-01-30T17:41:19.744548614Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 30 17:41:19.746267 containerd[1508]: time="2025-01-30T17:41:19.744568374Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 30 17:41:19.746267 containerd[1508]: time="2025-01-30T17:41:19.744595974Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 30 17:41:19.746267 containerd[1508]: time="2025-01-30T17:41:19.744618019Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 30 17:41:19.746267 containerd[1508]: time="2025-01-30T17:41:19.744640470Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 30 17:41:19.746267 containerd[1508]: time="2025-01-30T17:41:19.744684608Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 30 17:41:19.746267 containerd[1508]: time="2025-01-30T17:41:19.744710226Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 30 17:41:19.746267 containerd[1508]: time="2025-01-30T17:41:19.744728737Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 30 17:41:19.746781 containerd[1508]: time="2025-01-30T17:41:19.744824347Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
type=io.containerd.tracing.processor.v1 Jan 30 17:41:19.746781 containerd[1508]: time="2025-01-30T17:41:19.744855378Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 30 17:41:19.746781 containerd[1508]: time="2025-01-30T17:41:19.744876884Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 30 17:41:19.746781 containerd[1508]: time="2025-01-30T17:41:19.744897581Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 30 17:41:19.746781 containerd[1508]: time="2025-01-30T17:41:19.744914586Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 30 17:41:19.746781 containerd[1508]: time="2025-01-30T17:41:19.744941855Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 30 17:41:19.746781 containerd[1508]: time="2025-01-30T17:41:19.744971903Z" level=info msg="NRI interface is disabled by configuration." Jan 30 17:41:19.746781 containerd[1508]: time="2025-01-30T17:41:19.744991398Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 30 17:41:19.747090 containerd[1508]: time="2025-01-30T17:41:19.745410740Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 30 17:41:19.747090 containerd[1508]: time="2025-01-30T17:41:19.745497728Z" level=info msg="Connect containerd service" Jan 30 17:41:19.747090 containerd[1508]: time="2025-01-30T17:41:19.745570450Z" level=info msg="using legacy CRI server" Jan 30 17:41:19.747090 containerd[1508]: time="2025-01-30T17:41:19.745593100Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 30 17:41:19.747090 containerd[1508]: time="2025-01-30T17:41:19.745763413Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 30 17:41:19.750693 containerd[1508]: time="2025-01-30T17:41:19.750032352Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 30 17:41:19.750693 containerd[1508]: time="2025-01-30T17:41:19.750233787Z" level=info msg="Start subscribing containerd event" Jan 30 17:41:19.751789 containerd[1508]: time="2025-01-30T17:41:19.751754315Z" level=info msg="Start recovering state" Jan 30 17:41:19.751947 containerd[1508]: time="2025-01-30T17:41:19.751897928Z" level=info msg="Start event monitor" Jan 30 17:41:19.752029 containerd[1508]: time="2025-01-30T17:41:19.751957747Z" level=info msg="Start 
snapshots syncer" Jan 30 17:41:19.752029 containerd[1508]: time="2025-01-30T17:41:19.751987211Z" level=info msg="Start cni network conf syncer for default" Jan 30 17:41:19.752029 containerd[1508]: time="2025-01-30T17:41:19.752003516Z" level=info msg="Start streaming server" Jan 30 17:41:19.753991 containerd[1508]: time="2025-01-30T17:41:19.753751784Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 30 17:41:19.753991 containerd[1508]: time="2025-01-30T17:41:19.753854644Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 30 17:41:19.756838 containerd[1508]: time="2025-01-30T17:41:19.756776518Z" level=info msg="containerd successfully booted in 0.117551s" Jan 30 17:41:19.756891 systemd[1]: Started containerd.service - containerd container runtime. Jan 30 17:41:19.788540 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 30 17:41:19.803679 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 30 17:41:19.811889 systemd[1]: issuegen.service: Deactivated successfully. Jan 30 17:41:19.812175 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 30 17:41:19.824022 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 30 17:41:19.837093 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 30 17:41:19.849856 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 30 17:41:19.858704 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 30 17:41:19.860981 systemd[1]: Reached target getty.target - Login Prompts. Jan 30 17:41:20.673462 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
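containerd's self-reported "successfully booted in 0.117551s" can be loosely cross-checked against the journal timestamps of its first startup message and the final "serving..." message. The two clocks are not the same (containerd's internal timer starts before its first line reaches the journal), so only rough agreement is expected:

```python
from datetime import datetime

# Journal timestamps (truncated to microseconds) of the "starting containerd"
# entry and the "serving... /run/containerd/containerd.sock" entry above.
fmt = "%H:%M:%S.%f"
started = datetime.strptime("17:41:19.643579", fmt)
serving = datetime.strptime("17:41:19.753854", fmt)

elapsed = (serving - started).total_seconds()
print(f"{elapsed:.3f}s between first and last containerd startup message")
```

That yields about 0.110 s, consistent with the 0.117551 s containerd measured internally.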
Jan 30 17:41:20.678173 (kubelet)[1598]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 17:41:20.928372 systemd-networkd[1424]: eth0: Ignoring DHCPv6 address 2a02:1348:17d:2f7:24:19ff:fef4:bde/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:17d:2f7:24:19ff:fef4:bde/64 assigned by NDisc. Jan 30 17:41:20.928387 systemd-networkd[1424]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Jan 30 17:41:21.292048 kubelet[1598]: E0130 17:41:21.291798 1598 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 17:41:21.294504 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 17:41:21.294785 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 17:41:21.295454 systemd[1]: kubelet.service: Consumed 1.067s CPU time. Jan 30 17:41:23.938017 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 30 17:41:23.944616 systemd[1]: Started sshd@0-10.244.11.222:22-139.178.89.65:47246.service - OpenSSH per-connection server daemon (139.178.89.65:47246). Jan 30 17:41:24.849117 sshd[1610]: Accepted publickey for core from 139.178.89.65 port 47246 ssh2: RSA SHA256:mi9Ffww0GZqlgqqsrskunsrI33jB/1uB1d3dx4wABvw Jan 30 17:41:24.852605 sshd[1610]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 17:41:24.882811 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 30 17:41:24.888426 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... 
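The kubelet failure above comes down to a missing /var/lib/kubelet/config.yaml: that file is normally written by `kubeadm init`/`kubeadm join`, so until the node is joined the unit exits with status 1 and systemd keeps rescheduling restarts (the counter reaches 1 at 17:41:31 below). A minimal sketch of the same precondition, not kubelet's actual code:

```python
from pathlib import Path

# Path taken from the run.go error message in the log above.
KUBELET_CONFIG = Path("/var/lib/kubelet/config.yaml")

def kubelet_would_start(config: Path = KUBELET_CONFIG) -> bool:
    """True once something (normally kubeadm) has written the kubelet
    config file; until then the service exits 1 and systemd retries."""
    return config.is_file()

print(kubelet_would_start(Path("/nonexistent/config.yaml")))  # False
```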
Jan 30 17:41:24.896314 systemd-logind[1489]: New session 1 of user core. Jan 30 17:41:25.003288 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 30 17:41:25.004905 login[1589]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 30 17:41:25.010067 login[1590]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 30 17:41:25.018986 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 30 17:41:25.024752 (systemd)[1618]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 30 17:41:25.025553 systemd-logind[1489]: New session 2 of user core. Jan 30 17:41:25.031642 systemd-logind[1489]: New session 3 of user core. Jan 30 17:41:25.161416 systemd[1618]: Queued start job for default target default.target. Jan 30 17:41:25.179214 systemd[1618]: Created slice app.slice - User Application Slice. Jan 30 17:41:25.179261 systemd[1618]: Reached target paths.target - Paths. Jan 30 17:41:25.179285 systemd[1618]: Reached target timers.target - Timers. Jan 30 17:41:25.181347 systemd[1618]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 30 17:41:25.199647 systemd[1618]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 30 17:41:25.199902 systemd[1618]: Reached target sockets.target - Sockets. Jan 30 17:41:25.199929 systemd[1618]: Reached target basic.target - Basic System. Jan 30 17:41:25.200067 systemd[1618]: Reached target default.target - Main User Target. Jan 30 17:41:25.200142 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 30 17:41:25.200148 systemd[1618]: Startup finished in 163ms. Jan 30 17:41:25.212595 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 30 17:41:25.214085 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 30 17:41:25.215550 systemd[1]: Started session-3.scope - Session 3 of User core. 
Jan 30 17:41:25.859666 systemd[1]: Started sshd@1-10.244.11.222:22-139.178.89.65:47260.service - OpenSSH per-connection server daemon (139.178.89.65:47260). Jan 30 17:41:26.119569 coreos-metadata[1480]: Jan 30 17:41:26.119 WARN failed to locate config-drive, using the metadata service API instead Jan 30 17:41:26.151923 coreos-metadata[1480]: Jan 30 17:41:26.151 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Jan 30 17:41:26.159531 coreos-metadata[1480]: Jan 30 17:41:26.159 INFO Fetch failed with 404: resource not found Jan 30 17:41:26.159531 coreos-metadata[1480]: Jan 30 17:41:26.159 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jan 30 17:41:26.160349 coreos-metadata[1480]: Jan 30 17:41:26.160 INFO Fetch successful Jan 30 17:41:26.160455 coreos-metadata[1480]: Jan 30 17:41:26.160 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Jan 30 17:41:26.171669 coreos-metadata[1480]: Jan 30 17:41:26.171 INFO Fetch successful Jan 30 17:41:26.171860 coreos-metadata[1480]: Jan 30 17:41:26.171 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Jan 30 17:41:26.184516 coreos-metadata[1480]: Jan 30 17:41:26.184 INFO Fetch successful Jan 30 17:41:26.184516 coreos-metadata[1480]: Jan 30 17:41:26.184 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Jan 30 17:41:26.198510 coreos-metadata[1480]: Jan 30 17:41:26.198 INFO Fetch successful Jan 30 17:41:26.198936 coreos-metadata[1480]: Jan 30 17:41:26.198 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Jan 30 17:41:26.213053 coreos-metadata[1480]: Jan 30 17:41:26.212 INFO Fetch successful Jan 30 17:41:26.249276 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 30 17:41:26.251054 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
Jan 30 17:41:26.719610 coreos-metadata[1550]: Jan 30 17:41:26.719 WARN failed to locate config-drive, using the metadata service API instead Jan 30 17:41:26.744102 coreos-metadata[1550]: Jan 30 17:41:26.744 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Jan 30 17:41:26.746681 sshd[1651]: Accepted publickey for core from 139.178.89.65 port 47260 ssh2: RSA SHA256:mi9Ffww0GZqlgqqsrskunsrI33jB/1uB1d3dx4wABvw Jan 30 17:41:26.749210 sshd[1651]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 17:41:26.757778 systemd-logind[1489]: New session 4 of user core. Jan 30 17:41:26.768581 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 30 17:41:26.769867 coreos-metadata[1550]: Jan 30 17:41:26.769 INFO Fetch successful Jan 30 17:41:26.770284 coreos-metadata[1550]: Jan 30 17:41:26.770 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 30 17:41:26.795511 coreos-metadata[1550]: Jan 30 17:41:26.795 INFO Fetch successful Jan 30 17:41:26.798714 unknown[1550]: wrote ssh authorized keys file for user: core Jan 30 17:41:26.827744 update-ssh-keys[1664]: Updated "/home/core/.ssh/authorized_keys" Jan 30 17:41:26.828759 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 30 17:41:26.832065 systemd[1]: Finished sshkeys.service. Jan 30 17:41:26.835315 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 30 17:41:26.835522 systemd[1]: Startup finished in 1.456s (kernel) + 13.417s (initrd) + 11.710s (userspace) = 26.584s. Jan 30 17:41:27.370660 sshd[1651]: pam_unix(sshd:session): session closed for user core Jan 30 17:41:27.375845 systemd[1]: sshd@1-10.244.11.222:22-139.178.89.65:47260.service: Deactivated successfully. Jan 30 17:41:27.378951 systemd[1]: session-4.scope: Deactivated successfully. Jan 30 17:41:27.380873 systemd-logind[1489]: Session 4 logged out. Waiting for processes to exit. 
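The coreos-metadata entries above show its fallback order: it first looks for a config-drive, then falls back to the 169.254.169.254 metadata API, where the OpenStack JSON endpoint 404s and the EC2-style /latest/meta-data/ paths succeed. A hedged sketch of that try-in-order logic, with `fetch` standing in for the real HTTP client (the URLs are the ones in the log):

```python
# Sketch of the 404-then-success fallback coreos-metadata logs above;
# not the agent's real implementation.
OPENSTACK_JSON = "http://169.254.169.254/openstack/2012-08-10/meta_data.json"
EC2_HOSTNAME = "http://169.254.169.254/latest/meta-data/hostname"

def first_working(fetch, urls):
    """Return (url, body) for the first endpoint that answers.

    fetch(url) returns the body, or None to model a 404
    ("resource not found" in the log)."""
    for url in urls:
        body = fetch(url)
        if body is not None:
            return url, body
    raise RuntimeError("no metadata endpoint answered")

# Stub reproducing the log: the OpenStack path 404s, the EC2 path works.
stub = {EC2_HOSTNAME: "test-hostname"}.get
url, body = first_working(stub, [OPENSTACK_JSON, EC2_HOSTNAME])
print(url.rsplit("/", 1)[-1], "->", body)
```

The same loop then repeats for instance-id, instance-type, local-ipv4, and public-ipv4, each fetched successfully from the EC2-style tree.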
Jan 30 17:41:27.382603 systemd-logind[1489]: Removed session 4. Jan 30 17:41:27.534673 systemd[1]: Started sshd@2-10.244.11.222:22-139.178.89.65:47264.service - OpenSSH per-connection server daemon (139.178.89.65:47264). Jan 30 17:41:28.421548 sshd[1671]: Accepted publickey for core from 139.178.89.65 port 47264 ssh2: RSA SHA256:mi9Ffww0GZqlgqqsrskunsrI33jB/1uB1d3dx4wABvw Jan 30 17:41:28.423953 sshd[1671]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 17:41:28.431303 systemd-logind[1489]: New session 5 of user core. Jan 30 17:41:28.439458 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 30 17:41:29.055514 sshd[1671]: pam_unix(sshd:session): session closed for user core Jan 30 17:41:29.068310 systemd[1]: sshd@2-10.244.11.222:22-139.178.89.65:47264.service: Deactivated successfully. Jan 30 17:41:29.072010 systemd[1]: session-5.scope: Deactivated successfully. Jan 30 17:41:29.073786 systemd-logind[1489]: Session 5 logged out. Waiting for processes to exit. Jan 30 17:41:29.075840 systemd-logind[1489]: Removed session 5. Jan 30 17:41:29.198436 systemd[1]: Started sshd@3-10.244.11.222:22-139.178.89.65:47276.service - OpenSSH per-connection server daemon (139.178.89.65:47276). Jan 30 17:41:30.139339 sshd[1678]: Accepted publickey for core from 139.178.89.65 port 47276 ssh2: RSA SHA256:mi9Ffww0GZqlgqqsrskunsrI33jB/1uB1d3dx4wABvw Jan 30 17:41:30.142433 sshd[1678]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 17:41:30.153087 systemd-logind[1489]: New session 6 of user core. Jan 30 17:41:30.164577 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 30 17:41:30.761632 sshd[1678]: pam_unix(sshd:session): session closed for user core Jan 30 17:41:30.765849 systemd[1]: sshd@3-10.244.11.222:22-139.178.89.65:47276.service: Deactivated successfully. Jan 30 17:41:30.768102 systemd[1]: session-6.scope: Deactivated successfully. 
Jan 30 17:41:30.769885 systemd-logind[1489]: Session 6 logged out. Waiting for processes to exit. Jan 30 17:41:30.771300 systemd-logind[1489]: Removed session 6. Jan 30 17:41:30.928914 systemd[1]: Started sshd@4-10.244.11.222:22-139.178.89.65:47286.service - OpenSSH per-connection server daemon (139.178.89.65:47286). Jan 30 17:41:31.369362 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 30 17:41:31.375428 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 17:41:31.574868 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 17:41:31.594772 (kubelet)[1695]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 17:41:31.691363 kubelet[1695]: E0130 17:41:31.691050 1695 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 17:41:31.697868 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 17:41:31.698439 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 17:41:31.813812 sshd[1685]: Accepted publickey for core from 139.178.89.65 port 47286 ssh2: RSA SHA256:mi9Ffww0GZqlgqqsrskunsrI33jB/1uB1d3dx4wABvw Jan 30 17:41:31.816754 sshd[1685]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 17:41:31.826006 systemd-logind[1489]: New session 7 of user core. Jan 30 17:41:31.829731 systemd[1]: Started session-7.scope - Session 7 of User core. 
Jan 30 17:41:32.306362 sudo[1703]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jan 30 17:41:32.306866 sudo[1703]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 30 17:41:32.323738 sudo[1703]: pam_unix(sudo:session): session closed for user root
Jan 30 17:41:32.468823 sshd[1685]: pam_unix(sshd:session): session closed for user core
Jan 30 17:41:32.473595 systemd[1]: sshd@4-10.244.11.222:22-139.178.89.65:47286.service: Deactivated successfully.
Jan 30 17:41:32.476085 systemd[1]: session-7.scope: Deactivated successfully.
Jan 30 17:41:32.477874 systemd-logind[1489]: Session 7 logged out. Waiting for processes to exit.
Jan 30 17:41:32.479862 systemd-logind[1489]: Removed session 7.
Jan 30 17:41:32.631588 systemd[1]: Started sshd@5-10.244.11.222:22-139.178.89.65:56070.service - OpenSSH per-connection server daemon (139.178.89.65:56070).
Jan 30 17:41:33.516309 sshd[1708]: Accepted publickey for core from 139.178.89.65 port 56070 ssh2: RSA SHA256:mi9Ffww0GZqlgqqsrskunsrI33jB/1uB1d3dx4wABvw
Jan 30 17:41:33.518776 sshd[1708]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 17:41:33.526577 systemd-logind[1489]: New session 8 of user core.
Jan 30 17:41:33.534425 systemd[1]: Started session-8.scope - Session 8 of User core.
Jan 30 17:41:33.996080 sudo[1712]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jan 30 17:41:33.996612 sudo[1712]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 30 17:41:34.002471 sudo[1712]: pam_unix(sudo:session): session closed for user root
Jan 30 17:41:34.010920 sudo[1711]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Jan 30 17:41:34.011405 sudo[1711]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 30 17:41:34.035655 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Jan 30 17:41:34.038351 auditctl[1715]: No rules
Jan 30 17:41:34.038856 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 30 17:41:34.039169 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Jan 30 17:41:34.046873 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jan 30 17:41:34.089302 augenrules[1733]: No rules
Jan 30 17:41:34.090298 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jan 30 17:41:34.092153 sudo[1711]: pam_unix(sudo:session): session closed for user root
Jan 30 17:41:34.236615 sshd[1708]: pam_unix(sshd:session): session closed for user core
Jan 30 17:41:34.241391 systemd-logind[1489]: Session 8 logged out. Waiting for processes to exit.
Jan 30 17:41:34.242708 systemd[1]: sshd@5-10.244.11.222:22-139.178.89.65:56070.service: Deactivated successfully.
Jan 30 17:41:34.245209 systemd[1]: session-8.scope: Deactivated successfully.
Jan 30 17:41:34.246960 systemd-logind[1489]: Removed session 8.
Jan 30 17:41:34.401690 systemd[1]: Started sshd@6-10.244.11.222:22-139.178.89.65:56078.service - OpenSSH per-connection server daemon (139.178.89.65:56078).
Jan 30 17:41:35.287121 sshd[1741]: Accepted publickey for core from 139.178.89.65 port 56078 ssh2: RSA SHA256:mi9Ffww0GZqlgqqsrskunsrI33jB/1uB1d3dx4wABvw
Jan 30 17:41:35.289516 sshd[1741]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 17:41:35.296995 systemd-logind[1489]: New session 9 of user core.
Jan 30 17:41:35.302678 systemd[1]: Started session-9.scope - Session 9 of User core.
Jan 30 17:41:35.847046 sudo[1744]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jan 30 17:41:35.848084 sudo[1744]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 30 17:41:36.581251 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 30 17:41:36.589603 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 30 17:41:36.636907 systemd[1]: Reloading requested from client PID 1778 ('systemctl') (unit session-9.scope)...
Jan 30 17:41:36.636967 systemd[1]: Reloading...
Jan 30 17:41:36.791241 zram_generator::config[1820]: No configuration found.
Jan 30 17:41:36.969735 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 30 17:41:37.078756 systemd[1]: Reloading finished in 441 ms.
Jan 30 17:41:37.146652 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jan 30 17:41:37.146804 systemd[1]: kubelet.service: Failed with result 'signal'.
Jan 30 17:41:37.147323 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 30 17:41:37.155664 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 30 17:41:37.308487 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 30 17:41:37.321758 (kubelet)[1883]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 30 17:41:37.424970 kubelet[1883]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 30 17:41:37.427201 kubelet[1883]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jan 30 17:41:37.427201 kubelet[1883]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 30 17:41:37.427201 kubelet[1883]: I0130 17:41:37.425758 1883 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 30 17:41:38.050577 kubelet[1883]: I0130 17:41:38.050441 1883 server.go:520] "Kubelet version" kubeletVersion="v1.32.0"
Jan 30 17:41:38.050577 kubelet[1883]: I0130 17:41:38.050528 1883 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 30 17:41:38.051058 kubelet[1883]: I0130 17:41:38.051029 1883 server.go:954] "Client rotation is on, will bootstrap in background"
Jan 30 17:41:38.082991 kubelet[1883]: I0130 17:41:38.082939 1883 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 30 17:41:38.096430 kubelet[1883]: E0130 17:41:38.096341 1883 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jan 30 17:41:38.096645 kubelet[1883]: I0130 17:41:38.096623 1883 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Jan 30 17:41:38.103406 kubelet[1883]: I0130 17:41:38.103373 1883 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 30 17:41:38.105695 kubelet[1883]: I0130 17:41:38.105644 1883 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 30 17:41:38.106054 kubelet[1883]: I0130 17:41:38.105793 1883 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.244.11.222","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 30 17:41:38.107038 kubelet[1883]: I0130 17:41:38.106404 1883 topology_manager.go:138] "Creating topology manager with none policy"
Jan 30 17:41:38.107038 kubelet[1883]: I0130 17:41:38.106454 1883 container_manager_linux.go:304] "Creating device plugin manager"
Jan 30 17:41:38.107038 kubelet[1883]: I0130 17:41:38.106721 1883 state_mem.go:36] "Initialized new in-memory state store"
Jan 30 17:41:38.111213 kubelet[1883]: I0130 17:41:38.110358 1883 kubelet.go:446] "Attempting to sync node with API server"
Jan 30 17:41:38.111213 kubelet[1883]: I0130 17:41:38.110865 1883 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 30 17:41:38.111213 kubelet[1883]: I0130 17:41:38.110920 1883 kubelet.go:352] "Adding apiserver pod source"
Jan 30 17:41:38.111213 kubelet[1883]: I0130 17:41:38.110943 1883 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 30 17:41:38.111893 kubelet[1883]: E0130 17:41:38.111847 1883 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 17:41:38.111962 kubelet[1883]: E0130 17:41:38.111942 1883 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 17:41:38.114869 kubelet[1883]: I0130 17:41:38.114829 1883 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Jan 30 17:41:38.115507 kubelet[1883]: I0130 17:41:38.115442 1883 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 30 17:41:38.117345 kubelet[1883]: W0130 17:41:38.116256 1883 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jan 30 17:41:38.119210 kubelet[1883]: I0130 17:41:38.118757 1883 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jan 30 17:41:38.119210 kubelet[1883]: W0130 17:41:38.118811 1883 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Jan 30 17:41:38.119210 kubelet[1883]: E0130 17:41:38.118853 1883 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
Jan 30 17:41:38.119210 kubelet[1883]: I0130 17:41:38.118814 1883 server.go:1287] "Started kubelet"
Jan 30 17:41:38.119210 kubelet[1883]: I0130 17:41:38.118891 1883 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 30 17:41:38.119210 kubelet[1883]: W0130 17:41:38.118977 1883 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "10.244.11.222" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Jan 30 17:41:38.119210 kubelet[1883]: E0130 17:41:38.119114 1883 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"10.244.11.222\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
Jan 30 17:41:38.120670 kubelet[1883]: I0130 17:41:38.120598 1883 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 30 17:41:38.123135 kubelet[1883]: I0130 17:41:38.122889 1883 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 30 17:41:38.124777 kubelet[1883]: I0130 17:41:38.124718 1883 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Jan 30 17:41:38.127968 kubelet[1883]: I0130 17:41:38.127255 1883 server.go:490] "Adding debug handlers to kubelet server"
Jan 30 17:41:38.128947 kubelet[1883]: I0130 17:41:38.128909 1883 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jan 30 17:41:38.131215 kubelet[1883]: E0130 17:41:38.129468 1883 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.244.11.222\" not found"
Jan 30 17:41:38.131215 kubelet[1883]: I0130 17:41:38.130373 1883 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Jan 30 17:41:38.131215 kubelet[1883]: I0130 17:41:38.130468 1883 reconciler.go:26] "Reconciler: start to sync state"
Jan 30 17:41:38.131684 kubelet[1883]: I0130 17:41:38.131653 1883 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jan 30 17:41:38.142447 kubelet[1883]: I0130 17:41:38.142382 1883 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 30 17:41:38.145758 kubelet[1883]: E0130 17:41:38.145729 1883 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 30 17:41:38.147577 kubelet[1883]: I0130 17:41:38.147552 1883 factory.go:221] Registration of the containerd container factory successfully
Jan 30 17:41:38.147804 kubelet[1883]: I0130 17:41:38.147784 1883 factory.go:221] Registration of the systemd container factory successfully
Jan 30 17:41:38.178708 kubelet[1883]: E0130 17:41:38.178631 1883 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.244.11.222\" not found" node="10.244.11.222"
Jan 30 17:41:38.187977 kubelet[1883]: I0130 17:41:38.187952 1883 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jan 30 17:41:38.187977 kubelet[1883]: I0130 17:41:38.187974 1883 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jan 30 17:41:38.188124 kubelet[1883]: I0130 17:41:38.188018 1883 state_mem.go:36] "Initialized new in-memory state store"
Jan 30 17:41:38.189978 kubelet[1883]: I0130 17:41:38.189945 1883 policy_none.go:49] "None policy: Start"
Jan 30 17:41:38.190055 kubelet[1883]: I0130 17:41:38.190003 1883 memory_manager.go:186] "Starting memorymanager" policy="None"
Jan 30 17:41:38.190055 kubelet[1883]: I0130 17:41:38.190036 1883 state_mem.go:35] "Initializing new in-memory state store"
Jan 30 17:41:38.203832 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jan 30 17:41:38.216846 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jan 30 17:41:38.228257 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jan 30 17:41:38.229913 kubelet[1883]: E0130 17:41:38.229874 1883 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.244.11.222\" not found"
Jan 30 17:41:38.231874 kubelet[1883]: I0130 17:41:38.231848 1883 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 30 17:41:38.232420 kubelet[1883]: I0130 17:41:38.232399 1883 eviction_manager.go:189] "Eviction manager: starting control loop"
Jan 30 17:41:38.234240 kubelet[1883]: I0130 17:41:38.233110 1883 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 30 17:41:38.234951 kubelet[1883]: I0130 17:41:38.234930 1883 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 30 17:41:38.239254 kubelet[1883]: E0130 17:41:38.239231 1883 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jan 30 17:41:38.239440 kubelet[1883]: E0130 17:41:38.239405 1883 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.244.11.222\" not found"
Jan 30 17:41:38.250031 kubelet[1883]: I0130 17:41:38.249968 1883 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 30 17:41:38.251704 kubelet[1883]: I0130 17:41:38.251656 1883 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 30 17:41:38.252077 kubelet[1883]: I0130 17:41:38.251941 1883 status_manager.go:227] "Starting to sync pod status with apiserver"
Jan 30 17:41:38.252333 kubelet[1883]: I0130 17:41:38.252247 1883 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jan 30 17:41:38.253418 kubelet[1883]: I0130 17:41:38.252283 1883 kubelet.go:2388] "Starting kubelet main sync loop"
Jan 30 17:41:38.254385 kubelet[1883]: E0130 17:41:38.253689 1883 kubelet.go:2412] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Jan 30 17:41:38.336735 kubelet[1883]: I0130 17:41:38.336501 1883 kubelet_node_status.go:76] "Attempting to register node" node="10.244.11.222"
Jan 30 17:41:38.344225 kubelet[1883]: I0130 17:41:38.344116 1883 kubelet_node_status.go:79] "Successfully registered node" node="10.244.11.222"
Jan 30 17:41:38.344225 kubelet[1883]: E0130 17:41:38.344172 1883 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"10.244.11.222\": node \"10.244.11.222\" not found"
Jan 30 17:41:38.352290 kubelet[1883]: E0130 17:41:38.352137 1883 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.244.11.222\" not found"
Jan 30 17:41:38.453308 kubelet[1883]: E0130 17:41:38.453137 1883 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.244.11.222\" not found"
Jan 30 17:41:38.553950 kubelet[1883]: E0130 17:41:38.553858 1883 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.244.11.222\" not found"
Jan 30 17:41:38.654722 kubelet[1883]: E0130 17:41:38.654627 1883 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.244.11.222\" not found"
Jan 30 17:41:38.755697 kubelet[1883]: E0130 17:41:38.755621 1883 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.244.11.222\" not found"
Jan 30 17:41:38.846517 sudo[1744]: pam_unix(sudo:session): session closed for user root
Jan 30 17:41:38.856447 kubelet[1883]: E0130 17:41:38.856369 1883 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.244.11.222\" not found"
Jan 30 17:41:38.956793 kubelet[1883]: E0130 17:41:38.956533 1883 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.244.11.222\" not found"
Jan 30 17:41:38.992706 sshd[1741]: pam_unix(sshd:session): session closed for user core
Jan 30 17:41:38.997390 systemd[1]: sshd@6-10.244.11.222:22-139.178.89.65:56078.service: Deactivated successfully.
Jan 30 17:41:39.000847 systemd[1]: session-9.scope: Deactivated successfully.
Jan 30 17:41:39.003303 systemd-logind[1489]: Session 9 logged out. Waiting for processes to exit.
Jan 30 17:41:39.004972 systemd-logind[1489]: Removed session 9.
Jan 30 17:41:39.054337 kubelet[1883]: I0130 17:41:39.054240 1883 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Jan 30 17:41:39.054864 kubelet[1883]: W0130 17:41:39.054616 1883 reflector.go:492] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received
Jan 30 17:41:39.054864 kubelet[1883]: W0130 17:41:39.054737 1883 reflector.go:492] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received
Jan 30 17:41:39.057626 kubelet[1883]: E0130 17:41:39.057548 1883 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.244.11.222\" not found"
Jan 30 17:41:39.112381 kubelet[1883]: E0130 17:41:39.112260 1883 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 17:41:39.158116 kubelet[1883]: E0130 17:41:39.158030 1883 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.244.11.222\" not found"
Jan 30 17:41:39.258484 kubelet[1883]: E0130 17:41:39.258277 1883 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.244.11.222\" not found"
Jan 30 17:41:39.360838 kubelet[1883]: I0130 17:41:39.360792 1883 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24"
Jan 30 17:41:39.361865 containerd[1508]: time="2025-01-30T17:41:39.361664461Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jan 30 17:41:39.362583 kubelet[1883]: I0130 17:41:39.361990 1883 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24"
Jan 30 17:41:40.113512 kubelet[1883]: I0130 17:41:40.113262 1883 apiserver.go:52] "Watching apiserver"
Jan 30 17:41:40.114562 kubelet[1883]: E0130 17:41:40.113258 1883 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 17:41:40.136262 systemd[1]: Created slice kubepods-besteffort-podafa0bc2d_9ea5_4736_9180_e3562293e9d1.slice - libcontainer container kubepods-besteffort-podafa0bc2d_9ea5_4736_9180_e3562293e9d1.slice.
Jan 30 17:41:40.141224 kubelet[1883]: I0130 17:41:40.140256 1883 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Jan 30 17:41:40.144751 kubelet[1883]: I0130 17:41:40.144717 1883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/fab9ca69-b3fa-4ae4-8969-feb0ba4a7d45-hubble-tls\") pod \"cilium-qfbxv\" (UID: \"fab9ca69-b3fa-4ae4-8969-feb0ba4a7d45\") " pod="kube-system/cilium-qfbxv"
Jan 30 17:41:40.144934 kubelet[1883]: I0130 17:41:40.144899 1883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/afa0bc2d-9ea5-4736-9180-e3562293e9d1-lib-modules\") pod \"kube-proxy-66bqq\" (UID: \"afa0bc2d-9ea5-4736-9180-e3562293e9d1\") " pod="kube-system/kube-proxy-66bqq"
Jan 30 17:41:40.145060 kubelet[1883]: I0130 17:41:40.145036 1883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/fab9ca69-b3fa-4ae4-8969-feb0ba4a7d45-hostproc\") pod \"cilium-qfbxv\" (UID: \"fab9ca69-b3fa-4ae4-8969-feb0ba4a7d45\") " pod="kube-system/cilium-qfbxv"
Jan 30 17:41:40.145231 kubelet[1883]: I0130 17:41:40.145205 1883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/fab9ca69-b3fa-4ae4-8969-feb0ba4a7d45-clustermesh-secrets\") pod \"cilium-qfbxv\" (UID: \"fab9ca69-b3fa-4ae4-8969-feb0ba4a7d45\") " pod="kube-system/cilium-qfbxv"
Jan 30 17:41:40.145435 kubelet[1883]: I0130 17:41:40.145343 1883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/fab9ca69-b3fa-4ae4-8969-feb0ba4a7d45-host-proc-sys-net\") pod \"cilium-qfbxv\" (UID: \"fab9ca69-b3fa-4ae4-8969-feb0ba4a7d45\") " pod="kube-system/cilium-qfbxv"
Jan 30 17:41:40.146275 kubelet[1883]: I0130 17:41:40.146235 1883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/fab9ca69-b3fa-4ae4-8969-feb0ba4a7d45-host-proc-sys-kernel\") pod \"cilium-qfbxv\" (UID: \"fab9ca69-b3fa-4ae4-8969-feb0ba4a7d45\") " pod="kube-system/cilium-qfbxv"
Jan 30 17:41:40.146373 kubelet[1883]: I0130 17:41:40.146288 1883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/fab9ca69-b3fa-4ae4-8969-feb0ba4a7d45-cilium-cgroup\") pod \"cilium-qfbxv\" (UID: \"fab9ca69-b3fa-4ae4-8969-feb0ba4a7d45\") " pod="kube-system/cilium-qfbxv"
Jan 30 17:41:40.146373 kubelet[1883]: I0130 17:41:40.146318 1883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/fab9ca69-b3fa-4ae4-8969-feb0ba4a7d45-cni-path\") pod \"cilium-qfbxv\" (UID: \"fab9ca69-b3fa-4ae4-8969-feb0ba4a7d45\") " pod="kube-system/cilium-qfbxv"
Jan 30 17:41:40.146373 kubelet[1883]: I0130 17:41:40.146346 1883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fab9ca69-b3fa-4ae4-8969-feb0ba4a7d45-cilium-config-path\") pod \"cilium-qfbxv\" (UID: \"fab9ca69-b3fa-4ae4-8969-feb0ba4a7d45\") " pod="kube-system/cilium-qfbxv"
Jan 30 17:41:40.146373 kubelet[1883]: I0130 17:41:40.146372 1883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/fab9ca69-b3fa-4ae4-8969-feb0ba4a7d45-cilium-run\") pod \"cilium-qfbxv\" (UID: \"fab9ca69-b3fa-4ae4-8969-feb0ba4a7d45\") " pod="kube-system/cilium-qfbxv"
Jan 30 17:41:40.146566 kubelet[1883]: I0130 17:41:40.146404 1883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvd6x\" (UniqueName: \"kubernetes.io/projected/fab9ca69-b3fa-4ae4-8969-feb0ba4a7d45-kube-api-access-rvd6x\") pod \"cilium-qfbxv\" (UID: \"fab9ca69-b3fa-4ae4-8969-feb0ba4a7d45\") " pod="kube-system/cilium-qfbxv"
Jan 30 17:41:40.146566 kubelet[1883]: I0130 17:41:40.146447 1883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/afa0bc2d-9ea5-4736-9180-e3562293e9d1-kube-proxy\") pod \"kube-proxy-66bqq\" (UID: \"afa0bc2d-9ea5-4736-9180-e3562293e9d1\") " pod="kube-system/kube-proxy-66bqq"
Jan 30 17:41:40.146566 kubelet[1883]: I0130 17:41:40.146476 1883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/afa0bc2d-9ea5-4736-9180-e3562293e9d1-xtables-lock\") pod \"kube-proxy-66bqq\" (UID: \"afa0bc2d-9ea5-4736-9180-e3562293e9d1\") " pod="kube-system/kube-proxy-66bqq"
Jan 30 17:41:40.146566 kubelet[1883]: I0130 17:41:40.146509 1883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8f79\" (UniqueName: \"kubernetes.io/projected/afa0bc2d-9ea5-4736-9180-e3562293e9d1-kube-api-access-f8f79\") pod \"kube-proxy-66bqq\" (UID: \"afa0bc2d-9ea5-4736-9180-e3562293e9d1\") " pod="kube-system/kube-proxy-66bqq"
Jan 30 17:41:40.146566 kubelet[1883]: I0130 17:41:40.146537 1883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/fab9ca69-b3fa-4ae4-8969-feb0ba4a7d45-bpf-maps\") pod \"cilium-qfbxv\" (UID: \"fab9ca69-b3fa-4ae4-8969-feb0ba4a7d45\") " pod="kube-system/cilium-qfbxv"
Jan 30 17:41:40.146768 kubelet[1883]: I0130 17:41:40.146565 1883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fab9ca69-b3fa-4ae4-8969-feb0ba4a7d45-etc-cni-netd\") pod \"cilium-qfbxv\" (UID: \"fab9ca69-b3fa-4ae4-8969-feb0ba4a7d45\") " pod="kube-system/cilium-qfbxv"
Jan 30 17:41:40.146768 kubelet[1883]: I0130 17:41:40.146592 1883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fab9ca69-b3fa-4ae4-8969-feb0ba4a7d45-lib-modules\") pod \"cilium-qfbxv\" (UID: \"fab9ca69-b3fa-4ae4-8969-feb0ba4a7d45\") " pod="kube-system/cilium-qfbxv"
Jan 30 17:41:40.146768 kubelet[1883]: I0130 17:41:40.146620 1883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fab9ca69-b3fa-4ae4-8969-feb0ba4a7d45-xtables-lock\") pod \"cilium-qfbxv\" (UID: \"fab9ca69-b3fa-4ae4-8969-feb0ba4a7d45\") " pod="kube-system/cilium-qfbxv"
Jan 30 17:41:40.154814 systemd[1]: Created slice kubepods-burstable-podfab9ca69_b3fa_4ae4_8969_feb0ba4a7d45.slice - libcontainer container kubepods-burstable-podfab9ca69_b3fa_4ae4_8969_feb0ba4a7d45.slice.
Jan 30 17:41:40.456819 containerd[1508]: time="2025-01-30T17:41:40.456672388Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-66bqq,Uid:afa0bc2d-9ea5-4736-9180-e3562293e9d1,Namespace:kube-system,Attempt:0,}"
Jan 30 17:41:40.468519 containerd[1508]: time="2025-01-30T17:41:40.468468753Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qfbxv,Uid:fab9ca69-b3fa-4ae4-8969-feb0ba4a7d45,Namespace:kube-system,Attempt:0,}"
Jan 30 17:41:41.113999 kubelet[1883]: E0130 17:41:41.113922 1883 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 17:41:41.171723 containerd[1508]: time="2025-01-30T17:41:41.171656745Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 30 17:41:41.174515 containerd[1508]: time="2025-01-30T17:41:41.173173958Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064"
Jan 30 17:41:41.174515 containerd[1508]: time="2025-01-30T17:41:41.173276958Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 30 17:41:41.174515 containerd[1508]: time="2025-01-30T17:41:41.173485616Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 30 17:41:41.175240 containerd[1508]: time="2025-01-30T17:41:41.175204337Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 30 17:41:41.179399 containerd[1508]: time="2025-01-30T17:41:41.179358476Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 30 17:41:41.180826 containerd[1508]: time="2025-01-30T17:41:41.180785181Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 723.761574ms"
Jan 30 17:41:41.183708 containerd[1508]: time="2025-01-30T17:41:41.183671443Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 715.0898ms"
Jan 30 17:41:41.261294 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2308456904.mount: Deactivated successfully.
Jan 30 17:41:41.355972 containerd[1508]: time="2025-01-30T17:41:41.355757247Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 17:41:41.355972 containerd[1508]: time="2025-01-30T17:41:41.355997022Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 17:41:41.356392 containerd[1508]: time="2025-01-30T17:41:41.356061090Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 17:41:41.356392 containerd[1508]: time="2025-01-30T17:41:41.356299019Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 17:41:41.356887 containerd[1508]: time="2025-01-30T17:41:41.356767731Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 17:41:41.356995 containerd[1508]: time="2025-01-30T17:41:41.356924176Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 17:41:41.357174 containerd[1508]: time="2025-01-30T17:41:41.356994677Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 17:41:41.357327 containerd[1508]: time="2025-01-30T17:41:41.357223835Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 17:41:41.462621 systemd[1]: run-containerd-runc-k8s.io-c3526dbb2ead0b8315f4aecfffd88dd378cbe607ecf26be84b50b03be3e1226d-runc.6VkBT1.mount: Deactivated successfully.
Jan 30 17:41:41.490585 systemd[1]: Started cri-containerd-135c08979a6cd95eaaa67bf859af401bd00278751e1688947aaf18c15491cd58.scope - libcontainer container 135c08979a6cd95eaaa67bf859af401bd00278751e1688947aaf18c15491cd58.
Jan 30 17:41:41.494483 systemd[1]: Started cri-containerd-c3526dbb2ead0b8315f4aecfffd88dd378cbe607ecf26be84b50b03be3e1226d.scope - libcontainer container c3526dbb2ead0b8315f4aecfffd88dd378cbe607ecf26be84b50b03be3e1226d.
Jan 30 17:41:41.541411 containerd[1508]: time="2025-01-30T17:41:41.541354042Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-66bqq,Uid:afa0bc2d-9ea5-4736-9180-e3562293e9d1,Namespace:kube-system,Attempt:0,} returns sandbox id \"135c08979a6cd95eaaa67bf859af401bd00278751e1688947aaf18c15491cd58\"" Jan 30 17:41:41.546038 containerd[1508]: time="2025-01-30T17:41:41.545545922Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qfbxv,Uid:fab9ca69-b3fa-4ae4-8969-feb0ba4a7d45,Namespace:kube-system,Attempt:0,} returns sandbox id \"c3526dbb2ead0b8315f4aecfffd88dd378cbe607ecf26be84b50b03be3e1226d\"" Jan 30 17:41:41.549477 containerd[1508]: time="2025-01-30T17:41:41.549421284Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 30 17:41:42.114384 kubelet[1883]: E0130 17:41:42.114272 1883 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 17:41:43.114871 kubelet[1883]: E0130 17:41:43.114781 1883 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 17:41:44.115564 kubelet[1883]: E0130 17:41:44.115494 1883 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 17:41:45.117581 kubelet[1883]: E0130 17:41:45.117501 1883 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 17:41:46.118668 kubelet[1883]: E0130 17:41:46.118575 1883 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 17:41:47.121107 kubelet[1883]: E0130 17:41:47.119828 1883 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 17:41:47.992104 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount3890671950.mount: Deactivated successfully. Jan 30 17:41:48.121755 kubelet[1883]: E0130 17:41:48.121645 1883 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 17:41:49.122510 kubelet[1883]: E0130 17:41:49.122274 1883 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 17:41:50.122722 kubelet[1883]: E0130 17:41:50.122599 1883 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 17:41:50.851670 containerd[1508]: time="2025-01-30T17:41:50.851523092Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 17:41:50.855328 containerd[1508]: time="2025-01-30T17:41:50.853893546Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jan 30 17:41:50.855328 containerd[1508]: time="2025-01-30T17:41:50.854590238Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 17:41:50.857289 containerd[1508]: time="2025-01-30T17:41:50.856401561Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 9.306906927s" Jan 30 17:41:50.857289 containerd[1508]: time="2025-01-30T17:41:50.856463712Z" level=info msg="PullImage 
\"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jan 30 17:41:50.858637 containerd[1508]: time="2025-01-30T17:41:50.858384621Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.1\"" Jan 30 17:41:50.860909 containerd[1508]: time="2025-01-30T17:41:50.860645220Z" level=info msg="CreateContainer within sandbox \"c3526dbb2ead0b8315f4aecfffd88dd378cbe607ecf26be84b50b03be3e1226d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 30 17:41:50.878266 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2431344065.mount: Deactivated successfully. Jan 30 17:41:50.882216 containerd[1508]: time="2025-01-30T17:41:50.882150495Z" level=info msg="CreateContainer within sandbox \"c3526dbb2ead0b8315f4aecfffd88dd378cbe607ecf26be84b50b03be3e1226d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d30b0d713a590964cd9afeb507f4e957c56538f6be1c6f995d3d44b716129a6a\"" Jan 30 17:41:50.883620 containerd[1508]: time="2025-01-30T17:41:50.883574932Z" level=info msg="StartContainer for \"d30b0d713a590964cd9afeb507f4e957c56538f6be1c6f995d3d44b716129a6a\"" Jan 30 17:41:50.936200 systemd[1]: run-containerd-runc-k8s.io-d30b0d713a590964cd9afeb507f4e957c56538f6be1c6f995d3d44b716129a6a-runc.8nHydG.mount: Deactivated successfully. Jan 30 17:41:50.951732 systemd[1]: Started cri-containerd-d30b0d713a590964cd9afeb507f4e957c56538f6be1c6f995d3d44b716129a6a.scope - libcontainer container d30b0d713a590964cd9afeb507f4e957c56538f6be1c6f995d3d44b716129a6a. Jan 30 17:41:50.977989 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
Jan 30 17:41:51.008563 containerd[1508]: time="2025-01-30T17:41:51.008131033Z" level=info msg="StartContainer for \"d30b0d713a590964cd9afeb507f4e957c56538f6be1c6f995d3d44b716129a6a\" returns successfully" Jan 30 17:41:51.027336 systemd[1]: cri-containerd-d30b0d713a590964cd9afeb507f4e957c56538f6be1c6f995d3d44b716129a6a.scope: Deactivated successfully. Jan 30 17:41:51.123265 kubelet[1883]: E0130 17:41:51.123111 1883 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 17:41:51.286145 containerd[1508]: time="2025-01-30T17:41:51.285331074Z" level=info msg="shim disconnected" id=d30b0d713a590964cd9afeb507f4e957c56538f6be1c6f995d3d44b716129a6a namespace=k8s.io Jan 30 17:41:51.286145 containerd[1508]: time="2025-01-30T17:41:51.285773873Z" level=warning msg="cleaning up after shim disconnected" id=d30b0d713a590964cd9afeb507f4e957c56538f6be1c6f995d3d44b716129a6a namespace=k8s.io Jan 30 17:41:51.286145 containerd[1508]: time="2025-01-30T17:41:51.285796590Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 17:41:51.318857 containerd[1508]: time="2025-01-30T17:41:51.318539689Z" level=info msg="CreateContainer within sandbox \"c3526dbb2ead0b8315f4aecfffd88dd378cbe607ecf26be84b50b03be3e1226d\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 30 17:41:51.337987 containerd[1508]: time="2025-01-30T17:41:51.337799080Z" level=info msg="CreateContainer within sandbox \"c3526dbb2ead0b8315f4aecfffd88dd378cbe607ecf26be84b50b03be3e1226d\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"8f73b0cd7fd6b475ba7518d7b3397dec673ba2672592de37c5a967907db3dbd9\"" Jan 30 17:41:51.338918 containerd[1508]: time="2025-01-30T17:41:51.338877268Z" level=info msg="StartContainer for \"8f73b0cd7fd6b475ba7518d7b3397dec673ba2672592de37c5a967907db3dbd9\"" Jan 30 17:41:51.384520 systemd[1]: Started 
cri-containerd-8f73b0cd7fd6b475ba7518d7b3397dec673ba2672592de37c5a967907db3dbd9.scope - libcontainer container 8f73b0cd7fd6b475ba7518d7b3397dec673ba2672592de37c5a967907db3dbd9. Jan 30 17:41:51.429013 containerd[1508]: time="2025-01-30T17:41:51.428816631Z" level=info msg="StartContainer for \"8f73b0cd7fd6b475ba7518d7b3397dec673ba2672592de37c5a967907db3dbd9\" returns successfully" Jan 30 17:41:51.445923 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 30 17:41:51.446363 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 30 17:41:51.446518 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 30 17:41:51.456494 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 17:41:51.456842 systemd[1]: cri-containerd-8f73b0cd7fd6b475ba7518d7b3397dec673ba2672592de37c5a967907db3dbd9.scope: Deactivated successfully. Jan 30 17:41:51.505791 containerd[1508]: time="2025-01-30T17:41:51.505678569Z" level=info msg="shim disconnected" id=8f73b0cd7fd6b475ba7518d7b3397dec673ba2672592de37c5a967907db3dbd9 namespace=k8s.io Jan 30 17:41:51.507107 containerd[1508]: time="2025-01-30T17:41:51.505798372Z" level=warning msg="cleaning up after shim disconnected" id=8f73b0cd7fd6b475ba7518d7b3397dec673ba2672592de37c5a967907db3dbd9 namespace=k8s.io Jan 30 17:41:51.507107 containerd[1508]: time="2025-01-30T17:41:51.505843133Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 17:41:51.506103 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 17:41:51.879443 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d30b0d713a590964cd9afeb507f4e957c56538f6be1c6f995d3d44b716129a6a-rootfs.mount: Deactivated successfully. 
Jan 30 17:41:52.124401 kubelet[1883]: E0130 17:41:52.124213 1883 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 17:41:52.325586 containerd[1508]: time="2025-01-30T17:41:52.324678007Z" level=info msg="CreateContainer within sandbox \"c3526dbb2ead0b8315f4aecfffd88dd378cbe607ecf26be84b50b03be3e1226d\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 30 17:41:52.361296 containerd[1508]: time="2025-01-30T17:41:52.360833311Z" level=info msg="CreateContainer within sandbox \"c3526dbb2ead0b8315f4aecfffd88dd378cbe607ecf26be84b50b03be3e1226d\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"03d2b9e4cbb4b03143ed4b49405fef2d616d3c55ce9f7ac87e51724e980c34aa\"" Jan 30 17:41:52.362257 containerd[1508]: time="2025-01-30T17:41:52.361863118Z" level=info msg="StartContainer for \"03d2b9e4cbb4b03143ed4b49405fef2d616d3c55ce9f7ac87e51724e980c34aa\"" Jan 30 17:41:52.437660 systemd[1]: Started cri-containerd-03d2b9e4cbb4b03143ed4b49405fef2d616d3c55ce9f7ac87e51724e980c34aa.scope - libcontainer container 03d2b9e4cbb4b03143ed4b49405fef2d616d3c55ce9f7ac87e51724e980c34aa. Jan 30 17:41:52.506208 containerd[1508]: time="2025-01-30T17:41:52.504174523Z" level=info msg="StartContainer for \"03d2b9e4cbb4b03143ed4b49405fef2d616d3c55ce9f7ac87e51724e980c34aa\" returns successfully" Jan 30 17:41:52.513875 systemd[1]: cri-containerd-03d2b9e4cbb4b03143ed4b49405fef2d616d3c55ce9f7ac87e51724e980c34aa.scope: Deactivated successfully. 
Jan 30 17:41:52.693461 containerd[1508]: time="2025-01-30T17:41:52.693330883Z" level=info msg="shim disconnected" id=03d2b9e4cbb4b03143ed4b49405fef2d616d3c55ce9f7ac87e51724e980c34aa namespace=k8s.io Jan 30 17:41:52.693461 containerd[1508]: time="2025-01-30T17:41:52.693458962Z" level=warning msg="cleaning up after shim disconnected" id=03d2b9e4cbb4b03143ed4b49405fef2d616d3c55ce9f7ac87e51724e980c34aa namespace=k8s.io Jan 30 17:41:52.693829 containerd[1508]: time="2025-01-30T17:41:52.693477254Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 17:41:52.873514 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-03d2b9e4cbb4b03143ed4b49405fef2d616d3c55ce9f7ac87e51724e980c34aa-rootfs.mount: Deactivated successfully. Jan 30 17:41:52.873705 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3822548856.mount: Deactivated successfully. Jan 30 17:41:53.124749 kubelet[1883]: E0130 17:41:53.124376 1883 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 17:41:53.327586 containerd[1508]: time="2025-01-30T17:41:53.327458886Z" level=info msg="CreateContainer within sandbox \"c3526dbb2ead0b8315f4aecfffd88dd378cbe607ecf26be84b50b03be3e1226d\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 30 17:41:53.354857 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount793562989.mount: Deactivated successfully. 
Jan 30 17:41:53.359838 containerd[1508]: time="2025-01-30T17:41:53.359287144Z" level=info msg="CreateContainer within sandbox \"c3526dbb2ead0b8315f4aecfffd88dd378cbe607ecf26be84b50b03be3e1226d\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"4a6d8223099de2dc9fe1dffe60298f6a0c2e8786363abfad68b3929723e318e7\"" Jan 30 17:41:53.360802 containerd[1508]: time="2025-01-30T17:41:53.360763096Z" level=info msg="StartContainer for \"4a6d8223099de2dc9fe1dffe60298f6a0c2e8786363abfad68b3929723e318e7\"" Jan 30 17:41:53.424668 systemd[1]: Started cri-containerd-4a6d8223099de2dc9fe1dffe60298f6a0c2e8786363abfad68b3929723e318e7.scope - libcontainer container 4a6d8223099de2dc9fe1dffe60298f6a0c2e8786363abfad68b3929723e318e7. Jan 30 17:41:53.481818 systemd[1]: cri-containerd-4a6d8223099de2dc9fe1dffe60298f6a0c2e8786363abfad68b3929723e318e7.scope: Deactivated successfully. Jan 30 17:41:53.489998 containerd[1508]: time="2025-01-30T17:41:53.489913918Z" level=info msg="StartContainer for \"4a6d8223099de2dc9fe1dffe60298f6a0c2e8786363abfad68b3929723e318e7\" returns successfully" Jan 30 17:41:53.502228 containerd[1508]: time="2025-01-30T17:41:53.502128115Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 17:41:53.504978 containerd[1508]: time="2025-01-30T17:41:53.504691565Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.1: active requests=0, bytes read=30909474" Jan 30 17:41:53.505920 containerd[1508]: time="2025-01-30T17:41:53.505874964Z" level=info msg="ImageCreate event name:\"sha256:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 17:41:53.514760 containerd[1508]: time="2025-01-30T17:41:53.514563090Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 17:41:53.516468 containerd[1508]: time="2025-01-30T17:41:53.516135423Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.1\" with image id \"sha256:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a\", repo tag \"registry.k8s.io/kube-proxy:v1.32.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5\", size \"30908485\" in 2.657690079s" Jan 30 17:41:53.517149 containerd[1508]: time="2025-01-30T17:41:53.516816606Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.1\" returns image reference \"sha256:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a\"" Jan 30 17:41:53.527708 containerd[1508]: time="2025-01-30T17:41:53.527047957Z" level=info msg="CreateContainer within sandbox \"135c08979a6cd95eaaa67bf859af401bd00278751e1688947aaf18c15491cd58\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 30 17:41:53.688846 containerd[1508]: time="2025-01-30T17:41:53.688608780Z" level=info msg="shim disconnected" id=4a6d8223099de2dc9fe1dffe60298f6a0c2e8786363abfad68b3929723e318e7 namespace=k8s.io Jan 30 17:41:53.688846 containerd[1508]: time="2025-01-30T17:41:53.688749303Z" level=warning msg="cleaning up after shim disconnected" id=4a6d8223099de2dc9fe1dffe60298f6a0c2e8786363abfad68b3929723e318e7 namespace=k8s.io Jan 30 17:41:53.688846 containerd[1508]: time="2025-01-30T17:41:53.688769611Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 17:41:53.691877 containerd[1508]: time="2025-01-30T17:41:53.691836090Z" level=info msg="CreateContainer within sandbox \"135c08979a6cd95eaaa67bf859af401bd00278751e1688947aaf18c15491cd58\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"0d71d65db5b96e8a1c69e026d92a7f417a489d7f63da6dc412ae17add48b2df8\"" Jan 30 17:41:53.695438 containerd[1508]: time="2025-01-30T17:41:53.694032317Z" level=info 
msg="StartContainer for \"0d71d65db5b96e8a1c69e026d92a7f417a489d7f63da6dc412ae17add48b2df8\"" Jan 30 17:41:53.739407 systemd[1]: Started cri-containerd-0d71d65db5b96e8a1c69e026d92a7f417a489d7f63da6dc412ae17add48b2df8.scope - libcontainer container 0d71d65db5b96e8a1c69e026d92a7f417a489d7f63da6dc412ae17add48b2df8. Jan 30 17:41:53.785599 containerd[1508]: time="2025-01-30T17:41:53.785495107Z" level=info msg="StartContainer for \"0d71d65db5b96e8a1c69e026d92a7f417a489d7f63da6dc412ae17add48b2df8\" returns successfully" Jan 30 17:41:53.874109 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4a6d8223099de2dc9fe1dffe60298f6a0c2e8786363abfad68b3929723e318e7-rootfs.mount: Deactivated successfully. Jan 30 17:41:54.125583 kubelet[1883]: E0130 17:41:54.125443 1883 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 17:41:54.330770 containerd[1508]: time="2025-01-30T17:41:54.330594791Z" level=info msg="CreateContainer within sandbox \"c3526dbb2ead0b8315f4aecfffd88dd378cbe607ecf26be84b50b03be3e1226d\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 30 17:41:54.350394 containerd[1508]: time="2025-01-30T17:41:54.350342414Z" level=info msg="CreateContainer within sandbox \"c3526dbb2ead0b8315f4aecfffd88dd378cbe607ecf26be84b50b03be3e1226d\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e52e24c05a7831006bda1ff26254b96fdb4c98c76111cf13ba1b747f89f38ac8\"" Jan 30 17:41:54.350936 containerd[1508]: time="2025-01-30T17:41:54.350902816Z" level=info msg="StartContainer for \"e52e24c05a7831006bda1ff26254b96fdb4c98c76111cf13ba1b747f89f38ac8\"" Jan 30 17:41:54.400432 systemd[1]: Started cri-containerd-e52e24c05a7831006bda1ff26254b96fdb4c98c76111cf13ba1b747f89f38ac8.scope - libcontainer container e52e24c05a7831006bda1ff26254b96fdb4c98c76111cf13ba1b747f89f38ac8. 
Jan 30 17:41:54.451921 containerd[1508]: time="2025-01-30T17:41:54.451843197Z" level=info msg="StartContainer for \"e52e24c05a7831006bda1ff26254b96fdb4c98c76111cf13ba1b747f89f38ac8\" returns successfully" Jan 30 17:41:54.595320 kubelet[1883]: I0130 17:41:54.592607 1883 kubelet_node_status.go:502] "Fast updating node status as it just became ready" Jan 30 17:41:54.874864 systemd[1]: run-containerd-runc-k8s.io-e52e24c05a7831006bda1ff26254b96fdb4c98c76111cf13ba1b747f89f38ac8-runc.ijuBDs.mount: Deactivated successfully. Jan 30 17:41:55.049354 kernel: Initializing XFRM netlink socket Jan 30 17:41:55.126153 kubelet[1883]: E0130 17:41:55.125779 1883 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 17:41:55.362367 kubelet[1883]: I0130 17:41:55.361627 1883 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-66bqq" podStartSLOduration=5.388932462 podStartE2EDuration="17.361566378s" podCreationTimestamp="2025-01-30 17:41:38 +0000 UTC" firstStartedPulling="2025-01-30 17:41:41.548850956 +0000 UTC m=+4.220486636" lastFinishedPulling="2025-01-30 17:41:53.521484871 +0000 UTC m=+16.193120552" observedRunningTime="2025-01-30 17:41:54.365629779 +0000 UTC m=+17.037265486" watchObservedRunningTime="2025-01-30 17:41:55.361566378 +0000 UTC m=+18.033202058" Jan 30 17:41:56.126110 kubelet[1883]: E0130 17:41:56.125962 1883 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 17:41:56.596394 kubelet[1883]: I0130 17:41:56.596112 1883 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-qfbxv" podStartSLOduration=9.287037538 podStartE2EDuration="18.596082603s" podCreationTimestamp="2025-01-30 17:41:38 +0000 UTC" firstStartedPulling="2025-01-30 17:41:41.548771261 +0000 UTC m=+4.220406943" lastFinishedPulling="2025-01-30 17:41:50.857816321 +0000 UTC m=+13.529452008" 
observedRunningTime="2025-01-30 17:41:55.362337573 +0000 UTC m=+18.033973282" watchObservedRunningTime="2025-01-30 17:41:56.596082603 +0000 UTC m=+19.267718290" Jan 30 17:41:56.606512 systemd[1]: Created slice kubepods-besteffort-pod5b1a0151_9571_4de2_acfd_5bafc00dd277.slice - libcontainer container kubepods-besteffort-pod5b1a0151_9571_4de2_acfd_5bafc00dd277.slice. Jan 30 17:41:56.659977 kubelet[1883]: I0130 17:41:56.659891 1883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4q9x\" (UniqueName: \"kubernetes.io/projected/5b1a0151-9571-4de2-acfd-5bafc00dd277-kube-api-access-x4q9x\") pod \"nginx-deployment-7fcdb87857-ngt5h\" (UID: \"5b1a0151-9571-4de2-acfd-5bafc00dd277\") " pod="default/nginx-deployment-7fcdb87857-ngt5h" Jan 30 17:41:56.808924 systemd-networkd[1424]: cilium_host: Link UP Jan 30 17:41:56.814257 systemd-networkd[1424]: cilium_net: Link UP Jan 30 17:41:56.815597 systemd-networkd[1424]: cilium_net: Gained carrier Jan 30 17:41:56.817771 systemd-networkd[1424]: cilium_host: Gained carrier Jan 30 17:41:56.912461 containerd[1508]: time="2025-01-30T17:41:56.912232863Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-ngt5h,Uid:5b1a0151-9571-4de2-acfd-5bafc00dd277,Namespace:default,Attempt:0,}" Jan 30 17:41:56.997221 systemd-networkd[1424]: cilium_vxlan: Link UP Jan 30 17:41:56.997234 systemd-networkd[1424]: cilium_vxlan: Gained carrier Jan 30 17:41:57.126290 kubelet[1883]: E0130 17:41:57.126165 1883 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 17:41:57.249493 systemd-networkd[1424]: cilium_net: Gained IPv6LL Jan 30 17:41:57.389462 kernel: NET: Registered PF_ALG protocol family Jan 30 17:41:57.433388 systemd-networkd[1424]: cilium_host: Gained IPv6LL Jan 30 17:41:58.111606 kubelet[1883]: E0130 17:41:58.111458 1883 file.go:104] "Unable to read config path" err="path does not exist, 
ignoring" path="/etc/kubernetes/manifests" Jan 30 17:41:58.126627 kubelet[1883]: E0130 17:41:58.126578 1883 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 17:41:58.383136 systemd-networkd[1424]: lxc_health: Link UP Jan 30 17:41:58.389462 systemd-networkd[1424]: lxc_health: Gained carrier Jan 30 17:41:58.995103 systemd-networkd[1424]: lxcf905b5aedab4: Link UP Jan 30 17:41:59.004764 kernel: eth0: renamed from tmp7ae3a Jan 30 17:41:59.012711 systemd-networkd[1424]: lxcf905b5aedab4: Gained carrier Jan 30 17:41:59.033376 systemd-networkd[1424]: cilium_vxlan: Gained IPv6LL Jan 30 17:41:59.126939 kubelet[1883]: E0130 17:41:59.126882 1883 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 17:41:59.481433 systemd-networkd[1424]: lxc_health: Gained IPv6LL Jan 30 17:42:00.127728 kubelet[1883]: E0130 17:42:00.127650 1883 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 17:42:00.442245 systemd-networkd[1424]: lxcf905b5aedab4: Gained IPv6LL Jan 30 17:42:01.128497 kubelet[1883]: E0130 17:42:01.128351 1883 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 17:42:02.129460 kubelet[1883]: E0130 17:42:02.129309 1883 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 17:42:03.130300 kubelet[1883]: E0130 17:42:03.130144 1883 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 17:42:04.130783 kubelet[1883]: E0130 17:42:04.130659 1883 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 17:42:04.261769 update_engine[1490]: I20250130 17:42:04.261531 1490 update_attempter.cc:509] Updating 
boot flags... Jan 30 17:42:04.345371 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2967) Jan 30 17:42:04.468602 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2967) Jan 30 17:42:04.779416 containerd[1508]: time="2025-01-30T17:42:04.779092878Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 17:42:04.779416 containerd[1508]: time="2025-01-30T17:42:04.779238105Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 17:42:04.779416 containerd[1508]: time="2025-01-30T17:42:04.779263051Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 17:42:04.780167 containerd[1508]: time="2025-01-30T17:42:04.779576871Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 17:42:04.810777 systemd[1]: run-containerd-runc-k8s.io-7ae3a28fc55943d7c2c0938db7ebd5730add0e1dfa14bf791fb6649406b2e054-runc.1ZwGmC.mount: Deactivated successfully. Jan 30 17:42:04.819407 systemd[1]: Started cri-containerd-7ae3a28fc55943d7c2c0938db7ebd5730add0e1dfa14bf791fb6649406b2e054.scope - libcontainer container 7ae3a28fc55943d7c2c0938db7ebd5730add0e1dfa14bf791fb6649406b2e054. 
Jan 30 17:42:04.880080 containerd[1508]: time="2025-01-30T17:42:04.879947081Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-ngt5h,Uid:5b1a0151-9571-4de2-acfd-5bafc00dd277,Namespace:default,Attempt:0,} returns sandbox id \"7ae3a28fc55943d7c2c0938db7ebd5730add0e1dfa14bf791fb6649406b2e054\"" Jan 30 17:42:04.884413 containerd[1508]: time="2025-01-30T17:42:04.883992782Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 30 17:42:05.131008 kubelet[1883]: E0130 17:42:05.130927 1883 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 17:42:06.132948 kubelet[1883]: E0130 17:42:06.132876 1883 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 17:42:07.133437 kubelet[1883]: E0130 17:42:07.133325 1883 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 17:42:08.134085 kubelet[1883]: E0130 17:42:08.133994 1883 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 17:42:09.135888 kubelet[1883]: E0130 17:42:09.135773 1883 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 17:42:09.214572 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1161535981.mount: Deactivated successfully. 
Jan 30 17:42:10.136684 kubelet[1883]: E0130 17:42:10.136497 1883 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 17:42:10.857396 containerd[1508]: time="2025-01-30T17:42:10.857303027Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 17:42:10.859037 containerd[1508]: time="2025-01-30T17:42:10.858983213Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=71015561" Jan 30 17:42:10.859752 containerd[1508]: time="2025-01-30T17:42:10.859420942Z" level=info msg="ImageCreate event name:\"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 17:42:10.864486 containerd[1508]: time="2025-01-30T17:42:10.864430245Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:2ffeb5a7ca68f2017f0bc48251750a6e40fcd3c341b94a22fc7812dcabbb84db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 17:42:10.866005 containerd[1508]: time="2025-01-30T17:42:10.865746518Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:2ffeb5a7ca68f2017f0bc48251750a6e40fcd3c341b94a22fc7812dcabbb84db\", size \"71015439\" in 5.981689944s" Jan 30 17:42:10.866005 containerd[1508]: time="2025-01-30T17:42:10.865835031Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\"" Jan 30 17:42:10.869683 containerd[1508]: time="2025-01-30T17:42:10.869636251Z" level=info msg="CreateContainer within sandbox \"7ae3a28fc55943d7c2c0938db7ebd5730add0e1dfa14bf791fb6649406b2e054\" for container 
&ContainerMetadata{Name:nginx,Attempt:0,}" Jan 30 17:42:10.887341 containerd[1508]: time="2025-01-30T17:42:10.887280282Z" level=info msg="CreateContainer within sandbox \"7ae3a28fc55943d7c2c0938db7ebd5730add0e1dfa14bf791fb6649406b2e054\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"23a360866f2bb5c414652ac5b3342aee51c6a55f49b0f58bf7f85df3dab3a614\"" Jan 30 17:42:10.888348 containerd[1508]: time="2025-01-30T17:42:10.888251688Z" level=info msg="StartContainer for \"23a360866f2bb5c414652ac5b3342aee51c6a55f49b0f58bf7f85df3dab3a614\"" Jan 30 17:42:10.962721 systemd[1]: run-containerd-runc-k8s.io-23a360866f2bb5c414652ac5b3342aee51c6a55f49b0f58bf7f85df3dab3a614-runc.zRLPLC.mount: Deactivated successfully. Jan 30 17:42:10.977533 systemd[1]: Started cri-containerd-23a360866f2bb5c414652ac5b3342aee51c6a55f49b0f58bf7f85df3dab3a614.scope - libcontainer container 23a360866f2bb5c414652ac5b3342aee51c6a55f49b0f58bf7f85df3dab3a614. Jan 30 17:42:11.042626 containerd[1508]: time="2025-01-30T17:42:11.042443408Z" level=info msg="StartContainer for \"23a360866f2bb5c414652ac5b3342aee51c6a55f49b0f58bf7f85df3dab3a614\" returns successfully" Jan 30 17:42:11.137661 kubelet[1883]: E0130 17:42:11.137582 1883 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 17:42:11.405636 kubelet[1883]: I0130 17:42:11.405047 1883 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-7fcdb87857-ngt5h" podStartSLOduration=9.420129767 podStartE2EDuration="15.405006984s" podCreationTimestamp="2025-01-30 17:41:56 +0000 UTC" firstStartedPulling="2025-01-30 17:42:04.88272708 +0000 UTC m=+27.554362760" lastFinishedPulling="2025-01-30 17:42:10.867604284 +0000 UTC m=+33.539239977" observedRunningTime="2025-01-30 17:42:11.404588947 +0000 UTC m=+34.076224627" watchObservedRunningTime="2025-01-30 17:42:11.405006984 +0000 UTC m=+34.076642679" Jan 30 17:42:12.138175 kubelet[1883]: E0130 
17:42:12.138091 1883 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 17:42:13.139440 kubelet[1883]: E0130 17:42:13.139319 1883 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 17:42:14.140564 kubelet[1883]: E0130 17:42:14.140487 1883 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 17:42:15.141590 kubelet[1883]: E0130 17:42:15.141469 1883 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 17:42:16.142091 kubelet[1883]: E0130 17:42:16.142015 1883 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 17:42:16.903295 systemd[1]: Created slice kubepods-besteffort-pod32e4128c_f059_4cc9_9333_46ce8e04edee.slice - libcontainer container kubepods-besteffort-pod32e4128c_f059_4cc9_9333_46ce8e04edee.slice. 
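The `pod_startup_latency_tracker` entry above for `nginx-deployment-7fcdb87857-ngt5h` reports both `podStartSLOduration=9.420129767` and `podStartE2EDuration=15.405006984s`. The SLO figure is the end-to-end startup time with the image-pull window subtracted; a quick sanity check using the monotonic (`m=+…`) offsets from that entry (a sketch for checking the arithmetic, not kubelet's actual code):

```python
# Values copied from the kubelet pod_startup_latency_tracker entry above.
pull_started  = 27.554362760   # firstStartedPulling, monotonic offset (m=+...)
pull_finished = 33.539239977   # lastFinishedPulling, monotonic offset (m=+...)
e2e_duration  = 15.405006984   # podStartE2EDuration, seconds

image_pull = pull_finished - pull_started   # time spent pulling the nginx image
slo_duration = e2e_duration - image_pull    # startup time excluding the pull
print(f"{slo_duration:.9f}")                # matches podStartSLOduration=9.420129767
```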
Jan 30 17:42:16.996059 kubelet[1883]: I0130 17:42:16.995966 1883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/32e4128c-f059-4cc9-9333-46ce8e04edee-data\") pod \"nfs-server-provisioner-0\" (UID: \"32e4128c-f059-4cc9-9333-46ce8e04edee\") " pod="default/nfs-server-provisioner-0" Jan 30 17:42:16.996059 kubelet[1883]: I0130 17:42:16.996057 1883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngzxp\" (UniqueName: \"kubernetes.io/projected/32e4128c-f059-4cc9-9333-46ce8e04edee-kube-api-access-ngzxp\") pod \"nfs-server-provisioner-0\" (UID: \"32e4128c-f059-4cc9-9333-46ce8e04edee\") " pod="default/nfs-server-provisioner-0" Jan 30 17:42:17.142565 kubelet[1883]: E0130 17:42:17.142460 1883 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 17:42:17.208473 containerd[1508]: time="2025-01-30T17:42:17.208298398Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:32e4128c-f059-4cc9-9333-46ce8e04edee,Namespace:default,Attempt:0,}" Jan 30 17:42:17.264640 systemd-networkd[1424]: lxcf0f3d369af8e: Link UP Jan 30 17:42:17.272325 kernel: eth0: renamed from tmp12e62 Jan 30 17:42:17.277661 systemd-networkd[1424]: lxcf0f3d369af8e: Gained carrier Jan 30 17:42:17.549066 containerd[1508]: time="2025-01-30T17:42:17.548421568Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 17:42:17.549066 containerd[1508]: time="2025-01-30T17:42:17.548542528Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 17:42:17.549066 containerd[1508]: time="2025-01-30T17:42:17.548566973Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 17:42:17.549066 containerd[1508]: time="2025-01-30T17:42:17.548726620Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 17:42:17.579405 systemd[1]: Started cri-containerd-12e62029670f9d5eed7de003b513bce5249849a4bc80576dc2f53e02f19ceee0.scope - libcontainer container 12e62029670f9d5eed7de003b513bce5249849a4bc80576dc2f53e02f19ceee0. Jan 30 17:42:17.640648 containerd[1508]: time="2025-01-30T17:42:17.640480393Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:32e4128c-f059-4cc9-9333-46ce8e04edee,Namespace:default,Attempt:0,} returns sandbox id \"12e62029670f9d5eed7de003b513bce5249849a4bc80576dc2f53e02f19ceee0\"" Jan 30 17:42:17.645754 containerd[1508]: time="2025-01-30T17:42:17.645409185Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Jan 30 17:42:18.111761 kubelet[1883]: E0130 17:42:18.111692 1883 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 17:42:18.143386 kubelet[1883]: E0130 17:42:18.143318 1883 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 17:42:19.144590 kubelet[1883]: E0130 17:42:19.144522 1883 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 17:42:19.321958 systemd-networkd[1424]: lxcf0f3d369af8e: Gained IPv6LL Jan 30 17:42:20.145608 kubelet[1883]: E0130 17:42:20.145486 1883 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 17:42:20.972836 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount165919174.mount: Deactivated successfully. 
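The mount unit name above, `var-lib-containerd-tmpmounts-containerd\x2dmount165919174.mount`, shows systemd's path-escaping convention: `/` separators become `-`, and a literal `-` inside a path component is escaped as `\x2d` so the unit name stays reversible. A simplified sketch of that escaping (real `systemd-escape --path` also handles leading dots and other non-alphanumerics, which this toy version ignores):

```python
def systemd_escape_path(path: str) -> str:
    """Simplified systemd path escaping: '-' -> \\x2d within components, '/' -> '-'."""
    components = path.strip("/").split("/")
    return "-".join(c.replace("-", "\\x2d") for c in components)

# Reconstruct the unit name from the journal entry above:
unit = systemd_escape_path("/var/lib/containerd/tmpmounts/containerd-mount165919174") + ".mount"
print(unit)   # var-lib-containerd-tmpmounts-containerd\x2dmount165919174.mount
```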
Jan 30 17:42:21.146461 kubelet[1883]: E0130 17:42:21.145981 1883 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 17:42:22.147943 kubelet[1883]: E0130 17:42:22.147072 1883 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 17:42:23.147488 kubelet[1883]: E0130 17:42:23.147346 1883 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 17:42:23.887111 containerd[1508]: time="2025-01-30T17:42:23.885451494Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 17:42:23.888937 containerd[1508]: time="2025-01-30T17:42:23.888889661Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039414" Jan 30 17:42:23.890238 containerd[1508]: time="2025-01-30T17:42:23.890203573Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 17:42:23.895482 containerd[1508]: time="2025-01-30T17:42:23.895436215Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 17:42:23.896584 containerd[1508]: time="2025-01-30T17:42:23.896544632Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size 
\"91036984\" in 6.251077632s" Jan 30 17:42:23.896747 containerd[1508]: time="2025-01-30T17:42:23.896715610Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Jan 30 17:42:23.901481 containerd[1508]: time="2025-01-30T17:42:23.901444934Z" level=info msg="CreateContainer within sandbox \"12e62029670f9d5eed7de003b513bce5249849a4bc80576dc2f53e02f19ceee0\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Jan 30 17:42:23.919359 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount489318439.mount: Deactivated successfully. Jan 30 17:42:23.922605 containerd[1508]: time="2025-01-30T17:42:23.922550128Z" level=info msg="CreateContainer within sandbox \"12e62029670f9d5eed7de003b513bce5249849a4bc80576dc2f53e02f19ceee0\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"d1440246f4ace8b7cf930b5706a1d3e64a24518b6af996c34a647ca328ce103a\"" Jan 30 17:42:23.924828 containerd[1508]: time="2025-01-30T17:42:23.923460132Z" level=info msg="StartContainer for \"d1440246f4ace8b7cf930b5706a1d3e64a24518b6af996c34a647ca328ce103a\"" Jan 30 17:42:23.974913 systemd[1]: Started cri-containerd-d1440246f4ace8b7cf930b5706a1d3e64a24518b6af996c34a647ca328ce103a.scope - libcontainer container d1440246f4ace8b7cf930b5706a1d3e64a24518b6af996c34a647ca328ce103a. 
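The pull above reports an image size of 91,036,984 bytes fetched in 6.251077632 s, which puts the average pull throughput at roughly 14.6 MB/s:

```python
size_bytes = 91_036_984       # nfs-provisioner image size reported by containerd
pull_secs  = 6.251077632      # "in 6.251077632s" from the PullImage entry above

rate_mb_s = size_bytes / pull_secs / 1e6   # average throughput in MB/s
print(f"{rate_mb_s:.2f} MB/s")             # ≈ 14.56 MB/s
```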
Jan 30 17:42:24.024488 containerd[1508]: time="2025-01-30T17:42:24.024416867Z" level=info msg="StartContainer for \"d1440246f4ace8b7cf930b5706a1d3e64a24518b6af996c34a647ca328ce103a\" returns successfully" Jan 30 17:42:24.148609 kubelet[1883]: E0130 17:42:24.148472 1883 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 17:42:24.446594 kubelet[1883]: I0130 17:42:24.446005 1883 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=2.192060899 podStartE2EDuration="8.445970647s" podCreationTimestamp="2025-01-30 17:42:16 +0000 UTC" firstStartedPulling="2025-01-30 17:42:17.644506355 +0000 UTC m=+40.316142035" lastFinishedPulling="2025-01-30 17:42:23.898416102 +0000 UTC m=+46.570051783" observedRunningTime="2025-01-30 17:42:24.444875302 +0000 UTC m=+47.116511002" watchObservedRunningTime="2025-01-30 17:42:24.445970647 +0000 UTC m=+47.117606339" Jan 30 17:42:25.149591 kubelet[1883]: E0130 17:42:25.149514 1883 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 17:42:26.150328 kubelet[1883]: E0130 17:42:26.150250 1883 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 17:42:27.150750 kubelet[1883]: E0130 17:42:27.150675 1883 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 17:42:28.150914 kubelet[1883]: E0130 17:42:28.150832 1883 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 17:42:29.151372 kubelet[1883]: E0130 17:42:29.151302 1883 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 17:42:30.152444 kubelet[1883]: E0130 17:42:30.152360 1883 file_linux.go:61] "Unable to read 
config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 17:42:31.153429 kubelet[1883]: E0130 17:42:31.153343 1883 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 17:42:32.154374 kubelet[1883]: E0130 17:42:32.154290 1883 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 17:42:33.155336 kubelet[1883]: E0130 17:42:33.155238 1883 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 17:42:33.572949 systemd[1]: Created slice kubepods-besteffort-pod589f5fe3_4bc1_43b5_aabe_a9629b2e7865.slice - libcontainer container kubepods-besteffort-pod589f5fe3_4bc1_43b5_aabe_a9629b2e7865.slice. Jan 30 17:42:33.608947 kubelet[1883]: I0130 17:42:33.608778 1883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-99969019-9664-4112-a946-d1f95f8c5ca9\" (UniqueName: \"kubernetes.io/nfs/589f5fe3-4bc1-43b5-aabe-a9629b2e7865-pvc-99969019-9664-4112-a946-d1f95f8c5ca9\") pod \"test-pod-1\" (UID: \"589f5fe3-4bc1-43b5-aabe-a9629b2e7865\") " pod="default/test-pod-1" Jan 30 17:42:33.608947 kubelet[1883]: I0130 17:42:33.608854 1883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-556xh\" (UniqueName: \"kubernetes.io/projected/589f5fe3-4bc1-43b5-aabe-a9629b2e7865-kube-api-access-556xh\") pod \"test-pod-1\" (UID: \"589f5fe3-4bc1-43b5-aabe-a9629b2e7865\") " pod="default/test-pod-1" Jan 30 17:42:33.764303 kernel: FS-Cache: Loaded Jan 30 17:42:33.857822 kernel: RPC: Registered named UNIX socket transport module. Jan 30 17:42:33.858072 kernel: RPC: Registered udp transport module. Jan 30 17:42:33.858130 kernel: RPC: Registered tcp transport module. Jan 30 17:42:33.858686 kernel: RPC: Registered tcp-with-tls transport module. 
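The kubelet lines above use klog's structured format: a severity+date prefix (`I0130` = info, Jan 30), a timestamp, the PID, the source file and line, a quoted message, then `key="value"` pairs. A small sketch of extracting the pod reference from one such entry (the entry string is abbreviated from the reconciler line above):

```python
import re

# One kubelet reconciler entry in klog structured format (abbreviated).
entry = ('I0130 17:42:33.608854 1883 reconciler_common.go:251] '
         '"operationExecutor.VerifyControllerAttachedVolume started for volume '
         '\\"kube-api-access-556xh\\"" pod="default/test-pod-1"')

# key="value" pairs follow the quoted message; pull out the pod reference.
pod = re.search(r'pod="([^"]+)"', entry).group(1)
namespace, name = pod.split("/", 1)
print(namespace, name)   # default test-pod-1
```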
Jan 30 17:42:33.859771 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Jan 30 17:42:34.156334 kubelet[1883]: E0130 17:42:34.156233 1883 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 17:42:34.217369 kernel: NFS: Registering the id_resolver key type Jan 30 17:42:34.217521 kernel: Key type id_resolver registered Jan 30 17:42:34.217587 kernel: Key type id_legacy registered Jan 30 17:42:34.271102 nfsidmap[3297]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'gb1.brightbox.com' Jan 30 17:42:34.279709 nfsidmap[3300]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'gb1.brightbox.com' Jan 30 17:42:34.480394 containerd[1508]: time="2025-01-30T17:42:34.479794025Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:589f5fe3-4bc1-43b5-aabe-a9629b2e7865,Namespace:default,Attempt:0,}" Jan 30 17:42:34.528335 systemd-networkd[1424]: lxc65ebf4b27ae1: Link UP Jan 30 17:42:34.534204 kernel: eth0: renamed from tmp71947 Jan 30 17:42:34.540767 systemd-networkd[1424]: lxc65ebf4b27ae1: Gained carrier Jan 30 17:42:34.782019 containerd[1508]: time="2025-01-30T17:42:34.781462326Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 17:42:34.782019 containerd[1508]: time="2025-01-30T17:42:34.781574385Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 17:42:34.782019 containerd[1508]: time="2025-01-30T17:42:34.781599531Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 17:42:34.782660 containerd[1508]: time="2025-01-30T17:42:34.782497517Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 17:42:34.813023 systemd[1]: run-containerd-runc-k8s.io-71947d33157a68c7ae015268e99308ba15ae622617b3390514f9d7b097e205ef-runc.lemqhg.mount: Deactivated successfully. Jan 30 17:42:34.824503 systemd[1]: Started cri-containerd-71947d33157a68c7ae015268e99308ba15ae622617b3390514f9d7b097e205ef.scope - libcontainer container 71947d33157a68c7ae015268e99308ba15ae622617b3390514f9d7b097e205ef. Jan 30 17:42:34.880648 containerd[1508]: time="2025-01-30T17:42:34.880247751Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:589f5fe3-4bc1-43b5-aabe-a9629b2e7865,Namespace:default,Attempt:0,} returns sandbox id \"71947d33157a68c7ae015268e99308ba15ae622617b3390514f9d7b097e205ef\"" Jan 30 17:42:34.882828 containerd[1508]: time="2025-01-30T17:42:34.882589475Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 30 17:42:35.156592 kubelet[1883]: E0130 17:42:35.156503 1883 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 17:42:35.236275 containerd[1508]: time="2025-01-30T17:42:35.235309687Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 17:42:35.238081 containerd[1508]: time="2025-01-30T17:42:35.238027547Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" Jan 30 17:42:35.242396 containerd[1508]: time="2025-01-30T17:42:35.242353193Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:2ffeb5a7ca68f2017f0bc48251750a6e40fcd3c341b94a22fc7812dcabbb84db\", size \"71015439\" in 359.723448ms" Jan 30 17:42:35.242507 containerd[1508]: time="2025-01-30T17:42:35.242403196Z" level=info msg="PullImage 
\"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\"" Jan 30 17:42:35.246089 containerd[1508]: time="2025-01-30T17:42:35.246045830Z" level=info msg="CreateContainer within sandbox \"71947d33157a68c7ae015268e99308ba15ae622617b3390514f9d7b097e205ef\" for container &ContainerMetadata{Name:test,Attempt:0,}" Jan 30 17:42:35.266559 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2556696470.mount: Deactivated successfully. Jan 30 17:42:35.268849 containerd[1508]: time="2025-01-30T17:42:35.268793313Z" level=info msg="CreateContainer within sandbox \"71947d33157a68c7ae015268e99308ba15ae622617b3390514f9d7b097e205ef\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"89908408007434d5c13c0a565ed3d36503221871f7b3625796c2de288244dcf9\"" Jan 30 17:42:35.271133 containerd[1508]: time="2025-01-30T17:42:35.269818973Z" level=info msg="StartContainer for \"89908408007434d5c13c0a565ed3d36503221871f7b3625796c2de288244dcf9\"" Jan 30 17:42:35.320537 systemd[1]: Started cri-containerd-89908408007434d5c13c0a565ed3d36503221871f7b3625796c2de288244dcf9.scope - libcontainer container 89908408007434d5c13c0a565ed3d36503221871f7b3625796c2de288244dcf9. 
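The `Pulled image` entry above records three identifiers for the same image: a mutable repo tag (`ghcr.io/flatcar/nginx:latest`), a content-addressed repo digest (`…@sha256:…`), and a local image id. Splitting a digest reference into its parts, as a sketch (real reference parsing, e.g. containerd's, handles ports, default registries, and more edge cases):

```python
# Repo digest copied from the containerd entry above.
ref = ("ghcr.io/flatcar/nginx@sha256:"
       "2ffeb5a7ca68f2017f0bc48251750a6e40fcd3c341b94a22fc7812dcabbb84db")

name, digest = ref.split("@", 1)       # repository vs. content digest
registry, repo = name.split("/", 1)    # first component is the registry host
algo, hex_hash = digest.split(":", 1)  # digest algorithm and hex value

print(registry, repo, algo, len(hex_hash))   # ghcr.io flatcar/nginx sha256 64
```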
Jan 30 17:42:35.356813 containerd[1508]: time="2025-01-30T17:42:35.356525310Z" level=info msg="StartContainer for \"89908408007434d5c13c0a565ed3d36503221871f7b3625796c2de288244dcf9\" returns successfully" Jan 30 17:42:35.472119 kubelet[1883]: I0130 17:42:35.471282 1883 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=17.109450655 podStartE2EDuration="17.471238748s" podCreationTimestamp="2025-01-30 17:42:18 +0000 UTC" firstStartedPulling="2025-01-30 17:42:34.881907153 +0000 UTC m=+57.553542834" lastFinishedPulling="2025-01-30 17:42:35.243695247 +0000 UTC m=+57.915330927" observedRunningTime="2025-01-30 17:42:35.470591129 +0000 UTC m=+58.142226836" watchObservedRunningTime="2025-01-30 17:42:35.471238748 +0000 UTC m=+58.142874442" Jan 30 17:42:35.705855 systemd-networkd[1424]: lxc65ebf4b27ae1: Gained IPv6LL Jan 30 17:42:36.156899 kubelet[1883]: E0130 17:42:36.156820 1883 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 17:42:37.157694 kubelet[1883]: E0130 17:42:37.157529 1883 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 17:42:38.112093 kubelet[1883]: E0130 17:42:38.111978 1883 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 17:42:38.158285 kubelet[1883]: E0130 17:42:38.158234 1883 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 17:42:39.158595 kubelet[1883]: E0130 17:42:39.158460 1883 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 17:42:40.159655 kubelet[1883]: E0130 17:42:40.159516 1883 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 17:42:41.160557 kubelet[1883]: 
E0130 17:42:41.160494 1883 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 17:42:42.161711 kubelet[1883]: E0130 17:42:42.161637 1883 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 17:42:43.162630 kubelet[1883]: E0130 17:42:43.162556 1883 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 17:42:44.163711 kubelet[1883]: E0130 17:42:44.163617 1883 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 17:42:44.595600 containerd[1508]: time="2025-01-30T17:42:44.595117777Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 30 17:42:44.632982 containerd[1508]: time="2025-01-30T17:42:44.632891713Z" level=info msg="StopContainer for \"e52e24c05a7831006bda1ff26254b96fdb4c98c76111cf13ba1b747f89f38ac8\" with timeout 2 (s)" Jan 30 17:42:44.633570 containerd[1508]: time="2025-01-30T17:42:44.633438827Z" level=info msg="Stop container \"e52e24c05a7831006bda1ff26254b96fdb4c98c76111cf13ba1b747f89f38ac8\" with signal terminated" Jan 30 17:42:44.644912 systemd-networkd[1424]: lxc_health: Link DOWN Jan 30 17:42:44.644929 systemd-networkd[1424]: lxc_health: Lost carrier Jan 30 17:42:44.667090 systemd[1]: cri-containerd-e52e24c05a7831006bda1ff26254b96fdb4c98c76111cf13ba1b747f89f38ac8.scope: Deactivated successfully. Jan 30 17:42:44.667986 systemd[1]: cri-containerd-e52e24c05a7831006bda1ff26254b96fdb4c98c76111cf13ba1b747f89f38ac8.scope: Consumed 10.141s CPU time. 
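The `StopContainer … with timeout 2` entry above reflects the usual CRI stop sequence: send SIGTERM, wait up to the timeout, then escalate to SIGKILL if the container has not exited (here the cilium container, which had consumed 10.141s of CPU, exited on the signal). The same pattern for an ordinary process, as a sketch:

```python
import signal
import subprocess

# Graceful-stop-with-timeout, analogous to CRI StopContainer(timeout=2):
# SIGTERM first, SIGKILL only if the process ignores it.
proc = subprocess.Popen(["sleep", "60"])
proc.send_signal(signal.SIGTERM)
try:
    proc.wait(timeout=2)
except subprocess.TimeoutExpired:
    proc.kill()        # escalation path; sleep normally dies on SIGTERM
    proc.wait()

print(proc.returncode)   # -15 (terminated by SIGTERM)
```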
Jan 30 17:42:44.704255 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e52e24c05a7831006bda1ff26254b96fdb4c98c76111cf13ba1b747f89f38ac8-rootfs.mount: Deactivated successfully. Jan 30 17:42:44.718434 containerd[1508]: time="2025-01-30T17:42:44.718172444Z" level=info msg="shim disconnected" id=e52e24c05a7831006bda1ff26254b96fdb4c98c76111cf13ba1b747f89f38ac8 namespace=k8s.io Jan 30 17:42:44.718434 containerd[1508]: time="2025-01-30T17:42:44.718429260Z" level=warning msg="cleaning up after shim disconnected" id=e52e24c05a7831006bda1ff26254b96fdb4c98c76111cf13ba1b747f89f38ac8 namespace=k8s.io Jan 30 17:42:44.718985 containerd[1508]: time="2025-01-30T17:42:44.718455623Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 17:42:44.739565 containerd[1508]: time="2025-01-30T17:42:44.739497339Z" level=info msg="StopContainer for \"e52e24c05a7831006bda1ff26254b96fdb4c98c76111cf13ba1b747f89f38ac8\" returns successfully" Jan 30 17:42:44.754791 containerd[1508]: time="2025-01-30T17:42:44.753921315Z" level=info msg="StopPodSandbox for \"c3526dbb2ead0b8315f4aecfffd88dd378cbe607ecf26be84b50b03be3e1226d\"" Jan 30 17:42:44.754791 containerd[1508]: time="2025-01-30T17:42:44.753987843Z" level=info msg="Container to stop \"8f73b0cd7fd6b475ba7518d7b3397dec673ba2672592de37c5a967907db3dbd9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 17:42:44.754791 containerd[1508]: time="2025-01-30T17:42:44.754011334Z" level=info msg="Container to stop \"e52e24c05a7831006bda1ff26254b96fdb4c98c76111cf13ba1b747f89f38ac8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 17:42:44.754791 containerd[1508]: time="2025-01-30T17:42:44.754029789Z" level=info msg="Container to stop \"d30b0d713a590964cd9afeb507f4e957c56538f6be1c6f995d3d44b716129a6a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 17:42:44.754791 containerd[1508]: time="2025-01-30T17:42:44.754046042Z" level=info 
msg="Container to stop \"03d2b9e4cbb4b03143ed4b49405fef2d616d3c55ce9f7ac87e51724e980c34aa\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 17:42:44.754791 containerd[1508]: time="2025-01-30T17:42:44.754073991Z" level=info msg="Container to stop \"4a6d8223099de2dc9fe1dffe60298f6a0c2e8786363abfad68b3929723e318e7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 17:42:44.758500 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c3526dbb2ead0b8315f4aecfffd88dd378cbe607ecf26be84b50b03be3e1226d-shm.mount: Deactivated successfully. Jan 30 17:42:44.765812 systemd[1]: cri-containerd-c3526dbb2ead0b8315f4aecfffd88dd378cbe607ecf26be84b50b03be3e1226d.scope: Deactivated successfully. Jan 30 17:42:44.794045 containerd[1508]: time="2025-01-30T17:42:44.793488017Z" level=info msg="shim disconnected" id=c3526dbb2ead0b8315f4aecfffd88dd378cbe607ecf26be84b50b03be3e1226d namespace=k8s.io Jan 30 17:42:44.794045 containerd[1508]: time="2025-01-30T17:42:44.793558803Z" level=warning msg="cleaning up after shim disconnected" id=c3526dbb2ead0b8315f4aecfffd88dd378cbe607ecf26be84b50b03be3e1226d namespace=k8s.io Jan 30 17:42:44.794045 containerd[1508]: time="2025-01-30T17:42:44.793574771Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 17:42:44.793564 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c3526dbb2ead0b8315f4aecfffd88dd378cbe607ecf26be84b50b03be3e1226d-rootfs.mount: Deactivated successfully. 
Jan 30 17:42:44.821577 containerd[1508]: time="2025-01-30T17:42:44.821485684Z" level=info msg="TearDown network for sandbox \"c3526dbb2ead0b8315f4aecfffd88dd378cbe607ecf26be84b50b03be3e1226d\" successfully" Jan 30 17:42:44.821577 containerd[1508]: time="2025-01-30T17:42:44.821543464Z" level=info msg="StopPodSandbox for \"c3526dbb2ead0b8315f4aecfffd88dd378cbe607ecf26be84b50b03be3e1226d\" returns successfully" Jan 30 17:42:44.883285 kubelet[1883]: I0130 17:42:44.883059 1883 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/fab9ca69-b3fa-4ae4-8969-feb0ba4a7d45-host-proc-sys-kernel\") pod \"fab9ca69-b3fa-4ae4-8969-feb0ba4a7d45\" (UID: \"fab9ca69-b3fa-4ae4-8969-feb0ba4a7d45\") " Jan 30 17:42:44.883285 kubelet[1883]: I0130 17:42:44.883132 1883 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/fab9ca69-b3fa-4ae4-8969-feb0ba4a7d45-cilium-cgroup\") pod \"fab9ca69-b3fa-4ae4-8969-feb0ba4a7d45\" (UID: \"fab9ca69-b3fa-4ae4-8969-feb0ba4a7d45\") " Jan 30 17:42:44.883285 kubelet[1883]: I0130 17:42:44.883162 1883 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fab9ca69-b3fa-4ae4-8969-feb0ba4a7d45-etc-cni-netd\") pod \"fab9ca69-b3fa-4ae4-8969-feb0ba4a7d45\" (UID: \"fab9ca69-b3fa-4ae4-8969-feb0ba4a7d45\") " Jan 30 17:42:44.883285 kubelet[1883]: I0130 17:42:44.883219 1883 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fab9ca69-b3fa-4ae4-8969-feb0ba4a7d45-lib-modules\") pod \"fab9ca69-b3fa-4ae4-8969-feb0ba4a7d45\" (UID: \"fab9ca69-b3fa-4ae4-8969-feb0ba4a7d45\") " Jan 30 17:42:44.883285 kubelet[1883]: I0130 17:42:44.883266 1883 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/fab9ca69-b3fa-4ae4-8969-feb0ba4a7d45-host-proc-sys-net\") pod \"fab9ca69-b3fa-4ae4-8969-feb0ba4a7d45\" (UID: \"fab9ca69-b3fa-4ae4-8969-feb0ba4a7d45\") " Jan 30 17:42:44.883285 kubelet[1883]: I0130 17:42:44.883302 1883 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/fab9ca69-b3fa-4ae4-8969-feb0ba4a7d45-cni-path\") pod \"fab9ca69-b3fa-4ae4-8969-feb0ba4a7d45\" (UID: \"fab9ca69-b3fa-4ae4-8969-feb0ba4a7d45\") " Jan 30 17:42:44.883794 kubelet[1883]: I0130 17:42:44.883354 1883 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rvd6x\" (UniqueName: \"kubernetes.io/projected/fab9ca69-b3fa-4ae4-8969-feb0ba4a7d45-kube-api-access-rvd6x\") pod \"fab9ca69-b3fa-4ae4-8969-feb0ba4a7d45\" (UID: \"fab9ca69-b3fa-4ae4-8969-feb0ba4a7d45\") " Jan 30 17:42:44.883794 kubelet[1883]: I0130 17:42:44.883381 1883 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/fab9ca69-b3fa-4ae4-8969-feb0ba4a7d45-bpf-maps\") pod \"fab9ca69-b3fa-4ae4-8969-feb0ba4a7d45\" (UID: \"fab9ca69-b3fa-4ae4-8969-feb0ba4a7d45\") " Jan 30 17:42:44.883794 kubelet[1883]: I0130 17:42:44.883417 1883 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fab9ca69-b3fa-4ae4-8969-feb0ba4a7d45-cilium-config-path\") pod \"fab9ca69-b3fa-4ae4-8969-feb0ba4a7d45\" (UID: \"fab9ca69-b3fa-4ae4-8969-feb0ba4a7d45\") " Jan 30 17:42:44.883794 kubelet[1883]: I0130 17:42:44.883441 1883 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/fab9ca69-b3fa-4ae4-8969-feb0ba4a7d45-cilium-run\") pod \"fab9ca69-b3fa-4ae4-8969-feb0ba4a7d45\" (UID: \"fab9ca69-b3fa-4ae4-8969-feb0ba4a7d45\") " Jan 30 17:42:44.883794 kubelet[1883]: I0130 17:42:44.883475 1883 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/fab9ca69-b3fa-4ae4-8969-feb0ba4a7d45-hubble-tls\") pod \"fab9ca69-b3fa-4ae4-8969-feb0ba4a7d45\" (UID: \"fab9ca69-b3fa-4ae4-8969-feb0ba4a7d45\") " Jan 30 17:42:44.883794 kubelet[1883]: I0130 17:42:44.883500 1883 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/fab9ca69-b3fa-4ae4-8969-feb0ba4a7d45-hostproc\") pod \"fab9ca69-b3fa-4ae4-8969-feb0ba4a7d45\" (UID: \"fab9ca69-b3fa-4ae4-8969-feb0ba4a7d45\") " Jan 30 17:42:44.884086 kubelet[1883]: I0130 17:42:44.883547 1883 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/fab9ca69-b3fa-4ae4-8969-feb0ba4a7d45-clustermesh-secrets\") pod \"fab9ca69-b3fa-4ae4-8969-feb0ba4a7d45\" (UID: \"fab9ca69-b3fa-4ae4-8969-feb0ba4a7d45\") " Jan 30 17:42:44.884086 kubelet[1883]: I0130 17:42:44.883573 1883 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fab9ca69-b3fa-4ae4-8969-feb0ba4a7d45-xtables-lock\") pod \"fab9ca69-b3fa-4ae4-8969-feb0ba4a7d45\" (UID: \"fab9ca69-b3fa-4ae4-8969-feb0ba4a7d45\") " Jan 30 17:42:44.884086 kubelet[1883]: I0130 17:42:44.883747 1883 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fab9ca69-b3fa-4ae4-8969-feb0ba4a7d45-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "fab9ca69-b3fa-4ae4-8969-feb0ba4a7d45" (UID: "fab9ca69-b3fa-4ae4-8969-feb0ba4a7d45"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 17:42:44.884086 kubelet[1883]: I0130 17:42:44.883831 1883 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fab9ca69-b3fa-4ae4-8969-feb0ba4a7d45-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "fab9ca69-b3fa-4ae4-8969-feb0ba4a7d45" (UID: "fab9ca69-b3fa-4ae4-8969-feb0ba4a7d45"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 17:42:44.884086 kubelet[1883]: I0130 17:42:44.883865 1883 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fab9ca69-b3fa-4ae4-8969-feb0ba4a7d45-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "fab9ca69-b3fa-4ae4-8969-feb0ba4a7d45" (UID: "fab9ca69-b3fa-4ae4-8969-feb0ba4a7d45"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 17:42:44.884400 kubelet[1883]: I0130 17:42:44.883893 1883 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fab9ca69-b3fa-4ae4-8969-feb0ba4a7d45-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "fab9ca69-b3fa-4ae4-8969-feb0ba4a7d45" (UID: "fab9ca69-b3fa-4ae4-8969-feb0ba4a7d45"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 17:42:44.884400 kubelet[1883]: I0130 17:42:44.883923 1883 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fab9ca69-b3fa-4ae4-8969-feb0ba4a7d45-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "fab9ca69-b3fa-4ae4-8969-feb0ba4a7d45" (UID: "fab9ca69-b3fa-4ae4-8969-feb0ba4a7d45"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 17:42:44.884400 kubelet[1883]: I0130 17:42:44.883950 1883 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fab9ca69-b3fa-4ae4-8969-feb0ba4a7d45-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "fab9ca69-b3fa-4ae4-8969-feb0ba4a7d45" (UID: "fab9ca69-b3fa-4ae4-8969-feb0ba4a7d45"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 17:42:44.884400 kubelet[1883]: I0130 17:42:44.883976 1883 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fab9ca69-b3fa-4ae4-8969-feb0ba4a7d45-cni-path" (OuterVolumeSpecName: "cni-path") pod "fab9ca69-b3fa-4ae4-8969-feb0ba4a7d45" (UID: "fab9ca69-b3fa-4ae4-8969-feb0ba4a7d45"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 17:42:44.886439 kubelet[1883]: I0130 17:42:44.884534 1883 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fab9ca69-b3fa-4ae4-8969-feb0ba4a7d45-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "fab9ca69-b3fa-4ae4-8969-feb0ba4a7d45" (UID: "fab9ca69-b3fa-4ae4-8969-feb0ba4a7d45"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 17:42:44.886439 kubelet[1883]: I0130 17:42:44.884579 1883 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fab9ca69-b3fa-4ae4-8969-feb0ba4a7d45-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "fab9ca69-b3fa-4ae4-8969-feb0ba4a7d45" (UID: "fab9ca69-b3fa-4ae4-8969-feb0ba4a7d45"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 17:42:44.887165 kubelet[1883]: I0130 17:42:44.887121 1883 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fab9ca69-b3fa-4ae4-8969-feb0ba4a7d45-hostproc" (OuterVolumeSpecName: "hostproc") pod "fab9ca69-b3fa-4ae4-8969-feb0ba4a7d45" (UID: "fab9ca69-b3fa-4ae4-8969-feb0ba4a7d45"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 17:42:44.892907 kubelet[1883]: I0130 17:42:44.892860 1883 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fab9ca69-b3fa-4ae4-8969-feb0ba4a7d45-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "fab9ca69-b3fa-4ae4-8969-feb0ba4a7d45" (UID: "fab9ca69-b3fa-4ae4-8969-feb0ba4a7d45"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 17:42:44.893664 kubelet[1883]: I0130 17:42:44.893633 1883 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fab9ca69-b3fa-4ae4-8969-feb0ba4a7d45-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "fab9ca69-b3fa-4ae4-8969-feb0ba4a7d45" (UID: "fab9ca69-b3fa-4ae4-8969-feb0ba4a7d45"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 17:42:44.894558 kubelet[1883]: I0130 17:42:44.894527 1883 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fab9ca69-b3fa-4ae4-8969-feb0ba4a7d45-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "fab9ca69-b3fa-4ae4-8969-feb0ba4a7d45" (UID: "fab9ca69-b3fa-4ae4-8969-feb0ba4a7d45"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 17:42:44.895846 systemd[1]: var-lib-kubelet-pods-fab9ca69\x2db3fa\x2d4ae4\x2d8969\x2dfeb0ba4a7d45-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drvd6x.mount: Deactivated successfully. Jan 30 17:42:44.896003 systemd[1]: var-lib-kubelet-pods-fab9ca69\x2db3fa\x2d4ae4\x2d8969\x2dfeb0ba4a7d45-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 30 17:42:44.897787 kubelet[1883]: I0130 17:42:44.896860 1883 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fab9ca69-b3fa-4ae4-8969-feb0ba4a7d45-kube-api-access-rvd6x" (OuterVolumeSpecName: "kube-api-access-rvd6x") pod "fab9ca69-b3fa-4ae4-8969-feb0ba4a7d45" (UID: "fab9ca69-b3fa-4ae4-8969-feb0ba4a7d45"). InnerVolumeSpecName "kube-api-access-rvd6x". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 17:42:44.984312 kubelet[1883]: I0130 17:42:44.984209 1883 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/fab9ca69-b3fa-4ae4-8969-feb0ba4a7d45-cilium-cgroup\") on node \"10.244.11.222\" DevicePath \"\"" Jan 30 17:42:44.984312 kubelet[1883]: I0130 17:42:44.984283 1883 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/fab9ca69-b3fa-4ae4-8969-feb0ba4a7d45-host-proc-sys-kernel\") on node \"10.244.11.222\" DevicePath \"\"" Jan 30 17:42:44.984312 kubelet[1883]: I0130 17:42:44.984312 1883 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/fab9ca69-b3fa-4ae4-8969-feb0ba4a7d45-cni-path\") on node \"10.244.11.222\" DevicePath \"\"" Jan 30 17:42:44.984312 kubelet[1883]: I0130 17:42:44.984329 1883 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rvd6x\" (UniqueName: \"kubernetes.io/projected/fab9ca69-b3fa-4ae4-8969-feb0ba4a7d45-kube-api-access-rvd6x\") on node 
\"10.244.11.222\" DevicePath \"\"" Jan 30 17:42:44.984678 kubelet[1883]: I0130 17:42:44.984344 1883 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/fab9ca69-b3fa-4ae4-8969-feb0ba4a7d45-bpf-maps\") on node \"10.244.11.222\" DevicePath \"\"" Jan 30 17:42:44.984678 kubelet[1883]: I0130 17:42:44.984358 1883 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fab9ca69-b3fa-4ae4-8969-feb0ba4a7d45-etc-cni-netd\") on node \"10.244.11.222\" DevicePath \"\"" Jan 30 17:42:44.984678 kubelet[1883]: I0130 17:42:44.984372 1883 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fab9ca69-b3fa-4ae4-8969-feb0ba4a7d45-lib-modules\") on node \"10.244.11.222\" DevicePath \"\"" Jan 30 17:42:44.984678 kubelet[1883]: I0130 17:42:44.984386 1883 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/fab9ca69-b3fa-4ae4-8969-feb0ba4a7d45-host-proc-sys-net\") on node \"10.244.11.222\" DevicePath \"\"" Jan 30 17:42:44.984678 kubelet[1883]: I0130 17:42:44.984399 1883 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/fab9ca69-b3fa-4ae4-8969-feb0ba4a7d45-hostproc\") on node \"10.244.11.222\" DevicePath \"\"" Jan 30 17:42:44.984678 kubelet[1883]: I0130 17:42:44.984413 1883 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/fab9ca69-b3fa-4ae4-8969-feb0ba4a7d45-clustermesh-secrets\") on node \"10.244.11.222\" DevicePath \"\"" Jan 30 17:42:44.984678 kubelet[1883]: I0130 17:42:44.984427 1883 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fab9ca69-b3fa-4ae4-8969-feb0ba4a7d45-xtables-lock\") on node \"10.244.11.222\" DevicePath \"\"" Jan 30 17:42:44.984678 kubelet[1883]: I0130 17:42:44.984441 1883 
reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fab9ca69-b3fa-4ae4-8969-feb0ba4a7d45-cilium-config-path\") on node \"10.244.11.222\" DevicePath \"\"" Jan 30 17:42:44.985121 kubelet[1883]: I0130 17:42:44.984455 1883 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/fab9ca69-b3fa-4ae4-8969-feb0ba4a7d45-cilium-run\") on node \"10.244.11.222\" DevicePath \"\"" Jan 30 17:42:44.985121 kubelet[1883]: I0130 17:42:44.984468 1883 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/fab9ca69-b3fa-4ae4-8969-feb0ba4a7d45-hubble-tls\") on node \"10.244.11.222\" DevicePath \"\"" Jan 30 17:42:45.164709 kubelet[1883]: E0130 17:42:45.164466 1883 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 17:42:45.487721 kubelet[1883]: I0130 17:42:45.487158 1883 scope.go:117] "RemoveContainer" containerID="e52e24c05a7831006bda1ff26254b96fdb4c98c76111cf13ba1b747f89f38ac8" Jan 30 17:42:45.490201 containerd[1508]: time="2025-01-30T17:42:45.489138543Z" level=info msg="RemoveContainer for \"e52e24c05a7831006bda1ff26254b96fdb4c98c76111cf13ba1b747f89f38ac8\"" Jan 30 17:42:45.499948 systemd[1]: Removed slice kubepods-burstable-podfab9ca69_b3fa_4ae4_8969_feb0ba4a7d45.slice - libcontainer container kubepods-burstable-podfab9ca69_b3fa_4ae4_8969_feb0ba4a7d45.slice. Jan 30 17:42:45.500309 systemd[1]: kubepods-burstable-podfab9ca69_b3fa_4ae4_8969_feb0ba4a7d45.slice: Consumed 10.268s CPU time. 
Jan 30 17:42:45.510815 containerd[1508]: time="2025-01-30T17:42:45.510769799Z" level=info msg="RemoveContainer for \"e52e24c05a7831006bda1ff26254b96fdb4c98c76111cf13ba1b747f89f38ac8\" returns successfully" Jan 30 17:42:45.511452 kubelet[1883]: I0130 17:42:45.511413 1883 scope.go:117] "RemoveContainer" containerID="4a6d8223099de2dc9fe1dffe60298f6a0c2e8786363abfad68b3929723e318e7" Jan 30 17:42:45.513223 containerd[1508]: time="2025-01-30T17:42:45.513067291Z" level=info msg="RemoveContainer for \"4a6d8223099de2dc9fe1dffe60298f6a0c2e8786363abfad68b3929723e318e7\"" Jan 30 17:42:45.515877 containerd[1508]: time="2025-01-30T17:42:45.515823267Z" level=info msg="RemoveContainer for \"4a6d8223099de2dc9fe1dffe60298f6a0c2e8786363abfad68b3929723e318e7\" returns successfully" Jan 30 17:42:45.516316 kubelet[1883]: I0130 17:42:45.516023 1883 scope.go:117] "RemoveContainer" containerID="03d2b9e4cbb4b03143ed4b49405fef2d616d3c55ce9f7ac87e51724e980c34aa" Jan 30 17:42:45.518165 containerd[1508]: time="2025-01-30T17:42:45.518085506Z" level=info msg="RemoveContainer for \"03d2b9e4cbb4b03143ed4b49405fef2d616d3c55ce9f7ac87e51724e980c34aa\"" Jan 30 17:42:45.521368 containerd[1508]: time="2025-01-30T17:42:45.521315838Z" level=info msg="RemoveContainer for \"03d2b9e4cbb4b03143ed4b49405fef2d616d3c55ce9f7ac87e51724e980c34aa\" returns successfully" Jan 30 17:42:45.521602 kubelet[1883]: I0130 17:42:45.521552 1883 scope.go:117] "RemoveContainer" containerID="8f73b0cd7fd6b475ba7518d7b3397dec673ba2672592de37c5a967907db3dbd9" Jan 30 17:42:45.523491 containerd[1508]: time="2025-01-30T17:42:45.523441793Z" level=info msg="RemoveContainer for \"8f73b0cd7fd6b475ba7518d7b3397dec673ba2672592de37c5a967907db3dbd9\"" Jan 30 17:42:45.526072 containerd[1508]: time="2025-01-30T17:42:45.526000020Z" level=info msg="RemoveContainer for \"8f73b0cd7fd6b475ba7518d7b3397dec673ba2672592de37c5a967907db3dbd9\" returns successfully" Jan 30 17:42:45.526218 kubelet[1883]: I0130 17:42:45.526163 1883 scope.go:117] 
"RemoveContainer" containerID="d30b0d713a590964cd9afeb507f4e957c56538f6be1c6f995d3d44b716129a6a" Jan 30 17:42:45.527960 containerd[1508]: time="2025-01-30T17:42:45.527880683Z" level=info msg="RemoveContainer for \"d30b0d713a590964cd9afeb507f4e957c56538f6be1c6f995d3d44b716129a6a\"" Jan 30 17:42:45.530392 containerd[1508]: time="2025-01-30T17:42:45.530338102Z" level=info msg="RemoveContainer for \"d30b0d713a590964cd9afeb507f4e957c56538f6be1c6f995d3d44b716129a6a\" returns successfully" Jan 30 17:42:45.530607 kubelet[1883]: I0130 17:42:45.530568 1883 scope.go:117] "RemoveContainer" containerID="e52e24c05a7831006bda1ff26254b96fdb4c98c76111cf13ba1b747f89f38ac8" Jan 30 17:42:45.535229 containerd[1508]: time="2025-01-30T17:42:45.535146002Z" level=error msg="ContainerStatus for \"e52e24c05a7831006bda1ff26254b96fdb4c98c76111cf13ba1b747f89f38ac8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e52e24c05a7831006bda1ff26254b96fdb4c98c76111cf13ba1b747f89f38ac8\": not found" Jan 30 17:42:45.549854 kubelet[1883]: E0130 17:42:45.549794 1883 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e52e24c05a7831006bda1ff26254b96fdb4c98c76111cf13ba1b747f89f38ac8\": not found" containerID="e52e24c05a7831006bda1ff26254b96fdb4c98c76111cf13ba1b747f89f38ac8" Jan 30 17:42:45.549984 kubelet[1883]: I0130 17:42:45.549878 1883 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e52e24c05a7831006bda1ff26254b96fdb4c98c76111cf13ba1b747f89f38ac8"} err="failed to get container status \"e52e24c05a7831006bda1ff26254b96fdb4c98c76111cf13ba1b747f89f38ac8\": rpc error: code = NotFound desc = an error occurred when try to find container \"e52e24c05a7831006bda1ff26254b96fdb4c98c76111cf13ba1b747f89f38ac8\": not found" Jan 30 17:42:45.549984 kubelet[1883]: I0130 17:42:45.549963 1883 scope.go:117] "RemoveContainer" 
containerID="4a6d8223099de2dc9fe1dffe60298f6a0c2e8786363abfad68b3929723e318e7" Jan 30 17:42:45.550768 containerd[1508]: time="2025-01-30T17:42:45.550606563Z" level=error msg="ContainerStatus for \"4a6d8223099de2dc9fe1dffe60298f6a0c2e8786363abfad68b3929723e318e7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4a6d8223099de2dc9fe1dffe60298f6a0c2e8786363abfad68b3929723e318e7\": not found" Jan 30 17:42:45.550905 kubelet[1883]: E0130 17:42:45.550853 1883 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4a6d8223099de2dc9fe1dffe60298f6a0c2e8786363abfad68b3929723e318e7\": not found" containerID="4a6d8223099de2dc9fe1dffe60298f6a0c2e8786363abfad68b3929723e318e7" Jan 30 17:42:45.550905 kubelet[1883]: I0130 17:42:45.550896 1883 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4a6d8223099de2dc9fe1dffe60298f6a0c2e8786363abfad68b3929723e318e7"} err="failed to get container status \"4a6d8223099de2dc9fe1dffe60298f6a0c2e8786363abfad68b3929723e318e7\": rpc error: code = NotFound desc = an error occurred when try to find container \"4a6d8223099de2dc9fe1dffe60298f6a0c2e8786363abfad68b3929723e318e7\": not found" Jan 30 17:42:45.551009 kubelet[1883]: I0130 17:42:45.550926 1883 scope.go:117] "RemoveContainer" containerID="03d2b9e4cbb4b03143ed4b49405fef2d616d3c55ce9f7ac87e51724e980c34aa" Jan 30 17:42:45.551338 containerd[1508]: time="2025-01-30T17:42:45.551237552Z" level=error msg="ContainerStatus for \"03d2b9e4cbb4b03143ed4b49405fef2d616d3c55ce9f7ac87e51724e980c34aa\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"03d2b9e4cbb4b03143ed4b49405fef2d616d3c55ce9f7ac87e51724e980c34aa\": not found" Jan 30 17:42:45.551791 kubelet[1883]: E0130 17:42:45.551573 1883 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc 
= an error occurred when try to find container \"03d2b9e4cbb4b03143ed4b49405fef2d616d3c55ce9f7ac87e51724e980c34aa\": not found" containerID="03d2b9e4cbb4b03143ed4b49405fef2d616d3c55ce9f7ac87e51724e980c34aa" Jan 30 17:42:45.551791 kubelet[1883]: I0130 17:42:45.551627 1883 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"03d2b9e4cbb4b03143ed4b49405fef2d616d3c55ce9f7ac87e51724e980c34aa"} err="failed to get container status \"03d2b9e4cbb4b03143ed4b49405fef2d616d3c55ce9f7ac87e51724e980c34aa\": rpc error: code = NotFound desc = an error occurred when try to find container \"03d2b9e4cbb4b03143ed4b49405fef2d616d3c55ce9f7ac87e51724e980c34aa\": not found" Jan 30 17:42:45.551791 kubelet[1883]: I0130 17:42:45.551662 1883 scope.go:117] "RemoveContainer" containerID="8f73b0cd7fd6b475ba7518d7b3397dec673ba2672592de37c5a967907db3dbd9" Jan 30 17:42:45.552007 containerd[1508]: time="2025-01-30T17:42:45.551966824Z" level=error msg="ContainerStatus for \"8f73b0cd7fd6b475ba7518d7b3397dec673ba2672592de37c5a967907db3dbd9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8f73b0cd7fd6b475ba7518d7b3397dec673ba2672592de37c5a967907db3dbd9\": not found" Jan 30 17:42:45.552404 kubelet[1883]: E0130 17:42:45.552164 1883 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8f73b0cd7fd6b475ba7518d7b3397dec673ba2672592de37c5a967907db3dbd9\": not found" containerID="8f73b0cd7fd6b475ba7518d7b3397dec673ba2672592de37c5a967907db3dbd9" Jan 30 17:42:45.552404 kubelet[1883]: I0130 17:42:45.552282 1883 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8f73b0cd7fd6b475ba7518d7b3397dec673ba2672592de37c5a967907db3dbd9"} err="failed to get container status \"8f73b0cd7fd6b475ba7518d7b3397dec673ba2672592de37c5a967907db3dbd9\": rpc error: code = NotFound desc = an error occurred 
when try to find container \"8f73b0cd7fd6b475ba7518d7b3397dec673ba2672592de37c5a967907db3dbd9\": not found" Jan 30 17:42:45.552404 kubelet[1883]: I0130 17:42:45.552308 1883 scope.go:117] "RemoveContainer" containerID="d30b0d713a590964cd9afeb507f4e957c56538f6be1c6f995d3d44b716129a6a" Jan 30 17:42:45.552614 containerd[1508]: time="2025-01-30T17:42:45.552511444Z" level=error msg="ContainerStatus for \"d30b0d713a590964cd9afeb507f4e957c56538f6be1c6f995d3d44b716129a6a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d30b0d713a590964cd9afeb507f4e957c56538f6be1c6f995d3d44b716129a6a\": not found" Jan 30 17:42:45.552862 kubelet[1883]: E0130 17:42:45.552763 1883 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d30b0d713a590964cd9afeb507f4e957c56538f6be1c6f995d3d44b716129a6a\": not found" containerID="d30b0d713a590964cd9afeb507f4e957c56538f6be1c6f995d3d44b716129a6a" Jan 30 17:42:45.552862 kubelet[1883]: I0130 17:42:45.552798 1883 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d30b0d713a590964cd9afeb507f4e957c56538f6be1c6f995d3d44b716129a6a"} err="failed to get container status \"d30b0d713a590964cd9afeb507f4e957c56538f6be1c6f995d3d44b716129a6a\": rpc error: code = NotFound desc = an error occurred when try to find container \"d30b0d713a590964cd9afeb507f4e957c56538f6be1c6f995d3d44b716129a6a\": not found" Jan 30 17:42:45.576156 systemd[1]: var-lib-kubelet-pods-fab9ca69\x2db3fa\x2d4ae4\x2d8969\x2dfeb0ba4a7d45-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Jan 30 17:42:46.165576 kubelet[1883]: E0130 17:42:46.165487 1883 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 17:42:46.258251 kubelet[1883]: I0130 17:42:46.257828 1883 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fab9ca69-b3fa-4ae4-8969-feb0ba4a7d45" path="/var/lib/kubelet/pods/fab9ca69-b3fa-4ae4-8969-feb0ba4a7d45/volumes" Jan 30 17:42:47.165846 kubelet[1883]: E0130 17:42:47.165773 1883 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 17:42:48.166520 kubelet[1883]: E0130 17:42:48.166448 1883 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 17:42:48.264942 kubelet[1883]: E0130 17:42:48.264688 1883 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 30 17:42:49.166746 kubelet[1883]: E0130 17:42:49.166671 1883 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 17:42:49.298540 kubelet[1883]: I0130 17:42:49.298480 1883 memory_manager.go:355] "RemoveStaleState removing state" podUID="fab9ca69-b3fa-4ae4-8969-feb0ba4a7d45" containerName="cilium-agent" Jan 30 17:42:49.307487 systemd[1]: Created slice kubepods-besteffort-pod800f07d6_7d05_46ba_ad82_5d1676d7b85b.slice - libcontainer container kubepods-besteffort-pod800f07d6_7d05_46ba_ad82_5d1676d7b85b.slice. 
Jan 30 17:42:49.316101 kubelet[1883]: I0130 17:42:49.316019 1883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qp87j\" (UniqueName: \"kubernetes.io/projected/800f07d6-7d05-46ba-ad82-5d1676d7b85b-kube-api-access-qp87j\") pod \"cilium-operator-6c4d7847fc-gndj2\" (UID: \"800f07d6-7d05-46ba-ad82-5d1676d7b85b\") " pod="kube-system/cilium-operator-6c4d7847fc-gndj2" Jan 30 17:42:49.316101 kubelet[1883]: I0130 17:42:49.316087 1883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/800f07d6-7d05-46ba-ad82-5d1676d7b85b-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-gndj2\" (UID: \"800f07d6-7d05-46ba-ad82-5d1676d7b85b\") " pod="kube-system/cilium-operator-6c4d7847fc-gndj2" Jan 30 17:42:49.365732 systemd[1]: Created slice kubepods-burstable-podae24bfe8_f921_42bc_b375_ab6f3063669c.slice - libcontainer container kubepods-burstable-podae24bfe8_f921_42bc_b375_ab6f3063669c.slice. 
Jan 30 17:42:49.417410 kubelet[1883]: I0130 17:42:49.416724 1883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ae24bfe8-f921-42bc-b375-ab6f3063669c-lib-modules\") pod \"cilium-k7fqj\" (UID: \"ae24bfe8-f921-42bc-b375-ab6f3063669c\") " pod="kube-system/cilium-k7fqj" Jan 30 17:42:49.417410 kubelet[1883]: I0130 17:42:49.416833 1883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ae24bfe8-f921-42bc-b375-ab6f3063669c-clustermesh-secrets\") pod \"cilium-k7fqj\" (UID: \"ae24bfe8-f921-42bc-b375-ab6f3063669c\") " pod="kube-system/cilium-k7fqj" Jan 30 17:42:49.417410 kubelet[1883]: I0130 17:42:49.416878 1883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ae24bfe8-f921-42bc-b375-ab6f3063669c-host-proc-sys-net\") pod \"cilium-k7fqj\" (UID: \"ae24bfe8-f921-42bc-b375-ab6f3063669c\") " pod="kube-system/cilium-k7fqj" Jan 30 17:42:49.417410 kubelet[1883]: I0130 17:42:49.416916 1883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ae24bfe8-f921-42bc-b375-ab6f3063669c-cilium-run\") pod \"cilium-k7fqj\" (UID: \"ae24bfe8-f921-42bc-b375-ab6f3063669c\") " pod="kube-system/cilium-k7fqj" Jan 30 17:42:49.417410 kubelet[1883]: I0130 17:42:49.416945 1883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ae24bfe8-f921-42bc-b375-ab6f3063669c-bpf-maps\") pod \"cilium-k7fqj\" (UID: \"ae24bfe8-f921-42bc-b375-ab6f3063669c\") " pod="kube-system/cilium-k7fqj" Jan 30 17:42:49.417410 kubelet[1883]: I0130 17:42:49.417036 1883 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ae24bfe8-f921-42bc-b375-ab6f3063669c-etc-cni-netd\") pod \"cilium-k7fqj\" (UID: \"ae24bfe8-f921-42bc-b375-ab6f3063669c\") " pod="kube-system/cilium-k7fqj" Jan 30 17:42:49.417830 kubelet[1883]: I0130 17:42:49.417064 1883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/ae24bfe8-f921-42bc-b375-ab6f3063669c-cilium-ipsec-secrets\") pod \"cilium-k7fqj\" (UID: \"ae24bfe8-f921-42bc-b375-ab6f3063669c\") " pod="kube-system/cilium-k7fqj" Jan 30 17:42:49.417830 kubelet[1883]: I0130 17:42:49.417100 1883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-244f2\" (UniqueName: \"kubernetes.io/projected/ae24bfe8-f921-42bc-b375-ab6f3063669c-kube-api-access-244f2\") pod \"cilium-k7fqj\" (UID: \"ae24bfe8-f921-42bc-b375-ab6f3063669c\") " pod="kube-system/cilium-k7fqj" Jan 30 17:42:49.417830 kubelet[1883]: I0130 17:42:49.417142 1883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ae24bfe8-f921-42bc-b375-ab6f3063669c-hostproc\") pod \"cilium-k7fqj\" (UID: \"ae24bfe8-f921-42bc-b375-ab6f3063669c\") " pod="kube-system/cilium-k7fqj" Jan 30 17:42:49.417830 kubelet[1883]: I0130 17:42:49.417166 1883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ae24bfe8-f921-42bc-b375-ab6f3063669c-cilium-cgroup\") pod \"cilium-k7fqj\" (UID: \"ae24bfe8-f921-42bc-b375-ab6f3063669c\") " pod="kube-system/cilium-k7fqj" Jan 30 17:42:49.417830 kubelet[1883]: I0130 17:42:49.417226 1883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/ae24bfe8-f921-42bc-b375-ab6f3063669c-hubble-tls\") pod \"cilium-k7fqj\" (UID: \"ae24bfe8-f921-42bc-b375-ab6f3063669c\") " pod="kube-system/cilium-k7fqj" Jan 30 17:42:49.417830 kubelet[1883]: I0130 17:42:49.417279 1883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ae24bfe8-f921-42bc-b375-ab6f3063669c-host-proc-sys-kernel\") pod \"cilium-k7fqj\" (UID: \"ae24bfe8-f921-42bc-b375-ab6f3063669c\") " pod="kube-system/cilium-k7fqj" Jan 30 17:42:49.418154 kubelet[1883]: I0130 17:42:49.417307 1883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ae24bfe8-f921-42bc-b375-ab6f3063669c-cni-path\") pod \"cilium-k7fqj\" (UID: \"ae24bfe8-f921-42bc-b375-ab6f3063669c\") " pod="kube-system/cilium-k7fqj" Jan 30 17:42:49.418154 kubelet[1883]: I0130 17:42:49.417345 1883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ae24bfe8-f921-42bc-b375-ab6f3063669c-xtables-lock\") pod \"cilium-k7fqj\" (UID: \"ae24bfe8-f921-42bc-b375-ab6f3063669c\") " pod="kube-system/cilium-k7fqj" Jan 30 17:42:49.418154 kubelet[1883]: I0130 17:42:49.417372 1883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ae24bfe8-f921-42bc-b375-ab6f3063669c-cilium-config-path\") pod \"cilium-k7fqj\" (UID: \"ae24bfe8-f921-42bc-b375-ab6f3063669c\") " pod="kube-system/cilium-k7fqj" Jan 30 17:42:49.612573 containerd[1508]: time="2025-01-30T17:42:49.612506079Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-gndj2,Uid:800f07d6-7d05-46ba-ad82-5d1676d7b85b,Namespace:kube-system,Attempt:0,}" Jan 30 17:42:49.641323 containerd[1508]: 
time="2025-01-30T17:42:49.640371615Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 17:42:49.641323 containerd[1508]: time="2025-01-30T17:42:49.640484277Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 17:42:49.641323 containerd[1508]: time="2025-01-30T17:42:49.640508117Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 17:42:49.641323 containerd[1508]: time="2025-01-30T17:42:49.640634512Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 17:42:49.669516 systemd[1]: Started cri-containerd-e7ab2a21c84e9d2ec90f7a0a8ae268c0a0a5679daca02af1a84d679eda9fcc5e.scope - libcontainer container e7ab2a21c84e9d2ec90f7a0a8ae268c0a0a5679daca02af1a84d679eda9fcc5e. Jan 30 17:42:49.680227 containerd[1508]: time="2025-01-30T17:42:49.680153538Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-k7fqj,Uid:ae24bfe8-f921-42bc-b375-ab6f3063669c,Namespace:kube-system,Attempt:0,}" Jan 30 17:42:49.715921 containerd[1508]: time="2025-01-30T17:42:49.715743993Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 17:42:49.716099 containerd[1508]: time="2025-01-30T17:42:49.715988351Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 17:42:49.717205 containerd[1508]: time="2025-01-30T17:42:49.716128843Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 17:42:49.717205 containerd[1508]: time="2025-01-30T17:42:49.716450384Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 17:42:49.739236 containerd[1508]: time="2025-01-30T17:42:49.739095444Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-gndj2,Uid:800f07d6-7d05-46ba-ad82-5d1676d7b85b,Namespace:kube-system,Attempt:0,} returns sandbox id \"e7ab2a21c84e9d2ec90f7a0a8ae268c0a0a5679daca02af1a84d679eda9fcc5e\"" Jan 30 17:42:49.747352 containerd[1508]: time="2025-01-30T17:42:49.747299303Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 30 17:42:49.754561 systemd[1]: Started cri-containerd-d3bb0101c0a0c3a8cd6858645f944b8aec27a508f26c80122364f878a9b166c0.scope - libcontainer container d3bb0101c0a0c3a8cd6858645f944b8aec27a508f26c80122364f878a9b166c0. 
Jan 30 17:42:49.791866 containerd[1508]: time="2025-01-30T17:42:49.791815384Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-k7fqj,Uid:ae24bfe8-f921-42bc-b375-ab6f3063669c,Namespace:kube-system,Attempt:0,} returns sandbox id \"d3bb0101c0a0c3a8cd6858645f944b8aec27a508f26c80122364f878a9b166c0\"" Jan 30 17:42:49.797517 containerd[1508]: time="2025-01-30T17:42:49.797252661Z" level=info msg="CreateContainer within sandbox \"d3bb0101c0a0c3a8cd6858645f944b8aec27a508f26c80122364f878a9b166c0\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 30 17:42:49.819369 kubelet[1883]: I0130 17:42:49.817098 1883 setters.go:602] "Node became not ready" node="10.244.11.222" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-01-30T17:42:49Z","lastTransitionTime":"2025-01-30T17:42:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jan 30 17:42:49.832618 containerd[1508]: time="2025-01-30T17:42:49.832545015Z" level=info msg="CreateContainer within sandbox \"d3bb0101c0a0c3a8cd6858645f944b8aec27a508f26c80122364f878a9b166c0\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d8bc0e9f8766e6b9c0cb687415497663aa075f0c3a858bb25f731c4588c3f7eb\"" Jan 30 17:42:49.833957 containerd[1508]: time="2025-01-30T17:42:49.833827784Z" level=info msg="StartContainer for \"d8bc0e9f8766e6b9c0cb687415497663aa075f0c3a858bb25f731c4588c3f7eb\"" Jan 30 17:42:49.871530 systemd[1]: Started cri-containerd-d8bc0e9f8766e6b9c0cb687415497663aa075f0c3a858bb25f731c4588c3f7eb.scope - libcontainer container d8bc0e9f8766e6b9c0cb687415497663aa075f0c3a858bb25f731c4588c3f7eb. 
Jan 30 17:42:49.906100 containerd[1508]: time="2025-01-30T17:42:49.905897185Z" level=info msg="StartContainer for \"d8bc0e9f8766e6b9c0cb687415497663aa075f0c3a858bb25f731c4588c3f7eb\" returns successfully"
Jan 30 17:42:49.925105 systemd[1]: cri-containerd-d8bc0e9f8766e6b9c0cb687415497663aa075f0c3a858bb25f731c4588c3f7eb.scope: Deactivated successfully.
Jan 30 17:42:49.969207 containerd[1508]: time="2025-01-30T17:42:49.969054945Z" level=info msg="shim disconnected" id=d8bc0e9f8766e6b9c0cb687415497663aa075f0c3a858bb25f731c4588c3f7eb namespace=k8s.io
Jan 30 17:42:49.969207 containerd[1508]: time="2025-01-30T17:42:49.969135368Z" level=warning msg="cleaning up after shim disconnected" id=d8bc0e9f8766e6b9c0cb687415497663aa075f0c3a858bb25f731c4588c3f7eb namespace=k8s.io
Jan 30 17:42:49.969207 containerd[1508]: time="2025-01-30T17:42:49.969165115Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 17:42:50.167267 kubelet[1883]: E0130 17:42:50.167099 1883 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 17:42:50.505448 containerd[1508]: time="2025-01-30T17:42:50.505360703Z" level=info msg="CreateContainer within sandbox \"d3bb0101c0a0c3a8cd6858645f944b8aec27a508f26c80122364f878a9b166c0\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 30 17:42:50.519685 containerd[1508]: time="2025-01-30T17:42:50.519543518Z" level=info msg="CreateContainer within sandbox \"d3bb0101c0a0c3a8cd6858645f944b8aec27a508f26c80122364f878a9b166c0\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"10a4c27fc965b9df6e491ff5ba55b9fbf287cdd8550bc9314b8dcf96016bbda0\""
Jan 30 17:42:50.520713 containerd[1508]: time="2025-01-30T17:42:50.520672226Z" level=info msg="StartContainer for \"10a4c27fc965b9df6e491ff5ba55b9fbf287cdd8550bc9314b8dcf96016bbda0\""
Jan 30 17:42:50.567469 systemd[1]: Started cri-containerd-10a4c27fc965b9df6e491ff5ba55b9fbf287cdd8550bc9314b8dcf96016bbda0.scope - libcontainer container 10a4c27fc965b9df6e491ff5ba55b9fbf287cdd8550bc9314b8dcf96016bbda0.
Jan 30 17:42:50.605239 containerd[1508]: time="2025-01-30T17:42:50.604494012Z" level=info msg="StartContainer for \"10a4c27fc965b9df6e491ff5ba55b9fbf287cdd8550bc9314b8dcf96016bbda0\" returns successfully"
Jan 30 17:42:50.618982 systemd[1]: cri-containerd-10a4c27fc965b9df6e491ff5ba55b9fbf287cdd8550bc9314b8dcf96016bbda0.scope: Deactivated successfully.
Jan 30 17:42:50.649634 containerd[1508]: time="2025-01-30T17:42:50.649518743Z" level=info msg="shim disconnected" id=10a4c27fc965b9df6e491ff5ba55b9fbf287cdd8550bc9314b8dcf96016bbda0 namespace=k8s.io
Jan 30 17:42:50.650201 containerd[1508]: time="2025-01-30T17:42:50.649644026Z" level=warning msg="cleaning up after shim disconnected" id=10a4c27fc965b9df6e491ff5ba55b9fbf287cdd8550bc9314b8dcf96016bbda0 namespace=k8s.io
Jan 30 17:42:50.650201 containerd[1508]: time="2025-01-30T17:42:50.649663636Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 17:42:51.167991 kubelet[1883]: E0130 17:42:51.167914 1883 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 17:42:51.433213 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-10a4c27fc965b9df6e491ff5ba55b9fbf287cdd8550bc9314b8dcf96016bbda0-rootfs.mount: Deactivated successfully.
Jan 30 17:42:51.520739 containerd[1508]: time="2025-01-30T17:42:51.520647572Z" level=info msg="CreateContainer within sandbox \"d3bb0101c0a0c3a8cd6858645f944b8aec27a508f26c80122364f878a9b166c0\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 30 17:42:51.543713 containerd[1508]: time="2025-01-30T17:42:51.543542221Z" level=info msg="CreateContainer within sandbox \"d3bb0101c0a0c3a8cd6858645f944b8aec27a508f26c80122364f878a9b166c0\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"decfb33cc292845f66e8ed83ddd66a74507ed54bac07034190dfdd6ff2ee2098\""
Jan 30 17:42:51.546075 containerd[1508]: time="2025-01-30T17:42:51.544351386Z" level=info msg="StartContainer for \"decfb33cc292845f66e8ed83ddd66a74507ed54bac07034190dfdd6ff2ee2098\""
Jan 30 17:42:51.595524 systemd[1]: Started cri-containerd-decfb33cc292845f66e8ed83ddd66a74507ed54bac07034190dfdd6ff2ee2098.scope - libcontainer container decfb33cc292845f66e8ed83ddd66a74507ed54bac07034190dfdd6ff2ee2098.
Jan 30 17:42:51.635083 containerd[1508]: time="2025-01-30T17:42:51.635035738Z" level=info msg="StartContainer for \"decfb33cc292845f66e8ed83ddd66a74507ed54bac07034190dfdd6ff2ee2098\" returns successfully"
Jan 30 17:42:51.642797 systemd[1]: cri-containerd-decfb33cc292845f66e8ed83ddd66a74507ed54bac07034190dfdd6ff2ee2098.scope: Deactivated successfully.
Jan 30 17:42:51.675622 containerd[1508]: time="2025-01-30T17:42:51.675447075Z" level=info msg="shim disconnected" id=decfb33cc292845f66e8ed83ddd66a74507ed54bac07034190dfdd6ff2ee2098 namespace=k8s.io
Jan 30 17:42:51.676507 containerd[1508]: time="2025-01-30T17:42:51.676258169Z" level=warning msg="cleaning up after shim disconnected" id=decfb33cc292845f66e8ed83ddd66a74507ed54bac07034190dfdd6ff2ee2098 namespace=k8s.io
Jan 30 17:42:51.676507 containerd[1508]: time="2025-01-30T17:42:51.676305025Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 17:42:52.168528 kubelet[1883]: E0130 17:42:52.168412 1883 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 17:42:52.435555 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-decfb33cc292845f66e8ed83ddd66a74507ed54bac07034190dfdd6ff2ee2098-rootfs.mount: Deactivated successfully.
Jan 30 17:42:52.531327 containerd[1508]: time="2025-01-30T17:42:52.530242147Z" level=info msg="CreateContainer within sandbox \"d3bb0101c0a0c3a8cd6858645f944b8aec27a508f26c80122364f878a9b166c0\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 30 17:42:52.561583 containerd[1508]: time="2025-01-30T17:42:52.561423824Z" level=info msg="CreateContainer within sandbox \"d3bb0101c0a0c3a8cd6858645f944b8aec27a508f26c80122364f878a9b166c0\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"04058d27e0413b683ca607419747f78c5ea99439f408f7e23e9bc38aa077807e\""
Jan 30 17:42:52.563208 containerd[1508]: time="2025-01-30T17:42:52.562025041Z" level=info msg="StartContainer for \"04058d27e0413b683ca607419747f78c5ea99439f408f7e23e9bc38aa077807e\""
Jan 30 17:42:52.625865 systemd[1]: Started cri-containerd-04058d27e0413b683ca607419747f78c5ea99439f408f7e23e9bc38aa077807e.scope - libcontainer container 04058d27e0413b683ca607419747f78c5ea99439f408f7e23e9bc38aa077807e.
Jan 30 17:42:52.683951 systemd[1]: cri-containerd-04058d27e0413b683ca607419747f78c5ea99439f408f7e23e9bc38aa077807e.scope: Deactivated successfully.
Jan 30 17:42:52.690209 containerd[1508]: time="2025-01-30T17:42:52.689614293Z" level=info msg="StartContainer for \"04058d27e0413b683ca607419747f78c5ea99439f408f7e23e9bc38aa077807e\" returns successfully"
Jan 30 17:42:52.726382 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-04058d27e0413b683ca607419747f78c5ea99439f408f7e23e9bc38aa077807e-rootfs.mount: Deactivated successfully.
Jan 30 17:42:52.852765 containerd[1508]: time="2025-01-30T17:42:52.852454578Z" level=info msg="shim disconnected" id=04058d27e0413b683ca607419747f78c5ea99439f408f7e23e9bc38aa077807e namespace=k8s.io
Jan 30 17:42:52.852765 containerd[1508]: time="2025-01-30T17:42:52.852548533Z" level=warning msg="cleaning up after shim disconnected" id=04058d27e0413b683ca607419747f78c5ea99439f408f7e23e9bc38aa077807e namespace=k8s.io
Jan 30 17:42:52.852765 containerd[1508]: time="2025-01-30T17:42:52.852564758Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 17:42:52.911418 containerd[1508]: time="2025-01-30T17:42:52.911359607Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 17:42:52.913405 containerd[1508]: time="2025-01-30T17:42:52.913312319Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
Jan 30 17:42:52.916210 containerd[1508]: time="2025-01-30T17:42:52.914477523Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 17:42:52.916386 containerd[1508]: time="2025-01-30T17:42:52.916352362Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 3.168919516s"
Jan 30 17:42:52.916572 containerd[1508]: time="2025-01-30T17:42:52.916541515Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Jan 30 17:42:52.920370 containerd[1508]: time="2025-01-30T17:42:52.920328960Z" level=info msg="CreateContainer within sandbox \"e7ab2a21c84e9d2ec90f7a0a8ae268c0a0a5679daca02af1a84d679eda9fcc5e\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Jan 30 17:42:52.942552 containerd[1508]: time="2025-01-30T17:42:52.942407486Z" level=info msg="CreateContainer within sandbox \"e7ab2a21c84e9d2ec90f7a0a8ae268c0a0a5679daca02af1a84d679eda9fcc5e\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"2cfa1cf9a1608234384f14f8a7887dae79165b753cb4404ea596032e598902aa\""
Jan 30 17:42:52.943983 containerd[1508]: time="2025-01-30T17:42:52.943950964Z" level=info msg="StartContainer for \"2cfa1cf9a1608234384f14f8a7887dae79165b753cb4404ea596032e598902aa\""
Jan 30 17:42:52.985451 systemd[1]: Started cri-containerd-2cfa1cf9a1608234384f14f8a7887dae79165b753cb4404ea596032e598902aa.scope - libcontainer container 2cfa1cf9a1608234384f14f8a7887dae79165b753cb4404ea596032e598902aa.
Jan 30 17:42:53.018567 containerd[1508]: time="2025-01-30T17:42:53.018483190Z" level=info msg="StartContainer for \"2cfa1cf9a1608234384f14f8a7887dae79165b753cb4404ea596032e598902aa\" returns successfully"
Jan 30 17:42:53.169567 kubelet[1883]: E0130 17:42:53.169510 1883 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 17:42:53.267051 kubelet[1883]: E0130 17:42:53.266777 1883 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 30 17:42:53.537420 containerd[1508]: time="2025-01-30T17:42:53.536729894Z" level=info msg="CreateContainer within sandbox \"d3bb0101c0a0c3a8cd6858645f944b8aec27a508f26c80122364f878a9b166c0\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 30 17:42:53.562412 containerd[1508]: time="2025-01-30T17:42:53.561497629Z" level=info msg="CreateContainer within sandbox \"d3bb0101c0a0c3a8cd6858645f944b8aec27a508f26c80122364f878a9b166c0\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"2bf1bd999ecca04ebcd5cf8bd6bd9ebcbdeec69714e3efb981c9a6e741387b66\""
Jan 30 17:42:53.564209 containerd[1508]: time="2025-01-30T17:42:53.563495392Z" level=info msg="StartContainer for \"2bf1bd999ecca04ebcd5cf8bd6bd9ebcbdeec69714e3efb981c9a6e741387b66\""
Jan 30 17:42:53.652491 systemd[1]: Started cri-containerd-2bf1bd999ecca04ebcd5cf8bd6bd9ebcbdeec69714e3efb981c9a6e741387b66.scope - libcontainer container 2bf1bd999ecca04ebcd5cf8bd6bd9ebcbdeec69714e3efb981c9a6e741387b66.
Jan 30 17:42:53.704812 containerd[1508]: time="2025-01-30T17:42:53.704733720Z" level=info msg="StartContainer for \"2bf1bd999ecca04ebcd5cf8bd6bd9ebcbdeec69714e3efb981c9a6e741387b66\" returns successfully"
Jan 30 17:42:54.171296 kubelet[1883]: E0130 17:42:54.171138 1883 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 17:42:54.410338 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Jan 30 17:42:54.571441 kubelet[1883]: I0130 17:42:54.571114 1883 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-k7fqj" podStartSLOduration=5.571058141 podStartE2EDuration="5.571058141s" podCreationTimestamp="2025-01-30 17:42:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 17:42:54.570594693 +0000 UTC m=+77.242230392" watchObservedRunningTime="2025-01-30 17:42:54.571058141 +0000 UTC m=+77.242693823"
Jan 30 17:42:54.571798 kubelet[1883]: I0130 17:42:54.571483 1883 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-gndj2" podStartSLOduration=2.398095067 podStartE2EDuration="5.571460266s" podCreationTimestamp="2025-01-30 17:42:49 +0000 UTC" firstStartedPulling="2025-01-30 17:42:49.745369122 +0000 UTC m=+72.417004804" lastFinishedPulling="2025-01-30 17:42:52.918734318 +0000 UTC m=+75.590370003" observedRunningTime="2025-01-30 17:42:53.573869404 +0000 UTC m=+76.245505110" watchObservedRunningTime="2025-01-30 17:42:54.571460266 +0000 UTC m=+77.243095957"
Jan 30 17:42:54.998380 systemd[1]: run-containerd-runc-k8s.io-2bf1bd999ecca04ebcd5cf8bd6bd9ebcbdeec69714e3efb981c9a6e741387b66-runc.1IAabk.mount: Deactivated successfully.
Jan 30 17:42:55.171966 kubelet[1883]: E0130 17:42:55.171861 1883 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 17:42:56.172489 kubelet[1883]: E0130 17:42:56.172413 1883 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 17:42:57.173604 kubelet[1883]: E0130 17:42:57.173534 1883 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 17:42:58.015066 systemd-networkd[1424]: lxc_health: Link UP
Jan 30 17:42:58.030429 systemd-networkd[1424]: lxc_health: Gained carrier
Jan 30 17:42:58.111881 kubelet[1883]: E0130 17:42:58.111819 1883 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 17:42:58.174578 kubelet[1883]: E0130 17:42:58.174506 1883 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 17:42:59.175033 kubelet[1883]: E0130 17:42:59.174941 1883 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 17:42:59.897492 systemd-networkd[1424]: lxc_health: Gained IPv6LL
Jan 30 17:43:00.175973 kubelet[1883]: E0130 17:43:00.175798 1883 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 17:43:01.177227 kubelet[1883]: E0130 17:43:01.176018 1883 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 17:43:01.740251 systemd[1]: run-containerd-runc-k8s.io-2bf1bd999ecca04ebcd5cf8bd6bd9ebcbdeec69714e3efb981c9a6e741387b66-runc.TwwPN8.mount: Deactivated successfully.
Jan 30 17:43:02.176386 kubelet[1883]: E0130 17:43:02.176300 1883 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 17:43:03.176898 kubelet[1883]: E0130 17:43:03.176761 1883 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 17:43:04.008781 systemd[1]: run-containerd-runc-k8s.io-2bf1bd999ecca04ebcd5cf8bd6bd9ebcbdeec69714e3efb981c9a6e741387b66-runc.z77v9c.mount: Deactivated successfully.
Jan 30 17:43:04.177548 kubelet[1883]: E0130 17:43:04.177405 1883 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 17:43:05.178280 kubelet[1883]: E0130 17:43:05.178162 1883 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 17:43:06.179240 kubelet[1883]: E0130 17:43:06.179092 1883 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 17:43:07.180383 kubelet[1883]: E0130 17:43:07.180284 1883 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 17:43:08.181471 kubelet[1883]: E0130 17:43:08.181349 1883 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"