Jan 13 20:51:26.049188 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Jan 13 18:58:40 -00 2025
Jan 13 20:51:26.049247 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=8a11404d893165624d9716a125d997be53e2d6cdb0c50a945acda5b62a14eda5
Jan 13 20:51:26.049262 kernel: BIOS-provided physical RAM map:
Jan 13 20:51:26.049278 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 13 20:51:26.049289 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 13 20:51:26.049299 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 13 20:51:26.049324 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable
Jan 13 20:51:26.049335 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved
Jan 13 20:51:26.049347 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jan 13 20:51:26.049358 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jan 13 20:51:26.049369 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 13 20:51:26.049381 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 13 20:51:26.049397 kernel: NX (Execute Disable) protection: active
Jan 13 20:51:26.049409 kernel: APIC: Static calls initialized
Jan 13 20:51:26.049422 kernel: SMBIOS 2.8 present.
Jan 13 20:51:26.049435 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014
Jan 13 20:51:26.049447 kernel: Hypervisor detected: KVM
Jan 13 20:51:26.049464 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 13 20:51:26.049477 kernel: kvm-clock: using sched offset of 4481435421 cycles
Jan 13 20:51:26.049489 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 13 20:51:26.049502 kernel: tsc: Detected 2499.998 MHz processor
Jan 13 20:51:26.049515 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 13 20:51:26.049527 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 13 20:51:26.049539 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000
Jan 13 20:51:26.049552 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 13 20:51:26.049564 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 13 20:51:26.049581 kernel: Using GB pages for direct mapping
Jan 13 20:51:26.049593 kernel: ACPI: Early table checksum verification disabled
Jan 13 20:51:26.049605 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Jan 13 20:51:26.049618 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:51:26.049630 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:51:26.049643 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:51:26.049655 kernel: ACPI: FACS 0x000000007FFDFD40 000040
Jan 13 20:51:26.049667 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:51:26.049680 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:51:26.049697 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:51:26.049710 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:51:26.049722 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480]
Jan 13 20:51:26.049734 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c]
Jan 13 20:51:26.049746 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f]
Jan 13 20:51:26.049765 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570]
Jan 13 20:51:26.049777 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740]
Jan 13 20:51:26.049843 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c]
Jan 13 20:51:26.049860 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4]
Jan 13 20:51:26.049873 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jan 13 20:51:26.049886 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jan 13 20:51:26.049898 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0
Jan 13 20:51:26.049911 kernel: SRAT: PXM 0 -> APIC 0x03 -> Node 0
Jan 13 20:51:26.049924 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0
Jan 13 20:51:26.049937 kernel: SRAT: PXM 0 -> APIC 0x05 -> Node 0
Jan 13 20:51:26.049956 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0
Jan 13 20:51:26.049969 kernel: SRAT: PXM 0 -> APIC 0x07 -> Node 0
Jan 13 20:51:26.049981 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0
Jan 13 20:51:26.049994 kernel: SRAT: PXM 0 -> APIC 0x09 -> Node 0
Jan 13 20:51:26.050007 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0
Jan 13 20:51:26.050019 kernel: SRAT: PXM 0 -> APIC 0x0b -> Node 0
Jan 13 20:51:26.050032 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0
Jan 13 20:51:26.050044 kernel: SRAT: PXM 0 -> APIC 0x0d -> Node 0
Jan 13 20:51:26.050057 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0
Jan 13 20:51:26.050074 kernel: SRAT: PXM 0 -> APIC 0x0f -> Node 0
Jan 13 20:51:26.050087 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Jan 13 20:51:26.050100 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Jan 13 20:51:26.050113 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug
Jan 13 20:51:26.050126 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00000000-0x7ffdbfff]
Jan 13 20:51:26.050147 kernel: NODE_DATA(0) allocated [mem 0x7ffd6000-0x7ffdbfff]
Jan 13 20:51:26.050160 kernel: Zone ranges:
Jan 13 20:51:26.050173 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 13 20:51:26.050186 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff]
Jan 13 20:51:26.050199 kernel: Normal empty
Jan 13 20:51:26.050221 kernel: Movable zone start for each node
Jan 13 20:51:26.050234 kernel: Early memory node ranges
Jan 13 20:51:26.050247 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 13 20:51:26.050259 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff]
Jan 13 20:51:26.050272 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff]
Jan 13 20:51:26.050285 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 13 20:51:26.050298 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 13 20:51:26.050310 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges
Jan 13 20:51:26.050323 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 13 20:51:26.050340 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 13 20:51:26.050353 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 13 20:51:26.050366 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 13 20:51:26.050379 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 13 20:51:26.050392 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 13 20:51:26.050404 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 13 20:51:26.050417 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 13 20:51:26.050430 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 13 20:51:26.050443 kernel: TSC deadline timer available
Jan 13 20:51:26.050460 kernel: smpboot: Allowing 16 CPUs, 14 hotplug CPUs
Jan 13 20:51:26.050473 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 13 20:51:26.050486 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jan 13 20:51:26.050499 kernel: Booting paravirtualized kernel on KVM
Jan 13 20:51:26.050512 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 13 20:51:26.050525 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1
Jan 13 20:51:26.050538 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u262144
Jan 13 20:51:26.050551 kernel: pcpu-alloc: s197032 r8192 d32344 u262144 alloc=1*2097152
Jan 13 20:51:26.050563 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Jan 13 20:51:26.050581 kernel: kvm-guest: PV spinlocks enabled
Jan 13 20:51:26.050594 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 13 20:51:26.050608 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=8a11404d893165624d9716a125d997be53e2d6cdb0c50a945acda5b62a14eda5
Jan 13 20:51:26.050621 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 13 20:51:26.050634 kernel: random: crng init done
Jan 13 20:51:26.050647 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 13 20:51:26.050660 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 13 20:51:26.050673 kernel: Fallback order for Node 0: 0
Jan 13 20:51:26.050691 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515804
Jan 13 20:51:26.050704 kernel: Policy zone: DMA32
Jan 13 20:51:26.050717 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 13 20:51:26.050742 kernel: software IO TLB: area num 16.
Jan 13 20:51:26.050754 kernel: Memory: 1899484K/2096616K available (14336K kernel code, 2299K rwdata, 22800K rodata, 43320K init, 1756K bss, 196872K reserved, 0K cma-reserved)
Jan 13 20:51:26.050767 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Jan 13 20:51:26.050779 kernel: Kernel/User page tables isolation: enabled
Jan 13 20:51:26.052878 kernel: ftrace: allocating 37890 entries in 149 pages
Jan 13 20:51:26.052893 kernel: ftrace: allocated 149 pages with 4 groups
Jan 13 20:51:26.052914 kernel: Dynamic Preempt: voluntary
Jan 13 20:51:26.052927 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 13 20:51:26.052941 kernel: rcu: RCU event tracing is enabled.
Jan 13 20:51:26.052955 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Jan 13 20:51:26.052968 kernel: Trampoline variant of Tasks RCU enabled.
Jan 13 20:51:26.052993 kernel: Rude variant of Tasks RCU enabled.
Jan 13 20:51:26.053011 kernel: Tracing variant of Tasks RCU enabled.
Jan 13 20:51:26.053025 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 13 20:51:26.053039 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Jan 13 20:51:26.053052 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16
Jan 13 20:51:26.053066 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 13 20:51:26.053079 kernel: Console: colour VGA+ 80x25
Jan 13 20:51:26.053097 kernel: printk: console [tty0] enabled
Jan 13 20:51:26.053111 kernel: printk: console [ttyS0] enabled
Jan 13 20:51:26.053125 kernel: ACPI: Core revision 20230628
Jan 13 20:51:26.053139 kernel: APIC: Switch to symmetric I/O mode setup
Jan 13 20:51:26.053152 kernel: x2apic enabled
Jan 13 20:51:26.053170 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 13 20:51:26.053184 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Jan 13 20:51:26.053198 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998)
Jan 13 20:51:26.053211 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 13 20:51:26.053225 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Jan 13 20:51:26.053250 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Jan 13 20:51:26.053263 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 13 20:51:26.053275 kernel: Spectre V2 : Mitigation: Retpolines
Jan 13 20:51:26.053288 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 13 20:51:26.053314 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 13 20:51:26.053330 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Jan 13 20:51:26.053343 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 13 20:51:26.053355 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 13 20:51:26.053368 kernel: MDS: Mitigation: Clear CPU buffers
Jan 13 20:51:26.053380 kernel: MMIO Stale Data: Unknown: No mitigations
Jan 13 20:51:26.053406 kernel: SRBDS: Unknown: Dependent on hypervisor status
Jan 13 20:51:26.053418 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 13 20:51:26.053433 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 13 20:51:26.053446 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 13 20:51:26.053472 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 13 20:51:26.053496 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Jan 13 20:51:26.053510 kernel: Freeing SMP alternatives memory: 32K
Jan 13 20:51:26.053523 kernel: pid_max: default: 32768 minimum: 301
Jan 13 20:51:26.053536 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 13 20:51:26.053550 kernel: landlock: Up and running.
Jan 13 20:51:26.053563 kernel: SELinux: Initializing.
Jan 13 20:51:26.053576 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 13 20:51:26.053590 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 13 20:51:26.053603 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9)
Jan 13 20:51:26.053617 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Jan 13 20:51:26.053630 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Jan 13 20:51:26.053648 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Jan 13 20:51:26.053662 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only.
Jan 13 20:51:26.053676 kernel: signal: max sigframe size: 1776
Jan 13 20:51:26.053689 kernel: rcu: Hierarchical SRCU implementation.
Jan 13 20:51:26.053703 kernel: rcu: Max phase no-delay instances is 400.
Jan 13 20:51:26.053716 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 13 20:51:26.053730 kernel: smp: Bringing up secondary CPUs ...
Jan 13 20:51:26.053743 kernel: smpboot: x86: Booting SMP configuration:
Jan 13 20:51:26.053757 kernel: .... node #0, CPUs: #1
Jan 13 20:51:26.053775 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1
Jan 13 20:51:26.053788 kernel: smp: Brought up 1 node, 2 CPUs
Jan 13 20:51:26.054974 kernel: smpboot: Max logical packages: 16
Jan 13 20:51:26.054990 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS)
Jan 13 20:51:26.055004 kernel: devtmpfs: initialized
Jan 13 20:51:26.055017 kernel: x86/mm: Memory block size: 128MB
Jan 13 20:51:26.055031 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 13 20:51:26.055045 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Jan 13 20:51:26.055058 kernel: pinctrl core: initialized pinctrl subsystem
Jan 13 20:51:26.055080 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 13 20:51:26.055094 kernel: audit: initializing netlink subsys (disabled)
Jan 13 20:51:26.055107 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 13 20:51:26.055121 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 13 20:51:26.055135 kernel: audit: type=2000 audit(1736801484.552:1): state=initialized audit_enabled=0 res=1
Jan 13 20:51:26.055148 kernel: cpuidle: using governor menu
Jan 13 20:51:26.055162 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 13 20:51:26.055175 kernel: dca service started, version 1.12.1
Jan 13 20:51:26.055189 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Jan 13 20:51:26.055207 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jan 13 20:51:26.055221 kernel: PCI: Using configuration type 1 for base access
Jan 13 20:51:26.055235 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 13 20:51:26.055248 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 13 20:51:26.055262 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 13 20:51:26.055275 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 13 20:51:26.055289 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 13 20:51:26.055302 kernel: ACPI: Added _OSI(Module Device)
Jan 13 20:51:26.055316 kernel: ACPI: Added _OSI(Processor Device)
Jan 13 20:51:26.055334 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 13 20:51:26.055347 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 13 20:51:26.055373 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 13 20:51:26.055386 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 13 20:51:26.055398 kernel: ACPI: Interpreter enabled
Jan 13 20:51:26.055411 kernel: ACPI: PM: (supports S0 S5)
Jan 13 20:51:26.055436 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 13 20:51:26.055450 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 13 20:51:26.055463 kernel: PCI: Using E820 reservations for host bridge windows
Jan 13 20:51:26.055480 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 13 20:51:26.055493 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 13 20:51:26.055786 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 13 20:51:26.058977 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 13 20:51:26.059150 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 13 20:51:26.059171 kernel: PCI host bridge to bus 0000:00
Jan 13 20:51:26.059369 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 13 20:51:26.059549 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 13 20:51:26.059707 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 13 20:51:26.059918 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Jan 13 20:51:26.060079 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 13 20:51:26.060247 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window]
Jan 13 20:51:26.060405 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 13 20:51:26.060605 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Jan 13 20:51:26.061208 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000
Jan 13 20:51:26.061387 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfa000000-0xfbffffff pref]
Jan 13 20:51:26.061555 kernel: pci 0000:00:01.0: reg 0x14: [mem 0xfea50000-0xfea50fff]
Jan 13 20:51:26.061739 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea40000-0xfea4ffff pref]
Jan 13 20:51:26.063990 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 13 20:51:26.064204 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Jan 13 20:51:26.064383 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea51000-0xfea51fff]
Jan 13 20:51:26.064592 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Jan 13 20:51:26.064763 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea52000-0xfea52fff]
Jan 13 20:51:26.064980 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Jan 13 20:51:26.065162 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea53000-0xfea53fff]
Jan 13 20:51:26.065364 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Jan 13 20:51:26.065532 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea54000-0xfea54fff]
Jan 13 20:51:26.065736 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Jan 13 20:51:26.067338 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea55000-0xfea55fff]
Jan 13 20:51:26.067538 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Jan 13 20:51:26.067725 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea56000-0xfea56fff]
Jan 13 20:51:26.068025 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Jan 13 20:51:26.068208 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea57000-0xfea57fff]
Jan 13 20:51:26.068413 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Jan 13 20:51:26.068594 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea58000-0xfea58fff]
Jan 13 20:51:26.068833 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Jan 13 20:51:26.069014 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0c0-0xc0df]
Jan 13 20:51:26.069183 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfea59000-0xfea59fff]
Jan 13 20:51:26.069350 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref]
Jan 13 20:51:26.069532 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfea00000-0xfea3ffff pref]
Jan 13 20:51:26.069705 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Jan 13 20:51:26.071953 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Jan 13 20:51:26.072138 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfea5a000-0xfea5afff]
Jan 13 20:51:26.072337 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfd004000-0xfd007fff 64bit pref]
Jan 13 20:51:26.072550 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Jan 13 20:51:26.072722 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 13 20:51:26.072971 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Jan 13 20:51:26.073145 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0e0-0xc0ff]
Jan 13 20:51:26.073325 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea5b000-0xfea5bfff]
Jan 13 20:51:26.073509 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Jan 13 20:51:26.073687 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Jan 13 20:51:26.075950 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400
Jan 13 20:51:26.076148 kernel: pci 0000:01:00.0: reg 0x10: [mem 0xfda00000-0xfda000ff 64bit]
Jan 13 20:51:26.076340 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Jan 13 20:51:26.076542 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Jan 13 20:51:26.076712 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Jan 13 20:51:26.076956 kernel: pci_bus 0000:02: extended config space not accessible
Jan 13 20:51:26.077152 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000
Jan 13 20:51:26.077355 kernel: pci 0000:02:01.0: reg 0x10: [mem 0xfd800000-0xfd80000f]
Jan 13 20:51:26.077526 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Jan 13 20:51:26.077695 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Jan 13 20:51:26.081952 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330
Jan 13 20:51:26.082140 kernel: pci 0000:03:00.0: reg 0x10: [mem 0xfe800000-0xfe803fff 64bit]
Jan 13 20:51:26.082317 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Jan 13 20:51:26.082488 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Jan 13 20:51:26.082657 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Jan 13 20:51:26.082928 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00
Jan 13 20:51:26.083108 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref]
Jan 13 20:51:26.083279 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Jan 13 20:51:26.083462 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Jan 13 20:51:26.083648 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Jan 13 20:51:26.087202 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Jan 13 20:51:26.087394 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Jan 13 20:51:26.087583 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Jan 13 20:51:26.087761 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Jan 13 20:51:26.087974 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Jan 13 20:51:26.088145 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Jan 13 20:51:26.088319 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Jan 13 20:51:26.088489 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Jan 13 20:51:26.088682 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Jan 13 20:51:26.089479 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Jan 13 20:51:26.089681 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Jan 13 20:51:26.089949 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Jan 13 20:51:26.090142 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Jan 13 20:51:26.090328 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Jan 13 20:51:26.090497 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Jan 13 20:51:26.090518 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 13 20:51:26.090533 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 13 20:51:26.090546 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 13 20:51:26.090560 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 13 20:51:26.090582 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 13 20:51:26.090596 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 13 20:51:26.090610 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 13 20:51:26.090624 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 13 20:51:26.090638 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 13 20:51:26.090652 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 13 20:51:26.090666 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 13 20:51:26.090688 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 13 20:51:26.090701 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 13 20:51:26.090720 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 13 20:51:26.090740 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 13 20:51:26.090754 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 13 20:51:26.090768 kernel: iommu: Default domain type: Translated
Jan 13 20:51:26.090782 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 13 20:51:26.090830 kernel: PCI: Using ACPI for IRQ routing
Jan 13 20:51:26.090847 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 13 20:51:26.090861 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 13 20:51:26.090875 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff]
Jan 13 20:51:26.091049 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 13 20:51:26.091242 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 13 20:51:26.091410 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 13 20:51:26.091430 kernel: vgaarb: loaded
Jan 13 20:51:26.091444 kernel: clocksource: Switched to clocksource kvm-clock
Jan 13 20:51:26.091458 kernel: VFS: Disk quotas dquot_6.6.0
Jan 13 20:51:26.091472 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 13 20:51:26.091486 kernel: pnp: PnP ACPI init
Jan 13 20:51:26.091678 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Jan 13 20:51:26.091699 kernel: pnp: PnP ACPI: found 5 devices
Jan 13 20:51:26.091713 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 13 20:51:26.091727 kernel: NET: Registered PF_INET protocol family
Jan 13 20:51:26.091740 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 13 20:51:26.091762 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Jan 13 20:51:26.091775 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 13 20:51:26.091829 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 13 20:51:26.091851 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jan 13 20:51:26.091865 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Jan 13 20:51:26.091879 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 13 20:51:26.091892 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 13 20:51:26.091906 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 13 20:51:26.091920 kernel: NET: Registered PF_XDP protocol family
Jan 13 20:51:26.092089 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000
Jan 13 20:51:26.092270 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Jan 13 20:51:26.092457 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Jan 13 20:51:26.092629 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Jan 13 20:51:26.092864 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Jan 13 20:51:26.093037 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Jan 13 20:51:26.093208 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Jan 13 20:51:26.093377 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Jan 13 20:51:26.093554 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Jan 13 20:51:26.093743 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Jan 13 20:51:26.094016 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Jan 13 20:51:26.094205 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Jan 13 20:51:26.094394 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Jan 13 20:51:26.094559 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Jan 13 20:51:26.094741 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Jan 13 20:51:26.094984 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Jan 13 20:51:26.095193 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Jan 13 20:51:26.095375 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Jan 13 20:51:26.095555 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Jan 13 20:51:26.095729 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Jan 13 20:51:26.095956 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Jan 13 20:51:26.096135 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Jan 13 20:51:26.096307 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Jan 13 20:51:26.096482 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Jan 13 20:51:26.096665 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Jan 13 20:51:26.096882 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Jan 13 20:51:26.097054 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Jan 13 20:51:26.097229 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Jan 13 20:51:26.097391 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Jan 13 20:51:26.097579 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Jan 13 20:51:26.097755 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Jan 13 20:51:26.097959 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Jan 13 20:51:26.098153 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Jan 13 20:51:26.098313 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Jan 13 20:51:26.098472 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Jan 13 20:51:26.098661 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Jan 13 20:51:26.098877 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Jan 13 20:51:26.099049 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Jan 13 20:51:26.099228 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Jan 13 20:51:26.099420 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Jan 13 20:51:26.099590 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Jan 13 20:51:26.099760 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Jan 13 20:51:26.099968 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Jan 13 20:51:26.100160 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Jan 13 20:51:26.100336 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Jan 13 20:51:26.100517 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Jan 13 20:51:26.100700 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Jan 13 20:51:26.100905 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Jan 13 20:51:26.101089 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Jan 13 20:51:26.101254 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Jan 13 20:51:26.101421 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 13 20:51:26.101581 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 13 20:51:26.101749 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 13 20:51:26.102000 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Jan 13 20:51:26.102164 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jan 13 20:51:26.102310 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window]
Jan 13 20:51:26.102485 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Jan 13 20:51:26.102665 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff]
Jan 13 20:51:26.102855 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Jan 13 20:51:26.103026 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff]
Jan 13 20:51:26.103225 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff]
Jan 13 20:51:26.103385 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff]
Jan 13 20:51:26.103565 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Jan 13 20:51:26.103742 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff]
Jan 13 20:51:26.103947 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff]
Jan 13 20:51:26.104119 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Jan 13 20:51:26.104318 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff]
Jan 13 20:51:26.104472 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff]
Jan 13 20:51:26.104636 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Jan 13 20:51:26.104869 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff]
Jan 13 20:51:26.105034 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff]
Jan 13 20:51:26.105205 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Jan 13 20:51:26.105390 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff]
Jan 13 20:51:26.105560 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff]
Jan 13 20:51:26.105721 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref]
Jan 13 20:51:26.105954 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff]
Jan 13 20:51:26.106127 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff]
Jan 13 20:51:26.106297 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref]
Jan 13 20:51:26.106466 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff]
Jan 13 20:51:26.106625 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff]
Jan 13 20:51:26.106866 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Jan 13 20:51:26.106890 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 13 20:51:26.106906 kernel: PCI: CLS 0 bytes, default 64
Jan 13 20:51:26.106928 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan
13 20:51:26.106942 kernel: software IO TLB: mapped [mem 0x0000000079800000-0x000000007d800000] (64MB) Jan 13 20:51:26.106957 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jan 13 20:51:26.106972 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns Jan 13 20:51:26.106986 kernel: Initialise system trusted keyrings Jan 13 20:51:26.107005 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jan 13 20:51:26.107019 kernel: Key type asymmetric registered Jan 13 20:51:26.107034 kernel: Asymmetric key parser 'x509' registered Jan 13 20:51:26.107048 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 13 20:51:26.107063 kernel: io scheduler mq-deadline registered Jan 13 20:51:26.107077 kernel: io scheduler kyber registered Jan 13 20:51:26.107091 kernel: io scheduler bfq registered Jan 13 20:51:26.107259 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Jan 13 20:51:26.107446 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Jan 13 20:51:26.107618 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 13 20:51:26.107827 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Jan 13 20:51:26.108001 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Jan 13 20:51:26.108184 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 13 20:51:26.108355 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Jan 13 20:51:26.108512 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Jan 13 20:51:26.108677 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 13 20:51:26.108895 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Jan 13 
20:51:26.109065 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Jan 13 20:51:26.109247 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 13 20:51:26.109425 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Jan 13 20:51:26.109602 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Jan 13 20:51:26.109786 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 13 20:51:26.110018 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Jan 13 20:51:26.110188 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Jan 13 20:51:26.110355 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 13 20:51:26.110532 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Jan 13 20:51:26.110705 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Jan 13 20:51:26.110951 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 13 20:51:26.111122 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Jan 13 20:51:26.111301 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Jan 13 20:51:26.111462 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 13 20:51:26.111483 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 13 20:51:26.111498 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jan 13 20:51:26.111518 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jan 13 20:51:26.111532 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 13 20:51:26.111546 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 13 20:51:26.111559 kernel: 
i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 13 20:51:26.111573 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 13 20:51:26.111587 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 13 20:51:26.111600 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 13 20:51:26.111758 kernel: rtc_cmos 00:03: RTC can wake from S4 Jan 13 20:51:26.111968 kernel: rtc_cmos 00:03: registered as rtc0 Jan 13 20:51:26.112136 kernel: rtc_cmos 00:03: setting system clock to 2025-01-13T20:51:25 UTC (1736801485) Jan 13 20:51:26.112303 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Jan 13 20:51:26.112328 kernel: intel_pstate: CPU model not supported Jan 13 20:51:26.112342 kernel: NET: Registered PF_INET6 protocol family Jan 13 20:51:26.112356 kernel: Segment Routing with IPv6 Jan 13 20:51:26.112370 kernel: In-situ OAM (IOAM) with IPv6 Jan 13 20:51:26.112384 kernel: NET: Registered PF_PACKET protocol family Jan 13 20:51:26.112398 kernel: Key type dns_resolver registered Jan 13 20:51:26.112418 kernel: IPI shorthand broadcast: enabled Jan 13 20:51:26.112433 kernel: sched_clock: Marking stable (1110026061, 233198964)->(1574804374, -231579349) Jan 13 20:51:26.112447 kernel: registered taskstats version 1 Jan 13 20:51:26.112474 kernel: Loading compiled-in X.509 certificates Jan 13 20:51:26.112492 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: ede78b3e719729f95eaaf7cb6a5289b567f6ee3e' Jan 13 20:51:26.112505 kernel: Key type .fscrypt registered Jan 13 20:51:26.112518 kernel: Key type fscrypt-provisioning registered Jan 13 20:51:26.112532 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jan 13 20:51:26.112545 kernel: ima: Allocated hash algorithm: sha1
Jan 13 20:51:26.112576 kernel: ima: No architecture policies found
Jan 13 20:51:26.112589 kernel: clk: Disabling unused clocks
Jan 13 20:51:26.112603 kernel: Freeing unused kernel image (initmem) memory: 43320K
Jan 13 20:51:26.112617 kernel: Write protecting the kernel read-only data: 38912k
Jan 13 20:51:26.112640 kernel: Freeing unused kernel image (rodata/data gap) memory: 1776K
Jan 13 20:51:26.112654 kernel: Run /init as init process
Jan 13 20:51:26.112668 kernel: with arguments:
Jan 13 20:51:26.112681 kernel: /init
Jan 13 20:51:26.112703 kernel: with environment:
Jan 13 20:51:26.112721 kernel: HOME=/
Jan 13 20:51:26.112734 kernel: TERM=linux
Jan 13 20:51:26.112748 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 13 20:51:26.112774 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 13 20:51:26.112834 systemd[1]: Detected virtualization kvm.
Jan 13 20:51:26.112850 systemd[1]: Detected architecture x86-64.
Jan 13 20:51:26.112865 systemd[1]: Running in initrd.
Jan 13 20:51:26.112880 systemd[1]: No hostname configured, using default hostname.
Jan 13 20:51:26.112902 systemd[1]: Hostname set to .
Jan 13 20:51:26.112917 systemd[1]: Initializing machine ID from VM UUID.
Jan 13 20:51:26.112933 systemd[1]: Queued start job for default target initrd.target.
Jan 13 20:51:26.112948 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 20:51:26.112963 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 20:51:26.112979 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 13 20:51:26.112995 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 13 20:51:26.113010 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 13 20:51:26.113031 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 13 20:51:26.113048 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 13 20:51:26.113064 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 13 20:51:26.113079 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 20:51:26.113094 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 13 20:51:26.113109 systemd[1]: Reached target paths.target - Path Units.
Jan 13 20:51:26.113129 systemd[1]: Reached target slices.target - Slice Units.
Jan 13 20:51:26.113145 systemd[1]: Reached target swap.target - Swaps.
Jan 13 20:51:26.113160 systemd[1]: Reached target timers.target - Timer Units.
Jan 13 20:51:26.113175 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 13 20:51:26.113191 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 13 20:51:26.113206 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 13 20:51:26.113221 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 13 20:51:26.113236 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 20:51:26.113251 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 13 20:51:26.113271 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 20:51:26.113287 systemd[1]: Reached target sockets.target - Socket Units.
Jan 13 20:51:26.113302 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 13 20:51:26.113317 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 13 20:51:26.113333 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 13 20:51:26.113348 systemd[1]: Starting systemd-fsck-usr.service...
Jan 13 20:51:26.113363 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 13 20:51:26.113379 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 13 20:51:26.113394 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 20:51:26.113426 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 13 20:51:26.113441 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 20:51:26.113501 systemd-journald[200]: Collecting audit messages is disabled.
Jan 13 20:51:26.113535 systemd[1]: Finished systemd-fsck-usr.service.
Jan 13 20:51:26.113557 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 13 20:51:26.113584 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 13 20:51:26.113603 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 13 20:51:26.113617 kernel: Bridge firewalling registered
Jan 13 20:51:26.113642 systemd-journald[200]: Journal started
Jan 13 20:51:26.113669 systemd-journald[200]: Runtime Journal (/run/log/journal/a88b44632d1a4699a0dfe059f578846c) is 4.7M, max 37.9M, 33.2M free.
Jan 13 20:51:26.042546 systemd-modules-load[201]: Inserted module 'overlay'
Jan 13 20:51:26.159442 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 13 20:51:26.088826 systemd-modules-load[201]: Inserted module 'br_netfilter'
Jan 13 20:51:26.160464 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 13 20:51:26.161685 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:51:26.176028 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 20:51:26.177998 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 13 20:51:26.183974 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 13 20:51:26.187201 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 13 20:51:26.209017 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 20:51:26.211119 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 13 20:51:26.213037 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 20:51:26.215082 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 20:51:26.222013 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 13 20:51:26.227855 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 13 20:51:26.240094 dracut-cmdline[236]: dracut-dracut-053
Jan 13 20:51:26.245081 dracut-cmdline[236]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=8a11404d893165624d9716a125d997be53e2d6cdb0c50a945acda5b62a14eda5
Jan 13 20:51:26.269559 systemd-resolved[237]: Positive Trust Anchors:
Jan 13 20:51:26.269587 systemd-resolved[237]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 13 20:51:26.269633 systemd-resolved[237]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 13 20:51:26.273761 systemd-resolved[237]: Defaulting to hostname 'linux'.
Jan 13 20:51:26.275687 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 13 20:51:26.279023 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 13 20:51:26.351847 kernel: SCSI subsystem initialized
Jan 13 20:51:26.363844 kernel: Loading iSCSI transport class v2.0-870.
Jan 13 20:51:26.376833 kernel: iscsi: registered transport (tcp)
Jan 13 20:51:26.403022 kernel: iscsi: registered transport (qla4xxx)
Jan 13 20:51:26.403080 kernel: QLogic iSCSI HBA Driver
Jan 13 20:51:26.460813 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 13 20:51:26.466995 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 13 20:51:26.500303 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 13 20:51:26.500379 kernel: device-mapper: uevent: version 1.0.3
Jan 13 20:51:26.501140 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 13 20:51:26.550852 kernel: raid6: sse2x4 gen() 13007 MB/s
Jan 13 20:51:26.568846 kernel: raid6: sse2x2 gen() 8991 MB/s
Jan 13 20:51:26.587489 kernel: raid6: sse2x1 gen() 9369 MB/s
Jan 13 20:51:26.587529 kernel: raid6: using algorithm sse2x4 gen() 13007 MB/s
Jan 13 20:51:26.606493 kernel: raid6: .... xor() 7598 MB/s, rmw enabled
Jan 13 20:51:26.606538 kernel: raid6: using ssse3x2 recovery algorithm
Jan 13 20:51:26.632835 kernel: xor: automatically using best checksumming function avx
Jan 13 20:51:26.805878 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 13 20:51:26.820520 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 13 20:51:26.826998 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 20:51:26.858040 systemd-udevd[420]: Using default interface naming scheme 'v255'.
Jan 13 20:51:26.865619 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 20:51:26.875986 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 13 20:51:26.896943 dracut-pre-trigger[425]: rd.md=0: removing MD RAID activation
Jan 13 20:51:26.937746 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 13 20:51:26.942995 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 13 20:51:27.061679 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 20:51:27.069069 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 13 20:51:27.103308 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 13 20:51:27.106480 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 13 20:51:27.109929 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 20:51:27.113171 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 13 20:51:27.123021 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 13 20:51:27.149600 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 13 20:51:27.189820 kernel: virtio_blk virtio1: 2/0/0 default/read/poll queues
Jan 13 20:51:27.277949 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Jan 13 20:51:27.278158 kernel: cryptd: max_cpu_qlen set to 1000
Jan 13 20:51:27.278182 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 13 20:51:27.278202 kernel: GPT:17805311 != 125829119
Jan 13 20:51:27.278233 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 13 20:51:27.278251 kernel: GPT:17805311 != 125829119
Jan 13 20:51:27.278268 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 13 20:51:27.278295 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 13 20:51:27.278320 kernel: libata version 3.00 loaded.
Jan 13 20:51:27.278339 kernel: AVX version of gcm_enc/dec engaged.
Jan 13 20:51:27.278364 kernel: AES CTR mode by8 optimization enabled
Jan 13 20:51:27.231987 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 13 20:51:27.232168 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 20:51:27.424880 kernel: ACPI: bus type USB registered
Jan 13 20:51:27.424923 kernel: usbcore: registered new interface driver usbfs
Jan 13 20:51:27.424945 kernel: usbcore: registered new interface driver hub
Jan 13 20:51:27.424964 kernel: usbcore: registered new device driver usb
Jan 13 20:51:27.424983 kernel: ahci 0000:00:1f.2: version 3.0
Jan 13 20:51:27.425248 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jan 13 20:51:27.425272 kernel: BTRFS: device fsid 7f507843-6957-466b-8fb7-5bee228b170a devid 1 transid 44 /dev/vda3 scanned by (udev-worker) (464)
Jan 13 20:51:27.425299 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (474)
Jan 13 20:51:27.425319 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Jan 13 20:51:27.425541 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jan 13 20:51:27.425778 kernel: scsi host0: ahci
Jan 13 20:51:27.426756 kernel: scsi host1: ahci
Jan 13 20:51:27.427005 kernel: scsi host2: ahci
Jan 13 20:51:27.427209 kernel: scsi host3: ahci
Jan 13 20:51:27.427426 kernel: scsi host4: ahci
Jan 13 20:51:27.427631 kernel: scsi host5: ahci
Jan 13 20:51:27.427880 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 38
Jan 13 20:51:27.427904 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 38
Jan 13 20:51:27.427923 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 38
Jan 13 20:51:27.427942 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 38
Jan 13 20:51:27.427961 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 38
Jan 13 20:51:27.427989 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 38
Jan 13 20:51:27.233171 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 20:51:27.234929 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 13 20:51:27.235096 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:51:27.235865 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 20:51:27.243888 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 20:51:27.377932 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 13 20:51:27.431773 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:51:27.442838 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 13 20:51:27.450328 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 13 20:51:27.456562 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 13 20:51:27.457462 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 13 20:51:27.464003 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 13 20:51:27.465981 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 20:51:27.475618 disk-uuid[562]: Primary Header is updated.
Jan 13 20:51:27.475618 disk-uuid[562]: Secondary Entries is updated.
Jan 13 20:51:27.475618 disk-uuid[562]: Secondary Header is updated.
Jan 13 20:51:27.482853 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 13 20:51:27.500789 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 20:51:27.705844 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jan 13 20:51:27.705935 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jan 13 20:51:27.706828 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jan 13 20:51:27.709008 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jan 13 20:51:27.711698 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jan 13 20:51:27.713823 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Jan 13 20:51:27.722756 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller
Jan 13 20:51:27.739975 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1
Jan 13 20:51:27.740228 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
Jan 13 20:51:27.740440 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller
Jan 13 20:51:27.740672 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2
Jan 13 20:51:27.740953 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed
Jan 13 20:51:27.741162 kernel: hub 1-0:1.0: USB hub found
Jan 13 20:51:27.741400 kernel: hub 1-0:1.0: 4 ports detected
Jan 13 20:51:27.741617 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Jan 13 20:51:27.741889 kernel: hub 2-0:1.0: USB hub found
Jan 13 20:51:27.742114 kernel: hub 2-0:1.0: 4 ports detected
Jan 13 20:51:27.972907 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd
Jan 13 20:51:28.115846 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 13 20:51:28.122523 kernel: usbcore: registered new interface driver usbhid
Jan 13 20:51:28.122573 kernel: usbhid: USB HID core driver
Jan 13 20:51:28.130283 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2
Jan 13 20:51:28.130353 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0
Jan 13 20:51:28.494848 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 13 20:51:28.497050 disk-uuid[563]: The operation has completed successfully.
Jan 13 20:51:28.552159 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 13 20:51:28.552330 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 13 20:51:28.576018 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 13 20:51:28.593631 sh[583]: Success
Jan 13 20:51:28.610847 kernel: device-mapper: verity: sha256 using implementation "sha256-avx"
Jan 13 20:51:28.678815 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 13 20:51:28.689928 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 13 20:51:28.691858 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 13 20:51:28.729010 kernel: BTRFS info (device dm-0): first mount of filesystem 7f507843-6957-466b-8fb7-5bee228b170a
Jan 13 20:51:28.729083 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 13 20:51:28.729117 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 13 20:51:28.729136 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 13 20:51:28.730652 kernel: BTRFS info (device dm-0): using free space tree
Jan 13 20:51:28.742302 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 13 20:51:28.743830 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 13 20:51:28.749050 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 13 20:51:28.751481 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 13 20:51:28.773114 kernel: BTRFS info (device vda6): first mount of filesystem de2056f8-fbde-4b85-b887-0a28f289d968
Jan 13 20:51:28.773178 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 13 20:51:28.773220 kernel: BTRFS info (device vda6): using free space tree
Jan 13 20:51:28.777819 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 13 20:51:28.790906 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 13 20:51:28.794847 kernel: BTRFS info (device vda6): last unmount of filesystem de2056f8-fbde-4b85-b887-0a28f289d968
Jan 13 20:51:28.802881 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 13 20:51:28.807988 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 13 20:51:28.900755 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 13 20:51:28.918072 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 13 20:51:28.953760 systemd-networkd[769]: lo: Link UP
Jan 13 20:51:28.953780 systemd-networkd[769]: lo: Gained carrier
Jan 13 20:51:28.956232 systemd-networkd[769]: Enumeration completed
Jan 13 20:51:28.956946 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 13 20:51:28.957162 systemd-networkd[769]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 20:51:28.957168 systemd-networkd[769]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 13 20:51:28.960639 systemd-networkd[769]: eth0: Link UP
Jan 13 20:51:28.960645 systemd-networkd[769]: eth0: Gained carrier
Jan 13 20:51:28.960657 systemd-networkd[769]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 20:51:28.961495 systemd[1]: Reached target network.target - Network.
Jan 13 20:51:28.975881 systemd-networkd[769]: eth0: DHCPv4 address 10.230.36.26/30, gateway 10.230.36.25 acquired from 10.230.36.25
Jan 13 20:51:28.978578 ignition[674]: Ignition 2.20.0
Jan 13 20:51:28.978602 ignition[674]: Stage: fetch-offline
Jan 13 20:51:28.978674 ignition[674]: no configs at "/usr/lib/ignition/base.d"
Jan 13 20:51:28.978693 ignition[674]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 13 20:51:28.978909 ignition[674]: parsed url from cmdline: ""
Jan 13 20:51:28.978920 ignition[674]: no config URL provided
Jan 13 20:51:28.978930 ignition[674]: reading system config file "/usr/lib/ignition/user.ign"
Jan 13 20:51:28.982616 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 13 20:51:28.978946 ignition[674]: no config at "/usr/lib/ignition/user.ign"
Jan 13 20:51:28.978955 ignition[674]: failed to fetch config: resource requires networking
Jan 13 20:51:28.979206 ignition[674]: Ignition finished successfully
Jan 13 20:51:28.996431 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 13 20:51:29.012506 ignition[777]: Ignition 2.20.0
Jan 13 20:51:29.012526 ignition[777]: Stage: fetch
Jan 13 20:51:29.012822 ignition[777]: no configs at "/usr/lib/ignition/base.d"
Jan 13 20:51:29.012845 ignition[777]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 13 20:51:29.012971 ignition[777]: parsed url from cmdline: ""
Jan 13 20:51:29.012978 ignition[777]: no config URL provided
Jan 13 20:51:29.012987 ignition[777]: reading system config file "/usr/lib/ignition/user.ign"
Jan 13 20:51:29.013003 ignition[777]: no config at "/usr/lib/ignition/user.ign"
Jan 13 20:51:29.013134 ignition[777]: config drive ("/dev/disk/by-label/config-2") not found. Waiting...
Jan 13 20:51:29.013154 ignition[777]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1
Jan 13 20:51:29.013182 ignition[777]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting...
Jan 13 20:51:29.029174 ignition[777]: GET result: OK
Jan 13 20:51:29.029276 ignition[777]: parsing config with SHA512: ab7e9e6ce5d85e3c5142153a0c0f97b2835b1434902efaa168f14efbcbbefe2462c93a1d594f6ce0da00a8ae86ce0fc12360c272e6ac0aeda8e2de6965ad2343
Jan 13 20:51:29.033132 unknown[777]: fetched base config from "system"
Jan 13 20:51:29.033149 unknown[777]: fetched base config from "system"
Jan 13 20:51:29.033456 ignition[777]: fetch: fetch complete
Jan 13 20:51:29.033159 unknown[777]: fetched user config from "openstack"
Jan 13 20:51:29.033467 ignition[777]: fetch: fetch passed
Jan 13 20:51:29.035768 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 13 20:51:29.033527 ignition[777]: Ignition finished successfully
Jan 13 20:51:29.051975 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 13 20:51:29.069826 ignition[783]: Ignition 2.20.0
Jan 13 20:51:29.069845 ignition[783]: Stage: kargs
Jan 13 20:51:29.070102 ignition[783]: no configs at "/usr/lib/ignition/base.d"
Jan 13 20:51:29.070122 ignition[783]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 13 20:51:29.071245 ignition[783]: kargs: kargs passed
Jan 13 20:51:29.071326 ignition[783]: Ignition finished successfully
Jan 13 20:51:29.074134 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 13 20:51:29.081990 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 13 20:51:29.099885 ignition[789]: Ignition 2.20.0
Jan 13 20:51:29.099909 ignition[789]: Stage: disks
Jan 13 20:51:29.100164 ignition[789]: no configs at "/usr/lib/ignition/base.d"
Jan 13 20:51:29.102447 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 13 20:51:29.100184 ignition[789]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 13 20:51:29.101158 ignition[789]: disks: disks passed
Jan 13 20:51:29.105791 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 13 20:51:29.101237 ignition[789]: Ignition finished successfully
Jan 13 20:51:29.106882 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 13 20:51:29.108226 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 13 20:51:29.108941 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 13 20:51:29.110435 systemd[1]: Reached target basic.target - Basic System.
Jan 13 20:51:29.118018 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 13 20:51:29.139009 systemd-fsck[798]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Jan 13 20:51:29.142025 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 13 20:51:29.147901 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 13 20:51:29.263824 kernel: EXT4-fs (vda9): mounted filesystem 59ba8ffc-e6b0-4bb4-a36e-13a47bd6ad99 r/w with ordered data mode. Quota mode: none.
Jan 13 20:51:29.264586 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 13 20:51:29.266701 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 13 20:51:29.273916 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 13 20:51:29.276929 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 13 20:51:29.279272 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 13 20:51:29.286011 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent...
Jan 13 20:51:29.288185 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 13 20:51:29.297015 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (806)
Jan 13 20:51:29.297050 kernel: BTRFS info (device vda6): first mount of filesystem de2056f8-fbde-4b85-b887-0a28f289d968
Jan 13 20:51:29.297071 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 13 20:51:29.297090 kernel: BTRFS info (device vda6): using free space tree
Jan 13 20:51:29.288237 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 13 20:51:29.292019 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 13 20:51:29.309021 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 13 20:51:29.317821 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 13 20:51:29.322159 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 13 20:51:29.388347 initrd-setup-root[833]: cut: /sysroot/etc/passwd: No such file or directory
Jan 13 20:51:29.398021 initrd-setup-root[841]: cut: /sysroot/etc/group: No such file or directory
Jan 13 20:51:29.405245 initrd-setup-root[848]: cut: /sysroot/etc/shadow: No such file or directory
Jan 13 20:51:29.416388 initrd-setup-root[855]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 13 20:51:29.520368 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 13 20:51:29.525921 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 13 20:51:29.528005 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 13 20:51:29.542827 kernel: BTRFS info (device vda6): last unmount of filesystem de2056f8-fbde-4b85-b887-0a28f289d968
Jan 13 20:51:29.563973 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 13 20:51:29.574328 ignition[923]: INFO : Ignition 2.20.0
Jan 13 20:51:29.576677 ignition[923]: INFO : Stage: mount
Jan 13 20:51:29.576677 ignition[923]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 20:51:29.576677 ignition[923]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 13 20:51:29.576677 ignition[923]: INFO : mount: mount passed
Jan 13 20:51:29.576677 ignition[923]: INFO : Ignition finished successfully
Jan 13 20:51:29.578684 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 13 20:51:29.723355 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 13 20:51:30.649159 systemd-networkd[769]: eth0: Gained IPv6LL
Jan 13 20:51:32.155889 systemd-networkd[769]: eth0: Ignoring DHCPv6 address 2a02:1348:179:8906:24:19ff:fee6:241a/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:8906:24:19ff:fee6:241a/64 assigned by NDisc.
Jan 13 20:51:32.155907 systemd-networkd[769]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no.
Jan 13 20:51:36.476242 coreos-metadata[808]: Jan 13 20:51:36.476 WARN failed to locate config-drive, using the metadata service API instead
Jan 13 20:51:36.500863 coreos-metadata[808]: Jan 13 20:51:36.500 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Jan 13 20:51:36.511937 coreos-metadata[808]: Jan 13 20:51:36.511 INFO Fetch successful
Jan 13 20:51:36.512814 coreos-metadata[808]: Jan 13 20:51:36.512 INFO wrote hostname srv-so511.gb1.brightbox.com to /sysroot/etc/hostname
Jan 13 20:51:36.514776 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully.
Jan 13 20:51:36.514950 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent.
Jan 13 20:51:36.524937 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 13 20:51:36.545059 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 13 20:51:36.566870 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (939)
Jan 13 20:51:36.570425 kernel: BTRFS info (device vda6): first mount of filesystem de2056f8-fbde-4b85-b887-0a28f289d968
Jan 13 20:51:36.570480 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 13 20:51:36.572196 kernel: BTRFS info (device vda6): using free space tree
Jan 13 20:51:36.577864 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 13 20:51:36.580476 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 13 20:51:36.607330 ignition[957]: INFO : Ignition 2.20.0
Jan 13 20:51:36.609685 ignition[957]: INFO : Stage: files
Jan 13 20:51:36.609685 ignition[957]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 20:51:36.609685 ignition[957]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 13 20:51:36.609685 ignition[957]: DEBUG : files: compiled without relabeling support, skipping
Jan 13 20:51:36.612954 ignition[957]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 13 20:51:36.612954 ignition[957]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 13 20:51:36.614862 ignition[957]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 13 20:51:36.614862 ignition[957]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 13 20:51:36.616740 ignition[957]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 13 20:51:36.615349 unknown[957]: wrote ssh authorized keys file for user: core
Jan 13 20:51:36.618645 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh"
Jan 13 20:51:36.618645 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh"
Jan 13 20:51:36.618645 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 13 20:51:36.618645 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 13 20:51:36.618645 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Jan 13 20:51:36.618645 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Jan 13 20:51:36.618645 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Jan 13 20:51:36.633053 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1
Jan 13 20:51:37.168733 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Jan 13 20:51:38.496689 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Jan 13 20:51:38.498764 ignition[957]: INFO : files: createResultFile: createFiles: op(7): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 13 20:51:38.498764 ignition[957]: INFO : files: createResultFile: createFiles: op(7): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 13 20:51:38.498764 ignition[957]: INFO : files: files passed
Jan 13 20:51:38.498764 ignition[957]: INFO : Ignition finished successfully
Jan 13 20:51:38.499077 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 13 20:51:38.514054 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 13 20:51:38.517297 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 13 20:51:38.519846 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 13 20:51:38.521257 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 13 20:51:38.539502 initrd-setup-root-after-ignition[985]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 20:51:38.541008 initrd-setup-root-after-ignition[985]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 20:51:38.542711 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 20:51:38.542952 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 13 20:51:38.545209 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 13 20:51:38.559433 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 13 20:51:38.600445 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 13 20:51:38.601554 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 13 20:51:38.603928 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 13 20:51:38.604683 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 13 20:51:38.606376 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 13 20:51:38.612078 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 13 20:51:38.637886 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 13 20:51:38.644004 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 13 20:51:38.668973 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 13 20:51:38.670819 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 20:51:38.671728 systemd[1]: Stopped target timers.target - Timer Units.
Jan 13 20:51:38.672537 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 13 20:51:38.672704 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 13 20:51:38.674667 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 13 20:51:38.675605 systemd[1]: Stopped target basic.target - Basic System.
Jan 13 20:51:38.676915 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 13 20:51:38.678460 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 13 20:51:38.679866 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 13 20:51:38.681191 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 13 20:51:38.682625 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 13 20:51:38.684214 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 13 20:51:38.685675 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 13 20:51:38.687095 systemd[1]: Stopped target swap.target - Swaps.
Jan 13 20:51:38.688497 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 13 20:51:38.688659 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 13 20:51:38.690496 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 13 20:51:38.691372 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 20:51:38.692681 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 13 20:51:38.692889 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 20:51:38.694274 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 13 20:51:38.694495 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 13 20:51:38.696292 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 13 20:51:38.696460 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 13 20:51:38.698251 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 13 20:51:38.698404 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 13 20:51:38.707528 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 13 20:51:38.711101 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 13 20:51:38.711841 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 13 20:51:38.712088 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 20:51:38.714873 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 13 20:51:38.715148 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 13 20:51:38.728586 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 13 20:51:38.731210 ignition[1009]: INFO : Ignition 2.20.0
Jan 13 20:51:38.731210 ignition[1009]: INFO : Stage: umount
Jan 13 20:51:38.731210 ignition[1009]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 20:51:38.731210 ignition[1009]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 13 20:51:38.731210 ignition[1009]: INFO : umount: umount passed
Jan 13 20:51:38.731210 ignition[1009]: INFO : Ignition finished successfully
Jan 13 20:51:38.728760 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 13 20:51:38.735664 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 13 20:51:38.735847 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 13 20:51:38.737732 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 13 20:51:38.738070 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 13 20:51:38.739484 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 13 20:51:38.739558 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 13 20:51:38.740311 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 13 20:51:38.740379 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 13 20:51:38.741776 systemd[1]: Stopped target network.target - Network.
Jan 13 20:51:38.744168 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 13 20:51:38.744237 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 13 20:51:38.747859 systemd[1]: Stopped target paths.target - Path Units.
Jan 13 20:51:38.749134 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 13 20:51:38.755431 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 20:51:38.756254 systemd[1]: Stopped target slices.target - Slice Units.
Jan 13 20:51:38.756921 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 13 20:51:38.757571 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 13 20:51:38.757635 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 13 20:51:38.758324 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 13 20:51:38.758385 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 13 20:51:38.765993 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 13 20:51:38.766065 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 13 20:51:38.767327 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 13 20:51:38.767403 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 13 20:51:38.768899 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 13 20:51:38.770241 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 13 20:51:38.772891 systemd-networkd[769]: eth0: DHCPv6 lease lost
Jan 13 20:51:38.773356 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 13 20:51:38.777133 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 13 20:51:38.777408 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 13 20:51:38.779061 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 13 20:51:38.779229 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 13 20:51:38.786111 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 13 20:51:38.786261 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 20:51:38.796982 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 13 20:51:38.797703 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 13 20:51:38.797776 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 13 20:51:38.800403 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 13 20:51:38.800497 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 13 20:51:38.801442 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 13 20:51:38.801539 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 13 20:51:38.803331 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 13 20:51:38.803402 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 20:51:38.804954 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 20:51:38.815722 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 13 20:51:38.816869 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 20:51:38.819289 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 13 20:51:38.819377 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 13 20:51:38.821088 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 13 20:51:38.821148 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 20:51:38.822529 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 13 20:51:38.822596 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 13 20:51:38.824640 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 13 20:51:38.824705 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 13 20:51:38.825962 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 13 20:51:38.826033 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 20:51:38.839965 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 13 20:51:38.840765 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 13 20:51:38.840901 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 20:51:38.843565 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jan 13 20:51:38.843677 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 13 20:51:38.844464 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 13 20:51:38.844545 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 20:51:38.846932 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 13 20:51:38.847008 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:51:38.848626 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 13 20:51:38.848853 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 13 20:51:38.849992 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 13 20:51:38.850140 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 13 20:51:38.851450 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 13 20:51:38.851598 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 13 20:51:38.855551 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 13 20:51:38.856941 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 13 20:51:38.857031 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 13 20:51:38.864007 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 13 20:51:38.876032 systemd[1]: Switching root.
Jan 13 20:51:38.918001 systemd-journald[200]: Journal stopped
Jan 13 20:51:40.379209 systemd-journald[200]: Received SIGTERM from PID 1 (systemd).
Jan 13 20:51:40.379327 kernel: SELinux: policy capability network_peer_controls=1
Jan 13 20:51:40.379383 kernel: SELinux: policy capability open_perms=1
Jan 13 20:51:40.379411 kernel: SELinux: policy capability extended_socket_class=1
Jan 13 20:51:40.379446 kernel: SELinux: policy capability always_check_network=0
Jan 13 20:51:40.379468 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 13 20:51:40.379507 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 13 20:51:40.379529 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 13 20:51:40.379549 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 13 20:51:40.379574 kernel: audit: type=1403 audit(1736801499.142:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 13 20:51:40.379611 systemd[1]: Successfully loaded SELinux policy in 49.210ms.
Jan 13 20:51:40.379647 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 21.016ms.
Jan 13 20:51:40.379677 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 13 20:51:40.379699 systemd[1]: Detected virtualization kvm.
Jan 13 20:51:40.379736 systemd[1]: Detected architecture x86-64.
Jan 13 20:51:40.379778 systemd[1]: Detected first boot.
Jan 13 20:51:40.379816 systemd[1]: Hostname set to .
Jan 13 20:51:40.379846 systemd[1]: Initializing machine ID from VM UUID.
Jan 13 20:51:40.379876 zram_generator::config[1051]: No configuration found.
Jan 13 20:51:40.379904 systemd[1]: Populated /etc with preset unit settings.
Jan 13 20:51:40.379931 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 13 20:51:40.379953 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 13 20:51:40.379975 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 13 20:51:40.380008 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 13 20:51:40.380033 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 13 20:51:40.380060 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 13 20:51:40.380081 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 13 20:51:40.380102 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 13 20:51:40.380124 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 13 20:51:40.380150 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 13 20:51:40.380172 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 13 20:51:40.380199 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 20:51:40.380235 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 20:51:40.380258 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 13 20:51:40.380287 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 13 20:51:40.380309 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 13 20:51:40.380332 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 13 20:51:40.380353 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 13 20:51:40.380374 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 20:51:40.380395 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 13 20:51:40.380435 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 13 20:51:40.380461 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 13 20:51:40.380482 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 13 20:51:40.380503 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 20:51:40.380524 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 13 20:51:40.380545 systemd[1]: Reached target slices.target - Slice Units.
Jan 13 20:51:40.380565 systemd[1]: Reached target swap.target - Swaps.
Jan 13 20:51:40.380600 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 13 20:51:40.380623 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 13 20:51:40.380646 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 20:51:40.380667 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 13 20:51:40.380687 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 20:51:40.380709 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 13 20:51:40.380731 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 13 20:51:40.380752 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 13 20:51:40.380773 systemd[1]: Mounting media.mount - External Media Directory...
Jan 13 20:51:40.380831 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 20:51:40.380857 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 13 20:51:40.380878 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 13 20:51:40.380899 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 13 20:51:40.380921 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 13 20:51:40.380942 systemd[1]: Reached target machines.target - Containers.
Jan 13 20:51:40.380963 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 13 20:51:40.380985 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 20:51:40.381021 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 13 20:51:40.381044 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 13 20:51:40.381072 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 13 20:51:40.381096 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 13 20:51:40.381136 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 20:51:40.381172 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 13 20:51:40.381206 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 20:51:40.381229 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 13 20:51:40.381251 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 13 20:51:40.381279 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 13 20:51:40.381301 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 13 20:51:40.381322 systemd[1]: Stopped systemd-fsck-usr.service. Jan 13 20:51:40.381343 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 13 20:51:40.381363 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 13 20:51:40.381384 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 13 20:51:40.381417 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 13 20:51:40.381450 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 13 20:51:40.381479 systemd[1]: verity-setup.service: Deactivated successfully. Jan 13 20:51:40.381502 systemd[1]: Stopped verity-setup.service. Jan 13 20:51:40.381524 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 20:51:40.381546 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 13 20:51:40.381568 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. 
Jan 13 20:51:40.381589 kernel: loop: module loaded
Jan 13 20:51:40.381622 systemd[1]: Mounted media.mount - External Media Directory.
Jan 13 20:51:40.381645 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 13 20:51:40.381666 kernel: fuse: init (API version 7.39)
Jan 13 20:51:40.381686 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 13 20:51:40.381707 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 13 20:51:40.381739 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 13 20:51:40.381762 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 20:51:40.381783 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 13 20:51:40.381829 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 13 20:51:40.381866 kernel: ACPI: bus type drm_connector registered
Jan 13 20:51:40.381904 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 13 20:51:40.381939 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 13 20:51:40.381962 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 13 20:51:40.381983 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 13 20:51:40.382033 systemd-journald[1144]: Collecting audit messages is disabled.
Jan 13 20:51:40.382072 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 13 20:51:40.382094 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 13 20:51:40.382115 systemd-journald[1144]: Journal started
Jan 13 20:51:40.382162 systemd-journald[1144]: Runtime Journal (/run/log/journal/a88b44632d1a4699a0dfe059f578846c) is 4.7M, max 37.9M, 33.2M free.
Jan 13 20:51:39.980078 systemd[1]: Queued start job for default target multi-user.target.
Jan 13 20:51:39.996764 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jan 13 20:51:39.997518 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 13 20:51:40.387823 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 13 20:51:40.389352 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 13 20:51:40.389607 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 13 20:51:40.390685 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 13 20:51:40.390947 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 13 20:51:40.391999 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 13 20:51:40.393105 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 13 20:51:40.394182 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 13 20:51:40.409881 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 13 20:51:40.419170 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 13 20:51:40.429864 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 13 20:51:40.430640 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 13 20:51:40.430695 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 13 20:51:40.433305 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 13 20:51:40.439103 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 13 20:51:40.447596 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 13 20:51:40.448471 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 20:51:40.454986 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 13 20:51:40.465994 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 13 20:51:40.467702 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 13 20:51:40.473759 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 13 20:51:40.474871 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 13 20:51:40.482685 systemd-journald[1144]: Time spent on flushing to /var/log/journal/a88b44632d1a4699a0dfe059f578846c is 85.570ms for 1117 entries.
Jan 13 20:51:40.482685 systemd-journald[1144]: System Journal (/var/log/journal/a88b44632d1a4699a0dfe059f578846c) is 8.0M, max 584.8M, 576.8M free.
Jan 13 20:51:40.596456 systemd-journald[1144]: Received client request to flush runtime journal.
Jan 13 20:51:40.482078 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 13 20:51:40.492997 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 13 20:51:40.503995 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 13 20:51:40.508940 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 13 20:51:40.512036 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 13 20:51:40.604917 kernel: loop0: detected capacity change from 0 to 138184
Jan 13 20:51:40.513854 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 13 20:51:40.597332 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 13 20:51:40.600702 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 13 20:51:40.611043 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 13 20:51:40.613553 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 13 20:51:40.646873 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 13 20:51:40.661054 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 13 20:51:40.667435 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 13 20:51:40.668962 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 13 20:51:40.679917 systemd-tmpfiles[1185]: ACLs are not supported, ignoring.
Jan 13 20:51:40.679944 systemd-tmpfiles[1185]: ACLs are not supported, ignoring.
Jan 13 20:51:40.692835 kernel: loop1: detected capacity change from 0 to 205544
Jan 13 20:51:40.702256 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 13 20:51:40.716162 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 13 20:51:40.729664 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 20:51:40.740053 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 13 20:51:40.753446 kernel: loop2: detected capacity change from 0 to 141000
Jan 13 20:51:40.828336 udevadm[1204]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Jan 13 20:51:40.835404 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 13 20:51:40.851846 kernel: loop3: detected capacity change from 0 to 8
Jan 13 20:51:40.849351 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 13 20:51:40.873379 systemd-tmpfiles[1210]: ACLs are not supported, ignoring.
Jan 13 20:51:40.873863 systemd-tmpfiles[1210]: ACLs are not supported, ignoring.
Jan 13 20:51:40.880984 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 20:51:40.887877 kernel: loop4: detected capacity change from 0 to 138184
Jan 13 20:51:40.918832 kernel: loop5: detected capacity change from 0 to 205544
Jan 13 20:51:40.941858 kernel: loop6: detected capacity change from 0 to 141000
Jan 13 20:51:40.993750 kernel: loop7: detected capacity change from 0 to 8
Jan 13 20:51:40.995015 (sd-merge)[1214]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'.
Jan 13 20:51:40.996584 (sd-merge)[1214]: Merged extensions into '/usr'.
Jan 13 20:51:41.004075 systemd[1]: Reloading requested from client PID 1184 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 13 20:51:41.004244 systemd[1]: Reloading...
Jan 13 20:51:41.200431 zram_generator::config[1240]: No configuration found.
Jan 13 20:51:41.236225 ldconfig[1179]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 13 20:51:41.407836 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 13 20:51:41.479997 systemd[1]: Reloading finished in 474 ms.
Jan 13 20:51:41.526321 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 13 20:51:41.527711 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 13 20:51:41.530024 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 13 20:51:41.542019 systemd[1]: Starting ensure-sysext.service...
Jan 13 20:51:41.544629 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 13 20:51:41.552975 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 20:51:41.559062 systemd[1]: Reloading requested from client PID 1297 ('systemctl') (unit ensure-sysext.service)...
Jan 13 20:51:41.559205 systemd[1]: Reloading...
Jan 13 20:51:41.596679 systemd-tmpfiles[1298]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 13 20:51:41.597706 systemd-tmpfiles[1298]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 13 20:51:41.599299 systemd-tmpfiles[1298]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 13 20:51:41.599891 systemd-tmpfiles[1298]: ACLs are not supported, ignoring.
Jan 13 20:51:41.600131 systemd-tmpfiles[1298]: ACLs are not supported, ignoring.
Jan 13 20:51:41.605153 systemd-udevd[1299]: Using default interface naming scheme 'v255'.
Jan 13 20:51:41.609739 systemd-tmpfiles[1298]: Detected autofs mount point /boot during canonicalization of boot.
Jan 13 20:51:41.610302 systemd-tmpfiles[1298]: Skipping /boot
Jan 13 20:51:41.655100 systemd-tmpfiles[1298]: Detected autofs mount point /boot during canonicalization of boot.
Jan 13 20:51:41.655119 systemd-tmpfiles[1298]: Skipping /boot
Jan 13 20:51:41.662825 zram_generator::config[1321]: No configuration found.
Jan 13 20:51:41.841825 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 44 scanned by (udev-worker) (1337)
Jan 13 20:51:41.926872 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Jan 13 20:51:41.940841 kernel: ACPI: button: Power Button [PWRF]
Jan 13 20:51:41.993828 kernel: mousedev: PS/2 mouse device common for all mice
Jan 13 20:51:42.018178 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Jan 13 20:51:42.026316 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Jan 13 20:51:42.026625 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jan 13 20:51:42.054837 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Jan 13 20:51:42.069781 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 13 20:51:42.237038 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jan 13 20:51:42.237317 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 13 20:51:42.238224 systemd[1]: Reloading finished in 678 ms.
Jan 13 20:51:42.275387 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 20:51:42.286008 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 20:51:42.309700 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 13 20:51:42.318355 systemd[1]: Finished ensure-sysext.service.
Jan 13 20:51:42.337901 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 20:51:42.343077 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 13 20:51:42.351270 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 13 20:51:42.352455 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 20:51:42.353990 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 13 20:51:42.358028 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 13 20:51:42.368062 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 13 20:51:42.388278 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 13 20:51:42.391166 lvm[1410]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 13 20:51:42.397994 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 13 20:51:42.398894 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 20:51:42.408996 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 13 20:51:42.414040 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 13 20:51:42.420041 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 13 20:51:42.426023 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 13 20:51:42.438002 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 13 20:51:42.441779 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 13 20:51:42.451979 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 20:51:42.453029 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 20:51:42.456871 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 13 20:51:42.458064 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 13 20:51:42.458932 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 13 20:51:42.460371 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 13 20:51:42.460588 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 13 20:51:42.463574 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 13 20:51:42.463894 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 13 20:51:42.466290 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 13 20:51:42.466530 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 13 20:51:42.482311 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 13 20:51:42.492158 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 13 20:51:42.503003 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 13 20:51:42.503857 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 13 20:51:42.504698 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 13 20:51:42.513734 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 13 20:51:42.514902 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 13 20:51:42.516238 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 13 20:51:42.518868 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 13 20:51:42.544257 lvm[1445]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 13 20:51:42.545119 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 13 20:51:42.556415 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 13 20:51:42.571216 augenrules[1456]: No rules
Jan 13 20:51:42.573584 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 13 20:51:42.575591 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 13 20:51:42.596974 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 13 20:51:42.607387 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 13 20:51:42.615155 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 13 20:51:42.738071 systemd-resolved[1426]: Positive Trust Anchors:
Jan 13 20:51:42.738096 systemd-resolved[1426]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 13 20:51:42.738141 systemd-resolved[1426]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 13 20:51:42.742300 systemd-networkd[1425]: lo: Link UP
Jan 13 20:51:42.742313 systemd-networkd[1425]: lo: Gained carrier
Jan 13 20:51:42.748746 systemd-networkd[1425]: Enumeration completed
Jan 13 20:51:42.748945 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 13 20:51:42.749423 systemd-networkd[1425]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 20:51:42.749429 systemd-networkd[1425]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 13 20:51:42.754018 systemd-resolved[1426]: Using system hostname 'srv-so511.gb1.brightbox.com'.
Jan 13 20:51:42.754228 systemd-networkd[1425]: eth0: Link UP
Jan 13 20:51:42.754346 systemd-networkd[1425]: eth0: Gained carrier
Jan 13 20:51:42.754477 systemd-networkd[1425]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 20:51:42.768936 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 13 20:51:42.770763 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jan 13 20:51:42.772115 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:51:42.773634 systemd[1]: Reached target network.target - Network.
Jan 13 20:51:42.774326 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 13 20:51:42.774889 systemd-networkd[1425]: eth0: DHCPv4 address 10.230.36.26/30, gateway 10.230.36.25 acquired from 10.230.36.25
Jan 13 20:51:42.775179 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 13 20:51:42.776171 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 13 20:51:42.776223 systemd-timesyncd[1428]: Network configuration changed, trying to establish connection.
Jan 13 20:51:42.777881 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 13 20:51:42.778829 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 13 20:51:42.779856 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 13 20:51:42.779902 systemd[1]: Reached target paths.target - Path Units.
Jan 13 20:51:42.780547 systemd[1]: Reached target time-set.target - System Time Set.
Jan 13 20:51:42.781493 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 13 20:51:42.782437 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 13 20:51:42.783248 systemd[1]: Reached target timers.target - Timer Units.
Jan 13 20:51:42.785561 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 13 20:51:42.788485 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 13 20:51:42.795057 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 13 20:51:42.797728 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 13 20:51:42.799345 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 13 20:51:42.800191 systemd[1]: Reached target sockets.target - Socket Units.
Jan 13 20:51:42.800878 systemd[1]: Reached target basic.target - Basic System.
Jan 13 20:51:42.801568 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 13 20:51:42.801622 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 13 20:51:42.810088 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 13 20:51:42.816022 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jan 13 20:51:42.832110 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 13 20:51:42.836957 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 13 20:51:42.842001 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 13 20:51:42.844584 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 13 20:51:42.854627 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 13 20:51:42.860032 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 13 20:51:42.865275 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 13 20:51:42.877987 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 13 20:51:42.880618 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 13 20:51:42.881298 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 13 20:51:42.882101 extend-filesystems[1485]: Found loop4
Jan 13 20:51:42.887939 extend-filesystems[1485]: Found loop5
Jan 13 20:51:42.887939 extend-filesystems[1485]: Found loop6
Jan 13 20:51:42.887939 extend-filesystems[1485]: Found loop7
Jan 13 20:51:42.887939 extend-filesystems[1485]: Found vda
Jan 13 20:51:42.887939 extend-filesystems[1485]: Found vda1
Jan 13 20:51:42.887939 extend-filesystems[1485]: Found vda2
Jan 13 20:51:42.887939 extend-filesystems[1485]: Found vda3
Jan 13 20:51:42.887939 extend-filesystems[1485]: Found usr
Jan 13 20:51:42.887939 extend-filesystems[1485]: Found vda4
Jan 13 20:51:42.887939 extend-filesystems[1485]: Found vda6
Jan 13 20:51:42.887939 extend-filesystems[1485]: Found vda7
Jan 13 20:51:42.887939 extend-filesystems[1485]: Found vda9
Jan 13 20:51:42.887939 extend-filesystems[1485]: Checking size of /dev/vda9
Jan 13 20:51:42.953660 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks
Jan 13 20:51:42.953734 jq[1482]: false
Jan 13 20:51:42.954980 extend-filesystems[1485]: Resized partition /dev/vda9
Jan 13 20:51:42.889980 systemd[1]: Starting update-engine.service - Update Engine...
Jan 13 20:51:42.957836 extend-filesystems[1504]: resize2fs 1.47.1 (20-May-2024)
Jan 13 20:51:42.961920 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 44 scanned by (udev-worker) (1323)
Jan 13 20:51:42.908998 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 13 20:51:42.916344 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 13 20:51:42.917253 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 13 20:51:42.962564 jq[1496]: true
Jan 13 20:51:42.917757 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 13 20:51:42.918975 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 13 20:51:42.920348 systemd[1]: motdgen.service: Deactivated successfully.
Jan 13 20:51:42.921419 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 13 20:51:42.968755 jq[1510]: true
Jan 13 20:51:42.977809 dbus-daemon[1481]: [system] SELinux support is enabled
Jan 13 20:51:42.978302 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 13 20:51:42.982644 dbus-daemon[1481]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1425 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Jan 13 20:51:42.984445 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 13 20:51:42.986686 dbus-daemon[1481]: [system] Successfully activated service 'org.freedesktop.systemd1'
Jan 13 20:51:42.984492 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 13 20:51:42.987035 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 13 20:51:42.987076 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 13 20:51:43.012348 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Jan 13 20:51:43.015492 (ntainerd)[1512]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 13 20:51:43.031015 update_engine[1493]: I20250113 20:51:43.030354 1493 main.cc:92] Flatcar Update Engine starting
Jan 13 20:51:43.041749 update_engine[1493]: I20250113 20:51:43.041492 1493 update_check_scheduler.cc:74] Next update check in 2m47s
Jan 13 20:51:43.044335 systemd[1]: Started update-engine.service - Update Engine.
Jan 13 20:51:43.054020 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 13 20:51:43.239837 systemd-logind[1491]: Watching system buttons on /dev/input/event2 (Power Button)
Jan 13 20:51:43.239889 systemd-logind[1491]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jan 13 20:51:43.240452 systemd-logind[1491]: New seat seat0.
Jan 13 20:51:43.242833 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 13 20:51:43.254282 bash[1534]: Updated "/home/core/.ssh/authorized_keys"
Jan 13 20:51:43.256235 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 13 20:51:43.267177 systemd[1]: Starting sshkeys.service...
Jan 13 20:51:43.283057 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Jan 13 20:51:43.287949 kernel: EXT4-fs (vda9): resized filesystem to 15121403
Jan 13 20:51:43.291985 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Jan 13 20:51:43.320662 extend-filesystems[1504]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jan 13 20:51:43.320662 extend-filesystems[1504]: old_desc_blocks = 1, new_desc_blocks = 8
Jan 13 20:51:43.320662 extend-filesystems[1504]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long.
Jan 13 20:51:43.338588 extend-filesystems[1485]: Resized filesystem in /dev/vda9
Jan 13 20:51:43.323896 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 13 20:51:43.324210 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 13 20:51:43.350034 locksmithd[1520]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 13 20:51:43.357249 dbus-daemon[1481]: [system] Successfully activated service 'org.freedesktop.hostname1'
Jan 13 20:51:43.357452 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Jan 13 20:51:43.359482 dbus-daemon[1481]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.8' (uid=0 pid=1518 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Jan 13 20:51:43.373246 systemd[1]: Starting polkit.service - Authorization Manager...
Jan 13 20:51:43.408722 polkitd[1551]: Started polkitd version 121
Jan 13 20:51:43.427320 polkitd[1551]: Loading rules from directory /etc/polkit-1/rules.d
Jan 13 20:51:43.427596 polkitd[1551]: Loading rules from directory /usr/share/polkit-1/rules.d
Jan 13 20:51:43.430020 polkitd[1551]: Finished loading, compiling and executing 2 rules
Jan 13 20:51:43.432335 dbus-daemon[1481]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Jan 13 20:51:43.432602 systemd[1]: Started polkit.service - Authorization Manager.
Jan 13 20:51:43.433516 polkitd[1551]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Jan 13 20:51:43.436657 containerd[1512]: time="2025-01-13T20:51:43.436376325Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Jan 13 20:51:43.461994 systemd-hostnamed[1518]: Hostname set to (static)
Jan 13 20:51:43.481686 containerd[1512]: time="2025-01-13T20:51:43.481635619Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jan 13 20:51:43.484776 containerd[1512]: time="2025-01-13T20:51:43.484733677Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jan 13 20:51:43.484931 containerd[1512]: time="2025-01-13T20:51:43.484905158Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jan 13 20:51:43.486828 containerd[1512]: time="2025-01-13T20:51:43.485006074Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jan 13 20:51:43.486828 containerd[1512]: time="2025-01-13T20:51:43.485341026Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jan 13 20:51:43.486828 containerd[1512]: time="2025-01-13T20:51:43.485392358Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jan 13 20:51:43.486828 containerd[1512]: time="2025-01-13T20:51:43.485498938Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jan 13 20:51:43.486828 containerd[1512]: time="2025-01-13T20:51:43.485522946Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jan 13 20:51:43.486828 containerd[1512]: time="2025-01-13T20:51:43.485782083Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 13 20:51:43.486828 containerd[1512]: time="2025-01-13T20:51:43.485806129Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jan 13 20:51:43.486828 containerd[1512]: time="2025-01-13T20:51:43.485845982Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jan 13 20:51:43.486828 containerd[1512]: time="2025-01-13T20:51:43.485865043Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jan 13 20:51:43.486828 containerd[1512]: time="2025-01-13T20:51:43.486005419Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jan 13 20:51:43.486828 containerd[1512]: time="2025-01-13T20:51:43.486419425Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jan 13 20:51:43.487276 containerd[1512]: time="2025-01-13T20:51:43.486551525Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 13 20:51:43.487276 containerd[1512]: time="2025-01-13T20:51:43.486574758Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jan 13 20:51:43.487276 containerd[1512]: time="2025-01-13T20:51:43.486718626Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..."
type=io.containerd.metadata.v1 Jan 13 20:51:43.487449 containerd[1512]: time="2025-01-13T20:51:43.487424626Z" level=info msg="metadata content store policy set" policy=shared Jan 13 20:51:43.491163 containerd[1512]: time="2025-01-13T20:51:43.491134126Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 13 20:51:43.491402 containerd[1512]: time="2025-01-13T20:51:43.491376093Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 13 20:51:43.491582 containerd[1512]: time="2025-01-13T20:51:43.491556837Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 13 20:51:43.491697 containerd[1512]: time="2025-01-13T20:51:43.491673350Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 13 20:51:43.491803 containerd[1512]: time="2025-01-13T20:51:43.491768018Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 13 20:51:43.492154 containerd[1512]: time="2025-01-13T20:51:43.492128146Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 13 20:51:43.492741 containerd[1512]: time="2025-01-13T20:51:43.492706205Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 13 20:51:43.493060 containerd[1512]: time="2025-01-13T20:51:43.493033349Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 13 20:51:43.493160 containerd[1512]: time="2025-01-13T20:51:43.493137170Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 13 20:51:43.493295 containerd[1512]: time="2025-01-13T20:51:43.493269398Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." 
type=io.containerd.sandbox.controller.v1 Jan 13 20:51:43.493436 containerd[1512]: time="2025-01-13T20:51:43.493412029Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 13 20:51:43.493529 containerd[1512]: time="2025-01-13T20:51:43.493506999Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 13 20:51:43.493621 containerd[1512]: time="2025-01-13T20:51:43.493598111Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 13 20:51:43.493724 containerd[1512]: time="2025-01-13T20:51:43.493703255Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 13 20:51:43.494438 containerd[1512]: time="2025-01-13T20:51:43.493867438Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 13 20:51:43.494438 containerd[1512]: time="2025-01-13T20:51:43.493904286Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 13 20:51:43.494438 containerd[1512]: time="2025-01-13T20:51:43.493924665Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 13 20:51:43.494438 containerd[1512]: time="2025-01-13T20:51:43.493953460Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 13 20:51:43.494438 containerd[1512]: time="2025-01-13T20:51:43.493986448Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 13 20:51:43.494438 containerd[1512]: time="2025-01-13T20:51:43.494021260Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." 
type=io.containerd.grpc.v1 Jan 13 20:51:43.494438 containerd[1512]: time="2025-01-13T20:51:43.494039909Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 13 20:51:43.494438 containerd[1512]: time="2025-01-13T20:51:43.494058276Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 13 20:51:43.494438 containerd[1512]: time="2025-01-13T20:51:43.494076940Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 13 20:51:43.494438 containerd[1512]: time="2025-01-13T20:51:43.494095424Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 13 20:51:43.494438 containerd[1512]: time="2025-01-13T20:51:43.494113334Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 13 20:51:43.494438 containerd[1512]: time="2025-01-13T20:51:43.494132380Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 13 20:51:43.494438 containerd[1512]: time="2025-01-13T20:51:43.494150952Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 13 20:51:43.494438 containerd[1512]: time="2025-01-13T20:51:43.494171680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 13 20:51:43.494963 containerd[1512]: time="2025-01-13T20:51:43.494191368Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 13 20:51:43.494963 containerd[1512]: time="2025-01-13T20:51:43.494210414Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 13 20:51:43.494963 containerd[1512]: time="2025-01-13T20:51:43.494228513Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 Jan 13 20:51:43.494963 containerd[1512]: time="2025-01-13T20:51:43.494248396Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 13 20:51:43.494963 containerd[1512]: time="2025-01-13T20:51:43.494284879Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 13 20:51:43.494963 containerd[1512]: time="2025-01-13T20:51:43.494318024Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 13 20:51:43.494963 containerd[1512]: time="2025-01-13T20:51:43.494336384Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 13 20:51:43.496719 containerd[1512]: time="2025-01-13T20:51:43.495211738Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 13 20:51:43.496719 containerd[1512]: time="2025-01-13T20:51:43.495332734Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 13 20:51:43.496719 containerd[1512]: time="2025-01-13T20:51:43.495368339Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 13 20:51:43.496719 containerd[1512]: time="2025-01-13T20:51:43.495389131Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 13 20:51:43.496719 containerd[1512]: time="2025-01-13T20:51:43.495404987Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 13 20:51:43.496719 containerd[1512]: time="2025-01-13T20:51:43.495424340Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." 
type=io.containerd.nri.v1 Jan 13 20:51:43.496719 containerd[1512]: time="2025-01-13T20:51:43.495449562Z" level=info msg="NRI interface is disabled by configuration." Jan 13 20:51:43.496719 containerd[1512]: time="2025-01-13T20:51:43.495469561Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 13 20:51:43.497045 containerd[1512]: time="2025-01-13T20:51:43.495871048Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true 
SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 13 20:51:43.497045 containerd[1512]: time="2025-01-13T20:51:43.495936016Z" level=info msg="Connect containerd service" Jan 13 20:51:43.497045 containerd[1512]: time="2025-01-13T20:51:43.495996540Z" level=info msg="using legacy CRI server" Jan 13 20:51:43.497045 containerd[1512]: time="2025-01-13T20:51:43.496012436Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 13 20:51:43.497045 containerd[1512]: time="2025-01-13T20:51:43.496242906Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 13 20:51:43.498045 containerd[1512]: time="2025-01-13T20:51:43.498011781Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 13 20:51:43.498292 containerd[1512]: time="2025-01-13T20:51:43.498241899Z" level=info msg="Start subscribing containerd event" Jan 13 
20:51:43.498421 containerd[1512]: time="2025-01-13T20:51:43.498396768Z" level=info msg="Start recovering state" Jan 13 20:51:43.498601 containerd[1512]: time="2025-01-13T20:51:43.498577627Z" level=info msg="Start event monitor" Jan 13 20:51:43.498719 containerd[1512]: time="2025-01-13T20:51:43.498698000Z" level=info msg="Start snapshots syncer" Jan 13 20:51:43.498828 containerd[1512]: time="2025-01-13T20:51:43.498788370Z" level=info msg="Start cni network conf syncer for default" Jan 13 20:51:43.498962 containerd[1512]: time="2025-01-13T20:51:43.498937275Z" level=info msg="Start streaming server" Jan 13 20:51:43.499783 containerd[1512]: time="2025-01-13T20:51:43.499756991Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 13 20:51:43.500822 containerd[1512]: time="2025-01-13T20:51:43.500057208Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 13 20:51:43.503974 containerd[1512]: time="2025-01-13T20:51:43.503948380Z" level=info msg="containerd successfully booted in 0.068996s" Jan 13 20:51:43.504064 systemd[1]: Started containerd.service - containerd container runtime. Jan 13 20:51:43.639443 sshd_keygen[1519]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 13 20:51:43.667430 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 13 20:51:43.675210 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 13 20:51:43.694415 systemd[1]: issuegen.service: Deactivated successfully. Jan 13 20:51:43.694717 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 13 20:51:43.710280 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 13 20:51:43.723393 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 13 20:51:43.731304 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 13 20:51:43.738142 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. 
Jan 13 20:51:43.739450 systemd[1]: Reached target getty.target - Login Prompts. Jan 13 20:51:43.788856 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 13 20:51:43.795216 systemd[1]: Started sshd@0-10.230.36.26:22-139.178.68.195:57354.service - OpenSSH per-connection server daemon (139.178.68.195:57354). Jan 13 20:51:44.374695 systemd-resolved[1426]: Clock change detected. Flushing caches. Jan 13 20:51:44.374915 systemd-timesyncd[1428]: Contacted time server 217.144.90.27:123 (0.flatcar.pool.ntp.org). Jan 13 20:51:44.375041 systemd-timesyncd[1428]: Initial clock synchronization to Mon 2025-01-13 20:51:44.374612 UTC. Jan 13 20:51:44.928475 systemd-networkd[1425]: eth0: Gained IPv6LL Jan 13 20:51:44.933659 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 13 20:51:44.935768 systemd[1]: Reached target network-online.target - Network is Online. Jan 13 20:51:44.944934 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:51:44.956492 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 13 20:51:44.994955 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 13 20:51:45.232805 sshd[1580]: Accepted publickey for core from 139.178.68.195 port 57354 ssh2: RSA SHA256:hnRa+lrXktC2wPLY5bcSKNUrJK0GTTLH7jAG9gNraiM Jan 13 20:51:45.236382 sshd-session[1580]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:51:45.256758 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 13 20:51:45.266223 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 13 20:51:45.272234 systemd-logind[1491]: New session 1 of user core. Jan 13 20:51:45.288830 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 13 20:51:45.299350 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Jan 13 20:51:45.314289 (systemd)[1597]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 13 20:51:45.457656 systemd[1597]: Queued start job for default target default.target. Jan 13 20:51:45.466792 systemd[1597]: Created slice app.slice - User Application Slice. Jan 13 20:51:45.466984 systemd[1597]: Reached target paths.target - Paths. Jan 13 20:51:45.467036 systemd[1597]: Reached target timers.target - Timers. Jan 13 20:51:45.470170 systemd[1597]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 13 20:51:45.487535 systemd[1597]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 13 20:51:45.487737 systemd[1597]: Reached target sockets.target - Sockets. Jan 13 20:51:45.487763 systemd[1597]: Reached target basic.target - Basic System. Jan 13 20:51:45.487832 systemd[1597]: Reached target default.target - Main User Target. Jan 13 20:51:45.487905 systemd[1597]: Startup finished in 161ms. Jan 13 20:51:45.488058 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 13 20:51:45.498392 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 13 20:51:45.867258 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:51:45.869490 (kubelet)[1612]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:51:46.139203 systemd[1]: Started sshd@1-10.230.36.26:22-139.178.68.195:33648.service - OpenSSH per-connection server daemon (139.178.68.195:33648). Jan 13 20:51:46.298022 systemd-networkd[1425]: eth0: Ignoring DHCPv6 address 2a02:1348:179:8906:24:19ff:fee6:241a/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:8906:24:19ff:fee6:241a/64 assigned by NDisc. Jan 13 20:51:46.298567 systemd-networkd[1425]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. 
Jan 13 20:51:46.490994 kubelet[1612]: E0113 20:51:46.490769 1612 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:51:46.493967 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:51:46.494260 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:51:46.494786 systemd[1]: kubelet.service: Consumed 1.002s CPU time. Jan 13 20:51:47.032598 sshd[1618]: Accepted publickey for core from 139.178.68.195 port 33648 ssh2: RSA SHA256:hnRa+lrXktC2wPLY5bcSKNUrJK0GTTLH7jAG9gNraiM Jan 13 20:51:47.034819 sshd-session[1618]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:51:47.042979 systemd-logind[1491]: New session 2 of user core. Jan 13 20:51:47.059312 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 13 20:51:47.650681 sshd[1626]: Connection closed by 139.178.68.195 port 33648 Jan 13 20:51:47.651766 sshd-session[1618]: pam_unix(sshd:session): session closed for user core Jan 13 20:51:47.656162 systemd[1]: sshd@1-10.230.36.26:22-139.178.68.195:33648.service: Deactivated successfully. Jan 13 20:51:47.659429 systemd[1]: session-2.scope: Deactivated successfully. Jan 13 20:51:47.661535 systemd-logind[1491]: Session 2 logged out. Waiting for processes to exit. Jan 13 20:51:47.663381 systemd-logind[1491]: Removed session 2. Jan 13 20:51:47.811444 systemd[1]: Started sshd@2-10.230.36.26:22-139.178.68.195:33664.service - OpenSSH per-connection server daemon (139.178.68.195:33664). 
Jan 13 20:51:48.701424 sshd[1631]: Accepted publickey for core from 139.178.68.195 port 33664 ssh2: RSA SHA256:hnRa+lrXktC2wPLY5bcSKNUrJK0GTTLH7jAG9gNraiM Jan 13 20:51:48.703593 sshd-session[1631]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:51:48.711027 systemd-logind[1491]: New session 3 of user core. Jan 13 20:51:48.727423 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 13 20:51:49.300814 agetty[1577]: failed to open credentials directory Jan 13 20:51:49.300863 agetty[1578]: failed to open credentials directory Jan 13 20:51:49.317084 sshd[1633]: Connection closed by 139.178.68.195 port 33664 Jan 13 20:51:49.317557 sshd-session[1631]: pam_unix(sshd:session): session closed for user core Jan 13 20:51:49.317998 login[1577]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 13 20:51:49.322504 login[1578]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 13 20:51:49.326142 systemd[1]: sshd@2-10.230.36.26:22-139.178.68.195:33664.service: Deactivated successfully. Jan 13 20:51:49.328564 systemd[1]: session-3.scope: Deactivated successfully. Jan 13 20:51:49.331894 systemd-logind[1491]: New session 4 of user core. Jan 13 20:51:49.338319 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 13 20:51:49.339154 systemd-logind[1491]: Session 3 logged out. Waiting for processes to exit. Jan 13 20:51:49.343527 systemd-logind[1491]: New session 5 of user core. Jan 13 20:51:49.351273 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 13 20:51:49.352574 systemd-logind[1491]: Removed session 3. 
Jan 13 20:51:50.428341 coreos-metadata[1480]: Jan 13 20:51:50.428 WARN failed to locate config-drive, using the metadata service API instead Jan 13 20:51:50.454694 coreos-metadata[1480]: Jan 13 20:51:50.454 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Jan 13 20:51:50.460413 coreos-metadata[1480]: Jan 13 20:51:50.460 INFO Fetch failed with 404: resource not found Jan 13 20:51:50.460537 coreos-metadata[1480]: Jan 13 20:51:50.460 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jan 13 20:51:50.461076 coreos-metadata[1480]: Jan 13 20:51:50.461 INFO Fetch successful Jan 13 20:51:50.461076 coreos-metadata[1480]: Jan 13 20:51:50.461 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Jan 13 20:51:50.471669 coreos-metadata[1480]: Jan 13 20:51:50.471 INFO Fetch successful Jan 13 20:51:50.471669 coreos-metadata[1480]: Jan 13 20:51:50.471 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Jan 13 20:51:50.486122 coreos-metadata[1480]: Jan 13 20:51:50.486 INFO Fetch successful Jan 13 20:51:50.486122 coreos-metadata[1480]: Jan 13 20:51:50.486 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Jan 13 20:51:50.500071 coreos-metadata[1480]: Jan 13 20:51:50.499 INFO Fetch successful Jan 13 20:51:50.500071 coreos-metadata[1480]: Jan 13 20:51:50.500 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Jan 13 20:51:50.519430 coreos-metadata[1480]: Jan 13 20:51:50.519 INFO Fetch successful Jan 13 20:51:50.557952 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 13 20:51:50.559445 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
Jan 13 20:51:50.880372 coreos-metadata[1544]: Jan 13 20:51:50.880 WARN failed to locate config-drive, using the metadata service API instead Jan 13 20:51:50.904794 coreos-metadata[1544]: Jan 13 20:51:50.904 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Jan 13 20:51:50.930310 coreos-metadata[1544]: Jan 13 20:51:50.930 INFO Fetch successful Jan 13 20:51:50.930732 coreos-metadata[1544]: Jan 13 20:51:50.930 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 13 20:51:50.963362 coreos-metadata[1544]: Jan 13 20:51:50.963 INFO Fetch successful Jan 13 20:51:50.966040 unknown[1544]: wrote ssh authorized keys file for user: core Jan 13 20:51:51.010913 update-ssh-keys[1672]: Updated "/home/core/.ssh/authorized_keys" Jan 13 20:51:51.013280 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 13 20:51:51.015780 systemd[1]: Finished sshkeys.service. Jan 13 20:51:51.019487 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 13 20:51:51.019710 systemd[1]: Startup finished in 1.292s (kernel) + 13.385s (initrd) + 11.405s (userspace) = 26.084s. Jan 13 20:51:56.745338 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 13 20:51:56.757339 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:51:56.916773 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 13 20:51:56.933497 (kubelet)[1684]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:51:57.000979 kubelet[1684]: E0113 20:51:57.000735 1684 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:51:57.004984 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:51:57.005312 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:51:59.477206 systemd[1]: Started sshd@3-10.230.36.26:22-139.178.68.195:46044.service - OpenSSH per-connection server daemon (139.178.68.195:46044). Jan 13 20:52:00.385451 sshd[1693]: Accepted publickey for core from 139.178.68.195 port 46044 ssh2: RSA SHA256:hnRa+lrXktC2wPLY5bcSKNUrJK0GTTLH7jAG9gNraiM Jan 13 20:52:00.387548 sshd-session[1693]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:52:00.394732 systemd-logind[1491]: New session 6 of user core. Jan 13 20:52:00.413443 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 13 20:52:01.007070 sshd[1695]: Connection closed by 139.178.68.195 port 46044 Jan 13 20:52:01.008019 sshd-session[1693]: pam_unix(sshd:session): session closed for user core Jan 13 20:52:01.013163 systemd[1]: sshd@3-10.230.36.26:22-139.178.68.195:46044.service: Deactivated successfully. Jan 13 20:52:01.015435 systemd[1]: session-6.scope: Deactivated successfully. Jan 13 20:52:01.016297 systemd-logind[1491]: Session 6 logged out. Waiting for processes to exit. Jan 13 20:52:01.017812 systemd-logind[1491]: Removed session 6. 
Jan 13 20:52:01.165366 systemd[1]: Started sshd@4-10.230.36.26:22-139.178.68.195:46052.service - OpenSSH per-connection server daemon (139.178.68.195:46052). Jan 13 20:52:02.060776 sshd[1700]: Accepted publickey for core from 139.178.68.195 port 46052 ssh2: RSA SHA256:hnRa+lrXktC2wPLY5bcSKNUrJK0GTTLH7jAG9gNraiM Jan 13 20:52:02.062693 sshd-session[1700]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:52:02.069054 systemd-logind[1491]: New session 7 of user core. Jan 13 20:52:02.084234 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 13 20:52:02.676554 sshd[1702]: Connection closed by 139.178.68.195 port 46052 Jan 13 20:52:02.675498 sshd-session[1700]: pam_unix(sshd:session): session closed for user core Jan 13 20:52:02.680588 systemd[1]: sshd@4-10.230.36.26:22-139.178.68.195:46052.service: Deactivated successfully. Jan 13 20:52:02.683411 systemd[1]: session-7.scope: Deactivated successfully. Jan 13 20:52:02.685718 systemd-logind[1491]: Session 7 logged out. Waiting for processes to exit. Jan 13 20:52:02.687340 systemd-logind[1491]: Removed session 7. Jan 13 20:52:02.841377 systemd[1]: Started sshd@5-10.230.36.26:22-139.178.68.195:46058.service - OpenSSH per-connection server daemon (139.178.68.195:46058). Jan 13 20:52:03.735044 sshd[1707]: Accepted publickey for core from 139.178.68.195 port 46058 ssh2: RSA SHA256:hnRa+lrXktC2wPLY5bcSKNUrJK0GTTLH7jAG9gNraiM Jan 13 20:52:03.736953 sshd-session[1707]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:52:03.744821 systemd-logind[1491]: New session 8 of user core. Jan 13 20:52:03.751239 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jan 13 20:52:04.354390 sshd[1709]: Connection closed by 139.178.68.195 port 46058 Jan 13 20:52:04.355310 sshd-session[1707]: pam_unix(sshd:session): session closed for user core Jan 13 20:52:04.359761 systemd[1]: sshd@5-10.230.36.26:22-139.178.68.195:46058.service: Deactivated successfully. Jan 13 20:52:04.361831 systemd[1]: session-8.scope: Deactivated successfully. Jan 13 20:52:04.362765 systemd-logind[1491]: Session 8 logged out. Waiting for processes to exit. Jan 13 20:52:04.364352 systemd-logind[1491]: Removed session 8. Jan 13 20:52:04.513364 systemd[1]: Started sshd@6-10.230.36.26:22-139.178.68.195:46072.service - OpenSSH per-connection server daemon (139.178.68.195:46072). Jan 13 20:52:05.406892 sshd[1714]: Accepted publickey for core from 139.178.68.195 port 46072 ssh2: RSA SHA256:hnRa+lrXktC2wPLY5bcSKNUrJK0GTTLH7jAG9gNraiM Jan 13 20:52:05.408786 sshd-session[1714]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:52:05.414658 systemd-logind[1491]: New session 9 of user core. Jan 13 20:52:05.422253 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 13 20:52:05.897048 sudo[1717]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 13 20:52:05.897558 sudo[1717]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:52:05.912056 sudo[1717]: pam_unix(sudo:session): session closed for user root Jan 13 20:52:06.056064 sshd[1716]: Connection closed by 139.178.68.195 port 46072 Jan 13 20:52:06.057442 sshd-session[1714]: pam_unix(sshd:session): session closed for user core Jan 13 20:52:06.063290 systemd[1]: sshd@6-10.230.36.26:22-139.178.68.195:46072.service: Deactivated successfully. Jan 13 20:52:06.065647 systemd[1]: session-9.scope: Deactivated successfully. Jan 13 20:52:06.066677 systemd-logind[1491]: Session 9 logged out. Waiting for processes to exit. Jan 13 20:52:06.068440 systemd-logind[1491]: Removed session 9. 
Jan 13 20:52:06.214443 systemd[1]: Started sshd@7-10.230.36.26:22-139.178.68.195:39218.service - OpenSSH per-connection server daemon (139.178.68.195:39218). Jan 13 20:52:07.112316 sshd[1722]: Accepted publickey for core from 139.178.68.195 port 39218 ssh2: RSA SHA256:hnRa+lrXktC2wPLY5bcSKNUrJK0GTTLH7jAG9gNraiM Jan 13 20:52:07.114335 sshd-session[1722]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:52:07.116140 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 13 20:52:07.131389 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:52:07.139401 systemd-logind[1491]: New session 10 of user core. Jan 13 20:52:07.141322 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 13 20:52:07.275242 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:52:07.285825 (kubelet)[1733]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:52:07.337889 kubelet[1733]: E0113 20:52:07.337803 1733 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:52:07.341041 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:52:07.341298 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 13 20:52:07.591230 sudo[1742]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 13 20:52:07.592430 sudo[1742]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:52:07.598673 sudo[1742]: pam_unix(sudo:session): session closed for user root Jan 13 20:52:07.607701 sudo[1741]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 13 20:52:07.608236 sudo[1741]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:52:07.630452 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 13 20:52:07.671458 augenrules[1764]: No rules Jan 13 20:52:07.673238 systemd[1]: audit-rules.service: Deactivated successfully. Jan 13 20:52:07.673519 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 13 20:52:07.675978 sudo[1741]: pam_unix(sudo:session): session closed for user root Jan 13 20:52:07.819078 sshd[1727]: Connection closed by 139.178.68.195 port 39218 Jan 13 20:52:07.819879 sshd-session[1722]: pam_unix(sshd:session): session closed for user core Jan 13 20:52:07.823614 systemd[1]: sshd@7-10.230.36.26:22-139.178.68.195:39218.service: Deactivated successfully. Jan 13 20:52:07.825856 systemd[1]: session-10.scope: Deactivated successfully. Jan 13 20:52:07.827639 systemd-logind[1491]: Session 10 logged out. Waiting for processes to exit. Jan 13 20:52:07.829195 systemd-logind[1491]: Removed session 10. Jan 13 20:52:07.981335 systemd[1]: Started sshd@8-10.230.36.26:22-139.178.68.195:39232.service - OpenSSH per-connection server daemon (139.178.68.195:39232). 
Jan 13 20:52:08.877317 sshd[1772]: Accepted publickey for core from 139.178.68.195 port 39232 ssh2: RSA SHA256:hnRa+lrXktC2wPLY5bcSKNUrJK0GTTLH7jAG9gNraiM Jan 13 20:52:08.879497 sshd-session[1772]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:52:08.887614 systemd-logind[1491]: New session 11 of user core. Jan 13 20:52:08.894217 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 13 20:52:09.354585 sudo[1775]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 13 20:52:09.355107 sudo[1775]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:52:10.042127 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:52:10.050353 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:52:10.092912 systemd[1]: Reloading requested from client PID 1807 ('systemctl') (unit session-11.scope)... Jan 13 20:52:10.093175 systemd[1]: Reloading... Jan 13 20:52:10.224084 zram_generator::config[1846]: No configuration found. Jan 13 20:52:10.412438 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 20:52:10.525521 systemd[1]: Reloading finished in 431 ms. Jan 13 20:52:10.603530 systemd[1]: kubelet.service: Deactivated successfully. Jan 13 20:52:10.603890 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:52:10.612374 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:52:10.753363 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 13 20:52:10.767448 (kubelet)[1914]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 20:52:10.842483 kubelet[1914]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 20:52:10.842483 kubelet[1914]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 13 20:52:10.842483 kubelet[1914]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 20:52:10.843771 kubelet[1914]: I0113 20:52:10.843672 1914 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 20:52:11.745039 kubelet[1914]: I0113 20:52:11.744291 1914 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Jan 13 20:52:11.745039 kubelet[1914]: I0113 20:52:11.744334 1914 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 20:52:11.745039 kubelet[1914]: I0113 20:52:11.744824 1914 server.go:929] "Client rotation is on, will bootstrap in background" Jan 13 20:52:11.785955 kubelet[1914]: I0113 20:52:11.785909 1914 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 20:52:11.795579 kubelet[1914]: E0113 20:52:11.795547 1914 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 13 20:52:11.795689 kubelet[1914]: I0113 20:52:11.795669 1914 server.go:1403] "CRI 
implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 13 20:52:11.803501 kubelet[1914]: I0113 20:52:11.803477 1914 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 13 20:52:11.805109 kubelet[1914]: I0113 20:52:11.805083 1914 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 13 20:52:11.805543 kubelet[1914]: I0113 20:52:11.805498 1914 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 20:52:11.805910 kubelet[1914]: I0113 20:52:11.805620 1914 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.230.36.26","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"
none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 13 20:52:11.806255 kubelet[1914]: I0113 20:52:11.806234 1914 topology_manager.go:138] "Creating topology manager with none policy" Jan 13 20:52:11.806732 kubelet[1914]: I0113 20:52:11.806361 1914 container_manager_linux.go:300] "Creating device plugin manager" Jan 13 20:52:11.806732 kubelet[1914]: I0113 20:52:11.806548 1914 state_mem.go:36] "Initialized new in-memory state store" Jan 13 20:52:11.808585 kubelet[1914]: I0113 20:52:11.808160 1914 kubelet.go:408] "Attempting to sync node with API server" Jan 13 20:52:11.808585 kubelet[1914]: I0113 20:52:11.808187 1914 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 20:52:11.808585 kubelet[1914]: I0113 20:52:11.808244 1914 kubelet.go:314] "Adding apiserver pod source" Jan 13 20:52:11.808585 kubelet[1914]: I0113 20:52:11.808282 1914 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 20:52:11.808814 kubelet[1914]: E0113 20:52:11.808778 1914 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:52:11.808912 kubelet[1914]: E0113 20:52:11.808875 1914 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:52:11.814350 kubelet[1914]: I0113 20:52:11.814189 1914 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 13 20:52:11.815615 kubelet[1914]: W0113 20:52:11.815387 1914 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User 
"system:anonymous" cannot list resource "services" in API group "" at the cluster scope Jan 13 20:52:11.815615 kubelet[1914]: E0113 20:52:11.815435 1914 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Jan 13 20:52:11.815615 kubelet[1914]: W0113 20:52:11.815552 1914 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "10.230.36.26" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Jan 13 20:52:11.815615 kubelet[1914]: E0113 20:52:11.815578 1914 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"10.230.36.26\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" Jan 13 20:52:11.816143 kubelet[1914]: I0113 20:52:11.816121 1914 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 20:52:11.816893 kubelet[1914]: W0113 20:52:11.816860 1914 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jan 13 20:52:11.818015 kubelet[1914]: I0113 20:52:11.817918 1914 server.go:1269] "Started kubelet" Jan 13 20:52:11.820618 kubelet[1914]: I0113 20:52:11.820580 1914 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 20:52:11.827720 kubelet[1914]: I0113 20:52:11.827668 1914 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 20:52:11.829658 kubelet[1914]: I0113 20:52:11.829144 1914 server.go:460] "Adding debug handlers to kubelet server" Jan 13 20:52:11.832110 kubelet[1914]: I0113 20:52:11.830326 1914 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 13 20:52:11.832110 kubelet[1914]: E0113 20:52:11.831021 1914 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.230.36.26\" not found" Jan 13 20:52:11.832110 kubelet[1914]: I0113 20:52:11.831716 1914 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 13 20:52:11.832720 kubelet[1914]: I0113 20:52:11.832698 1914 reconciler.go:26] "Reconciler: start to sync state" Jan 13 20:52:11.836278 kubelet[1914]: I0113 20:52:11.833191 1914 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 20:52:11.838378 kubelet[1914]: I0113 20:52:11.838353 1914 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 20:52:11.838587 kubelet[1914]: I0113 20:52:11.837317 1914 factory.go:221] Registration of the systemd container factory successfully Jan 13 20:52:11.838811 kubelet[1914]: I0113 20:52:11.838776 1914 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 13 20:52:11.839136 kubelet[1914]: I0113 20:52:11.833565 1914 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 
13 20:52:11.847395 kubelet[1914]: I0113 20:52:11.847366 1914 factory.go:221] Registration of the containerd container factory successfully Jan 13 20:52:11.848713 kubelet[1914]: E0113 20:52:11.848374 1914 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 13 20:52:11.855024 kubelet[1914]: E0113 20:52:11.853097 1914 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.230.36.26\" not found" node="10.230.36.26" Jan 13 20:52:11.874496 kubelet[1914]: I0113 20:52:11.874457 1914 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 13 20:52:11.874663 kubelet[1914]: I0113 20:52:11.874643 1914 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 13 20:52:11.874793 kubelet[1914]: I0113 20:52:11.874776 1914 state_mem.go:36] "Initialized new in-memory state store" Jan 13 20:52:11.878155 kubelet[1914]: I0113 20:52:11.878125 1914 policy_none.go:49] "None policy: Start" Jan 13 20:52:11.878966 kubelet[1914]: I0113 20:52:11.878935 1914 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 13 20:52:11.879058 kubelet[1914]: I0113 20:52:11.878975 1914 state_mem.go:35] "Initializing new in-memory state store" Jan 13 20:52:11.889163 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 13 20:52:11.907940 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 13 20:52:11.913708 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jan 13 20:52:11.923791 kubelet[1914]: I0113 20:52:11.922379 1914 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 20:52:11.923791 kubelet[1914]: I0113 20:52:11.922704 1914 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 13 20:52:11.923791 kubelet[1914]: I0113 20:52:11.922728 1914 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 13 20:52:11.923791 kubelet[1914]: I0113 20:52:11.923650 1914 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 20:52:11.926152 kubelet[1914]: E0113 20:52:11.926127 1914 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.230.36.26\" not found" Jan 13 20:52:11.930557 kubelet[1914]: I0113 20:52:11.930507 1914 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 13 20:52:11.933496 kubelet[1914]: I0113 20:52:11.933453 1914 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 13 20:52:11.933597 kubelet[1914]: I0113 20:52:11.933526 1914 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 13 20:52:11.933597 kubelet[1914]: I0113 20:52:11.933558 1914 kubelet.go:2321] "Starting kubelet main sync loop" Jan 13 20:52:11.933710 kubelet[1914]: E0113 20:52:11.933686 1914 kubelet.go:2345] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Jan 13 20:52:12.024854 kubelet[1914]: I0113 20:52:12.024702 1914 kubelet_node_status.go:72] "Attempting to register node" node="10.230.36.26" Jan 13 20:52:12.031412 kubelet[1914]: I0113 20:52:12.031347 1914 kubelet_node_status.go:75] "Successfully registered node" node="10.230.36.26" Jan 13 20:52:12.031412 kubelet[1914]: E0113 20:52:12.031384 1914 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"10.230.36.26\": node \"10.230.36.26\" not found" Jan 13 20:52:12.046518 kubelet[1914]: E0113 20:52:12.046479 1914 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.230.36.26\" not found" Jan 13 20:52:12.146980 kubelet[1914]: E0113 20:52:12.146900 1914 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.230.36.26\" not found" Jan 13 20:52:12.247504 kubelet[1914]: E0113 20:52:12.247423 1914 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.230.36.26\" not found" Jan 13 20:52:12.295247 sudo[1775]: pam_unix(sudo:session): session closed for user root Jan 13 20:52:12.347755 kubelet[1914]: E0113 20:52:12.347697 1914 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.230.36.26\" not found" Jan 13 20:52:12.438577 sshd[1774]: Connection closed by 139.178.68.195 port 39232 Jan 13 20:52:12.439549 sshd-session[1772]: pam_unix(sshd:session): session closed for user core Jan 13 20:52:12.443489 systemd[1]: sshd@8-10.230.36.26:22-139.178.68.195:39232.service: Deactivated 
successfully. Jan 13 20:52:12.446943 systemd[1]: session-11.scope: Deactivated successfully. Jan 13 20:52:12.448622 kubelet[1914]: E0113 20:52:12.448582 1914 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.230.36.26\" not found" Jan 13 20:52:12.449038 systemd-logind[1491]: Session 11 logged out. Waiting for processes to exit. Jan 13 20:52:12.450746 systemd-logind[1491]: Removed session 11. Jan 13 20:52:12.549468 kubelet[1914]: E0113 20:52:12.549315 1914 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.230.36.26\" not found" Jan 13 20:52:12.650050 kubelet[1914]: E0113 20:52:12.649980 1914 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.230.36.26\" not found" Jan 13 20:52:12.749065 kubelet[1914]: I0113 20:52:12.748945 1914 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 13 20:52:12.749266 kubelet[1914]: W0113 20:52:12.749214 1914 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 13 20:52:12.749266 kubelet[1914]: W0113 20:52:12.749262 1914 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 13 20:52:12.751155 kubelet[1914]: E0113 20:52:12.751097 1914 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.230.36.26\" not found" Jan 13 20:52:12.809731 kubelet[1914]: E0113 20:52:12.809570 1914 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:52:12.852130 kubelet[1914]: E0113 20:52:12.852079 1914 
kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.230.36.26\" not found" Jan 13 20:52:12.952571 kubelet[1914]: E0113 20:52:12.952524 1914 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.230.36.26\" not found" Jan 13 20:52:13.053411 kubelet[1914]: E0113 20:52:13.053353 1914 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.230.36.26\" not found" Jan 13 20:52:13.154953 kubelet[1914]: I0113 20:52:13.154802 1914 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Jan 13 20:52:13.155750 containerd[1512]: time="2025-01-13T20:52:13.155633994Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 13 20:52:13.156709 kubelet[1914]: I0113 20:52:13.155953 1914 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Jan 13 20:52:13.809769 kubelet[1914]: E0113 20:52:13.809686 1914 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:52:13.809769 kubelet[1914]: I0113 20:52:13.809740 1914 apiserver.go:52] "Watching apiserver" Jan 13 20:52:13.833011 kubelet[1914]: I0113 20:52:13.832635 1914 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 13 20:52:13.834746 systemd[1]: Created slice kubepods-besteffort-poda08a96b4_fff9_48de_9aaa_5ed91102ca3f.slice - libcontainer container kubepods-besteffort-poda08a96b4_fff9_48de_9aaa_5ed91102ca3f.slice. 
Jan 13 20:52:13.845615 kubelet[1914]: I0113 20:52:13.845585 1914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/35d96245-70d6-498b-a42f-dba77c1c7503-etc-cni-netd\") pod \"cilium-75bp6\" (UID: \"35d96245-70d6-498b-a42f-dba77c1c7503\") " pod="kube-system/cilium-75bp6" Jan 13 20:52:13.845813 kubelet[1914]: I0113 20:52:13.845769 1914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/35d96245-70d6-498b-a42f-dba77c1c7503-clustermesh-secrets\") pod \"cilium-75bp6\" (UID: \"35d96245-70d6-498b-a42f-dba77c1c7503\") " pod="kube-system/cilium-75bp6" Jan 13 20:52:13.845966 kubelet[1914]: I0113 20:52:13.845940 1914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/35d96245-70d6-498b-a42f-dba77c1c7503-cilium-config-path\") pod \"cilium-75bp6\" (UID: \"35d96245-70d6-498b-a42f-dba77c1c7503\") " pod="kube-system/cilium-75bp6" Jan 13 20:52:13.846181 kubelet[1914]: I0113 20:52:13.846142 1914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/35d96245-70d6-498b-a42f-dba77c1c7503-hubble-tls\") pod \"cilium-75bp6\" (UID: \"35d96245-70d6-498b-a42f-dba77c1c7503\") " pod="kube-system/cilium-75bp6" Jan 13 20:52:13.846343 kubelet[1914]: I0113 20:52:13.846309 1914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a08a96b4-fff9-48de-9aaa-5ed91102ca3f-lib-modules\") pod \"kube-proxy-h9944\" (UID: \"a08a96b4-fff9-48de-9aaa-5ed91102ca3f\") " pod="kube-system/kube-proxy-h9944" Jan 13 20:52:13.846476 kubelet[1914]: I0113 20:52:13.846454 1914 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/35d96245-70d6-498b-a42f-dba77c1c7503-hostproc\") pod \"cilium-75bp6\" (UID: \"35d96245-70d6-498b-a42f-dba77c1c7503\") " pod="kube-system/cilium-75bp6" Jan 13 20:52:13.846617 kubelet[1914]: I0113 20:52:13.846582 1914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/35d96245-70d6-498b-a42f-dba77c1c7503-cni-path\") pod \"cilium-75bp6\" (UID: \"35d96245-70d6-498b-a42f-dba77c1c7503\") " pod="kube-system/cilium-75bp6" Jan 13 20:52:13.846735 kubelet[1914]: I0113 20:52:13.846713 1914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtcxn\" (UniqueName: \"kubernetes.io/projected/35d96245-70d6-498b-a42f-dba77c1c7503-kube-api-access-qtcxn\") pod \"cilium-75bp6\" (UID: \"35d96245-70d6-498b-a42f-dba77c1c7503\") " pod="kube-system/cilium-75bp6" Jan 13 20:52:13.846926 kubelet[1914]: I0113 20:52:13.846903 1914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/35d96245-70d6-498b-a42f-dba77c1c7503-cilium-run\") pod \"cilium-75bp6\" (UID: \"35d96245-70d6-498b-a42f-dba77c1c7503\") " pod="kube-system/cilium-75bp6" Jan 13 20:52:13.847074 kubelet[1914]: I0113 20:52:13.847051 1914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/35d96245-70d6-498b-a42f-dba77c1c7503-bpf-maps\") pod \"cilium-75bp6\" (UID: \"35d96245-70d6-498b-a42f-dba77c1c7503\") " pod="kube-system/cilium-75bp6" Jan 13 20:52:13.847585 kubelet[1914]: I0113 20:52:13.847213 1914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: 
\"kubernetes.io/configmap/a08a96b4-fff9-48de-9aaa-5ed91102ca3f-kube-proxy\") pod \"kube-proxy-h9944\" (UID: \"a08a96b4-fff9-48de-9aaa-5ed91102ca3f\") " pod="kube-system/kube-proxy-h9944" Jan 13 20:52:13.847585 kubelet[1914]: I0113 20:52:13.847253 1914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a08a96b4-fff9-48de-9aaa-5ed91102ca3f-xtables-lock\") pod \"kube-proxy-h9944\" (UID: \"a08a96b4-fff9-48de-9aaa-5ed91102ca3f\") " pod="kube-system/kube-proxy-h9944" Jan 13 20:52:13.847585 kubelet[1914]: I0113 20:52:13.847306 1914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kc66q\" (UniqueName: \"kubernetes.io/projected/a08a96b4-fff9-48de-9aaa-5ed91102ca3f-kube-api-access-kc66q\") pod \"kube-proxy-h9944\" (UID: \"a08a96b4-fff9-48de-9aaa-5ed91102ca3f\") " pod="kube-system/kube-proxy-h9944" Jan 13 20:52:13.847585 kubelet[1914]: I0113 20:52:13.847351 1914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/35d96245-70d6-498b-a42f-dba77c1c7503-lib-modules\") pod \"cilium-75bp6\" (UID: \"35d96245-70d6-498b-a42f-dba77c1c7503\") " pod="kube-system/cilium-75bp6" Jan 13 20:52:13.847585 kubelet[1914]: I0113 20:52:13.847392 1914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/35d96245-70d6-498b-a42f-dba77c1c7503-host-proc-sys-net\") pod \"cilium-75bp6\" (UID: \"35d96245-70d6-498b-a42f-dba77c1c7503\") " pod="kube-system/cilium-75bp6" Jan 13 20:52:13.847859 kubelet[1914]: I0113 20:52:13.847469 1914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/35d96245-70d6-498b-a42f-dba77c1c7503-host-proc-sys-kernel\") 
pod \"cilium-75bp6\" (UID: \"35d96245-70d6-498b-a42f-dba77c1c7503\") " pod="kube-system/cilium-75bp6" Jan 13 20:52:13.847859 kubelet[1914]: I0113 20:52:13.847497 1914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/35d96245-70d6-498b-a42f-dba77c1c7503-cilium-cgroup\") pod \"cilium-75bp6\" (UID: \"35d96245-70d6-498b-a42f-dba77c1c7503\") " pod="kube-system/cilium-75bp6" Jan 13 20:52:13.847859 kubelet[1914]: I0113 20:52:13.847520 1914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/35d96245-70d6-498b-a42f-dba77c1c7503-xtables-lock\") pod \"cilium-75bp6\" (UID: \"35d96245-70d6-498b-a42f-dba77c1c7503\") " pod="kube-system/cilium-75bp6" Jan 13 20:52:13.862568 systemd[1]: Created slice kubepods-burstable-pod35d96245_70d6_498b_a42f_dba77c1c7503.slice - libcontainer container kubepods-burstable-pod35d96245_70d6_498b_a42f_dba77c1c7503.slice. Jan 13 20:52:14.162312 containerd[1512]: time="2025-01-13T20:52:14.162167162Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-h9944,Uid:a08a96b4-fff9-48de-9aaa-5ed91102ca3f,Namespace:kube-system,Attempt:0,}" Jan 13 20:52:14.176047 containerd[1512]: time="2025-01-13T20:52:14.175807046Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-75bp6,Uid:35d96245-70d6-498b-a42f-dba77c1c7503,Namespace:kube-system,Attempt:0,}" Jan 13 20:52:14.810159 kubelet[1914]: E0113 20:52:14.810056 1914 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:52:15.069065 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3846613736.mount: Deactivated successfully. 
Jan 13 20:52:15.078226 containerd[1512]: time="2025-01-13T20:52:15.077972310Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:52:15.079645 containerd[1512]: time="2025-01-13T20:52:15.079603492Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:52:15.080737 containerd[1512]: time="2025-01-13T20:52:15.080676686Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Jan 13 20:52:15.081913 containerd[1512]: time="2025-01-13T20:52:15.081855038Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 20:52:15.082646 containerd[1512]: time="2025-01-13T20:52:15.082558851Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:52:15.086989 containerd[1512]: time="2025-01-13T20:52:15.086908984Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:52:15.088454 containerd[1512]: time="2025-01-13T20:52:15.088142036Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 912.227741ms" Jan 13 20:52:15.091369 containerd[1512]: 
time="2025-01-13T20:52:15.091202957Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 928.76851ms" Jan 13 20:52:15.229272 containerd[1512]: time="2025-01-13T20:52:15.228625945Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:52:15.229272 containerd[1512]: time="2025-01-13T20:52:15.228801154Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:52:15.229272 containerd[1512]: time="2025-01-13T20:52:15.228839209Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:52:15.229272 containerd[1512]: time="2025-01-13T20:52:15.228975850Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:52:15.230457 containerd[1512]: time="2025-01-13T20:52:15.224583714Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:52:15.230457 containerd[1512]: time="2025-01-13T20:52:15.229780439Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:52:15.230457 containerd[1512]: time="2025-01-13T20:52:15.229839547Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:52:15.230457 containerd[1512]: time="2025-01-13T20:52:15.229995575Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:52:15.337249 systemd[1]: Started cri-containerd-5d41983404c2e59f4c20ec0520616a8d1e6f8788fa3c8248be4a3fbbf49499c5.scope - libcontainer container 5d41983404c2e59f4c20ec0520616a8d1e6f8788fa3c8248be4a3fbbf49499c5. Jan 13 20:52:15.341098 systemd[1]: Started cri-containerd-ea637fe0e3e847123b176e32d7268a189bc97c542782bfaa0ad254fd9bd1eea4.scope - libcontainer container ea637fe0e3e847123b176e32d7268a189bc97c542782bfaa0ad254fd9bd1eea4. Jan 13 20:52:15.385953 containerd[1512]: time="2025-01-13T20:52:15.385895195Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-75bp6,Uid:35d96245-70d6-498b-a42f-dba77c1c7503,Namespace:kube-system,Attempt:0,} returns sandbox id \"ea637fe0e3e847123b176e32d7268a189bc97c542782bfaa0ad254fd9bd1eea4\"" Jan 13 20:52:15.392035 containerd[1512]: time="2025-01-13T20:52:15.391649939Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 13 20:52:15.393357 containerd[1512]: time="2025-01-13T20:52:15.393323401Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-h9944,Uid:a08a96b4-fff9-48de-9aaa-5ed91102ca3f,Namespace:kube-system,Attempt:0,} returns sandbox id \"5d41983404c2e59f4c20ec0520616a8d1e6f8788fa3c8248be4a3fbbf49499c5\"" Jan 13 20:52:15.811216 kubelet[1914]: E0113 20:52:15.811137 1914 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:52:16.340783 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
Jan 13 20:52:16.812194 kubelet[1914]: E0113 20:52:16.812112 1914 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:52:17.812977 kubelet[1914]: E0113 20:52:17.812910 1914 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:52:18.813550 kubelet[1914]: E0113 20:52:18.813351 1914 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:52:19.813832 kubelet[1914]: E0113 20:52:19.813497 1914 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:52:20.813890 kubelet[1914]: E0113 20:52:20.813855 1914 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:52:21.814854 kubelet[1914]: E0113 20:52:21.814789 1914 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:52:22.642498 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount288879751.mount: Deactivated successfully. 
Jan 13 20:52:22.815290 kubelet[1914]: E0113 20:52:22.815228 1914 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:52:23.816206 kubelet[1914]: E0113 20:52:23.816098 1914 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:52:24.816602 kubelet[1914]: E0113 20:52:24.816527 1914 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:52:25.426810 containerd[1512]: time="2025-01-13T20:52:25.426757292Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:52:25.428304 containerd[1512]: time="2025-01-13T20:52:25.428245745Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166735295" Jan 13 20:52:25.428975 containerd[1512]: time="2025-01-13T20:52:25.428542872Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:52:25.431813 containerd[1512]: time="2025-01-13T20:52:25.431315978Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 10.039623538s" Jan 13 20:52:25.431813 containerd[1512]: time="2025-01-13T20:52:25.431359024Z" level=info msg="PullImage 
\"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jan 13 20:52:25.433183 containerd[1512]: time="2025-01-13T20:52:25.432948833Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\"" Jan 13 20:52:25.435549 containerd[1512]: time="2025-01-13T20:52:25.435501030Z" level=info msg="CreateContainer within sandbox \"ea637fe0e3e847123b176e32d7268a189bc97c542782bfaa0ad254fd9bd1eea4\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 13 20:52:25.490659 containerd[1512]: time="2025-01-13T20:52:25.490617279Z" level=info msg="CreateContainer within sandbox \"ea637fe0e3e847123b176e32d7268a189bc97c542782bfaa0ad254fd9bd1eea4\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c7b426039d80f59ef06bbb2e0a23291f287b776966ba3a8ceca111c4204960c6\"" Jan 13 20:52:25.492056 containerd[1512]: time="2025-01-13T20:52:25.492006886Z" level=info msg="StartContainer for \"c7b426039d80f59ef06bbb2e0a23291f287b776966ba3a8ceca111c4204960c6\"" Jan 13 20:52:25.529992 systemd[1]: run-containerd-runc-k8s.io-c7b426039d80f59ef06bbb2e0a23291f287b776966ba3a8ceca111c4204960c6-runc.B99VJF.mount: Deactivated successfully. Jan 13 20:52:25.539291 systemd[1]: Started cri-containerd-c7b426039d80f59ef06bbb2e0a23291f287b776966ba3a8ceca111c4204960c6.scope - libcontainer container c7b426039d80f59ef06bbb2e0a23291f287b776966ba3a8ceca111c4204960c6. Jan 13 20:52:25.574666 containerd[1512]: time="2025-01-13T20:52:25.574461110Z" level=info msg="StartContainer for \"c7b426039d80f59ef06bbb2e0a23291f287b776966ba3a8ceca111c4204960c6\" returns successfully" Jan 13 20:52:25.587714 systemd[1]: cri-containerd-c7b426039d80f59ef06bbb2e0a23291f287b776966ba3a8ceca111c4204960c6.scope: Deactivated successfully. 
Jan 13 20:52:25.816825 kubelet[1914]: E0113 20:52:25.816691 1914 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:52:25.845176 containerd[1512]: time="2025-01-13T20:52:25.844858753Z" level=info msg="shim disconnected" id=c7b426039d80f59ef06bbb2e0a23291f287b776966ba3a8ceca111c4204960c6 namespace=k8s.io Jan 13 20:52:25.845176 containerd[1512]: time="2025-01-13T20:52:25.844956510Z" level=warning msg="cleaning up after shim disconnected" id=c7b426039d80f59ef06bbb2e0a23291f287b776966ba3a8ceca111c4204960c6 namespace=k8s.io Jan 13 20:52:25.845176 containerd[1512]: time="2025-01-13T20:52:25.844978063Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:52:25.861142 containerd[1512]: time="2025-01-13T20:52:25.861069805Z" level=warning msg="cleanup warnings time=\"2025-01-13T20:52:25Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 13 20:52:25.979176 containerd[1512]: time="2025-01-13T20:52:25.979124320Z" level=info msg="CreateContainer within sandbox \"ea637fe0e3e847123b176e32d7268a189bc97c542782bfaa0ad254fd9bd1eea4\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 13 20:52:25.994664 containerd[1512]: time="2025-01-13T20:52:25.994583770Z" level=info msg="CreateContainer within sandbox \"ea637fe0e3e847123b176e32d7268a189bc97c542782bfaa0ad254fd9bd1eea4\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"b2e516f304195e02ed303d0b12cb3f7e6d633243c0c59e388186d9884bb12b7c\"" Jan 13 20:52:25.995449 containerd[1512]: time="2025-01-13T20:52:25.995412217Z" level=info msg="StartContainer for \"b2e516f304195e02ed303d0b12cb3f7e6d633243c0c59e388186d9884bb12b7c\"" Jan 13 20:52:26.030217 systemd[1]: Started cri-containerd-b2e516f304195e02ed303d0b12cb3f7e6d633243c0c59e388186d9884bb12b7c.scope - libcontainer 
container b2e516f304195e02ed303d0b12cb3f7e6d633243c0c59e388186d9884bb12b7c. Jan 13 20:52:26.072504 containerd[1512]: time="2025-01-13T20:52:26.072261993Z" level=info msg="StartContainer for \"b2e516f304195e02ed303d0b12cb3f7e6d633243c0c59e388186d9884bb12b7c\" returns successfully" Jan 13 20:52:26.091622 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 13 20:52:26.092247 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 13 20:52:26.092365 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 13 20:52:26.101358 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 13 20:52:26.101649 systemd[1]: cri-containerd-b2e516f304195e02ed303d0b12cb3f7e6d633243c0c59e388186d9884bb12b7c.scope: Deactivated successfully. Jan 13 20:52:26.135967 containerd[1512]: time="2025-01-13T20:52:26.135856283Z" level=info msg="shim disconnected" id=b2e516f304195e02ed303d0b12cb3f7e6d633243c0c59e388186d9884bb12b7c namespace=k8s.io Jan 13 20:52:26.136224 containerd[1512]: time="2025-01-13T20:52:26.135980439Z" level=warning msg="cleaning up after shim disconnected" id=b2e516f304195e02ed303d0b12cb3f7e6d633243c0c59e388186d9884bb12b7c namespace=k8s.io Jan 13 20:52:26.136224 containerd[1512]: time="2025-01-13T20:52:26.135997959Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:52:26.140623 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 13 20:52:26.465085 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c7b426039d80f59ef06bbb2e0a23291f287b776966ba3a8ceca111c4204960c6-rootfs.mount: Deactivated successfully. 
Jan 13 20:52:26.817265 kubelet[1914]: E0113 20:52:26.816970 1914 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:52:26.982421 containerd[1512]: time="2025-01-13T20:52:26.982154952Z" level=info msg="CreateContainer within sandbox \"ea637fe0e3e847123b176e32d7268a189bc97c542782bfaa0ad254fd9bd1eea4\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 13 20:52:27.023202 containerd[1512]: time="2025-01-13T20:52:27.022560462Z" level=info msg="CreateContainer within sandbox \"ea637fe0e3e847123b176e32d7268a189bc97c542782bfaa0ad254fd9bd1eea4\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"55509c89c08a46b82260079c797e8de85a98ef02a7799c2b16586eb8a736658d\"" Jan 13 20:52:27.024091 containerd[1512]: time="2025-01-13T20:52:27.023461418Z" level=info msg="StartContainer for \"55509c89c08a46b82260079c797e8de85a98ef02a7799c2b16586eb8a736658d\"" Jan 13 20:52:27.074385 systemd[1]: Started cri-containerd-55509c89c08a46b82260079c797e8de85a98ef02a7799c2b16586eb8a736658d.scope - libcontainer container 55509c89c08a46b82260079c797e8de85a98ef02a7799c2b16586eb8a736658d. Jan 13 20:52:27.131409 systemd[1]: cri-containerd-55509c89c08a46b82260079c797e8de85a98ef02a7799c2b16586eb8a736658d.scope: Deactivated successfully. 
Jan 13 20:52:27.133612 containerd[1512]: time="2025-01-13T20:52:27.132902113Z" level=info msg="StartContainer for \"55509c89c08a46b82260079c797e8de85a98ef02a7799c2b16586eb8a736658d\" returns successfully" Jan 13 20:52:27.260918 containerd[1512]: time="2025-01-13T20:52:27.260832998Z" level=info msg="shim disconnected" id=55509c89c08a46b82260079c797e8de85a98ef02a7799c2b16586eb8a736658d namespace=k8s.io Jan 13 20:52:27.260918 containerd[1512]: time="2025-01-13T20:52:27.260901446Z" level=warning msg="cleaning up after shim disconnected" id=55509c89c08a46b82260079c797e8de85a98ef02a7799c2b16586eb8a736658d namespace=k8s.io Jan 13 20:52:27.260918 containerd[1512]: time="2025-01-13T20:52:27.260916706Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:52:27.279690 containerd[1512]: time="2025-01-13T20:52:27.279247744Z" level=warning msg="cleanup warnings time=\"2025-01-13T20:52:27Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 13 20:52:27.461542 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-55509c89c08a46b82260079c797e8de85a98ef02a7799c2b16586eb8a736658d-rootfs.mount: Deactivated successfully. Jan 13 20:52:27.463440 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2270502057.mount: Deactivated successfully. 
Jan 13 20:52:27.817881 kubelet[1914]: E0113 20:52:27.817798 1914 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:52:27.900638 containerd[1512]: time="2025-01-13T20:52:27.900438328Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:52:27.901531 containerd[1512]: time="2025-01-13T20:52:27.901487307Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.4: active requests=0, bytes read=30230251" Jan 13 20:52:27.902217 containerd[1512]: time="2025-01-13T20:52:27.902144324Z" level=info msg="ImageCreate event name:\"sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:52:27.904923 containerd[1512]: time="2025-01-13T20:52:27.904861086Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:1739b3febca392035bf6edfe31efdfa55226be7b57389b2001ae357f7dcb99cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:52:27.906736 containerd[1512]: time="2025-01-13T20:52:27.906077050Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.4\" with image id \"sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300\", repo tag \"registry.k8s.io/kube-proxy:v1.31.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:1739b3febca392035bf6edfe31efdfa55226be7b57389b2001ae357f7dcb99cf\", size \"30229262\" in 2.473089758s" Jan 13 20:52:27.906736 containerd[1512]: time="2025-01-13T20:52:27.906119389Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\" returns image reference \"sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300\"" Jan 13 20:52:27.908524 containerd[1512]: time="2025-01-13T20:52:27.908490067Z" level=info msg="CreateContainer within sandbox 
\"5d41983404c2e59f4c20ec0520616a8d1e6f8788fa3c8248be4a3fbbf49499c5\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 13 20:52:27.930465 containerd[1512]: time="2025-01-13T20:52:27.930366899Z" level=info msg="CreateContainer within sandbox \"5d41983404c2e59f4c20ec0520616a8d1e6f8788fa3c8248be4a3fbbf49499c5\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"3ad950a49eb1ae3599c750019bd1863326cff99a3ca19f84b4c112e77467c3de\"" Jan 13 20:52:27.931041 containerd[1512]: time="2025-01-13T20:52:27.930921936Z" level=info msg="StartContainer for \"3ad950a49eb1ae3599c750019bd1863326cff99a3ca19f84b4c112e77467c3de\"" Jan 13 20:52:27.974298 systemd[1]: Started cri-containerd-3ad950a49eb1ae3599c750019bd1863326cff99a3ca19f84b4c112e77467c3de.scope - libcontainer container 3ad950a49eb1ae3599c750019bd1863326cff99a3ca19f84b4c112e77467c3de. Jan 13 20:52:27.992207 containerd[1512]: time="2025-01-13T20:52:27.991974493Z" level=info msg="CreateContainer within sandbox \"ea637fe0e3e847123b176e32d7268a189bc97c542782bfaa0ad254fd9bd1eea4\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 13 20:52:28.020767 containerd[1512]: time="2025-01-13T20:52:28.020643446Z" level=info msg="StartContainer for \"3ad950a49eb1ae3599c750019bd1863326cff99a3ca19f84b4c112e77467c3de\" returns successfully" Jan 13 20:52:28.025796 containerd[1512]: time="2025-01-13T20:52:28.024752315Z" level=info msg="CreateContainer within sandbox \"ea637fe0e3e847123b176e32d7268a189bc97c542782bfaa0ad254fd9bd1eea4\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"7c4091672b327ed07e1b2ab907bc77a23b087a67cc62e0a2fc3c89199151d6ad\"" Jan 13 20:52:28.025796 containerd[1512]: time="2025-01-13T20:52:28.025577520Z" level=info msg="StartContainer for \"7c4091672b327ed07e1b2ab907bc77a23b087a67cc62e0a2fc3c89199151d6ad\"" Jan 13 20:52:28.070134 systemd[1]: Started cri-containerd-7c4091672b327ed07e1b2ab907bc77a23b087a67cc62e0a2fc3c89199151d6ad.scope 
- libcontainer container 7c4091672b327ed07e1b2ab907bc77a23b087a67cc62e0a2fc3c89199151d6ad. Jan 13 20:52:28.114854 systemd[1]: cri-containerd-7c4091672b327ed07e1b2ab907bc77a23b087a67cc62e0a2fc3c89199151d6ad.scope: Deactivated successfully. Jan 13 20:52:28.121095 containerd[1512]: time="2025-01-13T20:52:28.121055765Z" level=info msg="StartContainer for \"7c4091672b327ed07e1b2ab907bc77a23b087a67cc62e0a2fc3c89199151d6ad\" returns successfully" Jan 13 20:52:28.301746 containerd[1512]: time="2025-01-13T20:52:28.301613711Z" level=info msg="shim disconnected" id=7c4091672b327ed07e1b2ab907bc77a23b087a67cc62e0a2fc3c89199151d6ad namespace=k8s.io Jan 13 20:52:28.302054 containerd[1512]: time="2025-01-13T20:52:28.301770516Z" level=warning msg="cleaning up after shim disconnected" id=7c4091672b327ed07e1b2ab907bc77a23b087a67cc62e0a2fc3c89199151d6ad namespace=k8s.io Jan 13 20:52:28.302054 containerd[1512]: time="2025-01-13T20:52:28.301789809Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:52:28.818183 kubelet[1914]: E0113 20:52:28.818116 1914 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:52:28.937148 update_engine[1493]: I20250113 20:52:28.936774 1493 update_attempter.cc:509] Updating boot flags... 
Jan 13 20:52:28.985165 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 44 scanned by (udev-worker) (2498) Jan 13 20:52:29.023187 containerd[1512]: time="2025-01-13T20:52:29.020301052Z" level=info msg="CreateContainer within sandbox \"ea637fe0e3e847123b176e32d7268a189bc97c542782bfaa0ad254fd9bd1eea4\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 13 20:52:29.062050 kubelet[1914]: I0113 20:52:29.061825 1914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-h9944" podStartSLOduration=4.549548677 podStartE2EDuration="17.06127331s" podCreationTimestamp="2025-01-13 20:52:12 +0000 UTC" firstStartedPulling="2025-01-13 20:52:15.395153828 +0000 UTC m=+4.620067806" lastFinishedPulling="2025-01-13 20:52:27.906878469 +0000 UTC m=+17.131792439" observedRunningTime="2025-01-13 20:52:29.024368683 +0000 UTC m=+18.249282670" watchObservedRunningTime="2025-01-13 20:52:29.06127331 +0000 UTC m=+18.286187299" Jan 13 20:52:29.071778 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1490605634.mount: Deactivated successfully. Jan 13 20:52:29.080087 containerd[1512]: time="2025-01-13T20:52:29.078397450Z" level=info msg="CreateContainer within sandbox \"ea637fe0e3e847123b176e32d7268a189bc97c542782bfaa0ad254fd9bd1eea4\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"5354afde89e1469184cdbb0c9f7e17d7d65a8266c581516496a551aa092804e9\"" Jan 13 20:52:29.080890 containerd[1512]: time="2025-01-13T20:52:29.080557537Z" level=info msg="StartContainer for \"5354afde89e1469184cdbb0c9f7e17d7d65a8266c581516496a551aa092804e9\"" Jan 13 20:52:29.129055 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 44 scanned by (udev-worker) (2500) Jan 13 20:52:29.199210 systemd[1]: Started cri-containerd-5354afde89e1469184cdbb0c9f7e17d7d65a8266c581516496a551aa092804e9.scope - libcontainer container 5354afde89e1469184cdbb0c9f7e17d7d65a8266c581516496a551aa092804e9. 
Jan 13 20:52:29.251895 containerd[1512]: time="2025-01-13T20:52:29.251832248Z" level=info msg="StartContainer for \"5354afde89e1469184cdbb0c9f7e17d7d65a8266c581516496a551aa092804e9\" returns successfully" Jan 13 20:52:29.377043 kubelet[1914]: I0113 20:52:29.376482 1914 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jan 13 20:52:29.818658 kubelet[1914]: E0113 20:52:29.818596 1914 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:52:29.844517 kernel: Initializing XFRM netlink socket Jan 13 20:52:30.061884 kubelet[1914]: I0113 20:52:30.061561 1914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-75bp6" podStartSLOduration=8.019473993 podStartE2EDuration="18.061533022s" podCreationTimestamp="2025-01-13 20:52:12 +0000 UTC" firstStartedPulling="2025-01-13 20:52:15.390329545 +0000 UTC m=+4.615243520" lastFinishedPulling="2025-01-13 20:52:25.432388567 +0000 UTC m=+14.657302549" observedRunningTime="2025-01-13 20:52:30.061235681 +0000 UTC m=+19.286149673" watchObservedRunningTime="2025-01-13 20:52:30.061533022 +0000 UTC m=+19.286446997" Jan 13 20:52:30.819263 kubelet[1914]: E0113 20:52:30.819194 1914 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:52:31.588430 systemd-networkd[1425]: cilium_host: Link UP Jan 13 20:52:31.590516 systemd-networkd[1425]: cilium_net: Link UP Jan 13 20:52:31.591860 systemd-networkd[1425]: cilium_net: Gained carrier Jan 13 20:52:31.594319 systemd-networkd[1425]: cilium_host: Gained carrier Jan 13 20:52:31.597344 systemd-networkd[1425]: cilium_host: Gained IPv6LL Jan 13 20:52:31.750075 systemd-networkd[1425]: cilium_vxlan: Link UP Jan 13 20:52:31.751051 systemd-networkd[1425]: cilium_vxlan: Gained carrier Jan 13 20:52:31.809160 kubelet[1914]: E0113 20:52:31.809057 1914 file.go:104] "Unable to read config path" err="path 
does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:52:31.819777 kubelet[1914]: E0113 20:52:31.819730 1914 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:52:32.160576 kernel: NET: Registered PF_ALG protocol family Jan 13 20:52:32.352321 systemd-networkd[1425]: cilium_net: Gained IPv6LL Jan 13 20:52:32.820564 kubelet[1914]: E0113 20:52:32.820497 1914 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:52:33.153221 systemd-networkd[1425]: lxc_health: Link UP Jan 13 20:52:33.162667 systemd-networkd[1425]: lxc_health: Gained carrier Jan 13 20:52:33.312311 systemd-networkd[1425]: cilium_vxlan: Gained IPv6LL Jan 13 20:52:33.821420 kubelet[1914]: E0113 20:52:33.821338 1914 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:52:34.272186 systemd-networkd[1425]: lxc_health: Gained IPv6LL Jan 13 20:52:34.822448 kubelet[1914]: E0113 20:52:34.822376 1914 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:52:35.823106 kubelet[1914]: E0113 20:52:35.822973 1914 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:52:36.675710 systemd[1]: Created slice kubepods-besteffort-podebbef698_e959_46df_b912_762afb3dc35e.slice - libcontainer container kubepods-besteffort-podebbef698_e959_46df_b912_762afb3dc35e.slice. 
Jan 13 20:52:36.789480 kubelet[1914]: I0113 20:52:36.789280 1914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8btl\" (UniqueName: \"kubernetes.io/projected/ebbef698-e959-46df-b912-762afb3dc35e-kube-api-access-q8btl\") pod \"nginx-deployment-8587fbcb89-jpvpk\" (UID: \"ebbef698-e959-46df-b912-762afb3dc35e\") " pod="default/nginx-deployment-8587fbcb89-jpvpk" Jan 13 20:52:36.823491 kubelet[1914]: E0113 20:52:36.823352 1914 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:52:36.984522 containerd[1512]: time="2025-01-13T20:52:36.983760274Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-jpvpk,Uid:ebbef698-e959-46df-b912-762afb3dc35e,Namespace:default,Attempt:0,}" Jan 13 20:52:37.078162 systemd-networkd[1425]: lxceda8e3c38fdb: Link UP Jan 13 20:52:37.089141 kernel: eth0: renamed from tmp9fddf Jan 13 20:52:37.107908 systemd-networkd[1425]: lxceda8e3c38fdb: Gained carrier Jan 13 20:52:37.823988 kubelet[1914]: E0113 20:52:37.823870 1914 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:52:38.825216 kubelet[1914]: E0113 20:52:38.825094 1914 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:52:38.944374 systemd-networkd[1425]: lxceda8e3c38fdb: Gained IPv6LL Jan 13 20:52:39.727122 containerd[1512]: time="2025-01-13T20:52:39.726762088Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:52:39.728359 containerd[1512]: time="2025-01-13T20:52:39.727087798Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:52:39.728359 containerd[1512]: time="2025-01-13T20:52:39.727119568Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:52:39.728631 containerd[1512]: time="2025-01-13T20:52:39.728159226Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:52:39.756366 systemd[1]: Started cri-containerd-9fddf393602b7487f472b780b322f792f9706109e7ccbec53f8ebabc2b741027.scope - libcontainer container 9fddf393602b7487f472b780b322f792f9706109e7ccbec53f8ebabc2b741027. Jan 13 20:52:39.818165 containerd[1512]: time="2025-01-13T20:52:39.818083574Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-jpvpk,Uid:ebbef698-e959-46df-b912-762afb3dc35e,Namespace:default,Attempt:0,} returns sandbox id \"9fddf393602b7487f472b780b322f792f9706109e7ccbec53f8ebabc2b741027\"" Jan 13 20:52:39.821069 containerd[1512]: time="2025-01-13T20:52:39.820772966Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 13 20:52:39.826231 kubelet[1914]: E0113 20:52:39.826142 1914 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:52:40.826925 kubelet[1914]: E0113 20:52:40.826820 1914 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:52:41.827688 kubelet[1914]: E0113 20:52:41.827566 1914 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:52:42.828567 kubelet[1914]: E0113 20:52:42.828444 1914 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:52:43.649797 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1070472799.mount: Deactivated 
successfully. Jan 13 20:52:43.829640 kubelet[1914]: E0113 20:52:43.829582 1914 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:52:44.830117 kubelet[1914]: E0113 20:52:44.830051 1914 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:52:45.238501 containerd[1512]: time="2025-01-13T20:52:45.238433152Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:52:45.240356 containerd[1512]: time="2025-01-13T20:52:45.240303820Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=71036018" Jan 13 20:52:45.241354 containerd[1512]: time="2025-01-13T20:52:45.241295208Z" level=info msg="ImageCreate event name:\"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:52:45.244972 containerd[1512]: time="2025-01-13T20:52:45.244914169Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:eca1d1ff18c7af45f86b7e0b572090f563a676ddca3da2ecff678390366335ad\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:52:45.247732 containerd[1512]: time="2025-01-13T20:52:45.246752068Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:eca1d1ff18c7af45f86b7e0b572090f563a676ddca3da2ecff678390366335ad\", size \"71035896\" in 5.425927326s" Jan 13 20:52:45.247732 containerd[1512]: time="2025-01-13T20:52:45.246831564Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\"" Jan 13 20:52:45.250212 
containerd[1512]: time="2025-01-13T20:52:45.250180106Z" level=info msg="CreateContainer within sandbox \"9fddf393602b7487f472b780b322f792f9706109e7ccbec53f8ebabc2b741027\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Jan 13 20:52:45.267758 containerd[1512]: time="2025-01-13T20:52:45.267722115Z" level=info msg="CreateContainer within sandbox \"9fddf393602b7487f472b780b322f792f9706109e7ccbec53f8ebabc2b741027\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"d58f2492fa94f562bb875738bb1b18993120bfbf4280957239e48bf0138a89d7\"" Jan 13 20:52:45.268653 containerd[1512]: time="2025-01-13T20:52:45.268520281Z" level=info msg="StartContainer for \"d58f2492fa94f562bb875738bb1b18993120bfbf4280957239e48bf0138a89d7\"" Jan 13 20:52:45.337234 systemd[1]: Started cri-containerd-d58f2492fa94f562bb875738bb1b18993120bfbf4280957239e48bf0138a89d7.scope - libcontainer container d58f2492fa94f562bb875738bb1b18993120bfbf4280957239e48bf0138a89d7. Jan 13 20:52:45.388931 containerd[1512]: time="2025-01-13T20:52:45.388697813Z" level=info msg="StartContainer for \"d58f2492fa94f562bb875738bb1b18993120bfbf4280957239e48bf0138a89d7\" returns successfully" Jan 13 20:52:45.830667 kubelet[1914]: E0113 20:52:45.830598 1914 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:52:46.091517 kubelet[1914]: I0113 20:52:46.091125 1914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-8587fbcb89-jpvpk" podStartSLOduration=4.662952122 podStartE2EDuration="10.091106637s" podCreationTimestamp="2025-01-13 20:52:36 +0000 UTC" firstStartedPulling="2025-01-13 20:52:39.820153458 +0000 UTC m=+29.045067429" lastFinishedPulling="2025-01-13 20:52:45.248307975 +0000 UTC m=+34.473221944" observedRunningTime="2025-01-13 20:52:46.090929842 +0000 UTC m=+35.315843831" watchObservedRunningTime="2025-01-13 20:52:46.091106637 +0000 UTC m=+35.316020626" Jan 13 20:52:46.831802 
kubelet[1914]: E0113 20:52:46.831717 1914 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:52:47.832849 kubelet[1914]: E0113 20:52:47.832749 1914 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:52:48.833319 kubelet[1914]: E0113 20:52:48.833246 1914 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:52:49.834181 kubelet[1914]: E0113 20:52:49.834110 1914 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:52:50.834797 kubelet[1914]: E0113 20:52:50.834729 1914 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:52:51.808801 kubelet[1914]: E0113 20:52:51.808732 1914 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:52:51.835441 kubelet[1914]: E0113 20:52:51.835388 1914 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:52:52.835780 kubelet[1914]: E0113 20:52:52.835719 1914 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:52:53.836828 kubelet[1914]: E0113 20:52:53.836750 1914 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:52:54.837194 kubelet[1914]: E0113 20:52:54.837123 1914 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:52:55.838243 kubelet[1914]: E0113 20:52:55.838149 1914 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:52:56.838441 
kubelet[1914]: E0113 20:52:56.838359 1914 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:52:57.780265 systemd[1]: Created slice kubepods-besteffort-pod3c9eec1c_fa91_460e_891f_32150d4d2945.slice - libcontainer container kubepods-besteffort-pod3c9eec1c_fa91_460e_891f_32150d4d2945.slice. Jan 13 20:52:57.839400 kubelet[1914]: E0113 20:52:57.839368 1914 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:52:57.922992 kubelet[1914]: I0113 20:52:57.922917 1914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/3c9eec1c-fa91-460e-891f-32150d4d2945-data\") pod \"nfs-server-provisioner-0\" (UID: \"3c9eec1c-fa91-460e-891f-32150d4d2945\") " pod="default/nfs-server-provisioner-0" Jan 13 20:52:57.922992 kubelet[1914]: I0113 20:52:57.922997 1914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hw9rc\" (UniqueName: \"kubernetes.io/projected/3c9eec1c-fa91-460e-891f-32150d4d2945-kube-api-access-hw9rc\") pod \"nfs-server-provisioner-0\" (UID: \"3c9eec1c-fa91-460e-891f-32150d4d2945\") " pod="default/nfs-server-provisioner-0" Jan 13 20:52:58.085886 containerd[1512]: time="2025-01-13T20:52:58.085341840Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:3c9eec1c-fa91-460e-891f-32150d4d2945,Namespace:default,Attempt:0,}" Jan 13 20:52:58.142345 systemd-networkd[1425]: lxc369af06b3c64: Link UP Jan 13 20:52:58.151308 kernel: eth0: renamed from tmpc9faa Jan 13 20:52:58.161920 systemd-networkd[1425]: lxc369af06b3c64: Gained carrier Jan 13 20:52:58.456297 containerd[1512]: time="2025-01-13T20:52:58.455166869Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:52:58.456297 containerd[1512]: time="2025-01-13T20:52:58.455268015Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:52:58.456297 containerd[1512]: time="2025-01-13T20:52:58.455287900Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:52:58.456297 containerd[1512]: time="2025-01-13T20:52:58.455413891Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:52:58.486258 systemd[1]: run-containerd-runc-k8s.io-c9faae60f7bf21236684841ac0f9ddd08a8893c38397d4f642099fe5c3a962d3-runc.vJHGbw.mount: Deactivated successfully. Jan 13 20:52:58.495263 systemd[1]: Started cri-containerd-c9faae60f7bf21236684841ac0f9ddd08a8893c38397d4f642099fe5c3a962d3.scope - libcontainer container c9faae60f7bf21236684841ac0f9ddd08a8893c38397d4f642099fe5c3a962d3. 
Jan 13 20:52:58.561211 containerd[1512]: time="2025-01-13T20:52:58.561081974Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:3c9eec1c-fa91-460e-891f-32150d4d2945,Namespace:default,Attempt:0,} returns sandbox id \"c9faae60f7bf21236684841ac0f9ddd08a8893c38397d4f642099fe5c3a962d3\"" Jan 13 20:52:58.564663 containerd[1512]: time="2025-01-13T20:52:58.564612349Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Jan 13 20:52:58.839583 kubelet[1914]: E0113 20:52:58.839504 1914 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:52:59.296535 systemd-networkd[1425]: lxc369af06b3c64: Gained IPv6LL Jan 13 20:52:59.839792 kubelet[1914]: E0113 20:52:59.839697 1914 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:53:00.840896 kubelet[1914]: E0113 20:53:00.840680 1914 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:53:01.841357 kubelet[1914]: E0113 20:53:01.841262 1914 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:53:01.932643 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1624176621.mount: Deactivated successfully. 
Jan 13 20:53:02.842923 kubelet[1914]: E0113 20:53:02.842877 1914 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:53:03.843526 kubelet[1914]: E0113 20:53:03.843481 1914 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:53:04.845118 kubelet[1914]: E0113 20:53:04.845073 1914 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:53:04.897535 containerd[1512]: time="2025-01-13T20:53:04.897461053Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:53:04.898920 containerd[1512]: time="2025-01-13T20:53:04.898867619Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039414" Jan 13 20:53:04.899757 containerd[1512]: time="2025-01-13T20:53:04.899681829Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:53:04.903529 containerd[1512]: time="2025-01-13T20:53:04.903484723Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:53:04.905934 containerd[1512]: time="2025-01-13T20:53:04.905096513Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size 
\"91036984\" in 6.340421984s" Jan 13 20:53:04.905934 containerd[1512]: time="2025-01-13T20:53:04.905143677Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Jan 13 20:53:04.908860 containerd[1512]: time="2025-01-13T20:53:04.908679458Z" level=info msg="CreateContainer within sandbox \"c9faae60f7bf21236684841ac0f9ddd08a8893c38397d4f642099fe5c3a962d3\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Jan 13 20:53:04.940502 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount750981670.mount: Deactivated successfully. Jan 13 20:53:04.947814 containerd[1512]: time="2025-01-13T20:53:04.947764751Z" level=info msg="CreateContainer within sandbox \"c9faae60f7bf21236684841ac0f9ddd08a8893c38397d4f642099fe5c3a962d3\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"912e23940e915049a95391c4fdd20a9894bb10bbaa6167cdd0dcaa3325acd9ca\"" Jan 13 20:53:04.948528 containerd[1512]: time="2025-01-13T20:53:04.948495720Z" level=info msg="StartContainer for \"912e23940e915049a95391c4fdd20a9894bb10bbaa6167cdd0dcaa3325acd9ca\"" Jan 13 20:53:04.986403 systemd[1]: run-containerd-runc-k8s.io-912e23940e915049a95391c4fdd20a9894bb10bbaa6167cdd0dcaa3325acd9ca-runc.HMaEQA.mount: Deactivated successfully. Jan 13 20:53:05.000252 systemd[1]: Started cri-containerd-912e23940e915049a95391c4fdd20a9894bb10bbaa6167cdd0dcaa3325acd9ca.scope - libcontainer container 912e23940e915049a95391c4fdd20a9894bb10bbaa6167cdd0dcaa3325acd9ca. 
Jan 13 20:53:05.038959 containerd[1512]: time="2025-01-13T20:53:05.038661436Z" level=info msg="StartContainer for \"912e23940e915049a95391c4fdd20a9894bb10bbaa6167cdd0dcaa3325acd9ca\" returns successfully" Jan 13 20:53:05.847328 kubelet[1914]: E0113 20:53:05.847211 1914 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:53:06.847749 kubelet[1914]: E0113 20:53:06.847683 1914 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:53:07.848564 kubelet[1914]: E0113 20:53:07.848491 1914 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:53:08.849677 kubelet[1914]: E0113 20:53:08.849611 1914 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:53:09.850247 kubelet[1914]: E0113 20:53:09.850177 1914 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:53:10.851266 kubelet[1914]: E0113 20:53:10.851182 1914 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:53:11.809281 kubelet[1914]: E0113 20:53:11.809206 1914 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:53:11.851382 kubelet[1914]: E0113 20:53:11.851317 1914 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:53:12.851733 kubelet[1914]: E0113 20:53:12.851652 1914 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:53:13.852144 kubelet[1914]: E0113 20:53:13.852066 1914 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Jan 13 20:53:14.852875 kubelet[1914]: E0113 20:53:14.852794 1914 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:53:15.096619 kubelet[1914]: I0113 20:53:15.096490 1914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=11.753608551 podStartE2EDuration="18.096448712s" podCreationTimestamp="2025-01-13 20:52:57 +0000 UTC" firstStartedPulling="2025-01-13 20:52:58.563505577 +0000 UTC m=+47.788419561" lastFinishedPulling="2025-01-13 20:53:04.906345754 +0000 UTC m=+54.131259722" observedRunningTime="2025-01-13 20:53:05.145901386 +0000 UTC m=+54.370815372" watchObservedRunningTime="2025-01-13 20:53:15.096448712 +0000 UTC m=+64.321362695" Jan 13 20:53:15.106121 systemd[1]: Created slice kubepods-besteffort-pod25601359_1f7b_4ad8_b942_86fdcfd2164e.slice - libcontainer container kubepods-besteffort-pod25601359_1f7b_4ad8_b942_86fdcfd2164e.slice. Jan 13 20:53:15.236874 kubelet[1914]: I0113 20:53:15.236709 1914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-d3b2f2fd-b006-43a5-bfb6-e110cc8a8207\" (UniqueName: \"kubernetes.io/nfs/25601359-1f7b-4ad8-b942-86fdcfd2164e-pvc-d3b2f2fd-b006-43a5-bfb6-e110cc8a8207\") pod \"test-pod-1\" (UID: \"25601359-1f7b-4ad8-b942-86fdcfd2164e\") " pod="default/test-pod-1" Jan 13 20:53:15.236874 kubelet[1914]: I0113 20:53:15.236805 1914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t96zj\" (UniqueName: \"kubernetes.io/projected/25601359-1f7b-4ad8-b942-86fdcfd2164e-kube-api-access-t96zj\") pod \"test-pod-1\" (UID: \"25601359-1f7b-4ad8-b942-86fdcfd2164e\") " pod="default/test-pod-1" Jan 13 20:53:15.385294 kernel: FS-Cache: Loaded Jan 13 20:53:15.463429 kernel: RPC: Registered named UNIX socket transport module. 
Jan 13 20:53:15.463603 kernel: RPC: Registered udp transport module. Jan 13 20:53:15.464484 kernel: RPC: Registered tcp transport module. Jan 13 20:53:15.465511 kernel: RPC: Registered tcp-with-tls transport module. Jan 13 20:53:15.467920 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Jan 13 20:53:15.745131 kernel: NFS: Registering the id_resolver key type Jan 13 20:53:15.745371 kernel: Key type id_resolver registered Jan 13 20:53:15.746550 kernel: Key type id_legacy registered Jan 13 20:53:15.800833 nfsidmap[3335]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'gb1.brightbox.com' Jan 13 20:53:15.809538 nfsidmap[3338]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'gb1.brightbox.com' Jan 13 20:53:15.853059 kubelet[1914]: E0113 20:53:15.852956 1914 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:53:16.011601 containerd[1512]: time="2025-01-13T20:53:16.011398065Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:25601359-1f7b-4ad8-b942-86fdcfd2164e,Namespace:default,Attempt:0,}" Jan 13 20:53:16.093786 systemd-networkd[1425]: lxccffab60a0f43: Link UP Jan 13 20:53:16.105206 kernel: eth0: renamed from tmp3d2e3 Jan 13 20:53:16.116089 systemd-networkd[1425]: lxccffab60a0f43: Gained carrier Jan 13 20:53:16.391388 containerd[1512]: time="2025-01-13T20:53:16.390115037Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:53:16.391388 containerd[1512]: time="2025-01-13T20:53:16.390782982Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:53:16.391388 containerd[1512]: time="2025-01-13T20:53:16.390881972Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:53:16.391817 containerd[1512]: time="2025-01-13T20:53:16.391119680Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:53:16.423231 systemd[1]: Started cri-containerd-3d2e366e39840a9e3e2276b1812d182c34e4496a100b9d2b753727370852a051.scope - libcontainer container 3d2e366e39840a9e3e2276b1812d182c34e4496a100b9d2b753727370852a051. Jan 13 20:53:16.487902 containerd[1512]: time="2025-01-13T20:53:16.487772657Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:25601359-1f7b-4ad8-b942-86fdcfd2164e,Namespace:default,Attempt:0,} returns sandbox id \"3d2e366e39840a9e3e2276b1812d182c34e4496a100b9d2b753727370852a051\"" Jan 13 20:53:16.491038 containerd[1512]: time="2025-01-13T20:53:16.490634354Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 13 20:53:16.847985 containerd[1512]: time="2025-01-13T20:53:16.847919130Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:53:16.849147 containerd[1512]: time="2025-01-13T20:53:16.849092045Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" Jan 13 20:53:16.853361 containerd[1512]: time="2025-01-13T20:53:16.853306345Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:eca1d1ff18c7af45f86b7e0b572090f563a676ddca3da2ecff678390366335ad\", size \"71035896\" in 362.632629ms" Jan 13 20:53:16.853361 containerd[1512]: 
time="2025-01-13T20:53:16.853346164Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\"" Jan 13 20:53:16.853592 kubelet[1914]: E0113 20:53:16.853542 1914 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:53:16.855997 containerd[1512]: time="2025-01-13T20:53:16.855951345Z" level=info msg="CreateContainer within sandbox \"3d2e366e39840a9e3e2276b1812d182c34e4496a100b9d2b753727370852a051\" for container &ContainerMetadata{Name:test,Attempt:0,}" Jan 13 20:53:16.875607 containerd[1512]: time="2025-01-13T20:53:16.875467996Z" level=info msg="CreateContainer within sandbox \"3d2e366e39840a9e3e2276b1812d182c34e4496a100b9d2b753727370852a051\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"6423aa27dd4dee2f0aa5ef0c86b8e5bc6b0fdda31b8188b1c8a98d8ed7f81d9d\"" Jan 13 20:53:16.876329 containerd[1512]: time="2025-01-13T20:53:16.876192318Z" level=info msg="StartContainer for \"6423aa27dd4dee2f0aa5ef0c86b8e5bc6b0fdda31b8188b1c8a98d8ed7f81d9d\"" Jan 13 20:53:16.913266 systemd[1]: Started cri-containerd-6423aa27dd4dee2f0aa5ef0c86b8e5bc6b0fdda31b8188b1c8a98d8ed7f81d9d.scope - libcontainer container 6423aa27dd4dee2f0aa5ef0c86b8e5bc6b0fdda31b8188b1c8a98d8ed7f81d9d. 
Jan 13 20:53:16.949155 containerd[1512]: time="2025-01-13T20:53:16.948715849Z" level=info msg="StartContainer for \"6423aa27dd4dee2f0aa5ef0c86b8e5bc6b0fdda31b8188b1c8a98d8ed7f81d9d\" returns successfully" Jan 13 20:53:17.174082 kubelet[1914]: I0113 20:53:17.173843 1914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=17.809584724 podStartE2EDuration="18.173825931s" podCreationTimestamp="2025-01-13 20:52:59 +0000 UTC" firstStartedPulling="2025-01-13 20:53:16.489960061 +0000 UTC m=+65.714874036" lastFinishedPulling="2025-01-13 20:53:16.854201274 +0000 UTC m=+66.079115243" observedRunningTime="2025-01-13 20:53:17.172983478 +0000 UTC m=+66.397897481" watchObservedRunningTime="2025-01-13 20:53:17.173825931 +0000 UTC m=+66.398739913" Jan 13 20:53:17.853815 kubelet[1914]: E0113 20:53:17.853740 1914 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:53:17.920396 systemd-networkd[1425]: lxccffab60a0f43: Gained IPv6LL Jan 13 20:53:18.854527 kubelet[1914]: E0113 20:53:18.854450 1914 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:53:19.854729 kubelet[1914]: E0113 20:53:19.854655 1914 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:53:20.855238 kubelet[1914]: E0113 20:53:20.855165 1914 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:53:21.856354 kubelet[1914]: E0113 20:53:21.856249 1914 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:53:22.857038 kubelet[1914]: E0113 20:53:22.856948 1914 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:53:23.858029 
kubelet[1914]: E0113 20:53:23.857967 1914 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:53:24.858550 kubelet[1914]: E0113 20:53:24.858470 1914 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:53:25.772550 systemd[1]: run-containerd-runc-k8s.io-5354afde89e1469184cdbb0c9f7e17d7d65a8266c581516496a551aa092804e9-runc.HaJQ6G.mount: Deactivated successfully. Jan 13 20:53:25.853412 containerd[1512]: time="2025-01-13T20:53:25.853287385Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 13 20:53:25.859376 kubelet[1914]: E0113 20:53:25.859300 1914 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:53:25.873813 containerd[1512]: time="2025-01-13T20:53:25.873752908Z" level=info msg="StopContainer for \"5354afde89e1469184cdbb0c9f7e17d7d65a8266c581516496a551aa092804e9\" with timeout 2 (s)" Jan 13 20:53:25.874197 containerd[1512]: time="2025-01-13T20:53:25.874165111Z" level=info msg="Stop container \"5354afde89e1469184cdbb0c9f7e17d7d65a8266c581516496a551aa092804e9\" with signal terminated" Jan 13 20:53:25.885976 systemd-networkd[1425]: lxc_health: Link DOWN Jan 13 20:53:25.885994 systemd-networkd[1425]: lxc_health: Lost carrier Jan 13 20:53:25.906587 systemd[1]: cri-containerd-5354afde89e1469184cdbb0c9f7e17d7d65a8266c581516496a551aa092804e9.scope: Deactivated successfully. Jan 13 20:53:25.906938 systemd[1]: cri-containerd-5354afde89e1469184cdbb0c9f7e17d7d65a8266c581516496a551aa092804e9.scope: Consumed 10.033s CPU time. 
Jan 13 20:53:25.935998 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5354afde89e1469184cdbb0c9f7e17d7d65a8266c581516496a551aa092804e9-rootfs.mount: Deactivated successfully. Jan 13 20:53:25.967412 containerd[1512]: time="2025-01-13T20:53:25.950433926Z" level=info msg="shim disconnected" id=5354afde89e1469184cdbb0c9f7e17d7d65a8266c581516496a551aa092804e9 namespace=k8s.io Jan 13 20:53:25.967412 containerd[1512]: time="2025-01-13T20:53:25.967402869Z" level=warning msg="cleaning up after shim disconnected" id=5354afde89e1469184cdbb0c9f7e17d7d65a8266c581516496a551aa092804e9 namespace=k8s.io Jan 13 20:53:25.967669 containerd[1512]: time="2025-01-13T20:53:25.967434458Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:53:26.013648 containerd[1512]: time="2025-01-13T20:53:26.013584637Z" level=info msg="StopContainer for \"5354afde89e1469184cdbb0c9f7e17d7d65a8266c581516496a551aa092804e9\" returns successfully" Jan 13 20:53:26.032951 containerd[1512]: time="2025-01-13T20:53:26.032853012Z" level=info msg="StopPodSandbox for \"ea637fe0e3e847123b176e32d7268a189bc97c542782bfaa0ad254fd9bd1eea4\"" Jan 13 20:53:26.038685 containerd[1512]: time="2025-01-13T20:53:26.033097880Z" level=info msg="Container to stop \"55509c89c08a46b82260079c797e8de85a98ef02a7799c2b16586eb8a736658d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 20:53:26.038685 containerd[1512]: time="2025-01-13T20:53:26.038523107Z" level=info msg="Container to stop \"5354afde89e1469184cdbb0c9f7e17d7d65a8266c581516496a551aa092804e9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 20:53:26.038685 containerd[1512]: time="2025-01-13T20:53:26.038540998Z" level=info msg="Container to stop \"c7b426039d80f59ef06bbb2e0a23291f287b776966ba3a8ceca111c4204960c6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 20:53:26.038685 containerd[1512]: time="2025-01-13T20:53:26.038559274Z" level=info 
msg="Container to stop \"b2e516f304195e02ed303d0b12cb3f7e6d633243c0c59e388186d9884bb12b7c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 20:53:26.038685 containerd[1512]: time="2025-01-13T20:53:26.038574158Z" level=info msg="Container to stop \"7c4091672b327ed07e1b2ab907bc77a23b087a67cc62e0a2fc3c89199151d6ad\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 20:53:26.041016 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ea637fe0e3e847123b176e32d7268a189bc97c542782bfaa0ad254fd9bd1eea4-shm.mount: Deactivated successfully. Jan 13 20:53:26.049588 systemd[1]: cri-containerd-ea637fe0e3e847123b176e32d7268a189bc97c542782bfaa0ad254fd9bd1eea4.scope: Deactivated successfully. Jan 13 20:53:26.083476 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ea637fe0e3e847123b176e32d7268a189bc97c542782bfaa0ad254fd9bd1eea4-rootfs.mount: Deactivated successfully. Jan 13 20:53:26.088395 containerd[1512]: time="2025-01-13T20:53:26.088316413Z" level=info msg="shim disconnected" id=ea637fe0e3e847123b176e32d7268a189bc97c542782bfaa0ad254fd9bd1eea4 namespace=k8s.io Jan 13 20:53:26.088395 containerd[1512]: time="2025-01-13T20:53:26.088394957Z" level=warning msg="cleaning up after shim disconnected" id=ea637fe0e3e847123b176e32d7268a189bc97c542782bfaa0ad254fd9bd1eea4 namespace=k8s.io Jan 13 20:53:26.088658 containerd[1512]: time="2025-01-13T20:53:26.088418091Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:53:26.105876 containerd[1512]: time="2025-01-13T20:53:26.105803350Z" level=info msg="TearDown network for sandbox \"ea637fe0e3e847123b176e32d7268a189bc97c542782bfaa0ad254fd9bd1eea4\" successfully" Jan 13 20:53:26.105876 containerd[1512]: time="2025-01-13T20:53:26.105867386Z" level=info msg="StopPodSandbox for \"ea637fe0e3e847123b176e32d7268a189bc97c542782bfaa0ad254fd9bd1eea4\" returns successfully" Jan 13 20:53:26.186325 kubelet[1914]: I0113 20:53:26.185992 1914 scope.go:117] 
"RemoveContainer" containerID="5354afde89e1469184cdbb0c9f7e17d7d65a8266c581516496a551aa092804e9" Jan 13 20:53:26.187827 containerd[1512]: time="2025-01-13T20:53:26.187774581Z" level=info msg="RemoveContainer for \"5354afde89e1469184cdbb0c9f7e17d7d65a8266c581516496a551aa092804e9\"" Jan 13 20:53:26.193055 containerd[1512]: time="2025-01-13T20:53:26.192960871Z" level=info msg="RemoveContainer for \"5354afde89e1469184cdbb0c9f7e17d7d65a8266c581516496a551aa092804e9\" returns successfully" Jan 13 20:53:26.193439 kubelet[1914]: I0113 20:53:26.193264 1914 scope.go:117] "RemoveContainer" containerID="7c4091672b327ed07e1b2ab907bc77a23b087a67cc62e0a2fc3c89199151d6ad" Jan 13 20:53:26.195434 containerd[1512]: time="2025-01-13T20:53:26.195139767Z" level=info msg="RemoveContainer for \"7c4091672b327ed07e1b2ab907bc77a23b087a67cc62e0a2fc3c89199151d6ad\"" Jan 13 20:53:26.198909 containerd[1512]: time="2025-01-13T20:53:26.198744050Z" level=info msg="RemoveContainer for \"7c4091672b327ed07e1b2ab907bc77a23b087a67cc62e0a2fc3c89199151d6ad\" returns successfully" Jan 13 20:53:26.199054 kubelet[1914]: I0113 20:53:26.198946 1914 scope.go:117] "RemoveContainer" containerID="55509c89c08a46b82260079c797e8de85a98ef02a7799c2b16586eb8a736658d" Jan 13 20:53:26.200348 containerd[1512]: time="2025-01-13T20:53:26.200315859Z" level=info msg="RemoveContainer for \"55509c89c08a46b82260079c797e8de85a98ef02a7799c2b16586eb8a736658d\"" Jan 13 20:53:26.203616 containerd[1512]: time="2025-01-13T20:53:26.203540423Z" level=info msg="RemoveContainer for \"55509c89c08a46b82260079c797e8de85a98ef02a7799c2b16586eb8a736658d\" returns successfully" Jan 13 20:53:26.203956 kubelet[1914]: I0113 20:53:26.203745 1914 scope.go:117] "RemoveContainer" containerID="b2e516f304195e02ed303d0b12cb3f7e6d633243c0c59e388186d9884bb12b7c" Jan 13 20:53:26.205972 containerd[1512]: time="2025-01-13T20:53:26.205627803Z" level=info msg="RemoveContainer for \"b2e516f304195e02ed303d0b12cb3f7e6d633243c0c59e388186d9884bb12b7c\"" Jan 13 
20:53:26.206974 kubelet[1914]: I0113 20:53:26.206927 1914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/35d96245-70d6-498b-a42f-dba77c1c7503-cilium-run\") pod \"35d96245-70d6-498b-a42f-dba77c1c7503\" (UID: \"35d96245-70d6-498b-a42f-dba77c1c7503\") "
Jan 13 20:53:26.207090 kubelet[1914]: I0113 20:53:26.206991 1914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/35d96245-70d6-498b-a42f-dba77c1c7503-cni-path\") pod \"35d96245-70d6-498b-a42f-dba77c1c7503\" (UID: \"35d96245-70d6-498b-a42f-dba77c1c7503\") "
Jan 13 20:53:26.207090 kubelet[1914]: I0113 20:53:26.207056 1914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/35d96245-70d6-498b-a42f-dba77c1c7503-clustermesh-secrets\") pod \"35d96245-70d6-498b-a42f-dba77c1c7503\" (UID: \"35d96245-70d6-498b-a42f-dba77c1c7503\") "
Jan 13 20:53:26.207188 kubelet[1914]: I0113 20:53:26.207103 1914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qtcxn\" (UniqueName: \"kubernetes.io/projected/35d96245-70d6-498b-a42f-dba77c1c7503-kube-api-access-qtcxn\") pod \"35d96245-70d6-498b-a42f-dba77c1c7503\" (UID: \"35d96245-70d6-498b-a42f-dba77c1c7503\") "
Jan 13 20:53:26.207188 kubelet[1914]: I0113 20:53:26.207131 1914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/35d96245-70d6-498b-a42f-dba77c1c7503-xtables-lock\") pod \"35d96245-70d6-498b-a42f-dba77c1c7503\" (UID: \"35d96245-70d6-498b-a42f-dba77c1c7503\") "
Jan 13 20:53:26.207188 kubelet[1914]: I0113 20:53:26.207159 1914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/35d96245-70d6-498b-a42f-dba77c1c7503-cilium-cgroup\") pod \"35d96245-70d6-498b-a42f-dba77c1c7503\" (UID: \"35d96245-70d6-498b-a42f-dba77c1c7503\") "
Jan 13 20:53:26.207188 kubelet[1914]: I0113 20:53:26.207184 1914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/35d96245-70d6-498b-a42f-dba77c1c7503-host-proc-sys-kernel\") pod \"35d96245-70d6-498b-a42f-dba77c1c7503\" (UID: \"35d96245-70d6-498b-a42f-dba77c1c7503\") "
Jan 13 20:53:26.207403 kubelet[1914]: I0113 20:53:26.207208 1914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/35d96245-70d6-498b-a42f-dba77c1c7503-etc-cni-netd\") pod \"35d96245-70d6-498b-a42f-dba77c1c7503\" (UID: \"35d96245-70d6-498b-a42f-dba77c1c7503\") "
Jan 13 20:53:26.207403 kubelet[1914]: I0113 20:53:26.207235 1914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/35d96245-70d6-498b-a42f-dba77c1c7503-lib-modules\") pod \"35d96245-70d6-498b-a42f-dba77c1c7503\" (UID: \"35d96245-70d6-498b-a42f-dba77c1c7503\") "
Jan 13 20:53:26.207403 kubelet[1914]: I0113 20:53:26.207258 1914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/35d96245-70d6-498b-a42f-dba77c1c7503-host-proc-sys-net\") pod \"35d96245-70d6-498b-a42f-dba77c1c7503\" (UID: \"35d96245-70d6-498b-a42f-dba77c1c7503\") "
Jan 13 20:53:26.207403 kubelet[1914]: I0113 20:53:26.207282 1914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/35d96245-70d6-498b-a42f-dba77c1c7503-hostproc\") pod \"35d96245-70d6-498b-a42f-dba77c1c7503\" (UID: \"35d96245-70d6-498b-a42f-dba77c1c7503\") "
Jan 13 20:53:26.207403 kubelet[1914]: I0113 20:53:26.207308 1914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/35d96245-70d6-498b-a42f-dba77c1c7503-hubble-tls\") pod \"35d96245-70d6-498b-a42f-dba77c1c7503\" (UID: \"35d96245-70d6-498b-a42f-dba77c1c7503\") "
Jan 13 20:53:26.207403 kubelet[1914]: I0113 20:53:26.207332 1914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/35d96245-70d6-498b-a42f-dba77c1c7503-bpf-maps\") pod \"35d96245-70d6-498b-a42f-dba77c1c7503\" (UID: \"35d96245-70d6-498b-a42f-dba77c1c7503\") "
Jan 13 20:53:26.207716 kubelet[1914]: I0113 20:53:26.207371 1914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/35d96245-70d6-498b-a42f-dba77c1c7503-cilium-config-path\") pod \"35d96245-70d6-498b-a42f-dba77c1c7503\" (UID: \"35d96245-70d6-498b-a42f-dba77c1c7503\") "
Jan 13 20:53:26.208959 kubelet[1914]: I0113 20:53:26.207874 1914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/35d96245-70d6-498b-a42f-dba77c1c7503-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "35d96245-70d6-498b-a42f-dba77c1c7503" (UID: "35d96245-70d6-498b-a42f-dba77c1c7503"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 20:53:26.208959 kubelet[1914]: I0113 20:53:26.207942 1914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/35d96245-70d6-498b-a42f-dba77c1c7503-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "35d96245-70d6-498b-a42f-dba77c1c7503" (UID: "35d96245-70d6-498b-a42f-dba77c1c7503"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 20:53:26.208959 kubelet[1914]: I0113 20:53:26.207978 1914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/35d96245-70d6-498b-a42f-dba77c1c7503-cni-path" (OuterVolumeSpecName: "cni-path") pod "35d96245-70d6-498b-a42f-dba77c1c7503" (UID: "35d96245-70d6-498b-a42f-dba77c1c7503"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 20:53:26.210597 kubelet[1914]: I0113 20:53:26.210088 1914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/35d96245-70d6-498b-a42f-dba77c1c7503-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "35d96245-70d6-498b-a42f-dba77c1c7503" (UID: "35d96245-70d6-498b-a42f-dba77c1c7503"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 20:53:26.210689 containerd[1512]: time="2025-01-13T20:53:26.210198717Z" level=info msg="RemoveContainer for \"b2e516f304195e02ed303d0b12cb3f7e6d633243c0c59e388186d9884bb12b7c\" returns successfully"
Jan 13 20:53:26.212608 kubelet[1914]: I0113 20:53:26.210314 1914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/35d96245-70d6-498b-a42f-dba77c1c7503-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "35d96245-70d6-498b-a42f-dba77c1c7503" (UID: "35d96245-70d6-498b-a42f-dba77c1c7503"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 20:53:26.212608 kubelet[1914]: I0113 20:53:26.212170 1914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/35d96245-70d6-498b-a42f-dba77c1c7503-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "35d96245-70d6-498b-a42f-dba77c1c7503" (UID: "35d96245-70d6-498b-a42f-dba77c1c7503"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 20:53:26.212608 kubelet[1914]: I0113 20:53:26.212219 1914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/35d96245-70d6-498b-a42f-dba77c1c7503-hostproc" (OuterVolumeSpecName: "hostproc") pod "35d96245-70d6-498b-a42f-dba77c1c7503" (UID: "35d96245-70d6-498b-a42f-dba77c1c7503"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 20:53:26.214616 kubelet[1914]: I0113 20:53:26.214566 1914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/35d96245-70d6-498b-a42f-dba77c1c7503-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "35d96245-70d6-498b-a42f-dba77c1c7503" (UID: "35d96245-70d6-498b-a42f-dba77c1c7503"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 20:53:26.214729 kubelet[1914]: I0113 20:53:26.214658 1914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/35d96245-70d6-498b-a42f-dba77c1c7503-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "35d96245-70d6-498b-a42f-dba77c1c7503" (UID: "35d96245-70d6-498b-a42f-dba77c1c7503"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 20:53:26.215741 kubelet[1914]: I0113 20:53:26.215706 1914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/35d96245-70d6-498b-a42f-dba77c1c7503-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "35d96245-70d6-498b-a42f-dba77c1c7503" (UID: "35d96245-70d6-498b-a42f-dba77c1c7503"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 13 20:53:26.215967 kubelet[1914]: I0113 20:53:26.215932 1914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/35d96245-70d6-498b-a42f-dba77c1c7503-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "35d96245-70d6-498b-a42f-dba77c1c7503" (UID: "35d96245-70d6-498b-a42f-dba77c1c7503"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 20:53:26.216342 kubelet[1914]: I0113 20:53:26.215948 1914 scope.go:117] "RemoveContainer" containerID="c7b426039d80f59ef06bbb2e0a23291f287b776966ba3a8ceca111c4204960c6"
Jan 13 20:53:26.218121 containerd[1512]: time="2025-01-13T20:53:26.218089472Z" level=info msg="RemoveContainer for \"c7b426039d80f59ef06bbb2e0a23291f287b776966ba3a8ceca111c4204960c6\""
Jan 13 20:53:26.218431 kubelet[1914]: I0113 20:53:26.218276 1914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35d96245-70d6-498b-a42f-dba77c1c7503-kube-api-access-qtcxn" (OuterVolumeSpecName: "kube-api-access-qtcxn") pod "35d96245-70d6-498b-a42f-dba77c1c7503" (UID: "35d96245-70d6-498b-a42f-dba77c1c7503"). InnerVolumeSpecName "kube-api-access-qtcxn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 13 20:53:26.219450 kubelet[1914]: I0113 20:53:26.219408 1914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35d96245-70d6-498b-a42f-dba77c1c7503-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "35d96245-70d6-498b-a42f-dba77c1c7503" (UID: "35d96245-70d6-498b-a42f-dba77c1c7503"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 13 20:53:26.221439 containerd[1512]: time="2025-01-13T20:53:26.221409613Z" level=info msg="RemoveContainer for \"c7b426039d80f59ef06bbb2e0a23291f287b776966ba3a8ceca111c4204960c6\" returns successfully"
Jan 13 20:53:26.221839 kubelet[1914]: I0113 20:53:26.221678 1914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35d96245-70d6-498b-a42f-dba77c1c7503-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "35d96245-70d6-498b-a42f-dba77c1c7503" (UID: "35d96245-70d6-498b-a42f-dba77c1c7503"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 13 20:53:26.222140 kubelet[1914]: I0113 20:53:26.221992 1914 scope.go:117] "RemoveContainer" containerID="5354afde89e1469184cdbb0c9f7e17d7d65a8266c581516496a551aa092804e9"
Jan 13 20:53:26.222507 containerd[1512]: time="2025-01-13T20:53:26.222453838Z" level=error msg="ContainerStatus for \"5354afde89e1469184cdbb0c9f7e17d7d65a8266c581516496a551aa092804e9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5354afde89e1469184cdbb0c9f7e17d7d65a8266c581516496a551aa092804e9\": not found"
Jan 13 20:53:26.231808 kubelet[1914]: E0113 20:53:26.231772 1914 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5354afde89e1469184cdbb0c9f7e17d7d65a8266c581516496a551aa092804e9\": not found" containerID="5354afde89e1469184cdbb0c9f7e17d7d65a8266c581516496a551aa092804e9"
Jan 13 20:53:26.232189 kubelet[1914]: I0113 20:53:26.231954 1914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5354afde89e1469184cdbb0c9f7e17d7d65a8266c581516496a551aa092804e9"} err="failed to get container status \"5354afde89e1469184cdbb0c9f7e17d7d65a8266c581516496a551aa092804e9\": rpc error: code = NotFound desc = an error occurred when try to find container \"5354afde89e1469184cdbb0c9f7e17d7d65a8266c581516496a551aa092804e9\": not found"
Jan 13 20:53:26.232189 kubelet[1914]: I0113 20:53:26.232122 1914 scope.go:117] "RemoveContainer" containerID="7c4091672b327ed07e1b2ab907bc77a23b087a67cc62e0a2fc3c89199151d6ad"
Jan 13 20:53:26.232751 containerd[1512]: time="2025-01-13T20:53:26.232619958Z" level=error msg="ContainerStatus for \"7c4091672b327ed07e1b2ab907bc77a23b087a67cc62e0a2fc3c89199151d6ad\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7c4091672b327ed07e1b2ab907bc77a23b087a67cc62e0a2fc3c89199151d6ad\": not found"
Jan 13 20:53:26.233190 kubelet[1914]: E0113 20:53:26.233051 1914 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7c4091672b327ed07e1b2ab907bc77a23b087a67cc62e0a2fc3c89199151d6ad\": not found" containerID="7c4091672b327ed07e1b2ab907bc77a23b087a67cc62e0a2fc3c89199151d6ad"
Jan 13 20:53:26.233190 kubelet[1914]: I0113 20:53:26.233125 1914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7c4091672b327ed07e1b2ab907bc77a23b087a67cc62e0a2fc3c89199151d6ad"} err="failed to get container status \"7c4091672b327ed07e1b2ab907bc77a23b087a67cc62e0a2fc3c89199151d6ad\": rpc error: code = NotFound desc = an error occurred when try to find container \"7c4091672b327ed07e1b2ab907bc77a23b087a67cc62e0a2fc3c89199151d6ad\": not found"
Jan 13 20:53:26.233190 kubelet[1914]: I0113 20:53:26.233166 1914 scope.go:117] "RemoveContainer" containerID="55509c89c08a46b82260079c797e8de85a98ef02a7799c2b16586eb8a736658d"
Jan 13 20:53:26.250147 containerd[1512]: time="2025-01-13T20:53:26.249110030Z" level=error msg="ContainerStatus for \"55509c89c08a46b82260079c797e8de85a98ef02a7799c2b16586eb8a736658d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"55509c89c08a46b82260079c797e8de85a98ef02a7799c2b16586eb8a736658d\": not found"
Jan 13 20:53:26.250258 kubelet[1914]: E0113 20:53:26.249978 1914 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"55509c89c08a46b82260079c797e8de85a98ef02a7799c2b16586eb8a736658d\": not found" containerID="55509c89c08a46b82260079c797e8de85a98ef02a7799c2b16586eb8a736658d"
Jan 13 20:53:26.250258 kubelet[1914]: I0113 20:53:26.250032 1914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"55509c89c08a46b82260079c797e8de85a98ef02a7799c2b16586eb8a736658d"} err="failed to get container status \"55509c89c08a46b82260079c797e8de85a98ef02a7799c2b16586eb8a736658d\": rpc error: code = NotFound desc = an error occurred when try to find container \"55509c89c08a46b82260079c797e8de85a98ef02a7799c2b16586eb8a736658d\": not found"
Jan 13 20:53:26.250258 kubelet[1914]: I0113 20:53:26.250057 1914 scope.go:117] "RemoveContainer" containerID="b2e516f304195e02ed303d0b12cb3f7e6d633243c0c59e388186d9884bb12b7c"
Jan 13 20:53:26.250471 containerd[1512]: time="2025-01-13T20:53:26.250268565Z" level=error msg="ContainerStatus for \"b2e516f304195e02ed303d0b12cb3f7e6d633243c0c59e388186d9884bb12b7c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b2e516f304195e02ed303d0b12cb3f7e6d633243c0c59e388186d9884bb12b7c\": not found"
Jan 13 20:53:26.250895 kubelet[1914]: E0113 20:53:26.250705 1914 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b2e516f304195e02ed303d0b12cb3f7e6d633243c0c59e388186d9884bb12b7c\": not found" containerID="b2e516f304195e02ed303d0b12cb3f7e6d633243c0c59e388186d9884bb12b7c"
Jan 13 20:53:26.250895 kubelet[1914]: I0113 20:53:26.250757 1914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b2e516f304195e02ed303d0b12cb3f7e6d633243c0c59e388186d9884bb12b7c"} err="failed to get container status \"b2e516f304195e02ed303d0b12cb3f7e6d633243c0c59e388186d9884bb12b7c\": rpc error: code = NotFound desc = an error occurred when try to find container \"b2e516f304195e02ed303d0b12cb3f7e6d633243c0c59e388186d9884bb12b7c\": not found"
Jan 13 20:53:26.250895 kubelet[1914]: I0113 20:53:26.250781 1914 scope.go:117] "RemoveContainer" containerID="c7b426039d80f59ef06bbb2e0a23291f287b776966ba3a8ceca111c4204960c6"
Jan 13 20:53:26.251384 kubelet[1914]: E0113 20:53:26.251202 1914 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c7b426039d80f59ef06bbb2e0a23291f287b776966ba3a8ceca111c4204960c6\": not found" containerID="c7b426039d80f59ef06bbb2e0a23291f287b776966ba3a8ceca111c4204960c6"
Jan 13 20:53:26.251384 kubelet[1914]: I0113 20:53:26.251269 1914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c7b426039d80f59ef06bbb2e0a23291f287b776966ba3a8ceca111c4204960c6"} err="failed to get container status \"c7b426039d80f59ef06bbb2e0a23291f287b776966ba3a8ceca111c4204960c6\": rpc error: code = NotFound desc = an error occurred when try to find container \"c7b426039d80f59ef06bbb2e0a23291f287b776966ba3a8ceca111c4204960c6\": not found"
Jan 13 20:53:26.251516 containerd[1512]: time="2025-01-13T20:53:26.251034793Z" level=error msg="ContainerStatus for \"c7b426039d80f59ef06bbb2e0a23291f287b776966ba3a8ceca111c4204960c6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c7b426039d80f59ef06bbb2e0a23291f287b776966ba3a8ceca111c4204960c6\": not found"
Jan 13 20:53:26.308188 kubelet[1914]: I0113 20:53:26.307839 1914 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-qtcxn\" (UniqueName: \"kubernetes.io/projected/35d96245-70d6-498b-a42f-dba77c1c7503-kube-api-access-qtcxn\") on node \"10.230.36.26\" DevicePath \"\""
Jan 13 20:53:26.308188 kubelet[1914]: I0113 20:53:26.307888 1914 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/35d96245-70d6-498b-a42f-dba77c1c7503-xtables-lock\") on node \"10.230.36.26\" DevicePath \"\""
Jan 13 20:53:26.308188 kubelet[1914]: I0113 20:53:26.307906 1914 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/35d96245-70d6-498b-a42f-dba77c1c7503-cilium-cgroup\") on node \"10.230.36.26\" DevicePath \"\""
Jan 13 20:53:26.308188 kubelet[1914]: I0113 20:53:26.307921 1914 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/35d96245-70d6-498b-a42f-dba77c1c7503-host-proc-sys-kernel\") on node \"10.230.36.26\" DevicePath \"\""
Jan 13 20:53:26.308188 kubelet[1914]: I0113 20:53:26.307936 1914 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/35d96245-70d6-498b-a42f-dba77c1c7503-etc-cni-netd\") on node \"10.230.36.26\" DevicePath \"\""
Jan 13 20:53:26.308188 kubelet[1914]: I0113 20:53:26.307949 1914 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/35d96245-70d6-498b-a42f-dba77c1c7503-clustermesh-secrets\") on node \"10.230.36.26\" DevicePath \"\""
Jan 13 20:53:26.308188 kubelet[1914]: I0113 20:53:26.307962 1914 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/35d96245-70d6-498b-a42f-dba77c1c7503-lib-modules\") on node \"10.230.36.26\" DevicePath \"\""
Jan 13 20:53:26.308188 kubelet[1914]: I0113 20:53:26.307975 1914 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/35d96245-70d6-498b-a42f-dba77c1c7503-host-proc-sys-net\") on node \"10.230.36.26\" DevicePath \"\""
Jan 13 20:53:26.308644 kubelet[1914]: I0113 20:53:26.307988 1914 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/35d96245-70d6-498b-a42f-dba77c1c7503-hostproc\") on node \"10.230.36.26\" DevicePath \"\""
Jan 13 20:53:26.308644 kubelet[1914]: I0113 20:53:26.308001 1914 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/35d96245-70d6-498b-a42f-dba77c1c7503-cilium-config-path\") on node \"10.230.36.26\" DevicePath \"\""
Jan 13 20:53:26.308644 kubelet[1914]: I0113 20:53:26.308054 1914 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/35d96245-70d6-498b-a42f-dba77c1c7503-hubble-tls\") on node \"10.230.36.26\" DevicePath \"\""
Jan 13 20:53:26.308644 kubelet[1914]: I0113 20:53:26.308068 1914 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/35d96245-70d6-498b-a42f-dba77c1c7503-bpf-maps\") on node \"10.230.36.26\" DevicePath \"\""
Jan 13 20:53:26.308644 kubelet[1914]: I0113 20:53:26.308080 1914 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/35d96245-70d6-498b-a42f-dba77c1c7503-cni-path\") on node \"10.230.36.26\" DevicePath \"\""
Jan 13 20:53:26.308644 kubelet[1914]: I0113 20:53:26.308093 1914 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/35d96245-70d6-498b-a42f-dba77c1c7503-cilium-run\") on node \"10.230.36.26\" DevicePath \"\""
Jan 13 20:53:26.493494 systemd[1]: Removed slice kubepods-burstable-pod35d96245_70d6_498b_a42f_dba77c1c7503.slice - libcontainer container kubepods-burstable-pod35d96245_70d6_498b_a42f_dba77c1c7503.slice.
Jan 13 20:53:26.493980 systemd[1]: kubepods-burstable-pod35d96245_70d6_498b_a42f_dba77c1c7503.slice: Consumed 10.149s CPU time.
Jan 13 20:53:26.767195 systemd[1]: var-lib-kubelet-pods-35d96245\x2d70d6\x2d498b\x2da42f\x2ddba77c1c7503-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dqtcxn.mount: Deactivated successfully.
Jan 13 20:53:26.767376 systemd[1]: var-lib-kubelet-pods-35d96245\x2d70d6\x2d498b\x2da42f\x2ddba77c1c7503-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Jan 13 20:53:26.767486 systemd[1]: var-lib-kubelet-pods-35d96245\x2d70d6\x2d498b\x2da42f\x2ddba77c1c7503-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Jan 13 20:53:26.860384 kubelet[1914]: E0113 20:53:26.860312 1914 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:53:26.952483 kubelet[1914]: E0113 20:53:26.952326 1914 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 13 20:53:27.860573 kubelet[1914]: E0113 20:53:27.860502 1914 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:53:27.938600 kubelet[1914]: I0113 20:53:27.938532 1914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="35d96245-70d6-498b-a42f-dba77c1c7503" path="/var/lib/kubelet/pods/35d96245-70d6-498b-a42f-dba77c1c7503/volumes"
Jan 13 20:53:28.861399 kubelet[1914]: E0113 20:53:28.861307 1914 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:53:29.862286 kubelet[1914]: E0113 20:53:29.862217 1914 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:53:30.054530 kubelet[1914]: E0113 20:53:30.054488 1914 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="35d96245-70d6-498b-a42f-dba77c1c7503" containerName="apply-sysctl-overwrites"
Jan 13 20:53:30.054530 kubelet[1914]: E0113 20:53:30.054528 1914 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="35d96245-70d6-498b-a42f-dba77c1c7503" containerName="clean-cilium-state"
Jan 13 20:53:30.054530 kubelet[1914]: E0113 20:53:30.054540 1914 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="35d96245-70d6-498b-a42f-dba77c1c7503" containerName="cilium-agent"
Jan 13 20:53:30.054530 kubelet[1914]: E0113 20:53:30.054561 1914 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="35d96245-70d6-498b-a42f-dba77c1c7503" containerName="mount-cgroup"
Jan 13 20:53:30.054530 kubelet[1914]: E0113 20:53:30.054583 1914 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="35d96245-70d6-498b-a42f-dba77c1c7503" containerName="mount-bpf-fs"
Jan 13 20:53:30.055516 kubelet[1914]: I0113 20:53:30.054619 1914 memory_manager.go:354] "RemoveStaleState removing state" podUID="35d96245-70d6-498b-a42f-dba77c1c7503" containerName="cilium-agent"
Jan 13 20:53:30.062593 systemd[1]: Created slice kubepods-besteffort-pod75b56408_c651_4005_884e_fbd4852535d4.slice - libcontainer container kubepods-besteffort-pod75b56408_c651_4005_884e_fbd4852535d4.slice.
Jan 13 20:53:30.065320 kubelet[1914]: W0113 20:53:30.065200 1914 reflector.go:561] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:10.230.36.26" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node '10.230.36.26' and this object
Jan 13 20:53:30.065320 kubelet[1914]: E0113 20:53:30.065258 1914 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cilium-config\" is forbidden: User \"system:node:10.230.36.26\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node '10.230.36.26' and this object" logger="UnhandledError"
Jan 13 20:53:30.078711 systemd[1]: Created slice kubepods-burstable-pod35aa185c_4a44_461e_8c2e_40fc1ab3d9d4.slice - libcontainer container kubepods-burstable-pod35aa185c_4a44_461e_8c2e_40fc1ab3d9d4.slice.
Jan 13 20:53:30.234223 kubelet[1914]: I0113 20:53:30.233600 1914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8g9cp\" (UniqueName: \"kubernetes.io/projected/35aa185c-4a44-461e-8c2e-40fc1ab3d9d4-kube-api-access-8g9cp\") pod \"cilium-kprqc\" (UID: \"35aa185c-4a44-461e-8c2e-40fc1ab3d9d4\") " pod="kube-system/cilium-kprqc"
Jan 13 20:53:30.234223 kubelet[1914]: I0113 20:53:30.233683 1914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/75b56408-c651-4005-884e-fbd4852535d4-cilium-config-path\") pod \"cilium-operator-5d85765b45-vxxhb\" (UID: \"75b56408-c651-4005-884e-fbd4852535d4\") " pod="kube-system/cilium-operator-5d85765b45-vxxhb"
Jan 13 20:53:30.234223 kubelet[1914]: I0113 20:53:30.233729 1914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/35aa185c-4a44-461e-8c2e-40fc1ab3d9d4-hostproc\") pod \"cilium-kprqc\" (UID: \"35aa185c-4a44-461e-8c2e-40fc1ab3d9d4\") " pod="kube-system/cilium-kprqc"
Jan 13 20:53:30.234223 kubelet[1914]: I0113 20:53:30.233764 1914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/35aa185c-4a44-461e-8c2e-40fc1ab3d9d4-cni-path\") pod \"cilium-kprqc\" (UID: \"35aa185c-4a44-461e-8c2e-40fc1ab3d9d4\") " pod="kube-system/cilium-kprqc"
Jan 13 20:53:30.234223 kubelet[1914]: I0113 20:53:30.233793 1914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/35aa185c-4a44-461e-8c2e-40fc1ab3d9d4-cilium-run\") pod \"cilium-kprqc\" (UID: \"35aa185c-4a44-461e-8c2e-40fc1ab3d9d4\") " pod="kube-system/cilium-kprqc"
Jan 13 20:53:30.234736 kubelet[1914]: I0113 20:53:30.233817 1914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/35aa185c-4a44-461e-8c2e-40fc1ab3d9d4-bpf-maps\") pod \"cilium-kprqc\" (UID: \"35aa185c-4a44-461e-8c2e-40fc1ab3d9d4\") " pod="kube-system/cilium-kprqc"
Jan 13 20:53:30.234736 kubelet[1914]: I0113 20:53:30.233840 1914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/35aa185c-4a44-461e-8c2e-40fc1ab3d9d4-cilium-cgroup\") pod \"cilium-kprqc\" (UID: \"35aa185c-4a44-461e-8c2e-40fc1ab3d9d4\") " pod="kube-system/cilium-kprqc"
Jan 13 20:53:30.234736 kubelet[1914]: I0113 20:53:30.233864 1914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/35aa185c-4a44-461e-8c2e-40fc1ab3d9d4-xtables-lock\") pod \"cilium-kprqc\" (UID: \"35aa185c-4a44-461e-8c2e-40fc1ab3d9d4\") " pod="kube-system/cilium-kprqc"
Jan 13 20:53:30.234736 kubelet[1914]: I0113 20:53:30.233887 1914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/35aa185c-4a44-461e-8c2e-40fc1ab3d9d4-clustermesh-secrets\") pod \"cilium-kprqc\" (UID: \"35aa185c-4a44-461e-8c2e-40fc1ab3d9d4\") " pod="kube-system/cilium-kprqc"
Jan 13 20:53:30.234736 kubelet[1914]: I0113 20:53:30.233919 1914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/35aa185c-4a44-461e-8c2e-40fc1ab3d9d4-cilium-config-path\") pod \"cilium-kprqc\" (UID: \"35aa185c-4a44-461e-8c2e-40fc1ab3d9d4\") " pod="kube-system/cilium-kprqc"
Jan 13 20:53:30.234736 kubelet[1914]: I0113 20:53:30.233969 1914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/35aa185c-4a44-461e-8c2e-40fc1ab3d9d4-host-proc-sys-net\") pod \"cilium-kprqc\" (UID: \"35aa185c-4a44-461e-8c2e-40fc1ab3d9d4\") " pod="kube-system/cilium-kprqc"
Jan 13 20:53:30.235069 kubelet[1914]: I0113 20:53:30.233999 1914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/35aa185c-4a44-461e-8c2e-40fc1ab3d9d4-host-proc-sys-kernel\") pod \"cilium-kprqc\" (UID: \"35aa185c-4a44-461e-8c2e-40fc1ab3d9d4\") " pod="kube-system/cilium-kprqc"
Jan 13 20:53:30.235069 kubelet[1914]: I0113 20:53:30.234060 1914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/35aa185c-4a44-461e-8c2e-40fc1ab3d9d4-hubble-tls\") pod \"cilium-kprqc\" (UID: \"35aa185c-4a44-461e-8c2e-40fc1ab3d9d4\") " pod="kube-system/cilium-kprqc"
Jan 13 20:53:30.235069 kubelet[1914]: I0113 20:53:30.234149 1914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/35aa185c-4a44-461e-8c2e-40fc1ab3d9d4-etc-cni-netd\") pod \"cilium-kprqc\" (UID: \"35aa185c-4a44-461e-8c2e-40fc1ab3d9d4\") " pod="kube-system/cilium-kprqc"
Jan 13 20:53:30.235069 kubelet[1914]: I0113 20:53:30.234238 1914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/35aa185c-4a44-461e-8c2e-40fc1ab3d9d4-lib-modules\") pod \"cilium-kprqc\" (UID: \"35aa185c-4a44-461e-8c2e-40fc1ab3d9d4\") " pod="kube-system/cilium-kprqc"
Jan 13 20:53:30.235069 kubelet[1914]: I0113 20:53:30.234298 1914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/35aa185c-4a44-461e-8c2e-40fc1ab3d9d4-cilium-ipsec-secrets\") pod \"cilium-kprqc\" (UID: \"35aa185c-4a44-461e-8c2e-40fc1ab3d9d4\") " pod="kube-system/cilium-kprqc"
Jan 13 20:53:30.235325 kubelet[1914]: I0113 20:53:30.234337 1914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54ncj\" (UniqueName: \"kubernetes.io/projected/75b56408-c651-4005-884e-fbd4852535d4-kube-api-access-54ncj\") pod \"cilium-operator-5d85765b45-vxxhb\" (UID: \"75b56408-c651-4005-884e-fbd4852535d4\") " pod="kube-system/cilium-operator-5d85765b45-vxxhb"
Jan 13 20:53:30.863526 kubelet[1914]: E0113 20:53:30.863432 1914 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:53:31.267295 containerd[1512]: time="2025-01-13T20:53:31.267230204Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-vxxhb,Uid:75b56408-c651-4005-884e-fbd4852535d4,Namespace:kube-system,Attempt:0,}"
Jan 13 20:53:31.291550 containerd[1512]: time="2025-01-13T20:53:31.291495871Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kprqc,Uid:35aa185c-4a44-461e-8c2e-40fc1ab3d9d4,Namespace:kube-system,Attempt:0,}"
Jan 13 20:53:31.311751 containerd[1512]: time="2025-01-13T20:53:31.308708152Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 20:53:31.311751 containerd[1512]: time="2025-01-13T20:53:31.308804322Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 20:53:31.311751 containerd[1512]: time="2025-01-13T20:53:31.308829187Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:53:31.311751 containerd[1512]: time="2025-01-13T20:53:31.308962099Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:53:31.337666 containerd[1512]: time="2025-01-13T20:53:31.336772699Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 20:53:31.337666 containerd[1512]: time="2025-01-13T20:53:31.336857280Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 20:53:31.337666 containerd[1512]: time="2025-01-13T20:53:31.336879724Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:53:31.337666 containerd[1512]: time="2025-01-13T20:53:31.337038001Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:53:31.361360 systemd[1]: Started cri-containerd-a78f1115cdf868ede7c64e279478b0e9ac85237b69fb2b60e947cdbc2242c42d.scope - libcontainer container a78f1115cdf868ede7c64e279478b0e9ac85237b69fb2b60e947cdbc2242c42d.
Jan 13 20:53:31.392238 systemd[1]: Started cri-containerd-7a0f19cb4ad9dc823c8f7e9963a3ece9fe7bbef5f47b5dd9af443d2f48701963.scope - libcontainer container 7a0f19cb4ad9dc823c8f7e9963a3ece9fe7bbef5f47b5dd9af443d2f48701963.
Jan 13 20:53:31.447354 containerd[1512]: time="2025-01-13T20:53:31.447082238Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kprqc,Uid:35aa185c-4a44-461e-8c2e-40fc1ab3d9d4,Namespace:kube-system,Attempt:0,} returns sandbox id \"7a0f19cb4ad9dc823c8f7e9963a3ece9fe7bbef5f47b5dd9af443d2f48701963\""
Jan 13 20:53:31.452811 containerd[1512]: time="2025-01-13T20:53:31.452748715Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-vxxhb,Uid:75b56408-c651-4005-884e-fbd4852535d4,Namespace:kube-system,Attempt:0,} returns sandbox id \"a78f1115cdf868ede7c64e279478b0e9ac85237b69fb2b60e947cdbc2242c42d\""
Jan 13 20:53:31.459365 containerd[1512]: time="2025-01-13T20:53:31.459289342Z" level=info msg="CreateContainer within sandbox \"7a0f19cb4ad9dc823c8f7e9963a3ece9fe7bbef5f47b5dd9af443d2f48701963\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 13 20:53:31.460118 containerd[1512]: time="2025-01-13T20:53:31.460085839Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Jan 13 20:53:31.485279 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3942457501.mount: Deactivated successfully.
Jan 13 20:53:31.490347 containerd[1512]: time="2025-01-13T20:53:31.490200669Z" level=info msg="CreateContainer within sandbox \"7a0f19cb4ad9dc823c8f7e9963a3ece9fe7bbef5f47b5dd9af443d2f48701963\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"2c01d3cfbe8acdbb7d9fa8f8d30893efe5a1fa7e3909d2d5d63ad5602e86cec1\""
Jan 13 20:53:31.492371 containerd[1512]: time="2025-01-13T20:53:31.491222735Z" level=info msg="StartContainer for \"2c01d3cfbe8acdbb7d9fa8f8d30893efe5a1fa7e3909d2d5d63ad5602e86cec1\""
Jan 13 20:53:31.536256 systemd[1]: Started cri-containerd-2c01d3cfbe8acdbb7d9fa8f8d30893efe5a1fa7e3909d2d5d63ad5602e86cec1.scope - libcontainer container 2c01d3cfbe8acdbb7d9fa8f8d30893efe5a1fa7e3909d2d5d63ad5602e86cec1.
Jan 13 20:53:31.579276 containerd[1512]: time="2025-01-13T20:53:31.578951364Z" level=info msg="StartContainer for \"2c01d3cfbe8acdbb7d9fa8f8d30893efe5a1fa7e3909d2d5d63ad5602e86cec1\" returns successfully"
Jan 13 20:53:31.598301 systemd[1]: cri-containerd-2c01d3cfbe8acdbb7d9fa8f8d30893efe5a1fa7e3909d2d5d63ad5602e86cec1.scope: Deactivated successfully.
Jan 13 20:53:31.642851 containerd[1512]: time="2025-01-13T20:53:31.642730435Z" level=info msg="shim disconnected" id=2c01d3cfbe8acdbb7d9fa8f8d30893efe5a1fa7e3909d2d5d63ad5602e86cec1 namespace=k8s.io
Jan 13 20:53:31.642851 containerd[1512]: time="2025-01-13T20:53:31.642851326Z" level=warning msg="cleaning up after shim disconnected" id=2c01d3cfbe8acdbb7d9fa8f8d30893efe5a1fa7e3909d2d5d63ad5602e86cec1 namespace=k8s.io
Jan 13 20:53:31.642851 containerd[1512]: time="2025-01-13T20:53:31.642867484Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:53:31.663049 containerd[1512]: time="2025-01-13T20:53:31.662912566Z" level=warning msg="cleanup warnings time=\"2025-01-13T20:53:31Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jan 13 20:53:31.809506 kubelet[1914]: E0113 20:53:31.809346 1914 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:53:31.863641 kubelet[1914]: E0113 20:53:31.863573 1914 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:53:31.953490 kubelet[1914]: E0113 20:53:31.953416 1914 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 13 20:53:32.212282 containerd[1512]: time="2025-01-13T20:53:32.212093589Z" level=info msg="CreateContainer within sandbox \"7a0f19cb4ad9dc823c8f7e9963a3ece9fe7bbef5f47b5dd9af443d2f48701963\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 13 20:53:32.224036 containerd[1512]: time="2025-01-13T20:53:32.223951693Z" level=info msg="CreateContainer within sandbox \"7a0f19cb4ad9dc823c8f7e9963a3ece9fe7bbef5f47b5dd9af443d2f48701963\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"615ff36d83385f754ca22e21c2218ba1eafc76266cc224f10e344da2822b3096\""
Jan 13 20:53:32.224786 containerd[1512]: time="2025-01-13T20:53:32.224744118Z" level=info msg="StartContainer for \"615ff36d83385f754ca22e21c2218ba1eafc76266cc224f10e344da2822b3096\""
Jan 13 20:53:32.260219 systemd[1]: Started cri-containerd-615ff36d83385f754ca22e21c2218ba1eafc76266cc224f10e344da2822b3096.scope - libcontainer container 615ff36d83385f754ca22e21c2218ba1eafc76266cc224f10e344da2822b3096.
Jan 13 20:53:32.298498 containerd[1512]: time="2025-01-13T20:53:32.298369254Z" level=info msg="StartContainer for \"615ff36d83385f754ca22e21c2218ba1eafc76266cc224f10e344da2822b3096\" returns successfully"
Jan 13 20:53:32.312591 systemd[1]: cri-containerd-615ff36d83385f754ca22e21c2218ba1eafc76266cc224f10e344da2822b3096.scope: Deactivated successfully.
Jan 13 20:53:32.346765 containerd[1512]: time="2025-01-13T20:53:32.346669284Z" level=info msg="shim disconnected" id=615ff36d83385f754ca22e21c2218ba1eafc76266cc224f10e344da2822b3096 namespace=k8s.io
Jan 13 20:53:32.346765 containerd[1512]: time="2025-01-13T20:53:32.346762575Z" level=warning msg="cleaning up after shim disconnected" id=615ff36d83385f754ca22e21c2218ba1eafc76266cc224f10e344da2822b3096 namespace=k8s.io
Jan 13 20:53:32.347101 containerd[1512]: time="2025-01-13T20:53:32.346778847Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:53:32.864209 kubelet[1914]: E0113 20:53:32.864126 1914 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:53:33.081091 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount352482748.mount: Deactivated successfully.
Jan 13 20:53:33.214823 containerd[1512]: time="2025-01-13T20:53:33.214149410Z" level=info msg="CreateContainer within sandbox \"7a0f19cb4ad9dc823c8f7e9963a3ece9fe7bbef5f47b5dd9af443d2f48701963\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 13 20:53:33.245570 containerd[1512]: time="2025-01-13T20:53:33.245306646Z" level=info msg="CreateContainer within sandbox \"7a0f19cb4ad9dc823c8f7e9963a3ece9fe7bbef5f47b5dd9af443d2f48701963\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"3a3ea8f06781d847ad94ee3f12c68951c59081f27629ed7dc4beb6b89a98b730\""
Jan 13 20:53:33.248037 containerd[1512]: time="2025-01-13T20:53:33.246495148Z" level=info msg="StartContainer for \"3a3ea8f06781d847ad94ee3f12c68951c59081f27629ed7dc4beb6b89a98b730\""
Jan 13 20:53:33.289330 systemd[1]: Started cri-containerd-3a3ea8f06781d847ad94ee3f12c68951c59081f27629ed7dc4beb6b89a98b730.scope - libcontainer container 3a3ea8f06781d847ad94ee3f12c68951c59081f27629ed7dc4beb6b89a98b730.
Jan 13 20:53:33.335659 containerd[1512]: time="2025-01-13T20:53:33.335573654Z" level=info msg="StartContainer for \"3a3ea8f06781d847ad94ee3f12c68951c59081f27629ed7dc4beb6b89a98b730\" returns successfully"
Jan 13 20:53:33.348273 systemd[1]: cri-containerd-3a3ea8f06781d847ad94ee3f12c68951c59081f27629ed7dc4beb6b89a98b730.scope: Deactivated successfully.
Jan 13 20:53:33.383855 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3a3ea8f06781d847ad94ee3f12c68951c59081f27629ed7dc4beb6b89a98b730-rootfs.mount: Deactivated successfully.
Jan 13 20:53:33.396581 containerd[1512]: time="2025-01-13T20:53:33.396436459Z" level=info msg="shim disconnected" id=3a3ea8f06781d847ad94ee3f12c68951c59081f27629ed7dc4beb6b89a98b730 namespace=k8s.io
Jan 13 20:53:33.396581 containerd[1512]: time="2025-01-13T20:53:33.396522015Z" level=warning msg="cleaning up after shim disconnected" id=3a3ea8f06781d847ad94ee3f12c68951c59081f27629ed7dc4beb6b89a98b730 namespace=k8s.io
Jan 13 20:53:33.396581 containerd[1512]: time="2025-01-13T20:53:33.396549637Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:53:33.498136 kubelet[1914]: I0113 20:53:33.497829 1914 setters.go:600] "Node became not ready" node="10.230.36.26" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-01-13T20:53:33Z","lastTransitionTime":"2025-01-13T20:53:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jan 13 20:53:33.864679 kubelet[1914]: E0113 20:53:33.864507 1914 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:53:34.219442 containerd[1512]: time="2025-01-13T20:53:34.219180570Z" level=info msg="CreateContainer within sandbox \"7a0f19cb4ad9dc823c8f7e9963a3ece9fe7bbef5f47b5dd9af443d2f48701963\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 13 20:53:34.240421 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3422718232.mount: Deactivated successfully.
Jan 13 20:53:34.244943 containerd[1512]: time="2025-01-13T20:53:34.244850413Z" level=info msg="CreateContainer within sandbox \"7a0f19cb4ad9dc823c8f7e9963a3ece9fe7bbef5f47b5dd9af443d2f48701963\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"190e83813cea4a2cc642585a3517ff4164eb6466b03dd775e1ec743c416974ba\""
Jan 13 20:53:34.245665 containerd[1512]: time="2025-01-13T20:53:34.245626977Z" level=info msg="StartContainer for \"190e83813cea4a2cc642585a3517ff4164eb6466b03dd775e1ec743c416974ba\""
Jan 13 20:53:34.287235 systemd[1]: Started cri-containerd-190e83813cea4a2cc642585a3517ff4164eb6466b03dd775e1ec743c416974ba.scope - libcontainer container 190e83813cea4a2cc642585a3517ff4164eb6466b03dd775e1ec743c416974ba.
Jan 13 20:53:34.320673 systemd[1]: cri-containerd-190e83813cea4a2cc642585a3517ff4164eb6466b03dd775e1ec743c416974ba.scope: Deactivated successfully.
Jan 13 20:53:34.323712 containerd[1512]: time="2025-01-13T20:53:34.323664076Z" level=info msg="StartContainer for \"190e83813cea4a2cc642585a3517ff4164eb6466b03dd775e1ec743c416974ba\" returns successfully"
Jan 13 20:53:34.349662 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-190e83813cea4a2cc642585a3517ff4164eb6466b03dd775e1ec743c416974ba-rootfs.mount: Deactivated successfully.
Jan 13 20:53:34.355577 containerd[1512]: time="2025-01-13T20:53:34.355288927Z" level=info msg="shim disconnected" id=190e83813cea4a2cc642585a3517ff4164eb6466b03dd775e1ec743c416974ba namespace=k8s.io
Jan 13 20:53:34.355577 containerd[1512]: time="2025-01-13T20:53:34.355387056Z" level=warning msg="cleaning up after shim disconnected" id=190e83813cea4a2cc642585a3517ff4164eb6466b03dd775e1ec743c416974ba namespace=k8s.io
Jan 13 20:53:34.355577 containerd[1512]: time="2025-01-13T20:53:34.355422431Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:53:34.865670 kubelet[1914]: E0113 20:53:34.865605 1914 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:53:35.223867 containerd[1512]: time="2025-01-13T20:53:35.223806759Z" level=info msg="CreateContainer within sandbox \"7a0f19cb4ad9dc823c8f7e9963a3ece9fe7bbef5f47b5dd9af443d2f48701963\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 13 20:53:35.243552 containerd[1512]: time="2025-01-13T20:53:35.243350310Z" level=info msg="CreateContainer within sandbox \"7a0f19cb4ad9dc823c8f7e9963a3ece9fe7bbef5f47b5dd9af443d2f48701963\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"5d792aa585eb67c29e892009b91ab95f7f5ee1f1e5c56daa1018f9af4d5aab37\""
Jan 13 20:53:35.243695 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1129974351.mount: Deactivated successfully.
Jan 13 20:53:35.246903 containerd[1512]: time="2025-01-13T20:53:35.245918136Z" level=info msg="StartContainer for \"5d792aa585eb67c29e892009b91ab95f7f5ee1f1e5c56daa1018f9af4d5aab37\""
Jan 13 20:53:35.283242 systemd[1]: Started cri-containerd-5d792aa585eb67c29e892009b91ab95f7f5ee1f1e5c56daa1018f9af4d5aab37.scope - libcontainer container 5d792aa585eb67c29e892009b91ab95f7f5ee1f1e5c56daa1018f9af4d5aab37.
Jan 13 20:53:35.334645 containerd[1512]: time="2025-01-13T20:53:35.334575835Z" level=info msg="StartContainer for \"5d792aa585eb67c29e892009b91ab95f7f5ee1f1e5c56daa1018f9af4d5aab37\" returns successfully"
Jan 13 20:53:35.866073 kubelet[1914]: E0113 20:53:35.865962 1914 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:53:36.016138 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Jan 13 20:53:36.255071 kubelet[1914]: I0113 20:53:36.254518 1914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-kprqc" podStartSLOduration=6.254500458 podStartE2EDuration="6.254500458s" podCreationTimestamp="2025-01-13 20:53:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:53:36.251922136 +0000 UTC m=+85.476836123" watchObservedRunningTime="2025-01-13 20:53:36.254500458 +0000 UTC m=+85.479414447"
Jan 13 20:53:36.866562 kubelet[1914]: E0113 20:53:36.866483 1914 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:53:37.867704 kubelet[1914]: E0113 20:53:37.867636 1914 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:53:38.868178 kubelet[1914]: E0113 20:53:38.868090 1914 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:53:39.659125 systemd-networkd[1425]: lxc_health: Link UP
Jan 13 20:53:39.666960 systemd-networkd[1425]: lxc_health: Gained carrier
Jan 13 20:53:39.868269 kubelet[1914]: E0113 20:53:39.868226 1914 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:53:40.869777 kubelet[1914]: E0113 20:53:40.869673 1914 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:53:40.896236 systemd-networkd[1425]: lxc_health: Gained IPv6LL
Jan 13 20:53:41.870813 kubelet[1914]: E0113 20:53:41.870753 1914 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:53:42.545072 systemd[1]: run-containerd-runc-k8s.io-5d792aa585eb67c29e892009b91ab95f7f5ee1f1e5c56daa1018f9af4d5aab37-runc.1Vwn4v.mount: Deactivated successfully.
Jan 13 20:53:42.871275 kubelet[1914]: E0113 20:53:42.871024 1914 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:53:43.048865 containerd[1512]: time="2025-01-13T20:53:43.048790773Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:53:43.056055 containerd[1512]: time="2025-01-13T20:53:43.054700452Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18906597"
Jan 13 20:53:43.058078 containerd[1512]: time="2025-01-13T20:53:43.058028820Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:53:43.059915 containerd[1512]: time="2025-01-13T20:53:43.059854924Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 11.599714015s"
Jan 13 20:53:43.060040 containerd[1512]: time="2025-01-13T20:53:43.059918297Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Jan 13 20:53:43.062813 containerd[1512]: time="2025-01-13T20:53:43.062767005Z" level=info msg="CreateContainer within sandbox \"a78f1115cdf868ede7c64e279478b0e9ac85237b69fb2b60e947cdbc2242c42d\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Jan 13 20:53:43.096304 containerd[1512]: time="2025-01-13T20:53:43.092888537Z" level=info msg="CreateContainer within sandbox \"a78f1115cdf868ede7c64e279478b0e9ac85237b69fb2b60e947cdbc2242c42d\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"06f157ddf7019dac5943f2c395e5f156b7529da221f3673ecd7bb4830fafd717\""
Jan 13 20:53:43.096304 containerd[1512]: time="2025-01-13T20:53:43.094401159Z" level=info msg="StartContainer for \"06f157ddf7019dac5943f2c395e5f156b7529da221f3673ecd7bb4830fafd717\""
Jan 13 20:53:43.149231 systemd[1]: Started cri-containerd-06f157ddf7019dac5943f2c395e5f156b7529da221f3673ecd7bb4830fafd717.scope - libcontainer container 06f157ddf7019dac5943f2c395e5f156b7529da221f3673ecd7bb4830fafd717.
Jan 13 20:53:43.197374 containerd[1512]: time="2025-01-13T20:53:43.197288786Z" level=info msg="StartContainer for \"06f157ddf7019dac5943f2c395e5f156b7529da221f3673ecd7bb4830fafd717\" returns successfully"
Jan 13 20:53:43.283522 kubelet[1914]: I0113 20:53:43.283441 1914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-vxxhb" podStartSLOduration=1.682349563 podStartE2EDuration="13.283423359s" podCreationTimestamp="2025-01-13 20:53:30 +0000 UTC" firstStartedPulling="2025-01-13 20:53:31.459711638 +0000 UTC m=+80.684625611" lastFinishedPulling="2025-01-13 20:53:43.060785439 +0000 UTC m=+92.285699407" observedRunningTime="2025-01-13 20:53:43.279525648 +0000 UTC m=+92.504439647" watchObservedRunningTime="2025-01-13 20:53:43.283423359 +0000 UTC m=+92.508337348"
Jan 13 20:53:43.872071 kubelet[1914]: E0113 20:53:43.871979 1914 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:53:44.873342 kubelet[1914]: E0113 20:53:44.873286 1914 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:53:45.874028 kubelet[1914]: E0113 20:53:45.873965 1914 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:53:46.874357 kubelet[1914]: E0113 20:53:46.874285 1914 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:53:47.874805 kubelet[1914]: E0113 20:53:47.874727 1914 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:53:48.875444 kubelet[1914]: E0113 20:53:48.875372 1914 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"