Jan 13 22:46:53.040528 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Jan 13 19:01:45 -00 2025
Jan 13 22:46:53.040583 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=1175b5bd4028ce8485b23b7d346f787308cbfa43cca7b1fefd4254406dce7d07
Jan 13 22:46:53.040598 kernel: BIOS-provided physical RAM map:
Jan 13 22:46:53.040616 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 13 22:46:53.040626 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 13 22:46:53.040636 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 13 22:46:53.040648 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable
Jan 13 22:46:53.040658 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved
Jan 13 22:46:53.040669 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jan 13 22:46:53.040679 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jan 13 22:46:53.040690 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 13 22:46:53.040700 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 13 22:46:53.040716 kernel: NX (Execute Disable) protection: active
Jan 13 22:46:53.040727 kernel: APIC: Static calls initialized
Jan 13 22:46:53.040739 kernel: SMBIOS 2.8 present.
Jan 13 22:46:53.040751 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014
Jan 13 22:46:53.040762 kernel: Hypervisor detected: KVM
Jan 13 22:46:53.040778 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 13 22:46:53.040790 kernel: kvm-clock: using sched offset of 4570141095 cycles
Jan 13 22:46:53.040802 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 13 22:46:53.040814 kernel: tsc: Detected 2499.998 MHz processor
Jan 13 22:46:53.040826 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 13 22:46:53.040837 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 13 22:46:53.040849 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000
Jan 13 22:46:53.040860 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 13 22:46:53.040872 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 13 22:46:53.040888 kernel: Using GB pages for direct mapping
Jan 13 22:46:53.040899 kernel: ACPI: Early table checksum verification disabled
Jan 13 22:46:53.040911 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Jan 13 22:46:53.040922 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 22:46:53.040934 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 22:46:53.040946 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 22:46:53.040957 kernel: ACPI: FACS 0x000000007FFDFD40 000040
Jan 13 22:46:53.040968 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 22:46:53.040979 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 22:46:53.040996 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 22:46:53.041007 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 22:46:53.041019 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480]
Jan 13 22:46:53.041030 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c]
Jan 13 22:46:53.041075 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f]
Jan 13 22:46:53.041096 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570]
Jan 13 22:46:53.041108 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740]
Jan 13 22:46:53.041125 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c]
Jan 13 22:46:53.041137 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4]
Jan 13 22:46:53.041149 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jan 13 22:46:53.041161 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jan 13 22:46:53.041172 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0
Jan 13 22:46:53.041184 kernel: SRAT: PXM 0 -> APIC 0x03 -> Node 0
Jan 13 22:46:53.041196 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0
Jan 13 22:46:53.041221 kernel: SRAT: PXM 0 -> APIC 0x05 -> Node 0
Jan 13 22:46:53.041235 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0
Jan 13 22:46:53.041247 kernel: SRAT: PXM 0 -> APIC 0x07 -> Node 0
Jan 13 22:46:53.041259 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0
Jan 13 22:46:53.041270 kernel: SRAT: PXM 0 -> APIC 0x09 -> Node 0
Jan 13 22:46:53.041282 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0
Jan 13 22:46:53.041294 kernel: SRAT: PXM 0 -> APIC 0x0b -> Node 0
Jan 13 22:46:53.041306 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0
Jan 13 22:46:53.041317 kernel: SRAT: PXM 0 -> APIC 0x0d -> Node 0
Jan 13 22:46:53.041329 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0
Jan 13 22:46:53.041347 kernel: SRAT: PXM 0 -> APIC 0x0f -> Node 0
Jan 13 22:46:53.041359 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Jan 13 22:46:53.041371 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Jan 13 22:46:53.041383 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug
Jan 13 22:46:53.041395 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00000000-0x7ffdbfff]
Jan 13 22:46:53.041408 kernel: NODE_DATA(0) allocated [mem 0x7ffd6000-0x7ffdbfff]
Jan 13 22:46:53.041428 kernel: Zone ranges:
Jan 13 22:46:53.041441 kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Jan 13 22:46:53.041453 kernel:   DMA32    [mem 0x0000000001000000-0x000000007ffdbfff]
Jan 13 22:46:53.041471 kernel:   Normal   empty
Jan 13 22:46:53.041483 kernel: Movable zone start for each node
Jan 13 22:46:53.041510 kernel: Early memory node ranges
Jan 13 22:46:53.041522 kernel:   node   0: [mem 0x0000000000001000-0x000000000009efff]
Jan 13 22:46:53.041535 kernel:   node   0: [mem 0x0000000000100000-0x000000007ffdbfff]
Jan 13 22:46:53.041546 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff]
Jan 13 22:46:53.041558 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 13 22:46:53.041570 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 13 22:46:53.041582 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges
Jan 13 22:46:53.041594 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 13 22:46:53.041622 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 13 22:46:53.041634 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 13 22:46:53.041646 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 13 22:46:53.041658 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 13 22:46:53.041670 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 13 22:46:53.041682 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 13 22:46:53.041694 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 13 22:46:53.041706 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 13 22:46:53.041718 kernel: TSC deadline timer available
Jan 13 22:46:53.041735 kernel: smpboot: Allowing 16 CPUs, 14 hotplug CPUs
Jan 13 22:46:53.041748 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 13 22:46:53.041760 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jan 13 22:46:53.041772 kernel: Booting paravirtualized kernel on KVM
Jan 13 22:46:53.041784 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 13 22:46:53.041796 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1
Jan 13 22:46:53.041809 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u262144
Jan 13 22:46:53.041829 kernel: pcpu-alloc: s197032 r8192 d32344 u262144 alloc=1*2097152
Jan 13 22:46:53.041843 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Jan 13 22:46:53.041860 kernel: kvm-guest: PV spinlocks enabled
Jan 13 22:46:53.041872 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 13 22:46:53.041886 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=1175b5bd4028ce8485b23b7d346f787308cbfa43cca7b1fefd4254406dce7d07
Jan 13 22:46:53.041899 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 13 22:46:53.041911 kernel: random: crng init done
Jan 13 22:46:53.041923 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 13 22:46:53.041935 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 13 22:46:53.041947 kernel: Fallback order for Node 0: 0
Jan 13 22:46:53.041964 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 515804
Jan 13 22:46:53.041976 kernel: Policy zone: DMA32
Jan 13 22:46:53.041988 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 13 22:46:53.042000 kernel: software IO TLB: area num 16.
Jan 13 22:46:53.042012 kernel: Memory: 1901532K/2096616K available (12288K kernel code, 2299K rwdata, 22736K rodata, 42976K init, 2216K bss, 194824K reserved, 0K cma-reserved)
Jan 13 22:46:53.042024 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Jan 13 22:46:53.042058 kernel: Kernel/User page tables isolation: enabled
Jan 13 22:46:53.042073 kernel: ftrace: allocating 37920 entries in 149 pages
Jan 13 22:46:53.042085 kernel: ftrace: allocated 149 pages with 4 groups
Jan 13 22:46:53.042104 kernel: Dynamic Preempt: voluntary
Jan 13 22:46:53.042116 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 13 22:46:53.042129 kernel: rcu: RCU event tracing is enabled.
Jan 13 22:46:53.042141 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Jan 13 22:46:53.042154 kernel: Trampoline variant of Tasks RCU enabled.
Jan 13 22:46:53.042208 kernel: Rude variant of Tasks RCU enabled.
Jan 13 22:46:53.042232 kernel: Tracing variant of Tasks RCU enabled.
Jan 13 22:46:53.042245 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 13 22:46:53.042258 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Jan 13 22:46:53.042270 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16
Jan 13 22:46:53.042283 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 13 22:46:53.042295 kernel: Console: colour VGA+ 80x25
Jan 13 22:46:53.042313 kernel: printk: console [tty0] enabled
Jan 13 22:46:53.042327 kernel: printk: console [ttyS0] enabled
Jan 13 22:46:53.042339 kernel: ACPI: Core revision 20230628
Jan 13 22:46:53.042352 kernel: APIC: Switch to symmetric I/O mode setup
Jan 13 22:46:53.042364 kernel: x2apic enabled
Jan 13 22:46:53.042382 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 13 22:46:53.042395 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Jan 13 22:46:53.042408 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998)
Jan 13 22:46:53.042421 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 13 22:46:53.042434 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Jan 13 22:46:53.042446 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Jan 13 22:46:53.042459 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 13 22:46:53.042471 kernel: Spectre V2 : Mitigation: Retpolines
Jan 13 22:46:53.042483 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 13 22:46:53.042526 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 13 22:46:53.042540 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Jan 13 22:46:53.042553 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 13 22:46:53.042565 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 13 22:46:53.042577 kernel: MDS: Mitigation: Clear CPU buffers
Jan 13 22:46:53.042590 kernel: MMIO Stale Data: Unknown: No mitigations
Jan 13 22:46:53.042602 kernel: SRBDS: Unknown: Dependent on hypervisor status
Jan 13 22:46:53.042615 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 13 22:46:53.042628 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 13 22:46:53.042640 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 13 22:46:53.042652 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 13 22:46:53.042670 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Jan 13 22:46:53.042683 kernel: Freeing SMP alternatives memory: 32K
Jan 13 22:46:53.042696 kernel: pid_max: default: 32768 minimum: 301
Jan 13 22:46:53.042708 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 13 22:46:53.042721 kernel: landlock: Up and running.
Jan 13 22:46:53.042733 kernel: SELinux: Initializing.
Jan 13 22:46:53.042746 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 13 22:46:53.042759 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 13 22:46:53.042771 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9)
Jan 13 22:46:53.042784 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Jan 13 22:46:53.042797 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Jan 13 22:46:53.042815 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Jan 13 22:46:53.042828 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only.
Jan 13 22:46:53.042841 kernel: signal: max sigframe size: 1776
Jan 13 22:46:53.042853 kernel: rcu: Hierarchical SRCU implementation.
Jan 13 22:46:53.042866 kernel: rcu: Max phase no-delay instances is 400.
Jan 13 22:46:53.042879 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 13 22:46:53.042891 kernel: smp: Bringing up secondary CPUs ...
Jan 13 22:46:53.042914 kernel: smpboot: x86: Booting SMP configuration:
Jan 13 22:46:53.042927 kernel: .... node #0, CPUs: #1
Jan 13 22:46:53.042946 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1
Jan 13 22:46:53.042959 kernel: smp: Brought up 1 node, 2 CPUs
Jan 13 22:46:53.042971 kernel: smpboot: Max logical packages: 16
Jan 13 22:46:53.042984 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS)
Jan 13 22:46:53.042997 kernel: devtmpfs: initialized
Jan 13 22:46:53.043009 kernel: x86/mm: Memory block size: 128MB
Jan 13 22:46:53.043022 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 13 22:46:53.043052 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Jan 13 22:46:53.043074 kernel: pinctrl core: initialized pinctrl subsystem
Jan 13 22:46:53.043093 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 13 22:46:53.043106 kernel: audit: initializing netlink subsys (disabled)
Jan 13 22:46:53.043119 kernel: audit: type=2000 audit(1736808411.167:1): state=initialized audit_enabled=0 res=1
Jan 13 22:46:53.043131 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 13 22:46:53.043144 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 13 22:46:53.043156 kernel: cpuidle: using governor menu
Jan 13 22:46:53.043169 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 13 22:46:53.043182 kernel: dca service started, version 1.12.1
Jan 13 22:46:53.043194 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Jan 13 22:46:53.043213 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jan 13 22:46:53.043226 kernel: PCI: Using configuration type 1 for base access
Jan 13 22:46:53.043238 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 13 22:46:53.043251 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 13 22:46:53.043264 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 13 22:46:53.043277 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 13 22:46:53.043289 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 13 22:46:53.043302 kernel: ACPI: Added _OSI(Module Device)
Jan 13 22:46:53.043315 kernel: ACPI: Added _OSI(Processor Device)
Jan 13 22:46:53.043333 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 13 22:46:53.043345 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 13 22:46:53.043358 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 13 22:46:53.043371 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 13 22:46:53.043398 kernel: ACPI: Interpreter enabled
Jan 13 22:46:53.043412 kernel: ACPI: PM: (supports S0 S5)
Jan 13 22:46:53.043424 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 13 22:46:53.043437 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 13 22:46:53.043450 kernel: PCI: Using E820 reservations for host bridge windows
Jan 13 22:46:53.043469 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 13 22:46:53.043482 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 13 22:46:53.043809 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 13 22:46:53.043992 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 13 22:46:53.044194 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 13 22:46:53.044215 kernel: PCI host bridge to bus 0000:00
Jan 13 22:46:53.044419 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 13 22:46:53.044719 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 13 22:46:53.044943 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 13 22:46:53.045773 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Jan 13 22:46:53.045934 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 13 22:46:53.046118 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window]
Jan 13 22:46:53.046287 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 13 22:46:53.046514 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Jan 13 22:46:53.046744 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000
Jan 13 22:46:53.046930 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfa000000-0xfbffffff pref]
Jan 13 22:46:53.047168 kernel: pci 0000:00:01.0: reg 0x14: [mem 0xfea50000-0xfea50fff]
Jan 13 22:46:53.047339 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea40000-0xfea4ffff pref]
Jan 13 22:46:53.047539 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 13 22:46:53.047763 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Jan 13 22:46:53.047953 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea51000-0xfea51fff]
Jan 13 22:46:53.048249 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Jan 13 22:46:53.048424 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea52000-0xfea52fff]
Jan 13 22:46:53.048656 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Jan 13 22:46:53.048834 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea53000-0xfea53fff]
Jan 13 22:46:53.049047 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Jan 13 22:46:53.049302 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea54000-0xfea54fff]
Jan 13 22:46:53.049548 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Jan 13 22:46:53.049759 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea55000-0xfea55fff]
Jan 13 22:46:53.049959 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Jan 13 22:46:53.050229 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea56000-0xfea56fff]
Jan 13 22:46:53.050417 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Jan 13 22:46:53.050610 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea57000-0xfea57fff]
Jan 13 22:46:53.050806 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Jan 13 22:46:53.050973 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea58000-0xfea58fff]
Jan 13 22:46:53.051225 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Jan 13 22:46:53.051393 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0c0-0xc0df]
Jan 13 22:46:53.051574 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfea59000-0xfea59fff]
Jan 13 22:46:53.051738 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref]
Jan 13 22:46:53.051912 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfea00000-0xfea3ffff pref]
Jan 13 22:46:53.053078 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Jan 13 22:46:53.053255 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Jan 13 22:46:53.053422 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfea5a000-0xfea5afff]
Jan 13 22:46:53.053620 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfd004000-0xfd007fff 64bit pref]
Jan 13 22:46:53.053822 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Jan 13 22:46:53.053989 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 13 22:46:53.054261 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Jan 13 22:46:53.055089 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0e0-0xc0ff]
Jan 13 22:46:53.055283 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea5b000-0xfea5bfff]
Jan 13 22:46:53.055482 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Jan 13 22:46:53.055666 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Jan 13 22:46:53.055873 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400
Jan 13 22:46:53.056079 kernel: pci 0000:01:00.0: reg 0x10: [mem 0xfda00000-0xfda000ff 64bit]
Jan 13 22:46:53.056252 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Jan 13 22:46:53.056419 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Jan 13 22:46:53.056603 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Jan 13 22:46:53.056800 kernel: pci_bus 0000:02: extended config space not accessible
Jan 13 22:46:53.058266 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000
Jan 13 22:46:53.058476 kernel: pci 0000:02:01.0: reg 0x10: [mem 0xfd800000-0xfd80000f]
Jan 13 22:46:53.058713 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Jan 13 22:46:53.058935 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Jan 13 22:46:53.060291 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330
Jan 13 22:46:53.060474 kernel: pci 0000:03:00.0: reg 0x10: [mem 0xfe800000-0xfe803fff 64bit]
Jan 13 22:46:53.060661 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Jan 13 22:46:53.060829 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Jan 13 22:46:53.061004 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Jan 13 22:46:53.061223 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00
Jan 13 22:46:53.061401 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref]
Jan 13 22:46:53.061603 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Jan 13 22:46:53.061771 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Jan 13 22:46:53.061936 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Jan 13 22:46:53.064199 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Jan 13 22:46:53.064382 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Jan 13 22:46:53.064581 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Jan 13 22:46:53.064758 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Jan 13 22:46:53.064928 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Jan 13 22:46:53.066176 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Jan 13 22:46:53.066351 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Jan 13 22:46:53.066535 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Jan 13 22:46:53.066703 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Jan 13 22:46:53.066870 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Jan 13 22:46:53.068092 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Jan 13 22:46:53.068280 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Jan 13 22:46:53.068468 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Jan 13 22:46:53.068692 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Jan 13 22:46:53.068871 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Jan 13 22:46:53.068892 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 13 22:46:53.068906 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 13 22:46:53.068919 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 13 22:46:53.068940 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 13 22:46:53.068954 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 13 22:46:53.068967 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 13 22:46:53.068980 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 13 22:46:53.068992 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 13 22:46:53.069005 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 13 22:46:53.069018 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 13 22:46:53.069030 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 13 22:46:53.071087 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 13 22:46:53.071110 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 13 22:46:53.071124 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 13 22:46:53.071137 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 13 22:46:53.071150 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 13 22:46:53.071163 kernel: iommu: Default domain type: Translated
Jan 13 22:46:53.071176 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 13 22:46:53.071188 kernel: PCI: Using ACPI for IRQ routing
Jan 13 22:46:53.071201 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 13 22:46:53.071214 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 13 22:46:53.071251 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff]
Jan 13 22:46:53.071446 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 13 22:46:53.071637 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 13 22:46:53.071806 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 13 22:46:53.071827 kernel: vgaarb: loaded
Jan 13 22:46:53.071841 kernel: clocksource: Switched to clocksource kvm-clock
Jan 13 22:46:53.071866 kernel: VFS: Disk quotas dquot_6.6.0
Jan 13 22:46:53.071880 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 13 22:46:53.071893 kernel: pnp: PnP ACPI init
Jan 13 22:46:53.072140 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Jan 13 22:46:53.072163 kernel: pnp: PnP ACPI: found 5 devices
Jan 13 22:46:53.072177 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 13 22:46:53.072199 kernel: NET: Registered PF_INET protocol family
Jan 13 22:46:53.072212 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 13 22:46:53.072225 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Jan 13 22:46:53.072238 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 13 22:46:53.072261 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 13 22:46:53.072283 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jan 13 22:46:53.072296 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Jan 13 22:46:53.072308 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 13 22:46:53.072321 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 13 22:46:53.072341 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 13 22:46:53.072353 kernel: NET: Registered PF_XDP protocol family
Jan 13 22:46:53.072547 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000
Jan 13 22:46:53.072719 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Jan 13 22:46:53.072895 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Jan 13 22:46:53.074117 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Jan 13 22:46:53.074311 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Jan 13 22:46:53.074483 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Jan 13 22:46:53.074689 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Jan 13 22:46:53.074861 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Jan 13 22:46:53.076068 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Jan 13 22:46:53.076257 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Jan 13 22:46:53.076431 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Jan 13 22:46:53.076632 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Jan 13 22:46:53.076801 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Jan 13 22:46:53.079073 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Jan 13 22:46:53.079268 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Jan 13 22:46:53.079444 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Jan 13 22:46:53.079666 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Jan 13 22:46:53.079851 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Jan 13 22:46:53.080026 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Jan 13 22:46:53.080234 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Jan 13 22:46:53.080404 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Jan 13 22:46:53.080636 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Jan 13 22:46:53.080807 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Jan 13 22:46:53.080974 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Jan 13 22:46:53.081837 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Jan 13 22:46:53.082011 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Jan 13 22:46:53.082202 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Jan 13 22:46:53.082378 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Jan 13 22:46:53.082595 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Jan 13 22:46:53.082784 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Jan 13 22:46:53.082964 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Jan 13 22:46:53.089285 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Jan 13 22:46:53.089529 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Jan 13 22:46:53.089707 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Jan 13 22:46:53.089884 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Jan 13 22:46:53.090072 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Jan 13 22:46:53.090260 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Jan 13 22:46:53.090432 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Jan 13 22:46:53.090628 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Jan 13 22:46:53.093243 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Jan 13 22:46:53.093422 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Jan 13 22:46:53.093627 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Jan 13 22:46:53.093802 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Jan 13 22:46:53.093976 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Jan 13 22:46:53.094222 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Jan 13 22:46:53.094414 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Jan 13 22:46:53.094612 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Jan 13 22:46:53.094810 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Jan 13 22:46:53.095020 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Jan 13 22:46:53.097215 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Jan 13 22:46:53.097378 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 13 22:46:53.097547 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 13 22:46:53.097713 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 13 22:46:53.097912 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Jan 13 22:46:53.098090 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jan 13 22:46:53.098253 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window]
Jan 13 22:46:53.098438 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Jan 13 22:46:53.098616 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff]
Jan 13 22:46:53.098778 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Jan 13 22:46:53.098964 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff]
Jan 13 22:46:53.103223 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff]
Jan 13 22:46:53.103397 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff]
Jan 13 22:46:53.103578 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Jan 13 22:46:53.103761 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff]
Jan 13 22:46:53.103921 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff]
Jan 13 22:46:53.104155 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Jan 13 22:46:53.104335 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff]
Jan 13 22:46:53.104508 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff]
Jan 13 22:46:53.104668 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Jan 13 22:46:53.105017 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff]
Jan 13 22:46:53.105227 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff]
Jan 13 22:46:53.105419 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Jan 13 22:46:53.105657 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff]
Jan 13 22:46:53.105828 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff]
Jan 13 22:46:53.105984 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref]
Jan 13 22:46:53.106245 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff]
Jan 13 22:46:53.106409 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff]
Jan 13 22:46:53.106588 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref]
Jan 13 22:46:53.106767 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff]
Jan 13 22:46:53.106928 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff]
Jan 13 22:46:53.107180 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Jan 13 22:46:53.107204 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 13 22:46:53.107218 kernel: PCI: CLS 0 bytes, default 64
Jan 13 22:46:53.107232 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan
13 22:46:53.107246 kernel: software IO TLB: mapped [mem 0x0000000079800000-0x000000007d800000] (64MB) Jan 13 22:46:53.107260 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jan 13 22:46:53.107273 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns Jan 13 22:46:53.107287 kernel: Initialise system trusted keyrings Jan 13 22:46:53.107311 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jan 13 22:46:53.107325 kernel: Key type asymmetric registered Jan 13 22:46:53.107338 kernel: Asymmetric key parser 'x509' registered Jan 13 22:46:53.107352 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 13 22:46:53.107365 kernel: io scheduler mq-deadline registered Jan 13 22:46:53.107379 kernel: io scheduler kyber registered Jan 13 22:46:53.107392 kernel: io scheduler bfq registered Jan 13 22:46:53.107609 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Jan 13 22:46:53.107784 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Jan 13 22:46:53.107961 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 13 22:46:53.108165 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Jan 13 22:46:53.108335 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Jan 13 22:46:53.108519 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 13 22:46:53.108695 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Jan 13 22:46:53.108864 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Jan 13 22:46:53.109077 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 13 22:46:53.109252 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Jan 13 
22:46:53.109419 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Jan 13 22:46:53.109600 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 13 22:46:53.109769 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Jan 13 22:46:53.109935 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Jan 13 22:46:53.110141 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 13 22:46:53.110311 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Jan 13 22:46:53.110477 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Jan 13 22:46:53.110660 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 13 22:46:53.110827 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Jan 13 22:46:53.110994 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Jan 13 22:46:53.111219 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 13 22:46:53.111392 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Jan 13 22:46:53.111576 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Jan 13 22:46:53.111742 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 13 22:46:53.111764 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 13 22:46:53.111779 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jan 13 22:46:53.111801 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jan 13 22:46:53.111815 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 13 22:46:53.111829 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 13 22:46:53.111843 kernel: 
i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 13 22:46:53.111856 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 13 22:46:53.111870 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 13 22:46:53.111883 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 13 22:46:53.112078 kernel: rtc_cmos 00:03: RTC can wake from S4 Jan 13 22:46:53.112241 kernel: rtc_cmos 00:03: registered as rtc0 Jan 13 22:46:53.112409 kernel: rtc_cmos 00:03: setting system clock to 2025-01-13T22:46:52 UTC (1736808412) Jan 13 22:46:53.112623 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Jan 13 22:46:53.112645 kernel: intel_pstate: CPU model not supported Jan 13 22:46:53.112659 kernel: NET: Registered PF_INET6 protocol family Jan 13 22:46:53.112672 kernel: Segment Routing with IPv6 Jan 13 22:46:53.112686 kernel: In-situ OAM (IOAM) with IPv6 Jan 13 22:46:53.112699 kernel: NET: Registered PF_PACKET protocol family Jan 13 22:46:53.112713 kernel: Key type dns_resolver registered Jan 13 22:46:53.112733 kernel: IPI shorthand broadcast: enabled Jan 13 22:46:53.112747 kernel: sched_clock: Marking stable (1290004034, 236005327)->(1656368967, -130359606) Jan 13 22:46:53.112761 kernel: registered taskstats version 1 Jan 13 22:46:53.112774 kernel: Loading compiled-in X.509 certificates Jan 13 22:46:53.112788 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 98739e9049f62881f4df7ffd1e39335f7f55b344' Jan 13 22:46:53.112801 kernel: Key type .fscrypt registered Jan 13 22:46:53.112814 kernel: Key type fscrypt-provisioning registered Jan 13 22:46:53.112828 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jan 13 22:46:53.112841 kernel: ima: Allocated hash algorithm: sha1 Jan 13 22:46:53.112864 kernel: ima: No architecture policies found Jan 13 22:46:53.112877 kernel: clk: Disabling unused clocks Jan 13 22:46:53.112890 kernel: Freeing unused kernel image (initmem) memory: 42976K Jan 13 22:46:53.112904 kernel: Write protecting the kernel read-only data: 36864k Jan 13 22:46:53.112918 kernel: Freeing unused kernel image (rodata/data gap) memory: 1840K Jan 13 22:46:53.112931 kernel: Run /init as init process Jan 13 22:46:53.112944 kernel: with arguments: Jan 13 22:46:53.112958 kernel: /init Jan 13 22:46:53.112971 kernel: with environment: Jan 13 22:46:53.112989 kernel: HOME=/ Jan 13 22:46:53.113003 kernel: TERM=linux Jan 13 22:46:53.113016 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 13 22:46:53.113078 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 13 22:46:53.113101 systemd[1]: Detected virtualization kvm. Jan 13 22:46:53.113116 systemd[1]: Detected architecture x86-64. Jan 13 22:46:53.113130 systemd[1]: Running in initrd. Jan 13 22:46:53.113143 systemd[1]: No hostname configured, using default hostname. Jan 13 22:46:53.113165 systemd[1]: Hostname set to . Jan 13 22:46:53.113180 systemd[1]: Initializing machine ID from VM UUID. Jan 13 22:46:53.113194 systemd[1]: Queued start job for default target initrd.target. Jan 13 22:46:53.113209 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 22:46:53.113223 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Jan 13 22:46:53.113238 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 13 22:46:53.113253 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 13 22:46:53.113267 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 13 22:46:53.113287 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 13 22:46:53.113304 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 13 22:46:53.113319 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 13 22:46:53.113333 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 22:46:53.113347 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 13 22:46:53.113362 systemd[1]: Reached target paths.target - Path Units. Jan 13 22:46:53.113382 systemd[1]: Reached target slices.target - Slice Units. Jan 13 22:46:53.113396 systemd[1]: Reached target swap.target - Swaps. Jan 13 22:46:53.113411 systemd[1]: Reached target timers.target - Timer Units. Jan 13 22:46:53.113425 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 13 22:46:53.113440 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 13 22:46:53.113454 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 13 22:46:53.113468 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 13 22:46:53.113483 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 13 22:46:53.113511 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 13 22:46:53.113532 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Jan 13 22:46:53.113547 systemd[1]: Reached target sockets.target - Socket Units. Jan 13 22:46:53.113561 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 13 22:46:53.113575 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 13 22:46:53.113589 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 13 22:46:53.113604 systemd[1]: Starting systemd-fsck-usr.service... Jan 13 22:46:53.113618 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 13 22:46:53.113632 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 13 22:46:53.113646 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 22:46:53.113707 systemd-journald[202]: Collecting audit messages is disabled. Jan 13 22:46:53.113740 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 13 22:46:53.113755 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 22:46:53.113769 systemd[1]: Finished systemd-fsck-usr.service. Jan 13 22:46:53.113791 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 13 22:46:53.113806 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 13 22:46:53.113820 kernel: Bridge firewalling registered Jan 13 22:46:53.113834 systemd-journald[202]: Journal started Jan 13 22:46:53.113866 systemd-journald[202]: Runtime Journal (/run/log/journal/04ae6d22867f4b65b1e09edd83e7ab02) is 4.7M, max 38.0M, 33.2M free. Jan 13 22:46:53.057409 systemd-modules-load[203]: Inserted module 'overlay' Jan 13 22:46:53.173645 systemd[1]: Started systemd-journald.service - Journal Service. Jan 13 22:46:53.105008 systemd-modules-load[203]: Inserted module 'br_netfilter' Jan 13 22:46:53.175913 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
Jan 13 22:46:53.177013 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 22:46:53.178706 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 13 22:46:53.192283 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 13 22:46:53.200357 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 13 22:46:53.207254 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 13 22:46:53.211061 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 13 22:46:53.217947 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 13 22:46:53.229409 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 22:46:53.238281 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 13 22:46:53.240091 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 22:46:53.241604 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 22:46:53.252272 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 13 22:46:53.263213 dracut-cmdline[233]: dracut-dracut-053 Jan 13 22:46:53.268762 dracut-cmdline[233]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=1175b5bd4028ce8485b23b7d346f787308cbfa43cca7b1fefd4254406dce7d07 Jan 13 22:46:53.298464 systemd-resolved[238]: Positive Trust Anchors: Jan 13 22:46:53.298527 systemd-resolved[238]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 13 22:46:53.298573 systemd-resolved[238]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 13 22:46:53.303065 systemd-resolved[238]: Defaulting to hostname 'linux'. Jan 13 22:46:53.305016 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 13 22:46:53.308272 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 13 22:46:53.387099 kernel: SCSI subsystem initialized Jan 13 22:46:53.399103 kernel: Loading iSCSI transport class v2.0-870. Jan 13 22:46:53.412073 kernel: iscsi: registered transport (tcp) Jan 13 22:46:53.438347 kernel: iscsi: registered transport (qla4xxx) Jan 13 22:46:53.438440 kernel: QLogic iSCSI HBA Driver Jan 13 22:46:53.496551 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 13 22:46:53.503302 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 13 22:46:53.537430 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Jan 13 22:46:53.537521 kernel: device-mapper: uevent: version 1.0.3 Jan 13 22:46:53.539897 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 13 22:46:53.589089 kernel: raid6: sse2x4 gen() 13813 MB/s Jan 13 22:46:53.607076 kernel: raid6: sse2x2 gen() 9373 MB/s Jan 13 22:46:53.625778 kernel: raid6: sse2x1 gen() 10100 MB/s Jan 13 22:46:53.625864 kernel: raid6: using algorithm sse2x4 gen() 13813 MB/s Jan 13 22:46:53.644762 kernel: raid6: .... xor() 7738 MB/s, rmw enabled Jan 13 22:46:53.644852 kernel: raid6: using ssse3x2 recovery algorithm Jan 13 22:46:53.671107 kernel: xor: automatically using best checksumming function avx Jan 13 22:46:53.867113 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 13 22:46:53.881640 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 13 22:46:53.890262 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 22:46:53.910867 systemd-udevd[421]: Using default interface naming scheme 'v255'. Jan 13 22:46:53.918316 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 22:46:53.925236 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 13 22:46:53.957972 dracut-pre-trigger[426]: rd.md=0: removing MD RAID activation Jan 13 22:46:54.001067 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 13 22:46:54.006251 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 13 22:46:54.123534 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 22:46:54.133465 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 13 22:46:54.161585 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 13 22:46:54.165283 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Jan 13 22:46:54.168261 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 22:46:54.170304 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 13 22:46:54.177251 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 13 22:46:54.206415 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 13 22:46:54.272623 kernel: virtio_blk virtio1: 2/0/0 default/read/poll queues Jan 13 22:46:54.335343 kernel: cryptd: max_cpu_qlen set to 1000 Jan 13 22:46:54.335381 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Jan 13 22:46:54.335600 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 13 22:46:54.335623 kernel: GPT:17805311 != 125829119 Jan 13 22:46:54.335653 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 13 22:46:54.335672 kernel: GPT:17805311 != 125829119 Jan 13 22:46:54.335689 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 13 22:46:54.335707 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 13 22:46:54.335725 kernel: AVX version of gcm_enc/dec engaged. Jan 13 22:46:54.335743 kernel: AES CTR mode by8 optimization enabled Jan 13 22:46:54.282518 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 13 22:46:54.490146 kernel: libata version 3.00 loaded. 
Jan 13 22:46:54.490199 kernel: ACPI: bus type USB registered Jan 13 22:46:54.490221 kernel: usbcore: registered new interface driver usbfs Jan 13 22:46:54.490240 kernel: usbcore: registered new interface driver hub Jan 13 22:46:54.490258 kernel: usbcore: registered new device driver usb Jan 13 22:46:54.490276 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (469) Jan 13 22:46:54.490295 kernel: BTRFS: device fsid 5e7921ba-229a-48a0-bc77-9b30aaa34aeb devid 1 transid 36 /dev/vda3 scanned by (udev-worker) (465) Jan 13 22:46:54.490313 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Jan 13 22:46:54.490829 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1 Jan 13 22:46:54.491126 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Jan 13 22:46:54.491419 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Jan 13 22:46:54.491703 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2 Jan 13 22:46:54.492004 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed Jan 13 22:46:54.492237 kernel: hub 1-0:1.0: USB hub found Jan 13 22:46:54.492580 kernel: hub 1-0:1.0: 4 ports detected Jan 13 22:46:54.492846 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. 
Jan 13 22:46:54.493261 kernel: hub 2-0:1.0: USB hub found Jan 13 22:46:54.493521 kernel: hub 2-0:1.0: 4 ports detected Jan 13 22:46:54.493738 kernel: ahci 0000:00:1f.2: version 3.0 Jan 13 22:46:54.495944 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 13 22:46:54.495969 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Jan 13 22:46:54.496674 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 13 22:46:54.496925 kernel: scsi host0: ahci Jan 13 22:46:54.497239 kernel: scsi host1: ahci Jan 13 22:46:54.497454 kernel: scsi host2: ahci Jan 13 22:46:54.497704 kernel: scsi host3: ahci Jan 13 22:46:54.497918 kernel: scsi host4: ahci Jan 13 22:46:54.498184 kernel: scsi host5: ahci Jan 13 22:46:54.498419 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 41 Jan 13 22:46:54.498450 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 41 Jan 13 22:46:54.498510 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 41 Jan 13 22:46:54.498535 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 41 Jan 13 22:46:54.498553 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 41 Jan 13 22:46:54.498571 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 41 Jan 13 22:46:54.282693 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 22:46:54.283688 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 13 22:46:54.284443 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 13 22:46:54.284636 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 22:46:54.285413 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 22:46:54.304169 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Jan 13 22:46:54.461194 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 13 22:46:54.489683 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 22:46:54.522413 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 13 22:46:54.530586 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 13 22:46:54.536874 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jan 13 22:46:54.537797 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 13 22:46:54.550267 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 13 22:46:54.554231 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 13 22:46:54.561677 disk-uuid[563]: Primary Header is updated. Jan 13 22:46:54.561677 disk-uuid[563]: Secondary Entries is updated. Jan 13 22:46:54.561677 disk-uuid[563]: Secondary Header is updated. Jan 13 22:46:54.569092 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 13 22:46:54.594400 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 13 22:46:54.677063 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Jan 13 22:46:54.805115 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 13 22:46:54.805223 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 13 22:46:54.807385 kernel: ata3: SATA link down (SStatus 0 SControl 300) Jan 13 22:46:54.809271 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 13 22:46:54.811745 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 13 22:46:54.814010 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 13 22:46:54.828064 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 13 22:46:54.835149 kernel: usbcore: registered new interface driver usbhid Jan 13 22:46:54.835207 kernel: usbhid: USB HID core driver Jan 13 22:46:54.842898 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 Jan 13 22:46:54.842965 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0 Jan 13 22:46:55.581747 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 13 22:46:55.583341 disk-uuid[564]: The operation has completed successfully. Jan 13 22:46:55.641484 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 13 22:46:55.641698 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 13 22:46:55.667517 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 13 22:46:55.672673 sh[584]: Success Jan 13 22:46:55.691751 kernel: device-mapper: verity: sha256 using implementation "sha256-avx" Jan 13 22:46:55.765686 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 13 22:46:55.769211 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 13 22:46:55.771177 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Jan 13 22:46:55.801225 kernel: BTRFS info (device dm-0): first mount of filesystem 5e7921ba-229a-48a0-bc77-9b30aaa34aeb Jan 13 22:46:55.801315 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 13 22:46:55.803312 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 13 22:46:55.805497 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 13 22:46:55.807178 kernel: BTRFS info (device dm-0): using free space tree Jan 13 22:46:55.818731 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 13 22:46:55.820334 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 13 22:46:55.826256 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 13 22:46:55.830744 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 13 22:46:55.847056 kernel: BTRFS info (device vda6): first mount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e Jan 13 22:46:55.847114 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 13 22:46:55.847147 kernel: BTRFS info (device vda6): using free space tree Jan 13 22:46:55.853073 kernel: BTRFS info (device vda6): auto enabling async discard Jan 13 22:46:55.867793 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 13 22:46:55.869822 kernel: BTRFS info (device vda6): last unmount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e Jan 13 22:46:55.876969 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 13 22:46:55.886459 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 13 22:46:56.000271 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 13 22:46:56.020110 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Jan 13 22:46:56.031918 ignition[676]: Ignition 2.20.0 Jan 13 22:46:56.033000 ignition[676]: Stage: fetch-offline Jan 13 22:46:56.033117 ignition[676]: no configs at "/usr/lib/ignition/base.d" Jan 13 22:46:56.033138 ignition[676]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 13 22:46:56.036396 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 13 22:46:56.033309 ignition[676]: parsed url from cmdline: "" Jan 13 22:46:56.033316 ignition[676]: no config URL provided Jan 13 22:46:56.033326 ignition[676]: reading system config file "/usr/lib/ignition/user.ign" Jan 13 22:46:56.033342 ignition[676]: no config at "/usr/lib/ignition/user.ign" Jan 13 22:46:56.033352 ignition[676]: failed to fetch config: resource requires networking Jan 13 22:46:56.033661 ignition[676]: Ignition finished successfully Jan 13 22:46:56.054355 systemd-networkd[770]: lo: Link UP Jan 13 22:46:56.054374 systemd-networkd[770]: lo: Gained carrier Jan 13 22:46:56.056939 systemd-networkd[770]: Enumeration completed Jan 13 22:46:56.057607 systemd-networkd[770]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 22:46:56.057613 systemd-networkd[770]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 13 22:46:56.057789 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 13 22:46:56.060267 systemd[1]: Reached target network.target - Network. Jan 13 22:46:56.060278 systemd-networkd[770]: eth0: Link UP Jan 13 22:46:56.060284 systemd-networkd[770]: eth0: Gained carrier Jan 13 22:46:56.060295 systemd-networkd[770]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 22:46:56.069237 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jan 13 22:46:56.084673 ignition[774]: Ignition 2.20.0
Jan 13 22:46:56.084696 ignition[774]: Stage: fetch
Jan 13 22:46:56.084947 ignition[774]: no configs at "/usr/lib/ignition/base.d"
Jan 13 22:46:56.084968 ignition[774]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 13 22:46:56.085121 ignition[774]: parsed url from cmdline: ""
Jan 13 22:46:56.085129 ignition[774]: no config URL provided
Jan 13 22:46:56.085138 ignition[774]: reading system config file "/usr/lib/ignition/user.ign"
Jan 13 22:46:56.085155 ignition[774]: no config at "/usr/lib/ignition/user.ign"
Jan 13 22:46:56.085301 ignition[774]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1
Jan 13 22:46:56.085529 ignition[774]: config drive ("/dev/disk/by-label/config-2") not found. Waiting...
Jan 13 22:46:56.085575 ignition[774]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting...
Jan 13 22:46:56.085616 ignition[774]: GET error: Get "http://169.254.169.254/openstack/latest/user_data": dial tcp 169.254.169.254:80: connect: network is unreachable
Jan 13 22:46:56.114174 systemd-networkd[770]: eth0: DHCPv4 address 10.244.10.2/30, gateway 10.244.10.1 acquired from 10.244.10.1
Jan 13 22:46:56.286368 ignition[774]: GET http://169.254.169.254/openstack/latest/user_data: attempt #2
Jan 13 22:46:56.315555 ignition[774]: GET result: OK
Jan 13 22:46:56.316942 ignition[774]: parsing config with SHA512: a49aa9be5501e9054207d7e7e4149bbaad22e3f3728f47c31e97f1bde3230a64d62b18d7c0072dfcb97f93b0777262494cbcd46b465cb53a710b011bd1134d13
Jan 13 22:46:56.323120 unknown[774]: fetched base config from "system"
Jan 13 22:46:56.323139 unknown[774]: fetched base config from "system"
Jan 13 22:46:56.323523 ignition[774]: fetch: fetch complete
Jan 13 22:46:56.323149 unknown[774]: fetched user config from "openstack"
Jan 13 22:46:56.323532 ignition[774]: fetch: fetch passed
Jan 13 22:46:56.325727 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 13 22:46:56.323612 ignition[774]: Ignition finished successfully
Jan 13 22:46:56.338271 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 13 22:46:56.364067 ignition[782]: Ignition 2.20.0
Jan 13 22:46:56.366553 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 13 22:46:56.364088 ignition[782]: Stage: kargs
Jan 13 22:46:56.364333 ignition[782]: no configs at "/usr/lib/ignition/base.d"
Jan 13 22:46:56.364354 ignition[782]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 13 22:46:56.365234 ignition[782]: kargs: kargs passed
Jan 13 22:46:56.365315 ignition[782]: Ignition finished successfully
Jan 13 22:46:56.374273 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 13 22:46:56.399546 ignition[788]: Ignition 2.20.0
Jan 13 22:46:56.399572 ignition[788]: Stage: disks
Jan 13 22:46:56.399811 ignition[788]: no configs at "/usr/lib/ignition/base.d"
Jan 13 22:46:56.399832 ignition[788]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 13 22:46:56.402054 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 13 22:46:56.400704 ignition[788]: disks: disks passed
Jan 13 22:46:56.403817 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 13 22:46:56.400785 ignition[788]: Ignition finished successfully
Jan 13 22:46:56.404883 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 13 22:46:56.406499 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 13 22:46:56.407807 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 13 22:46:56.409547 systemd[1]: Reached target basic.target - Basic System.
Jan 13 22:46:56.423322 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 13 22:46:56.443591 systemd-fsck[797]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Jan 13 22:46:56.448466 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 13 22:46:56.454181 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 13 22:46:56.575152 kernel: EXT4-fs (vda9): mounted filesystem 84bcd1b2-5573-4e91-8fd5-f97782397085 r/w with ordered data mode. Quota mode: none.
Jan 13 22:46:56.576587 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 13 22:46:56.578848 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 13 22:46:56.586180 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 13 22:46:56.590206 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 13 22:46:56.592974 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 13 22:46:56.596342 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent...
Jan 13 22:46:56.597180 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 13 22:46:56.597235 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 13 22:46:56.610075 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (805)
Jan 13 22:46:56.610165 kernel: BTRFS info (device vda6): first mount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e
Jan 13 22:46:56.615668 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 13 22:46:56.615745 kernel: BTRFS info (device vda6): using free space tree
Jan 13 22:46:56.618231 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 13 22:46:56.621248 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 13 22:46:56.624208 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 13 22:46:56.633838 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 13 22:46:56.718088 initrd-setup-root[834]: cut: /sysroot/etc/passwd: No such file or directory
Jan 13 22:46:56.728241 initrd-setup-root[841]: cut: /sysroot/etc/group: No such file or directory
Jan 13 22:46:56.734864 initrd-setup-root[848]: cut: /sysroot/etc/shadow: No such file or directory
Jan 13 22:46:56.743867 initrd-setup-root[855]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 13 22:46:56.854532 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 13 22:46:56.862179 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 13 22:46:56.866265 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 13 22:46:56.875206 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 13 22:46:56.877509 kernel: BTRFS info (device vda6): last unmount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e
Jan 13 22:46:56.906835 ignition[926]: INFO : Ignition 2.20.0
Jan 13 22:46:56.906835 ignition[926]: INFO : Stage: mount
Jan 13 22:46:56.909688 ignition[926]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 22:46:56.909688 ignition[926]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 13 22:46:56.911953 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 13 22:46:56.913878 ignition[926]: INFO : mount: mount passed
Jan 13 22:46:56.913878 ignition[926]: INFO : Ignition finished successfully
Jan 13 22:46:56.913438 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 13 22:46:57.852478 systemd-networkd[770]: eth0: Gained IPv6LL
Jan 13 22:46:59.219149 systemd-networkd[770]: eth0: Ignoring DHCPv6 address 2a02:1348:17d:280:24:19ff:fef4:a02/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:17d:280:24:19ff:fef4:a02/64 assigned by NDisc.
Jan 13 22:46:59.219167 systemd-networkd[770]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no.
Jan 13 22:47:03.787515 coreos-metadata[807]: Jan 13 22:47:03.787 WARN failed to locate config-drive, using the metadata service API instead
Jan 13 22:47:03.812989 coreos-metadata[807]: Jan 13 22:47:03.812 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Jan 13 22:47:03.827027 coreos-metadata[807]: Jan 13 22:47:03.826 INFO Fetch successful
Jan 13 22:47:03.828109 coreos-metadata[807]: Jan 13 22:47:03.828 INFO wrote hostname srv-g6y97.gb1.brightbox.com to /sysroot/etc/hostname
Jan 13 22:47:03.830665 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully.
Jan 13 22:47:03.830861 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent.
Jan 13 22:47:03.842250 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 13 22:47:03.856314 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 13 22:47:03.879121 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (942)
Jan 13 22:47:03.891163 kernel: BTRFS info (device vda6): first mount of filesystem 1066b41d-395d-4ccb-b5ae-be36ea0fc11e
Jan 13 22:47:03.891282 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 13 22:47:03.891358 kernel: BTRFS info (device vda6): using free space tree
Jan 13 22:47:03.898247 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 13 22:47:03.901123 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 13 22:47:03.931107 ignition[959]: INFO : Ignition 2.20.0
Jan 13 22:47:03.932481 ignition[959]: INFO : Stage: files
Jan 13 22:47:03.932481 ignition[959]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 22:47:03.932481 ignition[959]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 13 22:47:03.934934 ignition[959]: DEBUG : files: compiled without relabeling support, skipping
Jan 13 22:47:03.934934 ignition[959]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 13 22:47:03.934934 ignition[959]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 13 22:47:03.938592 ignition[959]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 13 22:47:03.939581 ignition[959]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 13 22:47:03.939581 ignition[959]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 13 22:47:03.939332 unknown[959]: wrote ssh authorized keys file for user: core
Jan 13 22:47:03.942613 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh"
Jan 13 22:47:03.942613 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh"
Jan 13 22:47:03.942613 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 13 22:47:03.942613 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 13 22:47:03.942613 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 13 22:47:03.942613 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 13 22:47:03.942613 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 13 22:47:03.942613 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
Jan 13 22:47:04.529641 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Jan 13 22:47:06.219730 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 13 22:47:06.222592 ignition[959]: INFO : files: createResultFile: createFiles: op(7): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 13 22:47:06.222592 ignition[959]: INFO : files: createResultFile: createFiles: op(7): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 13 22:47:06.222592 ignition[959]: INFO : files: files passed
Jan 13 22:47:06.222592 ignition[959]: INFO : Ignition finished successfully
Jan 13 22:47:06.222453 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 13 22:47:06.231278 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 13 22:47:06.238249 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 13 22:47:06.246162 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 13 22:47:06.246356 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 13 22:47:06.256065 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 22:47:06.257914 initrd-setup-root-after-ignition[988]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 22:47:06.259754 initrd-setup-root-after-ignition[992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 22:47:06.261171 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 13 22:47:06.262554 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 13 22:47:06.276391 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 13 22:47:06.308325 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 13 22:47:06.308527 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 13 22:47:06.310790 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 13 22:47:06.312117 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 13 22:47:06.313815 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 13 22:47:06.319241 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 13 22:47:06.338917 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 13 22:47:06.348428 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 13 22:47:06.361555 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 13 22:47:06.362639 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 22:47:06.364391 systemd[1]: Stopped target timers.target - Timer Units.
Jan 13 22:47:06.366012 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 13 22:47:06.366233 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 13 22:47:06.368075 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 13 22:47:06.369014 systemd[1]: Stopped target basic.target - Basic System.
Jan 13 22:47:06.370879 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 13 22:47:06.372497 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 13 22:47:06.373966 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 13 22:47:06.375564 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 13 22:47:06.377147 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 13 22:47:06.378838 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 13 22:47:06.380481 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 13 22:47:06.382164 systemd[1]: Stopped target swap.target - Swaps.
Jan 13 22:47:06.383595 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 13 22:47:06.383804 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 13 22:47:06.385660 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 13 22:47:06.386644 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 22:47:06.388128 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 13 22:47:06.388314 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 22:47:06.389820 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 13 22:47:06.390020 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 13 22:47:06.392116 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 13 22:47:06.392371 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 13 22:47:06.394067 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 13 22:47:06.394225 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 13 22:47:06.405947 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 13 22:47:06.409422 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 13 22:47:06.410186 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 13 22:47:06.410458 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 22:47:06.413716 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 13 22:47:06.413962 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 13 22:47:06.427325 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 13 22:47:06.427484 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 13 22:47:06.437057 ignition[1012]: INFO : Ignition 2.20.0
Jan 13 22:47:06.437057 ignition[1012]: INFO : Stage: umount
Jan 13 22:47:06.437057 ignition[1012]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 22:47:06.437057 ignition[1012]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 13 22:47:06.444221 ignition[1012]: INFO : umount: umount passed
Jan 13 22:47:06.444221 ignition[1012]: INFO : Ignition finished successfully
Jan 13 22:47:06.440477 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 13 22:47:06.440641 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 13 22:47:06.445872 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 13 22:47:06.448601 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 13 22:47:06.448708 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 13 22:47:06.452491 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 13 22:47:06.452576 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 13 22:47:06.453678 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 13 22:47:06.453746 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 13 22:47:06.454502 systemd[1]: Stopped target network.target - Network.
Jan 13 22:47:06.457273 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 13 22:47:06.457349 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 13 22:47:06.458456 systemd[1]: Stopped target paths.target - Path Units.
Jan 13 22:47:06.459091 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 13 22:47:06.460104 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 22:47:06.461728 systemd[1]: Stopped target slices.target - Slice Units.
Jan 13 22:47:06.463282 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 13 22:47:06.464914 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 13 22:47:06.464984 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 13 22:47:06.472479 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 13 22:47:06.472548 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 13 22:47:06.474019 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 13 22:47:06.474130 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 13 22:47:06.475769 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 13 22:47:06.475837 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 13 22:47:06.477559 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 13 22:47:06.480836 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 13 22:47:06.481291 systemd-networkd[770]: eth0: DHCPv6 lease lost
Jan 13 22:47:06.485223 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 13 22:47:06.485432 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 13 22:47:06.487373 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 13 22:47:06.487486 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 22:47:06.502288 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 13 22:47:06.503756 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 13 22:47:06.503859 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 13 22:47:06.504916 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 22:47:06.506953 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 13 22:47:06.507214 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 13 22:47:06.516503 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 13 22:47:06.517686 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 22:47:06.523509 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 13 22:47:06.523606 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 13 22:47:06.527456 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 13 22:47:06.527529 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 22:47:06.528366 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 13 22:47:06.528447 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 13 22:47:06.530293 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 13 22:47:06.530368 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 13 22:47:06.532256 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 13 22:47:06.532333 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 22:47:06.546352 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 13 22:47:06.547633 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 13 22:47:06.547729 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 13 22:47:06.548508 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 13 22:47:06.548578 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 13 22:47:06.549361 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 13 22:47:06.549431 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 22:47:06.551084 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jan 13 22:47:06.551157 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 13 22:47:06.554302 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 13 22:47:06.554381 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 22:47:06.557542 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 13 22:47:06.557628 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 22:47:06.559136 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 13 22:47:06.559207 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 22:47:06.562270 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 13 22:47:06.562431 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 13 22:47:06.564341 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 13 22:47:06.564483 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 13 22:47:06.565904 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 13 22:47:06.566084 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 13 22:47:06.570107 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 13 22:47:06.571235 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 13 22:47:06.571330 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 13 22:47:06.580266 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 13 22:47:06.592265 systemd[1]: Switching root.
Jan 13 22:47:06.631413 systemd-journald[202]: Journal stopped
Jan 13 22:47:08.078026 systemd-journald[202]: Received SIGTERM from PID 1 (systemd).
Jan 13 22:47:08.078194 kernel: SELinux: policy capability network_peer_controls=1
Jan 13 22:47:08.078251 kernel: SELinux: policy capability open_perms=1
Jan 13 22:47:08.078281 kernel: SELinux: policy capability extended_socket_class=1
Jan 13 22:47:08.078301 kernel: SELinux: policy capability always_check_network=0
Jan 13 22:47:08.078320 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 13 22:47:08.078338 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 13 22:47:08.078371 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 13 22:47:08.078391 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 13 22:47:08.078410 kernel: audit: type=1403 audit(1736808426.854:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 13 22:47:08.078438 systemd[1]: Successfully loaded SELinux policy in 50.290ms.
Jan 13 22:47:08.078467 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 21.900ms.
Jan 13 22:47:08.078491 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 13 22:47:08.078511 systemd[1]: Detected virtualization kvm.
Jan 13 22:47:08.078548 systemd[1]: Detected architecture x86-64.
Jan 13 22:47:08.078570 systemd[1]: Detected first boot.
Jan 13 22:47:08.078597 systemd[1]: Hostname set to .
Jan 13 22:47:08.078617 systemd[1]: Initializing machine ID from VM UUID.
Jan 13 22:47:08.078636 zram_generator::config[1056]: No configuration found.
Jan 13 22:47:08.078662 systemd[1]: Populated /etc with preset unit settings.
Jan 13 22:47:08.078694 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 13 22:47:08.078716 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 13 22:47:08.078736 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 13 22:47:08.078756 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 13 22:47:08.078776 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 13 22:47:08.078796 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 13 22:47:08.078815 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 13 22:47:08.078836 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 13 22:47:08.078856 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 13 22:47:08.078898 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 13 22:47:08.078921 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 13 22:47:08.078942 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 22:47:08.078961 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 22:47:08.078982 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 13 22:47:08.079002 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 13 22:47:08.079053 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 13 22:47:08.079076 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 13 22:47:08.079110 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 13 22:47:08.079133 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 22:47:08.079152 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 13 22:47:08.079173 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 13 22:47:08.079193 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 13 22:47:08.079212 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 13 22:47:08.079272 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 22:47:08.079296 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 13 22:47:08.079316 systemd[1]: Reached target slices.target - Slice Units.
Jan 13 22:47:08.079336 systemd[1]: Reached target swap.target - Swaps.
Jan 13 22:47:08.079355 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 13 22:47:08.079375 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 13 22:47:08.079395 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 22:47:08.079414 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 13 22:47:08.079434 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 22:47:08.079462 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 13 22:47:08.079496 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 13 22:47:08.079518 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 13 22:47:08.079550 systemd[1]: Mounting media.mount - External Media Directory...
Jan 13 22:47:08.079574 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 22:47:08.079606 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 13 22:47:08.079627 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 13 22:47:08.079647 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 13 22:47:08.079668 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 13 22:47:08.079701 systemd[1]: Reached target machines.target - Containers.
Jan 13 22:47:08.079724 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 13 22:47:08.079744 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 22:47:08.079804 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 13 22:47:08.079845 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 13 22:47:08.079882 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 13 22:47:08.079923 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 13 22:47:08.079945 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 13 22:47:08.079965 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 13 22:47:08.079984 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 13 22:47:08.080004 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 13 22:47:08.080025 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 13 22:47:08.080078 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 13 22:47:08.080101 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 13 22:47:08.080135 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 13 22:47:08.080163 kernel: loop: module loaded
Jan 13 22:47:08.080184 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 13 22:47:08.080204 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 13 22:47:08.080236 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 13 22:47:08.080258 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 13 22:47:08.080279 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 13 22:47:08.080299 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 13 22:47:08.080319 systemd[1]: Stopped verity-setup.service.
Jan 13 22:47:08.080353 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 22:47:08.080374 kernel: ACPI: bus type drm_connector registered
Jan 13 22:47:08.080394 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 13 22:47:08.080475 systemd-journald[1147]: Collecting audit messages is disabled.
Jan 13 22:47:08.080522 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 13 22:47:08.080546 systemd-journald[1147]: Journal started
Jan 13 22:47:08.080619 systemd-journald[1147]: Runtime Journal (/run/log/journal/04ae6d22867f4b65b1e09edd83e7ab02) is 4.7M, max 38.0M, 33.2M free.
Jan 13 22:47:07.668176 systemd[1]: Queued start job for default target multi-user.target.
Jan 13 22:47:07.693439 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jan 13 22:47:07.694157 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 13 22:47:08.085087 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 13 22:47:08.088731 systemd[1]: Mounted media.mount - External Media Directory.
Jan 13 22:47:08.091621 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 13 22:47:08.092728 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 13 22:47:08.095992 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 13 22:47:08.097110 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 22:47:08.098367 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 13 22:47:08.100342 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 13 22:47:08.101616 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 13 22:47:08.101816 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 13 22:47:08.103510 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 13 22:47:08.105391 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 13 22:47:08.106720 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 13 22:47:08.106920 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 13 22:47:08.108575 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 13 22:47:08.112096 kernel: fuse: init (API version 7.39)
Jan 13 22:47:08.109900 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 13 22:47:08.111151 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 13 22:47:08.113406 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 13 22:47:08.113639 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 13 22:47:08.114826 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 13 22:47:08.116062 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 13 22:47:08.117593 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 13 22:47:08.133515 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 13 22:47:08.144178 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 13 22:47:08.152106 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 13 22:47:08.153196 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 13 22:47:08.153260 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 13 22:47:08.164550 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 13 22:47:08.176196 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 13 22:47:08.185796 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 13 22:47:08.186756 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 22:47:08.194191 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 13 22:47:08.196702 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 13 22:47:08.197563 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 13 22:47:08.208271 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 13 22:47:08.209490 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 13 22:47:08.218291 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 13 22:47:08.225257 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 13 22:47:08.230028 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 13 22:47:08.234143 systemd-journald[1147]: Time spent on flushing to /var/log/journal/04ae6d22867f4b65b1e09edd83e7ab02 is 112.656ms for 1122 entries.
Jan 13 22:47:08.234143 systemd-journald[1147]: System Journal (/var/log/journal/04ae6d22867f4b65b1e09edd83e7ab02) is 8.0M, max 584.8M, 576.8M free.
Jan 13 22:47:08.386703 systemd-journald[1147]: Received client request to flush runtime journal.
Jan 13 22:47:08.387055 kernel: loop0: detected capacity change from 0 to 138184
Jan 13 22:47:08.387110 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 13 22:47:08.236194 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 13 22:47:08.239289 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 13 22:47:08.242136 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 13 22:47:08.286996 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 13 22:47:08.298838 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 13 22:47:08.302533 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 13 22:47:08.316300 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 13 22:47:08.327643 systemd-tmpfiles[1190]: ACLs are not supported, ignoring.
Jan 13 22:47:08.327664 systemd-tmpfiles[1190]: ACLs are not supported, ignoring.
Jan 13 22:47:08.355989 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 13 22:47:08.364292 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 13 22:47:08.394074 kernel: loop1: detected capacity change from 0 to 210664
Jan 13 22:47:08.405799 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 13 22:47:08.412357 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 13 22:47:08.413342 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 13 22:47:08.471071 kernel: loop2: detected capacity change from 0 to 140992
Jan 13 22:47:08.474359 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 22:47:08.484242 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 13 22:47:08.526937 udevadm[1210]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Jan 13 22:47:08.562069 kernel: loop3: detected capacity change from 0 to 8
Jan 13 22:47:08.566584 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 13 22:47:08.576382 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 13 22:47:08.601208 kernel: loop4: detected capacity change from 0 to 138184
Jan 13 22:47:08.638108 kernel: loop5: detected capacity change from 0 to 210664
Jan 13 22:47:08.637718 systemd-tmpfiles[1213]: ACLs are not supported, ignoring.
Jan 13 22:47:08.640089 systemd-tmpfiles[1213]: ACLs are not supported, ignoring.
Jan 13 22:47:08.658004 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 22:47:08.681146 kernel: loop6: detected capacity change from 0 to 140992
Jan 13 22:47:08.716060 kernel: loop7: detected capacity change from 0 to 8
Jan 13 22:47:08.722009 (sd-merge)[1215]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'.
Jan 13 22:47:08.725222 (sd-merge)[1215]: Merged extensions into '/usr'.
Jan 13 22:47:08.737550 systemd[1]: Reloading requested from client PID 1189 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 13 22:47:08.737575 systemd[1]: Reloading...
Jan 13 22:47:08.880138 zram_generator::config[1240]: No configuration found.
Jan 13 22:47:09.041166 ldconfig[1184]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 13 22:47:09.173424 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 13 22:47:09.242587 systemd[1]: Reloading finished in 499 ms.
Jan 13 22:47:09.272102 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 13 22:47:09.273433 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 13 22:47:09.274643 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 13 22:47:09.293440 systemd[1]: Starting ensure-sysext.service...
Jan 13 22:47:09.295932 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 13 22:47:09.300311 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 22:47:09.314402 systemd[1]: Reloading requested from client PID 1300 ('systemctl') (unit ensure-sysext.service)...
Jan 13 22:47:09.316085 systemd[1]: Reloading...
Jan 13 22:47:09.338961 systemd-tmpfiles[1301]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 13 22:47:09.340096 systemd-tmpfiles[1301]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 13 22:47:09.342209 systemd-tmpfiles[1301]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 13 22:47:09.342830 systemd-tmpfiles[1301]: ACLs are not supported, ignoring.
Jan 13 22:47:09.342951 systemd-tmpfiles[1301]: ACLs are not supported, ignoring.
Jan 13 22:47:09.348610 systemd-tmpfiles[1301]: Detected autofs mount point /boot during canonicalization of boot.
Jan 13 22:47:09.348628 systemd-tmpfiles[1301]: Skipping /boot
Jan 13 22:47:09.372544 systemd-tmpfiles[1301]: Detected autofs mount point /boot during canonicalization of boot.
Jan 13 22:47:09.372566 systemd-tmpfiles[1301]: Skipping /boot
Jan 13 22:47:09.389335 systemd-udevd[1302]: Using default interface naming scheme 'v255'.
Jan 13 22:47:09.439602 zram_generator::config[1327]: No configuration found.
Jan 13 22:47:09.599327 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1341)
Jan 13 22:47:09.716065 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Jan 13 22:47:09.741065 kernel: ACPI: button: Power Button [PWRF]
Jan 13 22:47:09.756151 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 13 22:47:09.761073 kernel: mousedev: PS/2 mouse device common for all mice
Jan 13 22:47:09.812065 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Jan 13 22:47:09.822062 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Jan 13 22:47:09.830327 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Jan 13 22:47:09.830664 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jan 13 22:47:09.883794 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jan 13 22:47:09.884448 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 13 22:47:09.886816 systemd[1]: Reloading finished in 570 ms.
Jan 13 22:47:09.909679 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 22:47:09.919669 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 22:47:09.965921 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 22:47:10.025957 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 13 22:47:10.034195 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 13 22:47:10.035670 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 22:47:10.047680 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 13 22:47:10.052585 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 13 22:47:10.057377 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 13 22:47:10.058431 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 22:47:10.062619 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 13 22:47:10.072442 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 13 22:47:10.084489 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 13 22:47:10.090482 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 13 22:47:10.099507 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 13 22:47:10.104627 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 22:47:10.105652 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 22:47:10.109580 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 13 22:47:10.109905 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 13 22:47:10.116328 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 22:47:10.116618 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 22:47:10.122415 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 13 22:47:10.123392 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 22:47:10.123591 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 22:47:10.127624 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 22:47:10.127981 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 22:47:10.137481 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 13 22:47:10.138460 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 22:47:10.138641 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 22:47:10.142572 systemd[1]: Finished ensure-sysext.service.
Jan 13 22:47:10.157360 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 13 22:47:10.168283 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 13 22:47:10.202294 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 13 22:47:10.203610 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 13 22:47:10.203820 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 13 22:47:10.212280 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 13 22:47:10.233833 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 13 22:47:10.234458 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 13 22:47:10.238469 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 13 22:47:10.239175 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 13 22:47:10.249305 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 13 22:47:10.257417 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 13 22:47:10.259011 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 13 22:47:10.259340 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 13 22:47:10.265406 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 13 22:47:10.276593 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 13 22:47:10.278266 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 13 22:47:10.306116 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 13 22:47:10.308343 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 13 22:47:10.330345 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 13 22:47:10.359633 augenrules[1464]: No rules
Jan 13 22:47:10.362598 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 13 22:47:10.363238 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 13 22:47:10.367228 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 13 22:47:10.380362 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 13 22:47:10.413648 lvm[1471]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 13 22:47:10.466127 systemd-networkd[1419]: lo: Link UP
Jan 13 22:47:10.466142 systemd-networkd[1419]: lo: Gained carrier
Jan 13 22:47:10.474194 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 13 22:47:10.477608 systemd-networkd[1419]: Enumeration completed
Jan 13 22:47:10.479456 systemd-networkd[1419]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 22:47:10.479468 systemd-networkd[1419]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 13 22:47:10.481591 systemd-networkd[1419]: eth0: Link UP
Jan 13 22:47:10.482153 systemd-networkd[1419]: eth0: Gained carrier
Jan 13 22:47:10.482318 systemd-networkd[1419]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 22:47:10.510911 systemd-resolved[1422]: Positive Trust Anchors:
Jan 13 22:47:10.510936 systemd-resolved[1422]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 13 22:47:10.510983 systemd-resolved[1422]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 13 22:47:10.517124 systemd-networkd[1419]: eth0: DHCPv4 address 10.244.10.2/30, gateway 10.244.10.1 acquired from 10.244.10.1
Jan 13 22:47:10.518421 systemd-resolved[1422]: Using system hostname 'srv-g6y97.gb1.brightbox.com'.
Jan 13 22:47:10.520172 systemd-timesyncd[1430]: Network configuration changed, trying to establish connection.
Jan 13 22:47:10.554210 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 13 22:47:10.555377 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jan 13 22:47:10.556428 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 13 22:47:10.557772 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 22:47:10.559806 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 13 22:47:10.560633 systemd[1]: Reached target network.target - Network.
Jan 13 22:47:10.561544 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 13 22:47:10.562391 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 13 22:47:10.563305 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 13 22:47:10.564149 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 13 22:47:10.565027 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 13 22:47:10.565863 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 13 22:47:10.565922 systemd[1]: Reached target paths.target - Path Units.
Jan 13 22:47:10.566610 systemd[1]: Reached target time-set.target - System Time Set.
Jan 13 22:47:10.567725 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 13 22:47:10.568678 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 13 22:47:10.569471 systemd[1]: Reached target timers.target - Timer Units.
Jan 13 22:47:10.571290 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 13 22:47:10.574619 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 13 22:47:10.581469 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 13 22:47:10.584390 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 13 22:47:10.592313 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 13 22:47:10.595629 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 13 22:47:10.596520 systemd[1]: Reached target sockets.target - Socket Units.
Jan 13 22:47:10.597471 systemd[1]: Reached target basic.target - Basic System.
Jan 13 22:47:10.598205 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 13 22:47:10.598254 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 13 22:47:10.599311 lvm[1480]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 13 22:47:10.607237 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 13 22:47:10.614260 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jan 13 22:47:10.625462 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 13 22:47:10.632183 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 13 22:47:10.639270 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 13 22:47:10.641344 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 13 22:47:10.649323 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 13 22:47:10.654465 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 13 22:47:10.659284 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 13 22:47:10.666368 jq[1487]: false
Jan 13 22:47:10.671341 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 13 22:47:10.675079 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 13 22:47:10.675933 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 13 22:47:10.685624 systemd[1]: Starting update-engine.service - Update Engine...
Jan 13 22:47:10.692221 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 13 22:47:10.694299 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 13 22:47:10.697808 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 13 22:47:10.699257 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 13 22:47:10.703245 dbus-daemon[1484]: [system] SELinux support is enabled
Jan 13 22:47:10.704661 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 13 22:47:10.711453 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 13 22:47:10.711504 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 13 22:47:10.713462 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 13 22:47:10.713501 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 13 22:47:10.717768 dbus-daemon[1484]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1419 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Jan 13 22:47:10.727332 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Jan 13 22:47:10.758747 update_engine[1493]: I20250113 22:47:10.758206 1493 main.cc:92] Flatcar Update Engine starting
Jan 13 22:47:10.760672 systemd[1]: Started update-engine.service - Update Engine.
Jan 13 22:47:10.760889 update_engine[1493]: I20250113 22:47:10.760834 1493 update_check_scheduler.cc:74] Next update check in 3m56s
Jan 13 22:47:10.770398 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 13 22:47:10.772914 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 13 22:47:10.775642 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 13 22:47:10.791840 systemd-logind[1492]: Watching system buttons on /dev/input/event2 (Power Button)
Jan 13 22:47:10.792804 systemd-logind[1492]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jan 13 22:47:10.793308 systemd-logind[1492]: New seat seat0.
Jan 13 22:47:10.794859 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 13 22:47:10.801379 jq[1495]: true
Jan 13 22:47:10.805369 extend-filesystems[1488]: Found loop4
Jan 13 22:47:10.805369 extend-filesystems[1488]: Found loop5
Jan 13 22:47:10.805369 extend-filesystems[1488]: Found loop6
Jan 13 22:47:10.805369 extend-filesystems[1488]: Found loop7
Jan 13 22:47:10.805369 extend-filesystems[1488]: Found vda
Jan 13 22:47:10.805369 extend-filesystems[1488]: Found vda1
Jan 13 22:47:10.805369 extend-filesystems[1488]: Found vda2
Jan 13 22:47:10.805369 extend-filesystems[1488]: Found vda3
Jan 13 22:47:10.805369 extend-filesystems[1488]: Found usr
Jan 13 22:47:10.805369 extend-filesystems[1488]: Found vda4
Jan 13 22:47:10.805369 extend-filesystems[1488]: Found vda6
Jan 13 22:47:10.805369 extend-filesystems[1488]: Found vda7
Jan 13 22:47:10.805369 extend-filesystems[1488]: Found vda9
Jan 13 22:47:10.805369 extend-filesystems[1488]: Checking size of /dev/vda9
Jan 13 22:47:10.811593 (ntainerd)[1501]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 13 22:47:10.813616 systemd-timesyncd[1430]: Contacted time server 91.135.12.168:123 (0.flatcar.pool.ntp.org).
Jan 13 22:47:10.813957 systemd-timesyncd[1430]: Initial clock synchronization to Mon 2025-01-13 22:47:11.027058 UTC.
Jan 13 22:47:10.825665 systemd[1]: motdgen.service: Deactivated successfully.
Jan 13 22:47:10.826200 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 13 22:47:10.872290 extend-filesystems[1488]: Resized partition /dev/vda9
Jan 13 22:47:10.878070 extend-filesystems[1521]: resize2fs 1.47.1 (20-May-2024)
Jan 13 22:47:10.886081 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks
Jan 13 22:47:10.892701 jq[1514]: true
Jan 13 22:47:10.893145 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1336)
Jan 13 22:47:11.135995 bash[1546]: Updated "/home/core/.ssh/authorized_keys"
Jan 13 22:47:11.142653 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 13 22:47:11.154857 systemd[1]: Starting sshkeys.service...
Jan 13 22:47:11.182372 dbus-daemon[1484]: [system] Successfully activated service 'org.freedesktop.hostname1'
Jan 13 22:47:11.183778 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Jan 13 22:47:11.186896 dbus-daemon[1484]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1500 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Jan 13 22:47:11.198538 systemd[1]: Starting polkit.service - Authorization Manager...
Jan 13 22:47:11.211493 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Jan 13 22:47:11.232646 kernel: EXT4-fs (vda9): resized filesystem to 15121403
Jan 13 22:47:11.223749 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Jan 13 22:47:11.235133 extend-filesystems[1521]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jan 13 22:47:11.235133 extend-filesystems[1521]: old_desc_blocks = 1, new_desc_blocks = 8
Jan 13 22:47:11.235133 extend-filesystems[1521]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long.
Jan 13 22:47:11.241557 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 13 22:47:11.244134 extend-filesystems[1488]: Resized filesystem in /dev/vda9
Jan 13 22:47:11.245238 locksmithd[1505]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 13 22:47:11.245344 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 13 22:47:11.275770 polkitd[1550]: Started polkitd version 121
Jan 13 22:47:11.298496 polkitd[1550]: Loading rules from directory /etc/polkit-1/rules.d
Jan 13 22:47:11.298613 polkitd[1550]: Loading rules from directory /usr/share/polkit-1/rules.d
Jan 13 22:47:11.307572 polkitd[1550]: Finished loading, compiling and executing 2 rules
Jan 13 22:47:11.308935 dbus-daemon[1484]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Jan 13 22:47:11.309801 systemd[1]: Started polkit.service - Authorization Manager.
Jan 13 22:47:11.311138 polkitd[1550]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Jan 13 22:47:11.331787 containerd[1501]: time="2025-01-13T22:47:11.331636510Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Jan 13 22:47:11.343549 systemd-hostnamed[1500]: Hostname set to (static)
Jan 13 22:47:11.369095 containerd[1501]: time="2025-01-13T22:47:11.368116854Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jan 13 22:47:11.370966 containerd[1501]: time="2025-01-13T22:47:11.370921851Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jan 13 22:47:11.371121 containerd[1501]: time="2025-01-13T22:47:11.370965114Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jan 13 22:47:11.371121 containerd[1501]: time="2025-01-13T22:47:11.370991797Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jan 13 22:47:11.371343 containerd[1501]: time="2025-01-13T22:47:11.371315239Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jan 13 22:47:11.371396 containerd[1501]: time="2025-01-13T22:47:11.371352253Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jan 13 22:47:11.371491 containerd[1501]: time="2025-01-13T22:47:11.371463939Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jan 13 22:47:11.371671 containerd[1501]: time="2025-01-13T22:47:11.371493951Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jan 13 22:47:11.373089 containerd[1501]: time="2025-01-13T22:47:11.371732840Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 13 22:47:11.373089 containerd[1501]: time="2025-01-13T22:47:11.371764463Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jan 13 22:47:11.373089 containerd[1501]: time="2025-01-13T22:47:11.371788427Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jan 13 22:47:11.373089 containerd[1501]: time="2025-01-13T22:47:11.371804952Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jan 13 22:47:11.373089 containerd[1501]: time="2025-01-13T22:47:11.371944578Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jan 13 22:47:11.373089 containerd[1501]: time="2025-01-13T22:47:11.372368638Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jan 13 22:47:11.373089 containerd[1501]: time="2025-01-13T22:47:11.372504883Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 13 22:47:11.373089 containerd[1501]: time="2025-01-13T22:47:11.372528158Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jan 13 22:47:11.373089 containerd[1501]: time="2025-01-13T22:47:11.372677170Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jan 13 22:47:11.373089 containerd[1501]: time="2025-01-13T22:47:11.372766264Z" level=info msg="metadata content store policy set" policy=shared
Jan 13 22:47:11.378285 containerd[1501]: time="2025-01-13T22:47:11.378216747Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jan 13 22:47:11.378395 containerd[1501]: time="2025-01-13T22:47:11.378366577Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jan 13 22:47:11.378459 containerd[1501]: time="2025-01-13T22:47:11.378405454Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jan 13 22:47:11.378459 containerd[1501]: time="2025-01-13T22:47:11.378435339Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jan 13 22:47:11.378551 containerd[1501]: time="2025-01-13T22:47:11.378462749Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jan 13 22:47:11.378812 containerd[1501]: time="2025-01-13T22:47:11.378783593Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jan 13 22:47:11.379247 containerd[1501]: time="2025-01-13T22:47:11.379216082Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jan 13 22:47:11.379429 containerd[1501]: time="2025-01-13T22:47:11.379402972Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jan 13 22:47:11.379605 containerd[1501]: time="2025-01-13T22:47:11.379439271Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jan 13 22:47:11.379605 containerd[1501]: time="2025-01-13T22:47:11.379466197Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jan 13 22:47:11.379605 containerd[1501]: time="2025-01-13T22:47:11.379489239Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jan 13 22:47:11.379605 containerd[1501]: time="2025-01-13T22:47:11.379512218Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jan 13 22:47:11.379605 containerd[1501]: time="2025-01-13T22:47:11.379536677Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jan 13 22:47:11.379605 containerd[1501]: time="2025-01-13T22:47:11.379561192Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jan 13 22:47:11.379605 containerd[1501]: time="2025-01-13T22:47:11.379585952Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jan 13 22:47:11.380447 containerd[1501]: time="2025-01-13T22:47:11.379619565Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jan 13 22:47:11.380447 containerd[1501]: time="2025-01-13T22:47:11.379644201Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jan 13 22:47:11.380447 containerd[1501]: time="2025-01-13T22:47:11.379665683Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jan 13 22:47:11.380447 containerd[1501]: time="2025-01-13T22:47:11.379709058Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jan 13 22:47:11.380447 containerd[1501]: time="2025-01-13T22:47:11.379734662Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jan 13 22:47:11.380447 containerd[1501]: time="2025-01-13T22:47:11.379755719Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jan 13 22:47:11.380447 containerd[1501]: time="2025-01-13T22:47:11.379778210Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jan 13 22:47:11.380447 containerd[1501]: time="2025-01-13T22:47:11.379799928Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jan 13 22:47:11.380447 containerd[1501]: time="2025-01-13T22:47:11.379824069Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jan 13 22:47:11.380447 containerd[1501]: time="2025-01-13T22:47:11.379859968Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jan 13 22:47:11.380447 containerd[1501]: time="2025-01-13T22:47:11.379887115Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jan 13 22:47:11.380447 containerd[1501]: time="2025-01-13T22:47:11.379910269Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jan 13 22:47:11.380447 containerd[1501]: time="2025-01-13T22:47:11.379934825Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jan 13 22:47:11.380447 containerd[1501]: time="2025-01-13T22:47:11.379975629Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jan 13 22:47:11.381811 containerd[1501]: time="2025-01-13T22:47:11.379996198Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jan 13 22:47:11.381811 containerd[1501]: time="2025-01-13T22:47:11.380016568Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jan 13 22:47:11.381811 containerd[1501]: time="2025-01-13T22:47:11.380218036Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jan 13 22:47:11.381811 containerd[1501]: time="2025-01-13T22:47:11.380273726Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jan 13 22:47:11.381811 containerd[1501]: time="2025-01-13T22:47:11.380301575Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jan 13 22:47:11.381811 containerd[1501]: time="2025-01-13T22:47:11.380322522Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jan 13 22:47:11.381811 containerd[1501]: time="2025-01-13T22:47:11.380433552Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jan 13 22:47:11.381811 containerd[1501]: time="2025-01-13T22:47:11.380469489Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jan 13 22:47:11.381811 containerd[1501]: time="2025-01-13T22:47:11.380639717Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jan 13 22:47:11.381811 containerd[1501]: time="2025-01-13T22:47:11.380665302Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jan 13 22:47:11.381811 containerd[1501]: time="2025-01-13T22:47:11.380685133Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jan 13 22:47:11.381811 containerd[1501]: time="2025-01-13T22:47:11.380706870Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jan 13 22:47:11.381811 containerd[1501]: time="2025-01-13T22:47:11.380734647Z" level=info msg="NRI interface is disabled by configuration."
Jan 13 22:47:11.381811 containerd[1501]: time="2025-01-13T22:47:11.380754966Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jan 13 22:47:11.383527 containerd[1501]: time="2025-01-13T22:47:11.381279525Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jan 13 22:47:11.383527 containerd[1501]: time="2025-01-13T22:47:11.381357978Z" level=info msg="Connect containerd service"
Jan 13 22:47:11.383527 containerd[1501]: time="2025-01-13T22:47:11.381422099Z" level=info msg="using legacy CRI server"
Jan 13 22:47:11.383527 containerd[1501]: time="2025-01-13T22:47:11.381440620Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jan 13 22:47:11.383527 containerd[1501]: time="2025-01-13T22:47:11.381635966Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jan 13 22:47:11.383527 containerd[1501]: time="2025-01-13T22:47:11.382898830Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 13 22:47:11.383527 containerd[1501]: time="2025-01-13T22:47:11.383089642Z" level=info msg="Start subscribing containerd event"
Jan 13 22:47:11.383527 containerd[1501]: time="2025-01-13T22:47:11.383182605Z" level=info msg="Start recovering state"
Jan 13 22:47:11.383527 containerd[1501]: time="2025-01-13T22:47:11.383326520Z" level=info msg="Start event monitor"
Jan 13 22:47:11.383527 containerd[1501]: time="2025-01-13T22:47:11.383353584Z" level=info msg="Start snapshots syncer"
Jan 13 22:47:11.383527 containerd[1501]: time="2025-01-13T22:47:11.383371084Z" level=info msg="Start cni network conf syncer for default"
Jan 13 22:47:11.383527 containerd[1501]: time="2025-01-13T22:47:11.383384837Z" level=info msg="Start streaming server"
Jan 13 22:47:11.386213 containerd[1501]: time="2025-01-13T22:47:11.384523178Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jan 13 22:47:11.386213 containerd[1501]: time="2025-01-13T22:47:11.384720993Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jan 13 22:47:11.386993 systemd[1]: Started containerd.service - containerd container runtime.
Jan 13 22:47:11.388044 containerd[1501]: time="2025-01-13T22:47:11.388006315Z" level=info msg="containerd successfully booted in 0.059136s"
Jan 13 22:47:11.514323 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jan 13 22:47:11.568049 sshd_keygen[1517]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 13 22:47:11.595864 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jan 13 22:47:11.604543 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jan 13 22:47:11.607526 systemd[1]: Started sshd@0-10.244.10.2:22-147.75.109.163:35626.service - OpenSSH per-connection server daemon (147.75.109.163:35626).
Jan 13 22:47:11.619766 systemd[1]: issuegen.service: Deactivated successfully.
Jan 13 22:47:11.620249 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jan 13 22:47:11.631302 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jan 13 22:47:11.649632 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jan 13 22:47:11.666710 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jan 13 22:47:11.671567 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jan 13 22:47:11.672912 systemd[1]: Reached target getty.target - Login Prompts.
Jan 13 22:47:11.996966 systemd-networkd[1419]: eth0: Gained IPv6LL
Jan 13 22:47:12.000539 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 13 22:47:12.003029 systemd[1]: Reached target network-online.target - Network is Online.
Jan 13 22:47:12.010503 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 22:47:12.027778 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jan 13 22:47:12.058996 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jan 13 22:47:12.548942 sshd[1577]: Accepted publickey for core from 147.75.109.163 port 35626 ssh2: RSA SHA256:SMt0x8mMAMcddjJjDBKkiVSH45e5wlQsWHaQ+hrymws
Jan 13 22:47:12.554555 sshd-session[1577]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 22:47:12.580922 systemd-logind[1492]: New session 1 of user core.
Jan 13 22:47:12.582832 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jan 13 22:47:12.591555 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jan 13 22:47:12.623260 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jan 13 22:47:12.637902 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jan 13 22:47:12.647763 (systemd)[1601]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jan 13 22:47:12.810796 systemd[1601]: Queued start job for default target default.target.
Jan 13 22:47:12.820106 systemd[1601]: Created slice app.slice - User Application Slice.
Jan 13 22:47:12.821025 systemd[1601]: Reached target paths.target - Paths.
Jan 13 22:47:12.821075 systemd[1601]: Reached target timers.target - Timers.
Jan 13 22:47:12.826310 systemd[1601]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jan 13 22:47:12.852758 systemd[1601]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jan 13 22:47:12.853701 systemd[1601]: Reached target sockets.target - Sockets.
Jan 13 22:47:12.853739 systemd[1601]: Reached target basic.target - Basic System.
Jan 13 22:47:12.853818 systemd[1601]: Reached target default.target - Main User Target.
Jan 13 22:47:12.853885 systemd[1601]: Startup finished in 194ms.
Jan 13 22:47:12.854155 systemd[1]: Started user@500.service - User Manager for UID 500.
Jan 13 22:47:12.862794 systemd[1]: Started session-1.scope - Session 1 of User core.
Jan 13 22:47:12.865588 systemd-networkd[1419]: eth0: Ignoring DHCPv6 address 2a02:1348:17d:280:24:19ff:fef4:a02/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:17d:280:24:19ff:fef4:a02/64 assigned by NDisc.
Jan 13 22:47:12.865602 systemd-networkd[1419]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no.
Jan 13 22:47:12.985856 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 22:47:12.999992 (kubelet)[1616]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 13 22:47:13.515879 systemd[1]: Started sshd@1-10.244.10.2:22-147.75.109.163:35640.service - OpenSSH per-connection server daemon (147.75.109.163:35640).
Jan 13 22:47:13.722023 kubelet[1616]: E0113 22:47:13.721786    1616 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 13 22:47:13.724346 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 13 22:47:13.724675 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 13 22:47:13.725196 systemd[1]: kubelet.service: Consumed 1.088s CPU time.
Jan 13 22:47:14.430096 sshd[1624]: Accepted publickey for core from 147.75.109.163 port 35640 ssh2: RSA SHA256:SMt0x8mMAMcddjJjDBKkiVSH45e5wlQsWHaQ+hrymws
Jan 13 22:47:14.432229 sshd-session[1624]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 22:47:14.439739 systemd-logind[1492]: New session 2 of user core.
Jan 13 22:47:14.451371 systemd[1]: Started session-2.scope - Session 2 of User core.
Jan 13 22:47:15.063017 sshd[1630]: Connection closed by 147.75.109.163 port 35640
Jan 13 22:47:15.064226 sshd-session[1624]: pam_unix(sshd:session): session closed for user core
Jan 13 22:47:15.069470 systemd-logind[1492]: Session 2 logged out. Waiting for processes to exit.
Jan 13 22:47:15.070707 systemd[1]: sshd@1-10.244.10.2:22-147.75.109.163:35640.service: Deactivated successfully.
Jan 13 22:47:15.073914 systemd[1]: session-2.scope: Deactivated successfully.
Jan 13 22:47:15.075595 systemd-logind[1492]: Removed session 2.
Jan 13 22:47:15.223572 systemd[1]: Started sshd@2-10.244.10.2:22-147.75.109.163:35642.service - OpenSSH per-connection server daemon (147.75.109.163:35642).
Jan 13 22:47:16.138346 sshd[1635]: Accepted publickey for core from 147.75.109.163 port 35642 ssh2: RSA SHA256:SMt0x8mMAMcddjJjDBKkiVSH45e5wlQsWHaQ+hrymws
Jan 13 22:47:16.140614 sshd-session[1635]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 22:47:16.147242 systemd-logind[1492]: New session 3 of user core.
Jan 13 22:47:16.158732 systemd[1]: Started session-3.scope - Session 3 of User core.
Jan 13 22:47:16.739722 login[1585]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Jan 13 22:47:16.742222 login[1586]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Jan 13 22:47:16.748508 systemd-logind[1492]: New session 4 of user core.
Jan 13 22:47:16.763531 systemd[1]: Started session-4.scope - Session 4 of User core.
Jan 13 22:47:16.766242 sshd[1637]: Connection closed by 147.75.109.163 port 35642
Jan 13 22:47:16.765368 sshd-session[1635]: pam_unix(sshd:session): session closed for user core
Jan 13 22:47:16.770559 systemd-logind[1492]: New session 5 of user core.
Jan 13 22:47:16.778388 systemd[1]: Started session-5.scope - Session 5 of User core.
Jan 13 22:47:16.779231 systemd[1]: sshd@2-10.244.10.2:22-147.75.109.163:35642.service: Deactivated successfully.
Jan 13 22:47:16.782391 systemd[1]: session-3.scope: Deactivated successfully.
Jan 13 22:47:16.783410 systemd-logind[1492]: Session 3 logged out. Waiting for processes to exit.
Jan 13 22:47:16.790198 systemd-logind[1492]: Removed session 3.
Jan 13 22:47:17.710926 coreos-metadata[1483]: Jan 13 22:47:17.710 WARN failed to locate config-drive, using the metadata service API instead
Jan 13 22:47:17.738196 coreos-metadata[1483]: Jan 13 22:47:17.738 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1
Jan 13 22:47:17.747336 coreos-metadata[1483]: Jan 13 22:47:17.747 INFO Fetch failed with 404: resource not found
Jan 13 22:47:17.747336 coreos-metadata[1483]: Jan 13 22:47:17.747 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Jan 13 22:47:17.748089 coreos-metadata[1483]: Jan 13 22:47:17.748 INFO Fetch successful
Jan 13 22:47:17.748317 coreos-metadata[1483]: Jan 13 22:47:17.748 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1
Jan 13 22:47:17.766520 coreos-metadata[1483]: Jan 13 22:47:17.766 INFO Fetch successful
Jan 13 22:47:17.766754 coreos-metadata[1483]: Jan 13 22:47:17.766 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1
Jan 13 22:47:17.781277 coreos-metadata[1483]: Jan 13 22:47:17.781 INFO Fetch successful
Jan 13 22:47:17.781629 coreos-metadata[1483]: Jan 13 22:47:17.781 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1
Jan 13 22:47:17.798727 coreos-metadata[1483]: Jan 13 22:47:17.798 INFO Fetch successful
Jan 13 22:47:17.798979 coreos-metadata[1483]: Jan 13 22:47:17.798 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1
Jan 13 22:47:17.821603 coreos-metadata[1483]: Jan 13 22:47:17.821 INFO Fetch successful
Jan 13 22:47:17.855851 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jan 13 22:47:17.857398 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jan 13 22:47:18.347384 coreos-metadata[1551]: Jan 13 22:47:18.347 WARN failed to locate config-drive, using the metadata service API instead
Jan 13 22:47:18.370911 coreos-metadata[1551]: Jan 13 22:47:18.370 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1
Jan 13 22:47:18.400807 coreos-metadata[1551]: Jan 13 22:47:18.400 INFO Fetch successful
Jan 13 22:47:18.401035 coreos-metadata[1551]: Jan 13 22:47:18.400 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1
Jan 13 22:47:18.447348 coreos-metadata[1551]: Jan 13 22:47:18.447 INFO Fetch successful
Jan 13 22:47:18.449626 unknown[1551]: wrote ssh authorized keys file for user: core
Jan 13 22:47:18.468073 update-ssh-keys[1677]: Updated "/home/core/.ssh/authorized_keys"
Jan 13 22:47:18.469800 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Jan 13 22:47:18.473179 systemd[1]: Finished sshkeys.service.
Jan 13 22:47:18.475360 systemd[1]: Reached target multi-user.target - Multi-User System.
Jan 13 22:47:18.475803 systemd[1]: Startup finished in 1.469s (kernel) + 14.098s (initrd) + 11.671s (userspace) = 27.239s.
Jan 13 22:47:23.853814 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jan 13 22:47:23.866484 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 22:47:24.031783 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 22:47:24.041522 (kubelet)[1688]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 13 22:47:24.108750 kubelet[1688]: E0113 22:47:24.108538    1688 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 13 22:47:24.112876 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 13 22:47:24.113161 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 13 22:47:27.002385 systemd[1]: Started sshd@3-10.244.10.2:22-147.75.109.163:50388.service - OpenSSH per-connection server daemon (147.75.109.163:50388).
Jan 13 22:47:27.888331 sshd[1696]: Accepted publickey for core from 147.75.109.163 port 50388 ssh2: RSA SHA256:SMt0x8mMAMcddjJjDBKkiVSH45e5wlQsWHaQ+hrymws
Jan 13 22:47:27.890351 sshd-session[1696]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 22:47:27.898786 systemd-logind[1492]: New session 6 of user core.
Jan 13 22:47:27.902255 systemd[1]: Started session-6.scope - Session 6 of User core.
Jan 13 22:47:28.505414 sshd[1698]: Connection closed by 147.75.109.163 port 50388
Jan 13 22:47:28.506437 sshd-session[1696]: pam_unix(sshd:session): session closed for user core
Jan 13 22:47:28.510437 systemd[1]: sshd@3-10.244.10.2:22-147.75.109.163:50388.service: Deactivated successfully.
Jan 13 22:47:28.513359 systemd[1]: session-6.scope: Deactivated successfully.
Jan 13 22:47:28.515268 systemd-logind[1492]: Session 6 logged out. Waiting for processes to exit.
Jan 13 22:47:28.516616 systemd-logind[1492]: Removed session 6.
Jan 13 22:47:28.664446 systemd[1]: Started sshd@4-10.244.10.2:22-147.75.109.163:49164.service - OpenSSH per-connection server daemon (147.75.109.163:49164).
Jan 13 22:47:29.553003 sshd[1703]: Accepted publickey for core from 147.75.109.163 port 49164 ssh2: RSA SHA256:SMt0x8mMAMcddjJjDBKkiVSH45e5wlQsWHaQ+hrymws
Jan 13 22:47:29.554910 sshd-session[1703]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 22:47:29.560858 systemd-logind[1492]: New session 7 of user core.
Jan 13 22:47:29.572409 systemd[1]: Started session-7.scope - Session 7 of User core.
Jan 13 22:47:30.165379 sshd[1705]: Connection closed by 147.75.109.163 port 49164
Jan 13 22:47:30.166356 sshd-session[1703]: pam_unix(sshd:session): session closed for user core
Jan 13 22:47:30.172504 systemd[1]: sshd@4-10.244.10.2:22-147.75.109.163:49164.service: Deactivated successfully.
Jan 13 22:47:30.174960 systemd[1]: session-7.scope: Deactivated successfully.
Jan 13 22:47:30.177338 systemd-logind[1492]: Session 7 logged out. Waiting for processes to exit.
Jan 13 22:47:30.180300 systemd-logind[1492]: Removed session 7.
Jan 13 22:47:30.333477 systemd[1]: Started sshd@5-10.244.10.2:22-147.75.109.163:49178.service - OpenSSH per-connection server daemon (147.75.109.163:49178).
Jan 13 22:47:31.247257 sshd[1710]: Accepted publickey for core from 147.75.109.163 port 49178 ssh2: RSA SHA256:SMt0x8mMAMcddjJjDBKkiVSH45e5wlQsWHaQ+hrymws
Jan 13 22:47:31.249734 sshd-session[1710]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 22:47:31.257796 systemd-logind[1492]: New session 8 of user core.
Jan 13 22:47:31.264396 systemd[1]: Started session-8.scope - Session 8 of User core.
Jan 13 22:47:31.874737 sshd[1712]: Connection closed by 147.75.109.163 port 49178
Jan 13 22:47:31.874989 sshd-session[1710]: pam_unix(sshd:session): session closed for user core
Jan 13 22:47:31.880548 systemd[1]: sshd@5-10.244.10.2:22-147.75.109.163:49178.service: Deactivated successfully.
Jan 13 22:47:31.882906 systemd[1]: session-8.scope: Deactivated successfully.
Jan 13 22:47:31.883818 systemd-logind[1492]: Session 8 logged out. Waiting for processes to exit.
Jan 13 22:47:31.885539 systemd-logind[1492]: Removed session 8.
Jan 13 22:47:32.037498 systemd[1]: Started sshd@6-10.244.10.2:22-147.75.109.163:49188.service - OpenSSH per-connection server daemon (147.75.109.163:49188).
Jan 13 22:47:32.946122 sshd[1717]: Accepted publickey for core from 147.75.109.163 port 49188 ssh2: RSA SHA256:SMt0x8mMAMcddjJjDBKkiVSH45e5wlQsWHaQ+hrymws
Jan 13 22:47:32.948296 sshd-session[1717]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 22:47:32.955363 systemd-logind[1492]: New session 9 of user core.
Jan 13 22:47:32.967440 systemd[1]: Started session-9.scope - Session 9 of User core.
Jan 13 22:47:33.451551 sudo[1720]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jan 13 22:47:33.452159 sudo[1720]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 13 22:47:33.474565 sudo[1720]: pam_unix(sudo:session): session closed for user root
Jan 13 22:47:33.618930 sshd[1719]: Connection closed by 147.75.109.163 port 49188
Jan 13 22:47:33.620488 sshd-session[1717]: pam_unix(sshd:session): session closed for user core
Jan 13 22:47:33.626704 systemd[1]: sshd@6-10.244.10.2:22-147.75.109.163:49188.service: Deactivated successfully.
Jan 13 22:47:33.629220 systemd[1]: session-9.scope: Deactivated successfully.
Jan 13 22:47:33.630464 systemd-logind[1492]: Session 9 logged out. Waiting for processes to exit.
Jan 13 22:47:33.632212 systemd-logind[1492]: Removed session 9.
Jan 13 22:47:33.777403 systemd[1]: Started sshd@7-10.244.10.2:22-147.75.109.163:49198.service - OpenSSH per-connection server daemon (147.75.109.163:49198).
Jan 13 22:47:34.353798 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jan 13 22:47:34.361685 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 22:47:34.510863 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 22:47:34.517148 (kubelet)[1735]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 13 22:47:34.591465 kubelet[1735]: E0113 22:47:34.591109 1735 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 13 22:47:34.594660 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 13 22:47:34.594950 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 13 22:47:34.681213 sshd[1725]: Accepted publickey for core from 147.75.109.163 port 49198 ssh2: RSA SHA256:SMt0x8mMAMcddjJjDBKkiVSH45e5wlQsWHaQ+hrymws
Jan 13 22:47:34.683267 sshd-session[1725]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 22:47:34.691338 systemd-logind[1492]: New session 10 of user core.
Jan 13 22:47:34.702288 systemd[1]: Started session-10.scope - Session 10 of User core.
Jan 13 22:47:35.158746 sudo[1746]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jan 13 22:47:35.159256 sudo[1746]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 13 22:47:35.164982 sudo[1746]: pam_unix(sudo:session): session closed for user root
Jan 13 22:47:35.173443 sudo[1745]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Jan 13 22:47:35.173900 sudo[1745]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 13 22:47:35.194528 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 13 22:47:35.238009 augenrules[1768]: No rules
Jan 13 22:47:35.239075 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 13 22:47:35.239450 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 13 22:47:35.241395 sudo[1745]: pam_unix(sudo:session): session closed for user root
Jan 13 22:47:35.385280 sshd[1744]: Connection closed by 147.75.109.163 port 49198
Jan 13 22:47:35.386379 sshd-session[1725]: pam_unix(sshd:session): session closed for user core
Jan 13 22:47:35.391715 systemd[1]: sshd@7-10.244.10.2:22-147.75.109.163:49198.service: Deactivated successfully.
Jan 13 22:47:35.394171 systemd[1]: session-10.scope: Deactivated successfully.
Jan 13 22:47:35.395105 systemd-logind[1492]: Session 10 logged out. Waiting for processes to exit.
Jan 13 22:47:35.396943 systemd-logind[1492]: Removed session 10.
Jan 13 22:47:35.571578 systemd[1]: Started sshd@8-10.244.10.2:22-147.75.109.163:49204.service - OpenSSH per-connection server daemon (147.75.109.163:49204).
Jan 13 22:47:36.469343 sshd[1776]: Accepted publickey for core from 147.75.109.163 port 49204 ssh2: RSA SHA256:SMt0x8mMAMcddjJjDBKkiVSH45e5wlQsWHaQ+hrymws
Jan 13 22:47:36.471850 sshd-session[1776]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 22:47:36.481342 systemd-logind[1492]: New session 11 of user core.
Jan 13 22:47:36.501494 systemd[1]: Started session-11.scope - Session 11 of User core.
Jan 13 22:47:36.949455 sudo[1779]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jan 13 22:47:36.949988 sudo[1779]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 13 22:47:37.818725 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 22:47:37.828317 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 22:47:37.850599 systemd[1]: Reloading requested from client PID 1816 ('systemctl') (unit session-11.scope)...
Jan 13 22:47:37.850844 systemd[1]: Reloading...
Jan 13 22:47:38.009077 zram_generator::config[1854]: No configuration found.
Jan 13 22:47:38.202988 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 13 22:47:38.310047 systemd[1]: Reloading finished in 458 ms.
Jan 13 22:47:38.386478 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jan 13 22:47:38.386609 systemd[1]: kubelet.service: Failed with result 'signal'.
Jan 13 22:47:38.387089 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 22:47:38.394559 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 22:47:38.576287 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 22:47:38.588902 (kubelet)[1923]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 13 22:47:38.646850 kubelet[1923]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 13 22:47:38.646850 kubelet[1923]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 13 22:47:38.646850 kubelet[1923]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 13 22:47:38.654555 kubelet[1923]: I0113 22:47:38.654451 1923 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 13 22:47:39.337921 kubelet[1923]: I0113 22:47:39.337838 1923 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Jan 13 22:47:39.337921 kubelet[1923]: I0113 22:47:39.337901 1923 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 13 22:47:39.338289 kubelet[1923]: I0113 22:47:39.338255 1923 server.go:927] "Client rotation is on, will bootstrap in background"
Jan 13 22:47:39.357758 kubelet[1923]: I0113 22:47:39.357308 1923 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 13 22:47:39.373278 kubelet[1923]: I0113 22:47:39.373238 1923 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 13 22:47:39.376660 kubelet[1923]: I0113 22:47:39.376221 1923 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 13 22:47:39.376660 kubelet[1923]: I0113 22:47:39.376285 1923 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.244.10.2","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jan 13 22:47:39.377968 kubelet[1923]: I0113 22:47:39.377941 1923 topology_manager.go:138] "Creating topology manager with none policy"
Jan 13 22:47:39.378104 kubelet[1923]: I0113 22:47:39.378086 1923 container_manager_linux.go:301] "Creating device plugin manager"
Jan 13 22:47:39.378705 kubelet[1923]: I0113 22:47:39.378496 1923 state_mem.go:36] "Initialized new in-memory state store"
Jan 13 22:47:39.379662 kubelet[1923]: I0113 22:47:39.379636 1923 kubelet.go:400] "Attempting to sync node with API server"
Jan 13 22:47:39.380419 kubelet[1923]: I0113 22:47:39.379882 1923 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 13 22:47:39.380419 kubelet[1923]: I0113 22:47:39.379985 1923 kubelet.go:312] "Adding apiserver pod source"
Jan 13 22:47:39.380419 kubelet[1923]: I0113 22:47:39.380048 1923 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 13 22:47:39.380419 kubelet[1923]: E0113 22:47:39.380359 1923 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 22:47:39.380419 kubelet[1923]: E0113 22:47:39.380416 1923 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 22:47:39.386429 kubelet[1923]: I0113 22:47:39.386357 1923 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Jan 13 22:47:39.388481 kubelet[1923]: I0113 22:47:39.388283 1923 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 13 22:47:39.388481 kubelet[1923]: W0113 22:47:39.388415 1923 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jan 13 22:47:39.391815 kubelet[1923]: I0113 22:47:39.391791 1923 server.go:1264] "Started kubelet"
Jan 13 22:47:39.393068 kubelet[1923]: W0113 22:47:39.392630 1923 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Jan 13 22:47:39.393068 kubelet[1923]: E0113 22:47:39.392753 1923 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Jan 13 22:47:39.393266 kubelet[1923]: I0113 22:47:39.393222 1923 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 13 22:47:39.393605 kubelet[1923]: I0113 22:47:39.393532 1923 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 13 22:47:39.394580 kubelet[1923]: I0113 22:47:39.394402 1923 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 13 22:47:39.395886 kubelet[1923]: I0113 22:47:39.395663 1923 server.go:455] "Adding debug handlers to kubelet server"
Jan 13 22:47:39.401589 kubelet[1923]: I0113 22:47:39.401023 1923 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 13 22:47:39.413419 kubelet[1923]: W0113 22:47:39.412190 1923 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "10.244.10.2" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Jan 13 22:47:39.413419 kubelet[1923]: E0113 22:47:39.412242 1923 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.244.10.2" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Jan 13 22:47:39.413419 kubelet[1923]: E0113 22:47:39.412288 1923 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.244.10.2.181a621780f01c78 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.244.10.2,UID:10.244.10.2,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:10.244.10.2,},FirstTimestamp:2025-01-13 22:47:39.391736952 +0000 UTC m=+0.797259045,LastTimestamp:2025-01-13 22:47:39.391736952 +0000 UTC m=+0.797259045,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.244.10.2,}"
Jan 13 22:47:39.414220 kubelet[1923]: I0113 22:47:39.414196 1923 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jan 13 22:47:39.414851 kubelet[1923]: I0113 22:47:39.414825 1923 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Jan 13 22:47:39.416798 kubelet[1923]: I0113 22:47:39.416729 1923 reconciler.go:26] "Reconciler: start to sync state"
Jan 13 22:47:39.418469 kubelet[1923]: E0113 22:47:39.418439 1923 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.244.10.2\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms"
Jan 13 22:47:39.422741 kubelet[1923]: I0113 22:47:39.422716 1923 factory.go:221] Registration of the systemd container factory successfully
Jan 13 22:47:39.423222 kubelet[1923]: I0113 22:47:39.423126 1923 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 13 22:47:39.428366 kubelet[1923]: E0113 22:47:39.428309 1923 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 13 22:47:39.432397 kubelet[1923]: I0113 22:47:39.432373 1923 factory.go:221] Registration of the containerd container factory successfully
Jan 13 22:47:39.465259 kubelet[1923]: I0113 22:47:39.465223 1923 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jan 13 22:47:39.465508 kubelet[1923]: I0113 22:47:39.465475 1923 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jan 13 22:47:39.465803 kubelet[1923]: I0113 22:47:39.465630 1923 state_mem.go:36] "Initialized new in-memory state store"
Jan 13 22:47:39.468589 kubelet[1923]: I0113 22:47:39.468396 1923 policy_none.go:49] "None policy: Start"
Jan 13 22:47:39.469847 kubelet[1923]: I0113 22:47:39.469396 1923 memory_manager.go:170] "Starting memorymanager" policy="None"
Jan 13 22:47:39.469847 kubelet[1923]: I0113 22:47:39.469433 1923 state_mem.go:35] "Initializing new in-memory state store"
Jan 13 22:47:39.480865 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jan 13 22:47:39.497858 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jan 13 22:47:39.506827 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jan 13 22:47:39.507726 kubelet[1923]: I0113 22:47:39.507187 1923 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 13 22:47:39.510748 kubelet[1923]: I0113 22:47:39.510255 1923 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 13 22:47:39.510748 kubelet[1923]: I0113 22:47:39.510304 1923 status_manager.go:217] "Starting to sync pod status with apiserver"
Jan 13 22:47:39.510748 kubelet[1923]: I0113 22:47:39.510336 1923 kubelet.go:2337] "Starting kubelet main sync loop"
Jan 13 22:47:39.510748 kubelet[1923]: E0113 22:47:39.510409 1923 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 13 22:47:39.515742 kubelet[1923]: I0113 22:47:39.515714 1923 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 13 22:47:39.516338 kubelet[1923]: I0113 22:47:39.516287 1923 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 13 22:47:39.516618 kubelet[1923]: I0113 22:47:39.516597 1923 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 13 22:47:39.519475 kubelet[1923]: I0113 22:47:39.518264 1923 kubelet_node_status.go:73] "Attempting to register node" node="10.244.10.2"
Jan 13 22:47:39.523448 kubelet[1923]: E0113 22:47:39.523104 1923 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.244.10.2\" not found"
Jan 13 22:47:39.526739 kubelet[1923]: I0113 22:47:39.526715 1923 kubelet_node_status.go:76] "Successfully registered node" node="10.244.10.2"
Jan 13 22:47:39.545646 kubelet[1923]: E0113 22:47:39.545589 1923 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.244.10.2\" not found"
Jan 13 22:47:39.605486 sudo[1779]: pam_unix(sudo:session): session closed for user root
Jan 13 22:47:39.646832 kubelet[1923]: E0113 22:47:39.646758 1923 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.244.10.2\" not found"
Jan 13 22:47:39.747599 kubelet[1923]: E0113 22:47:39.747488 1923 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.244.10.2\" not found"
Jan 13 22:47:39.748703 sshd[1778]: Connection closed by 147.75.109.163 port 49204
Jan 13 22:47:39.749581 sshd-session[1776]: pam_unix(sshd:session): session closed for user core
Jan 13 22:47:39.755016 systemd[1]: sshd@8-10.244.10.2:22-147.75.109.163:49204.service: Deactivated successfully.
Jan 13 22:47:39.755533 systemd-logind[1492]: Session 11 logged out. Waiting for processes to exit.
Jan 13 22:47:39.758968 systemd[1]: session-11.scope: Deactivated successfully.
Jan 13 22:47:39.761486 systemd-logind[1492]: Removed session 11.
Jan 13 22:47:39.848264 kubelet[1923]: E0113 22:47:39.848159 1923 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.244.10.2\" not found"
Jan 13 22:47:39.948632 kubelet[1923]: E0113 22:47:39.948375 1923 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.244.10.2\" not found"
Jan 13 22:47:40.049472 kubelet[1923]: E0113 22:47:40.049384 1923 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.244.10.2\" not found"
Jan 13 22:47:40.150523 kubelet[1923]: E0113 22:47:40.150427 1923 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.244.10.2\" not found"
Jan 13 22:47:40.251689 kubelet[1923]: E0113 22:47:40.251414 1923 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.244.10.2\" not found"
Jan 13 22:47:40.342010 kubelet[1923]: I0113 22:47:40.341536 1923 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Jan 13 22:47:40.342010 kubelet[1923]: W0113 22:47:40.341887 1923 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received
Jan 13 22:47:40.342010 kubelet[1923]: W0113 22:47:40.341964 1923 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received
Jan 13 22:47:40.351735 kubelet[1923]: E0113 22:47:40.351675 1923 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.244.10.2\" not found"
Jan 13 22:47:40.381438 kubelet[1923]: E0113 22:47:40.381317 1923 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 22:47:40.452681 kubelet[1923]: E0113 22:47:40.452606 1923 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.244.10.2\" not found"
Jan 13 22:47:40.553570 kubelet[1923]: E0113 22:47:40.553252 1923 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.244.10.2\" not found"
Jan 13 22:47:40.655401 kubelet[1923]: I0113 22:47:40.655288 1923 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24"
Jan 13 22:47:40.655932 containerd[1501]: time="2025-01-13T22:47:40.655837225Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jan 13 22:47:40.657229 kubelet[1923]: I0113 22:47:40.656835 1923 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24"
Jan 13 22:47:41.382164 kubelet[1923]: E0113 22:47:41.382064 1923 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 22:47:41.383496 kubelet[1923]: I0113 22:47:41.382198 1923 apiserver.go:52] "Watching apiserver"
Jan 13 22:47:41.390020 kubelet[1923]: I0113 22:47:41.389667 1923 topology_manager.go:215] "Topology Admit Handler" podUID="1be9a641-8a33-4b43-8027-7ef38f5c3858" podNamespace="kube-system" podName="cilium-l2vqs"
Jan 13 22:47:41.390020 kubelet[1923]: I0113 22:47:41.389960 1923 topology_manager.go:215] "Topology Admit Handler" podUID="2bfa4bea-fbe3-4f80-abd6-064d692ad309" podNamespace="kube-system" podName="kube-proxy-q5cq5"
Jan 13 22:47:41.400387 systemd[1]: Created slice kubepods-besteffort-pod2bfa4bea_fbe3_4f80_abd6_064d692ad309.slice - libcontainer container kubepods-besteffort-pod2bfa4bea_fbe3_4f80_abd6_064d692ad309.slice.
Jan 13 22:47:41.414974 systemd[1]: Created slice kubepods-burstable-pod1be9a641_8a33_4b43_8027_7ef38f5c3858.slice - libcontainer container kubepods-burstable-pod1be9a641_8a33_4b43_8027_7ef38f5c3858.slice.
Jan 13 22:47:41.417972 kubelet[1923]: I0113 22:47:41.417945 1923 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Jan 13 22:47:41.430093 kubelet[1923]: I0113 22:47:41.429836 1923 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1be9a641-8a33-4b43-8027-7ef38f5c3858-bpf-maps\") pod \"cilium-l2vqs\" (UID: \"1be9a641-8a33-4b43-8027-7ef38f5c3858\") " pod="kube-system/cilium-l2vqs"
Jan 13 22:47:41.430093 kubelet[1923]: I0113 22:47:41.429888 1923 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1be9a641-8a33-4b43-8027-7ef38f5c3858-lib-modules\") pod \"cilium-l2vqs\" (UID: \"1be9a641-8a33-4b43-8027-7ef38f5c3858\") " pod="kube-system/cilium-l2vqs"
Jan 13 22:47:41.430093 kubelet[1923]: I0113 22:47:41.429921 1923 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1be9a641-8a33-4b43-8027-7ef38f5c3858-xtables-lock\") pod \"cilium-l2vqs\" (UID: \"1be9a641-8a33-4b43-8027-7ef38f5c3858\") " pod="kube-system/cilium-l2vqs"
Jan 13 22:47:41.430093 kubelet[1923]: I0113 22:47:41.429946 1923 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1be9a641-8a33-4b43-8027-7ef38f5c3858-cilium-config-path\") pod \"cilium-l2vqs\" (UID: \"1be9a641-8a33-4b43-8027-7ef38f5c3858\") " pod="kube-system/cilium-l2vqs"
Jan 13 22:47:41.430093 kubelet[1923]: I0113 22:47:41.429976 1923 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1be9a641-8a33-4b43-8027-7ef38f5c3858-host-proc-sys-kernel\") pod \"cilium-l2vqs\" (UID: \"1be9a641-8a33-4b43-8027-7ef38f5c3858\") " pod="kube-system/cilium-l2vqs"
Jan 13 22:47:41.430093 kubelet[1923]: I0113 22:47:41.430001 1923 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2bfa4bea-fbe3-4f80-abd6-064d692ad309-kube-proxy\") pod \"kube-proxy-q5cq5\" (UID: \"2bfa4bea-fbe3-4f80-abd6-064d692ad309\") " pod="kube-system/kube-proxy-q5cq5"
Jan 13 22:47:41.430766 kubelet[1923]: I0113 22:47:41.430025 1923 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1be9a641-8a33-4b43-8027-7ef38f5c3858-clustermesh-secrets\") pod \"cilium-l2vqs\" (UID: \"1be9a641-8a33-4b43-8027-7ef38f5c3858\") " pod="kube-system/cilium-l2vqs"
Jan 13 22:47:41.430766 kubelet[1923]: I0113 22:47:41.430507 1923 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1be9a641-8a33-4b43-8027-7ef38f5c3858-host-proc-sys-net\") pod \"cilium-l2vqs\" (UID: \"1be9a641-8a33-4b43-8027-7ef38f5c3858\") " pod="kube-system/cilium-l2vqs"
Jan 13 22:47:41.430766 kubelet[1923]: I0113 22:47:41.430538 1923 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v65b5\" (UniqueName: \"kubernetes.io/projected/1be9a641-8a33-4b43-8027-7ef38f5c3858-kube-api-access-v65b5\") pod \"cilium-l2vqs\" (UID: \"1be9a641-8a33-4b43-8027-7ef38f5c3858\") " pod="kube-system/cilium-l2vqs"
Jan 13 22:47:41.430766 kubelet[1923]: I0113 22:47:41.430564 1923 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2bfa4bea-fbe3-4f80-abd6-064d692ad309-xtables-lock\") pod \"kube-proxy-q5cq5\" (UID: \"2bfa4bea-fbe3-4f80-abd6-064d692ad309\") " pod="kube-system/kube-proxy-q5cq5"
Jan 13 22:47:41.430766 kubelet[1923]: I0113 22:47:41.430600 1923 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1be9a641-8a33-4b43-8027-7ef38f5c3858-hostproc\") pod \"cilium-l2vqs\" (UID: \"1be9a641-8a33-4b43-8027-7ef38f5c3858\") " pod="kube-system/cilium-l2vqs"
Jan 13 22:47:41.430766 kubelet[1923]: I0113 22:47:41.430623 1923 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1be9a641-8a33-4b43-8027-7ef38f5c3858-hubble-tls\") pod \"cilium-l2vqs\" (UID: \"1be9a641-8a33-4b43-8027-7ef38f5c3858\") " pod="kube-system/cilium-l2vqs"
Jan 13 22:47:41.431111 kubelet[1923]: I0113 22:47:41.430646 1923 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2bfa4bea-fbe3-4f80-abd6-064d692ad309-lib-modules\") pod \"kube-proxy-q5cq5\" (UID: \"2bfa4bea-fbe3-4f80-abd6-064d692ad309\") " pod="kube-system/kube-proxy-q5cq5"
Jan 13 22:47:41.431111 kubelet[1923]: I0113 22:47:41.430670 1923 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1be9a641-8a33-4b43-8027-7ef38f5c3858-cilium-run\") pod \"cilium-l2vqs\" (UID: \"1be9a641-8a33-4b43-8027-7ef38f5c3858\") " pod="kube-system/cilium-l2vqs"
Jan 13 22:47:41.431111 kubelet[1923]: I0113 22:47:41.430694 1923 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1be9a641-8a33-4b43-8027-7ef38f5c3858-cilium-cgroup\") pod \"cilium-l2vqs\" (UID: \"1be9a641-8a33-4b43-8027-7ef38f5c3858\") " pod="kube-system/cilium-l2vqs"
Jan 13 22:47:41.431111 kubelet[1923]: I0113 22:47:41.430725 1923 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1be9a641-8a33-4b43-8027-7ef38f5c3858-cni-path\") pod \"cilium-l2vqs\" (UID: \"1be9a641-8a33-4b43-8027-7ef38f5c3858\") " pod="kube-system/cilium-l2vqs"
Jan 13 22:47:41.431111 kubelet[1923]: I0113 22:47:41.430764 1923 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1be9a641-8a33-4b43-8027-7ef38f5c3858-etc-cni-netd\") pod \"cilium-l2vqs\" (UID: \"1be9a641-8a33-4b43-8027-7ef38f5c3858\") " pod="kube-system/cilium-l2vqs"
Jan 13 22:47:41.431111 kubelet[1923]: I0113 22:47:41.430824 1923 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mx5rp\" (UniqueName: \"kubernetes.io/projected/2bfa4bea-fbe3-4f80-abd6-064d692ad309-kube-api-access-mx5rp\") pod \"kube-proxy-q5cq5\" (UID: \"2bfa4bea-fbe3-4f80-abd6-064d692ad309\") " pod="kube-system/kube-proxy-q5cq5"
Jan 13 22:47:41.710334 containerd[1501]: time="2025-01-13T22:47:41.710127748Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-q5cq5,Uid:2bfa4bea-fbe3-4f80-abd6-064d692ad309,Namespace:kube-system,Attempt:0,}"
Jan 13 22:47:41.724716 containerd[1501]: time="2025-01-13T22:47:41.724058134Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-l2vqs,Uid:1be9a641-8a33-4b43-8027-7ef38f5c3858,Namespace:kube-system,Attempt:0,}"
Jan 13 22:47:42.383997 kubelet[1923]: E0113 22:47:42.383893 1923 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 22:47:42.437572 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1626218632.mount: Deactivated successfully.
Jan 13 22:47:42.461151 containerd[1501]: time="2025-01-13T22:47:42.461084234Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 13 22:47:42.463081 containerd[1501]: time="2025-01-13T22:47:42.463002378Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 13 22:47:42.464380 containerd[1501]: time="2025-01-13T22:47:42.464284651Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064"
Jan 13 22:47:42.465139 containerd[1501]: time="2025-01-13T22:47:42.465090708Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 13 22:47:42.466307 containerd[1501]: time="2025-01-13T22:47:42.466186119Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 13 22:47:42.470446 containerd[1501]: time="2025-01-13T22:47:42.470375928Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 13 22:47:42.472076 containerd[1501]: time="2025-01-13T22:47:42.471730202Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 747.545164ms"
Jan 13 22:47:42.474111 containerd[1501]: time="2025-01-13T22:47:42.474077907Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 763.686827ms"
Jan 13 22:47:42.618071 containerd[1501]: time="2025-01-13T22:47:42.617729824Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 22:47:42.618071 containerd[1501]: time="2025-01-13T22:47:42.617828782Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 22:47:42.618071 containerd[1501]: time="2025-01-13T22:47:42.617853072Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 22:47:42.618071 containerd[1501]: time="2025-01-13T22:47:42.617093070Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 22:47:42.618071 containerd[1501]: time="2025-01-13T22:47:42.617241794Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 22:47:42.618071 containerd[1501]: time="2025-01-13T22:47:42.617281324Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 22:47:42.618071 containerd[1501]: time="2025-01-13T22:47:42.617472337Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 22:47:42.619986 containerd[1501]: time="2025-01-13T22:47:42.619864651Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 22:47:42.737382 systemd[1]: Started cri-containerd-772f0d4e0b0eca5e9cb5515136510bd8a93af374b796830d107f9dfad1cd2c4f.scope - libcontainer container 772f0d4e0b0eca5e9cb5515136510bd8a93af374b796830d107f9dfad1cd2c4f.
Jan 13 22:47:42.743156 systemd[1]: Started cri-containerd-a9f9b56f2501712eadc680a77edc114aa789816bfaf0d544002c8e5977f49690.scope - libcontainer container a9f9b56f2501712eadc680a77edc114aa789816bfaf0d544002c8e5977f49690.
Jan 13 22:47:42.790706 containerd[1501]: time="2025-01-13T22:47:42.790617031Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-l2vqs,Uid:1be9a641-8a33-4b43-8027-7ef38f5c3858,Namespace:kube-system,Attempt:0,} returns sandbox id \"772f0d4e0b0eca5e9cb5515136510bd8a93af374b796830d107f9dfad1cd2c4f\""
Jan 13 22:47:42.797703 containerd[1501]: time="2025-01-13T22:47:42.797375735Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Jan 13 22:47:42.800245 containerd[1501]: time="2025-01-13T22:47:42.800200490Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-q5cq5,Uid:2bfa4bea-fbe3-4f80-abd6-064d692ad309,Namespace:kube-system,Attempt:0,} returns sandbox id \"a9f9b56f2501712eadc680a77edc114aa789816bfaf0d544002c8e5977f49690\""
Jan 13 22:47:42.895875 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 13 22:47:43.385084 kubelet[1923]: E0113 22:47:43.384971 1923 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 22:47:44.385460 kubelet[1923]: E0113 22:47:44.385357 1923 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 22:47:45.385698 kubelet[1923]: E0113 22:47:45.385623 1923 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 22:47:46.386018 kubelet[1923]: E0113 22:47:46.385959 1923 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 22:47:47.392246 kubelet[1923]: E0113 22:47:47.392109 1923 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 22:47:48.392427 kubelet[1923]: E0113 22:47:48.392357 1923 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 22:47:49.394208 kubelet[1923]: E0113 22:47:49.393968 1923 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 22:47:50.395108 kubelet[1923]: E0113 22:47:50.394509 1923 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 22:47:51.148279 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2326683442.mount: Deactivated successfully.
Jan 13 22:47:51.395469 kubelet[1923]: E0113 22:47:51.394859 1923 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 22:47:52.396055 kubelet[1923]: E0113 22:47:52.395961 1923 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 22:47:53.396492 kubelet[1923]: E0113 22:47:53.396423 1923 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 22:47:54.012373 containerd[1501]: time="2025-01-13T22:47:54.012259877Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 22:47:54.014077 containerd[1501]: time="2025-01-13T22:47:54.014022540Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166735327"
Jan 13 22:47:54.015184 containerd[1501]: time="2025-01-13T22:47:54.015114069Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 22:47:54.018061 containerd[1501]: time="2025-01-13T22:47:54.017705921Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 11.220275364s"
Jan 13 22:47:54.018061 containerd[1501]: time="2025-01-13T22:47:54.017766365Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Jan 13 22:47:54.021504 containerd[1501]: time="2025-01-13T22:47:54.021470233Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\""
Jan 13 22:47:54.023705 containerd[1501]: time="2025-01-13T22:47:54.023670378Z" level=info msg="CreateContainer within sandbox \"772f0d4e0b0eca5e9cb5515136510bd8a93af374b796830d107f9dfad1cd2c4f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 13 22:47:54.054459 containerd[1501]: time="2025-01-13T22:47:54.054214726Z" level=info msg="CreateContainer within sandbox \"772f0d4e0b0eca5e9cb5515136510bd8a93af374b796830d107f9dfad1cd2c4f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"993feed854c3907da36cb2b5321c1f03fab9210eaf034db1393cbbd3d0f168e7\""
Jan 13 22:47:54.055452 containerd[1501]: time="2025-01-13T22:47:54.055403333Z" level=info msg="StartContainer for \"993feed854c3907da36cb2b5321c1f03fab9210eaf034db1393cbbd3d0f168e7\""
Jan 13 22:47:54.108995 systemd[1]: run-containerd-runc-k8s.io-993feed854c3907da36cb2b5321c1f03fab9210eaf034db1393cbbd3d0f168e7-runc.rrReLS.mount: Deactivated successfully.
Jan 13 22:47:54.129418 systemd[1]: Started cri-containerd-993feed854c3907da36cb2b5321c1f03fab9210eaf034db1393cbbd3d0f168e7.scope - libcontainer container 993feed854c3907da36cb2b5321c1f03fab9210eaf034db1393cbbd3d0f168e7.
Jan 13 22:47:54.172247 containerd[1501]: time="2025-01-13T22:47:54.172163128Z" level=info msg="StartContainer for \"993feed854c3907da36cb2b5321c1f03fab9210eaf034db1393cbbd3d0f168e7\" returns successfully"
Jan 13 22:47:54.190892 systemd[1]: cri-containerd-993feed854c3907da36cb2b5321c1f03fab9210eaf034db1393cbbd3d0f168e7.scope: Deactivated successfully.
Jan 13 22:47:54.397655 kubelet[1923]: E0113 22:47:54.397539 1923 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 22:47:54.449845 containerd[1501]: time="2025-01-13T22:47:54.449723480Z" level=info msg="shim disconnected" id=993feed854c3907da36cb2b5321c1f03fab9210eaf034db1393cbbd3d0f168e7 namespace=k8s.io
Jan 13 22:47:54.450107 containerd[1501]: time="2025-01-13T22:47:54.449914926Z" level=warning msg="cleaning up after shim disconnected" id=993feed854c3907da36cb2b5321c1f03fab9210eaf034db1393cbbd3d0f168e7 namespace=k8s.io
Jan 13 22:47:54.450107 containerd[1501]: time="2025-01-13T22:47:54.449937563Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 22:47:54.468409 containerd[1501]: time="2025-01-13T22:47:54.468312650Z" level=warning msg="cleanup warnings time=\"2025-01-13T22:47:54Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jan 13 22:47:54.560538 containerd[1501]: time="2025-01-13T22:47:54.560151370Z" level=info msg="CreateContainer within sandbox \"772f0d4e0b0eca5e9cb5515136510bd8a93af374b796830d107f9dfad1cd2c4f\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 13 22:47:54.574975 containerd[1501]: time="2025-01-13T22:47:54.574868119Z" level=info msg="CreateContainer within sandbox \"772f0d4e0b0eca5e9cb5515136510bd8a93af374b796830d107f9dfad1cd2c4f\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"3cbf4016fe4ddd2d32ca1515631837ecea6fe4897dd47fc0ac8c573b7b9257b9\""
Jan 13 22:47:54.576194 containerd[1501]: time="2025-01-13T22:47:54.575783525Z" level=info msg="StartContainer for \"3cbf4016fe4ddd2d32ca1515631837ecea6fe4897dd47fc0ac8c573b7b9257b9\""
Jan 13 22:47:54.614373 systemd[1]: Started cri-containerd-3cbf4016fe4ddd2d32ca1515631837ecea6fe4897dd47fc0ac8c573b7b9257b9.scope - libcontainer container 3cbf4016fe4ddd2d32ca1515631837ecea6fe4897dd47fc0ac8c573b7b9257b9.
Jan 13 22:47:54.656395 containerd[1501]: time="2025-01-13T22:47:54.656198124Z" level=info msg="StartContainer for \"3cbf4016fe4ddd2d32ca1515631837ecea6fe4897dd47fc0ac8c573b7b9257b9\" returns successfully"
Jan 13 22:47:54.676570 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 13 22:47:54.676951 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 13 22:47:54.678320 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Jan 13 22:47:54.684600 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 13 22:47:54.685459 systemd[1]: cri-containerd-3cbf4016fe4ddd2d32ca1515631837ecea6fe4897dd47fc0ac8c573b7b9257b9.scope: Deactivated successfully.
Jan 13 22:47:54.730482 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 13 22:47:54.738074 containerd[1501]: time="2025-01-13T22:47:54.737796777Z" level=info msg="shim disconnected" id=3cbf4016fe4ddd2d32ca1515631837ecea6fe4897dd47fc0ac8c573b7b9257b9 namespace=k8s.io
Jan 13 22:47:54.738074 containerd[1501]: time="2025-01-13T22:47:54.737866051Z" level=warning msg="cleaning up after shim disconnected" id=3cbf4016fe4ddd2d32ca1515631837ecea6fe4897dd47fc0ac8c573b7b9257b9 namespace=k8s.io
Jan 13 22:47:54.738074 containerd[1501]: time="2025-01-13T22:47:54.737881684Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 22:47:55.050798 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-993feed854c3907da36cb2b5321c1f03fab9210eaf034db1393cbbd3d0f168e7-rootfs.mount: Deactivated successfully.
Jan 13 22:47:55.398666 kubelet[1923]: E0113 22:47:55.398471 1923 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 22:47:55.565075 containerd[1501]: time="2025-01-13T22:47:55.564452321Z" level=info msg="CreateContainer within sandbox \"772f0d4e0b0eca5e9cb5515136510bd8a93af374b796830d107f9dfad1cd2c4f\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 13 22:47:55.593808 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount466227898.mount: Deactivated successfully.
Jan 13 22:47:55.618000 containerd[1501]: time="2025-01-13T22:47:55.617767931Z" level=info msg="CreateContainer within sandbox \"772f0d4e0b0eca5e9cb5515136510bd8a93af374b796830d107f9dfad1cd2c4f\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"7e1db27987096b6ccd14dc5ae581c22125e077bcb01fcda0b543d020e17b145f\""
Jan 13 22:47:55.622553 containerd[1501]: time="2025-01-13T22:47:55.619246596Z" level=info msg="StartContainer for \"7e1db27987096b6ccd14dc5ae581c22125e077bcb01fcda0b543d020e17b145f\""
Jan 13 22:47:55.625128 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount962697374.mount: Deactivated successfully.
Jan 13 22:47:55.682345 systemd[1]: Started cri-containerd-7e1db27987096b6ccd14dc5ae581c22125e077bcb01fcda0b543d020e17b145f.scope - libcontainer container 7e1db27987096b6ccd14dc5ae581c22125e077bcb01fcda0b543d020e17b145f.
Jan 13 22:47:55.741469 containerd[1501]: time="2025-01-13T22:47:55.741107010Z" level=info msg="StartContainer for \"7e1db27987096b6ccd14dc5ae581c22125e077bcb01fcda0b543d020e17b145f\" returns successfully"
Jan 13 22:47:55.750525 systemd[1]: cri-containerd-7e1db27987096b6ccd14dc5ae581c22125e077bcb01fcda0b543d020e17b145f.scope: Deactivated successfully.
Jan 13 22:47:55.912104 containerd[1501]: time="2025-01-13T22:47:55.911879537Z" level=info msg="shim disconnected" id=7e1db27987096b6ccd14dc5ae581c22125e077bcb01fcda0b543d020e17b145f namespace=k8s.io
Jan 13 22:47:55.912104 containerd[1501]: time="2025-01-13T22:47:55.911944122Z" level=warning msg="cleaning up after shim disconnected" id=7e1db27987096b6ccd14dc5ae581c22125e077bcb01fcda0b543d020e17b145f namespace=k8s.io
Jan 13 22:47:55.912104 containerd[1501]: time="2025-01-13T22:47:55.911959924Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 22:47:56.046112 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7e1db27987096b6ccd14dc5ae581c22125e077bcb01fcda0b543d020e17b145f-rootfs.mount: Deactivated successfully.
Jan 13 22:47:56.237546 update_engine[1493]: I20250113 22:47:56.237071 1493 update_attempter.cc:509] Updating boot flags...
Jan 13 22:47:56.338668 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2278)
Jan 13 22:47:56.399559 kubelet[1923]: E0113 22:47:56.399490 1923 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 22:47:56.449084 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2276)
Jan 13 22:47:56.579370 containerd[1501]: time="2025-01-13T22:47:56.578611466Z" level=info msg="CreateContainer within sandbox \"772f0d4e0b0eca5e9cb5515136510bd8a93af374b796830d107f9dfad1cd2c4f\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 13 22:47:56.588061 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2276)
Jan 13 22:47:56.656860 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2354938565.mount: Deactivated successfully.
Jan 13 22:47:56.678878 containerd[1501]: time="2025-01-13T22:47:56.678804793Z" level=info msg="CreateContainer within sandbox \"772f0d4e0b0eca5e9cb5515136510bd8a93af374b796830d107f9dfad1cd2c4f\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"cf8fa3d2bd54a491ef11040cc03ce8f76251c23d93718aa88cf6867924b73e2c\""
Jan 13 22:47:56.682800 containerd[1501]: time="2025-01-13T22:47:56.681532788Z" level=info msg="StartContainer for \"cf8fa3d2bd54a491ef11040cc03ce8f76251c23d93718aa88cf6867924b73e2c\""
Jan 13 22:47:56.714553 containerd[1501]: time="2025-01-13T22:47:56.713750178Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 22:47:56.716946 containerd[1501]: time="2025-01-13T22:47:56.716891658Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.8: active requests=0, bytes read=29057478"
Jan 13 22:47:56.718098 containerd[1501]: time="2025-01-13T22:47:56.718063921Z" level=info msg="ImageCreate event name:\"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 22:47:56.724361 containerd[1501]: time="2025-01-13T22:47:56.724317235Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 22:47:56.727601 containerd[1501]: time="2025-01-13T22:47:56.727561102Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.8\" with image id \"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\", repo tag \"registry.k8s.io/kube-proxy:v1.30.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\", size \"29056489\" in 2.706032425s"
Jan 13 22:47:56.727708 containerd[1501]: time="2025-01-13T22:47:56.727617714Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\" returns image reference \"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\""
Jan 13 22:47:56.734324 containerd[1501]: time="2025-01-13T22:47:56.734275951Z" level=info msg="CreateContainer within sandbox \"a9f9b56f2501712eadc680a77edc114aa789816bfaf0d544002c8e5977f49690\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jan 13 22:47:56.740307 systemd[1]: Started cri-containerd-cf8fa3d2bd54a491ef11040cc03ce8f76251c23d93718aa88cf6867924b73e2c.scope - libcontainer container cf8fa3d2bd54a491ef11040cc03ce8f76251c23d93718aa88cf6867924b73e2c.
Jan 13 22:47:56.760985 containerd[1501]: time="2025-01-13T22:47:56.760843276Z" level=info msg="CreateContainer within sandbox \"a9f9b56f2501712eadc680a77edc114aa789816bfaf0d544002c8e5977f49690\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"121ed6c1cfab95a8543a74555c5ca5935646477a9c8a2022dc97edb0673254ea\""
Jan 13 22:47:56.763028 containerd[1501]: time="2025-01-13T22:47:56.761791531Z" level=info msg="StartContainer for \"121ed6c1cfab95a8543a74555c5ca5935646477a9c8a2022dc97edb0673254ea\""
Jan 13 22:47:56.787716 systemd[1]: cri-containerd-cf8fa3d2bd54a491ef11040cc03ce8f76251c23d93718aa88cf6867924b73e2c.scope: Deactivated successfully.
Jan 13 22:47:56.791439 containerd[1501]: time="2025-01-13T22:47:56.791292202Z" level=info msg="StartContainer for \"cf8fa3d2bd54a491ef11040cc03ce8f76251c23d93718aa88cf6867924b73e2c\" returns successfully"
Jan 13 22:47:56.812264 systemd[1]: Started cri-containerd-121ed6c1cfab95a8543a74555c5ca5935646477a9c8a2022dc97edb0673254ea.scope - libcontainer container 121ed6c1cfab95a8543a74555c5ca5935646477a9c8a2022dc97edb0673254ea.
Jan 13 22:47:56.942224 containerd[1501]: time="2025-01-13T22:47:56.940014389Z" level=info msg="StartContainer for \"121ed6c1cfab95a8543a74555c5ca5935646477a9c8a2022dc97edb0673254ea\" returns successfully"
Jan 13 22:47:56.942578 containerd[1501]: time="2025-01-13T22:47:56.942348750Z" level=info msg="shim disconnected" id=cf8fa3d2bd54a491ef11040cc03ce8f76251c23d93718aa88cf6867924b73e2c namespace=k8s.io
Jan 13 22:47:56.942578 containerd[1501]: time="2025-01-13T22:47:56.942429011Z" level=warning msg="cleaning up after shim disconnected" id=cf8fa3d2bd54a491ef11040cc03ce8f76251c23d93718aa88cf6867924b73e2c namespace=k8s.io
Jan 13 22:47:56.942578 containerd[1501]: time="2025-01-13T22:47:56.942448478Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 22:47:57.047783 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cf8fa3d2bd54a491ef11040cc03ce8f76251c23d93718aa88cf6867924b73e2c-rootfs.mount: Deactivated successfully.
Jan 13 22:47:57.400146 kubelet[1923]: E0113 22:47:57.400053 1923 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 22:47:57.589002 containerd[1501]: time="2025-01-13T22:47:57.588889915Z" level=info msg="CreateContainer within sandbox \"772f0d4e0b0eca5e9cb5515136510bd8a93af374b796830d107f9dfad1cd2c4f\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 13 22:47:57.604190 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount336182947.mount: Deactivated successfully.
Jan 13 22:47:57.609308 containerd[1501]: time="2025-01-13T22:47:57.609247062Z" level=info msg="CreateContainer within sandbox \"772f0d4e0b0eca5e9cb5515136510bd8a93af374b796830d107f9dfad1cd2c4f\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"254e610ca880cdef0e6fbe8fe633605ac015af93bc95e23b8594d7a048558c59\""
Jan 13 22:47:57.610108 containerd[1501]: time="2025-01-13T22:47:57.609939476Z" level=info msg="StartContainer for \"254e610ca880cdef0e6fbe8fe633605ac015af93bc95e23b8594d7a048558c59\""
Jan 13 22:47:57.632693 kubelet[1923]: I0113 22:47:57.632475 1923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-q5cq5" podStartSLOduration=4.70973285 podStartE2EDuration="18.632424136s" podCreationTimestamp="2025-01-13 22:47:39 +0000 UTC" firstStartedPulling="2025-01-13 22:47:42.807430265 +0000 UTC m=+4.212952048" lastFinishedPulling="2025-01-13 22:47:56.730121551 +0000 UTC m=+18.135643334" observedRunningTime="2025-01-13 22:47:57.632156457 +0000 UTC m=+19.037678271" watchObservedRunningTime="2025-01-13 22:47:57.632424136 +0000 UTC m=+19.037945955"
Jan 13 22:47:57.659301 systemd[1]: Started cri-containerd-254e610ca880cdef0e6fbe8fe633605ac015af93bc95e23b8594d7a048558c59.scope - libcontainer container 254e610ca880cdef0e6fbe8fe633605ac015af93bc95e23b8594d7a048558c59.
Jan 13 22:47:57.697828 containerd[1501]: time="2025-01-13T22:47:57.697752122Z" level=info msg="StartContainer for \"254e610ca880cdef0e6fbe8fe633605ac015af93bc95e23b8594d7a048558c59\" returns successfully"
Jan 13 22:47:57.894590 kubelet[1923]: I0113 22:47:57.894539 1923 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Jan 13 22:47:58.291383 kernel: Initializing XFRM netlink socket
Jan 13 22:47:58.401427 kubelet[1923]: E0113 22:47:58.401357 1923 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 22:47:59.380953 kubelet[1923]: E0113 22:47:59.380828 1923 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 22:47:59.402365 kubelet[1923]: E0113 22:47:59.402309 1923 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 22:48:00.035851 systemd-networkd[1419]: cilium_host: Link UP
Jan 13 22:48:00.037743 systemd-networkd[1419]: cilium_net: Link UP
Jan 13 22:48:00.040753 systemd-networkd[1419]: cilium_net: Gained carrier
Jan 13 22:48:00.041128 systemd-networkd[1419]: cilium_host: Gained carrier
Jan 13 22:48:00.102323 systemd-networkd[1419]: cilium_net: Gained IPv6LL
Jan 13 22:48:00.197329 systemd-networkd[1419]: cilium_vxlan: Link UP
Jan 13 22:48:00.197340 systemd-networkd[1419]: cilium_vxlan: Gained carrier
Jan 13 22:48:00.212225 systemd-networkd[1419]: cilium_host: Gained IPv6LL
Jan 13 22:48:00.402956 kubelet[1923]: E0113 22:48:00.402859 1923 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 22:48:00.600074 kernel: NET: Registered PF_ALG protocol family
Jan 13 22:48:01.341396 systemd-networkd[1419]: cilium_vxlan: Gained IPv6LL
Jan 13 22:48:01.407672 kubelet[1923]: E0113 22:48:01.406490 1923 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 22:48:01.640750 systemd-networkd[1419]: lxc_health: Link UP
Jan 13 22:48:01.649752 systemd-networkd[1419]: lxc_health: Gained carrier
Jan 13 22:48:01.751416 kubelet[1923]: I0113 22:48:01.751028 1923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-l2vqs" podStartSLOduration=11.527572178 podStartE2EDuration="22.751002906s" podCreationTimestamp="2025-01-13 22:47:39 +0000 UTC" firstStartedPulling="2025-01-13 22:47:42.796661344 +0000 UTC m=+4.202183127" lastFinishedPulling="2025-01-13 22:47:54.020092072 +0000 UTC m=+15.425613855" observedRunningTime="2025-01-13 22:47:58.625570094 +0000 UTC m=+20.031091909" watchObservedRunningTime="2025-01-13 22:48:01.751002906 +0000 UTC m=+23.156524701"
Jan 13 22:48:02.407535 kubelet[1923]: E0113 22:48:02.407334 1923 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 22:48:02.684539 systemd-networkd[1419]: lxc_health: Gained IPv6LL
Jan 13 22:48:03.408753 kubelet[1923]: E0113 22:48:03.408603 1923 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 22:48:04.409582 kubelet[1923]: E0113 22:48:04.409499 1923 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 22:48:05.296096 kubelet[1923]: I0113 22:48:05.294810 1923 topology_manager.go:215] "Topology Admit Handler" podUID="d566e390-360f-4a08-ae21-1caddd6caedb" podNamespace="default" podName="nginx-deployment-85f456d6dd-jg9dv"
Jan 13 22:48:05.309899 systemd[1]: Created slice kubepods-besteffort-podd566e390_360f_4a08_ae21_1caddd6caedb.slice - libcontainer container kubepods-besteffort-podd566e390_360f_4a08_ae21_1caddd6caedb.slice.
Jan 13 22:48:05.386417 kubelet[1923]: I0113 22:48:05.386337 1923 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgr2z\" (UniqueName: \"kubernetes.io/projected/d566e390-360f-4a08-ae21-1caddd6caedb-kube-api-access-lgr2z\") pod \"nginx-deployment-85f456d6dd-jg9dv\" (UID: \"d566e390-360f-4a08-ae21-1caddd6caedb\") " pod="default/nginx-deployment-85f456d6dd-jg9dv"
Jan 13 22:48:05.410215 kubelet[1923]: E0113 22:48:05.410031 1923 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 22:48:05.618533 containerd[1501]: time="2025-01-13T22:48:05.618134505Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-jg9dv,Uid:d566e390-360f-4a08-ae21-1caddd6caedb,Namespace:default,Attempt:0,}"
Jan 13 22:48:05.692583 systemd-networkd[1419]: lxcfdba407877b6: Link UP
Jan 13 22:48:05.701123 kernel: eth0: renamed from tmp86ae5
Jan 13 22:48:05.713899 systemd-networkd[1419]: lxcfdba407877b6: Gained carrier
Jan 13 22:48:06.411151 kubelet[1923]: E0113 22:48:06.411024 1923 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 22:48:07.100326 systemd-networkd[1419]: lxcfdba407877b6: Gained IPv6LL
Jan 13 22:48:07.412395 kubelet[1923]: E0113 22:48:07.412312 1923 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 22:48:08.312138 containerd[1501]: time="2025-01-13T22:48:08.311751222Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 22:48:08.312138 containerd[1501]: time="2025-01-13T22:48:08.311867006Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 22:48:08.312138 containerd[1501]: time="2025-01-13T22:48:08.311886431Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 22:48:08.312138 containerd[1501]: time="2025-01-13T22:48:08.312079800Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 22:48:08.376056 systemd[1]: run-containerd-runc-k8s.io-86ae5096ed9c01ad9283cd6d42bb258865ab90bd88dda4c16fc38cf847cff4ab-runc.2dh2Fs.mount: Deactivated successfully.
Jan 13 22:48:08.390405 systemd[1]: Started cri-containerd-86ae5096ed9c01ad9283cd6d42bb258865ab90bd88dda4c16fc38cf847cff4ab.scope - libcontainer container 86ae5096ed9c01ad9283cd6d42bb258865ab90bd88dda4c16fc38cf847cff4ab.
Jan 13 22:48:08.413097 kubelet[1923]: E0113 22:48:08.412984 1923 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 22:48:08.459671 containerd[1501]: time="2025-01-13T22:48:08.459529636Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-jg9dv,Uid:d566e390-360f-4a08-ae21-1caddd6caedb,Namespace:default,Attempt:0,} returns sandbox id \"86ae5096ed9c01ad9283cd6d42bb258865ab90bd88dda4c16fc38cf847cff4ab\""
Jan 13 22:48:08.463726 containerd[1501]: time="2025-01-13T22:48:08.463368106Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Jan 13 22:48:09.414014 kubelet[1923]: E0113 22:48:09.413939 1923 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 22:48:10.414930 kubelet[1923]: E0113 22:48:10.414808 1923 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 22:48:11.416571 kubelet[1923]: E0113 22:48:11.416459 1923 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 22:48:12.417599 kubelet[1923]: E0113 22:48:12.417491 1923 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 22:48:12.509623 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3031948804.mount: Deactivated successfully.
Jan 13 22:48:13.418653 kubelet[1923]: E0113 22:48:13.418564 1923 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 22:48:14.297079 containerd[1501]: time="2025-01-13T22:48:14.296905752Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 22:48:14.419917 kubelet[1923]: E0113 22:48:14.419830 1923 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 22:48:14.487763 containerd[1501]: time="2025-01-13T22:48:14.487646368Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=71036018"
Jan 13 22:48:14.489364 containerd[1501]: time="2025-01-13T22:48:14.489283043Z" level=info msg="ImageCreate event name:\"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 22:48:14.494134 containerd[1501]: time="2025-01-13T22:48:14.493643633Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:eca1d1ff18c7af45f86b7e0b572090f563a676ddca3da2ecff678390366335ad\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 22:48:14.494988 containerd[1501]: time="2025-01-13T22:48:14.494948363Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:eca1d1ff18c7af45f86b7e0b572090f563a676ddca3da2ecff678390366335ad\", size \"71035896\" in 6.031520915s"
Jan 13 22:48:14.495106 containerd[1501]: time="2025-01-13T22:48:14.494997482Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\""
Jan 13 22:48:14.498950 containerd[1501]: time="2025-01-13T22:48:14.498912845Z" level=info msg="CreateContainer within sandbox \"86ae5096ed9c01ad9283cd6d42bb258865ab90bd88dda4c16fc38cf847cff4ab\" for container &ContainerMetadata{Name:nginx,Attempt:0,}"
Jan 13 22:48:14.523841 containerd[1501]: time="2025-01-13T22:48:14.523773910Z" level=info msg="CreateContainer within sandbox \"86ae5096ed9c01ad9283cd6d42bb258865ab90bd88dda4c16fc38cf847cff4ab\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"07af70eee5126780f9644d810b4caf92b3707d94d96246a5eb0e30492f12e962\""
Jan 13 22:48:14.525929 containerd[1501]: time="2025-01-13T22:48:14.524586600Z" level=info msg="StartContainer for \"07af70eee5126780f9644d810b4caf92b3707d94d96246a5eb0e30492f12e962\""
Jan 13 22:48:14.577473 systemd[1]: run-containerd-runc-k8s.io-07af70eee5126780f9644d810b4caf92b3707d94d96246a5eb0e30492f12e962-runc.VtfUEA.mount: Deactivated successfully.
Jan 13 22:48:14.591310 systemd[1]: Started cri-containerd-07af70eee5126780f9644d810b4caf92b3707d94d96246a5eb0e30492f12e962.scope - libcontainer container 07af70eee5126780f9644d810b4caf92b3707d94d96246a5eb0e30492f12e962.
Jan 13 22:48:14.628333 containerd[1501]: time="2025-01-13T22:48:14.628247924Z" level=info msg="StartContainer for \"07af70eee5126780f9644d810b4caf92b3707d94d96246a5eb0e30492f12e962\" returns successfully"
Jan 13 22:48:14.669245 kubelet[1923]: I0113 22:48:14.668890 1923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-85f456d6dd-jg9dv" podStartSLOduration=3.634878903 podStartE2EDuration="9.668857973s" podCreationTimestamp="2025-01-13 22:48:05 +0000 UTC" firstStartedPulling="2025-01-13 22:48:08.462857827 +0000 UTC m=+29.868379609" lastFinishedPulling="2025-01-13 22:48:14.496836891 +0000 UTC m=+35.902358679" observedRunningTime="2025-01-13 22:48:14.667623404 +0000 UTC m=+36.073145198" watchObservedRunningTime="2025-01-13 22:48:14.668857973 +0000 UTC m=+36.074379801"
Jan 13 22:48:15.420502 kubelet[1923]: E0113 22:48:15.420400 1923 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 22:48:16.421396 kubelet[1923]: E0113 22:48:16.421329 1923 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 22:48:17.422056 kubelet[1923]: E0113 22:48:17.421969 1923 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 22:48:18.423080 kubelet[1923]: E0113 22:48:18.422950 1923 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 22:48:19.380317 kubelet[1923]: E0113 22:48:19.380165 1923 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 22:48:19.423894 kubelet[1923]: E0113 22:48:19.423816 1923 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 22:48:20.424876 kubelet[1923]: E0113 22:48:20.424810 1923 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 22:48:21.425605 kubelet[1923]: E0113 22:48:21.425521 1923 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 22:48:22.426449 kubelet[1923]: E0113 22:48:22.426370 1923 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 22:48:23.427304 kubelet[1923]: E0113 22:48:23.427234 1923 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 22:48:24.427516 kubelet[1923]: E0113 22:48:24.427433 1923 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 22:48:25.428715 kubelet[1923]: E0113 22:48:25.428638 1923 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 22:48:26.430758 kubelet[1923]: E0113 22:48:26.430663 1923 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 22:48:26.680506 kubelet[1923]: I0113 22:48:26.680134 1923 topology_manager.go:215] "Topology Admit Handler" podUID="cd0e8e39-3874-4834-9fbc-5dff9b5bbe5a" podNamespace="default" podName="nfs-server-provisioner-0"
Jan 13 22:48:26.690594 systemd[1]: Created slice kubepods-besteffort-podcd0e8e39_3874_4834_9fbc_5dff9b5bbe5a.slice - libcontainer container kubepods-besteffort-podcd0e8e39_3874_4834_9fbc_5dff9b5bbe5a.slice.
Jan 13 22:48:26.816553 kubelet[1923]: I0113 22:48:26.816323 1923 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/cd0e8e39-3874-4834-9fbc-5dff9b5bbe5a-data\") pod \"nfs-server-provisioner-0\" (UID: \"cd0e8e39-3874-4834-9fbc-5dff9b5bbe5a\") " pod="default/nfs-server-provisioner-0"
Jan 13 22:48:26.816553 kubelet[1923]: I0113 22:48:26.816406 1923 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jf6vw\" (UniqueName: \"kubernetes.io/projected/cd0e8e39-3874-4834-9fbc-5dff9b5bbe5a-kube-api-access-jf6vw\") pod \"nfs-server-provisioner-0\" (UID: \"cd0e8e39-3874-4834-9fbc-5dff9b5bbe5a\") " pod="default/nfs-server-provisioner-0"
Jan 13 22:48:26.996888 containerd[1501]: time="2025-01-13T22:48:26.996613541Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:cd0e8e39-3874-4834-9fbc-5dff9b5bbe5a,Namespace:default,Attempt:0,}"
Jan 13 22:48:27.059369 systemd-networkd[1419]: lxcb8b232a7dcfd: Link UP
Jan 13 22:48:27.072705 kernel: eth0: renamed from tmp07a31
Jan 13 22:48:27.075507 systemd-networkd[1419]: lxcb8b232a7dcfd: Gained carrier
Jan 13 22:48:27.353381 containerd[1501]: time="2025-01-13T22:48:27.353146431Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 22:48:27.353737 containerd[1501]: time="2025-01-13T22:48:27.353677496Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 22:48:27.354308 containerd[1501]: time="2025-01-13T22:48:27.353748418Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 22:48:27.354308 containerd[1501]: time="2025-01-13T22:48:27.353973139Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 22:48:27.387287 systemd[1]: Started cri-containerd-07a31a11bc2c21d8686fa24128d0f21306394dadba6040778224ac2b983cb4f8.scope - libcontainer container 07a31a11bc2c21d8686fa24128d0f21306394dadba6040778224ac2b983cb4f8.
Jan 13 22:48:27.431475 kubelet[1923]: E0113 22:48:27.431029 1923 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 22:48:27.447419 containerd[1501]: time="2025-01-13T22:48:27.446410288Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:cd0e8e39-3874-4834-9fbc-5dff9b5bbe5a,Namespace:default,Attempt:0,} returns sandbox id \"07a31a11bc2c21d8686fa24128d0f21306394dadba6040778224ac2b983cb4f8\""
Jan 13 22:48:27.449622 containerd[1501]: time="2025-01-13T22:48:27.449483825Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\""
Jan 13 22:48:28.220347 systemd-networkd[1419]: lxcb8b232a7dcfd: Gained IPv6LL
Jan 13 22:48:28.432701 kubelet[1923]: E0113 22:48:28.432586 1923 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 22:48:29.433352 kubelet[1923]: E0113 22:48:29.433269 1923 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 22:48:30.434109 kubelet[1923]: E0113 22:48:30.433925 1923 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 22:48:31.434675 kubelet[1923]: E0113 22:48:31.434576 1923 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 22:48:31.740544 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3962930938.mount: Deactivated successfully.
Jan 13 22:48:32.435440 kubelet[1923]: E0113 22:48:32.435321 1923 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 22:48:33.435975 kubelet[1923]: E0113 22:48:33.435858 1923 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 22:48:34.436398 kubelet[1923]: E0113 22:48:34.436340 1923 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 22:48:34.746117 containerd[1501]: time="2025-01-13T22:48:34.745735760Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 22:48:34.747756 containerd[1501]: time="2025-01-13T22:48:34.747687448Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039414"
Jan 13 22:48:34.749074 containerd[1501]: time="2025-01-13T22:48:34.748450438Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 22:48:34.752896 containerd[1501]: time="2025-01-13T22:48:34.752806322Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 22:48:34.754448 containerd[1501]: time="2025-01-13T22:48:34.754404065Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 7.304851381s"
Jan 13 22:48:34.754564 containerd[1501]: time="2025-01-13T22:48:34.754466410Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\""
Jan 13 22:48:34.760470 containerd[1501]: time="2025-01-13T22:48:34.760418568Z" level=info msg="CreateContainer within sandbox \"07a31a11bc2c21d8686fa24128d0f21306394dadba6040778224ac2b983cb4f8\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}"
Jan 13 22:48:34.780768 containerd[1501]: time="2025-01-13T22:48:34.780556690Z" level=info msg="CreateContainer within sandbox \"07a31a11bc2c21d8686fa24128d0f21306394dadba6040778224ac2b983cb4f8\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"2acd91ec92a34df3a7c53b9467c623d4f764f26111b2e1c9b1612d847f329f43\""
Jan 13 22:48:34.781898 containerd[1501]: time="2025-01-13T22:48:34.781756869Z" level=info msg="StartContainer for \"2acd91ec92a34df3a7c53b9467c623d4f764f26111b2e1c9b1612d847f329f43\""
Jan 13 22:48:34.840370 systemd[1]: Started cri-containerd-2acd91ec92a34df3a7c53b9467c623d4f764f26111b2e1c9b1612d847f329f43.scope - libcontainer container 2acd91ec92a34df3a7c53b9467c623d4f764f26111b2e1c9b1612d847f329f43.
Jan 13 22:48:34.886471 containerd[1501]: time="2025-01-13T22:48:34.886416758Z" level=info msg="StartContainer for \"2acd91ec92a34df3a7c53b9467c623d4f764f26111b2e1c9b1612d847f329f43\" returns successfully"
Jan 13 22:48:35.437129 kubelet[1923]: E0113 22:48:35.437016 1923 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 22:48:35.718652 kubelet[1923]: I0113 22:48:35.718269 1923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=2.40998563 podStartE2EDuration="9.718227133s" podCreationTimestamp="2025-01-13 22:48:26 +0000 UTC" firstStartedPulling="2025-01-13 22:48:27.44897728 +0000 UTC m=+48.854499068" lastFinishedPulling="2025-01-13 22:48:34.757218783 +0000 UTC m=+56.162740571" observedRunningTime="2025-01-13 22:48:35.717601177 +0000 UTC m=+57.123123020" watchObservedRunningTime="2025-01-13 22:48:35.718227133 +0000 UTC m=+57.123748929"
Jan 13 22:48:36.438241 kubelet[1923]: E0113 22:48:36.438155 1923 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 22:48:37.439068 kubelet[1923]: E0113 22:48:37.438947 1923 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 22:48:38.440176 kubelet[1923]: E0113 22:48:38.439976 1923 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 22:48:39.380456 kubelet[1923]: E0113 22:48:39.380366 1923 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 22:48:39.441010 kubelet[1923]: E0113 22:48:39.440928 1923 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 22:48:40.441786 kubelet[1923]: E0113 22:48:40.441562 1923 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 22:48:41.442651 kubelet[1923]: E0113 22:48:41.442563 1923 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 22:48:42.442992 kubelet[1923]: E0113 22:48:42.442892 1923 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 22:48:42.927414 systemd[1]: Started sshd@9-10.244.10.2:22-2.57.122.194:41960.service - OpenSSH per-connection server daemon (2.57.122.194:41960).
Jan 13 22:48:43.022257 sshd[3321]: Connection closed by 2.57.122.194 port 41960
Jan 13 22:48:43.023350 systemd[1]: sshd@9-10.244.10.2:22-2.57.122.194:41960.service: Deactivated successfully.
Jan 13 22:48:43.443459 kubelet[1923]: E0113 22:48:43.443381 1923 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 22:48:44.443938 kubelet[1923]: E0113 22:48:44.443851 1923 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 22:48:45.001372 kubelet[1923]: I0113 22:48:45.000719 1923 topology_manager.go:215] "Topology Admit Handler" podUID="a7621b20-c03c-4a53-a05d-ca0fdec89e14" podNamespace="default" podName="test-pod-1"
Jan 13 22:48:45.011210 systemd[1]: Created slice kubepods-besteffort-poda7621b20_c03c_4a53_a05d_ca0fdec89e14.slice - libcontainer container kubepods-besteffort-poda7621b20_c03c_4a53_a05d_ca0fdec89e14.slice.
Jan 13 22:48:45.138196 kubelet[1923]: I0113 22:48:45.138019 1923 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-41e0dc75-5ef1-4586-9741-0e101cda4762\" (UniqueName: \"kubernetes.io/nfs/a7621b20-c03c-4a53-a05d-ca0fdec89e14-pvc-41e0dc75-5ef1-4586-9741-0e101cda4762\") pod \"test-pod-1\" (UID: \"a7621b20-c03c-4a53-a05d-ca0fdec89e14\") " pod="default/test-pod-1"
Jan 13 22:48:45.138196 kubelet[1923]: I0113 22:48:45.138152 1923 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ffzp\" (UniqueName: \"kubernetes.io/projected/a7621b20-c03c-4a53-a05d-ca0fdec89e14-kube-api-access-4ffzp\") pod \"test-pod-1\" (UID: \"a7621b20-c03c-4a53-a05d-ca0fdec89e14\") " pod="default/test-pod-1"
Jan 13 22:48:45.282100 kernel: FS-Cache: Loaded
Jan 13 22:48:45.374315 kernel: RPC: Registered named UNIX socket transport module.
Jan 13 22:48:45.374506 kernel: RPC: Registered udp transport module.
Jan 13 22:48:45.375363 kernel: RPC: Registered tcp transport module.
Jan 13 22:48:45.376318 kernel: RPC: Registered tcp-with-tls transport module.
Jan 13 22:48:45.377529 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Jan 13 22:48:45.444179 kubelet[1923]: E0113 22:48:45.444024 1923 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 22:48:45.720515 kernel: NFS: Registering the id_resolver key type
Jan 13 22:48:45.720814 kernel: Key type id_resolver registered
Jan 13 22:48:45.720870 kernel: Key type id_legacy registered
Jan 13 22:48:45.783136 nfsidmap[3339]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'gb1.brightbox.com'
Jan 13 22:48:45.792200 nfsidmap[3342]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'gb1.brightbox.com'
Jan 13 22:48:45.917764 containerd[1501]: time="2025-01-13T22:48:45.917480167Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:a7621b20-c03c-4a53-a05d-ca0fdec89e14,Namespace:default,Attempt:0,}"
Jan 13 22:48:46.012881 systemd-networkd[1419]: lxc00abd14ee42a: Link UP
Jan 13 22:48:46.019019 kernel: eth0: renamed from tmpe163d
Jan 13 22:48:46.024485 systemd-networkd[1419]: lxc00abd14ee42a: Gained carrier
Jan 13 22:48:46.268873 containerd[1501]: time="2025-01-13T22:48:46.268601925Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 22:48:46.268873 containerd[1501]: time="2025-01-13T22:48:46.268739032Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 22:48:46.268873 containerd[1501]: time="2025-01-13T22:48:46.268763639Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 22:48:46.269598 containerd[1501]: time="2025-01-13T22:48:46.269429541Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 22:48:46.310295 systemd[1]: Started cri-containerd-e163d12ea7a2ca4b2374e62ba89cb46d5da40c7dc204a6f55040593b70cbae28.scope - libcontainer container e163d12ea7a2ca4b2374e62ba89cb46d5da40c7dc204a6f55040593b70cbae28.
Jan 13 22:48:46.372785 containerd[1501]: time="2025-01-13T22:48:46.372695913Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:a7621b20-c03c-4a53-a05d-ca0fdec89e14,Namespace:default,Attempt:0,} returns sandbox id \"e163d12ea7a2ca4b2374e62ba89cb46d5da40c7dc204a6f55040593b70cbae28\""
Jan 13 22:48:46.375804 containerd[1501]: time="2025-01-13T22:48:46.375774084Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Jan 13 22:48:46.444809 kubelet[1923]: E0113 22:48:46.444698 1923 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 22:48:46.740092 containerd[1501]: time="2025-01-13T22:48:46.738299478Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 22:48:46.740092 containerd[1501]: time="2025-01-13T22:48:46.738998156Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61"
Jan 13 22:48:46.743915 containerd[1501]: time="2025-01-13T22:48:46.743836777Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:eca1d1ff18c7af45f86b7e0b572090f563a676ddca3da2ecff678390366335ad\", size \"71035896\" in 367.890363ms"
Jan 13 22:48:46.743915 containerd[1501]: time="2025-01-13T22:48:46.743883404Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\""
Jan 13 22:48:46.747535 containerd[1501]: time="2025-01-13T22:48:46.747329282Z" level=info msg="CreateContainer within sandbox \"e163d12ea7a2ca4b2374e62ba89cb46d5da40c7dc204a6f55040593b70cbae28\" for container &ContainerMetadata{Name:test,Attempt:0,}"
Jan 13 22:48:46.763325 containerd[1501]: time="2025-01-13T22:48:46.763283769Z" level=info msg="CreateContainer within sandbox \"e163d12ea7a2ca4b2374e62ba89cb46d5da40c7dc204a6f55040593b70cbae28\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"191adf2a39a666a1f67d8de0601e19ddf8f5b70e3acd46816f3cee8e3c7fd884\""
Jan 13 22:48:46.764352 containerd[1501]: time="2025-01-13T22:48:46.764283092Z" level=info msg="StartContainer for \"191adf2a39a666a1f67d8de0601e19ddf8f5b70e3acd46816f3cee8e3c7fd884\""
Jan 13 22:48:46.803272 systemd[1]: Started cri-containerd-191adf2a39a666a1f67d8de0601e19ddf8f5b70e3acd46816f3cee8e3c7fd884.scope - libcontainer container 191adf2a39a666a1f67d8de0601e19ddf8f5b70e3acd46816f3cee8e3c7fd884.
Jan 13 22:48:46.843131 containerd[1501]: time="2025-01-13T22:48:46.842396897Z" level=info msg="StartContainer for \"191adf2a39a666a1f67d8de0601e19ddf8f5b70e3acd46816f3cee8e3c7fd884\" returns successfully"
Jan 13 22:48:47.445081 kubelet[1923]: E0113 22:48:47.444940 1923 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 22:48:47.740420 systemd-networkd[1419]: lxc00abd14ee42a: Gained IPv6LL
Jan 13 22:48:47.750787 kubelet[1923]: I0113 22:48:47.750605 1923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=19.381217083 podStartE2EDuration="19.750560007s" podCreationTimestamp="2025-01-13 22:48:28 +0000 UTC" firstStartedPulling="2025-01-13 22:48:46.37535325 +0000 UTC m=+67.780875038" lastFinishedPulling="2025-01-13 22:48:46.744696169 +0000 UTC m=+68.150217962" observedRunningTime="2025-01-13 22:48:47.750137705 +0000 UTC m=+69.155659506" watchObservedRunningTime="2025-01-13 22:48:47.750560007 +0000 UTC m=+69.156081818"
Jan 13 22:48:48.446211 kubelet[1923]: E0113 22:48:48.446131 1923 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 22:48:49.447385 kubelet[1923]: E0113 22:48:49.447267 1923 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 22:48:50.448585 kubelet[1923]: E0113 22:48:50.448499 1923 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 22:48:51.449456 kubelet[1923]: E0113 22:48:51.449384 1923 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 22:48:52.449713 kubelet[1923]: E0113 22:48:52.449616 1923 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 22:48:53.450255 kubelet[1923]: E0113 22:48:53.450166 1923 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 22:48:54.451331 kubelet[1923]: E0113 22:48:54.451239 1923 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 22:48:54.674601 containerd[1501]: time="2025-01-13T22:48:54.674408600Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 13 22:48:54.732912 containerd[1501]: time="2025-01-13T22:48:54.732648548Z" level=info msg="StopContainer for \"254e610ca880cdef0e6fbe8fe633605ac015af93bc95e23b8594d7a048558c59\" with timeout 2 (s)"
Jan 13 22:48:54.741117 containerd[1501]: time="2025-01-13T22:48:54.740965354Z" level=info msg="Stop container \"254e610ca880cdef0e6fbe8fe633605ac015af93bc95e23b8594d7a048558c59\" with signal terminated"
Jan 13 22:48:54.752576 systemd-networkd[1419]: lxc_health: Link DOWN
Jan 13 22:48:54.752589 systemd-networkd[1419]: lxc_health: Lost carrier
Jan 13 22:48:54.772584 systemd[1]: cri-containerd-254e610ca880cdef0e6fbe8fe633605ac015af93bc95e23b8594d7a048558c59.scope: Deactivated successfully.
Jan 13 22:48:54.773005 systemd[1]: cri-containerd-254e610ca880cdef0e6fbe8fe633605ac015af93bc95e23b8594d7a048558c59.scope: Consumed 10.291s CPU time.
Jan 13 22:48:54.803394 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-254e610ca880cdef0e6fbe8fe633605ac015af93bc95e23b8594d7a048558c59-rootfs.mount: Deactivated successfully.
Jan 13 22:48:54.827160 containerd[1501]: time="2025-01-13T22:48:54.812498716Z" level=info msg="shim disconnected" id=254e610ca880cdef0e6fbe8fe633605ac015af93bc95e23b8594d7a048558c59 namespace=k8s.io
Jan 13 22:48:54.827160 containerd[1501]: time="2025-01-13T22:48:54.826990443Z" level=warning msg="cleaning up after shim disconnected" id=254e610ca880cdef0e6fbe8fe633605ac015af93bc95e23b8594d7a048558c59 namespace=k8s.io
Jan 13 22:48:54.827160 containerd[1501]: time="2025-01-13T22:48:54.827020276Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 22:48:54.851284 containerd[1501]: time="2025-01-13T22:48:54.851221271Z" level=info msg="StopContainer for \"254e610ca880cdef0e6fbe8fe633605ac015af93bc95e23b8594d7a048558c59\" returns successfully"
Jan 13 22:48:54.854795 containerd[1501]: time="2025-01-13T22:48:54.852306177Z" level=info msg="StopPodSandbox for \"772f0d4e0b0eca5e9cb5515136510bd8a93af374b796830d107f9dfad1cd2c4f\""
Jan 13 22:48:54.854795 containerd[1501]: time="2025-01-13T22:48:54.852366372Z" level=info msg="Container to stop \"993feed854c3907da36cb2b5321c1f03fab9210eaf034db1393cbbd3d0f168e7\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 13 22:48:54.854795 containerd[1501]: time="2025-01-13T22:48:54.852426544Z" level=info msg="Container to stop \"7e1db27987096b6ccd14dc5ae581c22125e077bcb01fcda0b543d020e17b145f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 13 22:48:54.854795 containerd[1501]: time="2025-01-13T22:48:54.852442158Z" level=info msg="Container to stop \"254e610ca880cdef0e6fbe8fe633605ac015af93bc95e23b8594d7a048558c59\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 13 22:48:54.854795 containerd[1501]: time="2025-01-13T22:48:54.852459456Z" level=info msg="Container to stop \"3cbf4016fe4ddd2d32ca1515631837ecea6fe4897dd47fc0ac8c573b7b9257b9\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 13 22:48:54.854795 containerd[1501]: time="2025-01-13T22:48:54.852474620Z" level=info msg="Container to stop \"cf8fa3d2bd54a491ef11040cc03ce8f76251c23d93718aa88cf6867924b73e2c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 13 22:48:54.854832 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-772f0d4e0b0eca5e9cb5515136510bd8a93af374b796830d107f9dfad1cd2c4f-shm.mount: Deactivated successfully.
Jan 13 22:48:54.865586 systemd[1]: cri-containerd-772f0d4e0b0eca5e9cb5515136510bd8a93af374b796830d107f9dfad1cd2c4f.scope: Deactivated successfully.
Jan 13 22:48:54.893854 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-772f0d4e0b0eca5e9cb5515136510bd8a93af374b796830d107f9dfad1cd2c4f-rootfs.mount: Deactivated successfully.
Jan 13 22:48:54.898561 containerd[1501]: time="2025-01-13T22:48:54.898516625Z" level=info msg="shim disconnected" id=772f0d4e0b0eca5e9cb5515136510bd8a93af374b796830d107f9dfad1cd2c4f namespace=k8s.io
Jan 13 22:48:54.898990 containerd[1501]: time="2025-01-13T22:48:54.898762654Z" level=warning msg="cleaning up after shim disconnected" id=772f0d4e0b0eca5e9cb5515136510bd8a93af374b796830d107f9dfad1cd2c4f namespace=k8s.io
Jan 13 22:48:54.898990 containerd[1501]: time="2025-01-13T22:48:54.898789015Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 22:48:54.920454 containerd[1501]: time="2025-01-13T22:48:54.920393804Z" level=info msg="TearDown network for sandbox \"772f0d4e0b0eca5e9cb5515136510bd8a93af374b796830d107f9dfad1cd2c4f\" successfully"
Jan 13 22:48:54.920454 containerd[1501]: time="2025-01-13T22:48:54.920441058Z" level=info msg="StopPodSandbox for \"772f0d4e0b0eca5e9cb5515136510bd8a93af374b796830d107f9dfad1cd2c4f\" returns successfully"
Jan 13 22:48:55.011345 kubelet[1923]: I0113 22:48:55.010336 1923 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1be9a641-8a33-4b43-8027-7ef38f5c3858-xtables-lock\") pod \"1be9a641-8a33-4b43-8027-7ef38f5c3858\" (UID: \"1be9a641-8a33-4b43-8027-7ef38f5c3858\") "
Jan 13 22:48:55.011345 kubelet[1923]: I0113 22:48:55.010422 1923 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1be9a641-8a33-4b43-8027-7ef38f5c3858-clustermesh-secrets\") pod \"1be9a641-8a33-4b43-8027-7ef38f5c3858\" (UID: \"1be9a641-8a33-4b43-8027-7ef38f5c3858\") "
Jan 13 22:48:55.011345 kubelet[1923]: I0113 22:48:55.010454 1923 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1be9a641-8a33-4b43-8027-7ef38f5c3858-host-proc-sys-kernel\") pod \"1be9a641-8a33-4b43-8027-7ef38f5c3858\" (UID: \"1be9a641-8a33-4b43-8027-7ef38f5c3858\") "
Jan 13 22:48:55.011345 kubelet[1923]: I0113 22:48:55.010481 1923 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1be9a641-8a33-4b43-8027-7ef38f5c3858-cilium-run\") pod \"1be9a641-8a33-4b43-8027-7ef38f5c3858\" (UID: \"1be9a641-8a33-4b43-8027-7ef38f5c3858\") "
Jan 13 22:48:55.011345 kubelet[1923]: I0113 22:48:55.010505 1923 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1be9a641-8a33-4b43-8027-7ef38f5c3858-bpf-maps\") pod \"1be9a641-8a33-4b43-8027-7ef38f5c3858\" (UID: \"1be9a641-8a33-4b43-8027-7ef38f5c3858\") "
Jan 13 22:48:55.011345 kubelet[1923]: I0113 22:48:55.010529 1923 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1be9a641-8a33-4b43-8027-7ef38f5c3858-host-proc-sys-net\") pod \"1be9a641-8a33-4b43-8027-7ef38f5c3858\" (UID: \"1be9a641-8a33-4b43-8027-7ef38f5c3858\") "
Jan 13 22:48:55.011782 kubelet[1923]: I0113 22:48:55.010526 1923 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1be9a641-8a33-4b43-8027-7ef38f5c3858-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "1be9a641-8a33-4b43-8027-7ef38f5c3858" (UID: "1be9a641-8a33-4b43-8027-7ef38f5c3858"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 22:48:55.011782 kubelet[1923]: I0113 22:48:55.010556 1923 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1be9a641-8a33-4b43-8027-7ef38f5c3858-hubble-tls\") pod \"1be9a641-8a33-4b43-8027-7ef38f5c3858\" (UID: \"1be9a641-8a33-4b43-8027-7ef38f5c3858\") "
Jan 13 22:48:55.011782 kubelet[1923]: I0113 22:48:55.010583 1923 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1be9a641-8a33-4b43-8027-7ef38f5c3858-hostproc\") pod \"1be9a641-8a33-4b43-8027-7ef38f5c3858\" (UID: \"1be9a641-8a33-4b43-8027-7ef38f5c3858\") "
Jan 13 22:48:55.011782 kubelet[1923]: I0113 22:48:55.010601 1923 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1be9a641-8a33-4b43-8027-7ef38f5c3858-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "1be9a641-8a33-4b43-8027-7ef38f5c3858" (UID: "1be9a641-8a33-4b43-8027-7ef38f5c3858"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 22:48:55.011782 kubelet[1923]: I0113 22:48:55.010617 1923 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1be9a641-8a33-4b43-8027-7ef38f5c3858-lib-modules\") pod \"1be9a641-8a33-4b43-8027-7ef38f5c3858\" (UID: \"1be9a641-8a33-4b43-8027-7ef38f5c3858\") "
Jan 13 22:48:55.011782 kubelet[1923]: I0113 22:48:55.010645 1923 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v65b5\" (UniqueName: \"kubernetes.io/projected/1be9a641-8a33-4b43-8027-7ef38f5c3858-kube-api-access-v65b5\") pod \"1be9a641-8a33-4b43-8027-7ef38f5c3858\" (UID: \"1be9a641-8a33-4b43-8027-7ef38f5c3858\") "
Jan 13 22:48:55.015083 kubelet[1923]: I0113 22:48:55.010669 1923 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1be9a641-8a33-4b43-8027-7ef38f5c3858-etc-cni-netd\") pod \"1be9a641-8a33-4b43-8027-7ef38f5c3858\" (UID: \"1be9a641-8a33-4b43-8027-7ef38f5c3858\") "
Jan 13 22:48:55.015083 kubelet[1923]: I0113 22:48:55.010707 1923 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1be9a641-8a33-4b43-8027-7ef38f5c3858-cilium-config-path\") pod \"1be9a641-8a33-4b43-8027-7ef38f5c3858\" (UID: \"1be9a641-8a33-4b43-8027-7ef38f5c3858\") "
Jan 13 22:48:55.015083 kubelet[1923]: I0113 22:48:55.010733 1923 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1be9a641-8a33-4b43-8027-7ef38f5c3858-cni-path\") pod \"1be9a641-8a33-4b43-8027-7ef38f5c3858\" (UID: \"1be9a641-8a33-4b43-8027-7ef38f5c3858\") "
Jan 13 22:48:55.015083 kubelet[1923]: I0113 22:48:55.010755 1923 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1be9a641-8a33-4b43-8027-7ef38f5c3858-cilium-cgroup\") pod \"1be9a641-8a33-4b43-8027-7ef38f5c3858\" (UID: \"1be9a641-8a33-4b43-8027-7ef38f5c3858\") "
Jan 13 22:48:55.015083 kubelet[1923]: I0113 22:48:55.010809 1923 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1be9a641-8a33-4b43-8027-7ef38f5c3858-cilium-run\") on node \"10.244.10.2\" DevicePath \"\""
Jan 13 22:48:55.015083 kubelet[1923]: I0113 22:48:55.010831 1923 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1be9a641-8a33-4b43-8027-7ef38f5c3858-xtables-lock\") on node \"10.244.10.2\" DevicePath \"\""
Jan 13 22:48:55.015395 kubelet[1923]: I0113 22:48:55.010867 1923 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1be9a641-8a33-4b43-8027-7ef38f5c3858-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "1be9a641-8a33-4b43-8027-7ef38f5c3858" (UID: "1be9a641-8a33-4b43-8027-7ef38f5c3858"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 22:48:55.015395 kubelet[1923]: I0113 22:48:55.010908 1923 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1be9a641-8a33-4b43-8027-7ef38f5c3858-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "1be9a641-8a33-4b43-8027-7ef38f5c3858" (UID: "1be9a641-8a33-4b43-8027-7ef38f5c3858"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 22:48:55.015395 kubelet[1923]: I0113 22:48:55.010937 1923 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1be9a641-8a33-4b43-8027-7ef38f5c3858-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "1be9a641-8a33-4b43-8027-7ef38f5c3858" (UID: "1be9a641-8a33-4b43-8027-7ef38f5c3858"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 22:48:55.015395 kubelet[1923]: I0113 22:48:55.012939 1923 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1be9a641-8a33-4b43-8027-7ef38f5c3858-hostproc" (OuterVolumeSpecName: "hostproc") pod "1be9a641-8a33-4b43-8027-7ef38f5c3858" (UID: "1be9a641-8a33-4b43-8027-7ef38f5c3858"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 22:48:55.015395 kubelet[1923]: I0113 22:48:55.013017 1923 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1be9a641-8a33-4b43-8027-7ef38f5c3858-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "1be9a641-8a33-4b43-8027-7ef38f5c3858" (UID: "1be9a641-8a33-4b43-8027-7ef38f5c3858"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 22:48:55.015962 kubelet[1923]: I0113 22:48:55.015926 1923 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1be9a641-8a33-4b43-8027-7ef38f5c3858-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "1be9a641-8a33-4b43-8027-7ef38f5c3858" (UID: "1be9a641-8a33-4b43-8027-7ef38f5c3858"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 22:48:55.016170 kubelet[1923]: I0113 22:48:55.016132 1923 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1be9a641-8a33-4b43-8027-7ef38f5c3858-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "1be9a641-8a33-4b43-8027-7ef38f5c3858" (UID: "1be9a641-8a33-4b43-8027-7ef38f5c3858"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 22:48:55.017086 kubelet[1923]: I0113 22:48:55.016439 1923 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1be9a641-8a33-4b43-8027-7ef38f5c3858-cni-path" (OuterVolumeSpecName: "cni-path") pod "1be9a641-8a33-4b43-8027-7ef38f5c3858" (UID: "1be9a641-8a33-4b43-8027-7ef38f5c3858"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 22:48:55.021873 kubelet[1923]: I0113 22:48:55.021777 1923 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1be9a641-8a33-4b43-8027-7ef38f5c3858-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "1be9a641-8a33-4b43-8027-7ef38f5c3858" (UID: "1be9a641-8a33-4b43-8027-7ef38f5c3858"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 13 22:48:55.023785 kubelet[1923]: I0113 22:48:55.023753 1923 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1be9a641-8a33-4b43-8027-7ef38f5c3858-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "1be9a641-8a33-4b43-8027-7ef38f5c3858" (UID: "1be9a641-8a33-4b43-8027-7ef38f5c3858"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 13 22:48:55.024094 kubelet[1923]: I0113 22:48:55.024067 1923 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1be9a641-8a33-4b43-8027-7ef38f5c3858-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "1be9a641-8a33-4b43-8027-7ef38f5c3858" (UID: "1be9a641-8a33-4b43-8027-7ef38f5c3858"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 13 22:48:55.024238 systemd[1]: var-lib-kubelet-pods-1be9a641\x2d8a33\x2d4b43\x2d8027\x2d7ef38f5c3858-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Jan 13 22:48:55.024790 kubelet[1923]: I0113 22:48:55.024752 1923 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1be9a641-8a33-4b43-8027-7ef38f5c3858-kube-api-access-v65b5" (OuterVolumeSpecName: "kube-api-access-v65b5") pod "1be9a641-8a33-4b43-8027-7ef38f5c3858" (UID: "1be9a641-8a33-4b43-8027-7ef38f5c3858"). InnerVolumeSpecName "kube-api-access-v65b5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 13 22:48:55.111291 kubelet[1923]: I0113 22:48:55.111194 1923 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1be9a641-8a33-4b43-8027-7ef38f5c3858-clustermesh-secrets\") on node \"10.244.10.2\" DevicePath \"\""
Jan 13 22:48:55.111291 kubelet[1923]: I0113 22:48:55.111272 1923 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1be9a641-8a33-4b43-8027-7ef38f5c3858-host-proc-sys-kernel\") on node \"10.244.10.2\" DevicePath \"\""
Jan 13 22:48:55.111291 kubelet[1923]: I0113 22:48:55.111293 1923 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1be9a641-8a33-4b43-8027-7ef38f5c3858-hubble-tls\") on node \"10.244.10.2\" DevicePath \"\""
Jan 13 22:48:55.111291 kubelet[1923]: I0113 22:48:55.111308 1923 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1be9a641-8a33-4b43-8027-7ef38f5c3858-hostproc\") on node \"10.244.10.2\" DevicePath \"\""
Jan 13 22:48:55.111656 kubelet[1923]: I0113 22:48:55.111324 1923 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1be9a641-8a33-4b43-8027-7ef38f5c3858-bpf-maps\") on node \"10.244.10.2\" DevicePath \"\""
Jan 13 22:48:55.111656 kubelet[1923]: I0113 22:48:55.111337 1923 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1be9a641-8a33-4b43-8027-7ef38f5c3858-host-proc-sys-net\") on node \"10.244.10.2\" DevicePath \"\""
Jan 13 22:48:55.111656 kubelet[1923]: I0113 22:48:55.111352 1923 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1be9a641-8a33-4b43-8027-7ef38f5c3858-etc-cni-netd\") on node \"10.244.10.2\" DevicePath \"\""
Jan 13 22:48:55.111656 kubelet[1923]: I0113 22:48:55.111364 1923 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1be9a641-8a33-4b43-8027-7ef38f5c3858-lib-modules\") on node \"10.244.10.2\" DevicePath \"\""
Jan 13 22:48:55.111656 kubelet[1923]: I0113 22:48:55.111378 1923 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-v65b5\" (UniqueName: \"kubernetes.io/projected/1be9a641-8a33-4b43-8027-7ef38f5c3858-kube-api-access-v65b5\") on node \"10.244.10.2\" DevicePath \"\""
Jan 13 22:48:55.111656 kubelet[1923]: I0113 22:48:55.111393 1923 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1be9a641-8a33-4b43-8027-7ef38f5c3858-cilium-config-path\") on node \"10.244.10.2\" DevicePath \"\""
Jan 13 22:48:55.111656 kubelet[1923]: I0113 22:48:55.111406 1923 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1be9a641-8a33-4b43-8027-7ef38f5c3858-cni-path\") on node \"10.244.10.2\" DevicePath \"\""
Jan 13 22:48:55.111656 kubelet[1923]: I0113 22:48:55.111420 1923 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1be9a641-8a33-4b43-8027-7ef38f5c3858-cilium-cgroup\") on node \"10.244.10.2\" DevicePath \"\""
Jan 13 22:48:55.452458 kubelet[1923]: E0113 22:48:55.452347 1923 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 22:48:55.546651 systemd[1]: Removed slice kubepods-burstable-pod1be9a641_8a33_4b43_8027_7ef38f5c3858.slice - libcontainer container kubepods-burstable-pod1be9a641_8a33_4b43_8027_7ef38f5c3858.slice.
Jan 13 22:48:55.546906 systemd[1]: kubepods-burstable-pod1be9a641_8a33_4b43_8027_7ef38f5c3858.slice: Consumed 10.418s CPU time.
Jan 13 22:48:55.600359 systemd[1]: var-lib-kubelet-pods-1be9a641\x2d8a33\x2d4b43\x2d8027\x2d7ef38f5c3858-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dv65b5.mount: Deactivated successfully.
Jan 13 22:48:55.600516 systemd[1]: var-lib-kubelet-pods-1be9a641\x2d8a33\x2d4b43\x2d8027\x2d7ef38f5c3858-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Jan 13 22:48:55.773186 kubelet[1923]: I0113 22:48:55.772695 1923 scope.go:117] "RemoveContainer" containerID="254e610ca880cdef0e6fbe8fe633605ac015af93bc95e23b8594d7a048558c59"
Jan 13 22:48:55.774895 containerd[1501]: time="2025-01-13T22:48:55.774578140Z" level=info msg="RemoveContainer for \"254e610ca880cdef0e6fbe8fe633605ac015af93bc95e23b8594d7a048558c59\""
Jan 13 22:48:55.781720 containerd[1501]: time="2025-01-13T22:48:55.781477114Z" level=info msg="RemoveContainer for \"254e610ca880cdef0e6fbe8fe633605ac015af93bc95e23b8594d7a048558c59\" returns successfully"
Jan 13 22:48:55.782550 kubelet[1923]: I0113 22:48:55.781753 1923 scope.go:117] "RemoveContainer" containerID="cf8fa3d2bd54a491ef11040cc03ce8f76251c23d93718aa88cf6867924b73e2c"
Jan 13 22:48:55.783247 containerd[1501]: time="2025-01-13T22:48:55.783143955Z" level=info msg="RemoveContainer for \"cf8fa3d2bd54a491ef11040cc03ce8f76251c23d93718aa88cf6867924b73e2c\""
Jan 13 22:48:55.786825 containerd[1501]: time="2025-01-13T22:48:55.786789418Z" level=info msg="RemoveContainer for \"cf8fa3d2bd54a491ef11040cc03ce8f76251c23d93718aa88cf6867924b73e2c\" returns successfully"
Jan 13 22:48:55.786994 kubelet[1923]: I0113 22:48:55.786968 1923 scope.go:117] "RemoveContainer" containerID="7e1db27987096b6ccd14dc5ae581c22125e077bcb01fcda0b543d020e17b145f"
Jan 13 22:48:55.790716 containerd[1501]: time="2025-01-13T22:48:55.790338201Z" level=info msg="RemoveContainer for \"7e1db27987096b6ccd14dc5ae581c22125e077bcb01fcda0b543d020e17b145f\""
Jan 13 22:48:55.799726 containerd[1501]: time="2025-01-13T22:48:55.799673087Z" level=info msg="RemoveContainer for \"7e1db27987096b6ccd14dc5ae581c22125e077bcb01fcda0b543d020e17b145f\" returns successfully"
Jan 13 22:48:55.800393 kubelet[1923]: I0113 22:48:55.800277 1923 scope.go:117] "RemoveContainer" containerID="3cbf4016fe4ddd2d32ca1515631837ecea6fe4897dd47fc0ac8c573b7b9257b9"
Jan 13 22:48:55.802586 containerd[1501]: time="2025-01-13T22:48:55.802266739Z" level=info msg="RemoveContainer for \"3cbf4016fe4ddd2d32ca1515631837ecea6fe4897dd47fc0ac8c573b7b9257b9\""
Jan 13 22:48:55.805028 containerd[1501]: time="2025-01-13T22:48:55.804995927Z" level=info msg="RemoveContainer for \"3cbf4016fe4ddd2d32ca1515631837ecea6fe4897dd47fc0ac8c573b7b9257b9\" returns successfully"
Jan 13 22:48:55.805324 kubelet[1923]: I0113 22:48:55.805299 1923 scope.go:117] "RemoveContainer" containerID="993feed854c3907da36cb2b5321c1f03fab9210eaf034db1393cbbd3d0f168e7"
Jan 13 22:48:55.807069 containerd[1501]: time="2025-01-13T22:48:55.806946317Z" level=info msg="RemoveContainer for \"993feed854c3907da36cb2b5321c1f03fab9210eaf034db1393cbbd3d0f168e7\""
Jan 13 22:48:55.809808 containerd[1501]: time="2025-01-13T22:48:55.809743721Z" level=info msg="RemoveContainer for \"993feed854c3907da36cb2b5321c1f03fab9210eaf034db1393cbbd3d0f168e7\" returns successfully"
Jan 13 22:48:55.810240 kubelet[1923]: I0113 22:48:55.810110 1923 scope.go:117] "RemoveContainer" containerID="254e610ca880cdef0e6fbe8fe633605ac015af93bc95e23b8594d7a048558c59"
Jan 13 22:48:55.810760 containerd[1501]: time="2025-01-13T22:48:55.810363296Z" level=error msg="ContainerStatus for \"254e610ca880cdef0e6fbe8fe633605ac015af93bc95e23b8594d7a048558c59\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"254e610ca880cdef0e6fbe8fe633605ac015af93bc95e23b8594d7a048558c59\": not found"
Jan 13 22:48:55.821629 kubelet[1923]: E0113 22:48:55.821566 1923 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"254e610ca880cdef0e6fbe8fe633605ac015af93bc95e23b8594d7a048558c59\": not found" containerID="254e610ca880cdef0e6fbe8fe633605ac015af93bc95e23b8594d7a048558c59"
Jan 13 22:48:55.821786 kubelet[1923]: I0113 22:48:55.821642 1923 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"254e610ca880cdef0e6fbe8fe633605ac015af93bc95e23b8594d7a048558c59"} err="failed to get container status \"254e610ca880cdef0e6fbe8fe633605ac015af93bc95e23b8594d7a048558c59\": rpc error: code = NotFound desc = an error occurred when try to find container \"254e610ca880cdef0e6fbe8fe633605ac015af93bc95e23b8594d7a048558c59\": not found"
Jan 13 22:48:55.821786 kubelet[1923]: I0113 22:48:55.821767 1923 scope.go:117] "RemoveContainer" containerID="cf8fa3d2bd54a491ef11040cc03ce8f76251c23d93718aa88cf6867924b73e2c"
Jan 13 22:48:55.822237 containerd[1501]: time="2025-01-13T22:48:55.822147439Z" level=error msg="ContainerStatus for \"cf8fa3d2bd54a491ef11040cc03ce8f76251c23d93718aa88cf6867924b73e2c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cf8fa3d2bd54a491ef11040cc03ce8f76251c23d93718aa88cf6867924b73e2c\": not found"
Jan 13 22:48:55.822725 containerd[1501]: time="2025-01-13T22:48:55.822631334Z" level=error msg="ContainerStatus for \"7e1db27987096b6ccd14dc5ae581c22125e077bcb01fcda0b543d020e17b145f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7e1db27987096b6ccd14dc5ae581c22125e077bcb01fcda0b543d020e17b145f\": not found"
Jan 13 22:48:55.822808 kubelet[1923]: E0113 22:48:55.822374 1923 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cf8fa3d2bd54a491ef11040cc03ce8f76251c23d93718aa88cf6867924b73e2c\": not found" containerID="cf8fa3d2bd54a491ef11040cc03ce8f76251c23d93718aa88cf6867924b73e2c"
Jan 13 22:48:55.822808 kubelet[1923]: I0113 22:48:55.822406 1923 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"cf8fa3d2bd54a491ef11040cc03ce8f76251c23d93718aa88cf6867924b73e2c"} err="failed to get container status \"cf8fa3d2bd54a491ef11040cc03ce8f76251c23d93718aa88cf6867924b73e2c\": rpc error: code = NotFound desc = an error occurred when try to find container \"cf8fa3d2bd54a491ef11040cc03ce8f76251c23d93718aa88cf6867924b73e2c\": not found"
Jan 13 22:48:55.822808 kubelet[1923]: I0113 22:48:55.822428 1923 scope.go:117] "RemoveContainer" containerID="7e1db27987096b6ccd14dc5ae581c22125e077bcb01fcda0b543d020e17b145f"
Jan 13 22:48:55.823424 kubelet[1923]: E0113 22:48:55.823118 1923 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7e1db27987096b6ccd14dc5ae581c22125e077bcb01fcda0b543d020e17b145f\": not found" containerID="7e1db27987096b6ccd14dc5ae581c22125e077bcb01fcda0b543d020e17b145f"
Jan 13 22:48:55.823424 kubelet[1923]: I0113 22:48:55.823217 1923 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7e1db27987096b6ccd14dc5ae581c22125e077bcb01fcda0b543d020e17b145f"} err="failed to get container status \"7e1db27987096b6ccd14dc5ae581c22125e077bcb01fcda0b543d020e17b145f\": rpc error: code = NotFound desc = an error occurred when try to find container \"7e1db27987096b6ccd14dc5ae581c22125e077bcb01fcda0b543d020e17b145f\": not found"
Jan 13 22:48:55.823424 kubelet[1923]: I0113 22:48:55.823275 1923 scope.go:117] "RemoveContainer" containerID="3cbf4016fe4ddd2d32ca1515631837ecea6fe4897dd47fc0ac8c573b7b9257b9"
Jan 13 22:48:55.823634 containerd[1501]: time="2025-01-13T22:48:55.823562774Z" level=error msg="ContainerStatus for \"3cbf4016fe4ddd2d32ca1515631837ecea6fe4897dd47fc0ac8c573b7b9257b9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3cbf4016fe4ddd2d32ca1515631837ecea6fe4897dd47fc0ac8c573b7b9257b9\": not found"
Jan 13 22:48:55.823813 kubelet[1923]: E0113 22:48:55.823744 1923 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3cbf4016fe4ddd2d32ca1515631837ecea6fe4897dd47fc0ac8c573b7b9257b9\": not found" containerID="3cbf4016fe4ddd2d32ca1515631837ecea6fe4897dd47fc0ac8c573b7b9257b9"
Jan 13 22:48:55.823898 kubelet[1923]: I0113 22:48:55.823822 1923 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3cbf4016fe4ddd2d32ca1515631837ecea6fe4897dd47fc0ac8c573b7b9257b9"} err="failed to get container status \"3cbf4016fe4ddd2d32ca1515631837ecea6fe4897dd47fc0ac8c573b7b9257b9\": rpc error: code = NotFound desc = an error occurred when try to find container \"3cbf4016fe4ddd2d32ca1515631837ecea6fe4897dd47fc0ac8c573b7b9257b9\": not found"
Jan 13 22:48:55.823898 kubelet[1923]: I0113 22:48:55.823856 1923 scope.go:117] "RemoveContainer" containerID="993feed854c3907da36cb2b5321c1f03fab9210eaf034db1393cbbd3d0f168e7"
Jan 13 22:48:55.824421 containerd[1501]: time="2025-01-13T22:48:55.824176094Z" level=error msg="ContainerStatus for \"993feed854c3907da36cb2b5321c1f03fab9210eaf034db1393cbbd3d0f168e7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"993feed854c3907da36cb2b5321c1f03fab9210eaf034db1393cbbd3d0f168e7\": not found"
Jan 13 22:48:55.824520 kubelet[1923]: E0113 22:48:55.824334 1923 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"993feed854c3907da36cb2b5321c1f03fab9210eaf034db1393cbbd3d0f168e7\": not found" containerID="993feed854c3907da36cb2b5321c1f03fab9210eaf034db1393cbbd3d0f168e7"
Jan 13 22:48:55.824520 kubelet[1923]: I0113 22:48:55.824370 1923 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"993feed854c3907da36cb2b5321c1f03fab9210eaf034db1393cbbd3d0f168e7"} err="failed to get container status \"993feed854c3907da36cb2b5321c1f03fab9210eaf034db1393cbbd3d0f168e7\": rpc error: code = NotFound desc = an error occurred when try to find container \"993feed854c3907da36cb2b5321c1f03fab9210eaf034db1393cbbd3d0f168e7\": not found"
Jan 13 22:48:56.453149 kubelet[1923]: E0113 22:48:56.453066 1923 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 22:48:57.454253 kubelet[1923]: E0113 22:48:57.454158 1923 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 22:48:57.515079 kubelet[1923]: I0113 22:48:57.514220 1923 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1be9a641-8a33-4b43-8027-7ef38f5c3858" path="/var/lib/kubelet/pods/1be9a641-8a33-4b43-8027-7ef38f5c3858/volumes"
Jan 13 22:48:58.455388 kubelet[1923]: E0113 22:48:58.455261 1923 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 22:48:59.120793 kubelet[1923]: I0113 22:48:59.120699 1923 topology_manager.go:215] "Topology Admit Handler" podUID="6c16a7fa-73a1-4085-88b9-babf912b3896" podNamespace="kube-system" podName="cilium-operator-599987898-29wz6"
Jan 13 22:48:59.121128 kubelet[1923]: E0113 22:48:59.120845 1923 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1be9a641-8a33-4b43-8027-7ef38f5c3858" containerName="cilium-agent"
Jan 13 22:48:59.121128 kubelet[1923]: E0113 22:48:59.120874 1923 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1be9a641-8a33-4b43-8027-7ef38f5c3858" containerName="mount-bpf-fs"
Jan 13 22:48:59.121128 kubelet[1923]: E0113 22:48:59.120897 1923 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1be9a641-8a33-4b43-8027-7ef38f5c3858" containerName="clean-cilium-state"
Jan 13 22:48:59.121128 kubelet[1923]: E0113 22:48:59.120911 1923 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1be9a641-8a33-4b43-8027-7ef38f5c3858" containerName="mount-cgroup"
Jan 13 22:48:59.121128 kubelet[1923]: E0113 22:48:59.120924 1923 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1be9a641-8a33-4b43-8027-7ef38f5c3858" containerName="apply-sysctl-overwrites"
Jan 13 22:48:59.121128 kubelet[1923]: I0113 22:48:59.121059 1923 memory_manager.go:354] "RemoveStaleState removing state" podUID="1be9a641-8a33-4b43-8027-7ef38f5c3858" containerName="cilium-agent"
Jan 13 22:48:59.130300 systemd[1]: Created slice kubepods-besteffort-pod6c16a7fa_73a1_4085_88b9_babf912b3896.slice - libcontainer container kubepods-besteffort-pod6c16a7fa_73a1_4085_88b9_babf912b3896.slice.
Jan 13 22:48:59.138086 kubelet[1923]: I0113 22:48:59.137002 1923 topology_manager.go:215] "Topology Admit Handler" podUID="77d77526-7e2f-49d4-9145-6e0ba7d42992" podNamespace="kube-system" podName="cilium-pf7j5"
Jan 13 22:48:59.145708 systemd[1]: Created slice kubepods-burstable-pod77d77526_7e2f_49d4_9145_6e0ba7d42992.slice - libcontainer container kubepods-burstable-pod77d77526_7e2f_49d4_9145_6e0ba7d42992.slice.
Jan 13 22:48:59.240977 kubelet[1923]: I0113 22:48:59.240723 1923 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/77d77526-7e2f-49d4-9145-6e0ba7d42992-cilium-run\") pod \"cilium-pf7j5\" (UID: \"77d77526-7e2f-49d4-9145-6e0ba7d42992\") " pod="kube-system/cilium-pf7j5"
Jan 13 22:48:59.240977 kubelet[1923]: I0113 22:48:59.240787 1923 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/77d77526-7e2f-49d4-9145-6e0ba7d42992-etc-cni-netd\") pod \"cilium-pf7j5\" (UID: \"77d77526-7e2f-49d4-9145-6e0ba7d42992\") " pod="kube-system/cilium-pf7j5"
Jan 13 22:48:59.240977 kubelet[1923]: I0113 22:48:59.240832 1923 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/77d77526-7e2f-49d4-9145-6e0ba7d42992-xtables-lock\") pod \"cilium-pf7j5\" (UID: \"77d77526-7e2f-49d4-9145-6e0ba7d42992\") " pod="kube-system/cilium-pf7j5"
Jan 13 22:48:59.240977 kubelet[1923]: I0113 22:48:59.240902 1923 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/77d77526-7e2f-49d4-9145-6e0ba7d42992-cni-path\") pod \"cilium-pf7j5\" (UID: \"77d77526-7e2f-49d4-9145-6e0ba7d42992\") " pod="kube-system/cilium-pf7j5"
Jan 13 22:48:59.241364 kubelet[1923]: I0113 22:48:59.241090 1923 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6c16a7fa-73a1-4085-88b9-babf912b3896-cilium-config-path\") pod \"cilium-operator-599987898-29wz6\" (UID: \"6c16a7fa-73a1-4085-88b9-babf912b3896\") " pod="kube-system/cilium-operator-599987898-29wz6"
Jan 13 22:48:59.241364 kubelet[1923]: I0113 22:48:59.241199 1923 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4bv8\" (UniqueName: \"kubernetes.io/projected/6c16a7fa-73a1-4085-88b9-babf912b3896-kube-api-access-m4bv8\") pod \"cilium-operator-599987898-29wz6\" (UID: \"6c16a7fa-73a1-4085-88b9-babf912b3896\") " pod="kube-system/cilium-operator-599987898-29wz6"
Jan 13 22:48:59.241364 kubelet[1923]: I0113 22:48:59.241301 1923 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/77d77526-7e2f-49d4-9145-6e0ba7d42992-bpf-maps\") pod \"cilium-pf7j5\" (UID: \"77d77526-7e2f-49d4-9145-6e0ba7d42992\") " pod="kube-system/cilium-pf7j5"
Jan 13 22:48:59.241540 kubelet[1923]: I0113 22:48:59.241378 1923 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/77d77526-7e2f-49d4-9145-6e0ba7d42992-hubble-tls\") pod \"cilium-pf7j5\" (UID: \"77d77526-7e2f-49d4-9145-6e0ba7d42992\") " pod="kube-system/cilium-pf7j5"
Jan 13 22:48:59.241540 kubelet[1923]: I0113 22:48:59.241449 1923 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/77d77526-7e2f-49d4-9145-6e0ba7d42992-hostproc\") pod \"cilium-pf7j5\" (UID: \"77d77526-7e2f-49d4-9145-6e0ba7d42992\") " pod="kube-system/cilium-pf7j5"
Jan 13 22:48:59.241540 kubelet[1923]: I0113 22:48:59.241532 1923 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/77d77526-7e2f-49d4-9145-6e0ba7d42992-lib-modules\") pod \"cilium-pf7j5\" (UID: \"77d77526-7e2f-49d4-9145-6e0ba7d42992\") " pod="kube-system/cilium-pf7j5"
Jan 13 22:48:59.241729 kubelet[1923]: I0113 22:48:59.241574 1923 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/77d77526-7e2f-49d4-9145-6e0ba7d42992-host-proc-sys-net\") pod \"cilium-pf7j5\" (UID: \"77d77526-7e2f-49d4-9145-6e0ba7d42992\") " pod="kube-system/cilium-pf7j5"
Jan 13 22:48:59.241729 kubelet[1923]: I0113 22:48:59.241630 1923 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/77d77526-7e2f-49d4-9145-6e0ba7d42992-host-proc-sys-kernel\") pod \"cilium-pf7j5\" (UID: \"77d77526-7e2f-49d4-9145-6e0ba7d42992\") " pod="kube-system/cilium-pf7j5"
Jan 13 22:48:59.241729 kubelet[1923]: I0113 22:48:59.241663 1923 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76kph\" (UniqueName: \"kubernetes.io/projected/77d77526-7e2f-49d4-9145-6e0ba7d42992-kube-api-access-76kph\") pod \"cilium-pf7j5\" (UID: \"77d77526-7e2f-49d4-9145-6e0ba7d42992\") " pod="kube-system/cilium-pf7j5"
Jan 13 22:48:59.241918 kubelet[1923]: I0113 22:48:59.241747 1923 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/77d77526-7e2f-49d4-9145-6e0ba7d42992-cilium-cgroup\") pod \"cilium-pf7j5\" (UID: \"77d77526-7e2f-49d4-9145-6e0ba7d42992\") " pod="kube-system/cilium-pf7j5"
Jan 13 22:48:59.241918 kubelet[1923]: I0113 22:48:59.241808 1923 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/77d77526-7e2f-49d4-9145-6e0ba7d42992-clustermesh-secrets\") pod \"cilium-pf7j5\" (UID: \"77d77526-7e2f-49d4-9145-6e0ba7d42992\") " pod="kube-system/cilium-pf7j5"
Jan 13 22:48:59.241918 kubelet[1923]: I0113 22:48:59.241897 1923 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/77d77526-7e2f-49d4-9145-6e0ba7d42992-cilium-config-path\") pod \"cilium-pf7j5\" (UID: \"77d77526-7e2f-49d4-9145-6e0ba7d42992\") " pod="kube-system/cilium-pf7j5"
Jan 13 22:48:59.242158 kubelet[1923]: I0113 22:48:59.241949 1923 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/77d77526-7e2f-49d4-9145-6e0ba7d42992-cilium-ipsec-secrets\") pod \"cilium-pf7j5\" (UID: \"77d77526-7e2f-49d4-9145-6e0ba7d42992\") " pod="kube-system/cilium-pf7j5"
Jan 13 22:48:59.383287 kubelet[1923]: E0113 22:48:59.381847 1923 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 22:48:59.437074 containerd[1501]: time="2025-01-13T22:48:59.436984107Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-29wz6,Uid:6c16a7fa-73a1-4085-88b9-babf912b3896,Namespace:kube-system,Attempt:0,}"
Jan 13 22:48:59.456150 kubelet[1923]: E0113 22:48:59.456102 1923 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 22:48:59.457531 containerd[1501]: time="2025-01-13T22:48:59.457060692Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pf7j5,Uid:77d77526-7e2f-49d4-9145-6e0ba7d42992,Namespace:kube-system,Attempt:0,}"
Jan 13 22:48:59.476893 containerd[1501]: time="2025-01-13T22:48:59.476724257Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 22:48:59.477097 containerd[1501]: time="2025-01-13T22:48:59.476939810Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 22:48:59.477184 containerd[1501]: time="2025-01-13T22:48:59.477067718Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 22:48:59.477545 containerd[1501]: time="2025-01-13T22:48:59.477380912Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 22:48:59.504033 containerd[1501]: time="2025-01-13T22:48:59.501976281Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 22:48:59.504033 containerd[1501]: time="2025-01-13T22:48:59.503382218Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 22:48:59.504033 containerd[1501]: time="2025-01-13T22:48:59.503414097Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 22:48:59.504033 containerd[1501]: time="2025-01-13T22:48:59.503564628Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 22:48:59.508374 systemd[1]: Started cri-containerd-74c5154dc51c01692d2349628086931473ee80f67486837b3bd96111bab62f88.scope - libcontainer container 74c5154dc51c01692d2349628086931473ee80f67486837b3bd96111bab62f88.
Jan 13 22:48:59.547280 systemd[1]: Started cri-containerd-14b88ea9a67ea115707ce9b3788ae3ff7fe46c280d147ade5e65b257607eb623.scope - libcontainer container 14b88ea9a67ea115707ce9b3788ae3ff7fe46c280d147ade5e65b257607eb623.
Jan 13 22:48:59.549014 kubelet[1923]: E0113 22:48:59.547938 1923 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 13 22:48:59.608028 containerd[1501]: time="2025-01-13T22:48:59.607619499Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pf7j5,Uid:77d77526-7e2f-49d4-9145-6e0ba7d42992,Namespace:kube-system,Attempt:0,} returns sandbox id \"14b88ea9a67ea115707ce9b3788ae3ff7fe46c280d147ade5e65b257607eb623\""
Jan 13 22:48:59.612210 containerd[1501]: time="2025-01-13T22:48:59.611802620Z" level=info msg="CreateContainer within sandbox \"14b88ea9a67ea115707ce9b3788ae3ff7fe46c280d147ade5e65b257607eb623\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 13 22:48:59.616002 containerd[1501]: time="2025-01-13T22:48:59.615802753Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-29wz6,Uid:6c16a7fa-73a1-4085-88b9-babf912b3896,Namespace:kube-system,Attempt:0,} returns sandbox id \"74c5154dc51c01692d2349628086931473ee80f67486837b3bd96111bab62f88\""
Jan 13 22:48:59.618707 containerd[1501]: time="2025-01-13T22:48:59.618660131Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Jan 13 22:48:59.652454 containerd[1501]: time="2025-01-13T22:48:59.651466371Z" level=info msg="CreateContainer within sandbox \"14b88ea9a67ea115707ce9b3788ae3ff7fe46c280d147ade5e65b257607eb623\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"0815ca92544ea769a4b32a97a074bdb39242b4c8d524c9b5b66f7e2dceb96173\""
Jan 13 22:48:59.653691 containerd[1501]: time="2025-01-13T22:48:59.653645744Z" level=info msg="StartContainer for \"0815ca92544ea769a4b32a97a074bdb39242b4c8d524c9b5b66f7e2dceb96173\""
Jan 13 22:48:59.694437 systemd[1]: Started cri-containerd-0815ca92544ea769a4b32a97a074bdb39242b4c8d524c9b5b66f7e2dceb96173.scope - libcontainer container 0815ca92544ea769a4b32a97a074bdb39242b4c8d524c9b5b66f7e2dceb96173.
Jan 13 22:48:59.740440 containerd[1501]: time="2025-01-13T22:48:59.740358988Z" level=info msg="StartContainer for \"0815ca92544ea769a4b32a97a074bdb39242b4c8d524c9b5b66f7e2dceb96173\" returns successfully"
Jan 13 22:48:59.758678 systemd[1]: cri-containerd-0815ca92544ea769a4b32a97a074bdb39242b4c8d524c9b5b66f7e2dceb96173.scope: Deactivated successfully.
Jan 13 22:48:59.811961 containerd[1501]: time="2025-01-13T22:48:59.811537888Z" level=info msg="shim disconnected" id=0815ca92544ea769a4b32a97a074bdb39242b4c8d524c9b5b66f7e2dceb96173 namespace=k8s.io
Jan 13 22:48:59.811961 containerd[1501]: time="2025-01-13T22:48:59.811672351Z" level=warning msg="cleaning up after shim disconnected" id=0815ca92544ea769a4b32a97a074bdb39242b4c8d524c9b5b66f7e2dceb96173 namespace=k8s.io
Jan 13 22:48:59.811961 containerd[1501]: time="2025-01-13T22:48:59.811688726Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 22:49:00.457171 kubelet[1923]: E0113 22:49:00.457094 1923 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 22:49:00.805163 containerd[1501]: time="2025-01-13T22:49:00.805006216Z" level=info msg="CreateContainer within sandbox \"14b88ea9a67ea115707ce9b3788ae3ff7fe46c280d147ade5e65b257607eb623\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 13 22:49:00.821693 containerd[1501]: time="2025-01-13T22:49:00.821616617Z" level=info msg="CreateContainer within sandbox \"14b88ea9a67ea115707ce9b3788ae3ff7fe46c280d147ade5e65b257607eb623\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"c873b30e68da7709cc40a9c30d908eb97d34a02ae15b83893a686848befd8b6f\""
Jan 13 22:49:00.823019 containerd[1501]: time="2025-01-13T22:49:00.822961390Z" level=info msg="StartContainer for \"c873b30e68da7709cc40a9c30d908eb97d34a02ae15b83893a686848befd8b6f\""
Jan 13 22:49:00.872464 systemd[1]: Started cri-containerd-c873b30e68da7709cc40a9c30d908eb97d34a02ae15b83893a686848befd8b6f.scope - libcontainer container c873b30e68da7709cc40a9c30d908eb97d34a02ae15b83893a686848befd8b6f.
Jan 13 22:49:00.916380 containerd[1501]: time="2025-01-13T22:49:00.916289943Z" level=info msg="StartContainer for \"c873b30e68da7709cc40a9c30d908eb97d34a02ae15b83893a686848befd8b6f\" returns successfully"
Jan 13 22:49:00.932391 systemd[1]: cri-containerd-c873b30e68da7709cc40a9c30d908eb97d34a02ae15b83893a686848befd8b6f.scope: Deactivated successfully.
Jan 13 22:49:00.944532 kubelet[1923]: I0113 22:49:00.944450 1923 setters.go:580] "Node became not ready" node="10.244.10.2" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-01-13T22:49:00Z","lastTransitionTime":"2025-01-13T22:49:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jan 13 22:49:00.975872 containerd[1501]: time="2025-01-13T22:49:00.975550993Z" level=info msg="shim disconnected" id=c873b30e68da7709cc40a9c30d908eb97d34a02ae15b83893a686848befd8b6f namespace=k8s.io
Jan 13 22:49:00.975872 containerd[1501]: time="2025-01-13T22:49:00.975649595Z" level=warning msg="cleaning up after shim disconnected" id=c873b30e68da7709cc40a9c30d908eb97d34a02ae15b83893a686848befd8b6f namespace=k8s.io
Jan 13 22:49:00.975872 containerd[1501]: time="2025-01-13T22:49:00.975665305Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 22:49:01.353673 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c873b30e68da7709cc40a9c30d908eb97d34a02ae15b83893a686848befd8b6f-rootfs.mount: Deactivated successfully.
Jan 13 22:49:01.457424 kubelet[1923]: E0113 22:49:01.457341 1923 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 22:49:01.811725 containerd[1501]: time="2025-01-13T22:49:01.810411395Z" level=info msg="CreateContainer within sandbox \"14b88ea9a67ea115707ce9b3788ae3ff7fe46c280d147ade5e65b257607eb623\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 13 22:49:01.837377 containerd[1501]: time="2025-01-13T22:49:01.837176830Z" level=info msg="CreateContainer within sandbox \"14b88ea9a67ea115707ce9b3788ae3ff7fe46c280d147ade5e65b257607eb623\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"4b9315baf32f99b5b502df14934b17a41d2215f7367c0aadb47234c84dc2eb6d\""
Jan 13 22:49:01.839733 containerd[1501]: time="2025-01-13T22:49:01.837781650Z" level=info msg="StartContainer for \"4b9315baf32f99b5b502df14934b17a41d2215f7367c0aadb47234c84dc2eb6d\""
Jan 13 22:49:01.884342 systemd[1]: Started cri-containerd-4b9315baf32f99b5b502df14934b17a41d2215f7367c0aadb47234c84dc2eb6d.scope - libcontainer container 4b9315baf32f99b5b502df14934b17a41d2215f7367c0aadb47234c84dc2eb6d.
Jan 13 22:49:01.928888 containerd[1501]: time="2025-01-13T22:49:01.928706757Z" level=info msg="StartContainer for \"4b9315baf32f99b5b502df14934b17a41d2215f7367c0aadb47234c84dc2eb6d\" returns successfully"
Jan 13 22:49:01.936239 systemd[1]: cri-containerd-4b9315baf32f99b5b502df14934b17a41d2215f7367c0aadb47234c84dc2eb6d.scope: Deactivated successfully.
Jan 13 22:49:01.970408 containerd[1501]: time="2025-01-13T22:49:01.970280250Z" level=info msg="shim disconnected" id=4b9315baf32f99b5b502df14934b17a41d2215f7367c0aadb47234c84dc2eb6d namespace=k8s.io
Jan 13 22:49:01.970641 containerd[1501]: time="2025-01-13T22:49:01.970416364Z" level=warning msg="cleaning up after shim disconnected" id=4b9315baf32f99b5b502df14934b17a41d2215f7367c0aadb47234c84dc2eb6d namespace=k8s.io
Jan 13 22:49:01.970641 containerd[1501]: time="2025-01-13T22:49:01.970439627Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 22:49:02.353391 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4b9315baf32f99b5b502df14934b17a41d2215f7367c0aadb47234c84dc2eb6d-rootfs.mount: Deactivated successfully.
Jan 13 22:49:02.459255 kubelet[1923]: E0113 22:49:02.459060 1923 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 22:49:02.615183 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount331050753.mount: Deactivated successfully.
Jan 13 22:49:02.820468 containerd[1501]: time="2025-01-13T22:49:02.819972625Z" level=info msg="CreateContainer within sandbox \"14b88ea9a67ea115707ce9b3788ae3ff7fe46c280d147ade5e65b257607eb623\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 13 22:49:02.851741 containerd[1501]: time="2025-01-13T22:49:02.851023011Z" level=info msg="CreateContainer within sandbox \"14b88ea9a67ea115707ce9b3788ae3ff7fe46c280d147ade5e65b257607eb623\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"f2f25421f5e29f4f6bcbcd74677f25ad2ca6c68629ea317243fe65f746fe7bf7\""
Jan 13 22:49:02.852002 containerd[1501]: time="2025-01-13T22:49:02.851850865Z" level=info msg="StartContainer for \"f2f25421f5e29f4f6bcbcd74677f25ad2ca6c68629ea317243fe65f746fe7bf7\""
Jan 13 22:49:02.921439 systemd[1]: Started cri-containerd-f2f25421f5e29f4f6bcbcd74677f25ad2ca6c68629ea317243fe65f746fe7bf7.scope - libcontainer container f2f25421f5e29f4f6bcbcd74677f25ad2ca6c68629ea317243fe65f746fe7bf7.
Jan 13 22:49:02.977132 systemd[1]: cri-containerd-f2f25421f5e29f4f6bcbcd74677f25ad2ca6c68629ea317243fe65f746fe7bf7.scope: Deactivated successfully.
Jan 13 22:49:02.983769 containerd[1501]: time="2025-01-13T22:49:02.983494525Z" level=info msg="StartContainer for \"f2f25421f5e29f4f6bcbcd74677f25ad2ca6c68629ea317243fe65f746fe7bf7\" returns successfully"
Jan 13 22:49:03.083821 containerd[1501]: time="2025-01-13T22:49:03.082157379Z" level=info msg="shim disconnected" id=f2f25421f5e29f4f6bcbcd74677f25ad2ca6c68629ea317243fe65f746fe7bf7 namespace=k8s.io
Jan 13 22:49:03.083821 containerd[1501]: time="2025-01-13T22:49:03.082253543Z" level=warning msg="cleaning up after shim disconnected" id=f2f25421f5e29f4f6bcbcd74677f25ad2ca6c68629ea317243fe65f746fe7bf7 namespace=k8s.io
Jan 13 22:49:03.083821 containerd[1501]: time="2025-01-13T22:49:03.082269972Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 22:49:03.111321 containerd[1501]: time="2025-01-13T22:49:03.111252521Z" level=warning msg="cleanup warnings time=\"2025-01-13T22:49:03Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jan 13 22:49:03.461067 kubelet[1923]: E0113 22:49:03.460960 1923 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 22:49:03.531925 containerd[1501]: time="2025-01-13T22:49:03.530965263Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 22:49:03.533053 containerd[1501]: time="2025-01-13T22:49:03.532967057Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18907241"
Jan 13 22:49:03.534073 containerd[1501]: time="2025-01-13T22:49:03.534019202Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 22:49:03.537411 containerd[1501]: time="2025-01-13T22:49:03.537150666Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 3.918286143s"
Jan 13 22:49:03.537411 containerd[1501]: time="2025-01-13T22:49:03.537213173Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Jan 13 22:49:03.541062 containerd[1501]: time="2025-01-13T22:49:03.540772046Z" level=info msg="CreateContainer within sandbox \"74c5154dc51c01692d2349628086931473ee80f67486837b3bd96111bab62f88\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Jan 13 22:49:03.557024 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount659897851.mount: Deactivated successfully.
Jan 13 22:49:03.558649 containerd[1501]: time="2025-01-13T22:49:03.558265199Z" level=info msg="CreateContainer within sandbox \"74c5154dc51c01692d2349628086931473ee80f67486837b3bd96111bab62f88\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"989cfaaed82c110999b054c996cc8bfda111275b60fe77657a4acfdffa599a98\""
Jan 13 22:49:03.559234 containerd[1501]: time="2025-01-13T22:49:03.559190598Z" level=info msg="StartContainer for \"989cfaaed82c110999b054c996cc8bfda111275b60fe77657a4acfdffa599a98\""
Jan 13 22:49:03.611255 systemd[1]: Started cri-containerd-989cfaaed82c110999b054c996cc8bfda111275b60fe77657a4acfdffa599a98.scope - libcontainer container 989cfaaed82c110999b054c996cc8bfda111275b60fe77657a4acfdffa599a98.
Jan 13 22:49:03.646843 containerd[1501]: time="2025-01-13T22:49:03.646707090Z" level=info msg="StartContainer for \"989cfaaed82c110999b054c996cc8bfda111275b60fe77657a4acfdffa599a98\" returns successfully"
Jan 13 22:49:03.826896 containerd[1501]: time="2025-01-13T22:49:03.825972337Z" level=info msg="CreateContainer within sandbox \"14b88ea9a67ea115707ce9b3788ae3ff7fe46c280d147ade5e65b257607eb623\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 13 22:49:03.842390 containerd[1501]: time="2025-01-13T22:49:03.842316476Z" level=info msg="CreateContainer within sandbox \"14b88ea9a67ea115707ce9b3788ae3ff7fe46c280d147ade5e65b257607eb623\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"1676f80dc411a3b1a61f8d16e7b996db27abc24f1837c76e3d4bcd7d01bc32ed\""
Jan 13 22:49:03.844177 containerd[1501]: time="2025-01-13T22:49:03.842968713Z" level=info msg="StartContainer for \"1676f80dc411a3b1a61f8d16e7b996db27abc24f1837c76e3d4bcd7d01bc32ed\""
Jan 13 22:49:03.900267 systemd[1]: Started cri-containerd-1676f80dc411a3b1a61f8d16e7b996db27abc24f1837c76e3d4bcd7d01bc32ed.scope - libcontainer container 1676f80dc411a3b1a61f8d16e7b996db27abc24f1837c76e3d4bcd7d01bc32ed.
Jan 13 22:49:03.944728 containerd[1501]: time="2025-01-13T22:49:03.944658093Z" level=info msg="StartContainer for \"1676f80dc411a3b1a61f8d16e7b996db27abc24f1837c76e3d4bcd7d01bc32ed\" returns successfully"
Jan 13 22:49:04.355672 systemd[1]: run-containerd-runc-k8s.io-989cfaaed82c110999b054c996cc8bfda111275b60fe77657a4acfdffa599a98-runc.5QXdm0.mount: Deactivated successfully.
Jan 13 22:49:04.461669 kubelet[1923]: E0113 22:49:04.461600 1923 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 22:49:04.701112 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Jan 13 22:49:04.902003 kubelet[1923]: I0113 22:49:04.901590 1923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-pf7j5" podStartSLOduration=5.901541748 podStartE2EDuration="5.901541748s" podCreationTimestamp="2025-01-13 22:48:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 22:49:04.895198136 +0000 UTC m=+86.300719955" watchObservedRunningTime="2025-01-13 22:49:04.901541748 +0000 UTC m=+86.307063530"
Jan 13 22:49:04.902003 kubelet[1923]: I0113 22:49:04.901828 1923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-29wz6" podStartSLOduration=1.981068732 podStartE2EDuration="5.901819475s" podCreationTimestamp="2025-01-13 22:48:59 +0000 UTC" firstStartedPulling="2025-01-13 22:48:59.617826314 +0000 UTC m=+81.023348097" lastFinishedPulling="2025-01-13 22:49:03.538577046 +0000 UTC m=+84.944098840" observedRunningTime="2025-01-13 22:49:03.893169668 +0000 UTC m=+85.298691476" watchObservedRunningTime="2025-01-13 22:49:04.901819475 +0000 UTC m=+86.307341260"
Jan 13 22:49:05.462361 kubelet[1923]: E0113 22:49:05.462301 1923 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 22:49:06.463008 kubelet[1923]: E0113 22:49:06.462904 1923 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 22:49:07.273165 systemd[1]: run-containerd-runc-k8s.io-1676f80dc411a3b1a61f8d16e7b996db27abc24f1837c76e3d4bcd7d01bc32ed-runc.3SIxto.mount: Deactivated successfully.
Jan 13 22:49:07.463188 kubelet[1923]: E0113 22:49:07.463094 1923 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 22:49:08.463817 kubelet[1923]: E0113 22:49:08.463696 1923 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 22:49:08.485727 systemd-networkd[1419]: lxc_health: Link UP
Jan 13 22:49:08.493866 systemd-networkd[1419]: lxc_health: Gained carrier
Jan 13 22:49:09.464801 kubelet[1923]: E0113 22:49:09.464695 1923 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 22:49:09.626453 systemd[1]: run-containerd-runc-k8s.io-1676f80dc411a3b1a61f8d16e7b996db27abc24f1837c76e3d4bcd7d01bc32ed-runc.b3SJGP.mount: Deactivated successfully.
Jan 13 22:49:09.756300 systemd-networkd[1419]: lxc_health: Gained IPv6LL
Jan 13 22:49:10.465644 kubelet[1923]: E0113 22:49:10.465565 1923 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 22:49:11.466652 kubelet[1923]: E0113 22:49:11.466521 1923 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 22:49:11.935666 systemd[1]: run-containerd-runc-k8s.io-1676f80dc411a3b1a61f8d16e7b996db27abc24f1837c76e3d4bcd7d01bc32ed-runc.VZc6ZB.mount: Deactivated successfully.
Jan 13 22:49:12.467463 kubelet[1923]: E0113 22:49:12.467362 1923 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 22:49:13.468021 kubelet[1923]: E0113 22:49:13.467930 1923 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 22:49:14.469128 kubelet[1923]: E0113 22:49:14.468561 1923 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 22:49:15.469146 kubelet[1923]: E0113 22:49:15.469012 1923 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 22:49:16.470243 kubelet[1923]: E0113 22:49:16.470165 1923 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 22:49:17.471264 kubelet[1923]: E0113 22:49:17.471187 1923 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 22:49:18.471941 kubelet[1923]: E0113 22:49:18.471854 1923 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"