Jan 17 13:30:23.060539 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Jan 17 10:39:07 -00 2025
Jan 17 13:30:23.060581 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=bf1e0d81a0170850ab02d370c1a7c7a3f5983c980b3730f748240a3bda2dbb2e
Jan 17 13:30:23.060596 kernel: BIOS-provided physical RAM map:
Jan 17 13:30:23.060612 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 17 13:30:23.060622 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 17 13:30:23.060632 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 17 13:30:23.060643 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable
Jan 17 13:30:23.060654 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved
Jan 17 13:30:23.060664 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jan 17 13:30:23.060674 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jan 17 13:30:23.060685 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 17 13:30:23.060695 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 17 13:30:23.060710 kernel: NX (Execute Disable) protection: active
Jan 17 13:30:23.060721 kernel: APIC: Static calls initialized
Jan 17 13:30:23.060734 kernel: SMBIOS 2.8 present.
Jan 17 13:30:23.060746 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014
Jan 17 13:30:23.060757 kernel: Hypervisor detected: KVM
Jan 17 13:30:23.060773 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 17 13:30:23.060785 kernel: kvm-clock: using sched offset of 4349536880 cycles
Jan 17 13:30:23.060797 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 17 13:30:23.060809 kernel: tsc: Detected 2500.032 MHz processor
Jan 17 13:30:23.060821 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 17 13:30:23.060833 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 17 13:30:23.060845 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000
Jan 17 13:30:23.060856 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 17 13:30:23.060868 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 17 13:30:23.060884 kernel: Using GB pages for direct mapping
Jan 17 13:30:23.060896 kernel: ACPI: Early table checksum verification disabled
Jan 17 13:30:23.060908 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Jan 17 13:30:23.060919 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 13:30:23.060931 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 13:30:23.063491 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 13:30:23.063514 kernel: ACPI: FACS 0x000000007FFDFD40 000040
Jan 17 13:30:23.063527 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 13:30:23.063538 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 13:30:23.063558 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 13:30:23.063570 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 13:30:23.063582 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480]
Jan 17 13:30:23.063593 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c]
Jan 17 13:30:23.063605 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f]
Jan 17 13:30:23.063623 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570]
Jan 17 13:30:23.063636 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740]
Jan 17 13:30:23.063653 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c]
Jan 17 13:30:23.063665 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4]
Jan 17 13:30:23.063677 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jan 17 13:30:23.063689 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jan 17 13:30:23.063702 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0
Jan 17 13:30:23.063714 kernel: SRAT: PXM 0 -> APIC 0x03 -> Node 0
Jan 17 13:30:23.063726 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0
Jan 17 13:30:23.063742 kernel: SRAT: PXM 0 -> APIC 0x05 -> Node 0
Jan 17 13:30:23.063755 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0
Jan 17 13:30:23.063767 kernel: SRAT: PXM 0 -> APIC 0x07 -> Node 0
Jan 17 13:30:23.063779 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0
Jan 17 13:30:23.063791 kernel: SRAT: PXM 0 -> APIC 0x09 -> Node 0
Jan 17 13:30:23.063802 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0
Jan 17 13:30:23.063814 kernel: SRAT: PXM 0 -> APIC 0x0b -> Node 0
Jan 17 13:30:23.063826 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0
Jan 17 13:30:23.063838 kernel: SRAT: PXM 0 -> APIC 0x0d -> Node 0
Jan 17 13:30:23.063850 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0
Jan 17 13:30:23.063867 kernel: SRAT: PXM 0 -> APIC 0x0f -> Node 0
Jan 17 13:30:23.063879 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Jan 17 13:30:23.063891 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Jan 17 13:30:23.063903 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug
Jan 17 13:30:23.063915 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00000000-0x7ffdbfff]
Jan 17 13:30:23.063928 kernel: NODE_DATA(0) allocated [mem 0x7ffd6000-0x7ffdbfff]
Jan 17 13:30:23.063940 kernel: Zone ranges:
Jan 17 13:30:23.063971 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 17 13:30:23.063983 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff]
Jan 17 13:30:23.064002 kernel: Normal empty
Jan 17 13:30:23.064014 kernel: Movable zone start for each node
Jan 17 13:30:23.064026 kernel: Early memory node ranges
Jan 17 13:30:23.064038 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 17 13:30:23.064050 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff]
Jan 17 13:30:23.064062 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff]
Jan 17 13:30:23.064075 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 17 13:30:23.064087 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 17 13:30:23.064099 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges
Jan 17 13:30:23.064111 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 17 13:30:23.064128 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 17 13:30:23.064140 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 17 13:30:23.064152 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 17 13:30:23.064164 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 17 13:30:23.064176 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 17 13:30:23.064188 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 17 13:30:23.064200 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 17 13:30:23.064212 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 17 13:30:23.064224 kernel: TSC deadline timer available
Jan 17 13:30:23.064242 kernel: smpboot: Allowing 16 CPUs, 14 hotplug CPUs
Jan 17 13:30:23.064254 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 17 13:30:23.064279 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jan 17 13:30:23.064292 kernel: Booting paravirtualized kernel on KVM
Jan 17 13:30:23.064305 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 17 13:30:23.064317 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1
Jan 17 13:30:23.064329 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u262144
Jan 17 13:30:23.064342 kernel: pcpu-alloc: s197032 r8192 d32344 u262144 alloc=1*2097152
Jan 17 13:30:23.064354 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Jan 17 13:30:23.064372 kernel: kvm-guest: PV spinlocks enabled
Jan 17 13:30:23.064384 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 17 13:30:23.064398 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=bf1e0d81a0170850ab02d370c1a7c7a3f5983c980b3730f748240a3bda2dbb2e
Jan 17 13:30:23.064411 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 17 13:30:23.064422 kernel: random: crng init done
Jan 17 13:30:23.064435 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 17 13:30:23.064447 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 17 13:30:23.064459 kernel: Fallback order for Node 0: 0
Jan 17 13:30:23.064476 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515804
Jan 17 13:30:23.064489 kernel: Policy zone: DMA32
Jan 17 13:30:23.064501 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 17 13:30:23.064513 kernel: software IO TLB: area num 16.
Jan 17 13:30:23.064525 kernel: Memory: 1901528K/2096616K available (12288K kernel code, 2299K rwdata, 22728K rodata, 42848K init, 2344K bss, 194828K reserved, 0K cma-reserved)
Jan 17 13:30:23.064538 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Jan 17 13:30:23.064550 kernel: Kernel/User page tables isolation: enabled
Jan 17 13:30:23.064562 kernel: ftrace: allocating 37918 entries in 149 pages
Jan 17 13:30:23.064574 kernel: ftrace: allocated 149 pages with 4 groups
Jan 17 13:30:23.064591 kernel: Dynamic Preempt: voluntary
Jan 17 13:30:23.064604 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 17 13:30:23.064622 kernel: rcu: RCU event tracing is enabled.
Jan 17 13:30:23.064635 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Jan 17 13:30:23.064648 kernel: Trampoline variant of Tasks RCU enabled.
Jan 17 13:30:23.064673 kernel: Rude variant of Tasks RCU enabled.
Jan 17 13:30:23.064690 kernel: Tracing variant of Tasks RCU enabled.
Jan 17 13:30:23.064703 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 17 13:30:23.064716 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Jan 17 13:30:23.064729 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16
Jan 17 13:30:23.064741 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 17 13:30:23.064759 kernel: Console: colour VGA+ 80x25
Jan 17 13:30:23.064771 kernel: printk: console [tty0] enabled
Jan 17 13:30:23.064784 kernel: printk: console [ttyS0] enabled
Jan 17 13:30:23.064797 kernel: ACPI: Core revision 20230628
Jan 17 13:30:23.064810 kernel: APIC: Switch to symmetric I/O mode setup
Jan 17 13:30:23.064822 kernel: x2apic enabled
Jan 17 13:30:23.064840 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 17 13:30:23.064853 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240957bf147, max_idle_ns: 440795216753 ns
Jan 17 13:30:23.064866 kernel: Calibrating delay loop (skipped) preset value.. 5000.06 BogoMIPS (lpj=2500032)
Jan 17 13:30:23.064879 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 17 13:30:23.064892 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Jan 17 13:30:23.064904 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Jan 17 13:30:23.064917 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 17 13:30:23.064929 kernel: Spectre V2 : Mitigation: Retpolines
Jan 17 13:30:23.064942 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 17 13:30:23.066994 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 17 13:30:23.067008 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Jan 17 13:30:23.067021 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 17 13:30:23.067034 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 17 13:30:23.067046 kernel: MDS: Mitigation: Clear CPU buffers
Jan 17 13:30:23.067058 kernel: MMIO Stale Data: Unknown: No mitigations
Jan 17 13:30:23.067071 kernel: SRBDS: Unknown: Dependent on hypervisor status
Jan 17 13:30:23.067084 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 17 13:30:23.067097 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 17 13:30:23.067109 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 17 13:30:23.067122 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 17 13:30:23.067140 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Jan 17 13:30:23.067153 kernel: Freeing SMP alternatives memory: 32K
Jan 17 13:30:23.067165 kernel: pid_max: default: 32768 minimum: 301
Jan 17 13:30:23.067178 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 17 13:30:23.067191 kernel: landlock: Up and running.
Jan 17 13:30:23.067203 kernel: SELinux: Initializing.
Jan 17 13:30:23.067216 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 17 13:30:23.067229 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 17 13:30:23.067242 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9)
Jan 17 13:30:23.067255 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Jan 17 13:30:23.067282 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Jan 17 13:30:23.067302 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Jan 17 13:30:23.067315 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only.
Jan 17 13:30:23.067327 kernel: signal: max sigframe size: 1776
Jan 17 13:30:23.067340 kernel: rcu: Hierarchical SRCU implementation.
Jan 17 13:30:23.067353 kernel: rcu: Max phase no-delay instances is 400.
Jan 17 13:30:23.067366 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 17 13:30:23.067379 kernel: smp: Bringing up secondary CPUs ...
Jan 17 13:30:23.067392 kernel: smpboot: x86: Booting SMP configuration:
Jan 17 13:30:23.067404 kernel: .... node #0, CPUs: #1
Jan 17 13:30:23.067423 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1
Jan 17 13:30:23.067435 kernel: smp: Brought up 1 node, 2 CPUs
Jan 17 13:30:23.067448 kernel: smpboot: Max logical packages: 16
Jan 17 13:30:23.067461 kernel: smpboot: Total of 2 processors activated (10000.12 BogoMIPS)
Jan 17 13:30:23.067474 kernel: devtmpfs: initialized
Jan 17 13:30:23.067487 kernel: x86/mm: Memory block size: 128MB
Jan 17 13:30:23.067500 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 17 13:30:23.067513 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Jan 17 13:30:23.067526 kernel: pinctrl core: initialized pinctrl subsystem
Jan 17 13:30:23.067544 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 17 13:30:23.067557 kernel: audit: initializing netlink subsys (disabled)
Jan 17 13:30:23.067569 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 17 13:30:23.067582 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 17 13:30:23.067595 kernel: audit: type=2000 audit(1737120621.382:1): state=initialized audit_enabled=0 res=1
Jan 17 13:30:23.067607 kernel: cpuidle: using governor menu
Jan 17 13:30:23.067620 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 17 13:30:23.067633 kernel: dca service started, version 1.12.1
Jan 17 13:30:23.067646 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Jan 17 13:30:23.067664 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jan 17 13:30:23.067677 kernel: PCI: Using configuration type 1 for base access
Jan 17 13:30:23.067689 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 17 13:30:23.067702 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 17 13:30:23.067715 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 17 13:30:23.067728 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 17 13:30:23.067740 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 17 13:30:23.067753 kernel: ACPI: Added _OSI(Module Device)
Jan 17 13:30:23.067766 kernel: ACPI: Added _OSI(Processor Device)
Jan 17 13:30:23.067784 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 17 13:30:23.067797 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 17 13:30:23.067809 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 17 13:30:23.067822 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 17 13:30:23.067834 kernel: ACPI: Interpreter enabled
Jan 17 13:30:23.067847 kernel: ACPI: PM: (supports S0 S5)
Jan 17 13:30:23.067860 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 17 13:30:23.067873 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 17 13:30:23.067886 kernel: PCI: Using E820 reservations for host bridge windows
Jan 17 13:30:23.067903 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 17 13:30:23.067916 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 17 13:30:23.070217 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 17 13:30:23.070416 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 17 13:30:23.070599 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 17 13:30:23.070620 kernel: PCI host bridge to bus 0000:00
Jan 17 13:30:23.070816 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 17 13:30:23.071015 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 17 13:30:23.071172 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 17 13:30:23.071341 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Jan 17 13:30:23.071493 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 17 13:30:23.071643 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window]
Jan 17 13:30:23.071794 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 17 13:30:23.073029 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Jan 17 13:30:23.073243 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000
Jan 17 13:30:23.073430 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfa000000-0xfbffffff pref]
Jan 17 13:30:23.073602 kernel: pci 0000:00:01.0: reg 0x14: [mem 0xfea50000-0xfea50fff]
Jan 17 13:30:23.073768 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea40000-0xfea4ffff pref]
Jan 17 13:30:23.073934 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 17 13:30:23.074166 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Jan 17 13:30:23.074355 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea51000-0xfea51fff]
Jan 17 13:30:23.074530 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Jan 17 13:30:23.074692 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea52000-0xfea52fff]
Jan 17 13:30:23.074866 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Jan 17 13:30:23.078064 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea53000-0xfea53fff]
Jan 17 13:30:23.078243 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Jan 17 13:30:23.078431 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea54000-0xfea54fff]
Jan 17 13:30:23.078606 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Jan 17 13:30:23.078769 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea55000-0xfea55fff]
Jan 17 13:30:23.078969 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Jan 17 13:30:23.079145 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea56000-0xfea56fff]
Jan 17 13:30:23.079339 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Jan 17 13:30:23.079517 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea57000-0xfea57fff]
Jan 17 13:30:23.079694 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Jan 17 13:30:23.079861 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea58000-0xfea58fff]
Jan 17 13:30:23.082123 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Jan 17 13:30:23.082439 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0c0-0xc0df]
Jan 17 13:30:23.082633 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfea59000-0xfea59fff]
Jan 17 13:30:23.082818 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref]
Jan 17 13:30:23.083059 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfea00000-0xfea3ffff pref]
Jan 17 13:30:23.083256 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Jan 17 13:30:23.083439 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Jan 17 13:30:23.083605 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfea5a000-0xfea5afff]
Jan 17 13:30:23.083786 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfd004000-0xfd007fff 64bit pref]
Jan 17 13:30:23.086026 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Jan 17 13:30:23.086211 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 17 13:30:23.086410 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Jan 17 13:30:23.086575 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0e0-0xc0ff]
Jan 17 13:30:23.086736 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea5b000-0xfea5bfff]
Jan 17 13:30:23.086907 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Jan 17 13:30:23.087122 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Jan 17 13:30:23.087316 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400
Jan 17 13:30:23.087497 kernel: pci 0000:01:00.0: reg 0x10: [mem 0xfda00000-0xfda000ff 64bit]
Jan 17 13:30:23.087663 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Jan 17 13:30:23.087828 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Jan 17 13:30:23.088021 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Jan 17 13:30:23.088217 kernel: pci_bus 0000:02: extended config space not accessible
Jan 17 13:30:23.088418 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000
Jan 17 13:30:23.088603 kernel: pci 0000:02:01.0: reg 0x10: [mem 0xfd800000-0xfd80000f]
Jan 17 13:30:23.088775 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Jan 17 13:30:23.088965 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Jan 17 13:30:23.089155 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330
Jan 17 13:30:23.089342 kernel: pci 0000:03:00.0: reg 0x10: [mem 0xfe800000-0xfe803fff 64bit]
Jan 17 13:30:23.089511 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Jan 17 13:30:23.089673 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Jan 17 13:30:23.089845 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Jan 17 13:30:23.090075 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00
Jan 17 13:30:23.090248 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref]
Jan 17 13:30:23.090428 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Jan 17 13:30:23.090593 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Jan 17 13:30:23.090756 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Jan 17 13:30:23.090925 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Jan 17 13:30:23.091174 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Jan 17 13:30:23.091362 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Jan 17 13:30:23.091524 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Jan 17 13:30:23.091685 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Jan 17 13:30:23.091844 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Jan 17 13:30:23.092022 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Jan 17 13:30:23.092181 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Jan 17 13:30:23.092359 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Jan 17 13:30:23.092525 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Jan 17 13:30:23.092697 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Jan 17 13:30:23.092859 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Jan 17 13:30:23.093055 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Jan 17 13:30:23.093219 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Jan 17 13:30:23.093397 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Jan 17 13:30:23.093417 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 17 13:30:23.093431 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 17 13:30:23.093444 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 17 13:30:23.093465 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 17 13:30:23.093479 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 17 13:30:23.093492 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 17 13:30:23.093505 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 17 13:30:23.093518 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 17 13:30:23.093531 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 17 13:30:23.093544 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 17 13:30:23.093556 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 17 13:30:23.093569 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 17 13:30:23.093587 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 17 13:30:23.093601 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 17 13:30:23.093613 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 17 13:30:23.093626 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 17 13:30:23.093639 kernel: iommu: Default domain type: Translated
Jan 17 13:30:23.093652 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 17 13:30:23.093665 kernel: PCI: Using ACPI for IRQ routing
Jan 17 13:30:23.093678 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 17 13:30:23.093692 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 17 13:30:23.093710 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff]
Jan 17 13:30:23.093873 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 17 13:30:23.094073 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 17 13:30:23.094234 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 17 13:30:23.094254 kernel: vgaarb: loaded
Jan 17 13:30:23.094281 kernel: clocksource: Switched to clocksource kvm-clock
Jan 17 13:30:23.094295 kernel: VFS: Disk quotas dquot_6.6.0
Jan 17 13:30:23.094308 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 17 13:30:23.094329 kernel: pnp: PnP ACPI init
Jan 17 13:30:23.094500 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Jan 17 13:30:23.094522 kernel: pnp: PnP ACPI: found 5 devices
Jan 17 13:30:23.094536 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 17 13:30:23.094549 kernel: NET: Registered PF_INET protocol family
Jan 17 13:30:23.094562 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 17 13:30:23.094575 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Jan 17 13:30:23.094589 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 17 13:30:23.094602 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 17 13:30:23.094622 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jan 17 13:30:23.094635 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Jan 17 13:30:23.094648 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 17 13:30:23.094662 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 17 13:30:23.094675 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 17 13:30:23.094687 kernel: NET: Registered PF_XDP protocol family
Jan 17 13:30:23.094844 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000
Jan 17 13:30:23.095044 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Jan 17 13:30:23.095217 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Jan 17 13:30:23.095394 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Jan 17 13:30:23.095560 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Jan 17 13:30:23.095724 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Jan 17 13:30:23.095889 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Jan 17 13:30:23.096082 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Jan 17 13:30:23.096256 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Jan 17 13:30:23.096434 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Jan 17 13:30:23.096598 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Jan 17 13:30:23.096763 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Jan 17 13:30:23.096930 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Jan 17 13:30:23.097137 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Jan 17 13:30:23.097312 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Jan 17 13:30:23.097483 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Jan 17 13:30:23.097702 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Jan 17 13:30:23.097877 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Jan 17 13:30:23.098099 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Jan 17 13:30:23.098273 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Jan 17 13:30:23.098438 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Jan 17 13:30:23.098599 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Jan 17 13:30:23.098762 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Jan 17 13:30:23.098924 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Jan 17 13:30:23.099113 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Jan 17 13:30:23.099313 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Jan 17 13:30:23.099478 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Jan 17 13:30:23.099642 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Jan 17 13:30:23.099806 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Jan 17 13:30:23.100005 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Jan 17 13:30:23.100179 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Jan 17 13:30:23.100383 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Jan 17 13:30:23.100560 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Jan 17 13:30:23.100731 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Jan 17 13:30:23.100895 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Jan 17 13:30:23.101102 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Jan 17 13:30:23.101301 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Jan 17 13:30:23.101467 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Jan 17 13:30:23.101640 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Jan 17 13:30:23.101820 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Jan 17 13:30:23.102033 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Jan 17 13:30:23.102198 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Jan 17 13:30:23.102373 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Jan 17 13:30:23.102536 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Jan 17 13:30:23.102706 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Jan 17 13:30:23.102867 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Jan 17 13:30:23.103046 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Jan 17 13:30:23.103208 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Jan 17 13:30:23.103384 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Jan 17 13:30:23.103547 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Jan 17 13:30:23.103699 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 17 13:30:23.103847 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 17 13:30:23.104034 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 17 13:30:23.104184 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Jan 17 13:30:23.104350 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jan 17 13:30:23.104499 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window]
Jan 17 13:30:23.104678 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Jan 17 13:30:23.104836 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff]
Jan 17 13:30:23.105041 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Jan 17 13:30:23.105214 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff]
Jan 17 13:30:23.105395 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff]
Jan 17 13:30:23.105550 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff]
Jan 17 13:30:23.105702 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Jan 17 13:30:23.105867 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff]
Jan 17 13:30:23.106059 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff]
Jan 17 13:30:23.106215 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Jan 17 13:30:23.106408 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff]
Jan 17 13:30:23.106565 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff]
Jan 17 13:30:23.106722 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Jan 17 13:30:23.106899 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff]
Jan 17 13:30:23.107101 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff]
Jan 17 13:30:23.107255 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Jan 17 13:30:23.107460 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff]
Jan 17 13:30:23.107624 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff]
Jan 17 13:30:23.107775 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref]
Jan 17 13:30:23.107938 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff]
Jan 17 13:30:23.108121 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff]
Jan 17 13:30:23.108327 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref]
Jan 17 13:30:23.108499 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff]
Jan 17 13:30:23.108658 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff]
Jan 17 13:30:23.108823 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Jan 17 13:30:23.108844 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 17 13:30:23.108859 kernel: PCI: CLS 0 bytes, default 64
Jan 17 13:30:23.108873 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jan
17 13:30:23.108887 kernel: software IO TLB: mapped [mem 0x0000000079800000-0x000000007d800000] (64MB) Jan 17 13:30:23.108901 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jan 17 13:30:23.108915 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240957bf147, max_idle_ns: 440795216753 ns Jan 17 13:30:23.108929 kernel: Initialise system trusted keyrings Jan 17 13:30:23.108993 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jan 17 13:30:23.109009 kernel: Key type asymmetric registered Jan 17 13:30:23.109022 kernel: Asymmetric key parser 'x509' registered Jan 17 13:30:23.109045 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 17 13:30:23.109059 kernel: io scheduler mq-deadline registered Jan 17 13:30:23.109072 kernel: io scheduler kyber registered Jan 17 13:30:23.109086 kernel: io scheduler bfq registered Jan 17 13:30:23.109270 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Jan 17 13:30:23.109440 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Jan 17 13:30:23.109614 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 17 13:30:23.109786 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Jan 17 13:30:23.110000 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Jan 17 13:30:23.110170 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 17 13:30:23.110348 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Jan 17 13:30:23.110510 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Jan 17 13:30:23.110689 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 17 13:30:23.110845 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Jan 17 
13:30:23.111033 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Jan 17 13:30:23.111205 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 17 13:30:23.111382 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Jan 17 13:30:23.111562 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Jan 17 13:30:23.111728 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 17 13:30:23.111912 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Jan 17 13:30:23.112129 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Jan 17 13:30:23.112308 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 17 13:30:23.112473 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Jan 17 13:30:23.112634 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Jan 17 13:30:23.112804 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 17 13:30:23.112984 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Jan 17 13:30:23.113147 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Jan 17 13:30:23.113326 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 17 13:30:23.113348 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 17 13:30:23.113363 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jan 17 13:30:23.113386 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jan 17 13:30:23.113400 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 17 13:30:23.113418 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 17 13:30:23.113432 kernel: 
i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 17 13:30:23.113446 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 17 13:30:23.113460 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 17 13:30:23.113636 kernel: rtc_cmos 00:03: RTC can wake from S4 Jan 17 13:30:23.113658 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 17 13:30:23.113818 kernel: rtc_cmos 00:03: registered as rtc0 Jan 17 13:30:23.114004 kernel: rtc_cmos 00:03: setting system clock to 2025-01-17T13:30:22 UTC (1737120622) Jan 17 13:30:23.114161 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Jan 17 13:30:23.114182 kernel: intel_pstate: CPU model not supported Jan 17 13:30:23.114196 kernel: NET: Registered PF_INET6 protocol family Jan 17 13:30:23.114209 kernel: Segment Routing with IPv6 Jan 17 13:30:23.114223 kernel: In-situ OAM (IOAM) with IPv6 Jan 17 13:30:23.114237 kernel: NET: Registered PF_PACKET protocol family Jan 17 13:30:23.114250 kernel: Key type dns_resolver registered Jan 17 13:30:23.114285 kernel: IPI shorthand broadcast: enabled Jan 17 13:30:23.114300 kernel: sched_clock: Marking stable (1299003595, 237115190)->(1659807787, -123689002) Jan 17 13:30:23.114314 kernel: registered taskstats version 1 Jan 17 13:30:23.114327 kernel: Loading compiled-in X.509 certificates Jan 17 13:30:23.114342 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 6baa290b0089ed5c4c5f7248306af816ac8c7f80' Jan 17 13:30:23.114355 kernel: Key type .fscrypt registered Jan 17 13:30:23.114368 kernel: Key type fscrypt-provisioning registered Jan 17 13:30:23.114382 kernel: ima: No TPM chip found, activating TPM-bypass! 
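The rtc_cmos line above reports the hardware clock both as an ISO timestamp and as a Unix epoch value (1737120622). A quick sketch confirming the two encodings agree (pure stdlib, no assumptions beyond the figures in the log):

```python
from datetime import datetime, timezone

# Epoch value taken from the rtc_cmos log line above.
RTC_EPOCH = 1737120622

# The kernel logged: setting system clock to 2025-01-17T13:30:22 UTC (1737120622)
stamp = datetime.fromtimestamp(RTC_EPOCH, tz=timezone.utc)
iso = stamp.strftime("%Y-%m-%dT%H:%M:%S")
print(iso)  # 2025-01-17T13:30:22
```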
Jan 17 13:30:23.114401 kernel: ima: Allocated hash algorithm: sha1 Jan 17 13:30:23.114415 kernel: ima: No architecture policies found Jan 17 13:30:23.114428 kernel: clk: Disabling unused clocks Jan 17 13:30:23.114443 kernel: Freeing unused kernel image (initmem) memory: 42848K Jan 17 13:30:23.114457 kernel: Write protecting the kernel read-only data: 36864k Jan 17 13:30:23.114471 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K Jan 17 13:30:23.114484 kernel: Run /init as init process Jan 17 13:30:23.114498 kernel: with arguments: Jan 17 13:30:23.114511 kernel: /init Jan 17 13:30:23.114524 kernel: with environment: Jan 17 13:30:23.114543 kernel: HOME=/ Jan 17 13:30:23.114557 kernel: TERM=linux Jan 17 13:30:23.114570 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 17 13:30:23.114587 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 17 13:30:23.114604 systemd[1]: Detected virtualization kvm. Jan 17 13:30:23.114619 systemd[1]: Detected architecture x86-64. Jan 17 13:30:23.114633 systemd[1]: Running in initrd. Jan 17 13:30:23.114652 systemd[1]: No hostname configured, using default hostname. Jan 17 13:30:23.114666 systemd[1]: Hostname set to . Jan 17 13:30:23.114682 systemd[1]: Initializing machine ID from VM UUID. Jan 17 13:30:23.114696 systemd[1]: Queued start job for default target initrd.target. Jan 17 13:30:23.114711 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 13:30:23.114725 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Jan 17 13:30:23.114741 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 17 13:30:23.114755 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 17 13:30:23.114775 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 17 13:30:23.114791 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 17 13:30:23.114807 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 17 13:30:23.114822 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 17 13:30:23.114837 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 13:30:23.114852 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 17 13:30:23.114866 systemd[1]: Reached target paths.target - Path Units. Jan 17 13:30:23.114886 systemd[1]: Reached target slices.target - Slice Units. Jan 17 13:30:23.114900 systemd[1]: Reached target swap.target - Swaps. Jan 17 13:30:23.114915 systemd[1]: Reached target timers.target - Timer Units. Jan 17 13:30:23.114929 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 17 13:30:23.114984 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 17 13:30:23.115003 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 17 13:30:23.115018 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 17 13:30:23.115033 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 17 13:30:23.115047 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 17 13:30:23.115070 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
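The device unit names above (`dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device` and friends) come from systemd's path-escaping rule: outer slashes are stripped, interior `/` becomes `-`, and unsafe bytes such as `-` are encoded as `\xXX`. The sketch below is an approximation of that rule, not systemd's exact implementation (`systemd-escape --path` and unit-name.c are canonical); `systemd_escape_path` is a hypothetical helper name:

```python
def systemd_escape_path(path: str) -> str:
    """Approximate systemd path escaping: strip outer slashes,
    map '/' to '-', and escape other unsafe bytes as \\xXX."""
    trimmed = path.strip("/")
    out = []
    for i, ch in enumerate(trimmed):
        if ch == "/":
            out.append("-")
        elif ch.isascii() and (ch.isalnum() or ch in "_:") or (ch == "." and i > 0):
            out.append(ch)
        else:
            # e.g. '-' (0x2d) becomes the literal four characters \x2d
            out.append("\\x%02x" % ord(ch))
    return "".join(out)

print(systemd_escape_path("/dev/disk/by-label/EFI-SYSTEM") + ".device")
# dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device
```

Both escaped names in the journal lines above round-trip through this rule, which is why `/dev/mapper/usr` appears as the much tamer `dev-mapper-usr.device`.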
Jan 17 13:30:23.115085 systemd[1]: Reached target sockets.target - Socket Units. Jan 17 13:30:23.115100 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 17 13:30:23.115120 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 17 13:30:23.115135 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 17 13:30:23.115149 systemd[1]: Starting systemd-fsck-usr.service... Jan 17 13:30:23.115164 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 17 13:30:23.115179 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 17 13:30:23.115198 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 13:30:23.115213 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 17 13:30:23.115227 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 13:30:23.115242 systemd[1]: Finished systemd-fsck-usr.service. Jan 17 13:30:23.115318 systemd-journald[201]: Collecting audit messages is disabled. Jan 17 13:30:23.115361 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 17 13:30:23.115377 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 17 13:30:23.115391 kernel: Bridge firewalling registered Jan 17 13:30:23.115407 systemd-journald[201]: Journal started Jan 17 13:30:23.115441 systemd-journald[201]: Runtime Journal (/run/log/journal/ba7c417df8b14169963c1a57ce7adb56) is 4.7M, max 38.0M, 33.2M free. Jan 17 13:30:23.052995 systemd-modules-load[202]: Inserted module 'overlay' Jan 17 13:30:23.085408 systemd-modules-load[202]: Inserted module 'br_netfilter' Jan 17 13:30:23.154972 systemd[1]: Started systemd-journald.service - Journal Service. Jan 17 13:30:23.155869 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
Jan 17 13:30:23.158071 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 13:30:23.160024 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 17 13:30:23.169220 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 17 13:30:23.176201 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 17 13:30:23.178085 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 17 13:30:23.193178 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 17 13:30:23.195163 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 17 13:30:23.206436 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 13:30:23.208487 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 13:30:23.219180 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 17 13:30:23.220341 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 13:30:23.227142 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 17 13:30:23.245964 dracut-cmdline[237]: dracut-dracut-053 Jan 17 13:30:23.254861 dracut-cmdline[237]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=bf1e0d81a0170850ab02d370c1a7c7a3f5983c980b3730f748240a3bda2dbb2e Jan 17 13:30:23.267345 systemd-resolved[236]: Positive Trust Anchors: Jan 17 13:30:23.267374 systemd-resolved[236]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 17 13:30:23.267420 systemd-resolved[236]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 17 13:30:23.277048 systemd-resolved[236]: Defaulting to hostname 'linux'. Jan 17 13:30:23.278878 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 17 13:30:23.281779 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 17 13:30:23.357037 kernel: SCSI subsystem initialized Jan 17 13:30:23.368974 kernel: Loading iSCSI transport class v2.0-870. Jan 17 13:30:23.381985 kernel: iscsi: registered transport (tcp) Jan 17 13:30:23.409318 kernel: iscsi: registered transport (qla4xxx) Jan 17 13:30:23.409409 kernel: QLogic iSCSI HBA Driver Jan 17 13:30:23.465410 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 17 13:30:23.470151 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 17 13:30:23.506814 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
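A few lines back, the dracut-cmdline hook echoed the assembled kernel command line. Splitting such a line into parameters is mechanical: tokens are whitespace-separated, a bare token is a flag, and only the first `=` separates key from value (so `verity.usr=PARTUUID=...` keeps its embedded `=`). A minimal sketch with a hypothetical `parse_cmdline` helper, where repeated keys such as `console=` collect into a list:

```python
def parse_cmdline(cmdline: str) -> dict:
    """Parse a kernel command line; bare flags map to True,
    repeated keys collect their values into a list."""
    params = {}
    for token in cmdline.split():
        key, sep, value = token.partition("=")
        val = value if sep else True
        if key in params:
            prev = params[key]
            params[key] = (prev if isinstance(prev, list) else [prev]) + [val]
        else:
            params[key] = val
    return params

# Excerpt of the command line from the log above.
cmdline = ("BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr "
           "root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 "
           "flatcar.first_boot=detected flatcar.autologin")
parsed = parse_cmdline(cmdline)
```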
Jan 17 13:30:23.506922 kernel: device-mapper: uevent: version 1.0.3 Jan 17 13:30:23.506942 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 17 13:30:23.557989 kernel: raid6: sse2x4 gen() 7703 MB/s Jan 17 13:30:23.575991 kernel: raid6: sse2x2 gen() 5381 MB/s Jan 17 13:30:23.594596 kernel: raid6: sse2x1 gen() 5326 MB/s Jan 17 13:30:23.594680 kernel: raid6: using algorithm sse2x4 gen() 7703 MB/s Jan 17 13:30:23.613633 kernel: raid6: .... xor() 4939 MB/s, rmw enabled Jan 17 13:30:23.613744 kernel: raid6: using ssse3x2 recovery algorithm Jan 17 13:30:23.640019 kernel: xor: automatically using best checksumming function avx Jan 17 13:30:23.843924 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 17 13:30:23.862627 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 17 13:30:23.872318 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 13:30:23.899140 systemd-udevd[420]: Using default interface naming scheme 'v255'. Jan 17 13:30:23.906192 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 13:30:23.913158 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 17 13:30:23.935831 dracut-pre-trigger[422]: rd.md=0: removing MD RAID activation Jan 17 13:30:23.978507 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 17 13:30:23.984157 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 17 13:30:24.102189 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 13:30:24.110146 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 17 13:30:24.138527 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 17 13:30:24.140806 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Jan 17 13:30:24.141839 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 13:30:24.144370 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 17 13:30:24.155609 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 17 13:30:24.190797 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 17 13:30:24.240935 kernel: virtio_blk virtio1: 2/0/0 default/read/poll queues Jan 17 13:30:24.292169 kernel: ACPI: bus type USB registered Jan 17 13:30:24.292195 kernel: usbcore: registered new interface driver usbfs Jan 17 13:30:24.292214 kernel: cryptd: max_cpu_qlen set to 1000 Jan 17 13:30:24.292252 kernel: usbcore: registered new interface driver hub Jan 17 13:30:24.292284 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Jan 17 13:30:24.292486 kernel: usbcore: registered new device driver usb Jan 17 13:30:24.292507 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 17 13:30:24.292524 kernel: GPT:17805311 != 125829119 Jan 17 13:30:24.292542 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 17 13:30:24.292559 kernel: GPT:17805311 != 125829119 Jan 17 13:30:24.292575 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 17 13:30:24.292593 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 17 13:30:24.292610 kernel: AVX version of gcm_enc/dec engaged. Jan 17 13:30:24.295408 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 17 13:30:24.298186 kernel: AES CTR mode by8 optimization enabled Jan 17 13:30:24.297153 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 13:30:24.301188 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 17 13:30:24.303566 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
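The GPT warnings above are the classic grown-disk signature: the virtio disk reports 125829120 sectors, but the alternate (backup) GPT header sits at LBA 17805311 instead of at the last LBA, meaning the image was built for a much smaller disk and the backing volume was later enlarged. The arithmetic, using only figures from the log:

```python
SECTOR_BYTES = 512
total_sectors = 125_829_120       # virtio_blk probe: 125829120 512-byte logical blocks
backup_lba_found = 17_805_311     # where the alternate GPT header actually is
backup_lba_expected = total_sectors - 1  # GPT places the backup header at the last LBA

disk_gib = total_sectors * SECTOR_BYTES / 2**30
image_gib = (backup_lba_found + 1) * SECTOR_BYTES / 2**30
print(f"disk: {disk_gib:.1f} GiB, image was sized for: {image_gib:.1f} GiB")
# disk: 60.0 GiB, image was sized for: 8.5 GiB
```

The log's suggested fix (GNU Parted, or equivalently `sgdisk -e`) simply relocates the backup header to the last LBA; Flatcar's first-boot machinery then typically grows the ROOT partition and filesystem into the reclaimed space.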
Jan 17 13:30:24.303756 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 13:30:24.307177 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 13:30:24.321606 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 13:30:24.337013 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (471) Jan 17 13:30:24.352009 kernel: BTRFS: device fsid e459b8ee-f1f7-4c3d-a087-3f1955f52c85 devid 1 transid 36 /dev/vda3 scanned by (udev-worker) (464) Jan 17 13:30:24.367971 kernel: libata version 3.00 loaded. Jan 17 13:30:24.376847 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 17 13:30:24.390778 kernel: ahci 0000:00:1f.2: version 3.0 Jan 17 13:30:24.426430 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 17 13:30:24.426460 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Jan 17 13:30:24.426675 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 17 13:30:24.426873 kernel: scsi host0: ahci Jan 17 13:30:24.427129 kernel: scsi host1: ahci Jan 17 13:30:24.427350 kernel: scsi host2: ahci Jan 17 13:30:24.427549 kernel: scsi host3: ahci Jan 17 13:30:24.427745 kernel: scsi host4: ahci Jan 17 13:30:24.427930 kernel: scsi host5: ahci Jan 17 13:30:24.430339 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 38 Jan 17 13:30:24.430381 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 38 Jan 17 13:30:24.430412 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 38 Jan 17 13:30:24.430444 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 38 Jan 17 13:30:24.430475 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 38 Jan 17 13:30:24.430505 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 38 Jan 17 13:30:24.436131 systemd[1]: Found device 
dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 17 13:30:24.498489 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 13:30:24.505675 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jan 17 13:30:24.506573 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 17 13:30:24.515473 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 17 13:30:24.521161 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 17 13:30:24.535290 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 17 13:30:24.544602 disk-uuid[564]: Primary Header is updated. Jan 17 13:30:24.544602 disk-uuid[564]: Secondary Entries is updated. Jan 17 13:30:24.544602 disk-uuid[564]: Secondary Header is updated. Jan 17 13:30:24.552989 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 17 13:30:24.556964 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 17 13:30:24.558186 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 17 13:30:24.740000 kernel: ata3: SATA link down (SStatus 0 SControl 300) Jan 17 13:30:24.740087 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 17 13:30:24.740108 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 17 13:30:24.742664 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 17 13:30:24.743317 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 17 13:30:24.744980 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 17 13:30:24.760222 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Jan 17 13:30:24.777502 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1 Jan 17 13:30:24.777751 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Jan 17 13:30:24.778011 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Jan 17 13:30:24.778246 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2 Jan 17 13:30:24.778454 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed Jan 17 13:30:24.778691 kernel: hub 1-0:1.0: USB hub found Jan 17 13:30:24.778933 kernel: hub 1-0:1.0: 4 ports detected Jan 17 13:30:24.780287 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. 
Jan 17 13:30:24.780598 kernel: hub 2-0:1.0: USB hub found Jan 17 13:30:24.780858 kernel: hub 2-0:1.0: 4 ports detected Jan 17 13:30:25.014098 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Jan 17 13:30:25.155508 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 17 13:30:25.162409 kernel: usbcore: registered new interface driver usbhid Jan 17 13:30:25.162468 kernel: usbhid: USB HID core driver Jan 17 13:30:25.170389 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 Jan 17 13:30:25.170428 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0 Jan 17 13:30:25.564542 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 17 13:30:25.565654 disk-uuid[568]: The operation has completed successfully. Jan 17 13:30:25.619578 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 17 13:30:25.619782 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 17 13:30:25.638157 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 17 13:30:25.644859 sh[587]: Success Jan 17 13:30:25.662011 kernel: device-mapper: verity: sha256 using implementation "sha256-avx" Jan 17 13:30:25.720460 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 17 13:30:25.725131 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 17 13:30:25.728379 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Jan 17 13:30:25.758332 kernel: BTRFS info (device dm-0): first mount of filesystem e459b8ee-f1f7-4c3d-a087-3f1955f52c85 Jan 17 13:30:25.758396 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 17 13:30:25.758417 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 17 13:30:25.761324 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 17 13:30:25.763024 kernel: BTRFS info (device dm-0): using free space tree Jan 17 13:30:25.772766 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 17 13:30:25.774753 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 17 13:30:25.784168 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 17 13:30:25.788572 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 17 13:30:25.801543 kernel: BTRFS info (device vda6): first mount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8 Jan 17 13:30:25.801592 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 17 13:30:25.803426 kernel: BTRFS info (device vda6): using free space tree Jan 17 13:30:25.808980 kernel: BTRFS info (device vda6): auto enabling async discard Jan 17 13:30:25.822868 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 17 13:30:25.826380 kernel: BTRFS info (device vda6): last unmount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8 Jan 17 13:30:25.833604 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 17 13:30:25.842602 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 17 13:30:25.983865 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
Jan 17 13:30:25.983905 ignition[675]: Ignition 2.19.0 Jan 17 13:30:25.986430 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 17 13:30:25.983927 ignition[675]: Stage: fetch-offline Jan 17 13:30:25.984049 ignition[675]: no configs at "/usr/lib/ignition/base.d" Jan 17 13:30:25.984077 ignition[675]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 17 13:30:25.984311 ignition[675]: parsed url from cmdline: "" Jan 17 13:30:25.984318 ignition[675]: no config URL provided Jan 17 13:30:25.984328 ignition[675]: reading system config file "/usr/lib/ignition/user.ign" Jan 17 13:30:25.984345 ignition[675]: no config at "/usr/lib/ignition/user.ign" Jan 17 13:30:25.984355 ignition[675]: failed to fetch config: resource requires networking Jan 17 13:30:25.984709 ignition[675]: Ignition finished successfully Jan 17 13:30:25.995217 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 17 13:30:26.034767 systemd-networkd[776]: lo: Link UP Jan 17 13:30:26.035806 systemd-networkd[776]: lo: Gained carrier Jan 17 13:30:26.038134 systemd-networkd[776]: Enumeration completed Jan 17 13:30:26.038681 systemd-networkd[776]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 13:30:26.038687 systemd-networkd[776]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 17 13:30:26.039118 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 17 13:30:26.042344 systemd-networkd[776]: eth0: Link UP Jan 17 13:30:26.042350 systemd-networkd[776]: eth0: Gained carrier Jan 17 13:30:26.042363 systemd-networkd[776]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 13:30:26.043538 systemd[1]: Reached target network.target - Network. Jan 17 13:30:26.052323 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jan 17 13:30:26.059332 systemd-networkd[776]: eth0: DHCPv4 address 10.230.31.134/30, gateway 10.230.31.133 acquired from 10.230.31.133
Jan 17 13:30:26.079319 ignition[778]: Ignition 2.19.0
Jan 17 13:30:26.079340 ignition[778]: Stage: fetch
Jan 17 13:30:26.079627 ignition[778]: no configs at "/usr/lib/ignition/base.d"
Jan 17 13:30:26.079648 ignition[778]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 17 13:30:26.079823 ignition[778]: parsed url from cmdline: ""
Jan 17 13:30:26.079830 ignition[778]: no config URL provided
Jan 17 13:30:26.079840 ignition[778]: reading system config file "/usr/lib/ignition/user.ign"
Jan 17 13:30:26.079857 ignition[778]: no config at "/usr/lib/ignition/user.ign"
Jan 17 13:30:26.080040 ignition[778]: config drive ("/dev/disk/by-label/config-2") not found. Waiting...
Jan 17 13:30:26.080095 ignition[778]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting...
Jan 17 13:30:26.080279 ignition[778]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1
Jan 17 13:30:26.098190 ignition[778]: GET result: OK
Jan 17 13:30:26.099160 ignition[778]: parsing config with SHA512: 8c0306a5572b64e2cb0d472ccef5de617e432de11d4c558c0cb0386a2e3f18c9a247429e944d8fad3eefb07b175d7171c26308590861d057562e1fc60061eb8b
Jan 17 13:30:26.103413 unknown[778]: fetched base config from "system"
Jan 17 13:30:26.103431 unknown[778]: fetched base config from "system"
Jan 17 13:30:26.103733 ignition[778]: fetch: fetch complete
Jan 17 13:30:26.103441 unknown[778]: fetched user config from "openstack"
Jan 17 13:30:26.103741 ignition[778]: fetch: fetch passed
Jan 17 13:30:26.106403 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 17 13:30:26.103813 ignition[778]: Ignition finished successfully
Jan 17 13:30:26.121424 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 17 13:30:26.143429 ignition[786]: Ignition 2.19.0
Jan 17 13:30:26.143452 ignition[786]: Stage: kargs
Jan 17 13:30:26.143721 ignition[786]: no configs at "/usr/lib/ignition/base.d"
Jan 17 13:30:26.143742 ignition[786]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 17 13:30:26.146133 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 17 13:30:26.144659 ignition[786]: kargs: kargs passed
Jan 17 13:30:26.144741 ignition[786]: Ignition finished successfully
Jan 17 13:30:26.153229 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 17 13:30:26.174941 ignition[792]: Ignition 2.19.0
Jan 17 13:30:26.174982 ignition[792]: Stage: disks
Jan 17 13:30:26.175237 ignition[792]: no configs at "/usr/lib/ignition/base.d"
Jan 17 13:30:26.177718 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 17 13:30:26.175259 ignition[792]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 17 13:30:26.176235 ignition[792]: disks: disks passed
Jan 17 13:30:26.179660 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 17 13:30:26.176309 ignition[792]: Ignition finished successfully
Jan 17 13:30:26.181418 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 17 13:30:26.182775 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 17 13:30:26.184405 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 17 13:30:26.185776 systemd[1]: Reached target basic.target - Basic System.
Jan 17 13:30:26.206364 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 17 13:30:26.225071 systemd-fsck[800]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Jan 17 13:30:26.228325 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 17 13:30:26.233095 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 17 13:30:26.354243 kernel: EXT4-fs (vda9): mounted filesystem 0ba4fe0e-76d7-406f-b570-4642d86198f6 r/w with ordered data mode. Quota mode: none.
Jan 17 13:30:26.355407 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 17 13:30:26.357081 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 17 13:30:26.363081 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 17 13:30:26.375260 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 17 13:30:26.376519 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 17 13:30:26.381274 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent...
Jan 17 13:30:26.382618 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 17 13:30:26.382722 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 17 13:30:26.389602 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 17 13:30:26.395058 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (808)
Jan 17 13:30:26.395094 kernel: BTRFS info (device vda6): first mount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8
Jan 17 13:30:26.395123 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 17 13:30:26.395141 kernel: BTRFS info (device vda6): using free space tree
Jan 17 13:30:26.401968 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 17 13:30:26.402924 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 17 13:30:26.406529 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 17 13:30:26.513258 initrd-setup-root[836]: cut: /sysroot/etc/passwd: No such file or directory
Jan 17 13:30:26.523189 initrd-setup-root[843]: cut: /sysroot/etc/group: No such file or directory
Jan 17 13:30:26.534792 initrd-setup-root[850]: cut: /sysroot/etc/shadow: No such file or directory
Jan 17 13:30:26.542968 initrd-setup-root[857]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 17 13:30:26.648760 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 17 13:30:26.653111 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 17 13:30:26.660169 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 17 13:30:26.674001 kernel: BTRFS info (device vda6): last unmount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8
Jan 17 13:30:26.691782 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 17 13:30:26.712982 ignition[926]: INFO : Ignition 2.19.0
Jan 17 13:30:26.712982 ignition[926]: INFO : Stage: mount
Jan 17 13:30:26.712982 ignition[926]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 17 13:30:26.712982 ignition[926]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 17 13:30:26.717493 ignition[926]: INFO : mount: mount passed
Jan 17 13:30:26.717493 ignition[926]: INFO : Ignition finished successfully
Jan 17 13:30:26.715998 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 17 13:30:26.755526 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 17 13:30:28.042466 systemd-networkd[776]: eth0: Gained IPv6LL
Jan 17 13:30:29.165233 systemd-networkd[776]: eth0: Ignoring DHCPv6 address 2a02:1348:179:87e1:24:19ff:fee6:1f86/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:87e1:24:19ff:fee6:1f86/64 assigned by NDisc.
Jan 17 13:30:29.165253 systemd-networkd[776]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no.
Jan 17 13:30:33.570149 coreos-metadata[810]: Jan 17 13:30:33.570 WARN failed to locate config-drive, using the metadata service API instead
Jan 17 13:30:33.594637 coreos-metadata[810]: Jan 17 13:30:33.594 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Jan 17 13:30:33.607399 coreos-metadata[810]: Jan 17 13:30:33.607 INFO Fetch successful
Jan 17 13:30:33.608341 coreos-metadata[810]: Jan 17 13:30:33.607 INFO wrote hostname srv-dlk5u.gb1.brightbox.com to /sysroot/etc/hostname
Jan 17 13:30:33.610034 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully.
Jan 17 13:30:33.610224 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent.
Jan 17 13:30:33.620112 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 17 13:30:33.637240 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 17 13:30:33.651000 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (942)
Jan 17 13:30:33.652968 kernel: BTRFS info (device vda6): first mount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8
Jan 17 13:30:33.654278 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 17 13:30:33.656178 kernel: BTRFS info (device vda6): using free space tree
Jan 17 13:30:33.661973 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 17 13:30:33.664411 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 17 13:30:33.696968 ignition[960]: INFO : Ignition 2.19.0
Jan 17 13:30:33.696968 ignition[960]: INFO : Stage: files
Jan 17 13:30:33.698792 ignition[960]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 17 13:30:33.698792 ignition[960]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 17 13:30:33.698792 ignition[960]: DEBUG : files: compiled without relabeling support, skipping
Jan 17 13:30:33.701520 ignition[960]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 17 13:30:33.701520 ignition[960]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 17 13:30:33.703611 ignition[960]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 17 13:30:33.704647 ignition[960]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 17 13:30:33.704647 ignition[960]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 17 13:30:33.704247 unknown[960]: wrote ssh authorized keys file for user: core
Jan 17 13:30:33.707677 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh"
Jan 17 13:30:33.707677 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh"
Jan 17 13:30:33.707677 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 17 13:30:33.707677 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 17 13:30:33.707677 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Jan 17 13:30:33.713741 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Jan 17 13:30:33.713741 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Jan 17 13:30:33.713741 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1
Jan 17 13:30:34.341279 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Jan 17 13:30:36.643539 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Jan 17 13:30:36.643539 ignition[960]: INFO : files: createResultFile: createFiles: op(7): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 17 13:30:36.643539 ignition[960]: INFO : files: createResultFile: createFiles: op(7): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 17 13:30:36.643539 ignition[960]: INFO : files: files passed
Jan 17 13:30:36.643539 ignition[960]: INFO : Ignition finished successfully
Jan 17 13:30:36.656627 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 17 13:30:36.665246 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 17 13:30:36.667191 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 17 13:30:36.685648 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 17 13:30:36.685861 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 17 13:30:36.696889 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 17 13:30:36.698763 initrd-setup-root-after-ignition[988]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 17 13:30:36.700262 initrd-setup-root-after-ignition[992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 17 13:30:36.703062 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 17 13:30:36.704238 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 17 13:30:36.712215 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 17 13:30:36.745619 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 17 13:30:36.745832 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 17 13:30:36.748194 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 17 13:30:36.749486 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 17 13:30:36.751222 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 17 13:30:36.756206 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 17 13:30:36.777693 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 17 13:30:36.784180 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 17 13:30:36.801895 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 17 13:30:36.804034 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 17 13:30:36.805039 systemd[1]: Stopped target timers.target - Timer Units.
Jan 17 13:30:36.806656 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 17 13:30:36.806900 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 17 13:30:36.808851 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 17 13:30:36.809806 systemd[1]: Stopped target basic.target - Basic System.
Jan 17 13:30:36.811437 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 17 13:30:36.812903 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 17 13:30:36.814326 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 17 13:30:36.815936 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 17 13:30:36.817584 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 17 13:30:36.819340 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 17 13:30:36.820844 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 17 13:30:36.822433 systemd[1]: Stopped target swap.target - Swaps.
Jan 17 13:30:36.823875 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 17 13:30:36.824132 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 17 13:30:36.826032 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 17 13:30:36.827048 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 17 13:30:36.828493 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 17 13:30:36.828692 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 17 13:30:36.830143 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 17 13:30:36.830373 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 17 13:30:36.832519 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 17 13:30:36.832696 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 17 13:30:36.834404 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 17 13:30:36.834569 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 17 13:30:36.846896 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 17 13:30:36.847692 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 17 13:30:36.848009 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 17 13:30:36.852077 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 17 13:30:36.854382 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 17 13:30:36.854693 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 17 13:30:36.859512 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 17 13:30:36.859775 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 17 13:30:36.870641 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 17 13:30:36.870807 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 17 13:30:36.888626 ignition[1012]: INFO : Ignition 2.19.0
Jan 17 13:30:36.888626 ignition[1012]: INFO : Stage: umount
Jan 17 13:30:36.891215 ignition[1012]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 17 13:30:36.891215 ignition[1012]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 17 13:30:36.891215 ignition[1012]: INFO : umount: umount passed
Jan 17 13:30:36.891215 ignition[1012]: INFO : Ignition finished successfully
Jan 17 13:30:36.892168 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 17 13:30:36.893928 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 17 13:30:36.894185 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 17 13:30:36.896463 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 17 13:30:36.896587 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 17 13:30:36.899498 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 17 13:30:36.899610 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 17 13:30:36.900360 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 17 13:30:36.900454 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 17 13:30:36.902131 systemd[1]: Stopped target network.target - Network.
Jan 17 13:30:36.903438 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 17 13:30:36.903529 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 17 13:30:36.905097 systemd[1]: Stopped target paths.target - Path Units.
Jan 17 13:30:36.906486 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 17 13:30:36.906942 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 17 13:30:36.908113 systemd[1]: Stopped target slices.target - Slice Units.
Jan 17 13:30:36.909511 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 17 13:30:36.910975 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 17 13:30:36.911058 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 17 13:30:36.912373 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 17 13:30:36.912439 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 17 13:30:36.913711 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 17 13:30:36.913792 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 17 13:30:36.922203 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 17 13:30:36.922278 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 17 13:30:36.923899 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 17 13:30:36.927030 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 17 13:30:36.928750 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 17 13:30:36.928912 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 17 13:30:36.929164 systemd-networkd[776]: eth0: DHCPv6 lease lost
Jan 17 13:30:36.932278 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 17 13:30:36.932425 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 17 13:30:36.935426 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 17 13:30:36.935598 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 17 13:30:36.940801 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 17 13:30:36.941015 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 17 13:30:36.945937 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 17 13:30:36.946420 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 17 13:30:36.954158 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 17 13:30:36.954897 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 17 13:30:36.954995 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 17 13:30:36.956617 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 17 13:30:36.956685 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 17 13:30:36.959221 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 17 13:30:36.959331 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 17 13:30:36.960117 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 17 13:30:36.960218 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 17 13:30:36.962246 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 17 13:30:36.972347 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 17 13:30:36.972572 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 17 13:30:36.976157 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 17 13:30:36.976335 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 17 13:30:36.979686 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 17 13:30:36.979778 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 17 13:30:36.981639 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 17 13:30:36.981698 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 17 13:30:36.983363 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 17 13:30:36.983456 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 17 13:30:36.985703 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 17 13:30:36.985774 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 17 13:30:36.987215 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 17 13:30:36.987288 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 13:30:37.002242 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 17 13:30:37.003074 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 17 13:30:37.003169 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 17 13:30:37.006054 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 17 13:30:37.006134 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 13:30:37.010735 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 17 13:30:37.010914 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 17 13:30:37.012425 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 17 13:30:37.019229 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 17 13:30:37.030588 systemd[1]: Switching root.
Jan 17 13:30:37.062790 systemd-journald[201]: Journal stopped
Jan 17 13:30:38.554018 systemd-journald[201]: Received SIGTERM from PID 1 (systemd).
Jan 17 13:30:38.554135 kernel: SELinux: policy capability network_peer_controls=1
Jan 17 13:30:38.554163 kernel: SELinux: policy capability open_perms=1
Jan 17 13:30:38.554182 kernel: SELinux: policy capability extended_socket_class=1
Jan 17 13:30:38.554209 kernel: SELinux: policy capability always_check_network=0
Jan 17 13:30:38.554229 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 17 13:30:38.554248 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 17 13:30:38.554273 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 17 13:30:38.554292 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 17 13:30:38.554311 kernel: audit: type=1403 audit(1737120637.380:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 17 13:30:38.554332 systemd[1]: Successfully loaded SELinux policy in 50.797ms.
Jan 17 13:30:38.554360 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 21.950ms.
Jan 17 13:30:38.554387 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 17 13:30:38.554408 systemd[1]: Detected virtualization kvm.
Jan 17 13:30:38.554429 systemd[1]: Detected architecture x86-64.
Jan 17 13:30:38.554453 systemd[1]: Detected first boot.
Jan 17 13:30:38.554474 systemd[1]: Hostname set to .
Jan 17 13:30:38.554503 systemd[1]: Initializing machine ID from VM UUID.
Jan 17 13:30:38.554524 zram_generator::config[1055]: No configuration found.
Jan 17 13:30:38.554546 systemd[1]: Populated /etc with preset unit settings.
Jan 17 13:30:38.554567 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 17 13:30:38.554587 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 17 13:30:38.554608 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 17 13:30:38.554629 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 17 13:30:38.554650 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 17 13:30:38.554676 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 17 13:30:38.554697 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 17 13:30:38.554723 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 17 13:30:38.554744 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 17 13:30:38.554775 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 17 13:30:38.554798 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 17 13:30:38.554818 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 17 13:30:38.554839 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 17 13:30:38.554859 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 17 13:30:38.554885 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 17 13:30:38.554906 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 17 13:30:38.554931 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 17 13:30:38.554984 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 17 13:30:38.555007 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 17 13:30:38.555028 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 17 13:30:38.555048 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 17 13:30:38.555076 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 17 13:30:38.555097 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 17 13:30:38.555118 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 17 13:30:38.555139 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 17 13:30:38.555159 systemd[1]: Reached target slices.target - Slice Units.
Jan 17 13:30:38.555180 systemd[1]: Reached target swap.target - Swaps.
Jan 17 13:30:38.555200 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 17 13:30:38.555227 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 17 13:30:38.555253 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 17 13:30:38.555273 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 17 13:30:38.555302 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 17 13:30:38.555322 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 17 13:30:38.555342 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 17 13:30:38.555362 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 17 13:30:38.555383 systemd[1]: Mounting media.mount - External Media Directory...
Jan 17 13:30:38.555403 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 17 13:30:38.555427 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 17 13:30:38.555453 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 17 13:30:38.555475 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 17 13:30:38.555496 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 17 13:30:38.555517 systemd[1]: Reached target machines.target - Containers.
Jan 17 13:30:38.555538 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 17 13:30:38.555559 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 17 13:30:38.555579 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 17 13:30:38.555599 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 17 13:30:38.555620 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 17 13:30:38.555645 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 17 13:30:38.555667 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 17 13:30:38.555696 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 17 13:30:38.555730 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 17 13:30:38.555751 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 17 13:30:38.555793 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 17 13:30:38.555823 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 17 13:30:38.555851 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 17 13:30:38.555872 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 17 13:30:38.555892 kernel: fuse: init (API version 7.39)
Jan 17 13:30:38.555913 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 17 13:30:38.555937 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 17 13:30:38.555995 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 17 13:30:38.556019 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 17 13:30:38.556051 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 17 13:30:38.556072 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 17 13:30:38.556092 systemd[1]: Stopped verity-setup.service.
Jan 17 13:30:38.556114 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 17 13:30:38.556134 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 17 13:30:38.556156 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 17 13:30:38.556177 systemd[1]: Mounted media.mount - External Media Directory.
Jan 17 13:30:38.556198 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 17 13:30:38.556224 kernel: ACPI: bus type drm_connector registered
Jan 17 13:30:38.556244 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 17 13:30:38.556284 kernel: loop: module loaded
Jan 17 13:30:38.556304 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 17 13:30:38.556324 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 17 13:30:38.556377 systemd-journald[1148]: Collecting audit messages is disabled.
Jan 17 13:30:38.556418 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 17 13:30:38.556440 systemd-journald[1148]: Journal started
Jan 17 13:30:38.556490 systemd-journald[1148]: Runtime Journal (/run/log/journal/ba7c417df8b14169963c1a57ce7adb56) is 4.7M, max 38.0M, 33.2M free.
Jan 17 13:30:38.150582 systemd[1]: Queued start job for default target multi-user.target.
Jan 17 13:30:38.170402 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jan 17 13:30:38.171107 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 17 13:30:38.559019 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 17 13:30:38.561995 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 17 13:30:38.565974 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 17 13:30:38.566228 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 17 13:30:38.566553 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 17 13:30:38.567872 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 17 13:30:38.568239 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 17 13:30:38.569513 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 17 13:30:38.569831 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 17 13:30:38.571235 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 17 13:30:38.571561 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 17 13:30:38.572821 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 17 13:30:38.573253 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 17 13:30:38.574496 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 17 13:30:38.575817 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 17 13:30:38.582815 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 17 13:30:38.597512 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 17 13:30:38.606032 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 17 13:30:38.617855 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 17 13:30:38.619467 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 17 13:30:38.619618 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 17 13:30:38.622385 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 17 13:30:38.631376 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 17 13:30:38.641113 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 17 13:30:38.642018 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 17 13:30:38.650082 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 17 13:30:38.652222 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 17 13:30:38.653263 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 17 13:30:38.656295 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 17 13:30:38.658126 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 17 13:30:38.661164 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 17 13:30:38.670170 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 17 13:30:38.676138 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 17 13:30:38.682427 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 17 13:30:38.683401 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 17 13:30:38.686010 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 17 13:30:38.717927 systemd-journald[1148]: Time spent on flushing to /var/log/journal/ba7c417df8b14169963c1a57ce7adb56 is 81.984ms for 1122 entries.
Jan 17 13:30:38.717927 systemd-journald[1148]: System Journal (/var/log/journal/ba7c417df8b14169963c1a57ce7adb56) is 8.0M, max 584.8M, 576.8M free.
Jan 17 13:30:38.810296 systemd-journald[1148]: Received client request to flush runtime journal.
Jan 17 13:30:38.810354 kernel: loop0: detected capacity change from 0 to 8
Jan 17 13:30:38.810381 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 17 13:30:38.751504 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 17 13:30:38.752645 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 17 13:30:38.763176 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 17 13:30:38.813213 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 17 13:30:38.846124 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 17 13:30:38.849987 kernel: loop1: detected capacity change from 0 to 140768
Jan 17 13:30:38.852019 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 17 13:30:38.854886 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 17 13:30:38.918051 kernel: loop2: detected capacity change from 0 to 211296
Jan 17 13:30:38.905724 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 17 13:30:38.920099 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 17 13:30:38.956258 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 17 13:30:38.970572 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 17 13:30:38.977613 kernel: loop3: detected capacity change from 0 to 142488
Jan 17 13:30:39.020248 udevadm[1210]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Jan 17 13:30:39.033608 systemd-tmpfiles[1206]: ACLs are not supported, ignoring.
Jan 17 13:30:39.033636 systemd-tmpfiles[1206]: ACLs are not supported, ignoring.
Jan 17 13:30:39.054091 kernel: loop4: detected capacity change from 0 to 8
Jan 17 13:30:39.053052 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 17 13:30:39.067983 kernel: loop5: detected capacity change from 0 to 140768
Jan 17 13:30:39.095998 kernel: loop6: detected capacity change from 0 to 211296
Jan 17 13:30:39.132000 kernel: loop7: detected capacity change from 0 to 142488
Jan 17 13:30:39.187182 (sd-merge)[1212]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'.
Jan 17 13:30:39.188157 (sd-merge)[1212]: Merged extensions into '/usr'.
Jan 17 13:30:39.202226 systemd[1]: Reloading requested from client PID 1188 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 17 13:30:39.202265 systemd[1]: Reloading...
Jan 17 13:30:39.336034 zram_generator::config[1238]: No configuration found.
Jan 17 13:30:39.468980 ldconfig[1183]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 17 13:30:39.592889 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 17 13:30:39.661755 systemd[1]: Reloading finished in 458 ms.
Jan 17 13:30:39.697536 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 17 13:30:39.701506 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 17 13:30:39.702835 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 17 13:30:39.715211 systemd[1]: Starting ensure-sysext.service...
Jan 17 13:30:39.718181 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 17 13:30:39.725180 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 17 13:30:39.738133 systemd[1]: Reloading requested from client PID 1296 ('systemctl') (unit ensure-sysext.service)...
Jan 17 13:30:39.738174 systemd[1]: Reloading...
Jan 17 13:30:39.764568 systemd-tmpfiles[1297]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 17 13:30:39.766863 systemd-tmpfiles[1297]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 17 13:30:39.768608 systemd-tmpfiles[1297]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 17 13:30:39.769451 systemd-tmpfiles[1297]: ACLs are not supported, ignoring.
Jan 17 13:30:39.769982 systemd-tmpfiles[1297]: ACLs are not supported, ignoring.
Jan 17 13:30:39.775613 systemd-tmpfiles[1297]: Detected autofs mount point /boot during canonicalization of boot.
Jan 17 13:30:39.775631 systemd-tmpfiles[1297]: Skipping /boot
Jan 17 13:30:39.799368 systemd-tmpfiles[1297]: Detected autofs mount point /boot during canonicalization of boot.
Jan 17 13:30:39.799390 systemd-tmpfiles[1297]: Skipping /boot
Jan 17 13:30:39.821801 systemd-udevd[1298]: Using default interface naming scheme 'v255'.
Jan 17 13:30:39.857971 zram_generator::config[1325]: No configuration found.
Jan 17 13:30:40.050053 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1342)
Jan 17 13:30:40.116936 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 17 13:30:40.159980 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Jan 17 13:30:40.186372 kernel: ACPI: button: Power Button [PWRF]
Jan 17 13:30:40.220978 kernel: mousedev: PS/2 mouse device common for all mice
Jan 17 13:30:40.222156 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 17 13:30:40.223151 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jan 17 13:30:40.224260 systemd[1]: Reloading finished in 485 ms.
Jan 17 13:30:40.247814 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 17 13:30:40.256008 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 17 13:30:40.269972 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Jan 17 13:30:40.282262 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Jan 17 13:30:40.282541 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jan 17 13:30:40.297823 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Jan 17 13:30:40.322611 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 17 13:30:40.329278 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jan 17 13:30:40.339043 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 17 13:30:40.341219 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 17 13:30:40.353322 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 17 13:30:40.359298 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 17 13:30:40.372428 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 17 13:30:40.374577 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 17 13:30:40.379393 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 17 13:30:40.384263 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 17 13:30:40.392231 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 17 13:30:40.401280 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 17 13:30:40.412261 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 17 13:30:40.415549 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 17 13:30:40.419029 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 17 13:30:40.420051 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 17 13:30:40.421609 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 17 13:30:40.422322 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 17 13:30:40.431360 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 17 13:30:40.431586 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 17 13:30:40.449293 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 17 13:30:40.449679 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 17 13:30:40.473653 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 17 13:30:40.478359 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 17 13:30:40.482399 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 17 13:30:40.483290 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 17 13:30:40.490355 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 17 13:30:40.493354 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 13:30:40.494645 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 17 13:30:40.509300 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 17 13:30:40.509830 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 17 13:30:40.519415 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 17 13:30:40.520446 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 17 13:30:40.520818 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 17 13:30:40.525845 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 17 13:30:40.526132 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 17 13:30:40.535150 systemd[1]: Finished ensure-sysext.service.
Jan 17 13:30:40.537116 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 17 13:30:40.539674 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 17 13:30:40.543544 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 17 13:30:40.551402 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 17 13:30:40.611311 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 17 13:30:40.619177 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 17 13:30:40.621029 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 17 13:30:40.632666 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 17 13:30:40.632926 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 17 13:30:40.641826 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 17 13:30:40.642159 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 17 13:30:40.644424 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 17 13:30:40.646810 augenrules[1450]: No rules
Jan 17 13:30:40.649077 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jan 17 13:30:40.654225 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 17 13:30:40.654457 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 17 13:30:40.655513 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 17 13:30:40.667612 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 17 13:30:40.701072 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 17 13:30:40.777104 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 17 13:30:40.790146 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 17 13:30:40.875311 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 13:30:40.878716 systemd-networkd[1415]: lo: Link UP
Jan 17 13:30:40.878728 systemd-networkd[1415]: lo: Gained carrier
Jan 17 13:30:40.883115 systemd-networkd[1415]: Enumeration completed
Jan 17 13:30:40.883238 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 17 13:30:40.897233 systemd-networkd[1415]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 17 13:30:40.897247 systemd-networkd[1415]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 17 13:30:40.905228 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 17 13:30:40.906626 systemd-networkd[1415]: eth0: Link UP
Jan 17 13:30:40.906633 systemd-networkd[1415]: eth0: Gained carrier
Jan 17 13:30:40.906659 systemd-networkd[1415]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 17 13:30:40.914925 lvm[1471]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 17 13:30:40.924335 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jan 17 13:30:40.925370 systemd[1]: Reached target time-set.target - System Time Set.
Jan 17 13:30:40.925784 systemd-resolved[1416]: Positive Trust Anchors:
Jan 17 13:30:40.925796 systemd-resolved[1416]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 17 13:30:40.925841 systemd-resolved[1416]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 17 13:30:40.928046 systemd-networkd[1415]: eth0: DHCPv4 address 10.230.31.134/30, gateway 10.230.31.133 acquired from 10.230.31.133
Jan 17 13:30:40.928905 systemd-timesyncd[1443]: Network configuration changed, trying to establish connection.
Jan 17 13:30:40.933999 systemd-resolved[1416]: Using system hostname 'srv-dlk5u.gb1.brightbox.com'.
Jan 17 13:30:40.936897 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 17 13:30:40.937786 systemd[1]: Reached target network.target - Network.
Jan 17 13:30:40.939389 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 17 13:30:40.963308 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 17 13:30:40.964463 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 17 13:30:40.965324 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 17 13:30:40.966206 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 17 13:30:40.967261 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 17 13:30:40.968366 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 17 13:30:40.969306 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 17 13:30:40.970182 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 17 13:30:40.971009 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 17 13:30:40.971067 systemd[1]: Reached target paths.target - Path Units.
Jan 17 13:30:40.971757 systemd[1]: Reached target timers.target - Timer Units.
Jan 17 13:30:40.973829 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 17 13:30:40.976543 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 17 13:30:40.982188 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 17 13:30:40.984915 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 17 13:30:40.986340 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 17 13:30:40.987226 systemd[1]: Reached target sockets.target - Socket Units.
Jan 17 13:30:40.987902 systemd[1]: Reached target basic.target - Basic System.
Jan 17 13:30:40.988690 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 17 13:30:40.988743 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 17 13:30:41.002158 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 17 13:30:41.006550 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jan 17 13:30:41.011968 lvm[1479]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 17 13:30:41.015181 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 17 13:30:41.025085 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 17 13:30:41.028224 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 17 13:30:41.029076 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 17 13:30:41.032183 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 17 13:30:41.040146 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 17 13:30:41.050803 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 17 13:30:41.069189 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 17 13:30:41.071635 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 17 13:30:41.072863 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 17 13:30:41.084285 extend-filesystems[1485]: Found loop4
Jan 17 13:30:41.087129 extend-filesystems[1485]: Found loop5
Jan 17 13:30:41.087129 extend-filesystems[1485]: Found loop6
Jan 17 13:30:41.087129 extend-filesystems[1485]: Found loop7
Jan 17 13:30:41.087129 extend-filesystems[1485]: Found vda
Jan 17 13:30:41.087129 extend-filesystems[1485]: Found vda1
Jan 17 13:30:41.087129 extend-filesystems[1485]: Found vda2
Jan 17 13:30:41.087129 extend-filesystems[1485]: Found vda3
Jan 17 13:30:41.087129 extend-filesystems[1485]: Found usr
Jan 17 13:30:41.087129 extend-filesystems[1485]: Found vda4
Jan 17 13:30:41.087129 extend-filesystems[1485]: Found vda6
Jan 17 13:30:41.087129 extend-filesystems[1485]: Found vda7
Jan 17 13:30:41.087129 extend-filesystems[1485]: Found vda9
Jan 17 13:30:41.087129 extend-filesystems[1485]: Checking size of /dev/vda9
Jan 17 13:30:41.086207 systemd[1]: Starting update-engine.service - Update Engine...
Jan 17 13:30:41.159428 jq[1483]: false
Jan 17 13:30:41.095100 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 17 13:30:41.150640 dbus-daemon[1482]: [system] SELinux support is enabled
Jan 17 13:30:41.167185 extend-filesystems[1485]: Resized partition /dev/vda9
Jan 17 13:30:41.170662 update_engine[1492]: I20250117 13:30:41.127498 1492 main.cc:92] Flatcar Update Engine starting
Jan 17 13:30:41.175068 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks
Jan 17 13:30:41.098499 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 17 13:30:41.178136 extend-filesystems[1509]: resize2fs 1.47.1 (20-May-2024)
Jan 17 13:30:41.184894 jq[1494]: true
Jan 17 13:30:41.133407 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 17 13:30:41.186068 dbus-daemon[1482]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1415 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Jan 17 13:30:41.198653 jq[1496]: true
Jan 17 13:30:41.133726 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 17 13:30:41.136331 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 17 13:30:41.136585 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 17 13:30:41.151255 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 17 13:30:41.155828 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 17 13:30:41.155876 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 17 13:30:41.214698 update_engine[1492]: I20250117 13:30:41.212579 1492 update_check_scheduler.cc:74] Next update check in 9m29s
Jan 17 13:30:41.157802 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 17 13:30:41.157831 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 17 13:30:41.203147 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Jan 17 13:30:41.212055 systemd[1]: Started update-engine.service - Update Engine.
Jan 17 13:30:41.224153 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 17 13:30:41.240498 (ntainerd)[1512]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 17 13:30:41.245082 systemd[1]: motdgen.service: Deactivated successfully.
Jan 17 13:30:41.245417 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 17 13:30:41.710398 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1335)
Jan 17 13:30:41.709445 systemd-resolved[1416]: Clock change detected. Flushing caches.
Jan 17 13:30:41.709956 systemd-timesyncd[1443]: Contacted time server 178.62.68.79:123 (0.flatcar.pool.ntp.org).
Jan 17 13:30:41.710054 systemd-timesyncd[1443]: Initial clock synchronization to Fri 2025-01-17 13:30:41.709328 UTC.
Jan 17 13:30:41.792435 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jan 17 13:30:41.803311 systemd-logind[1490]: Watching system buttons on /dev/input/event2 (Power Button)
Jan 17 13:30:41.804891 systemd-logind[1490]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jan 17 13:30:41.805290 systemd-logind[1490]: New seat seat0.
Jan 17 13:30:41.810339 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 17 13:30:41.852116 bash[1535]: Updated "/home/core/.ssh/authorized_keys"
Jan 17 13:30:41.854343 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 17 13:30:41.888189 systemd[1]: Starting sshkeys.service...
Jan 17 13:30:41.896216 locksmithd[1521]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 17 13:30:41.930935 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Jan 17 13:30:41.936847 kernel: EXT4-fs (vda9): resized filesystem to 15121403
Jan 17 13:30:41.940132 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Jan 17 13:30:41.961030 extend-filesystems[1509]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jan 17 13:30:41.961030 extend-filesystems[1509]: old_desc_blocks = 1, new_desc_blocks = 8
Jan 17 13:30:41.961030 extend-filesystems[1509]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long.
Jan 17 13:30:41.970112 sshd_keygen[1513]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 17 13:30:41.966057 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 17 13:30:41.970698 extend-filesystems[1485]: Resized filesystem in /dev/vda9
Jan 17 13:30:41.966591 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 17 13:30:42.019437 dbus-daemon[1482]: [system] Successfully activated service 'org.freedesktop.hostname1'
Jan 17 13:30:42.021547 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Jan 17 13:30:42.022239 dbus-daemon[1482]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.8' (uid=0 pid=1517 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Jan 17 13:30:42.038683 systemd[1]: Starting polkit.service - Authorization Manager...
Jan 17 13:30:42.057265 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jan 17 13:30:42.084043 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jan 17 13:30:42.093029 systemd[1]: Started sshd@0-10.230.31.134:22-139.178.68.195:54586.service - OpenSSH per-connection server daemon (139.178.68.195:54586).
Jan 17 13:30:42.104016 systemd[1]: issuegen.service: Deactivated successfully.
Jan 17 13:30:42.104286 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jan 17 13:30:42.105928 polkitd[1560]: Started polkitd version 121
Jan 17 13:30:42.115841 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jan 17 13:30:42.131106 polkitd[1560]: Loading rules from directory /etc/polkit-1/rules.d
Jan 17 13:30:42.133992 polkitd[1560]: Loading rules from directory /usr/share/polkit-1/rules.d
Jan 17 13:30:42.135671 polkitd[1560]: Finished loading, compiling and executing 2 rules
Jan 17 13:30:42.137401 dbus-daemon[1482]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Jan 17 13:30:42.137930 systemd[1]: Started polkit.service - Authorization Manager.
Jan 17 13:30:42.139558 polkitd[1560]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Jan 17 13:30:42.164143 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jan 17 13:30:42.166993 systemd-hostnamed[1517]: Hostname set to (static)
Jan 17 13:30:42.176180 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jan 17 13:30:42.182740 containerd[1512]: time="2025-01-17T13:30:42.182590641Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Jan 17 13:30:42.185261 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jan 17 13:30:42.187266 systemd[1]: Reached target getty.target - Login Prompts.
Jan 17 13:30:42.214425 containerd[1512]: time="2025-01-17T13:30:42.214313658Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jan 17 13:30:42.216993 containerd[1512]: time="2025-01-17T13:30:42.216930754Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jan 17 13:30:42.217103 containerd[1512]: time="2025-01-17T13:30:42.217077468Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jan 17 13:30:42.217874 containerd[1512]: time="2025-01-17T13:30:42.217177634Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jan 17 13:30:42.217874 containerd[1512]: time="2025-01-17T13:30:42.217563474Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jan 17 13:30:42.217874 containerd[1512]: time="2025-01-17T13:30:42.217599329Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jan 17 13:30:42.217874 containerd[1512]: time="2025-01-17T13:30:42.217704242Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jan 17 13:30:42.217874 containerd[1512]: time="2025-01-17T13:30:42.217728023Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jan 17 13:30:42.218330 containerd[1512]: time="2025-01-17T13:30:42.218299351Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 17 13:30:42.218444 containerd[1512]: time="2025-01-17T13:30:42.218408291Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jan 17 13:30:42.218566 containerd[1512]: time="2025-01-17T13:30:42.218539647Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jan 17 13:30:42.218683 containerd[1512]: time="2025-01-17T13:30:42.218659611Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jan 17 13:30:42.218909 containerd[1512]: time="2025-01-17T13:30:42.218882189Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jan 17 13:30:42.219524 containerd[1512]: time="2025-01-17T13:30:42.219475853Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jan 17 13:30:42.219776 containerd[1512]: time="2025-01-17T13:30:42.219746673Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 17 13:30:42.219884 containerd[1512]: time="2025-01-17T13:30:42.219860541Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jan 17 13:30:42.220439 containerd[1512]: time="2025-01-17T13:30:42.220104308Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jan 17 13:30:42.220439 containerd[1512]: time="2025-01-17T13:30:42.220194830Z" level=info msg="metadata content store policy set" policy=shared
Jan 17 13:30:42.223684 containerd[1512]: time="2025-01-17T13:30:42.223655288Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jan 17 13:30:42.223889 containerd[1512]: time="2025-01-17T13:30:42.223852351Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jan 17 13:30:42.224025 containerd[1512]: time="2025-01-17T13:30:42.224000991Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jan 17 13:30:42.224489 containerd[1512]: time="2025-01-17T13:30:42.224122403Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jan 17 13:30:42.224489 containerd[1512]: time="2025-01-17T13:30:42.224183904Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jan 17 13:30:42.224489 containerd[1512]: time="2025-01-17T13:30:42.224384022Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jan 17 13:30:42.225009 containerd[1512]: time="2025-01-17T13:30:42.224981094Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jan 17 13:30:42.225388 containerd[1512]: time="2025-01-17T13:30:42.225361685Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jan 17 13:30:42.225528 containerd[1512]: time="2025-01-17T13:30:42.225493824Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jan 17 13:30:42.225666 containerd[1512]: time="2025-01-17T13:30:42.225640864Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jan 17 13:30:42.225844 containerd[1512]: time="2025-01-17T13:30:42.225760733Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jan 17 13:30:42.226010 containerd[1512]: time="2025-01-17T13:30:42.225792933Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jan 17 13:30:42.226010 containerd[1512]: time="2025-01-17T13:30:42.225943275Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jan 17 13:30:42.226010 containerd[1512]: time="2025-01-17T13:30:42.225982855Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jan 17 13:30:42.226318 containerd[1512]: time="2025-01-17T13:30:42.226186927Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jan 17 13:30:42.226318 containerd[1512]: time="2025-01-17T13:30:42.226240859Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jan 17 13:30:42.226318 containerd[1512]: time="2025-01-17T13:30:42.226276141Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jan 17 13:30:42.226523 containerd[1512]: time="2025-01-17T13:30:42.226297604Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jan 17 13:30:42.226668 containerd[1512]: time="2025-01-17T13:30:42.226477656Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jan 17 13:30:42.226668 containerd[1512]: time="2025-01-17T13:30:42.226626752Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jan 17 13:30:42.226919 containerd[1512]: time="2025-01-17T13:30:42.226775231Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jan 17 13:30:42.226919 containerd[1512]: time="2025-01-17T13:30:42.226851970Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jan 17 13:30:42.226919 containerd[1512]: time="2025-01-17T13:30:42.226876409Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jan 17 13:30:42.227416 containerd[1512]: time="2025-01-17T13:30:42.227100326Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jan 17 13:30:42.227416 containerd[1512]: time="2025-01-17T13:30:42.227131401Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jan 17 13:30:42.227416 containerd[1512]: time="2025-01-17T13:30:42.227185448Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jan 17 13:30:42.227416 containerd[1512]: time="2025-01-17T13:30:42.227222504Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jan 17 13:30:42.227416 containerd[1512]: time="2025-01-17T13:30:42.227264848Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jan 17 13:30:42.227416 containerd[1512]: time="2025-01-17T13:30:42.227287028Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jan 17 13:30:42.227416 containerd[1512]: time="2025-01-17T13:30:42.227318245Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jan 17 13:30:42.229335 containerd[1512]: time="2025-01-17T13:30:42.228896424Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jan 17 13:30:42.229335 containerd[1512]: time="2025-01-17T13:30:42.229110006Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jan 17 13:30:42.229335 containerd[1512]: time="2025-01-17T13:30:42.229200940Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jan 17 13:30:42.229335 containerd[1512]: time="2025-01-17T13:30:42.229242981Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jan 17 13:30:42.229335 containerd[1512]: time="2025-01-17T13:30:42.229266639Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jan 17 13:30:42.229533 containerd[1512]: time="2025-01-17T13:30:42.229428122Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jan 17 13:30:42.229533 containerd[1512]: time="2025-01-17T13:30:42.229495480Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jan 17 13:30:42.229632 containerd[1512]: time="2025-01-17T13:30:42.229538305Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jan 17 13:30:42.229632 containerd[1512]: time="2025-01-17T13:30:42.229586490Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jan 17 13:30:42.229632 containerd[1512]: time="2025-01-17T13:30:42.229611779Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jan 17 13:30:42.229728 containerd[1512]: time="2025-01-17T13:30:42.229635683Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jan 17 13:30:42.229728 containerd[1512]: time="2025-01-17T13:30:42.229687488Z" level=info msg="NRI interface is disabled by configuration."
Jan 17 13:30:42.229728 containerd[1512]: time="2025-01-17T13:30:42.229712486Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jan 17 13:30:42.231124 containerd[1512]: time="2025-01-17T13:30:42.230697718Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jan 17 13:30:42.231124 containerd[1512]: time="2025-01-17T13:30:42.230845047Z" level=info msg="Connect containerd service"
Jan 17 13:30:42.231124 containerd[1512]: time="2025-01-17T13:30:42.230916733Z" level=info msg="using legacy CRI server"
Jan 17 13:30:42.231124 containerd[1512]: time="2025-01-17T13:30:42.230965775Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jan 17 13:30:42.231617 containerd[1512]: time="2025-01-17T13:30:42.231146184Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jan 17 13:30:42.232255 containerd[1512]: time="2025-01-17T13:30:42.232196471Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 17 13:30:42.233300 containerd[1512]: time="2025-01-17T13:30:42.232398570Z" level=info msg="Start subscribing containerd event"
Jan 17 13:30:42.233300 containerd[1512]: time="2025-01-17T13:30:42.232548287Z" level=info msg="Start recovering state"
Jan 17 13:30:42.233300 containerd[1512]: time="2025-01-17T13:30:42.232683814Z" level=info msg="Start event monitor"
Jan 17 13:30:42.233300 containerd[1512]: time="2025-01-17T13:30:42.232724587Z" level=info msg="Start snapshots syncer"
Jan 17 13:30:42.233300 containerd[1512]: time="2025-01-17T13:30:42.232748262Z" level=info msg="Start cni network conf syncer for default"
Jan 17 13:30:42.233300 containerd[1512]: time="2025-01-17T13:30:42.232763071Z" level=info msg="Start streaming server"
Jan 17 13:30:42.233527 containerd[1512]: time="2025-01-17T13:30:42.233306305Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jan 17 13:30:42.233527 containerd[1512]: time="2025-01-17T13:30:42.233500921Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jan 17 13:30:42.233763 systemd[1]: Started containerd.service - containerd container runtime.
Jan 17 13:30:42.235453 containerd[1512]: time="2025-01-17T13:30:42.235418154Z" level=info msg="containerd successfully booted in 0.054314s"
Jan 17 13:30:42.868248 systemd-networkd[1415]: eth0: Gained IPv6LL
Jan 17 13:30:42.871426 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 17 13:30:42.878968 systemd[1]: Reached target network-online.target - Network is Online.
Jan 17 13:30:42.896239 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 17 13:30:42.900223 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jan 17 13:30:42.944012 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jan 17 13:30:43.037831 sshd[1564]: Accepted publickey for core from 139.178.68.195 port 54586 ssh2: RSA SHA256:TT4gvIAgNhAz04Mo5jblLEXBxthkX9+8yM5WVquD3e8
Jan 17 13:30:43.042180 sshd[1564]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 13:30:43.064221 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jan 17 13:30:43.071573 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jan 17 13:30:43.078141 systemd-logind[1490]: New session 1 of user core.
Jan 17 13:30:43.100423 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jan 17 13:30:43.114393 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jan 17 13:30:43.120764 (systemd)[1599]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jan 17 13:30:43.268346 systemd[1599]: Queued start job for default target default.target.
Jan 17 13:30:43.278141 systemd[1599]: Created slice app.slice - User Application Slice.
Jan 17 13:30:43.278320 systemd[1599]: Reached target paths.target - Paths.
Jan 17 13:30:43.278441 systemd[1599]: Reached target timers.target - Timers.
Jan 17 13:30:43.285109 systemd[1599]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jan 17 13:30:43.299962 systemd[1599]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jan 17 13:30:43.300872 systemd[1599]: Reached target sockets.target - Sockets.
Jan 17 13:30:43.300905 systemd[1599]: Reached target basic.target - Basic System.
Jan 17 13:30:43.300983 systemd[1599]: Reached target default.target - Main User Target.
Jan 17 13:30:43.301046 systemd[1599]: Startup finished in 170ms.
Jan 17 13:30:43.301697 systemd[1]: Started user@500.service - User Manager for UID 500.
Jan 17 13:30:43.311423 systemd[1]: Started session-1.scope - Session 1 of User core.
Jan 17 13:30:43.886905 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 17 13:30:43.897286 (kubelet)[1614]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 17 13:30:43.950672 systemd[1]: Started sshd@1-10.230.31.134:22-139.178.68.195:54600.service - OpenSSH per-connection server daemon (139.178.68.195:54600).
Jan 17 13:30:44.377078 systemd-networkd[1415]: eth0: Ignoring DHCPv6 address 2a02:1348:179:87e1:24:19ff:fee6:1f86/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:87e1:24:19ff:fee6:1f86/64 assigned by NDisc.
Jan 17 13:30:44.377093 systemd-networkd[1415]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no.
Jan 17 13:30:44.717480 kubelet[1614]: E0117 13:30:44.716951 1614 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 17 13:30:44.719307 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 17 13:30:44.719563 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 17 13:30:44.720059 systemd[1]: kubelet.service: Consumed 1.115s CPU time.
Jan 17 13:30:44.840738 sshd[1616]: Accepted publickey for core from 139.178.68.195 port 54600 ssh2: RSA SHA256:TT4gvIAgNhAz04Mo5jblLEXBxthkX9+8yM5WVquD3e8
Jan 17 13:30:44.842900 sshd[1616]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 13:30:44.850846 systemd-logind[1490]: New session 2 of user core.
Jan 17 13:30:44.862159 systemd[1]: Started session-2.scope - Session 2 of User core.
Jan 17 13:30:45.460186 sshd[1616]: pam_unix(sshd:session): session closed for user core
Jan 17 13:30:45.464493 systemd[1]: sshd@1-10.230.31.134:22-139.178.68.195:54600.service: Deactivated successfully.
Jan 17 13:30:45.467267 systemd[1]: session-2.scope: Deactivated successfully.
Jan 17 13:30:45.469390 systemd-logind[1490]: Session 2 logged out. Waiting for processes to exit.
Jan 17 13:30:45.471088 systemd-logind[1490]: Removed session 2.
Jan 17 13:30:45.622244 systemd[1]: Started sshd@2-10.230.31.134:22-139.178.68.195:57514.service - OpenSSH per-connection server daemon (139.178.68.195:57514).
Jan 17 13:30:46.509619 sshd[1633]: Accepted publickey for core from 139.178.68.195 port 57514 ssh2: RSA SHA256:TT4gvIAgNhAz04Mo5jblLEXBxthkX9+8yM5WVquD3e8
Jan 17 13:30:46.512106 sshd[1633]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 13:30:46.518516 systemd-logind[1490]: New session 3 of user core.
Jan 17 13:30:46.530180 systemd[1]: Started session-3.scope - Session 3 of User core.
Jan 17 13:30:47.133306 sshd[1633]: pam_unix(sshd:session): session closed for user core
Jan 17 13:30:47.137657 systemd[1]: sshd@2-10.230.31.134:22-139.178.68.195:57514.service: Deactivated successfully.
Jan 17 13:30:47.140192 systemd[1]: session-3.scope: Deactivated successfully.
Jan 17 13:30:47.142643 systemd-logind[1490]: Session 3 logged out. Waiting for processes to exit.
Jan 17 13:30:47.144247 systemd-logind[1490]: Removed session 3.
Jan 17 13:30:47.237689 login[1579]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Jan 17 13:30:47.240713 login[1581]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Jan 17 13:30:47.244235 systemd-logind[1490]: New session 4 of user core.
Jan 17 13:30:47.255146 systemd[1]: Started session-4.scope - Session 4 of User core.
Jan 17 13:30:47.259261 systemd-logind[1490]: New session 5 of user core.
Jan 17 13:30:47.268317 systemd[1]: Started session-5.scope - Session 5 of User core.
Jan 17 13:30:48.963503 coreos-metadata[1481]: Jan 17 13:30:48.963 WARN failed to locate config-drive, using the metadata service API instead
Jan 17 13:30:48.989270 coreos-metadata[1481]: Jan 17 13:30:48.989 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1
Jan 17 13:30:48.995044 coreos-metadata[1481]: Jan 17 13:30:48.995 INFO Fetch failed with 404: resource not found
Jan 17 13:30:48.995044 coreos-metadata[1481]: Jan 17 13:30:48.995 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Jan 17 13:30:48.996012 coreos-metadata[1481]: Jan 17 13:30:48.995 INFO Fetch successful
Jan 17 13:30:48.996241 coreos-metadata[1481]: Jan 17 13:30:48.996 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1
Jan 17 13:30:49.008827 coreos-metadata[1481]: Jan 17 13:30:49.008 INFO Fetch successful
Jan 17 13:30:49.009045 coreos-metadata[1481]: Jan 17 13:30:49.009 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1
Jan 17 13:30:49.022161 coreos-metadata[1481]: Jan 17 13:30:49.022 INFO Fetch successful
Jan 17 13:30:49.022481 coreos-metadata[1481]: Jan 17 13:30:49.022 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1
Jan 17 13:30:49.039769 coreos-metadata[1481]: Jan 17 13:30:49.039 INFO Fetch successful
Jan 17 13:30:49.040083 coreos-metadata[1481]: Jan 17 13:30:49.040 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1
Jan 17 13:30:49.058928 coreos-metadata[1481]: Jan 17 13:30:49.058 INFO Fetch successful
Jan 17 13:30:49.084824 coreos-metadata[1545]: Jan 17 13:30:49.082 WARN failed to locate config-drive, using the metadata service API instead
Jan 17 13:30:49.094301 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jan 17 13:30:49.095511 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jan 17 13:30:49.107952 coreos-metadata[1545]: Jan 17 13:30:49.107 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1
Jan 17 13:30:49.132718 coreos-metadata[1545]: Jan 17 13:30:49.132 INFO Fetch successful
Jan 17 13:30:49.133046 coreos-metadata[1545]: Jan 17 13:30:49.132 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1
Jan 17 13:30:49.165554 coreos-metadata[1545]: Jan 17 13:30:49.164 INFO Fetch successful
Jan 17 13:30:49.167539 unknown[1545]: wrote ssh authorized keys file for user: core
Jan 17 13:30:49.198641 update-ssh-keys[1674]: Updated "/home/core/.ssh/authorized_keys"
Jan 17 13:30:49.200254 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Jan 17 13:30:49.202787 systemd[1]: Finished sshkeys.service.
Jan 17 13:30:49.206142 systemd[1]: Reached target multi-user.target - Multi-User System.
Jan 17 13:30:49.206376 systemd[1]: Startup finished in 1.482s (kernel) + 14.624s (initrd) + 11.447s (userspace) = 27.553s.
Jan 17 13:30:54.944442 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jan 17 13:30:54.950047 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 17 13:30:55.123031 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 17 13:30:55.138297 (kubelet)[1686]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 17 13:30:55.216868 kubelet[1686]: E0117 13:30:55.216660 1686 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 17 13:30:55.222229 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 17 13:30:55.222508 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 17 13:30:57.298240 systemd[1]: Started sshd@3-10.230.31.134:22-139.178.68.195:35168.service - OpenSSH per-connection server daemon (139.178.68.195:35168).
Jan 17 13:30:58.183519 sshd[1695]: Accepted publickey for core from 139.178.68.195 port 35168 ssh2: RSA SHA256:TT4gvIAgNhAz04Mo5jblLEXBxthkX9+8yM5WVquD3e8
Jan 17 13:30:58.185613 sshd[1695]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 13:30:58.193126 systemd-logind[1490]: New session 6 of user core.
Jan 17 13:30:58.201047 systemd[1]: Started session-6.scope - Session 6 of User core.
Jan 17 13:30:58.803796 sshd[1695]: pam_unix(sshd:session): session closed for user core
Jan 17 13:30:58.808677 systemd[1]: sshd@3-10.230.31.134:22-139.178.68.195:35168.service: Deactivated successfully.
Jan 17 13:30:58.810735 systemd[1]: session-6.scope: Deactivated successfully.
Jan 17 13:30:58.811733 systemd-logind[1490]: Session 6 logged out. Waiting for processes to exit.
Jan 17 13:30:58.813218 systemd-logind[1490]: Removed session 6.
Jan 17 13:30:58.963211 systemd[1]: Started sshd@4-10.230.31.134:22-139.178.68.195:35174.service - OpenSSH per-connection server daemon (139.178.68.195:35174).
Jan 17 13:30:59.849258 sshd[1702]: Accepted publickey for core from 139.178.68.195 port 35174 ssh2: RSA SHA256:TT4gvIAgNhAz04Mo5jblLEXBxthkX9+8yM5WVquD3e8
Jan 17 13:30:59.851402 sshd[1702]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 13:30:59.857598 systemd-logind[1490]: New session 7 of user core.
Jan 17 13:30:59.867116 systemd[1]: Started session-7.scope - Session 7 of User core.
Jan 17 13:31:00.473767 sshd[1702]: pam_unix(sshd:session): session closed for user core
Jan 17 13:31:00.478304 systemd[1]: sshd@4-10.230.31.134:22-139.178.68.195:35174.service: Deactivated successfully.
Jan 17 13:31:00.480515 systemd[1]: session-7.scope: Deactivated successfully.
Jan 17 13:31:00.482642 systemd-logind[1490]: Session 7 logged out. Waiting for processes to exit.
Jan 17 13:31:00.484316 systemd-logind[1490]: Removed session 7.
Jan 17 13:31:00.635157 systemd[1]: Started sshd@5-10.230.31.134:22-139.178.68.195:35176.service - OpenSSH per-connection server daemon (139.178.68.195:35176).
Jan 17 13:31:01.517199 sshd[1709]: Accepted publickey for core from 139.178.68.195 port 35176 ssh2: RSA SHA256:TT4gvIAgNhAz04Mo5jblLEXBxthkX9+8yM5WVquD3e8
Jan 17 13:31:01.520101 sshd[1709]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 13:31:01.526375 systemd-logind[1490]: New session 8 of user core.
Jan 17 13:31:01.535111 systemd[1]: Started session-8.scope - Session 8 of User core.
Jan 17 13:31:02.137145 sshd[1709]: pam_unix(sshd:session): session closed for user core
Jan 17 13:31:02.141158 systemd[1]: sshd@5-10.230.31.134:22-139.178.68.195:35176.service: Deactivated successfully.
Jan 17 13:31:02.143635 systemd[1]: session-8.scope: Deactivated successfully.
Jan 17 13:31:02.145575 systemd-logind[1490]: Session 8 logged out. Waiting for processes to exit.
Jan 17 13:31:02.147119 systemd-logind[1490]: Removed session 8.
Jan 17 13:31:02.292576 systemd[1]: Started sshd@6-10.230.31.134:22-139.178.68.195:35186.service - OpenSSH per-connection server daemon (139.178.68.195:35186).
Jan 17 13:31:03.174500 sshd[1716]: Accepted publickey for core from 139.178.68.195 port 35186 ssh2: RSA SHA256:TT4gvIAgNhAz04Mo5jblLEXBxthkX9+8yM5WVquD3e8
Jan 17 13:31:03.176650 sshd[1716]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 13:31:03.184146 systemd-logind[1490]: New session 9 of user core.
Jan 17 13:31:03.200015 systemd[1]: Started session-9.scope - Session 9 of User core.
Jan 17 13:31:03.711342 sudo[1719]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jan 17 13:31:03.711873 sudo[1719]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 17 13:31:03.728513 sudo[1719]: pam_unix(sudo:session): session closed for user root
Jan 17 13:31:03.871960 sshd[1716]: pam_unix(sshd:session): session closed for user core
Jan 17 13:31:03.876144 systemd[1]: sshd@6-10.230.31.134:22-139.178.68.195:35186.service: Deactivated successfully.
Jan 17 13:31:03.878501 systemd[1]: session-9.scope: Deactivated successfully.
Jan 17 13:31:03.880495 systemd-logind[1490]: Session 9 logged out. Waiting for processes to exit.
Jan 17 13:31:03.882158 systemd-logind[1490]: Removed session 9.
Jan 17 13:31:04.032242 systemd[1]: Started sshd@7-10.230.31.134:22-139.178.68.195:35202.service - OpenSSH per-connection server daemon (139.178.68.195:35202).
Jan 17 13:31:04.918654 sshd[1724]: Accepted publickey for core from 139.178.68.195 port 35202 ssh2: RSA SHA256:TT4gvIAgNhAz04Mo5jblLEXBxthkX9+8yM5WVquD3e8
Jan 17 13:31:04.921635 sshd[1724]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 13:31:04.929525 systemd-logind[1490]: New session 10 of user core.
Jan 17 13:31:04.941065 systemd[1]: Started session-10.scope - Session 10 of User core.
Jan 17 13:31:05.399570 sudo[1728]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 17 13:31:05.400127 sudo[1728]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 13:31:05.401407 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 17 13:31:05.409453 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 13:31:05.413635 sudo[1728]: pam_unix(sudo:session): session closed for user root Jan 17 13:31:05.423699 sudo[1727]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 17 13:31:05.424226 sudo[1727]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 13:31:05.450023 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 17 13:31:05.453132 auditctl[1734]: No rules Jan 17 13:31:05.454644 systemd[1]: audit-rules.service: Deactivated successfully. Jan 17 13:31:05.455120 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 17 13:31:05.465965 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 17 13:31:05.514366 augenrules[1752]: No rules Jan 17 13:31:05.516643 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 17 13:31:05.519371 sudo[1727]: pam_unix(sudo:session): session closed for user root Jan 17 13:31:05.565944 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 17 13:31:05.573365 (kubelet)[1762]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 13:31:05.635107 kubelet[1762]: E0117 13:31:05.635012 1762 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 13:31:05.637508 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 13:31:05.637785 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 13:31:05.663765 sshd[1724]: pam_unix(sshd:session): session closed for user core Jan 17 13:31:05.668600 systemd[1]: sshd@7-10.230.31.134:22-139.178.68.195:35202.service: Deactivated successfully. Jan 17 13:31:05.670773 systemd[1]: session-10.scope: Deactivated successfully. Jan 17 13:31:05.672686 systemd-logind[1490]: Session 10 logged out. Waiting for processes to exit. Jan 17 13:31:05.674254 systemd-logind[1490]: Removed session 10. Jan 17 13:31:05.830169 systemd[1]: Started sshd@8-10.230.31.134:22-139.178.68.195:36908.service - OpenSSH per-connection server daemon (139.178.68.195:36908). Jan 17 13:31:06.713306 sshd[1773]: Accepted publickey for core from 139.178.68.195 port 36908 ssh2: RSA SHA256:TT4gvIAgNhAz04Mo5jblLEXBxthkX9+8yM5WVquD3e8 Jan 17 13:31:06.715353 sshd[1773]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 13:31:06.722514 systemd-logind[1490]: New session 11 of user core. Jan 17 13:31:06.730017 systemd[1]: Started session-11.scope - Session 11 of User core. 
Jan 17 13:31:07.189403 sudo[1776]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 17 13:31:07.190263 sudo[1776]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 13:31:07.996134 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 13:31:08.008154 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 13:31:08.040959 systemd[1]: Reloading requested from client PID 1814 ('systemctl') (unit session-11.scope)... Jan 17 13:31:08.041182 systemd[1]: Reloading... Jan 17 13:31:08.180074 zram_generator::config[1853]: No configuration found. Jan 17 13:31:08.365144 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 13:31:08.471437 systemd[1]: Reloading finished in 429 ms. Jan 17 13:31:08.538719 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 17 13:31:08.538894 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 17 13:31:08.539294 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 13:31:08.546239 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 13:31:08.684826 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 13:31:08.698243 (kubelet)[1920]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 17 13:31:08.761866 kubelet[1920]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 13:31:08.761866 kubelet[1920]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. 
Image garbage collector will get sandbox image information from CRI. Jan 17 13:31:08.761866 kubelet[1920]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 13:31:08.762464 kubelet[1920]: I0117 13:31:08.761942 1920 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 17 13:31:09.190024 kubelet[1920]: I0117 13:31:09.189958 1920 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jan 17 13:31:09.190024 kubelet[1920]: I0117 13:31:09.190015 1920 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 17 13:31:09.190378 kubelet[1920]: I0117 13:31:09.190338 1920 server.go:919] "Client rotation is on, will bootstrap in background" Jan 17 13:31:09.212534 kubelet[1920]: I0117 13:31:09.211850 1920 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 13:31:09.227270 kubelet[1920]: I0117 13:31:09.226776 1920 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 17 13:31:09.228541 kubelet[1920]: I0117 13:31:09.228516 1920 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 17 13:31:09.229080 kubelet[1920]: I0117 13:31:09.229053 1920 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 17 13:31:09.229842 kubelet[1920]: I0117 13:31:09.229454 1920 topology_manager.go:138] "Creating topology manager with none policy" Jan 17 13:31:09.229842 kubelet[1920]: I0117 13:31:09.229482 1920 container_manager_linux.go:301] "Creating device plugin manager" Jan 17 13:31:09.229842 kubelet[1920]: I0117 
13:31:09.229663 1920 state_mem.go:36] "Initialized new in-memory state store" Jan 17 13:31:09.230046 kubelet[1920]: I0117 13:31:09.230025 1920 kubelet.go:396] "Attempting to sync node with API server" Jan 17 13:31:09.230180 kubelet[1920]: I0117 13:31:09.230159 1920 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 17 13:31:09.230336 kubelet[1920]: I0117 13:31:09.230316 1920 kubelet.go:312] "Adding apiserver pod source" Jan 17 13:31:09.230763 kubelet[1920]: I0117 13:31:09.230441 1920 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 17 13:31:09.230846 kubelet[1920]: E0117 13:31:09.230786 1920 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 13:31:09.232781 kubelet[1920]: E0117 13:31:09.231860 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 13:31:09.232781 kubelet[1920]: I0117 13:31:09.232411 1920 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 17 13:31:09.235831 kubelet[1920]: I0117 13:31:09.235514 1920 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 17 13:31:09.235831 kubelet[1920]: W0117 13:31:09.235603 1920 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jan 17 13:31:09.236629 kubelet[1920]: I0117 13:31:09.236600 1920 server.go:1256] "Started kubelet" Jan 17 13:31:09.238169 kubelet[1920]: I0117 13:31:09.238145 1920 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 17 13:31:09.250370 kubelet[1920]: I0117 13:31:09.248869 1920 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jan 17 13:31:09.250370 kubelet[1920]: I0117 13:31:09.250069 1920 server.go:461] "Adding debug handlers to kubelet server" Jan 17 13:31:09.251529 kubelet[1920]: I0117 13:31:09.251503 1920 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 17 13:31:09.251772 kubelet[1920]: I0117 13:31:09.251761 1920 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 17 13:31:09.257178 kubelet[1920]: I0117 13:31:09.257153 1920 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 17 13:31:09.258732 kubelet[1920]: I0117 13:31:09.257878 1920 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jan 17 13:31:09.258732 kubelet[1920]: E0117 13:31:09.257904 1920 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.230.31.134.181b7e0b85d583c1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.230.31.134,UID:10.230.31.134,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:10.230.31.134,},FirstTimestamp:2025-01-17 13:31:09.236560833 +0000 UTC m=+0.533448957,LastTimestamp:2025-01-17 13:31:09.236560833 +0000 UTC m=+0.533448957,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.230.31.134,}" Jan 17 
13:31:09.258732 kubelet[1920]: W0117 13:31:09.257981 1920 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes "10.230.31.134" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Jan 17 13:31:09.258732 kubelet[1920]: I0117 13:31:09.258000 1920 reconciler_new.go:29] "Reconciler: start to sync state" Jan 17 13:31:09.258732 kubelet[1920]: E0117 13:31:09.258012 1920 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.230.31.134" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Jan 17 13:31:09.258732 kubelet[1920]: W0117 13:31:09.258192 1920 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Jan 17 13:31:09.259614 kubelet[1920]: E0117 13:31:09.258217 1920 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Jan 17 13:31:09.262544 kubelet[1920]: I0117 13:31:09.261889 1920 factory.go:221] Registration of the systemd container factory successfully Jan 17 13:31:09.262544 kubelet[1920]: I0117 13:31:09.262030 1920 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 17 13:31:09.262544 kubelet[1920]: E0117 13:31:09.262081 1920 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.230.31.134\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace 
\"kube-node-lease\"" interval="200ms" Jan 17 13:31:09.262544 kubelet[1920]: W0117 13:31:09.262149 1920 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Jan 17 13:31:09.262544 kubelet[1920]: E0117 13:31:09.262173 1920 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Jan 17 13:31:09.264329 kubelet[1920]: E0117 13:31:09.264304 1920 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 17 13:31:09.264919 kubelet[1920]: I0117 13:31:09.264892 1920 factory.go:221] Registration of the containerd container factory successfully Jan 17 13:31:09.282035 kubelet[1920]: I0117 13:31:09.281554 1920 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 17 13:31:09.282035 kubelet[1920]: I0117 13:31:09.281584 1920 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 17 13:31:09.282035 kubelet[1920]: I0117 13:31:09.281611 1920 state_mem.go:36] "Initialized new in-memory state store" Jan 17 13:31:09.285289 kubelet[1920]: I0117 13:31:09.284081 1920 policy_none.go:49] "None policy: Start" Jan 17 13:31:09.285289 kubelet[1920]: I0117 13:31:09.284909 1920 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 17 13:31:09.285289 kubelet[1920]: I0117 13:31:09.284940 1920 state_mem.go:35] "Initializing new in-memory state store" Jan 17 13:31:09.296461 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Jan 17 13:31:09.309460 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 17 13:31:09.316318 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 17 13:31:09.323270 kubelet[1920]: I0117 13:31:09.322222 1920 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 17 13:31:09.323270 kubelet[1920]: I0117 13:31:09.322607 1920 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 17 13:31:09.329764 kubelet[1920]: E0117 13:31:09.329740 1920 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.230.31.134\" not found" Jan 17 13:31:09.347162 kubelet[1920]: I0117 13:31:09.347130 1920 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 17 13:31:09.349801 kubelet[1920]: I0117 13:31:09.349770 1920 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 17 13:31:09.349985 kubelet[1920]: I0117 13:31:09.349965 1920 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 17 13:31:09.350155 kubelet[1920]: I0117 13:31:09.350133 1920 kubelet.go:2329] "Starting kubelet main sync loop" Jan 17 13:31:09.350436 kubelet[1920]: E0117 13:31:09.350408 1920 kubelet.go:2353] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Jan 17 13:31:09.359235 kubelet[1920]: I0117 13:31:09.359211 1920 kubelet_node_status.go:73] "Attempting to register node" node="10.230.31.134" Jan 17 13:31:09.366104 kubelet[1920]: I0117 13:31:09.366058 1920 kubelet_node_status.go:76] "Successfully registered node" node="10.230.31.134" Jan 17 13:31:09.379798 kubelet[1920]: E0117 13:31:09.379757 1920 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.230.31.134\" not found" Jan 17 13:31:09.480067 kubelet[1920]: E0117 13:31:09.480008 1920 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.230.31.134\" not found" Jan 17 13:31:09.580890 kubelet[1920]: E0117 13:31:09.580787 1920 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.230.31.134\" not found" Jan 17 13:31:09.681419 kubelet[1920]: E0117 13:31:09.681341 1920 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.230.31.134\" not found" Jan 17 13:31:09.782309 kubelet[1920]: E0117 13:31:09.782123 1920 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.230.31.134\" not found" Jan 17 13:31:09.883012 kubelet[1920]: E0117 13:31:09.882943 1920 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.230.31.134\" not found" Jan 17 13:31:09.983906 kubelet[1920]: E0117 13:31:09.983828 1920 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.230.31.134\" not found" Jan 17 13:31:10.084904 kubelet[1920]: 
E0117 13:31:10.084604 1920 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.230.31.134\" not found" Jan 17 13:31:10.185561 kubelet[1920]: E0117 13:31:10.185460 1920 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.230.31.134\" not found" Jan 17 13:31:10.193039 kubelet[1920]: I0117 13:31:10.192905 1920 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 17 13:31:10.193496 kubelet[1920]: W0117 13:31:10.193435 1920 reflector.go:462] vendor/k8s.io/client-go/informers/factory.go:159: watch of *v1.RuntimeClass ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:159: Unexpected watch close - watch lasted less than a second and no items received Jan 17 13:31:10.215852 sudo[1776]: pam_unix(sudo:session): session closed for user root Jan 17 13:31:10.232440 kubelet[1920]: E0117 13:31:10.232368 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 13:31:10.286076 kubelet[1920]: E0117 13:31:10.286023 1920 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.230.31.134\" not found" Jan 17 13:31:10.362364 sshd[1773]: pam_unix(sshd:session): session closed for user core Jan 17 13:31:10.367674 systemd-logind[1490]: Session 11 logged out. Waiting for processes to exit. Jan 17 13:31:10.368990 systemd[1]: sshd@8-10.230.31.134:22-139.178.68.195:36908.service: Deactivated successfully. Jan 17 13:31:10.372310 systemd[1]: session-11.scope: Deactivated successfully. Jan 17 13:31:10.375747 systemd-logind[1490]: Removed session 11. 
Jan 17 13:31:10.386994 kubelet[1920]: E0117 13:31:10.386947 1920 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.230.31.134\" not found" Jan 17 13:31:10.487714 kubelet[1920]: E0117 13:31:10.487626 1920 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.230.31.134\" not found" Jan 17 13:31:10.589374 kubelet[1920]: I0117 13:31:10.589084 1920 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Jan 17 13:31:10.590272 containerd[1512]: time="2025-01-17T13:31:10.590051437Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 17 13:31:10.591429 kubelet[1920]: I0117 13:31:10.590447 1920 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Jan 17 13:31:11.232216 kubelet[1920]: I0117 13:31:11.232141 1920 apiserver.go:52] "Watching apiserver" Jan 17 13:31:11.233269 kubelet[1920]: E0117 13:31:11.232481 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 13:31:11.239830 kubelet[1920]: I0117 13:31:11.239296 1920 topology_manager.go:215] "Topology Admit Handler" podUID="ada5e9e5-b1ae-4896-999e-acbf3887e8d6" podNamespace="kube-system" podName="kube-proxy-fgk4g" Jan 17 13:31:11.239830 kubelet[1920]: I0117 13:31:11.239450 1920 topology_manager.go:215] "Topology Admit Handler" podUID="5cb377ae-d47c-4c26-aa92-6cabcf7a2548" podNamespace="kube-system" podName="cilium-dfxbk" Jan 17 13:31:11.250724 systemd[1]: Created slice kubepods-burstable-pod5cb377ae_d47c_4c26_aa92_6cabcf7a2548.slice - libcontainer container kubepods-burstable-pod5cb377ae_d47c_4c26_aa92_6cabcf7a2548.slice. 
Jan 17 13:31:11.258717 kubelet[1920]: I0117 13:31:11.258663 1920 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jan 17 13:31:11.270234 systemd[1]: Created slice kubepods-besteffort-podada5e9e5_b1ae_4896_999e_acbf3887e8d6.slice - libcontainer container kubepods-besteffort-podada5e9e5_b1ae_4896_999e_acbf3887e8d6.slice. Jan 17 13:31:11.271201 kubelet[1920]: I0117 13:31:11.270601 1920 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5cb377ae-d47c-4c26-aa92-6cabcf7a2548-xtables-lock\") pod \"cilium-dfxbk\" (UID: \"5cb377ae-d47c-4c26-aa92-6cabcf7a2548\") " pod="kube-system/cilium-dfxbk" Jan 17 13:31:11.271201 kubelet[1920]: I0117 13:31:11.270649 1920 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5cb377ae-d47c-4c26-aa92-6cabcf7a2548-host-proc-sys-kernel\") pod \"cilium-dfxbk\" (UID: \"5cb377ae-d47c-4c26-aa92-6cabcf7a2548\") " pod="kube-system/cilium-dfxbk" Jan 17 13:31:11.271201 kubelet[1920]: I0117 13:31:11.270684 1920 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skdsz\" (UniqueName: \"kubernetes.io/projected/5cb377ae-d47c-4c26-aa92-6cabcf7a2548-kube-api-access-skdsz\") pod \"cilium-dfxbk\" (UID: \"5cb377ae-d47c-4c26-aa92-6cabcf7a2548\") " pod="kube-system/cilium-dfxbk" Jan 17 13:31:11.271201 kubelet[1920]: I0117 13:31:11.270715 1920 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmhcg\" (UniqueName: \"kubernetes.io/projected/ada5e9e5-b1ae-4896-999e-acbf3887e8d6-kube-api-access-nmhcg\") pod \"kube-proxy-fgk4g\" (UID: \"ada5e9e5-b1ae-4896-999e-acbf3887e8d6\") " pod="kube-system/kube-proxy-fgk4g" Jan 17 13:31:11.271201 kubelet[1920]: I0117 13:31:11.270748 1920 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5cb377ae-d47c-4c26-aa92-6cabcf7a2548-cilium-run\") pod \"cilium-dfxbk\" (UID: \"5cb377ae-d47c-4c26-aa92-6cabcf7a2548\") " pod="kube-system/cilium-dfxbk" Jan 17 13:31:11.271429 kubelet[1920]: I0117 13:31:11.270778 1920 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ada5e9e5-b1ae-4896-999e-acbf3887e8d6-lib-modules\") pod \"kube-proxy-fgk4g\" (UID: \"ada5e9e5-b1ae-4896-999e-acbf3887e8d6\") " pod="kube-system/kube-proxy-fgk4g" Jan 17 13:31:11.271429 kubelet[1920]: I0117 13:31:11.270829 1920 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5cb377ae-d47c-4c26-aa92-6cabcf7a2548-lib-modules\") pod \"cilium-dfxbk\" (UID: \"5cb377ae-d47c-4c26-aa92-6cabcf7a2548\") " pod="kube-system/cilium-dfxbk" Jan 17 13:31:11.271429 kubelet[1920]: I0117 13:31:11.270881 1920 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5cb377ae-d47c-4c26-aa92-6cabcf7a2548-cilium-config-path\") pod \"cilium-dfxbk\" (UID: \"5cb377ae-d47c-4c26-aa92-6cabcf7a2548\") " pod="kube-system/cilium-dfxbk" Jan 17 13:31:11.271429 kubelet[1920]: I0117 13:31:11.270932 1920 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5cb377ae-d47c-4c26-aa92-6cabcf7a2548-host-proc-sys-net\") pod \"cilium-dfxbk\" (UID: \"5cb377ae-d47c-4c26-aa92-6cabcf7a2548\") " pod="kube-system/cilium-dfxbk" Jan 17 13:31:11.271429 kubelet[1920]: I0117 13:31:11.270966 1920 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/5cb377ae-d47c-4c26-aa92-6cabcf7a2548-hubble-tls\") pod \"cilium-dfxbk\" (UID: \"5cb377ae-d47c-4c26-aa92-6cabcf7a2548\") " pod="kube-system/cilium-dfxbk" Jan 17 13:31:11.271429 kubelet[1920]: I0117 13:31:11.270996 1920 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ada5e9e5-b1ae-4896-999e-acbf3887e8d6-kube-proxy\") pod \"kube-proxy-fgk4g\" (UID: \"ada5e9e5-b1ae-4896-999e-acbf3887e8d6\") " pod="kube-system/kube-proxy-fgk4g" Jan 17 13:31:11.271708 kubelet[1920]: I0117 13:31:11.271036 1920 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ada5e9e5-b1ae-4896-999e-acbf3887e8d6-xtables-lock\") pod \"kube-proxy-fgk4g\" (UID: \"ada5e9e5-b1ae-4896-999e-acbf3887e8d6\") " pod="kube-system/kube-proxy-fgk4g" Jan 17 13:31:11.271708 kubelet[1920]: I0117 13:31:11.271072 1920 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5cb377ae-d47c-4c26-aa92-6cabcf7a2548-cilium-cgroup\") pod \"cilium-dfxbk\" (UID: \"5cb377ae-d47c-4c26-aa92-6cabcf7a2548\") " pod="kube-system/cilium-dfxbk" Jan 17 13:31:11.271708 kubelet[1920]: I0117 13:31:11.271128 1920 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5cb377ae-d47c-4c26-aa92-6cabcf7a2548-cni-path\") pod \"cilium-dfxbk\" (UID: \"5cb377ae-d47c-4c26-aa92-6cabcf7a2548\") " pod="kube-system/cilium-dfxbk" Jan 17 13:31:11.271708 kubelet[1920]: I0117 13:31:11.271171 1920 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5cb377ae-d47c-4c26-aa92-6cabcf7a2548-bpf-maps\") pod \"cilium-dfxbk\" (UID: \"5cb377ae-d47c-4c26-aa92-6cabcf7a2548\") 
" pod="kube-system/cilium-dfxbk" Jan 17 13:31:11.271708 kubelet[1920]: I0117 13:31:11.271200 1920 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5cb377ae-d47c-4c26-aa92-6cabcf7a2548-hostproc\") pod \"cilium-dfxbk\" (UID: \"5cb377ae-d47c-4c26-aa92-6cabcf7a2548\") " pod="kube-system/cilium-dfxbk" Jan 17 13:31:11.271708 kubelet[1920]: I0117 13:31:11.271232 1920 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5cb377ae-d47c-4c26-aa92-6cabcf7a2548-etc-cni-netd\") pod \"cilium-dfxbk\" (UID: \"5cb377ae-d47c-4c26-aa92-6cabcf7a2548\") " pod="kube-system/cilium-dfxbk" Jan 17 13:31:11.271970 kubelet[1920]: I0117 13:31:11.271265 1920 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5cb377ae-d47c-4c26-aa92-6cabcf7a2548-clustermesh-secrets\") pod \"cilium-dfxbk\" (UID: \"5cb377ae-d47c-4c26-aa92-6cabcf7a2548\") " pod="kube-system/cilium-dfxbk" Jan 17 13:31:11.569035 containerd[1512]: time="2025-01-17T13:31:11.568881622Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dfxbk,Uid:5cb377ae-d47c-4c26-aa92-6cabcf7a2548,Namespace:kube-system,Attempt:0,}" Jan 17 13:31:11.581860 containerd[1512]: time="2025-01-17T13:31:11.581500613Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fgk4g,Uid:ada5e9e5-b1ae-4896-999e-acbf3887e8d6,Namespace:kube-system,Attempt:0,}" Jan 17 13:31:12.233177 kubelet[1920]: E0117 13:31:12.233118 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 13:31:12.286996 containerd[1512]: time="2025-01-17T13:31:12.286940665Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} 
labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 13:31:12.288274 containerd[1512]: time="2025-01-17T13:31:12.288221187Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 13:31:12.289721 containerd[1512]: time="2025-01-17T13:31:12.289625096Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Jan 17 13:31:12.289721 containerd[1512]: time="2025-01-17T13:31:12.289687687Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 17 13:31:12.290389 containerd[1512]: time="2025-01-17T13:31:12.290300403Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 13:31:12.294486 containerd[1512]: time="2025-01-17T13:31:12.294379229Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 13:31:12.296495 containerd[1512]: time="2025-01-17T13:31:12.295717854Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 714.074849ms" Jan 17 13:31:12.298497 containerd[1512]: time="2025-01-17T13:31:12.298448731Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag 
\"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 729.349233ms" Jan 17 13:31:12.382386 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2954307629.mount: Deactivated successfully. Jan 17 13:31:12.467208 containerd[1512]: time="2025-01-17T13:31:12.466867494Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 13:31:12.467208 containerd[1512]: time="2025-01-17T13:31:12.467008573Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 13:31:12.467208 containerd[1512]: time="2025-01-17T13:31:12.467041042Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 13:31:12.468577 containerd[1512]: time="2025-01-17T13:31:12.466886593Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 13:31:12.468577 containerd[1512]: time="2025-01-17T13:31:12.468084384Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 13:31:12.468577 containerd[1512]: time="2025-01-17T13:31:12.468121348Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 13:31:12.468577 containerd[1512]: time="2025-01-17T13:31:12.468241459Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 13:31:12.468577 containerd[1512]: time="2025-01-17T13:31:12.467922611Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 13:31:12.564505 systemd[1]: run-containerd-runc-k8s.io-fc30a972ec28816982652131d6bdfd4f12bbb873843383596ae5c295fe189bae-runc.VnMBRz.mount: Deactivated successfully. Jan 17 13:31:12.576300 systemd[1]: Started cri-containerd-841a22aee13c5d06c013ad8e542dcd78981fae8a262824cf1f5affbdb231012c.scope - libcontainer container 841a22aee13c5d06c013ad8e542dcd78981fae8a262824cf1f5affbdb231012c. Jan 17 13:31:12.581868 systemd[1]: Started cri-containerd-fc30a972ec28816982652131d6bdfd4f12bbb873843383596ae5c295fe189bae.scope - libcontainer container fc30a972ec28816982652131d6bdfd4f12bbb873843383596ae5c295fe189bae. Jan 17 13:31:12.627441 containerd[1512]: time="2025-01-17T13:31:12.627241397Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fgk4g,Uid:ada5e9e5-b1ae-4896-999e-acbf3887e8d6,Namespace:kube-system,Attempt:0,} returns sandbox id \"841a22aee13c5d06c013ad8e542dcd78981fae8a262824cf1f5affbdb231012c\"" Jan 17 13:31:12.627441 containerd[1512]: time="2025-01-17T13:31:12.627456904Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dfxbk,Uid:5cb377ae-d47c-4c26-aa92-6cabcf7a2548,Namespace:kube-system,Attempt:0,} returns sandbox id \"fc30a972ec28816982652131d6bdfd4f12bbb873843383596ae5c295fe189bae\"" Jan 17 13:31:12.634671 containerd[1512]: time="2025-01-17T13:31:12.634632005Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 17 13:31:13.233929 kubelet[1920]: E0117 13:31:13.233858 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 13:31:14.235059 kubelet[1920]: E0117 13:31:14.235001 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 13:31:14.427459 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
Jan 17 13:31:15.236323 kubelet[1920]: E0117 13:31:15.236260 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 13:31:16.237466 kubelet[1920]: E0117 13:31:16.237381 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 13:31:17.238082 kubelet[1920]: E0117 13:31:17.237999 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 13:31:18.239267 kubelet[1920]: E0117 13:31:18.239212 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 13:31:19.051845 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2566854043.mount: Deactivated successfully.
Jan 17 13:31:19.241139 kubelet[1920]: E0117 13:31:19.241062 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 13:31:20.241348 kubelet[1920]: E0117 13:31:20.241308 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 13:31:21.242571 kubelet[1920]: E0117 13:31:21.242370 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 13:31:21.897446 containerd[1512]: time="2025-01-17T13:31:21.897317928Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 13:31:21.898715 containerd[1512]: time="2025-01-17T13:31:21.898667021Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166735335"
Jan 17 13:31:21.900237 containerd[1512]: time="2025-01-17T13:31:21.900159835Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 13:31:21.903518 containerd[1512]: time="2025-01-17T13:31:21.902850798Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 9.268134703s"
Jan 17 13:31:21.903518 containerd[1512]: time="2025-01-17T13:31:21.902914032Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Jan 17 13:31:21.904541 containerd[1512]: time="2025-01-17T13:31:21.904479041Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.13\""
Jan 17 13:31:21.905983 containerd[1512]: time="2025-01-17T13:31:21.905792865Z" level=info msg="CreateContainer within sandbox \"fc30a972ec28816982652131d6bdfd4f12bbb873843383596ae5c295fe189bae\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 17 13:31:21.942252 containerd[1512]: time="2025-01-17T13:31:21.942203604Z" level=info msg="CreateContainer within sandbox \"fc30a972ec28816982652131d6bdfd4f12bbb873843383596ae5c295fe189bae\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"9d0931d9293e1cf08080d85c0dd24f1f70016342aced86c3d890a4d40775e7d3\""
Jan 17 13:31:21.943127 containerd[1512]: time="2025-01-17T13:31:21.943086242Z" level=info msg="StartContainer for \"9d0931d9293e1cf08080d85c0dd24f1f70016342aced86c3d890a4d40775e7d3\""
Jan 17 13:31:21.989042 systemd[1]: Started cri-containerd-9d0931d9293e1cf08080d85c0dd24f1f70016342aced86c3d890a4d40775e7d3.scope - libcontainer container 9d0931d9293e1cf08080d85c0dd24f1f70016342aced86c3d890a4d40775e7d3.
Jan 17 13:31:22.024867 containerd[1512]: time="2025-01-17T13:31:22.024763614Z" level=info msg="StartContainer for \"9d0931d9293e1cf08080d85c0dd24f1f70016342aced86c3d890a4d40775e7d3\" returns successfully"
Jan 17 13:31:22.040321 systemd[1]: cri-containerd-9d0931d9293e1cf08080d85c0dd24f1f70016342aced86c3d890a4d40775e7d3.scope: Deactivated successfully.
Jan 17 13:31:22.243223 kubelet[1920]: E0117 13:31:22.243170 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 13:31:22.417283 containerd[1512]: time="2025-01-17T13:31:22.417146934Z" level=info msg="shim disconnected" id=9d0931d9293e1cf08080d85c0dd24f1f70016342aced86c3d890a4d40775e7d3 namespace=k8s.io
Jan 17 13:31:22.417834 containerd[1512]: time="2025-01-17T13:31:22.417534821Z" level=warning msg="cleaning up after shim disconnected" id=9d0931d9293e1cf08080d85c0dd24f1f70016342aced86c3d890a4d40775e7d3 namespace=k8s.io
Jan 17 13:31:22.417834 containerd[1512]: time="2025-01-17T13:31:22.417564307Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 13:31:22.938540 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9d0931d9293e1cf08080d85c0dd24f1f70016342aced86c3d890a4d40775e7d3-rootfs.mount: Deactivated successfully.
Jan 17 13:31:23.244354 kubelet[1920]: E0117 13:31:23.244304 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 13:31:23.398672 containerd[1512]: time="2025-01-17T13:31:23.398577204Z" level=info msg="CreateContainer within sandbox \"fc30a972ec28816982652131d6bdfd4f12bbb873843383596ae5c295fe189bae\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 17 13:31:23.416639 containerd[1512]: time="2025-01-17T13:31:23.416461114Z" level=info msg="CreateContainer within sandbox \"fc30a972ec28816982652131d6bdfd4f12bbb873843383596ae5c295fe189bae\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"e22c81ce987308d8b725b0673aa5bf32d5ef8609541efb7777d353b289d0dc02\""
Jan 17 13:31:23.419453 containerd[1512]: time="2025-01-17T13:31:23.418290131Z" level=info msg="StartContainer for \"e22c81ce987308d8b725b0673aa5bf32d5ef8609541efb7777d353b289d0dc02\""
Jan 17 13:31:23.470195 systemd[1]: Started cri-containerd-e22c81ce987308d8b725b0673aa5bf32d5ef8609541efb7777d353b289d0dc02.scope - libcontainer container e22c81ce987308d8b725b0673aa5bf32d5ef8609541efb7777d353b289d0dc02.
Jan 17 13:31:23.529801 containerd[1512]: time="2025-01-17T13:31:23.529641974Z" level=info msg="StartContainer for \"e22c81ce987308d8b725b0673aa5bf32d5ef8609541efb7777d353b289d0dc02\" returns successfully"
Jan 17 13:31:23.547064 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 17 13:31:23.547460 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 17 13:31:23.547597 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Jan 17 13:31:23.557298 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 17 13:31:23.557649 systemd[1]: cri-containerd-e22c81ce987308d8b725b0673aa5bf32d5ef8609541efb7777d353b289d0dc02.scope: Deactivated successfully.
Jan 17 13:31:23.597916 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 17 13:31:23.658890 containerd[1512]: time="2025-01-17T13:31:23.658069909Z" level=info msg="shim disconnected" id=e22c81ce987308d8b725b0673aa5bf32d5ef8609541efb7777d353b289d0dc02 namespace=k8s.io
Jan 17 13:31:23.658890 containerd[1512]: time="2025-01-17T13:31:23.658237035Z" level=warning msg="cleaning up after shim disconnected" id=e22c81ce987308d8b725b0673aa5bf32d5ef8609541efb7777d353b289d0dc02 namespace=k8s.io
Jan 17 13:31:23.658890 containerd[1512]: time="2025-01-17T13:31:23.658259548Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 13:31:23.940750 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e22c81ce987308d8b725b0673aa5bf32d5ef8609541efb7777d353b289d0dc02-rootfs.mount: Deactivated successfully.
Jan 17 13:31:24.240506 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount627086258.mount: Deactivated successfully.
Jan 17 13:31:24.245655 kubelet[1920]: E0117 13:31:24.245497 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 13:31:24.408691 containerd[1512]: time="2025-01-17T13:31:24.408203884Z" level=info msg="CreateContainer within sandbox \"fc30a972ec28816982652131d6bdfd4f12bbb873843383596ae5c295fe189bae\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 17 13:31:24.436338 containerd[1512]: time="2025-01-17T13:31:24.436295053Z" level=info msg="CreateContainer within sandbox \"fc30a972ec28816982652131d6bdfd4f12bbb873843383596ae5c295fe189bae\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"d0a58850aaf2bd4c3b3cd384ccb0b44a5faa8edd99591841de8d3b09f114a70f\""
Jan 17 13:31:24.438336 containerd[1512]: time="2025-01-17T13:31:24.437944463Z" level=info msg="StartContainer for \"d0a58850aaf2bd4c3b3cd384ccb0b44a5faa8edd99591841de8d3b09f114a70f\""
Jan 17 13:31:24.518134 systemd[1]: Started cri-containerd-d0a58850aaf2bd4c3b3cd384ccb0b44a5faa8edd99591841de8d3b09f114a70f.scope - libcontainer container d0a58850aaf2bd4c3b3cd384ccb0b44a5faa8edd99591841de8d3b09f114a70f.
Jan 17 13:31:24.592857 systemd[1]: cri-containerd-d0a58850aaf2bd4c3b3cd384ccb0b44a5faa8edd99591841de8d3b09f114a70f.scope: Deactivated successfully.
Jan 17 13:31:24.595589 containerd[1512]: time="2025-01-17T13:31:24.594586256Z" level=info msg="StartContainer for \"d0a58850aaf2bd4c3b3cd384ccb0b44a5faa8edd99591841de8d3b09f114a70f\" returns successfully"
Jan 17 13:31:24.794903 containerd[1512]: time="2025-01-17T13:31:24.794718381Z" level=info msg="shim disconnected" id=d0a58850aaf2bd4c3b3cd384ccb0b44a5faa8edd99591841de8d3b09f114a70f namespace=k8s.io
Jan 17 13:31:24.795213 containerd[1512]: time="2025-01-17T13:31:24.795183081Z" level=warning msg="cleaning up after shim disconnected" id=d0a58850aaf2bd4c3b3cd384ccb0b44a5faa8edd99591841de8d3b09f114a70f namespace=k8s.io
Jan 17 13:31:24.795314 containerd[1512]: time="2025-01-17T13:31:24.795291199Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 13:31:24.939935 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d0a58850aaf2bd4c3b3cd384ccb0b44a5faa8edd99591841de8d3b09f114a70f-rootfs.mount: Deactivated successfully.
Jan 17 13:31:25.008586 containerd[1512]: time="2025-01-17T13:31:25.008527254Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 13:31:25.009715 containerd[1512]: time="2025-01-17T13:31:25.009632380Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.13: active requests=0, bytes read=28620949"
Jan 17 13:31:25.009799 containerd[1512]: time="2025-01-17T13:31:25.009734780Z" level=info msg="ImageCreate event name:\"sha256:f20cf1600da6cce7b7d3fdd3b5ff91243983ea8be3907cccaee1a956770a2f15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 13:31:25.013393 containerd[1512]: time="2025-01-17T13:31:25.013318105Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:dd45846de733434501e436638a7a240f2d379bf0a6bb0404a7684e0cf52c4011\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 13:31:25.014413 containerd[1512]: time="2025-01-17T13:31:25.014257445Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.13\" with image id \"sha256:f20cf1600da6cce7b7d3fdd3b5ff91243983ea8be3907cccaee1a956770a2f15\", repo tag \"registry.k8s.io/kube-proxy:v1.29.13\", repo digest \"registry.k8s.io/kube-proxy@sha256:dd45846de733434501e436638a7a240f2d379bf0a6bb0404a7684e0cf52c4011\", size \"28619960\" in 3.109706807s"
Jan 17 13:31:25.014413 containerd[1512]: time="2025-01-17T13:31:25.014299715Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.13\" returns image reference \"sha256:f20cf1600da6cce7b7d3fdd3b5ff91243983ea8be3907cccaee1a956770a2f15\""
Jan 17 13:31:25.016836 containerd[1512]: time="2025-01-17T13:31:25.016735529Z" level=info msg="CreateContainer within sandbox \"841a22aee13c5d06c013ad8e542dcd78981fae8a262824cf1f5affbdb231012c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jan 17 13:31:25.034865 containerd[1512]: time="2025-01-17T13:31:25.034437617Z" level=info msg="CreateContainer within sandbox \"841a22aee13c5d06c013ad8e542dcd78981fae8a262824cf1f5affbdb231012c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"1672b8db33faf5e59ddf6a779aa0db801d96f6013e0ec0bd9d5c9c8724b32a67\""
Jan 17 13:31:25.037245 containerd[1512]: time="2025-01-17T13:31:25.037158738Z" level=info msg="StartContainer for \"1672b8db33faf5e59ddf6a779aa0db801d96f6013e0ec0bd9d5c9c8724b32a67\""
Jan 17 13:31:25.082030 systemd[1]: Started cri-containerd-1672b8db33faf5e59ddf6a779aa0db801d96f6013e0ec0bd9d5c9c8724b32a67.scope - libcontainer container 1672b8db33faf5e59ddf6a779aa0db801d96f6013e0ec0bd9d5c9c8724b32a67.
Jan 17 13:31:25.122040 containerd[1512]: time="2025-01-17T13:31:25.121916575Z" level=info msg="StartContainer for \"1672b8db33faf5e59ddf6a779aa0db801d96f6013e0ec0bd9d5c9c8724b32a67\" returns successfully"
Jan 17 13:31:25.246000 kubelet[1920]: E0117 13:31:25.245920 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 13:31:25.416886 containerd[1512]: time="2025-01-17T13:31:25.416479224Z" level=info msg="CreateContainer within sandbox \"fc30a972ec28816982652131d6bdfd4f12bbb873843383596ae5c295fe189bae\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 17 13:31:25.435392 containerd[1512]: time="2025-01-17T13:31:25.435333311Z" level=info msg="CreateContainer within sandbox \"fc30a972ec28816982652131d6bdfd4f12bbb873843383596ae5c295fe189bae\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"8542381899c69e10a46b75cc54b1f4e2997e20b5b3811dc298aff50bc3c71099\""
Jan 17 13:31:25.438480 containerd[1512]: time="2025-01-17T13:31:25.436833617Z" level=info msg="StartContainer for \"8542381899c69e10a46b75cc54b1f4e2997e20b5b3811dc298aff50bc3c71099\""
Jan 17 13:31:25.456212 kubelet[1920]: I0117 13:31:25.456098 1920 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-fgk4g" podStartSLOduration=4.073525822 podStartE2EDuration="16.45603005s" podCreationTimestamp="2025-01-17 13:31:09 +0000 UTC" firstStartedPulling="2025-01-17 13:31:12.632064275 +0000 UTC m=+3.928952393" lastFinishedPulling="2025-01-17 13:31:25.014568503 +0000 UTC m=+16.311456621" observedRunningTime="2025-01-17 13:31:25.455546185 +0000 UTC m=+16.752434330" watchObservedRunningTime="2025-01-17 13:31:25.45603005 +0000 UTC m=+16.752918218"
Jan 17 13:31:25.491227 systemd[1]: Started cri-containerd-8542381899c69e10a46b75cc54b1f4e2997e20b5b3811dc298aff50bc3c71099.scope - libcontainer container 8542381899c69e10a46b75cc54b1f4e2997e20b5b3811dc298aff50bc3c71099.
Jan 17 13:31:25.532481 systemd[1]: cri-containerd-8542381899c69e10a46b75cc54b1f4e2997e20b5b3811dc298aff50bc3c71099.scope: Deactivated successfully.
Jan 17 13:31:25.536630 containerd[1512]: time="2025-01-17T13:31:25.535898981Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5cb377ae_d47c_4c26_aa92_6cabcf7a2548.slice/cri-containerd-8542381899c69e10a46b75cc54b1f4e2997e20b5b3811dc298aff50bc3c71099.scope/memory.events\": no such file or directory"
Jan 17 13:31:25.537786 containerd[1512]: time="2025-01-17T13:31:25.537742265Z" level=info msg="StartContainer for \"8542381899c69e10a46b75cc54b1f4e2997e20b5b3811dc298aff50bc3c71099\" returns successfully"
Jan 17 13:31:25.632200 containerd[1512]: time="2025-01-17T13:31:25.632080761Z" level=info msg="shim disconnected" id=8542381899c69e10a46b75cc54b1f4e2997e20b5b3811dc298aff50bc3c71099 namespace=k8s.io
Jan 17 13:31:25.632200 containerd[1512]: time="2025-01-17T13:31:25.632157839Z" level=warning msg="cleaning up after shim disconnected" id=8542381899c69e10a46b75cc54b1f4e2997e20b5b3811dc298aff50bc3c71099 namespace=k8s.io
Jan 17 13:31:25.632200 containerd[1512]: time="2025-01-17T13:31:25.632173655Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 13:31:25.939450 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount329991871.mount: Deactivated successfully.
Jan 17 13:31:26.246670 kubelet[1920]: E0117 13:31:26.246541 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 13:31:26.427369 containerd[1512]: time="2025-01-17T13:31:26.427315221Z" level=info msg="CreateContainer within sandbox \"fc30a972ec28816982652131d6bdfd4f12bbb873843383596ae5c295fe189bae\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 17 13:31:26.459139 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3168782755.mount: Deactivated successfully.
Jan 17 13:31:26.460598 containerd[1512]: time="2025-01-17T13:31:26.460472439Z" level=info msg="CreateContainer within sandbox \"fc30a972ec28816982652131d6bdfd4f12bbb873843383596ae5c295fe189bae\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"4ca4b54234d2ba9dd94e34c3ba64b877369e19977b31ca06874a91db1235e337\""
Jan 17 13:31:26.461344 containerd[1512]: time="2025-01-17T13:31:26.461231659Z" level=info msg="StartContainer for \"4ca4b54234d2ba9dd94e34c3ba64b877369e19977b31ca06874a91db1235e337\""
Jan 17 13:31:26.508033 systemd[1]: Started cri-containerd-4ca4b54234d2ba9dd94e34c3ba64b877369e19977b31ca06874a91db1235e337.scope - libcontainer container 4ca4b54234d2ba9dd94e34c3ba64b877369e19977b31ca06874a91db1235e337.
Jan 17 13:31:26.547198 containerd[1512]: time="2025-01-17T13:31:26.547096398Z" level=info msg="StartContainer for \"4ca4b54234d2ba9dd94e34c3ba64b877369e19977b31ca06874a91db1235e337\" returns successfully"
Jan 17 13:31:26.653885 update_engine[1492]: I20250117 13:31:26.653244 1492 update_attempter.cc:509] Updating boot flags...
Jan 17 13:31:26.735923 kubelet[1920]: I0117 13:31:26.732865 1920 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Jan 17 13:31:26.740236 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2564)
Jan 17 13:31:26.841419 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2564)
Jan 17 13:31:27.238967 kernel: Initializing XFRM netlink socket
Jan 17 13:31:27.247220 kubelet[1920]: E0117 13:31:27.247151 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 13:31:27.468061 kubelet[1920]: I0117 13:31:27.468005 1920 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-dfxbk" podStartSLOduration=9.195561098 podStartE2EDuration="18.467893977s" podCreationTimestamp="2025-01-17 13:31:09 +0000 UTC" firstStartedPulling="2025-01-17 13:31:12.630996532 +0000 UTC m=+3.927884649" lastFinishedPulling="2025-01-17 13:31:21.903329393 +0000 UTC m=+13.200217528" observedRunningTime="2025-01-17 13:31:27.466724589 +0000 UTC m=+18.763612726" watchObservedRunningTime="2025-01-17 13:31:27.467893977 +0000 UTC m=+18.764782129"
Jan 17 13:31:28.248048 kubelet[1920]: E0117 13:31:28.247937 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 13:31:28.977551 systemd-networkd[1415]: cilium_host: Link UP
Jan 17 13:31:28.981490 systemd-networkd[1415]: cilium_net: Link UP
Jan 17 13:31:28.983350 systemd-networkd[1415]: cilium_net: Gained carrier
Jan 17 13:31:28.984336 systemd-networkd[1415]: cilium_host: Gained carrier
Jan 17 13:31:29.138784 systemd-networkd[1415]: cilium_vxlan: Link UP
Jan 17 13:31:29.139015 systemd-networkd[1415]: cilium_vxlan: Gained carrier
Jan 17 13:31:29.231599 kubelet[1920]: E0117 13:31:29.231456 1920 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 13:31:29.248838 kubelet[1920]: E0117 13:31:29.248781 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 13:31:29.539861 kernel: NET: Registered PF_ALG protocol family
Jan 17 13:31:29.588152 systemd-networkd[1415]: cilium_net: Gained IPv6LL
Jan 17 13:31:29.844057 systemd-networkd[1415]: cilium_host: Gained IPv6LL
Jan 17 13:31:30.249958 kubelet[1920]: E0117 13:31:30.249902 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 13:31:30.356021 systemd-networkd[1415]: cilium_vxlan: Gained IPv6LL
Jan 17 13:31:30.552923 systemd-networkd[1415]: lxc_health: Link UP
Jan 17 13:31:30.558887 systemd-networkd[1415]: lxc_health: Gained carrier
Jan 17 13:31:31.250999 kubelet[1920]: E0117 13:31:31.250928 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 13:31:31.956007 systemd-networkd[1415]: lxc_health: Gained IPv6LL
Jan 17 13:31:32.252183 kubelet[1920]: E0117 13:31:32.251545 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 13:31:33.252387 kubelet[1920]: E0117 13:31:33.252296 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 13:31:34.253649 kubelet[1920]: E0117 13:31:34.253538 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 13:31:34.792843 kubelet[1920]: I0117 13:31:34.791761 1920 topology_manager.go:215] "Topology Admit Handler" podUID="0a8fff33-b617-42c8-aa26-cb79e7a7affd" podNamespace="default" podName="nginx-deployment-6d5f899847-5jjpw"
Jan 17 13:31:34.803909 systemd[1]: Created slice kubepods-besteffort-pod0a8fff33_b617_42c8_aa26_cb79e7a7affd.slice - libcontainer container kubepods-besteffort-pod0a8fff33_b617_42c8_aa26_cb79e7a7affd.slice.
Jan 17 13:31:34.824852 kubelet[1920]: I0117 13:31:34.824669 1920 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rrfp\" (UniqueName: \"kubernetes.io/projected/0a8fff33-b617-42c8-aa26-cb79e7a7affd-kube-api-access-7rrfp\") pod \"nginx-deployment-6d5f899847-5jjpw\" (UID: \"0a8fff33-b617-42c8-aa26-cb79e7a7affd\") " pod="default/nginx-deployment-6d5f899847-5jjpw"
Jan 17 13:31:35.111410 containerd[1512]: time="2025-01-17T13:31:35.110987420Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-5jjpw,Uid:0a8fff33-b617-42c8-aa26-cb79e7a7affd,Namespace:default,Attempt:0,}"
Jan 17 13:31:35.183201 systemd-networkd[1415]: lxccab09728ebd2: Link UP
Jan 17 13:31:35.191923 kernel: eth0: renamed from tmp84499
Jan 17 13:31:35.203031 systemd-networkd[1415]: lxccab09728ebd2: Gained carrier
Jan 17 13:31:35.254684 kubelet[1920]: E0117 13:31:35.254621 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 13:31:36.254969 kubelet[1920]: E0117 13:31:36.254870 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 13:31:36.948977 systemd-networkd[1415]: lxccab09728ebd2: Gained IPv6LL
Jan 17 13:31:37.015983 kubelet[1920]: I0117 13:31:37.015795 1920 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 17 13:31:37.216362 containerd[1512]: time="2025-01-17T13:31:37.215799123Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 17 13:31:37.216362 containerd[1512]: time="2025-01-17T13:31:37.215951716Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 17 13:31:37.216362 containerd[1512]: time="2025-01-17T13:31:37.215989984Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 13:31:37.216362 containerd[1512]: time="2025-01-17T13:31:37.216162161Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 13:31:37.255274 kubelet[1920]: E0117 13:31:37.255207 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 13:31:37.279349 systemd[1]: run-containerd-runc-k8s.io-844999ccf8441227a40848b9dcac0d588793dd6cbfdbd7b2a67cf7f54c2f5ab1-runc.vtcVAw.mount: Deactivated successfully.
Jan 17 13:31:37.288032 systemd[1]: Started cri-containerd-844999ccf8441227a40848b9dcac0d588793dd6cbfdbd7b2a67cf7f54c2f5ab1.scope - libcontainer container 844999ccf8441227a40848b9dcac0d588793dd6cbfdbd7b2a67cf7f54c2f5ab1.
Jan 17 13:31:37.356340 containerd[1512]: time="2025-01-17T13:31:37.356254931Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-5jjpw,Uid:0a8fff33-b617-42c8-aa26-cb79e7a7affd,Namespace:default,Attempt:0,} returns sandbox id \"844999ccf8441227a40848b9dcac0d588793dd6cbfdbd7b2a67cf7f54c2f5ab1\"" Jan 17 13:31:37.358901 containerd[1512]: time="2025-01-17T13:31:37.358867304Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 17 13:31:38.255880 kubelet[1920]: E0117 13:31:38.255722 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 13:31:39.256318 kubelet[1920]: E0117 13:31:39.256140 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 13:31:40.256354 kubelet[1920]: E0117 13:31:40.256293 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 13:31:41.006732 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1762146431.mount: Deactivated successfully. 
Jan 17 13:31:41.257850 kubelet[1920]: E0117 13:31:41.256617 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 13:31:42.257306 kubelet[1920]: E0117 13:31:42.257252 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 13:31:42.655940 containerd[1512]: time="2025-01-17T13:31:42.655033267Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 13:31:42.656556 containerd[1512]: time="2025-01-17T13:31:42.656512351Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=71036018"
Jan 17 13:31:42.657268 containerd[1512]: time="2025-01-17T13:31:42.657155503Z" level=info msg="ImageCreate event name:\"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 13:31:42.661100 containerd[1512]: time="2025-01-17T13:31:42.661039643Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:eca1d1ff18c7af45f86b7e0b572090f563a676ddca3da2ecff678390366335ad\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 13:31:42.663528 containerd[1512]: time="2025-01-17T13:31:42.662510614Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:eca1d1ff18c7af45f86b7e0b572090f563a676ddca3da2ecff678390366335ad\", size \"71035896\" in 5.303579093s"
Jan 17 13:31:42.663528 containerd[1512]: time="2025-01-17T13:31:42.662560458Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\""
Jan 17 13:31:42.664996 containerd[1512]: time="2025-01-17T13:31:42.664960177Z" level=info msg="CreateContainer within sandbox \"844999ccf8441227a40848b9dcac0d588793dd6cbfdbd7b2a67cf7f54c2f5ab1\" for container &ContainerMetadata{Name:nginx,Attempt:0,}"
Jan 17 13:31:42.680789 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount141180699.mount: Deactivated successfully.
Jan 17 13:31:42.695169 containerd[1512]: time="2025-01-17T13:31:42.695040860Z" level=info msg="CreateContainer within sandbox \"844999ccf8441227a40848b9dcac0d588793dd6cbfdbd7b2a67cf7f54c2f5ab1\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"e1112041d904d702fa9510bb060470f783ebeba90115ce7ee74622c46d8a7c3d\""
Jan 17 13:31:42.696163 containerd[1512]: time="2025-01-17T13:31:42.695897179Z" level=info msg="StartContainer for \"e1112041d904d702fa9510bb060470f783ebeba90115ce7ee74622c46d8a7c3d\""
Jan 17 13:31:42.742085 systemd[1]: Started cri-containerd-e1112041d904d702fa9510bb060470f783ebeba90115ce7ee74622c46d8a7c3d.scope - libcontainer container e1112041d904d702fa9510bb060470f783ebeba90115ce7ee74622c46d8a7c3d.
Jan 17 13:31:42.782524 containerd[1512]: time="2025-01-17T13:31:42.782447302Z" level=info msg="StartContainer for \"e1112041d904d702fa9510bb060470f783ebeba90115ce7ee74622c46d8a7c3d\" returns successfully"
Jan 17 13:31:43.258360 kubelet[1920]: E0117 13:31:43.258282 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 13:31:43.495729 kubelet[1920]: I0117 13:31:43.495607 1920 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nginx-deployment-6d5f899847-5jjpw" podStartSLOduration=4.190789141 podStartE2EDuration="9.495561678s" podCreationTimestamp="2025-01-17 13:31:34 +0000 UTC" firstStartedPulling="2025-01-17 13:31:37.358132465 +0000 UTC m=+28.655020585" lastFinishedPulling="2025-01-17 13:31:42.662905002 +0000 UTC m=+33.959793122" observedRunningTime="2025-01-17 13:31:43.494516133 +0000 UTC m=+34.791404285" watchObservedRunningTime="2025-01-17 13:31:43.495561678 +0000 UTC m=+34.792449803"
Jan 17 13:31:44.258783 kubelet[1920]: E0117 13:31:44.258712 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 13:31:45.260041 kubelet[1920]: E0117 13:31:45.259911 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 13:31:46.260754 kubelet[1920]: E0117 13:31:46.260690 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 13:31:47.261666 kubelet[1920]: E0117 13:31:47.261555 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 13:31:48.262565 kubelet[1920]: E0117 13:31:48.262380 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 13:31:49.230903 kubelet[1920]: E0117 13:31:49.230798 1920 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 13:31:49.263168 kubelet[1920]: E0117 13:31:49.263126 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 13:31:50.263856 kubelet[1920]: E0117 13:31:50.263772 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 13:31:51.264247 kubelet[1920]: E0117 13:31:51.264127 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 13:31:52.264896 kubelet[1920]: E0117 13:31:52.264801 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 13:31:53.265656 kubelet[1920]: E0117 13:31:53.265580 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 13:31:54.266666 kubelet[1920]: E0117 13:31:54.266572 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 13:31:55.267259 kubelet[1920]: E0117 13:31:55.267193 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 13:31:56.075228 kubelet[1920]: I0117 13:31:56.075087 1920 topology_manager.go:215] "Topology Admit Handler" podUID="a2392a4d-fec3-455a-8b8e-65858df0aa3a" podNamespace="default" podName="nfs-server-provisioner-0"
Jan 17 13:31:56.083457 systemd[1]: Created slice kubepods-besteffort-poda2392a4d_fec3_455a_8b8e_65858df0aa3a.slice - libcontainer container kubepods-besteffort-poda2392a4d_fec3_455a_8b8e_65858df0aa3a.slice.
Jan 17 13:31:56.170413 kubelet[1920]: I0117 13:31:56.170326 1920 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/a2392a4d-fec3-455a-8b8e-65858df0aa3a-data\") pod \"nfs-server-provisioner-0\" (UID: \"a2392a4d-fec3-455a-8b8e-65858df0aa3a\") " pod="default/nfs-server-provisioner-0"
Jan 17 13:31:56.170669 kubelet[1920]: I0117 13:31:56.170461 1920 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kztjw\" (UniqueName: \"kubernetes.io/projected/a2392a4d-fec3-455a-8b8e-65858df0aa3a-kube-api-access-kztjw\") pod \"nfs-server-provisioner-0\" (UID: \"a2392a4d-fec3-455a-8b8e-65858df0aa3a\") " pod="default/nfs-server-provisioner-0"
Jan 17 13:31:56.268041 kubelet[1920]: E0117 13:31:56.267978 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 13:31:56.388731 containerd[1512]: time="2025-01-17T13:31:56.388159232Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:a2392a4d-fec3-455a-8b8e-65858df0aa3a,Namespace:default,Attempt:0,}"
Jan 17 13:31:56.442108 systemd-networkd[1415]: lxcf0054c7374c5: Link UP
Jan 17 13:31:56.458681 kernel: eth0: renamed from tmpc5b4b
Jan 17 13:31:56.465343 systemd-networkd[1415]: lxcf0054c7374c5: Gained carrier
Jan 17 13:31:56.747746 containerd[1512]: time="2025-01-17T13:31:56.747437043Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 17 13:31:56.747746 containerd[1512]: time="2025-01-17T13:31:56.747532880Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 17 13:31:56.747746 containerd[1512]: time="2025-01-17T13:31:56.747558983Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 13:31:56.747746 containerd[1512]: time="2025-01-17T13:31:56.747684435Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 13:31:56.785074 systemd[1]: Started cri-containerd-c5b4bc6c7245a51aa66dff44aa74ad822c8678164bc057f636694c90c8eca7b7.scope - libcontainer container c5b4bc6c7245a51aa66dff44aa74ad822c8678164bc057f636694c90c8eca7b7.
Jan 17 13:31:56.842397 containerd[1512]: time="2025-01-17T13:31:56.842319707Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:a2392a4d-fec3-455a-8b8e-65858df0aa3a,Namespace:default,Attempt:0,} returns sandbox id \"c5b4bc6c7245a51aa66dff44aa74ad822c8678164bc057f636694c90c8eca7b7\""
Jan 17 13:31:56.845855 containerd[1512]: time="2025-01-17T13:31:56.845781010Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\""
Jan 17 13:31:57.269090 kubelet[1920]: E0117 13:31:57.268967 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 13:31:57.940355 systemd-networkd[1415]: lxcf0054c7374c5: Gained IPv6LL
Jan 17 13:31:58.269788 kubelet[1920]: E0117 13:31:58.269731 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 13:31:59.270610 kubelet[1920]: E0117 13:31:59.270460 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 13:31:59.832077 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount563855618.mount: Deactivated successfully.
Jan 17 13:32:00.271937 kubelet[1920]: E0117 13:32:00.271859 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 13:32:01.273719 kubelet[1920]: E0117 13:32:01.273671 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 13:32:02.274869 kubelet[1920]: E0117 13:32:02.274565 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 13:32:02.615133 containerd[1512]: time="2025-01-17T13:32:02.614599167Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 13:32:02.616389 containerd[1512]: time="2025-01-17T13:32:02.616317594Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039414"
Jan 17 13:32:02.618044 containerd[1512]: time="2025-01-17T13:32:02.617980679Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 13:32:02.621775 containerd[1512]: time="2025-01-17T13:32:02.621737128Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 13:32:02.624156 containerd[1512]: time="2025-01-17T13:32:02.623268156Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 5.777403674s"
Jan 17 13:32:02.624156 containerd[1512]: time="2025-01-17T13:32:02.623334778Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\""
Jan 17 13:32:02.626070 containerd[1512]: time="2025-01-17T13:32:02.626036706Z" level=info msg="CreateContainer within sandbox \"c5b4bc6c7245a51aa66dff44aa74ad822c8678164bc057f636694c90c8eca7b7\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}"
Jan 17 13:32:02.644136 containerd[1512]: time="2025-01-17T13:32:02.644093247Z" level=info msg="CreateContainer within sandbox \"c5b4bc6c7245a51aa66dff44aa74ad822c8678164bc057f636694c90c8eca7b7\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"62e1f00896624c80add27f33c0b7ff4bc13fd9fa9e6af0db977ec226da7c4280\""
Jan 17 13:32:02.645054 containerd[1512]: time="2025-01-17T13:32:02.645020433Z" level=info msg="StartContainer for \"62e1f00896624c80add27f33c0b7ff4bc13fd9fa9e6af0db977ec226da7c4280\""
Jan 17 13:32:02.697071 systemd[1]: Started cri-containerd-62e1f00896624c80add27f33c0b7ff4bc13fd9fa9e6af0db977ec226da7c4280.scope - libcontainer container 62e1f00896624c80add27f33c0b7ff4bc13fd9fa9e6af0db977ec226da7c4280.
Jan 17 13:32:02.733313 containerd[1512]: time="2025-01-17T13:32:02.733252661Z" level=info msg="StartContainer for \"62e1f00896624c80add27f33c0b7ff4bc13fd9fa9e6af0db977ec226da7c4280\" returns successfully"
Jan 17 13:32:03.275516 kubelet[1920]: E0117 13:32:03.275440 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 13:32:03.547014 kubelet[1920]: I0117 13:32:03.546849 1920 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.767800619 podStartE2EDuration="7.546782019s" podCreationTimestamp="2025-01-17 13:31:56 +0000 UTC" firstStartedPulling="2025-01-17 13:31:56.844629938 +0000 UTC m=+48.141518057" lastFinishedPulling="2025-01-17 13:32:02.623611333 +0000 UTC m=+53.920499457" observedRunningTime="2025-01-17 13:32:03.545763333 +0000 UTC m=+54.842651477" watchObservedRunningTime="2025-01-17 13:32:03.546782019 +0000 UTC m=+54.843670150"
Jan 17 13:32:04.276642 kubelet[1920]: E0117 13:32:04.276566 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 13:32:05.277061 kubelet[1920]: E0117 13:32:05.276969 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 13:32:06.277940 kubelet[1920]: E0117 13:32:06.277870 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 13:32:07.278943 kubelet[1920]: E0117 13:32:07.278867 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 13:32:08.279291 kubelet[1920]: E0117 13:32:08.279206 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 13:32:09.231477 kubelet[1920]: E0117 13:32:09.231308 1920 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 13:32:09.283846 kubelet[1920]: E0117 13:32:09.282862 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 13:32:10.283344 kubelet[1920]: E0117 13:32:10.283247 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 13:32:11.283999 kubelet[1920]: E0117 13:32:11.283923 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 13:32:12.284689 kubelet[1920]: E0117 13:32:12.284604 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 13:32:12.891831 kubelet[1920]: I0117 13:32:12.891685 1920 topology_manager.go:215] "Topology Admit Handler" podUID="e95396ce-5f9f-42e2-b1c2-193dd7279828" podNamespace="default" podName="test-pod-1"
Jan 17 13:32:12.900738 systemd[1]: Created slice kubepods-besteffort-pode95396ce_5f9f_42e2_b1c2_193dd7279828.slice - libcontainer container kubepods-besteffort-pode95396ce_5f9f_42e2_b1c2_193dd7279828.slice.
Jan 17 13:32:12.978739 kubelet[1920]: I0117 13:32:12.978673 1920 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-8a4ea327-19ef-46e6-842e-1d5f9775f8cb\" (UniqueName: \"kubernetes.io/nfs/e95396ce-5f9f-42e2-b1c2-193dd7279828-pvc-8a4ea327-19ef-46e6-842e-1d5f9775f8cb\") pod \"test-pod-1\" (UID: \"e95396ce-5f9f-42e2-b1c2-193dd7279828\") " pod="default/test-pod-1"
Jan 17 13:32:12.978739 kubelet[1920]: I0117 13:32:12.978743 1920 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vnlv\" (UniqueName: \"kubernetes.io/projected/e95396ce-5f9f-42e2-b1c2-193dd7279828-kube-api-access-6vnlv\") pod \"test-pod-1\" (UID: \"e95396ce-5f9f-42e2-b1c2-193dd7279828\") " pod="default/test-pod-1"
Jan 17 13:32:13.123867 kernel: FS-Cache: Loaded
Jan 17 13:32:13.214134 kernel: RPC: Registered named UNIX socket transport module.
Jan 17 13:32:13.214249 kernel: RPC: Registered udp transport module.
Jan 17 13:32:13.215136 kernel: RPC: Registered tcp transport module.
Jan 17 13:32:13.216132 kernel: RPC: Registered tcp-with-tls transport module.
Jan 17 13:32:13.217294 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Jan 17 13:32:13.285248 kubelet[1920]: E0117 13:32:13.285114 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 13:32:13.560194 kernel: NFS: Registering the id_resolver key type
Jan 17 13:32:13.560396 kernel: Key type id_resolver registered
Jan 17 13:32:13.560451 kernel: Key type id_legacy registered
Jan 17 13:32:13.609179 nfsidmap[3323]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'gb1.brightbox.com'
Jan 17 13:32:13.617187 nfsidmap[3327]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'gb1.brightbox.com'
Jan 17 13:32:13.805981 containerd[1512]: time="2025-01-17T13:32:13.805902413Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:e95396ce-5f9f-42e2-b1c2-193dd7279828,Namespace:default,Attempt:0,}"
Jan 17 13:32:13.877911 systemd-networkd[1415]: lxc55dfb81dca78: Link UP
Jan 17 13:32:13.884832 kernel: eth0: renamed from tmpbd3ed
Jan 17 13:32:13.893373 systemd-networkd[1415]: lxc55dfb81dca78: Gained carrier
Jan 17 13:32:14.169925 containerd[1512]: time="2025-01-17T13:32:14.169655404Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 17 13:32:14.170199 containerd[1512]: time="2025-01-17T13:32:14.169746997Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 17 13:32:14.170199 containerd[1512]: time="2025-01-17T13:32:14.169769703Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 13:32:14.170199 containerd[1512]: time="2025-01-17T13:32:14.170148810Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 13:32:14.199059 systemd[1]: run-containerd-runc-k8s.io-bd3ed76f31cc5b0e11f359bf9e0651ef03fcd0c5bff05499e792a3d554353f41-runc.Iw6fF0.mount: Deactivated successfully.
Jan 17 13:32:14.210071 systemd[1]: Started cri-containerd-bd3ed76f31cc5b0e11f359bf9e0651ef03fcd0c5bff05499e792a3d554353f41.scope - libcontainer container bd3ed76f31cc5b0e11f359bf9e0651ef03fcd0c5bff05499e792a3d554353f41.
Jan 17 13:32:14.266187 containerd[1512]: time="2025-01-17T13:32:14.266100293Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:e95396ce-5f9f-42e2-b1c2-193dd7279828,Namespace:default,Attempt:0,} returns sandbox id \"bd3ed76f31cc5b0e11f359bf9e0651ef03fcd0c5bff05499e792a3d554353f41\""
Jan 17 13:32:14.269158 containerd[1512]: time="2025-01-17T13:32:14.269099051Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Jan 17 13:32:14.285875 kubelet[1920]: E0117 13:32:14.285800 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 13:32:14.641384 containerd[1512]: time="2025-01-17T13:32:14.641164661Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 13:32:14.642055 containerd[1512]: time="2025-01-17T13:32:14.641995907Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61"
Jan 17 13:32:14.647611 containerd[1512]: time="2025-01-17T13:32:14.647566416Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:eca1d1ff18c7af45f86b7e0b572090f563a676ddca3da2ecff678390366335ad\", size \"71035896\" in 378.261209ms"
Jan 17 13:32:14.647709 containerd[1512]: time="2025-01-17T13:32:14.647609375Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\""
Jan 17 13:32:14.650679 containerd[1512]: time="2025-01-17T13:32:14.650634981Z" level=info msg="CreateContainer within sandbox \"bd3ed76f31cc5b0e11f359bf9e0651ef03fcd0c5bff05499e792a3d554353f41\" for container &ContainerMetadata{Name:test,Attempt:0,}"
Jan 17 13:32:14.688119 containerd[1512]: time="2025-01-17T13:32:14.688044298Z" level=info msg="CreateContainer within sandbox \"bd3ed76f31cc5b0e11f359bf9e0651ef03fcd0c5bff05499e792a3d554353f41\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"db432cfaeead16eb2ec0457665d000b633b658c18c1e4a272b549b787d48def6\""
Jan 17 13:32:14.688755 containerd[1512]: time="2025-01-17T13:32:14.688711887Z" level=info msg="StartContainer for \"db432cfaeead16eb2ec0457665d000b633b658c18c1e4a272b549b787d48def6\""
Jan 17 13:32:14.728066 systemd[1]: Started cri-containerd-db432cfaeead16eb2ec0457665d000b633b658c18c1e4a272b549b787d48def6.scope - libcontainer container db432cfaeead16eb2ec0457665d000b633b658c18c1e4a272b549b787d48def6.
Jan 17 13:32:14.760452 containerd[1512]: time="2025-01-17T13:32:14.760384026Z" level=info msg="StartContainer for \"db432cfaeead16eb2ec0457665d000b633b658c18c1e4a272b549b787d48def6\" returns successfully"
Jan 17 13:32:15.176009 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount879364197.mount: Deactivated successfully.
Jan 17 13:32:15.286703 kubelet[1920]: E0117 13:32:15.286624 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 13:32:15.348223 systemd-networkd[1415]: lxc55dfb81dca78: Gained IPv6LL
Jan 17 13:32:15.571329 kubelet[1920]: I0117 13:32:15.571277 1920 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=17.191528737 podStartE2EDuration="17.571194981s" podCreationTimestamp="2025-01-17 13:31:58 +0000 UTC" firstStartedPulling="2025-01-17 13:32:14.268328831 +0000 UTC m=+65.565216953" lastFinishedPulling="2025-01-17 13:32:14.647995065 +0000 UTC m=+65.944883197" observedRunningTime="2025-01-17 13:32:15.570126426 +0000 UTC m=+66.867014579" watchObservedRunningTime="2025-01-17 13:32:15.571194981 +0000 UTC m=+66.868083112"
Jan 17 13:32:16.287786 kubelet[1920]: E0117 13:32:16.287719 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 13:32:17.288957 kubelet[1920]: E0117 13:32:17.288854 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 13:32:18.290109 kubelet[1920]: E0117 13:32:18.290055 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 13:32:19.291968 kubelet[1920]: E0117 13:32:19.291884 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 13:32:20.292861 kubelet[1920]: E0117 13:32:20.292738 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 13:32:21.293209 kubelet[1920]: E0117 13:32:21.293135 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 13:32:22.294306 kubelet[1920]: E0117 13:32:22.294217 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 13:32:23.295370 kubelet[1920]: E0117 13:32:23.295295 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 17 13:32:23.854688 containerd[1512]: time="2025-01-17T13:32:23.854584502Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 17 13:32:23.863125 containerd[1512]: time="2025-01-17T13:32:23.863029885Z" level=info msg="StopContainer for \"4ca4b54234d2ba9dd94e34c3ba64b877369e19977b31ca06874a91db1235e337\" with timeout 2 (s)"
Jan 17 13:32:23.863672 containerd[1512]: time="2025-01-17T13:32:23.863610095Z" level=info msg="Stop container \"4ca4b54234d2ba9dd94e34c3ba64b877369e19977b31ca06874a91db1235e337\" with signal terminated"
Jan 17 13:32:23.875581 systemd-networkd[1415]: lxc_health: Link DOWN
Jan 17 13:32:23.875593 systemd-networkd[1415]: lxc_health: Lost carrier
Jan 17 13:32:23.898239 systemd[1]: cri-containerd-4ca4b54234d2ba9dd94e34c3ba64b877369e19977b31ca06874a91db1235e337.scope: Deactivated successfully.
Jan 17 13:32:23.899248 systemd[1]: cri-containerd-4ca4b54234d2ba9dd94e34c3ba64b877369e19977b31ca06874a91db1235e337.scope: Consumed 10.110s CPU time.
Jan 17 13:32:23.934909 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4ca4b54234d2ba9dd94e34c3ba64b877369e19977b31ca06874a91db1235e337-rootfs.mount: Deactivated successfully.
Jan 17 13:32:23.971394 containerd[1512]: time="2025-01-17T13:32:23.945152417Z" level=info msg="shim disconnected" id=4ca4b54234d2ba9dd94e34c3ba64b877369e19977b31ca06874a91db1235e337 namespace=k8s.io
Jan 17 13:32:23.971394 containerd[1512]: time="2025-01-17T13:32:23.971367272Z" level=warning msg="cleaning up after shim disconnected" id=4ca4b54234d2ba9dd94e34c3ba64b877369e19977b31ca06874a91db1235e337 namespace=k8s.io
Jan 17 13:32:23.971394 containerd[1512]: time="2025-01-17T13:32:23.971396436Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 13:32:23.993905 containerd[1512]: time="2025-01-17T13:32:23.993469295Z" level=info msg="StopContainer for \"4ca4b54234d2ba9dd94e34c3ba64b877369e19977b31ca06874a91db1235e337\" returns successfully"
Jan 17 13:32:24.020767 containerd[1512]: time="2025-01-17T13:32:24.020698220Z" level=info msg="StopPodSandbox for \"fc30a972ec28816982652131d6bdfd4f12bbb873843383596ae5c295fe189bae\""
Jan 17 13:32:24.020968 containerd[1512]: time="2025-01-17T13:32:24.020772967Z" level=info msg="Container to stop \"4ca4b54234d2ba9dd94e34c3ba64b877369e19977b31ca06874a91db1235e337\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 17 13:32:24.020968 containerd[1512]: time="2025-01-17T13:32:24.020797083Z" level=info msg="Container to stop \"e22c81ce987308d8b725b0673aa5bf32d5ef8609541efb7777d353b289d0dc02\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 17 13:32:24.020968 containerd[1512]: time="2025-01-17T13:32:24.020838123Z" level=info msg="Container to stop \"d0a58850aaf2bd4c3b3cd384ccb0b44a5faa8edd99591841de8d3b09f114a70f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 17 13:32:24.020968 containerd[1512]: time="2025-01-17T13:32:24.020856766Z" level=info msg="Container to stop \"9d0931d9293e1cf08080d85c0dd24f1f70016342aced86c3d890a4d40775e7d3\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 17 13:32:24.020968 containerd[1512]: time="2025-01-17T13:32:24.020873056Z" level=info msg="Container to stop \"8542381899c69e10a46b75cc54b1f4e2997e20b5b3811dc298aff50bc3c71099\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 17 13:32:24.024596 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fc30a972ec28816982652131d6bdfd4f12bbb873843383596ae5c295fe189bae-shm.mount: Deactivated successfully.
Jan 17 13:32:24.033712 systemd[1]: cri-containerd-fc30a972ec28816982652131d6bdfd4f12bbb873843383596ae5c295fe189bae.scope: Deactivated successfully.
Jan 17 13:32:24.065595 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fc30a972ec28816982652131d6bdfd4f12bbb873843383596ae5c295fe189bae-rootfs.mount: Deactivated successfully.
Jan 17 13:32:24.070490 containerd[1512]: time="2025-01-17T13:32:24.070180967Z" level=info msg="shim disconnected" id=fc30a972ec28816982652131d6bdfd4f12bbb873843383596ae5c295fe189bae namespace=k8s.io
Jan 17 13:32:24.070490 containerd[1512]: time="2025-01-17T13:32:24.070264866Z" level=warning msg="cleaning up after shim disconnected" id=fc30a972ec28816982652131d6bdfd4f12bbb873843383596ae5c295fe189bae namespace=k8s.io
Jan 17 13:32:24.070490 containerd[1512]: time="2025-01-17T13:32:24.070280436Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 13:32:24.098837 containerd[1512]: time="2025-01-17T13:32:24.098725847Z" level=info msg="TearDown network for sandbox \"fc30a972ec28816982652131d6bdfd4f12bbb873843383596ae5c295fe189bae\" successfully"
Jan 17 13:32:24.098837 containerd[1512]: time="2025-01-17T13:32:24.098779831Z" level=info msg="StopPodSandbox for \"fc30a972ec28816982652131d6bdfd4f12bbb873843383596ae5c295fe189bae\" returns successfully"
Jan 17 13:32:24.251851 kubelet[1920]: I0117 13:32:24.250756 1920 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5cb377ae-d47c-4c26-aa92-6cabcf7a2548-xtables-lock\") pod \"5cb377ae-d47c-4c26-aa92-6cabcf7a2548\" (UID: \"5cb377ae-d47c-4c26-aa92-6cabcf7a2548\") "
Jan 17 13:32:24.251851 kubelet[1920]: I0117 13:32:24.250850 1920 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-skdsz\" (UniqueName: \"kubernetes.io/projected/5cb377ae-d47c-4c26-aa92-6cabcf7a2548-kube-api-access-skdsz\") pod \"5cb377ae-d47c-4c26-aa92-6cabcf7a2548\" (UID: \"5cb377ae-d47c-4c26-aa92-6cabcf7a2548\") "
Jan 17 13:32:24.251851 kubelet[1920]: I0117 13:32:24.250893 1920 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5cb377ae-d47c-4c26-aa92-6cabcf7a2548-cilium-config-path\") pod \"5cb377ae-d47c-4c26-aa92-6cabcf7a2548\" (UID: \"5cb377ae-d47c-4c26-aa92-6cabcf7a2548\") "
Jan 17 13:32:24.251851 kubelet[1920]: I0117 13:32:24.250921 1920 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5cb377ae-d47c-4c26-aa92-6cabcf7a2548-bpf-maps\") pod \"5cb377ae-d47c-4c26-aa92-6cabcf7a2548\" (UID: \"5cb377ae-d47c-4c26-aa92-6cabcf7a2548\") "
Jan 17 13:32:24.251851 kubelet[1920]: I0117 13:32:24.250947 1920 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5cb377ae-d47c-4c26-aa92-6cabcf7a2548-etc-cni-netd\") pod \"5cb377ae-d47c-4c26-aa92-6cabcf7a2548\" (UID: \"5cb377ae-d47c-4c26-aa92-6cabcf7a2548\") "
Jan 17 13:32:24.251851 kubelet[1920]: I0117 13:32:24.250939 1920 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5cb377ae-d47c-4c26-aa92-6cabcf7a2548-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "5cb377ae-d47c-4c26-aa92-6cabcf7a2548" (UID: "5cb377ae-d47c-4c26-aa92-6cabcf7a2548"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 17 13:32:24.252327 kubelet[1920]: I0117 13:32:24.251002 1920 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5cb377ae-d47c-4c26-aa92-6cabcf7a2548-clustermesh-secrets\") pod \"5cb377ae-d47c-4c26-aa92-6cabcf7a2548\" (UID: \"5cb377ae-d47c-4c26-aa92-6cabcf7a2548\") "
Jan 17 13:32:24.252327 kubelet[1920]: I0117 13:32:24.251033 1920 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5cb377ae-d47c-4c26-aa92-6cabcf7a2548-cilium-run\") pod \"5cb377ae-d47c-4c26-aa92-6cabcf7a2548\" (UID: \"5cb377ae-d47c-4c26-aa92-6cabcf7a2548\") "
Jan 17 13:32:24.252327 kubelet[1920]: I0117 13:32:24.251071 1920 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5cb377ae-d47c-4c26-aa92-6cabcf7a2548-host-proc-sys-net\") pod \"5cb377ae-d47c-4c26-aa92-6cabcf7a2548\" (UID: \"5cb377ae-d47c-4c26-aa92-6cabcf7a2548\") "
Jan 17 13:32:24.252327 kubelet[1920]: I0117 13:32:24.251100 1920 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5cb377ae-d47c-4c26-aa92-6cabcf7a2548-cilium-cgroup\") pod \"5cb377ae-d47c-4c26-aa92-6cabcf7a2548\" (UID: \"5cb377ae-d47c-4c26-aa92-6cabcf7a2548\") "
Jan 17 13:32:24.252327 kubelet[1920]: I0117 13:32:24.251127 1920 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5cb377ae-d47c-4c26-aa92-6cabcf7a2548-hostproc\") pod \"5cb377ae-d47c-4c26-aa92-6cabcf7a2548\" (UID: \"5cb377ae-d47c-4c26-aa92-6cabcf7a2548\") "
Jan 17 13:32:24.252327 kubelet[1920]: I0117 13:32:24.251152 1920 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5cb377ae-d47c-4c26-aa92-6cabcf7a2548-host-proc-sys-kernel\") pod \"5cb377ae-d47c-4c26-aa92-6cabcf7a2548\" (UID: \"5cb377ae-d47c-4c26-aa92-6cabcf7a2548\") "
Jan 17 13:32:24.252704 kubelet[1920]: I0117 13:32:24.251178 1920 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5cb377ae-d47c-4c26-aa92-6cabcf7a2548-cni-path\") pod \"5cb377ae-d47c-4c26-aa92-6cabcf7a2548\" (UID: \"5cb377ae-d47c-4c26-aa92-6cabcf7a2548\") "
Jan 17 13:32:24.252704 kubelet[1920]: I0117 13:32:24.251203 1920 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5cb377ae-d47c-4c26-aa92-6cabcf7a2548-lib-modules\") pod \"5cb377ae-d47c-4c26-aa92-6cabcf7a2548\" (UID: \"5cb377ae-d47c-4c26-aa92-6cabcf7a2548\") "
Jan 17 13:32:24.252704 kubelet[1920]: I0117 13:32:24.251253 1920 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5cb377ae-d47c-4c26-aa92-6cabcf7a2548-hubble-tls\") pod \"5cb377ae-d47c-4c26-aa92-6cabcf7a2548\" (UID: \"5cb377ae-d47c-4c26-aa92-6cabcf7a2548\") "
Jan 17 13:32:24.252704 kubelet[1920]: I0117 13:32:24.251326 1920 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5cb377ae-d47c-4c26-aa92-6cabcf7a2548-xtables-lock\") on node \"10.230.31.134\" DevicePath \"\""
Jan 17 13:32:24.254072 kubelet[1920]: I0117 13:32:24.254037 1920 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5cb377ae-d47c-4c26-aa92-6cabcf7a2548-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "5cb377ae-d47c-4c26-aa92-6cabcf7a2548" (UID: "5cb377ae-d47c-4c26-aa92-6cabcf7a2548"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 17 13:32:24.254143 kubelet[1920]: I0117 13:32:24.254109 1920 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5cb377ae-d47c-4c26-aa92-6cabcf7a2548-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "5cb377ae-d47c-4c26-aa92-6cabcf7a2548" (UID: "5cb377ae-d47c-4c26-aa92-6cabcf7a2548"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 17 13:32:24.258828 kubelet[1920]: I0117 13:32:24.257165 1920 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5cb377ae-d47c-4c26-aa92-6cabcf7a2548-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "5cb377ae-d47c-4c26-aa92-6cabcf7a2548" (UID: "5cb377ae-d47c-4c26-aa92-6cabcf7a2548"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 17 13:32:24.258828 kubelet[1920]: I0117 13:32:24.257212 1920 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5cb377ae-d47c-4c26-aa92-6cabcf7a2548-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "5cb377ae-d47c-4c26-aa92-6cabcf7a2548" (UID: "5cb377ae-d47c-4c26-aa92-6cabcf7a2548"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 17 13:32:24.258828 kubelet[1920]: I0117 13:32:24.257257 1920 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5cb377ae-d47c-4c26-aa92-6cabcf7a2548-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "5cb377ae-d47c-4c26-aa92-6cabcf7a2548" (UID: "5cb377ae-d47c-4c26-aa92-6cabcf7a2548"). InnerVolumeSpecName "cilium-cgroup".
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 17 13:32:24.258828 kubelet[1920]: I0117 13:32:24.257287 1920 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5cb377ae-d47c-4c26-aa92-6cabcf7a2548-hostproc" (OuterVolumeSpecName: "hostproc") pod "5cb377ae-d47c-4c26-aa92-6cabcf7a2548" (UID: "5cb377ae-d47c-4c26-aa92-6cabcf7a2548"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 17 13:32:24.258828 kubelet[1920]: I0117 13:32:24.257312 1920 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5cb377ae-d47c-4c26-aa92-6cabcf7a2548-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "5cb377ae-d47c-4c26-aa92-6cabcf7a2548" (UID: "5cb377ae-d47c-4c26-aa92-6cabcf7a2548"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 17 13:32:24.259107 kubelet[1920]: I0117 13:32:24.257340 1920 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5cb377ae-d47c-4c26-aa92-6cabcf7a2548-cni-path" (OuterVolumeSpecName: "cni-path") pod "5cb377ae-d47c-4c26-aa92-6cabcf7a2548" (UID: "5cb377ae-d47c-4c26-aa92-6cabcf7a2548"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 17 13:32:24.259107 kubelet[1920]: I0117 13:32:24.257380 1920 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5cb377ae-d47c-4c26-aa92-6cabcf7a2548-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "5cb377ae-d47c-4c26-aa92-6cabcf7a2548" (UID: "5cb377ae-d47c-4c26-aa92-6cabcf7a2548"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 17 13:32:24.259588 systemd[1]: var-lib-kubelet-pods-5cb377ae\x2dd47c\x2d4c26\x2daa92\x2d6cabcf7a2548-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Jan 17 13:32:24.263311 kubelet[1920]: I0117 13:32:24.263276 1920 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5cb377ae-d47c-4c26-aa92-6cabcf7a2548-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "5cb377ae-d47c-4c26-aa92-6cabcf7a2548" (UID: "5cb377ae-d47c-4c26-aa92-6cabcf7a2548"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 17 13:32:24.265203 kubelet[1920]: I0117 13:32:24.265174 1920 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5cb377ae-d47c-4c26-aa92-6cabcf7a2548-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "5cb377ae-d47c-4c26-aa92-6cabcf7a2548" (UID: "5cb377ae-d47c-4c26-aa92-6cabcf7a2548"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 17 13:32:24.265358 kubelet[1920]: I0117 13:32:24.265333 1920 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5cb377ae-d47c-4c26-aa92-6cabcf7a2548-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "5cb377ae-d47c-4c26-aa92-6cabcf7a2548" (UID: "5cb377ae-d47c-4c26-aa92-6cabcf7a2548"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 17 13:32:24.265471 kubelet[1920]: I0117 13:32:24.265394 1920 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5cb377ae-d47c-4c26-aa92-6cabcf7a2548-kube-api-access-skdsz" (OuterVolumeSpecName: "kube-api-access-skdsz") pod "5cb377ae-d47c-4c26-aa92-6cabcf7a2548" (UID: "5cb377ae-d47c-4c26-aa92-6cabcf7a2548"). InnerVolumeSpecName "kube-api-access-skdsz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 17 13:32:24.295593 kubelet[1920]: E0117 13:32:24.295561 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 13:32:24.351869 kubelet[1920]: I0117 13:32:24.351470 1920 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5cb377ae-d47c-4c26-aa92-6cabcf7a2548-cilium-config-path\") on node \"10.230.31.134\" DevicePath \"\"" Jan 17 13:32:24.351869 kubelet[1920]: I0117 13:32:24.351517 1920 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5cb377ae-d47c-4c26-aa92-6cabcf7a2548-bpf-maps\") on node \"10.230.31.134\" DevicePath \"\"" Jan 17 13:32:24.351869 kubelet[1920]: I0117 13:32:24.351535 1920 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5cb377ae-d47c-4c26-aa92-6cabcf7a2548-etc-cni-netd\") on node \"10.230.31.134\" DevicePath \"\"" Jan 17 13:32:24.351869 kubelet[1920]: I0117 13:32:24.351552 1920 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5cb377ae-d47c-4c26-aa92-6cabcf7a2548-clustermesh-secrets\") on node \"10.230.31.134\" DevicePath \"\"" Jan 17 13:32:24.351869 kubelet[1920]: I0117 13:32:24.351569 1920 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-skdsz\" (UniqueName: \"kubernetes.io/projected/5cb377ae-d47c-4c26-aa92-6cabcf7a2548-kube-api-access-skdsz\") on node \"10.230.31.134\" DevicePath \"\"" Jan 17 13:32:24.351869 kubelet[1920]: I0117 13:32:24.351596 1920 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5cb377ae-d47c-4c26-aa92-6cabcf7a2548-host-proc-sys-net\") on node \"10.230.31.134\" DevicePath \"\"" Jan 17 13:32:24.351869 kubelet[1920]: I0117 13:32:24.351611 1920 reconciler_common.go:300] 
"Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5cb377ae-d47c-4c26-aa92-6cabcf7a2548-cilium-cgroup\") on node \"10.230.31.134\" DevicePath \"\"" Jan 17 13:32:24.351869 kubelet[1920]: I0117 13:32:24.351626 1920 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5cb377ae-d47c-4c26-aa92-6cabcf7a2548-hostproc\") on node \"10.230.31.134\" DevicePath \"\"" Jan 17 13:32:24.352413 kubelet[1920]: I0117 13:32:24.351666 1920 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5cb377ae-d47c-4c26-aa92-6cabcf7a2548-host-proc-sys-kernel\") on node \"10.230.31.134\" DevicePath \"\"" Jan 17 13:32:24.352413 kubelet[1920]: I0117 13:32:24.351685 1920 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5cb377ae-d47c-4c26-aa92-6cabcf7a2548-cilium-run\") on node \"10.230.31.134\" DevicePath \"\"" Jan 17 13:32:24.352413 kubelet[1920]: I0117 13:32:24.351700 1920 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5cb377ae-d47c-4c26-aa92-6cabcf7a2548-cni-path\") on node \"10.230.31.134\" DevicePath \"\"" Jan 17 13:32:24.352413 kubelet[1920]: I0117 13:32:24.351716 1920 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5cb377ae-d47c-4c26-aa92-6cabcf7a2548-lib-modules\") on node \"10.230.31.134\" DevicePath \"\"" Jan 17 13:32:24.352413 kubelet[1920]: I0117 13:32:24.351731 1920 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5cb377ae-d47c-4c26-aa92-6cabcf7a2548-hubble-tls\") on node \"10.230.31.134\" DevicePath \"\"" Jan 17 13:32:24.352413 kubelet[1920]: E0117 13:32:24.352191 1920 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: cni plugin not initialized" Jan 17 13:32:24.581409 kubelet[1920]: I0117 13:32:24.581245 1920 scope.go:117] "RemoveContainer" containerID="4ca4b54234d2ba9dd94e34c3ba64b877369e19977b31ca06874a91db1235e337" Jan 17 13:32:24.585720 containerd[1512]: time="2025-01-17T13:32:24.585341759Z" level=info msg="RemoveContainer for \"4ca4b54234d2ba9dd94e34c3ba64b877369e19977b31ca06874a91db1235e337\"" Jan 17 13:32:24.590834 containerd[1512]: time="2025-01-17T13:32:24.590698590Z" level=info msg="RemoveContainer for \"4ca4b54234d2ba9dd94e34c3ba64b877369e19977b31ca06874a91db1235e337\" returns successfully" Jan 17 13:32:24.591349 kubelet[1920]: I0117 13:32:24.591307 1920 scope.go:117] "RemoveContainer" containerID="8542381899c69e10a46b75cc54b1f4e2997e20b5b3811dc298aff50bc3c71099" Jan 17 13:32:24.592171 systemd[1]: Removed slice kubepods-burstable-pod5cb377ae_d47c_4c26_aa92_6cabcf7a2548.slice - libcontainer container kubepods-burstable-pod5cb377ae_d47c_4c26_aa92_6cabcf7a2548.slice. Jan 17 13:32:24.592336 systemd[1]: kubepods-burstable-pod5cb377ae_d47c_4c26_aa92_6cabcf7a2548.slice: Consumed 10.229s CPU time. 
Jan 17 13:32:24.593981 containerd[1512]: time="2025-01-17T13:32:24.593412988Z" level=info msg="RemoveContainer for \"8542381899c69e10a46b75cc54b1f4e2997e20b5b3811dc298aff50bc3c71099\"" Jan 17 13:32:24.609761 containerd[1512]: time="2025-01-17T13:32:24.609632462Z" level=info msg="RemoveContainer for \"8542381899c69e10a46b75cc54b1f4e2997e20b5b3811dc298aff50bc3c71099\" returns successfully" Jan 17 13:32:24.609999 kubelet[1920]: I0117 13:32:24.609890 1920 scope.go:117] "RemoveContainer" containerID="d0a58850aaf2bd4c3b3cd384ccb0b44a5faa8edd99591841de8d3b09f114a70f" Jan 17 13:32:24.611321 containerd[1512]: time="2025-01-17T13:32:24.611292216Z" level=info msg="RemoveContainer for \"d0a58850aaf2bd4c3b3cd384ccb0b44a5faa8edd99591841de8d3b09f114a70f\"" Jan 17 13:32:24.613963 containerd[1512]: time="2025-01-17T13:32:24.613917823Z" level=info msg="RemoveContainer for \"d0a58850aaf2bd4c3b3cd384ccb0b44a5faa8edd99591841de8d3b09f114a70f\" returns successfully" Jan 17 13:32:24.614156 kubelet[1920]: I0117 13:32:24.614098 1920 scope.go:117] "RemoveContainer" containerID="e22c81ce987308d8b725b0673aa5bf32d5ef8609541efb7777d353b289d0dc02" Jan 17 13:32:24.615293 containerd[1512]: time="2025-01-17T13:32:24.615252312Z" level=info msg="RemoveContainer for \"e22c81ce987308d8b725b0673aa5bf32d5ef8609541efb7777d353b289d0dc02\"" Jan 17 13:32:24.625824 containerd[1512]: time="2025-01-17T13:32:24.624581162Z" level=info msg="RemoveContainer for \"e22c81ce987308d8b725b0673aa5bf32d5ef8609541efb7777d353b289d0dc02\" returns successfully" Jan 17 13:32:24.628569 kubelet[1920]: I0117 13:32:24.628532 1920 scope.go:117] "RemoveContainer" containerID="9d0931d9293e1cf08080d85c0dd24f1f70016342aced86c3d890a4d40775e7d3" Jan 17 13:32:24.631346 containerd[1512]: time="2025-01-17T13:32:24.631291416Z" level=info msg="RemoveContainer for \"9d0931d9293e1cf08080d85c0dd24f1f70016342aced86c3d890a4d40775e7d3\"" Jan 17 13:32:24.634040 containerd[1512]: time="2025-01-17T13:32:24.633998677Z" level=info msg="RemoveContainer 
for \"9d0931d9293e1cf08080d85c0dd24f1f70016342aced86c3d890a4d40775e7d3\" returns successfully" Jan 17 13:32:24.634284 kubelet[1920]: I0117 13:32:24.634183 1920 scope.go:117] "RemoveContainer" containerID="4ca4b54234d2ba9dd94e34c3ba64b877369e19977b31ca06874a91db1235e337" Jan 17 13:32:24.639136 containerd[1512]: time="2025-01-17T13:32:24.639035590Z" level=error msg="ContainerStatus for \"4ca4b54234d2ba9dd94e34c3ba64b877369e19977b31ca06874a91db1235e337\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4ca4b54234d2ba9dd94e34c3ba64b877369e19977b31ca06874a91db1235e337\": not found" Jan 17 13:32:24.664567 kubelet[1920]: E0117 13:32:24.664512 1920 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4ca4b54234d2ba9dd94e34c3ba64b877369e19977b31ca06874a91db1235e337\": not found" containerID="4ca4b54234d2ba9dd94e34c3ba64b877369e19977b31ca06874a91db1235e337" Jan 17 13:32:24.664723 kubelet[1920]: I0117 13:32:24.664686 1920 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4ca4b54234d2ba9dd94e34c3ba64b877369e19977b31ca06874a91db1235e337"} err="failed to get container status \"4ca4b54234d2ba9dd94e34c3ba64b877369e19977b31ca06874a91db1235e337\": rpc error: code = NotFound desc = an error occurred when try to find container \"4ca4b54234d2ba9dd94e34c3ba64b877369e19977b31ca06874a91db1235e337\": not found" Jan 17 13:32:24.664777 kubelet[1920]: I0117 13:32:24.664730 1920 scope.go:117] "RemoveContainer" containerID="8542381899c69e10a46b75cc54b1f4e2997e20b5b3811dc298aff50bc3c71099" Jan 17 13:32:24.665100 containerd[1512]: time="2025-01-17T13:32:24.665021772Z" level=error msg="ContainerStatus for \"8542381899c69e10a46b75cc54b1f4e2997e20b5b3811dc298aff50bc3c71099\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"8542381899c69e10a46b75cc54b1f4e2997e20b5b3811dc298aff50bc3c71099\": not found" Jan 17 13:32:24.665337 kubelet[1920]: E0117 13:32:24.665232 1920 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8542381899c69e10a46b75cc54b1f4e2997e20b5b3811dc298aff50bc3c71099\": not found" containerID="8542381899c69e10a46b75cc54b1f4e2997e20b5b3811dc298aff50bc3c71099" Jan 17 13:32:24.665337 kubelet[1920]: I0117 13:32:24.665269 1920 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8542381899c69e10a46b75cc54b1f4e2997e20b5b3811dc298aff50bc3c71099"} err="failed to get container status \"8542381899c69e10a46b75cc54b1f4e2997e20b5b3811dc298aff50bc3c71099\": rpc error: code = NotFound desc = an error occurred when try to find container \"8542381899c69e10a46b75cc54b1f4e2997e20b5b3811dc298aff50bc3c71099\": not found" Jan 17 13:32:24.665337 kubelet[1920]: I0117 13:32:24.665295 1920 scope.go:117] "RemoveContainer" containerID="d0a58850aaf2bd4c3b3cd384ccb0b44a5faa8edd99591841de8d3b09f114a70f" Jan 17 13:32:24.665885 containerd[1512]: time="2025-01-17T13:32:24.665680120Z" level=error msg="ContainerStatus for \"d0a58850aaf2bd4c3b3cd384ccb0b44a5faa8edd99591841de8d3b09f114a70f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d0a58850aaf2bd4c3b3cd384ccb0b44a5faa8edd99591841de8d3b09f114a70f\": not found" Jan 17 13:32:24.666248 kubelet[1920]: E0117 13:32:24.666076 1920 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d0a58850aaf2bd4c3b3cd384ccb0b44a5faa8edd99591841de8d3b09f114a70f\": not found" containerID="d0a58850aaf2bd4c3b3cd384ccb0b44a5faa8edd99591841de8d3b09f114a70f" Jan 17 13:32:24.666248 kubelet[1920]: I0117 13:32:24.666130 1920 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"d0a58850aaf2bd4c3b3cd384ccb0b44a5faa8edd99591841de8d3b09f114a70f"} err="failed to get container status \"d0a58850aaf2bd4c3b3cd384ccb0b44a5faa8edd99591841de8d3b09f114a70f\": rpc error: code = NotFound desc = an error occurred when try to find container \"d0a58850aaf2bd4c3b3cd384ccb0b44a5faa8edd99591841de8d3b09f114a70f\": not found" Jan 17 13:32:24.666248 kubelet[1920]: I0117 13:32:24.666148 1920 scope.go:117] "RemoveContainer" containerID="e22c81ce987308d8b725b0673aa5bf32d5ef8609541efb7777d353b289d0dc02" Jan 17 13:32:24.666776 containerd[1512]: time="2025-01-17T13:32:24.666685223Z" level=error msg="ContainerStatus for \"e22c81ce987308d8b725b0673aa5bf32d5ef8609541efb7777d353b289d0dc02\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e22c81ce987308d8b725b0673aa5bf32d5ef8609541efb7777d353b289d0dc02\": not found" Jan 17 13:32:24.667110 kubelet[1920]: E0117 13:32:24.666856 1920 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e22c81ce987308d8b725b0673aa5bf32d5ef8609541efb7777d353b289d0dc02\": not found" containerID="e22c81ce987308d8b725b0673aa5bf32d5ef8609541efb7777d353b289d0dc02" Jan 17 13:32:24.667110 kubelet[1920]: I0117 13:32:24.666904 1920 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e22c81ce987308d8b725b0673aa5bf32d5ef8609541efb7777d353b289d0dc02"} err="failed to get container status \"e22c81ce987308d8b725b0673aa5bf32d5ef8609541efb7777d353b289d0dc02\": rpc error: code = NotFound desc = an error occurred when try to find container \"e22c81ce987308d8b725b0673aa5bf32d5ef8609541efb7777d353b289d0dc02\": not found" Jan 17 13:32:24.667110 kubelet[1920]: I0117 13:32:24.666933 1920 scope.go:117] "RemoveContainer" containerID="9d0931d9293e1cf08080d85c0dd24f1f70016342aced86c3d890a4d40775e7d3" Jan 17 13:32:24.667514 
containerd[1512]: time="2025-01-17T13:32:24.667400655Z" level=error msg="ContainerStatus for \"9d0931d9293e1cf08080d85c0dd24f1f70016342aced86c3d890a4d40775e7d3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9d0931d9293e1cf08080d85c0dd24f1f70016342aced86c3d890a4d40775e7d3\": not found" Jan 17 13:32:24.667747 kubelet[1920]: E0117 13:32:24.667711 1920 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9d0931d9293e1cf08080d85c0dd24f1f70016342aced86c3d890a4d40775e7d3\": not found" containerID="9d0931d9293e1cf08080d85c0dd24f1f70016342aced86c3d890a4d40775e7d3" Jan 17 13:32:24.667834 kubelet[1920]: I0117 13:32:24.667774 1920 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9d0931d9293e1cf08080d85c0dd24f1f70016342aced86c3d890a4d40775e7d3"} err="failed to get container status \"9d0931d9293e1cf08080d85c0dd24f1f70016342aced86c3d890a4d40775e7d3\": rpc error: code = NotFound desc = an error occurred when try to find container \"9d0931d9293e1cf08080d85c0dd24f1f70016342aced86c3d890a4d40775e7d3\": not found" Jan 17 13:32:24.775128 systemd[1]: var-lib-kubelet-pods-5cb377ae\x2dd47c\x2d4c26\x2daa92\x2d6cabcf7a2548-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dskdsz.mount: Deactivated successfully. Jan 17 13:32:24.775309 systemd[1]: var-lib-kubelet-pods-5cb377ae\x2dd47c\x2d4c26\x2daa92\x2d6cabcf7a2548-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Jan 17 13:32:25.295993 kubelet[1920]: E0117 13:32:25.295908 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 13:32:25.355273 kubelet[1920]: I0117 13:32:25.355147 1920 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="5cb377ae-d47c-4c26-aa92-6cabcf7a2548" path="/var/lib/kubelet/pods/5cb377ae-d47c-4c26-aa92-6cabcf7a2548/volumes" Jan 17 13:32:26.296547 kubelet[1920]: E0117 13:32:26.296449 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 13:32:27.297605 kubelet[1920]: E0117 13:32:27.297502 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 13:32:28.298748 kubelet[1920]: E0117 13:32:28.298625 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 13:32:28.783208 kubelet[1920]: I0117 13:32:28.783096 1920 topology_manager.go:215] "Topology Admit Handler" podUID="65377ba3-e4f5-4f32-a0d4-b92a63120fad" podNamespace="kube-system" podName="cilium-txb8n" Jan 17 13:32:28.783208 kubelet[1920]: E0117 13:32:28.783215 1920 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5cb377ae-d47c-4c26-aa92-6cabcf7a2548" containerName="mount-cgroup" Jan 17 13:32:28.783496 kubelet[1920]: E0117 13:32:28.783250 1920 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5cb377ae-d47c-4c26-aa92-6cabcf7a2548" containerName="clean-cilium-state" Jan 17 13:32:28.783496 kubelet[1920]: E0117 13:32:28.783264 1920 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5cb377ae-d47c-4c26-aa92-6cabcf7a2548" containerName="cilium-agent" Jan 17 13:32:28.783496 kubelet[1920]: E0117 13:32:28.783276 1920 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5cb377ae-d47c-4c26-aa92-6cabcf7a2548" containerName="apply-sysctl-overwrites" Jan 
17 13:32:28.783496 kubelet[1920]: E0117 13:32:28.783321 1920 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5cb377ae-d47c-4c26-aa92-6cabcf7a2548" containerName="mount-bpf-fs" Jan 17 13:32:28.783496 kubelet[1920]: I0117 13:32:28.783390 1920 memory_manager.go:354] "RemoveStaleState removing state" podUID="5cb377ae-d47c-4c26-aa92-6cabcf7a2548" containerName="cilium-agent" Jan 17 13:32:28.784710 kubelet[1920]: I0117 13:32:28.784107 1920 topology_manager.go:215] "Topology Admit Handler" podUID="a0e5e9b1-da6c-4bba-9142-7f29c1953090" podNamespace="kube-system" podName="cilium-operator-5cc964979-mblzq" Jan 17 13:32:28.793518 systemd[1]: Created slice kubepods-burstable-pod65377ba3_e4f5_4f32_a0d4_b92a63120fad.slice - libcontainer container kubepods-burstable-pod65377ba3_e4f5_4f32_a0d4_b92a63120fad.slice. Jan 17 13:32:28.795129 kubelet[1920]: W0117 13:32:28.795003 1920 reflector.go:539] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:10.230.31.134" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '10.230.31.134' and this object Jan 17 13:32:28.795129 kubelet[1920]: E0117 13:32:28.795065 1920 reflector.go:147] object-"kube-system"/"cilium-ipsec-keys": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:10.230.31.134" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '10.230.31.134' and this object Jan 17 13:32:28.795129 kubelet[1920]: W0117 13:32:28.795127 1920 reflector.go:539] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:10.230.31.134" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '10.230.31.134' and this object Jan 17 13:32:28.795374 
kubelet[1920]: E0117 13:32:28.795150 1920 reflector.go:147] object-"kube-system"/"cilium-clustermesh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:10.230.31.134" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '10.230.31.134' and this object Jan 17 13:32:28.795374 kubelet[1920]: W0117 13:32:28.795286 1920 reflector.go:539] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:10.230.31.134" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node '10.230.31.134' and this object Jan 17 13:32:28.795374 kubelet[1920]: E0117 13:32:28.795309 1920 reflector.go:147] object-"kube-system"/"cilium-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:10.230.31.134" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node '10.230.31.134' and this object Jan 17 13:32:28.795760 kubelet[1920]: W0117 13:32:28.795733 1920 reflector.go:539] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:10.230.31.134" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '10.230.31.134' and this object Jan 17 13:32:28.795992 kubelet[1920]: E0117 13:32:28.795765 1920 reflector.go:147] object-"kube-system"/"hubble-server-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:10.230.31.134" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '10.230.31.134' and this object Jan 17 13:32:28.817115 systemd[1]: Created slice 
kubepods-besteffort-poda0e5e9b1_da6c_4bba_9142_7f29c1953090.slice - libcontainer container kubepods-besteffort-poda0e5e9b1_da6c_4bba_9142_7f29c1953090.slice. Jan 17 13:32:28.883096 kubelet[1920]: I0117 13:32:28.882975 1920 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/65377ba3-e4f5-4f32-a0d4-b92a63120fad-host-proc-sys-kernel\") pod \"cilium-txb8n\" (UID: \"65377ba3-e4f5-4f32-a0d4-b92a63120fad\") " pod="kube-system/cilium-txb8n" Jan 17 13:32:28.883096 kubelet[1920]: I0117 13:32:28.883053 1920 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/65377ba3-e4f5-4f32-a0d4-b92a63120fad-cilium-cgroup\") pod \"cilium-txb8n\" (UID: \"65377ba3-e4f5-4f32-a0d4-b92a63120fad\") " pod="kube-system/cilium-txb8n" Jan 17 13:32:28.883364 kubelet[1920]: I0117 13:32:28.883160 1920 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/65377ba3-e4f5-4f32-a0d4-b92a63120fad-cni-path\") pod \"cilium-txb8n\" (UID: \"65377ba3-e4f5-4f32-a0d4-b92a63120fad\") " pod="kube-system/cilium-txb8n" Jan 17 13:32:28.883364 kubelet[1920]: I0117 13:32:28.883217 1920 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/65377ba3-e4f5-4f32-a0d4-b92a63120fad-lib-modules\") pod \"cilium-txb8n\" (UID: \"65377ba3-e4f5-4f32-a0d4-b92a63120fad\") " pod="kube-system/cilium-txb8n" Jan 17 13:32:28.883364 kubelet[1920]: I0117 13:32:28.883267 1920 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvdcd\" (UniqueName: \"kubernetes.io/projected/65377ba3-e4f5-4f32-a0d4-b92a63120fad-kube-api-access-hvdcd\") pod \"cilium-txb8n\" (UID: 
\"65377ba3-e4f5-4f32-a0d4-b92a63120fad\") " pod="kube-system/cilium-txb8n" Jan 17 13:32:28.883364 kubelet[1920]: I0117 13:32:28.883299 1920 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2k56\" (UniqueName: \"kubernetes.io/projected/a0e5e9b1-da6c-4bba-9142-7f29c1953090-kube-api-access-l2k56\") pod \"cilium-operator-5cc964979-mblzq\" (UID: \"a0e5e9b1-da6c-4bba-9142-7f29c1953090\") " pod="kube-system/cilium-operator-5cc964979-mblzq" Jan 17 13:32:28.883364 kubelet[1920]: I0117 13:32:28.883347 1920 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/65377ba3-e4f5-4f32-a0d4-b92a63120fad-bpf-maps\") pod \"cilium-txb8n\" (UID: \"65377ba3-e4f5-4f32-a0d4-b92a63120fad\") " pod="kube-system/cilium-txb8n" Jan 17 13:32:28.883601 kubelet[1920]: I0117 13:32:28.883389 1920 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/65377ba3-e4f5-4f32-a0d4-b92a63120fad-cilium-ipsec-secrets\") pod \"cilium-txb8n\" (UID: \"65377ba3-e4f5-4f32-a0d4-b92a63120fad\") " pod="kube-system/cilium-txb8n" Jan 17 13:32:28.883601 kubelet[1920]: I0117 13:32:28.883417 1920 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/65377ba3-e4f5-4f32-a0d4-b92a63120fad-hubble-tls\") pod \"cilium-txb8n\" (UID: \"65377ba3-e4f5-4f32-a0d4-b92a63120fad\") " pod="kube-system/cilium-txb8n" Jan 17 13:32:28.883601 kubelet[1920]: I0117 13:32:28.883475 1920 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/65377ba3-e4f5-4f32-a0d4-b92a63120fad-host-proc-sys-net\") pod \"cilium-txb8n\" (UID: \"65377ba3-e4f5-4f32-a0d4-b92a63120fad\") " pod="kube-system/cilium-txb8n" 
Jan 17 13:32:28.883601 kubelet[1920]: I0117 13:32:28.883511 1920 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/65377ba3-e4f5-4f32-a0d4-b92a63120fad-cilium-run\") pod \"cilium-txb8n\" (UID: \"65377ba3-e4f5-4f32-a0d4-b92a63120fad\") " pod="kube-system/cilium-txb8n" Jan 17 13:32:28.883601 kubelet[1920]: I0117 13:32:28.883552 1920 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/65377ba3-e4f5-4f32-a0d4-b92a63120fad-xtables-lock\") pod \"cilium-txb8n\" (UID: \"65377ba3-e4f5-4f32-a0d4-b92a63120fad\") " pod="kube-system/cilium-txb8n" Jan 17 13:32:28.883837 kubelet[1920]: I0117 13:32:28.883646 1920 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/65377ba3-e4f5-4f32-a0d4-b92a63120fad-clustermesh-secrets\") pod \"cilium-txb8n\" (UID: \"65377ba3-e4f5-4f32-a0d4-b92a63120fad\") " pod="kube-system/cilium-txb8n" Jan 17 13:32:28.883837 kubelet[1920]: I0117 13:32:28.883704 1920 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a0e5e9b1-da6c-4bba-9142-7f29c1953090-cilium-config-path\") pod \"cilium-operator-5cc964979-mblzq\" (UID: \"a0e5e9b1-da6c-4bba-9142-7f29c1953090\") " pod="kube-system/cilium-operator-5cc964979-mblzq" Jan 17 13:32:28.883837 kubelet[1920]: I0117 13:32:28.883737 1920 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/65377ba3-e4f5-4f32-a0d4-b92a63120fad-etc-cni-netd\") pod \"cilium-txb8n\" (UID: \"65377ba3-e4f5-4f32-a0d4-b92a63120fad\") " pod="kube-system/cilium-txb8n" Jan 17 13:32:28.883837 kubelet[1920]: I0117 13:32:28.883773 1920 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/65377ba3-e4f5-4f32-a0d4-b92a63120fad-hostproc\") pod \"cilium-txb8n\" (UID: \"65377ba3-e4f5-4f32-a0d4-b92a63120fad\") " pod="kube-system/cilium-txb8n" Jan 17 13:32:28.884025 kubelet[1920]: I0117 13:32:28.883842 1920 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/65377ba3-e4f5-4f32-a0d4-b92a63120fad-cilium-config-path\") pod \"cilium-txb8n\" (UID: \"65377ba3-e4f5-4f32-a0d4-b92a63120fad\") " pod="kube-system/cilium-txb8n" Jan 17 13:32:29.231473 kubelet[1920]: E0117 13:32:29.231400 1920 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 13:32:29.298898 kubelet[1920]: E0117 13:32:29.298837 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 13:32:29.353533 kubelet[1920]: E0117 13:32:29.353398 1920 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 17 13:32:29.986969 kubelet[1920]: E0117 13:32:29.986516 1920 configmap.go:199] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition Jan 17 13:32:29.986969 kubelet[1920]: E0117 13:32:29.986752 1920 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/65377ba3-e4f5-4f32-a0d4-b92a63120fad-cilium-config-path podName:65377ba3-e4f5-4f32-a0d4-b92a63120fad nodeName:}" failed. No retries permitted until 2025-01-17 13:32:30.48668861 +0000 UTC m=+81.783576727 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/65377ba3-e4f5-4f32-a0d4-b92a63120fad-cilium-config-path") pod "cilium-txb8n" (UID: "65377ba3-e4f5-4f32-a0d4-b92a63120fad") : failed to sync configmap cache: timed out waiting for the condition Jan 17 13:32:29.987856 kubelet[1920]: E0117 13:32:29.987298 1920 configmap.go:199] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition Jan 17 13:32:29.987856 kubelet[1920]: E0117 13:32:29.987347 1920 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a0e5e9b1-da6c-4bba-9142-7f29c1953090-cilium-config-path podName:a0e5e9b1-da6c-4bba-9142-7f29c1953090 nodeName:}" failed. No retries permitted until 2025-01-17 13:32:30.48733407 +0000 UTC m=+81.784222197 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/a0e5e9b1-da6c-4bba-9142-7f29c1953090-cilium-config-path") pod "cilium-operator-5cc964979-mblzq" (UID: "a0e5e9b1-da6c-4bba-9142-7f29c1953090") : failed to sync configmap cache: timed out waiting for the condition Jan 17 13:32:29.988489 kubelet[1920]: E0117 13:32:29.988196 1920 secret.go:194] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition Jan 17 13:32:29.988489 kubelet[1920]: E0117 13:32:29.988222 1920 projected.go:269] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition Jan 17 13:32:29.988489 kubelet[1920]: E0117 13:32:29.988276 1920 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/65377ba3-e4f5-4f32-a0d4-b92a63120fad-clustermesh-secrets podName:65377ba3-e4f5-4f32-a0d4-b92a63120fad nodeName:}" failed. No retries permitted until 2025-01-17 13:32:30.488261208 +0000 UTC m=+81.785149337 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/65377ba3-e4f5-4f32-a0d4-b92a63120fad-clustermesh-secrets") pod "cilium-txb8n" (UID: "65377ba3-e4f5-4f32-a0d4-b92a63120fad") : failed to sync secret cache: timed out waiting for the condition Jan 17 13:32:29.988489 kubelet[1920]: E0117 13:32:29.988296 1920 projected.go:200] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-txb8n: failed to sync secret cache: timed out waiting for the condition Jan 17 13:32:29.988489 kubelet[1920]: E0117 13:32:29.988446 1920 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/65377ba3-e4f5-4f32-a0d4-b92a63120fad-hubble-tls podName:65377ba3-e4f5-4f32-a0d4-b92a63120fad nodeName:}" failed. No retries permitted until 2025-01-17 13:32:30.488410952 +0000 UTC m=+81.785299072 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/65377ba3-e4f5-4f32-a0d4-b92a63120fad-hubble-tls") pod "cilium-txb8n" (UID: "65377ba3-e4f5-4f32-a0d4-b92a63120fad") : failed to sync secret cache: timed out waiting for the condition Jan 17 13:32:30.299868 kubelet[1920]: E0117 13:32:30.299560 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 13:32:30.616247 containerd[1512]: time="2025-01-17T13:32:30.615964122Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-txb8n,Uid:65377ba3-e4f5-4f32-a0d4-b92a63120fad,Namespace:kube-system,Attempt:0,}" Jan 17 13:32:30.621495 containerd[1512]: time="2025-01-17T13:32:30.620876126Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-mblzq,Uid:a0e5e9b1-da6c-4bba-9142-7f29c1953090,Namespace:kube-system,Attempt:0,}" Jan 17 13:32:30.657598 kubelet[1920]: I0117 13:32:30.657540 1920 setters.go:568] "Node became not ready" node="10.230.31.134" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-01-17T13:32:30Z","lastTransitionTime":"2025-01-17T13:32:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jan 17 13:32:30.660980 containerd[1512]: time="2025-01-17T13:32:30.660801571Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 13:32:30.661241 containerd[1512]: time="2025-01-17T13:32:30.661154811Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 13:32:30.661935 containerd[1512]: time="2025-01-17T13:32:30.661432687Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 13:32:30.663953 containerd[1512]: time="2025-01-17T13:32:30.662575964Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 13:32:30.675145 containerd[1512]: time="2025-01-17T13:32:30.675038917Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 13:32:30.675864 containerd[1512]: time="2025-01-17T13:32:30.675116116Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 13:32:30.675864 containerd[1512]: time="2025-01-17T13:32:30.675132657Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 13:32:30.675864 containerd[1512]: time="2025-01-17T13:32:30.675229011Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 13:32:30.706055 systemd[1]: Started cri-containerd-1a6d299a4bca206a0666e4095bc061fe31aa0b6794441ae683fc79bcbe355a2a.scope - libcontainer container 1a6d299a4bca206a0666e4095bc061fe31aa0b6794441ae683fc79bcbe355a2a. Jan 17 13:32:30.709155 systemd[1]: Started cri-containerd-d2ba5c91335b41a811905fbce016c45250fb8df04afe60a8459cd6991030d142.scope - libcontainer container d2ba5c91335b41a811905fbce016c45250fb8df04afe60a8459cd6991030d142. Jan 17 13:32:30.765172 containerd[1512]: time="2025-01-17T13:32:30.765112647Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-txb8n,Uid:65377ba3-e4f5-4f32-a0d4-b92a63120fad,Namespace:kube-system,Attempt:0,} returns sandbox id \"1a6d299a4bca206a0666e4095bc061fe31aa0b6794441ae683fc79bcbe355a2a\"" Jan 17 13:32:30.770316 containerd[1512]: time="2025-01-17T13:32:30.770165876Z" level=info msg="CreateContainer within sandbox \"1a6d299a4bca206a0666e4095bc061fe31aa0b6794441ae683fc79bcbe355a2a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 17 13:32:30.792410 containerd[1512]: time="2025-01-17T13:32:30.792345391Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-mblzq,Uid:a0e5e9b1-da6c-4bba-9142-7f29c1953090,Namespace:kube-system,Attempt:0,} returns sandbox id \"d2ba5c91335b41a811905fbce016c45250fb8df04afe60a8459cd6991030d142\"" Jan 17 13:32:30.793958 containerd[1512]: time="2025-01-17T13:32:30.793474808Z" level=info msg="CreateContainer within sandbox \"1a6d299a4bca206a0666e4095bc061fe31aa0b6794441ae683fc79bcbe355a2a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"9ded1fb4d4fda8be129d34948e3e0735402cd65c114cc8b5f001b4569bf5f9a1\"" Jan 17 13:32:30.794349 containerd[1512]: time="2025-01-17T13:32:30.794314993Z" level=info msg="StartContainer for \"9ded1fb4d4fda8be129d34948e3e0735402cd65c114cc8b5f001b4569bf5f9a1\"" Jan 17 13:32:30.796993 containerd[1512]: 
time="2025-01-17T13:32:30.795574402Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 17 13:32:30.843058 systemd[1]: Started cri-containerd-9ded1fb4d4fda8be129d34948e3e0735402cd65c114cc8b5f001b4569bf5f9a1.scope - libcontainer container 9ded1fb4d4fda8be129d34948e3e0735402cd65c114cc8b5f001b4569bf5f9a1. Jan 17 13:32:30.880738 containerd[1512]: time="2025-01-17T13:32:30.880472463Z" level=info msg="StartContainer for \"9ded1fb4d4fda8be129d34948e3e0735402cd65c114cc8b5f001b4569bf5f9a1\" returns successfully" Jan 17 13:32:30.900398 systemd[1]: cri-containerd-9ded1fb4d4fda8be129d34948e3e0735402cd65c114cc8b5f001b4569bf5f9a1.scope: Deactivated successfully. Jan 17 13:32:30.943227 containerd[1512]: time="2025-01-17T13:32:30.943093917Z" level=info msg="shim disconnected" id=9ded1fb4d4fda8be129d34948e3e0735402cd65c114cc8b5f001b4569bf5f9a1 namespace=k8s.io Jan 17 13:32:30.943504 containerd[1512]: time="2025-01-17T13:32:30.943221799Z" level=warning msg="cleaning up after shim disconnected" id=9ded1fb4d4fda8be129d34948e3e0735402cd65c114cc8b5f001b4569bf5f9a1 namespace=k8s.io Jan 17 13:32:30.943504 containerd[1512]: time="2025-01-17T13:32:30.943258900Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 13:32:31.300438 kubelet[1920]: E0117 13:32:31.300353 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 13:32:31.605478 containerd[1512]: time="2025-01-17T13:32:31.605259087Z" level=info msg="CreateContainer within sandbox \"1a6d299a4bca206a0666e4095bc061fe31aa0b6794441ae683fc79bcbe355a2a\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 17 13:32:31.618476 containerd[1512]: time="2025-01-17T13:32:31.618427746Z" level=info msg="CreateContainer within sandbox \"1a6d299a4bca206a0666e4095bc061fe31aa0b6794441ae683fc79bcbe355a2a\" for 
&ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"5891b2c5b509ba74954cbc9c573a175c063ab8c850a860ff7cd5cf05f8b60b7a\"" Jan 17 13:32:31.619250 containerd[1512]: time="2025-01-17T13:32:31.619112791Z" level=info msg="StartContainer for \"5891b2c5b509ba74954cbc9c573a175c063ab8c850a860ff7cd5cf05f8b60b7a\"" Jan 17 13:32:31.666054 systemd[1]: Started cri-containerd-5891b2c5b509ba74954cbc9c573a175c063ab8c850a860ff7cd5cf05f8b60b7a.scope - libcontainer container 5891b2c5b509ba74954cbc9c573a175c063ab8c850a860ff7cd5cf05f8b60b7a. Jan 17 13:32:31.700527 containerd[1512]: time="2025-01-17T13:32:31.700249079Z" level=info msg="StartContainer for \"5891b2c5b509ba74954cbc9c573a175c063ab8c850a860ff7cd5cf05f8b60b7a\" returns successfully" Jan 17 13:32:31.712144 systemd[1]: cri-containerd-5891b2c5b509ba74954cbc9c573a175c063ab8c850a860ff7cd5cf05f8b60b7a.scope: Deactivated successfully. Jan 17 13:32:31.746972 containerd[1512]: time="2025-01-17T13:32:31.746745834Z" level=info msg="shim disconnected" id=5891b2c5b509ba74954cbc9c573a175c063ab8c850a860ff7cd5cf05f8b60b7a namespace=k8s.io Jan 17 13:32:31.746972 containerd[1512]: time="2025-01-17T13:32:31.746891449Z" level=warning msg="cleaning up after shim disconnected" id=5891b2c5b509ba74954cbc9c573a175c063ab8c850a860ff7cd5cf05f8b60b7a namespace=k8s.io Jan 17 13:32:31.746972 containerd[1512]: time="2025-01-17T13:32:31.746910868Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 13:32:32.301327 kubelet[1920]: E0117 13:32:32.301242 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 13:32:32.509489 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5891b2c5b509ba74954cbc9c573a175c063ab8c850a860ff7cd5cf05f8b60b7a-rootfs.mount: Deactivated successfully. 
Jan 17 13:32:32.615761 containerd[1512]: time="2025-01-17T13:32:32.615610749Z" level=info msg="CreateContainer within sandbox \"1a6d299a4bca206a0666e4095bc061fe31aa0b6794441ae683fc79bcbe355a2a\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 17 13:32:32.655589 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3908844706.mount: Deactivated successfully. Jan 17 13:32:32.673743 containerd[1512]: time="2025-01-17T13:32:32.673698911Z" level=info msg="CreateContainer within sandbox \"1a6d299a4bca206a0666e4095bc061fe31aa0b6794441ae683fc79bcbe355a2a\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"1a4e2141a520b0baa656839c37cdc9459cb92b6a668e97066b85748a76f33b22\"" Jan 17 13:32:32.676880 containerd[1512]: time="2025-01-17T13:32:32.675915311Z" level=info msg="StartContainer for \"1a4e2141a520b0baa656839c37cdc9459cb92b6a668e97066b85748a76f33b22\"" Jan 17 13:32:32.710992 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount704076067.mount: Deactivated successfully. Jan 17 13:32:32.743052 systemd[1]: Started cri-containerd-1a4e2141a520b0baa656839c37cdc9459cb92b6a668e97066b85748a76f33b22.scope - libcontainer container 1a4e2141a520b0baa656839c37cdc9459cb92b6a668e97066b85748a76f33b22. Jan 17 13:32:32.793142 containerd[1512]: time="2025-01-17T13:32:32.793096475Z" level=info msg="StartContainer for \"1a4e2141a520b0baa656839c37cdc9459cb92b6a668e97066b85748a76f33b22\" returns successfully" Jan 17 13:32:32.796984 systemd[1]: cri-containerd-1a4e2141a520b0baa656839c37cdc9459cb92b6a668e97066b85748a76f33b22.scope: Deactivated successfully. 
Jan 17 13:32:32.857271 containerd[1512]: time="2025-01-17T13:32:32.857166039Z" level=info msg="shim disconnected" id=1a4e2141a520b0baa656839c37cdc9459cb92b6a668e97066b85748a76f33b22 namespace=k8s.io Jan 17 13:32:32.857938 containerd[1512]: time="2025-01-17T13:32:32.857694054Z" level=warning msg="cleaning up after shim disconnected" id=1a4e2141a520b0baa656839c37cdc9459cb92b6a668e97066b85748a76f33b22 namespace=k8s.io Jan 17 13:32:32.857938 containerd[1512]: time="2025-01-17T13:32:32.857787861Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 13:32:33.302608 kubelet[1920]: E0117 13:32:33.302313 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 13:32:33.597521 containerd[1512]: time="2025-01-17T13:32:33.597271692Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 13:32:33.598616 containerd[1512]: time="2025-01-17T13:32:33.598567857Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18907205" Jan 17 13:32:33.599298 containerd[1512]: time="2025-01-17T13:32:33.599153871Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 13:32:33.601434 containerd[1512]: time="2025-01-17T13:32:33.601226614Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size 
\"18897442\" in 2.804229002s" Jan 17 13:32:33.601434 containerd[1512]: time="2025-01-17T13:32:33.601287818Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jan 17 13:32:33.604107 containerd[1512]: time="2025-01-17T13:32:33.604005376Z" level=info msg="CreateContainer within sandbox \"d2ba5c91335b41a811905fbce016c45250fb8df04afe60a8459cd6991030d142\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 17 13:32:33.628696 containerd[1512]: time="2025-01-17T13:32:33.628582468Z" level=info msg="CreateContainer within sandbox \"d2ba5c91335b41a811905fbce016c45250fb8df04afe60a8459cd6991030d142\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"0a7243d2bd53c3c48edf4c044747ad141bcbff098b4a2774b99369088f1d861c\"" Jan 17 13:32:33.629289 containerd[1512]: time="2025-01-17T13:32:33.629243049Z" level=info msg="StartContainer for \"0a7243d2bd53c3c48edf4c044747ad141bcbff098b4a2774b99369088f1d861c\"" Jan 17 13:32:33.650275 containerd[1512]: time="2025-01-17T13:32:33.650201282Z" level=info msg="CreateContainer within sandbox \"1a6d299a4bca206a0666e4095bc061fe31aa0b6794441ae683fc79bcbe355a2a\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 17 13:32:33.692194 systemd[1]: Started cri-containerd-0a7243d2bd53c3c48edf4c044747ad141bcbff098b4a2774b99369088f1d861c.scope - libcontainer container 0a7243d2bd53c3c48edf4c044747ad141bcbff098b4a2774b99369088f1d861c. 
Jan 17 13:32:33.694108 containerd[1512]: time="2025-01-17T13:32:33.693713109Z" level=info msg="CreateContainer within sandbox \"1a6d299a4bca206a0666e4095bc061fe31aa0b6794441ae683fc79bcbe355a2a\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"1a59b11d5e7dfbb86c5707bc91ab34436fdee49609c66a33cb4bbf2ef610ba9e\"" Jan 17 13:32:33.694608 containerd[1512]: time="2025-01-17T13:32:33.694563311Z" level=info msg="StartContainer for \"1a59b11d5e7dfbb86c5707bc91ab34436fdee49609c66a33cb4bbf2ef610ba9e\"" Jan 17 13:32:33.737010 systemd[1]: Started cri-containerd-1a59b11d5e7dfbb86c5707bc91ab34436fdee49609c66a33cb4bbf2ef610ba9e.scope - libcontainer container 1a59b11d5e7dfbb86c5707bc91ab34436fdee49609c66a33cb4bbf2ef610ba9e. Jan 17 13:32:33.745632 containerd[1512]: time="2025-01-17T13:32:33.743189743Z" level=info msg="StartContainer for \"0a7243d2bd53c3c48edf4c044747ad141bcbff098b4a2774b99369088f1d861c\" returns successfully" Jan 17 13:32:33.786631 systemd[1]: cri-containerd-1a59b11d5e7dfbb86c5707bc91ab34436fdee49609c66a33cb4bbf2ef610ba9e.scope: Deactivated successfully. 
Jan 17 13:32:33.788740 containerd[1512]: time="2025-01-17T13:32:33.788692946Z" level=info msg="StartContainer for \"1a59b11d5e7dfbb86c5707bc91ab34436fdee49609c66a33cb4bbf2ef610ba9e\" returns successfully" Jan 17 13:32:33.841240 containerd[1512]: time="2025-01-17T13:32:33.841154259Z" level=info msg="shim disconnected" id=1a59b11d5e7dfbb86c5707bc91ab34436fdee49609c66a33cb4bbf2ef610ba9e namespace=k8s.io Jan 17 13:32:33.841240 containerd[1512]: time="2025-01-17T13:32:33.841235218Z" level=warning msg="cleaning up after shim disconnected" id=1a59b11d5e7dfbb86c5707bc91ab34436fdee49609c66a33cb4bbf2ef610ba9e namespace=k8s.io Jan 17 13:32:33.841240 containerd[1512]: time="2025-01-17T13:32:33.841250220Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 13:32:34.303037 kubelet[1920]: E0117 13:32:34.302958 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 13:32:34.355146 kubelet[1920]: E0117 13:32:34.355049 1920 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 17 13:32:34.655082 containerd[1512]: time="2025-01-17T13:32:34.654942807Z" level=info msg="CreateContainer within sandbox \"1a6d299a4bca206a0666e4095bc061fe31aa0b6794441ae683fc79bcbe355a2a\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 17 13:32:34.661757 kubelet[1920]: I0117 13:32:34.661678 1920 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-mblzq" podStartSLOduration=3.854733193 podStartE2EDuration="6.661618912s" podCreationTimestamp="2025-01-17 13:32:28 +0000 UTC" firstStartedPulling="2025-01-17 13:32:30.794960662 +0000 UTC m=+82.091848779" lastFinishedPulling="2025-01-17 13:32:33.601846365 +0000 UTC m=+84.898734498" observedRunningTime="2025-01-17 13:32:34.661502744 +0000 UTC m=+85.958390880" 
watchObservedRunningTime="2025-01-17 13:32:34.661618912 +0000 UTC m=+85.958507043" Jan 17 13:32:34.678111 containerd[1512]: time="2025-01-17T13:32:34.677967150Z" level=info msg="CreateContainer within sandbox \"1a6d299a4bca206a0666e4095bc061fe31aa0b6794441ae683fc79bcbe355a2a\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e25bc4be1b4de921465e38334865f58e8b4c597fa366781864408dc6028b35c5\"" Jan 17 13:32:34.679750 containerd[1512]: time="2025-01-17T13:32:34.678638401Z" level=info msg="StartContainer for \"e25bc4be1b4de921465e38334865f58e8b4c597fa366781864408dc6028b35c5\"" Jan 17 13:32:34.723099 systemd[1]: Started cri-containerd-e25bc4be1b4de921465e38334865f58e8b4c597fa366781864408dc6028b35c5.scope - libcontainer container e25bc4be1b4de921465e38334865f58e8b4c597fa366781864408dc6028b35c5. Jan 17 13:32:34.767867 containerd[1512]: time="2025-01-17T13:32:34.767797462Z" level=info msg="StartContainer for \"e25bc4be1b4de921465e38334865f58e8b4c597fa366781864408dc6028b35c5\" returns successfully" Jan 17 13:32:35.303609 kubelet[1920]: E0117 13:32:35.303526 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 13:32:35.434865 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Jan 17 13:32:35.688628 kubelet[1920]: I0117 13:32:35.688482 1920 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-txb8n" podStartSLOduration=7.688421771 podStartE2EDuration="7.688421771s" podCreationTimestamp="2025-01-17 13:32:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 13:32:35.686363005 +0000 UTC m=+86.983251146" watchObservedRunningTime="2025-01-17 13:32:35.688421771 +0000 UTC m=+86.985309914" Jan 17 13:32:36.304117 kubelet[1920]: E0117 13:32:36.304035 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, 
ignoring" path="/etc/kubernetes/manifests" Jan 17 13:32:37.304366 kubelet[1920]: E0117 13:32:37.304283 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 13:32:38.305376 kubelet[1920]: E0117 13:32:38.305304 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 13:32:39.056272 systemd[1]: run-containerd-runc-k8s.io-e25bc4be1b4de921465e38334865f58e8b4c597fa366781864408dc6028b35c5-runc.w4BsjV.mount: Deactivated successfully. Jan 17 13:32:39.109898 systemd-networkd[1415]: lxc_health: Link UP Jan 17 13:32:39.115098 systemd-networkd[1415]: lxc_health: Gained carrier Jan 17 13:32:39.252362 kubelet[1920]: E0117 13:32:39.252314 1920 upgradeaware.go:425] Error proxying data from client to backend: readfrom tcp 127.0.0.1:41196->127.0.0.1:38923: write tcp 127.0.0.1:41196->127.0.0.1:38923: write: broken pipe Jan 17 13:32:39.305862 kubelet[1920]: E0117 13:32:39.305761 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 13:32:40.307067 kubelet[1920]: E0117 13:32:40.306989 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 13:32:40.628062 systemd-networkd[1415]: lxc_health: Gained IPv6LL Jan 17 13:32:41.308146 kubelet[1920]: E0117 13:32:41.308084 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 13:32:42.308864 kubelet[1920]: E0117 13:32:42.308757 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 13:32:43.309579 kubelet[1920]: E0117 13:32:43.309474 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 13:32:43.710062 kubelet[1920]: E0117 13:32:43.709767 
1920 upgradeaware.go:425] Error proxying data from client to backend: readfrom tcp 127.0.0.1:41212->127.0.0.1:38923: write tcp 127.0.0.1:41212->127.0.0.1:38923: write: broken pipe Jan 17 13:32:44.310490 kubelet[1920]: E0117 13:32:44.310378 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 13:32:45.311267 kubelet[1920]: E0117 13:32:45.311191 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 13:32:46.311425 kubelet[1920]: E0117 13:32:46.311351 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 13:32:47.311662 kubelet[1920]: E0117 13:32:47.311559 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 17 13:32:48.312158 kubelet[1920]: E0117 13:32:48.312082 1920 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"